| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Abderrahim2/bert-finetuned-Location
|
Abderrahim2
| 2022-06-01T20:18:34Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-01T17:38:50Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-Location
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-Location
This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
- F1: 0.8167
- Roc Auc: 0.8624
- Accuracy: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
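For reference, a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the `output_dir` is an assumed name, the model and dataset setup are omitted, and the Adam betas and epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Hedged sketch: only the hyperparameters listed above are set explicitly.
# output_dir is an assumption; betas/epsilon match the Adam defaults.
training_args = TrainingArguments(
    output_dir="bert-finetuned-Location",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```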
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4229 | 1.0 | 742 | 0.3615 | 0.7402 | 0.8014 | 0.6900 |
| 0.3722 | 2.0 | 1484 | 0.3103 | 0.7906 | 0.8416 | 0.7796 |
| 0.262 | 3.0 | 2226 | 0.3364 | 0.8135 | 0.8600 | 0.8100 |
| 0.2239 | 4.0 | 2968 | 0.4593 | 0.8085 | 0.8561 | 0.8066 |
| 0.1461 | 5.0 | 3710 | 0.5534 | 0.7923 | 0.8440 | 0.7904 |
| 0.1333 | 6.0 | 4452 | 0.5462 | 0.8167 | 0.8624 | 0.8133 |
| 0.0667 | 7.0 | 5194 | 0.6298 | 0.7972 | 0.8479 | 0.7958 |
| 0.0616 | 8.0 | 5936 | 0.6362 | 0.8075 | 0.8556 | 0.8059 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AlexanderPeter/bert-finetuned-ner
|
AlexanderPeter
| 2022-06-01T19:56:43Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-01T18:06:45Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0593
- eval_precision: 0.9293
- eval_recall: 0.9485
- eval_f1: 0.9388
- eval_accuracy: 0.9858
- eval_runtime: 120.5431
- eval_samples_per_second: 26.97
- eval_steps_per_second: 3.376
- epoch: 2.0
- step: 3512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cpu
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FritzOS/TEdetection_distiBERT_mLM_V2
|
FritzOS
| 2022-06-01T17:10:46Z
| 4
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-01T17:10:29Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_V2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_V2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
vectorian/t5-small-finetuned-tds
|
vectorian
| 2022-06-01T17:10:46Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"medium-summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-23T17:49:29Z
|
---
license: apache-2.0
tags:
- medium-summarization
- generated_from_trainer
model-index:
- name: t5-small-finetuned-tds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-tds
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
osanseviero/my-helsinki-duplicate
|
osanseviero
| 2022-06-01T15:58:23Z
| 14
| 0
|
transformers
|
[
"transformers",
"pytorch",
"rust",
"marian",
"text2text-generation",
"translation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-01T15:56:44Z
|
---
language:
- zh
- en
tags:
- translation
license: apache-2.0
---
### zho-eng
* source group: Chinese
* target group: English
* OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
* model: transformer
* source language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |
### System Info:
- hf_name: zho-eng
- source_languages: zho
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'en']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt
- src_alpha3: zho
- tgt_alpha3: eng
- short_pair: zh-en
- chrF2_score: 0.5479999999999999
- bleu: 36.1
- brevity_penalty: 0.948
- ref_len: 82826.0
- src_name: Chinese
- tgt_name: English
- train_date: 2020-07-17
- src_alpha2: zh
- tgt_alpha2: en
- prefer_old: False
- long_pair: zho-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
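This card does not include a usage snippet. Below is a minimal, hedged sketch using the `transformers` translation pipeline, assuming the standard MarianMT workflow applies to this duplicated zho→eng checkpoint:
```python
from transformers import pipeline

# Hedged sketch: standard MarianMT usage is assumed for this duplicated checkpoint.
translator = pipeline("translation", model="osanseviero/my-helsinki-duplicate")

print(translator("你好，世界！", max_length=64)[0]["translation_text"])
```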
|
Alian3785/TEST2ppo-BipedalWalker-v3
|
Alian3785
| 2022-06-01T15:33:28Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-01T15:32:38Z
|
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 126.62 +/- 7.52
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
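Until the author's own snippet is added, here is a minimal, hedged sketch of how such a checkpoint is typically loaded and run; the checkpoint filename `ppo-BipedalWalker-v3.zip` is an assumption and may differ in this repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint filename inside the repo; adjust if it differs.
checkpoint = load_from_hub(
    repo_id="Alian3785/TEST2ppo-BipedalWalker-v3",
    filename="ppo-BipedalWalker-v3.zip",
)
model = PPO.load(checkpoint)

env = gym.make("BipedalWalker-v3")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```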
|
aware-ai/wav2vec2-xls-r-1b-5gram-german
|
aware-ai
| 2022-06-01T13:33:48Z
| 21
| 1
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"de",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-24T10:33:47Z
|
---
language: de
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-1b-5gram-german with LM by Florian Zimmermeister @A\\Ware
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 4.382541642219636
- name: Test CER
type: cer
value: 1.6235493024026488
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 de
type: mozilla-foundation/common_voice_8_0
args: de
metrics:
- name: Test WER
type: wer
value: 4.382541642219636
- name: Test CER
type: cer
value: 1.6235493024026488
---
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
from unidecode import unidecode
import re
from datasets import load_dataset, load_metric
import datasets
counter = 0
wer_counter = 0
cer_counter = 0
device = "cuda" if torch.cuda.is_available() else "cpu"
special_chars = [["Ä", " AE "], ["Ö", " OE "], ["Ü", " UE "], ["ä", " ae "], ["ö", " oe "], ["ü", " ue "]]

def clean_text(sentence):
    # Protect the German umlauts before unidecode, then restore them.
    for special in special_chars:
        sentence = sentence.replace(special[0], special[1])
    sentence = unidecode(sentence)
    for special in special_chars:
        sentence = sentence.replace(special[1], special[0])
    sentence = re.sub("[^a-zA-Z0-9öäüÖÄÜ ,.!?]", " ", sentence)
    return sentence

def main(model_id):
    print("load model")
    model = AutoModelForCTC.from_pretrained(model_id).to(device)
    print("load processor")
    processor = AutoProcessor.from_pretrained(model_id)
    print("load metrics")
    wer = load_metric("wer")
    cer = load_metric("cer")
    ds = load_dataset("mozilla-foundation/common_voice_8_0", "de")
    ds = ds["test"]
    ds = ds.cast_column(
        "audio", datasets.features.Audio(sampling_rate=16_000)
    )

    def calculate_metrics(batch):
        global counter, wer_counter, cer_counter
        resampled_audio = batch["audio"]["array"]
        input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values
        with torch.no_grad():
            logits = model(input_values.to(device)).logits.cpu().numpy()[0]
        decoded = processor.decode(logits)
        pred = decoded.text.lower()
        ref = clean_text(batch["sentence"]).lower()
        wer_result = wer.compute(predictions=[pred], references=[ref])
        cer_result = cer.compute(predictions=[pred], references=[ref])
        counter += 1
        wer_counter += wer_result
        cer_counter += cer_result
        if counter % 100 == 0:
            print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
        return batch

    ds.map(calculate_metrics, remove_columns=ds.column_names)
    print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")

model_id = "flozi00/wav2vec2-xls-r-1b-5gram-german"
main(model_id)
```
|
YeRyeongLee/bert-base-uncased-finetuned-filtered-0601
|
YeRyeongLee
| 2022-06-01T13:29:32Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-01T12:22:30Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-filtered-0601
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-filtered-0601
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1152
- Accuracy: 0.9814
- F1: 0.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.1346 | 0.9664 | 0.9665 |
| No log | 2.0 | 6360 | 0.1352 | 0.9748 | 0.9749 |
| No log | 3.0 | 9540 | 0.1038 | 0.9808 | 0.9808 |
| No log | 4.0 | 12720 | 0.1152 | 0.9814 | 0.9815 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
cjbarrie/masress-medcrit-camel
|
cjbarrie
| 2022-06-01T13:23:54Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:cjbarrie/autotrain-data-masress-medcrit-binary-5",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-01T12:56:34Z
|
---
tags: autotrain
language: unk
widget:
- text: "الكل ينتقد الرئيس على إخفاقاته"
datasets:
- cjbarrie/autotrain-data-masress-medcrit-binary-5
co2_eq_emissions: 0.01017487638098474
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 937130980
- CO2 Emissions (in grams): 0.01017487638098474
## Validation Metrics
- Loss: 0.757265031337738
- Accuracy: 0.7551020408163265
- Macro F1: 0.7202470830473576
- Micro F1: 0.7551020408163265
- Weighted F1: 0.7594301962377263
- Macro Precision: 0.718716577540107
- Micro Precision: 0.7551020408163265
- Weighted Precision: 0.7711448215649895
- Macro Recall: 0.7285714285714286
- Micro Recall: 0.7551020408163265
- Weighted Recall: 0.7551020408163265
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-masress-medcrit-binary-5-937130980
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Sundhar/bart_customized
|
Sundhar
| 2022-06-01T12:20:25Z
| 0
| 0
|
fastai
|
[
"fastai",
"region:us"
] | null | 2022-06-01T12:18:33Z
|
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
jayeshgar/q-FrozenLake-v1-4x4-noSlippery
|
jayeshgar
| 2022-06-01T11:40:35Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-01T11:40:28Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the training
# notebook; see the hedged sketch after this block for one possible implementation.
model = load_from_hub(repo_id="jayeshgar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
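The helpers used above are not defined in this card. A minimal sketch of what they might look like, assuming the repository stores a pickled dictionary with the keys used above (`env_id`, `max_steps`, `n_eval_episodes`, `qtable`, `eval_seed`); this is an illustration only, not the repository's actual helper code:
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    """Download and unpickle the saved Q-learning dictionary from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed=None):
    """Run the greedy policy for several episodes and report the mean reward.

    eval_seed is accepted for interface compatibility but ignored in this
    simplified sketch.
    """
    episode_rewards = []
    for _ in range(n_eval_episodes):
        state = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    print(f"mean_reward={np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
    return np.mean(episode_rewards), np.std(episode_rewards)
```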
|
aaatul/xlm-roberta-large-finetuned-ner
|
aaatul
| 2022-06-01T09:06:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:hi_ner_config",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-05T06:32:26Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- hi_ner_config
model-index:
- name: xlm-roberta-large-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the hi_ner_config dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-all
|
adache
| 2022-06-01T08:20:34Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-01T07:54:01Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1782
- F1: 0.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2995 | 1.0 | 739 | 0.1891 | 0.8085 |
| 0.1552 | 2.0 | 1478 | 0.1798 | 0.8425 |
| 0.1008 | 3.0 | 2217 | 0.1782 | 0.8541 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-en
|
adache
| 2022-06-01T07:53:50Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-01T07:34:03Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ceggian/sbart_pt_reddit_softmax_64
|
ceggian
| 2022-06-01T07:46:44Z
| 1
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bart",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-01T07:43:02Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
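For reference, a minimal, hedged sketch of a training setup matching the parameters and architecture listed above. The backbone (`facebook/bart-base`), the number of labels, and the toy examples below are assumptions; the actual Reddit training data (117,759 batches) is not part of this card:
```python
from sentence_transformers import InputExample, SentenceTransformer, losses, models
from torch.utils.data import DataLoader

# Assumed backbone: a 768-dim BART encoder matching the architecture above.
word_embedding_model = models.Transformer("facebook/bart-base", max_seq_length=1024)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Toy examples standing in for the real Reddit pair dataset.
train_examples = [
    InputExample(texts=["a reddit post", "a reply to that post"], label=0),
    InputExample(texts=["a reddit post", "an unrelated reply"], label=1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2,  # assumption: the actual label count is not documented here
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=11775,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```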
## Citing & Authors
<!--- Describe where people can find more information -->
|
adache/xlm-roberta-base-finetuned-panx-de-fr
|
adache
| 2022-06-01T06:47:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-01T06:21:05Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
t-bank-ai/response-quality-classifier-tiny
|
t-bank-ai
| 2022-06-01T06:34:56Z
| 17
| 3
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"conversational",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T08:32:08Z
|
---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
The model is intended to estimate the relevance and specificity of the last message in the context of a dialogue.
The labels are defined as follows:
- `relevance`: whether the last message in the dialogue is relevant in the context of the full dialogue.
- `specificity`: whether the last message in the dialogue is interesting and promotes the continuation of the dialogue.
The model is pretrained on a large corpus of dialogue data in an unsupervised manner: it is trained to predict whether the last response belongs to the real dialogue or was pulled from another dialogue at random.
It was then fine-tuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages of context and one response. Each message was tokenized separately with `max_length = 32`.
The performance of the model on the validation split (dataset will be posted soon), using the best thresholds for the validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.51 | 0.82 | 0.74 |
| specificity | 0.54 | 0.81 | 0.8 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
You can easily interact with this model in this [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers).
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader).
|
t-bank-ai/response-quality-classifier-base
|
t-bank-ai
| 2022-06-01T06:34:22Z
| 17
| 2
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"conversational",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T10:17:12Z
|
---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence).
The model is intended to estimate the relevance and specificity of the last message in the context of a dialogue.
The labels are defined as follows:
- `relevance`: whether the last message in the dialogue is relevant in the context of the full dialogue.
- `specificity`: whether the last message in the dialogue is interesting and promotes the continuation of the dialogue.
The model is pretrained on a large corpus of dialogue data in an unsupervised manner: it is trained to predict whether the last response belongs to the real dialogue or was pulled from another dialogue at random.
It was then fine-tuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages of context and one response. Each message was tokenized separately with `max_length = 32`.
The performance of the model on the validation split (dataset will be posted soon), using the best thresholds for the validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.49 | 0.84 | 0.79 |
| specificity | 0.53 | 0.83 | 0.83 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
You can easily interact with this model in this [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers).
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader).
|
jiseong/mt5-small-finetuned-news
|
jiseong
| 2022-06-01T06:22:12Z
| 3
| 0
|
transformers
|
[
"transformers",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-01T00:47:52Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jiseong/mt5-small-finetuned-news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jiseong/mt5-small-finetuned-news
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1208
- Validation Loss: 0.1012
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
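For reference, a minimal, hedged sketch of recreating the optimizer configuration listed above with the TensorFlow utilities in `transformers`; the Keras model it would be compiled into is not documented on this card and is therefore omitted:
```python
from transformers import AdamWeightDecay

# Mirrors the optimizer dictionary listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
# model.compile(optimizer=optimizer)  # hypothetical Keras model built elsewhere
```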
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1829 | 0.1107 | 0 |
| 0.1421 | 0.1135 | 1 |
| 0.1208 | 0.1012 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
arize-ai/distilbert_reviews_with_language_drift
|
arize-ai
| 2022-06-01T06:15:35Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ecommerce_reviews_with_language_drift",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-01T05:46:28Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ecommerce_reviews_with_language_drift
metrics:
- accuracy
- f1
model-index:
- name: distilbert_reviews_with_language_drift
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ecommerce_reviews_with_language_drift
type: ecommerce_reviews_with_language_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.818
- name: F1
type: f1
value: 0.8167126877417763
widget:
- text: "Poor quality of fabric and ridiculously tight at chest. It's way too short."
example_title: "Negative"
- text: "One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter."
example_title: "Neutral"
- text: "I liked the price most! Nothing to dislike here!"
example_title: "Positive"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_reviews_with_language_drift
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4970
- Accuracy: 0.818
- F1: 0.8167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 |
| 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 |
| 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
chrisvinsen/wav2vec2-17
|
chrisvinsen
| 2022-06-01T06:05:03Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-01T02:17:11Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-17
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1355
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.5865 | 1.38 | 25 | 3.4717 | 1.0 |
| 2.9762 | 2.77 | 50 | 3.1483 | 1.0 |
| 2.9265 | 4.16 | 75 | 3.1946 | 1.0 |
| 2.8813 | 5.55 | 100 | 3.0504 | 1.0 |
| 2.887 | 6.93 | 125 | 3.1358 | 1.0 |
| 2.9124 | 8.33 | 150 | 3.1653 | 1.0 |
| 2.8854 | 9.71 | 175 | 3.1243 | 1.0 |
| 2.91 | 11.11 | 200 | 3.0879 | 1.0 |
| 2.8868 | 12.49 | 225 | 3.1658 | 1.0 |
| 2.8827 | 13.88 | 250 | 3.1236 | 1.0 |
| 2.911 | 15.27 | 275 | 3.1206 | 1.0 |
| 2.8829 | 16.66 | 300 | 3.1171 | 1.0 |
| 2.9105 | 18.05 | 325 | 3.1127 | 1.0 |
| 2.8845 | 19.44 | 350 | 3.1377 | 1.0 |
| 2.8803 | 20.82 | 375 | 3.1157 | 1.0 |
| 2.9102 | 22.22 | 400 | 3.1265 | 1.0 |
| 2.8803 | 23.6 | 425 | 3.1493 | 1.0 |
| 2.8837 | 24.99 | 450 | 3.1085 | 1.0 |
| 2.9106 | 26.38 | 475 | 3.1099 | 1.0 |
| 2.8787 | 27.77 | 500 | 3.1352 | 1.0 |
| 2.9132 | 29.16 | 525 | 3.1355 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Oseias/ppo-LunarLander-v2_review
|
Oseias
| 2022-06-01T02:26:14Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-01T02:25:48Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 254.90 +/- 26.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
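Until the author's own snippet is added, here is a minimal, hedged sketch of loading and evaluating such a checkpoint; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption and may differ in this repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint filename inside the repo; adjust if it differs.
checkpoint = load_from_hub(
    repo_id="Oseias/ppo-LunarLander-v2_review",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```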
|
radev/distilbert-base-uncased-finetuned-emotion
|
radev
| 2022-06-01T02:20:13Z
| 14
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-16T21:47:07Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8945
- name: F1
type: f1
value: 0.8871610121255439
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3645
- Accuracy: 0.8945
- F1: 0.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5816 | 0.8015 | 0.7597 |
| 0.7707 | 2.0 | 250 | 0.3645 | 0.8945 | 0.8872 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
erickfm/t5-small-finetuned-bias
|
erickfm
| 2022-06-01T02:02:16Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-31T23:29:18Z
|
---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a fine-tuned checkpoint of [T5-small](https://huggingface.co/t5-small), fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset composed of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of 0.32 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-small).
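A minimal, hedged inference sketch using the `transformers` pipeline API; whether the model expects a task prefix is not documented here, so the raw sentence is passed as-is:
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint and neutralize a biased sentence.
neutralize = pipeline("text2text-generation", model="erickfm/t5-small-finetuned-bias")

biased_sentence = "The singer delivered an absolutely dreadful performance."
print(neutralize(biased_sentence, max_length=64)[0]["generated_text"])
```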
|
sanchit-gandhi/flax-wav2vec2-2-bart-large-cv9-feature-encoder
|
sanchit-gandhi
| 2022-06-01T00:43:26Z
| 3
| 0
|
transformers
|
[
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-29T16:50:26Z
|
/home/sanchitgandhi/seq2seq-speech/README.md
|
skr3178/xlm-roberta-base-finetuned-panx-en
|
skr3178
| 2022-05-31T23:31:12Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-31T23:14:17Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-it
|
skr3178
| 2022-05-31T23:14:06Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-31T22:57:02Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-de-fr
|
skr3178
| 2022-05-31T22:37:32Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-31T22:14:05Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jppaolim/v40_NeoSmall
|
jppaolim
| 2022-05-31T22:23:08Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-31T22:11:48Z
|
# My Story model
Arthur goes to the beach. Arthur is in the ocean. He is enjoying the water. He cannot wait for the sun to rise. He goes to the beach. It is very hot outside.
Arthur goes to the beach. Arthur is going to the beach. He is going to the beach. He is going to go swimming. He feels a breeze on his shirt. He feels very relaxed.
Arthur goes to the beach. Arthur is walking on the beach. He notices a sign for the beach club. He asks for a cab. He gets a cab to go to the beach. Arthur and his friends go to the beach together.
Arthur goes to the beach. Arthur was excited to go to the beach. He drove his car to the beach. When he got there, he was amazed at the waves. The waves had a huge sandcastle. Arthur went to the beach and enjoyed the beach.
Arthur goes to the beach. Arthur is playing in the sand with his friends. He is having a great time, and they are all laughing. They all seem to be enjoying themselves. Arthur decides he has to leave. Arthur is sad that he will not be able to go to the beach.
Arthur goes to the beach. Arthur wants to go to the beach. He decides to go to the beach. He sees a sign for the beach. He goes to the beach. Arthur is happy to go to the beach.
Arthur goes to the beach. Arthur is at the beach. He is playing with his friends. They go swimming. Arthur is caught in a water. Arthur is taken to the beach.
Arthur goes to the beach. Arthur is in the ocean. He is bored. He decides to go to the beach. He is bored for a few hours. Arthur leaves the beach.
Arthur goes to the beach. Arthur is out swimming. He is going to the beach. He goes to the beach. He goes to the beach. He goes to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They went swimming and laid out on the sand. They found a beach they liked. They decided to go to the beach and play. They were so happy that they decided to go back to the beach.
Arthur goes to the beach. Arthur is at the beach with his family. They are going to go to the beach. Arthur is very excited. He is going to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They were having a great time. They all went to the beach. They had a great time. Arthur is very happy.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He goes to the beach. He goes to the beach. He is happy that he went to the beach.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He is very bored. He decides to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur is on his way to the beach. He is going to the beach. He is going to the beach. He is going to the beach. Arthur is going to the beach.
|
skr3178/xlm-roberta-base-finetuned-panx-de
|
skr3178
| 2022-05-31T22:09:30Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-31T21:47:53Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jppaolim/v39_Best20Epoch
|
jppaolim
| 2022-05-31T21:42:21Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-31T21:32:41Z
|
# My Story model
Arthur goes to the beach. Arthur is feeling very hot and bored. He decides to go to the beach. He goes to the beach. He spends the day swimming. Arthur cannot wait for the next day to go swimming.
Arthur goes to the beach. Arthur wants to go to the beach. He gets a map. He looks at the map. He goes to the beach. He goes to the beach.
Arthur goes to the beach. Arthur has been working hard all summer. He has been working hard every day. One day his boss asks him to come to work. Arthur is happy to see that his hard work is paying off. Arthur is so glad he took the chance to go to the beach.
Arthur goes to the beach. Arthur is walking to the beach. He sees a small boy playing in the sand. The boy tells Arthur to leave. Arthur tells the boy he doesn't want to go to the beach. Arthur leaves the beach.
Arthur goes to the beach. Arthur is a young boy who lived in a very small town. He wanted to feel like a big city kid. He drove to the coast and swam in the ocean. When he got home, his mom told him to pack up and come back. Arthur packed up and didn't go to the beach anymore.
Arthur goes to the beach. Arthur is bored at home. He decides to go to the local beach. He goes down to the water. Arthur waves. He is glad he went for a walk down the beach.
Arthur goes to the beach. Arthur wants to go to the beach. He has been looking forward to this for a week. He gets to the beach and everything feels perfect. He gets to the water and it is very nice. Arthur has the best day ever.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He is going to play in the ocean. He can't find his keys. He is starting to panic. Arthur finally finds his keys in his car.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He has been working hard all week. He is going to the beach with his friends. Arthur and his friends get in the car to go to the beach. Arthur swims all day and goes to sleep.
Arthur goes to the beach. Arthur wants to go to the beach. He goes to the beach. He swims in the ocean. He has fun. Arthur has a good day.
Arthur goes to the beach. Arthur is a young man. He likes to surf. He decides to go to the beach. He spends the whole day at the beach. He goes to the ocean and has fun.
Arthur goes to the beach. Arthur is a young man. He wants to go to the beach. He gets on his car and drives to the beach. He spends the entire day at the beach. Arthur has the best day ever at the beach.
Arthur goes to the beach. Arthur is a young man. He likes to surf and swim. He decides to go to the beach. Arthur swam all day long. He had a great day at the beach.
Arthur goes to the beach. Arthur is going to the beach tomorrow. He has been working all day, but hasn't been swimming. He decides to go for a swim anyway and cool off. He spends the next few days playing in the ocean. Arthur has the time of his life.
Arthur goes to the beach. Arthur is a young boy who lived in a very small town. He wanted to go to the beach but his dad said no. Arthur asked his dad if he could go alone. Arthur's dad told him that they couldn't afford to go together. Arthur was sad that his dad wouldn't go with him to the beach.
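The samples above all appear to continue the prompt "Arthur goes to the beach." A minimal way to generate similar stories with the 🤗 `pipeline` API is sketched below; the sampling settings are assumptions, not necessarily those used to produce the samples.
```python
from transformers import pipeline

# Load the story model as a causal text-generation pipeline.
generator = pipeline("text-generation", model="jppaolim/v39_Best20Epoch")

# Prompt with a story title; the sampling settings below are illustrative only.
stories = generator(
    "Arthur goes to the beach.",
    max_length=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    num_return_sequences=3,
)
for story in stories:
    print(story["generated_text"], "\n")
```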
|
Simon10/my-awesome-model-3
|
Simon10
| 2022-05-31T21:26:38Z
| 7
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-31T21:20:01Z
|
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model-3
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2061
- Validation Loss: 0.0632
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
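Since the base model is an Italian cased BERT, the checkpoint can presumably be queried as a fill-mask model. The sketch below is a minimal example; the Italian sentence is illustrative, and `framework="tf"` is requested because the repository ships TensorFlow weights.
```python
from transformers import pipeline

# The repository contains TensorFlow weights, so request the TF backend explicitly.
unmasker = pipeline("fill-mask", model="Simon10/my-awesome-model-3", framework="tf")

# Illustrative Italian sentence; [MASK] is the standard BERT mask token.
for prediction in unmasker("Roma è la [MASK] d'Italia."):
    print(prediction["token_str"], round(prediction["score"], 3))
```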
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -811, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2061 | 0.0632 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.1
- Datasets 2.2.2
- Tokenizers 0.11.0
|
Dizzykong/test-charles-dickens
|
Dizzykong
| 2022-05-31T21:22:30Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-31T21:10:52Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test-charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-charles-dickens
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
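Judging by the repository name, the model is meant to generate Dickens-flavoured text. The sketch below loads it directly with `AutoModelForCausalLM`; the prompt and generation settings are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Dizzykong/test-charles-dickens"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Illustrative prompt; sampling settings are not the author's.
inputs = tokenizer("It was the best of times,", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=80,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token by default
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```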
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sanchit-gandhi/flax-wav2vec2-2-bart-large-tedlium-feature-encoder
|
sanchit-gandhi
| 2022-05-31T21:06:15Z
| 7
| 0
|
transformers
|
[
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-29T16:54:24Z
|
/home/sanchitgandhi/seq2seq-speech/README.md
|
rajistics/autotrain-Adult-934630783
|
rajistics
| 2022-05-31T19:36:02Z
| 2
| 2
|
transformers
|
[
"transformers",
"joblib",
"extra_trees",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:rajistics/autotrain-data-Adult",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2022-05-31T17:54:27Z
|
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- rajistics/autotrain-data-Adult
co2_eq_emissions: 38.42484725553464
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 934630783
- CO2 Emissions (in grams): 38.42484725553464
## Validation Metrics
- Loss: 0.2984429822985684
- Accuracy: 0.8628221244500315
- Precision: 0.7873263888888888
- Recall: 0.5908794788273616
- AUC: 0.9182195921357326
- F1: 0.6751023446222553
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # replace "data.csv" with the path to your own data
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
arampacha/q-Taxi-v3
|
arampacha
| 2022-05-31T19:31:41Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-31T19:31:34Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.48 +/- 2.63
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="arampacha/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
yukta10/finetuning-sentiment-model-3000-samples
|
yukta10
| 2022-05-31T18:29:16Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T15:51:49Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [federicopascual/finetuning-sentiment-model-3000-samples](https://huggingface.co/federicopascual/finetuning-sentiment-model-3000-samples) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
renjithks/layoutlmv3-er-ner
|
renjithks
| 2022-05-31T17:36:05Z
| 12
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-23T16:46:44Z
|
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-er-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-er-ner
This model is a fine-tuned version of [renjithks/layoutlmv3-cord-ner](https://huggingface.co/renjithks/layoutlmv3-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
- Precision: 0.6442
- Recall: 0.6761
- F1: 0.6598
- Accuracy: 0.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
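LayoutLMv3 takes a document image together with words and bounding boxes, so inference goes through the matching processor. The sketch below is an assumption-heavy outline: it relies on the processor's built-in OCR (which needs Tesseract/`pytesseract` installed), uses a placeholder file name, and assumes the repository ships the processor files (otherwise load the processor from `microsoft/layoutlmv3-base`).
```python
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

checkpoint = "renjithks/layoutlmv3-er-ner"
# apply_ocr=True lets the processor extract words and boxes itself (requires pytesseract).
processor = AutoProcessor.from_pretrained(checkpoint, apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

image = Image.open("receipt.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

predicted_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```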
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 22 | 0.2940 | 0.4214 | 0.2956 | 0.3475 | 0.9147 |
| No log | 2.0 | 44 | 0.2487 | 0.4134 | 0.4526 | 0.4321 | 0.9175 |
| No log | 3.0 | 66 | 0.1922 | 0.5399 | 0.5460 | 0.5429 | 0.9392 |
| No log | 4.0 | 88 | 0.1977 | 0.5653 | 0.5813 | 0.5732 | 0.9434 |
| No log | 5.0 | 110 | 0.2018 | 0.6173 | 0.6252 | 0.6212 | 0.9477 |
| No log | 6.0 | 132 | 0.1823 | 0.6232 | 0.6153 | 0.6192 | 0.9485 |
| No log | 7.0 | 154 | 0.1972 | 0.6203 | 0.6238 | 0.6220 | 0.9477 |
| No log | 8.0 | 176 | 0.1952 | 0.6292 | 0.6407 | 0.6349 | 0.9511 |
| No log | 9.0 | 198 | 0.2070 | 0.6331 | 0.6492 | 0.6411 | 0.9489 |
| No log | 10.0 | 220 | 0.2025 | 0.6442 | 0.6761 | 0.6598 | 0.9507 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ruselkomp/sber-framebank-50size-2
|
ruselkomp
| 2022-05-31T15:59:07Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-31T10:03:43Z
|
---
tags:
- generated_from_trainer
model-index:
- name: sber-framebank-50size-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sber-framebank-50size-2
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3736
## Model description
More information needed
## Intended uses & limitations
More information needed
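The base model is a Russian sentence BERT fine-tuned here for extractive question answering, so the standard QA pipeline should apply; the Russian question/context pair below is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive question-answering pipeline.
qa = pipeline("question-answering", model="ruselkomp/sber-framebank-50size-2")

# Illustrative Russian example.
result = qa(
    question="Где находится Эрмитаж?",
    context="Эрмитаж — один из крупнейших музеев мира, расположенный в Санкт-Петербурге.",
)
print(result["answer"], round(result["score"], 3))
```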
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0623 | 1.0 | 11307 | 1.0958 |
| 0.8145 | 2.0 | 22614 | 1.1778 |
| 0.6168 | 3.0 | 33921 | 1.3736 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
malra/segformer-b0-finetuned-segments-sidewalk-4
|
malra
| 2022-05-31T15:42:53Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-05-31T15:22:56Z
|
---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5207
- Mean Iou: 0.1023
- Mean Accuracy: 0.1567
- Overall Accuracy: 0.6612
- Per Category Iou: [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0]
- Per Category Accuracy: [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
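A minimal inference sketch for the fine-tuned SegFormer is shown below. It follows the standard feature-extractor-plus-model recipe for the Transformers version listed in this card; the image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

checkpoint = "malra/segformer-b0-finetuned-segments-sidewalk-4"
feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("sidewalk.jpg").convert("RGB")  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height / 4, width / 4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]
print(segmentation_map.shape)
```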
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8255 | 1.0 | 25 | 3.0220 | 0.0892 | 0.1429 | 0.6352 | [0.0, 0.3631053229188519, 0.6874502125236047, 0.0, 0.012635239862746197, 0.001133215250040838, 0.0, 0.00463024415429387, 2.6557099661207286e-05, 0.0, 0.3968535016422742, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4820466790242289, 0.0, 0.00693999220077067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6134928158666486, 0.05160593984758798, 0.5016270369795023, 0.0, 0.0, 0.00023524914354608678, 0.0] | [nan, 0.6625398055826, 0.851744092156527, 0.0, 0.01307675614921835, 0.001170877257777663, nan, 0.004771009467501389, 2.6941417811356193e-05, 0.0, 0.9316713675735513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7310221003907382, 0.0, 0.0070371168820434, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.948375993368795, 0.056265031783493576, 0.5061367774453964, 0.0, 0.0, 0.00023723449281691698, 0.0] |
| 2.5443 | 2.0 | 50 | 2.5207 | 0.1023 | 0.1567 | 0.6612 | [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0] | [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
arrandi/distilbert-base-uncased-finetuned-emotion
|
arrandi
| 2022-05-31T15:20:26Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T15:03:38Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9341704717427723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.934
- F1: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
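A minimal sketch for running the classifier over the emotion labels of the `emotion` dataset; the input sentence is illustrative, and `return_all_scores=True` matches the Transformers version listed below.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arrandi/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # report a score for every emotion label
)

# Illustrative input sentence.
scores = classifier("I can't believe I finally got the job, this is amazing!")[0]
print(sorted(scores, key=lambda s: s["score"], reverse=True))
```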
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2606 | 1.0 | 250 | 0.1780 | 0.9285 | 0.9284 |
| 0.1486 | 2.0 | 500 | 0.1652 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
wuxiaofei/finetuning-sentiment-model-3000-samples
|
wuxiaofei
| 2022-05-31T15:12:52Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T11:19:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8636363636363636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6787
- Accuracy: 0.86
- F1: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
clementgyj/bert-finetuned-squad-50k
|
clementgyj
| 2022-05-31T15:03:55Z
| 5
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-31T11:23:52Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: clementgyj/bert-finetuned-squad-50k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clementgyj/bert-finetuned-squad-50k
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5470
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9486, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.3302 | 0 |
| 0.7686 | 1 |
| 0.5470 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-xlnet-base-cased
|
jkhan447
| 2022-05-31T14:17:58Z
| 5
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T08:50:25Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-xlnet-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-xlnet-base-cased
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1470
- Accuracy: 0.7117
## Model description
More information needed
## Intended uses & limitations
More information needed
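Because the label names for this sarcasm detector are not documented, the sketch below reads `id2label` from the model config instead of assuming them; the input sentence is illustrative.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "jkhan447/sarcasm-detection-xlnet-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("Oh great, another Monday. Exactly what I needed.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Map probabilities back to whatever labels the checkpoint defines.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```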
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
EMBO/BioMegatron345mCased
|
EMBO
| 2022-05-31T13:24:48Z
| 21
| 1
|
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"language model",
"arxiv:2010.06060",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-05-31T11:38:39Z
|
---
license: cc-by-4.0
language:
- english
thumbnail:
tags:
- language model
---
<!---
# ##############################################################################################
#
# This model has been uploaded to HuggingFace by https://huggingface.co/drAbreu
# The model is based on the NVIDIA checkpoint located at
# https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345mcased
#
# ##############################################################################################
-->
[BioMegatron](https://arxiv.org/pdf/2010.06060.pdf) is a transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained on top of the Megatron-LM model, adding a PubMed corpus to the Megatron-LM corpora (Wikipedia, RealNews, OpenWebText, and CC-Stories). BioMegatron follows a similar (albeit not identical) architecture to BERT and has 345 million parameters:
* 24 layers
* 16 attention heads with a hidden size of 1024.
More information is available in the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345mcased).
# Running BioMegatron in 🤗 transformers
In this implementation we have followed the commands of the [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m) repository to make BioMegatron available in 🤗.
However, the file [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) needed a modification: the Megatron model shown in [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m) includes head layers, while the BioMegatron weights uploaded to this repository do not contain a head.
The code below is a modification of the original [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py).
```python
import json
import os

import torch

# json is needed to save the config; recursive_print is assumed to live in the same
# helper script as convert_megatron_checkpoint.
from convert_biomegatron_checkpoint import convert_megatron_checkpoint, recursive_print

print_checkpoint_structure = True
path_to_checkpoint = "/path/to/BioMegatron345mUncased/"

# Extract the basename.
basename = os.path.dirname(path_to_checkpoint).split('/')[-1]

# Load the model.
input_state_dict = torch.load(os.path.join(path_to_checkpoint, 'model_optim_rng.pt'), map_location="cpu")

# Convert.
print("Converting")
output_state_dict, output_config = convert_megatron_checkpoint(input_state_dict, head_model=False)

# Print the structure of the converted state dict.
if print_checkpoint_structure:
    recursive_print(None, output_state_dict)

# Store the config to file.
output_config_file = os.path.join(path_to_checkpoint, "config.json")
print(f'Saving config to "{output_config_file}"')
with open(output_config_file, "w") as f:
    json.dump(output_config, f)

# Store the state_dict to file.
output_checkpoint_file = os.path.join(path_to_checkpoint, "pytorch_model.bin")
print(f'Saving checkpoint to "{output_checkpoint_file}"')
torch.save(output_state_dict, output_checkpoint_file)
```
We provide an alternative version of the [Python script](https://huggingface.co/EMBO/BioMegatron345mCased/blob/main/convert_biomegatron_checkpoint.py) in this repository so that any user can cross-check the validity of the model replicated here.
BioMegatron can be run with the standard 🤗 script for loading models. Here we show an example identical to that of [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-cased-345m).
```python
import torch
from transformers import AutoModelForMaskedLM, BertTokenizer

checkpoint = "EMBO/BioMegatron345mCased"

# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained(checkpoint)

# Load the masked-LM model from this repository.
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
device = torch.device("cpu")

# Create inputs (from the BERT example page).
inputs = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device)
labels = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)

# Run the model.
with torch.no_grad():
    output = model(**inputs, labels=labels)
print(output)
```
# Limitations
This implementation has not been fine-tuned on any task. It contains only the weights of the official NVIDIA checkpoint and needs to be fine-tuned to perform any downstream task.
# Original code
The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
|
theojolliffe/bart-cnn-science-v3-e6
|
theojolliffe
| 2022-05-31T12:32:01Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-31T11:35:59Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e6
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8057
- Rouge1: 53.7462
- Rouge2: 34.9622
- Rougel: 37.5676
- Rougelsum: 51.0619
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
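The checkpoint is a BART summarizer, so the summarization pipeline is the natural entry point; the generation lengths in the table below suggest summaries of roughly 142 tokens. The input text in the sketch is a placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e6")

# Placeholder input; substitute the scientific/technical report to be summarised.
article = (
    "Replace this string with the full text of the document you want summarised. "
    "The evaluation runs generated about 142 tokens per summary."
)
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```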
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9961 | 52.632 | 32.8104 | 35.0789 | 50.3747 | 142.0 |
| 1.174 | 2.0 | 796 | 0.8565 | 52.8308 | 32.7064 | 34.6605 | 50.3348 | 142.0 |
| 0.7073 | 3.0 | 1194 | 0.8322 | 52.2418 | 32.8677 | 36.1806 | 49.6297 | 141.5556 |
| 0.4867 | 4.0 | 1592 | 0.8137 | 53.5537 | 34.5404 | 36.7194 | 50.8394 | 142.0 |
| 0.4867 | 5.0 | 1990 | 0.7996 | 53.4959 | 35.1017 | 37.5143 | 50.9972 | 141.8704 |
| 0.3529 | 6.0 | 2388 | 0.8057 | 53.7462 | 34.9622 | 37.5676 | 51.0619 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
batya66/bert-finetuned-ner
|
batya66
| 2022-05-31T12:02:04Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-31T11:45:17Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9287951211471898
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9384628195520027
- name: Accuracy
type: accuracy
value: 0.985915700241361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9288
- Recall: 0.9483
- F1: 0.9385
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0876 | 1.0 | 1756 | 0.0657 | 0.9093 | 0.9349 | 0.9219 | 0.9826 |
| 0.0412 | 2.0 | 3512 | 0.0555 | 0.9357 | 0.9500 | 0.9428 | 0.9867 |
| 0.0205 | 3.0 | 5268 | 0.0622 | 0.9288 | 0.9483 | 0.9385 | 0.9859 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/magiceden
|
huggingtweets
| 2022-05-31T11:45:39Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-31T11:42:06Z
|
---
language: en
thumbnail: http://www.huggingtweets.com/magiceden/1653997534626/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden 🪄</div>
<div style="text-align: center; font-size: 14px;">@magiceden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Eden 🪄.
| Data | Magic Eden 🪄 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 141 |
| Short tweets | 908 |
| Tweets kept | 2200 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9t2x97k9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magiceden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32j65yat) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32j65yat/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/magiceden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kamalkraj/bert-base-uncased-squad-v2.0-finetuned
|
kamalkraj
| 2022-05-31T11:44:58Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-31T10:48:38Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-squad-v2.0-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-squad-v2.0-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
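Because the model was trained on SQuAD v2, it can also decide that a question has no answer in the given context; the QA pipeline exposes this via `handle_impossible_answer`. The example below is illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kamalkraj/bert-base-uncased-squad-v2.0-finetuned")

context = "The Amazon rainforest covers much of the Amazon basin in South America."
# A normal, answerable question.
print(qa(question="Which continent is the Amazon rainforest on?", context=context))
# SQuAD v2 checkpoints can return an empty answer for unanswerable questions.
print(qa(question="Who discovered the rainforest?", context=context,
         handle_impossible_answer=True))
```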
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
chrisvinsen/wav2vec2-15
|
chrisvinsen
| 2022-05-31T11:13:41Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-31T08:01:18Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-15
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 0.8585
## Model description
More information needed
## Intended uses & limitations
More information needed
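Given the high word error rate (~0.86), this checkpoint looks like an intermediate experiment, but it can still be run through the standard ASR pipeline. The audio file name is a placeholder, and decoding it requires `ffmpeg`; the pipeline resamples to the 16 kHz rate expected by wav2vec2-base.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="chrisvinsen/wav2vec2-15")

# Placeholder audio path; the pipeline decodes and resamples the file via ffmpeg.
print(asr("sample.wav"))
```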
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6808 | 1.37 | 200 | 3.7154 | 1.0 |
| 3.0784 | 2.74 | 400 | 3.1542 | 1.0 |
| 2.8919 | 4.11 | 600 | 2.9918 | 1.0 |
| 2.8317 | 5.48 | 800 | 2.8971 | 1.0 |
| 2.7958 | 6.85 | 1000 | 2.8409 | 1.0 |
| 2.7699 | 8.22 | 1200 | 2.8278 | 1.0 |
| 2.6365 | 9.59 | 1400 | 2.4657 | 1.0 |
| 2.1096 | 10.96 | 1600 | 1.8358 | 0.9988 |
| 1.6485 | 12.33 | 1800 | 1.4525 | 0.9847 |
| 1.3967 | 13.7 | 2000 | 1.2467 | 0.9532 |
| 1.2492 | 15.07 | 2200 | 1.1261 | 0.9376 |
| 1.1543 | 16.44 | 2400 | 1.0654 | 0.9194 |
| 1.0863 | 17.81 | 2600 | 1.0136 | 0.9161 |
| 1.0275 | 19.18 | 2800 | 0.9601 | 0.8827 |
| 0.9854 | 20.55 | 3000 | 0.9435 | 0.8878 |
| 0.9528 | 21.92 | 3200 | 0.9170 | 0.8807 |
| 0.926 | 23.29 | 3400 | 0.9121 | 0.8783 |
| 0.9025 | 24.66 | 3600 | 0.8884 | 0.8646 |
| 0.8909 | 26.03 | 3800 | 0.8836 | 0.8690 |
| 0.8717 | 27.4 | 4000 | 0.8810 | 0.8646 |
| 0.8661 | 28.77 | 4200 | 0.8623 | 0.8585 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e5
|
theojolliffe
| 2022-05-31T10:55:17Z
| 3
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-31T10:00:56Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e5
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8090
- Rouge1: 54.0053
- Rouge2: 35.5018
- Rougel: 37.3204
- Rougelsum: 51.5456
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9935 | 51.9669 | 31.8139 | 34.4748 | 49.5311 | 141.7407 |
| 1.1747 | 2.0 | 796 | 0.8565 | 51.7344 | 31.7341 | 34.3917 | 49.2488 | 141.7222 |
| 0.7125 | 3.0 | 1194 | 0.8252 | 52.829 | 33.2332 | 35.8865 | 50.1883 | 141.5556 |
| 0.4991 | 4.0 | 1592 | 0.8222 | 53.582 | 33.4906 | 35.7232 | 50.589 | 142.0 |
| 0.4991 | 5.0 | 1990 | 0.8090 | 54.0053 | 35.5018 | 37.3204 | 51.5456 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/electra-base-discriminator-finetuned-removed-0530
|
YeRyeongLee
| 2022-05-31T10:46:25Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T08:40:07Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-base-discriminator-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-removed-0530
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9713
- Accuracy: 0.8824
- F1: 0.8824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6265 | 0.8107 | 0.8128 |
| No log | 2.0 | 6360 | 0.5158 | 0.8544 | 0.8541 |
| No log | 3.0 | 9540 | 0.6686 | 0.8563 | 0.8567 |
| No log | 4.0 | 12720 | 0.6491 | 0.8711 | 0.8709 |
| No log | 5.0 | 15900 | 0.8048 | 0.8660 | 0.8672 |
| No log | 6.0 | 19080 | 0.8110 | 0.8708 | 0.8710 |
| No log | 7.0 | 22260 | 1.0082 | 0.8651 | 0.8640 |
| 0.2976 | 8.0 | 25440 | 0.8343 | 0.8811 | 0.8814 |
| 0.2976 | 9.0 | 28620 | 0.9366 | 0.8780 | 0.8780 |
| 0.2976 | 10.0 | 31800 | 0.9713 | 0.8824 | 0.8824 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
StanKarz/q-FrozenLake-v1-4x4-noSlippery
|
StanKarz
| 2022-05-31T10:21:45Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-31T10:21:39Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Sicko-Code/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
theojolliffe/bart-cnn-science-v3-e4
|
theojolliffe
| 2022-05-31T09:41:01Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-31T08:36:30Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e4
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8265
- Rouge1: 53.0296
- Rouge2: 33.4957
- Rougel: 35.8876
- Rougelsum: 50.0786
- Gen Len: 141.5926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9965 | 52.4108 | 32.1506 | 35.0281 | 50.0368 | 142.0 |
| 1.176 | 2.0 | 796 | 0.8646 | 52.7182 | 32.9681 | 35.1454 | 49.9527 | 141.8333 |
| 0.7201 | 3.0 | 1194 | 0.8354 | 52.5417 | 32.6428 | 35.8703 | 49.8037 | 142.0 |
| 0.5244 | 4.0 | 1592 | 0.8265 | 53.0296 | 33.4957 | 35.8876 | 50.0786 | 141.5926 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
orkg/orkgnlp-cs-ner-abstracts
|
orkg
| 2022-05-31T09:40:05Z
| 0
| 1
| null |
[
"license:mit",
"region:us"
] | null | 2022-04-12T10:51:55Z
|
---
license: mit
---
This repository includes the files required to run the `Computer Science Named Entity Recognition (CS-NER)` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
|
matjesg/cFOS_in_HC
|
matjesg
| 2022-05-31T09:39:52Z
| 0
| 0
| null |
[
"onnx",
"image-segmentation",
"semantic-segmentation",
"deepflash2",
"arxiv:2111.06693",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2022-05-16T07:28:51Z
|
---
tags:
- image-segmentation
- semantic-segmentation
- deepflash2
license: apache-2.0
datasets:
- "cFOS in HC"
library_tag: deepflash2
---
# Welcome to the demo of

- **Task**: Image Segmentation / Semantic Segmentation
- **Paper**: The preprint of our paper is available on [arXiv](https://arxiv.org/pdf/2111.06693.pdf)
- **Data**: The cFOS in HC dataset ([Article](https://doi.org/10.7554/eLife.59780), [Data](https://doi.org/10.5061/dryad.4b8gtht9d)) describes the indirect immunofluorescent labeling of the transcription factor cFOS in different subregions of the hippocampus after behavioral testing of the mice.
- **Library**: See [github](https://github.com/matjesg/deepflash2/)
|
orkg/orkgnlp-cs-ner-titles
|
orkg
| 2022-05-31T09:39:40Z
| 0
| 0
| null |
[
"license:mit",
"region:us"
] | null | 2022-04-11T14:31:01Z
|
---
license: mit
---
This repository includes the files required to run the `Computer Science Named Entity Recognition (CS-NER)` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
|
moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64
|
moshew
| 2022-05-31T09:24:16Z
| 3
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-31T09:24:02Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_sst2_nun_training_64)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
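As a reproduction sketch, the configuration above maps onto the sentence-transformers `fit()` API roughly as follows; the base checkpoint and the two labelled pairs are illustrative assumptions, not values taken from this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative training pairs; labels are cosine-similarity targets in [0, 1]
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
    InputExample(texts=["This is an example sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# Assumed base checkpoint; the card does not state it explicitly
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters above: 1 epoch, 10 warmup steps, lr 2e-5, weight decay 0.01
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```
The `scheduler` argument of `fit()` defaults to `WarmupLinear`, which matches the configuration listed above.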
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/hellokitty
|
huggingtweets
| 2022-05-31T08:42:57Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-31T08:34:06Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476611165157355521/-lvlmsRT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hello Kitty</div>
<div style="text-align: center; font-size: 14px;">@hellokitty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hello Kitty.
| Data | Hello Kitty |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 286 |
| Short tweets | 117 |
| Tweets kept | 2815 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/32b69c39/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hellokitty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1npkfvyz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1npkfvyz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hellokitty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-cnn-science-v3-e3
|
theojolliffe
| 2022-05-31T08:34:03Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-31T07:25:43Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e3
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8586
- Rouge1: 53.3497
- Rouge2: 34.0001
- Rougel: 35.6149
- Rougelsum: 50.5723
- Gen Len: 141.3519
## Model description
More information needed
## Intended uses & limitations
More information needed
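A minimal usage sketch with the Transformers summarization pipeline; the checkpoint name comes from this card, while the input text and generation limits are illustrative:
```python
from transformers import pipeline

# Checkpoint name taken from this card; the input text and length limits are illustrative
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e3")
text = "Paste the report or article to be summarized here; it can span several paragraphs."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```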
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9977 | 51.8104 | 31.5395 | 33.6887 | 49.2385 | 142.0 |
| 1.1785 | 2.0 | 796 | 0.8875 | 53.7817 | 34.5394 | 35.9556 | 51.3317 | 141.537 |
| 0.7376 | 3.0 | 1194 | 0.8586 | 53.3497 | 34.0001 | 35.6149 | 50.5723 | 141.3519 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/xlm-roberta-base-finetuned-removed-0530
|
YeRyeongLee
| 2022-05-31T08:31:07Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-31T05:12:27Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-removed-0530
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9944
- Accuracy: 0.8717
- F1: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
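For reference, these settings map onto `TrainingArguments` roughly as sketched below; the output directory is an illustrative assumption:
```python
from transformers import TrainingArguments

# A sketch of how the hyperparameters above map onto TrainingArguments
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-removed-0530",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
)
```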
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6390 | 0.7899 | 0.7852 |
| No log | 2.0 | 6360 | 0.5597 | 0.8223 | 0.8230 |
| No log | 3.0 | 9540 | 0.5177 | 0.8462 | 0.8471 |
| No log | 4.0 | 12720 | 0.5813 | 0.8642 | 0.8647 |
| No log | 5.0 | 15900 | 0.7324 | 0.8557 | 0.8568 |
| No log | 6.0 | 19080 | 0.7589 | 0.8626 | 0.8634 |
| No log | 7.0 | 22260 | 0.7958 | 0.8752 | 0.8751 |
| 0.3923 | 8.0 | 25440 | 0.9177 | 0.8651 | 0.8653 |
| 0.3923 | 9.0 | 28620 | 1.0188 | 0.8673 | 0.8671 |
| 0.3923 | 10.0 | 31800 | 0.9944 | 0.8717 | 0.8719 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
pravesh/wav2vec2-large-xls-r-300m-hindi-colabrathee-intel
|
pravesh
| 2022-05-31T07:04:30Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-31T06:40:06Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colabrathee-intel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colabrathee-intel
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hunkim/sentence-transformers-klue-bert-base
|
hunkim
| 2022-05-31T06:46:31Z
| 28
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-31T06:46:17Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hunkim/sentence-transformers-klue-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hunkim/sentence-transformers-klue-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hunkim/sentence-transformers-klue-bert-base')
model = AutoModel.from_pretrained('hunkim/sentence-transformers-klue-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hunkim/sentence-transformers-klue-bert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
hunkim/sentence-transformersklue-bert-base
|
hunkim
| 2022-05-31T06:39:28Z
| 2
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-31T06:39:14Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hunkim/sentence-transformersklue-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hunkim/sentence-transformersklue-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hunkim/sentence-transformersklue-bert-base')
model = AutoModel.from_pretrained('hunkim/sentence-transformersklue-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hunkim/sentence-transformersklue-bert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bdh240901/wav2vec2-large-xls-r-300m-vi-colab
|
bdh240901
| 2022-05-31T06:11:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-31T05:20:28Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-vi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
daniel780/finetuning-sentiment-model-3000-samples
|
daniel780
| 2022-05-31T05:39:08Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-30T20:23:54Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.8066666666666666
- name: F1
type: f1
value: 0.8079470198675497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4356
- Accuracy: 0.8067
- F1: 0.8079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US_2
|
Splend1dchan
| 2022-05-31T00:59:25Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"dataset:xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-05-31T00:39:11Z
|
---
language:
- en-US
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_300m_minds14.en-US_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_minds14.en-US_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.EN-US dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5685
- F1: 0.8747
- Accuracy: 0.8759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.6195 | 3.95 | 20 | 2.6348 | 0.0172 | 0.0816 |
| 2.5925 | 7.95 | 40 | 2.6119 | 0.0352 | 0.0851 |
| 2.1271 | 11.95 | 60 | 2.3066 | 0.1556 | 0.1986 |
| 1.2618 | 15.95 | 80 | 1.3810 | 0.6877 | 0.7128 |
| 0.5455 | 19.95 | 100 | 1.0403 | 0.6992 | 0.7270 |
| 0.2571 | 23.95 | 120 | 0.8423 | 0.8160 | 0.8121 |
| 0.3478 | 27.95 | 140 | 0.6500 | 0.8516 | 0.8440 |
| 0.0732 | 31.95 | 160 | 0.7066 | 0.8123 | 0.8156 |
| 0.1092 | 35.95 | 180 | 0.5878 | 0.8767 | 0.8759 |
| 0.0271 | 39.95 | 200 | 0.5994 | 0.8578 | 0.8617 |
| 0.4664 | 43.95 | 220 | 0.7830 | 0.8403 | 0.8440 |
| 0.0192 | 47.95 | 240 | 0.5685 | 0.8747 | 0.8759 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab2
|
cwchengtw
| 2022-05-31T00:51:18Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-30T06:00:21Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3738
- Wer: 0.3532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9022 | 3.7 | 400 | 0.6778 | 0.7414 |
| 0.4106 | 7.4 | 800 | 0.4123 | 0.5049 |
| 0.1862 | 11.11 | 1200 | 0.4260 | 0.4232 |
| 0.1342 | 14.81 | 1600 | 0.3951 | 0.4097 |
| 0.0997 | 18.51 | 2000 | 0.4100 | 0.3999 |
| 0.0782 | 22.22 | 2400 | 0.3918 | 0.3875 |
| 0.059 | 25.92 | 2800 | 0.3803 | 0.3698 |
| 0.0474 | 29.63 | 3200 | 0.3738 | 0.3532 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ElMuchoDingDong/AudreyBotBlenderBot
|
ElMuchoDingDong
| 2022-05-30T21:08:38Z
| 4
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:2004.13637",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T20:46:43Z
|
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
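## Example usage
A minimal usage sketch, assuming the checkpoint loads through the generic Auto classes; the prompt and generation length are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ElMuchoDingDong/AudreyBotBlenderBot"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Single-turn exchange; multi-turn history can be concatenated into the input string
utterance = "Hello, how are you today?"
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```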
|
ykirpichev/q-Taxi-v3
|
ykirpichev
| 2022-05-30T21:04:59Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-30T20:54:46Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ykirpichev/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
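For reference, a minimal sketch of what the `evaluate_agent` helper could look like for a tabular Q-table, assuming the classic Gym step API that returns `(state, reward, done, info)`; seed handling is omitted because it depends on the installed gym version:
```python
import numpy as np

def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    """Roll out the greedy policy from a tabular Q-table and report mean/std episodic reward."""
    episode_rewards = []
    for _ in range(n_eval_episodes):
        state = env.reset()  # eval_seed handling omitted; it depends on the gym version
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```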
|
ykirpichev/q-FrozenLake-v1-4x4-noSlippery
|
ykirpichev
| 2022-05-30T20:45:24Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-30T20:45:16Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ykirpichev/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
nouman10/robertabase-claims-3
|
nouman10
| 2022-05-30T19:43:06Z
| 3
| 0
|
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-30T18:22:34Z
|
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nouman10/robertabase-claims-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nouman10/robertabase-claims-3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0310
- Validation Loss: 0.1227
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of constructing this optimizer follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -861, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
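A sketch of this optimizer setup using the Transformers `create_optimizer` helper; `num_train_steps` is an illustrative placeholder, while the learning rate, warmup steps, and weight decay come from the configuration above:
```python
import tensorflow as tf
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=1000,
    num_train_steps=3000,  # illustrative; set to the real number of optimization steps
    weight_decay_rate=0.01,
)
tf.keras.mixed_precision.set_global_policy("mixed_float16")  # matches training_precision above
```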
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1380 | 0.1630 | 0 |
| 0.0310 | 0.1227 | 1 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
clementgyj/roberta-finetuned-squad-50k
|
clementgyj
| 2022-05-30T19:02:29Z
| 3
| 0
|
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-30T15:16:42Z
|
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: clementgyj/roberta-finetuned-squad-50k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clementgyj/roberta-finetuned-squad-50k
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5281
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
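A minimal usage sketch with the question-answering pipeline; the checkpoint name comes from this card (TensorFlow weights), and the question/context pair is illustrative:
```python
from transformers import pipeline

# framework="tf" because this repository ships TensorFlow weights
qa = pipeline("question-answering", model="clementgyj/roberta-finetuned-squad-50k", framework="tf")
print(qa(question="Where does Sarah live?", context="My name is Sarah and I live in London."))
```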
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9462, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0876 | 0 |
| 0.6879 | 1 |
| 0.5281 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e1
|
theojolliffe
| 2022-05-30T18:32:12Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T18:01:33Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-science-v3-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e1
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.0643 | 51.6454 | 31.8213 | 33.7711 | 49.3471 | 141.5926 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ulyanaisaeva/udmurt-bert-base-uncased
|
ulyanaisaeva
| 2022-05-30T18:18:07Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-24T13:44:30Z
|
---
tags:
- generated_from_trainer
model-index:
- name: vocab2-bert-base-multilingual-uncased-udm-tsa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vocab2-bert-base-multilingual-uncased-udm-tsa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8497
## Model description
More information needed
## Intended uses & limitations
More information needed
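A minimal usage sketch with the fill-mask pipeline; the checkpoint name comes from this card, and the example sentence is an English placeholder (in practice, pass Udmurt text containing the tokenizer's `[MASK]` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ulyanaisaeva/udmurt-bert-base-uncased")
print(fill_mask("The weather today is very [MASK]."))  # placeholder input; use Udmurt text in practice
```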
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.3112 | 1.0 | 6419 | 6.1814 |
| 5.8524 | 2.0 | 12838 | 5.4075 |
| 5.3392 | 3.0 | 19257 | 5.0810 |
| 5.0958 | 4.0 | 25676 | 4.9015 |
| 4.9897 | 5.0 | 32095 | 4.8497 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science
|
theojolliffe
| 2022-05-30T17:31:48Z
| 4
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:scientific_papers",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T08:39:52Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3-arxiv3o3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 42.5835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3-arxiv3o3
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3](https://huggingface.co/theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3-arxiv2o3) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0646
- Rouge1: 42.5835
- Rouge2: 16.1887
- Rougel: 24.7972
- Rougelsum: 38.1846
- Gen Len: 129.9291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.0865 | 1.0 | 33840 | 2.0646 | 42.5835 | 16.1887 | 24.7972 | 38.1846 | 129.9291 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
miesnerjacob/distilbert-base-uncased-finetuned-squad-d5716d28
|
miesnerjacob
| 2022-05-30T17:27:30Z
| 9
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-30T17:17:42Z
|
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
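A minimal sketch of the second, task-specific distillation step described above: the student's start/end logits are trained against both the gold answer spans and the temperature-softened teacher logits. The temperature `T` and mixing weight `alpha` below are illustrative assumptions, not values from the paper:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend the hard span loss with a soft term against the teacher (illustrative sketch)."""
    start_s, end_s = student_logits          # each: (batch, seq_len)
    start_t, end_t = teacher_logits
    start_pos, end_pos = labels              # each: (batch,)

    # Hard loss: standard QA cross-entropy on the answer start/end positions
    hard = 0.5 * (F.cross_entropy(start_s, start_pos) + F.cross_entropy(end_s, end_pos))

    # Soft loss: KL divergence between temperature-scaled distributions
    def soft_ce(s, t):
        return F.kl_div(F.log_softmax(s / T, dim=-1),
                        F.softmax(t / T, dim=-1),
                        reduction="batchmean") * (T * T)
    soft = 0.5 * (soft_ce(start_s, start_t) + soft_ce(end_s, end_t))

    return alpha * hard + (1.0 - alpha) * soft
```
`alpha` simply balances the hard span loss against the soft teacher term; the exact weighting used for this checkpoint is not stated here.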
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Mikey8943/marian-finetuned-kde4-en-to-fr
|
Mikey8943
| 2022-05-30T17:16:08Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-05-30T16:14:03Z
|
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.16950271131339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.1695
## Model description
More information needed
## Intended uses & limitations
More information needed
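A minimal usage sketch with the translation pipeline; the checkpoint name comes from this card, and the example sentence is illustrative:
```python
from transformers import pipeline

translator = pipeline("translation", model="Mikey8943/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))  # illustrative English input
```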
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tclong/wav2vec2-dataset-vios
|
tclong
| 2022-05-30T17:12:49Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:vivos_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-29T14:17:21Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-dataset-vios
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-dataset-vios
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5423
- Wer: 0.4075
## Model description
More information needed
## Intended uses & limitations
More information needed
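A minimal transcription sketch; the checkpoint name comes from this card, while the audio path, the 16 kHz sampling rate, and the presence of matching processor files in the repository are illustrative assumptions:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("tclong/wav2vec2-dataset-vios")
model = Wav2Vec2ForCTC.from_pretrained("tclong/wav2vec2-dataset-vios")

speech, _ = librosa.load("example.wav", sr=16_000)  # hypothetical audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```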
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.0963 | 1.1 | 400 | 1.1336 | 0.7374 |
| 0.6576 | 2.2 | 800 | 0.4716 | 0.3727 |
| 0.4099 | 3.3 | 1200 | 0.3907 | 0.3100 |
| 0.3332 | 4.4 | 1600 | 0.3735 | 0.2766 |
| 0.2976 | 5.49 | 2000 | 0.3932 | 0.2801 |
| 0.2645 | 6.59 | 2400 | 0.3628 | 0.2542 |
| 0.2395 | 7.69 | 2800 | 0.3702 | 0.2734 |
| 0.2208 | 8.79 | 3200 | 0.3667 | 0.2467 |
| 0.1974 | 9.89 | 3600 | 0.3688 | 0.2398 |
| 0.1772 | 10.99 | 4000 | 0.3819 | 0.2457 |
| 0.1695 | 12.09 | 4400 | 0.3840 | 0.2451 |
| 0.319 | 13.19 | 4800 | 0.6531 | 0.4084 |
| 0.7305 | 14.29 | 5200 | 0.9883 | 0.6348 |
| 0.5787 | 15.38 | 5600 | 0.5260 | 0.3063 |
| 0.8558 | 16.48 | 6000 | 1.2870 | 0.7692 |
| 1.155 | 17.58 | 6400 | 1.0568 | 0.6353 |
| 0.8393 | 18.68 | 6800 | 0.7360 | 0.4486 |
| 0.6094 | 19.78 | 7200 | 0.6072 | 0.4108 |
| 0.5346 | 20.88 | 7600 | 0.5749 | 0.4095 |
| 0.5073 | 21.98 | 8000 | 0.5588 | 0.4056 |
| 0.4859 | 23.08 | 8400 | 0.5475 | 0.4015 |
| 0.475 | 24.18 | 8800 | 0.5430 | 0.4011 |
| 0.4683 | 25.27 | 9200 | 0.5400 | 0.3990 |
| 0.4673 | 26.37 | 9600 | 0.5407 | 0.4011 |
| 0.4665 | 27.47 | 10000 | 0.5408 | 0.3992 |
| 0.4703 | 28.57 | 10400 | 0.5420 | 0.4070 |
| 0.4709 | 29.67 | 10800 | 0.5423 | 0.4075 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/erinkhoo
|
huggingtweets
| 2022-05-30T16:48:54Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-30T16:48:47Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362800111118659591/O6gxa7NN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">erinkhoo.x</div>
<div style="text-align: center; font-size: 14px;">@erinkhoo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from erinkhoo.x.
| Data | erinkhoo.x |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 1795 |
| Short tweets | 181 |
| Tweets kept | 1240 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/navmzjcl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @erinkhoo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uoi8z43) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uoi8z43/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/erinkhoo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
VanessaSchenkel/mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books
|
VanessaSchenkel
| 2022-05-30T16:38:08Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T14:28:23Z
|
---
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books
This model is a fine-tuned version of [Narrativa/mbart-large-50-finetuned-opus-en-pt-translation](https://huggingface.co/Narrativa/mbart-large-50-finetuned-opus-en-pt-translation) on the opus_books dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 79 | 1.5854 | 31.2219 | 26.9149 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AbhilashDatta/T5_qgen-squad_v1
|
AbhilashDatta
| 2022-05-30T16:19:49Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T05:23:30Z
|
---
license: afl-3.0
---
# Question generation using T5 transformer trained on SQuAD
<h2> <i>Input format: context: "..." answer: "..." </i></h2>
Import the pretrained model as well as tokenizer:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained('AbhilashDatta/T5_qgen-squad_v1')
tokenizer = T5Tokenizer.from_pretrained('AbhilashDatta/T5_qgen-squad_v1')
```
Then use the tokenizer to encode/decode and model to generate:
```python
import torch

input_text = "context: My name is Abhilash Datta. answer: Abhilash"
batch = tokenizer(input_text, padding='longest', max_length=512, return_tensors='pt')
inputs_batch = batch['input_ids'][0]
inputs_batch = torch.unsqueeze(inputs_batch, 0)
ques_id = model.generate(inputs_batch, max_length=100, early_stopping=True)
ques_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in ques_id]
print(ques_batch)
```
Output:
```
['what is my name']
```
|
cewinharhar/iceCream
|
cewinharhar
| 2022-05-30T16:17:21Z
| 10
| 0
|
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-30T15:12:45Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cewinharhar/iceCream
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cewinharhar/iceCream
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1909
- Validation Loss: 3.0925
- Epoch: 92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.9926 | 4.0419 | 0 |
| 3.9831 | 3.8247 | 1 |
| 3.8396 | 3.7337 | 2 |
| 3.7352 | 3.6509 | 3 |
| 3.6382 | 3.5948 | 4 |
| 3.5595 | 3.5458 | 5 |
| 3.4845 | 3.4667 | 6 |
| 3.4140 | 3.4460 | 7 |
| 3.3546 | 3.4035 | 8 |
| 3.2939 | 3.3571 | 9 |
| 3.2420 | 3.3465 | 10 |
| 3.1867 | 3.2970 | 11 |
| 3.1418 | 3.2716 | 12 |
| 3.0865 | 3.2609 | 13 |
| 3.0419 | 3.2318 | 14 |
| 2.9962 | 3.2279 | 15 |
| 2.9551 | 3.1991 | 16 |
| 2.9178 | 3.1656 | 17 |
| 2.8701 | 3.1654 | 18 |
| 2.8348 | 3.1372 | 19 |
| 2.7988 | 3.1281 | 20 |
| 2.7597 | 3.0978 | 21 |
| 2.7216 | 3.1019 | 22 |
| 2.6844 | 3.0388 | 23 |
| 2.6489 | 3.0791 | 24 |
| 2.6192 | 3.0885 | 25 |
| 2.5677 | 3.0388 | 26 |
| 2.5478 | 3.0530 | 27 |
| 2.5136 | 3.0403 | 28 |
| 2.4756 | 3.0521 | 29 |
| 2.4454 | 3.0173 | 30 |
| 2.4203 | 3.0079 | 31 |
| 2.3882 | 3.0325 | 32 |
| 2.3596 | 3.0066 | 33 |
| 2.3279 | 2.9919 | 34 |
| 2.2947 | 2.9871 | 35 |
| 2.2712 | 2.9834 | 36 |
| 2.2311 | 2.9917 | 37 |
| 2.2022 | 2.9796 | 38 |
| 2.1703 | 2.9641 | 39 |
| 2.1394 | 2.9571 | 40 |
| 2.1237 | 2.9662 | 41 |
| 2.0949 | 2.9358 | 42 |
| 2.0673 | 2.9653 | 43 |
| 2.0417 | 2.9416 | 44 |
| 2.0194 | 2.9531 | 45 |
| 2.0009 | 2.9417 | 46 |
| 1.9716 | 2.9325 | 47 |
| 1.9488 | 2.9476 | 48 |
| 1.9265 | 2.9559 | 49 |
| 1.8975 | 2.9477 | 50 |
| 1.8815 | 2.9429 | 51 |
| 1.8552 | 2.9119 | 52 |
| 1.8358 | 2.9377 | 53 |
| 1.8226 | 2.9605 | 54 |
| 1.7976 | 2.9446 | 55 |
| 1.7677 | 2.9162 | 56 |
| 1.7538 | 2.9292 | 57 |
| 1.7376 | 2.9968 | 58 |
| 1.7156 | 2.9525 | 59 |
| 1.7001 | 2.9275 | 60 |
| 1.6806 | 2.9714 | 61 |
| 1.6582 | 2.9903 | 62 |
| 1.6436 | 2.9363 | 63 |
| 1.6254 | 2.9714 | 64 |
| 1.6093 | 2.9804 | 65 |
| 1.5900 | 2.9740 | 66 |
| 1.5686 | 2.9835 | 67 |
| 1.5492 | 3.0018 | 68 |
| 1.5371 | 3.0088 | 69 |
| 1.5245 | 2.9780 | 70 |
| 1.5021 | 3.0176 | 71 |
| 1.4839 | 2.9917 | 72 |
| 1.4726 | 3.0602 | 73 |
| 1.4568 | 3.0055 | 74 |
| 1.4435 | 3.0186 | 75 |
| 1.4225 | 2.9948 | 76 |
| 1.4088 | 3.0270 | 77 |
| 1.3947 | 3.0676 | 78 |
| 1.3780 | 3.0615 | 79 |
| 1.3627 | 3.0780 | 80 |
| 1.3445 | 3.0491 | 81 |
| 1.3293 | 3.0534 | 82 |
| 1.3130 | 3.0460 | 83 |
| 1.2980 | 3.0846 | 84 |
| 1.2895 | 3.0709 | 85 |
| 1.2737 | 3.0903 | 86 |
| 1.2557 | 3.0854 | 87 |
| 1.2499 | 3.1101 | 88 |
| 1.2353 | 3.1181 | 89 |
| 1.2104 | 3.1111 | 90 |
| 1.2101 | 3.1153 | 91 |
| 1.1909 | 3.0925 | 92 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
knurm/my-finetuned-xml-roberta4
|
knurm
| 2022-05-30T16:14:33Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-30T07:48:32Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my-finetuned-xml-roberta4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-xml-roberta4
This model is a fine-tuned version of [knurm/xlm-roberta-base-finetuned-est](https://huggingface.co/knurm/xlm-roberta-base-finetuned-est) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7709
## Model description
More information needed
## Intended uses & limitations
More information needed
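Although the intended uses are not documented, the model is set up for question answering, so a minimal inference sketch with the Transformers `pipeline` API could look like the following; the Estonian question/context pair is an invented example.
```python
from transformers import pipeline

# Minimal sketch; the question/context pair below is made up for illustration.
qa = pipeline("question-answering", model="knurm/my-finetuned-xml-roberta4")
result = qa(question="Kus asub Tallinn?", context="Tallinn asub Eesti põhjarannikul.")
print(result["answer"], result["score"])
```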
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.4629 | 1.0 | 5652 | 3.3367 |
| 3.1814 | 2.0 | 11304 | 3.2952 |
| 2.9718 | 3.0 | 16956 | 3.2592 |
| 2.7442 | 4.0 | 22608 | 3.3133 |
| 2.5991 | 5.0 | 28260 | 3.4292 |
| 2.4221 | 6.0 | 33912 | 3.5928 |
| 2.3259 | 7.0 | 39564 | 3.7709 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-Bert-base-uncased-CR
|
jkhan447
| 2022-05-30T15:02:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-30T09:54:35Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-CR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-CR
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2057
- Accuracy: 0.7187
## Model description
More information needed
## Intended uses & limitations
More information needed
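Pending more details from the author, a minimal text-classification sketch is shown below; note that the label names come from the model config and may be generic (e.g. LABEL_0 / LABEL_1), since the card does not document a label mapping.
```python
from transformers import pipeline

# Minimal sketch; the input sentence is invented and the label names are whatever
# the model config defines (not documented in this card).
clf = pipeline("text-classification", model="jkhan447/sarcasm-detection-Bert-base-uncased-CR")
print(clf("Oh great, another Monday. Exactly what I needed."))
```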
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-RoBerta-base-CR
|
jkhan447
| 2022-05-30T14:57:19Z
| 31
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-30T09:52:35Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-CR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-CR
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0240
- Accuracy: 0.726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
merve/deeplab-v3
|
merve
| 2022-05-30T14:49:29Z
| 0
| 0
|
keras
|
[
"keras",
"tensorboard",
"tf-keras",
"image-segmentation",
"region:us"
] |
image-segmentation
| 2022-05-30T14:49:02Z
|
---
library_name: keras
tags:
- image-segmentation
---
## Model description
More information needed
## Intended uses & limitations
More information needed
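Since this is a Keras model, it can presumably be loaded with `from_pretrained_keras` from `huggingface_hub`; the sketch below is an assumption, and the 512x512x3 input shape is a guess that should be checked against `model.input_shape` after loading.
```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Minimal loading sketch; the input resolution is assumed, not documented in the card.
model = from_pretrained_keras("merve/deeplab-v3")
dummy = np.random.rand(1, 512, 512, 3).astype("float32")
masks = model.predict(dummy)  # per-pixel class scores
print(masks.shape)
```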
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
| Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy |
|---|---|---|---|---|
| 1 | 1.206 | 0.636 | 2.55 | 0.555 |
| 2 | 0.957 | 0.696 | 2.671 | 0.598 |
| 3 | 0.847 | 0.729 | 1.431 | 0.612 |
| 4 | 0.774 | 0.751 | 1.008 | 0.689 |
| 5 | 0.712 | 0.771 | 1.016 | 0.705 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
ianspektor/ppo-LunarLander-v2
|
ianspektor
| 2022-05-30T14:42:19Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-30T14:41:39Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 193.83 +/- 12.64
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; check the repository's file list if it differs.
checkpoint = load_from_hub("ianspektor/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
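Once a policy is loaded as above, a short evaluation rollout might look like the following; this assumes the classic Gym step/reset API that was current when this card was written and that the Box2D extra (`pip install gym[box2d]`) is installed.
```python
import gym

# Roll the loaded policy out for a few episodes (no rendering).
env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```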
|
ruselkomp/deeppavlov-framebank-50size
|
ruselkomp
| 2022-05-30T14:11:08Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-30T10:00:12Z
|
---
tags:
- generated_from_trainer
model-index:
- name: deeppavlov-framebank-50size
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deeppavlov-framebank-50size
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0733 | 1.0 | 2827 | 1.0076 |
| 0.7875 | 2.0 | 5654 | 1.0309 |
| 0.6003 | 3.0 | 8481 | 1.1007 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Selma/pytorch-resnet34
|
Selma
| 2022-05-30T13:49:04Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-05-27T13:58:04Z
|
# The model
PyTorch ResNet-34
# Intended use
Image classification
# Training parameters
pretrained = True
---
language:
- eng
thumbnail:
- "https://pytorch.org/vision/stable/models.html#id10"
tags:
- pytorch
- image classification
license:
- "bsd-2-clause"
metrics:
- acc@1 (on ImageNet-1K): 73.314
- acc@5 (on ImageNet-1K): 91.42
---
|
y05uk/wav2vec2-base-timit-demo-google-colab
|
y05uk
| 2022-05-30T13:32:00Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-30T10:59:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5353
- Wer: 0.3360
## Model description
More information needed
## Intended uses & limitations
More information needed
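For a rough idea of inference, a minimal transcription sketch is given below; it assumes the repository also stores the processor/tokenizer files, and `sample.wav` is a placeholder for a 16 kHz mono recording.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Minimal sketch; "sample.wav" is a placeholder and must be 16 kHz mono audio.
repo = "y05uk/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech, _ = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```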
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5345 | 1.0 | 500 | 1.8229 | 0.9810 |
| 0.8731 | 2.01 | 1000 | 0.5186 | 0.5165 |
| 0.4455 | 3.01 | 1500 | 0.4386 | 0.4572 |
| 0.3054 | 4.02 | 2000 | 0.4396 | 0.4286 |
| 0.2354 | 5.02 | 2500 | 0.4454 | 0.4051 |
| 0.1897 | 6.02 | 3000 | 0.4465 | 0.3925 |
| 0.1605 | 7.03 | 3500 | 0.4776 | 0.3974 |
| 0.1413 | 8.03 | 4000 | 0.5254 | 0.4062 |
| 0.1211 | 9.04 | 4500 | 0.5123 | 0.3913 |
| 0.1095 | 10.04 | 5000 | 0.4171 | 0.3711 |
| 0.1039 | 11.04 | 5500 | 0.4258 | 0.3732 |
| 0.0932 | 12.05 | 6000 | 0.4879 | 0.3701 |
| 0.0867 | 13.05 | 6500 | 0.4725 | 0.3637 |
| 0.0764 | 14.06 | 7000 | 0.5041 | 0.3636 |
| 0.0661 | 15.06 | 7500 | 0.4692 | 0.3646 |
| 0.0647 | 16.06 | 8000 | 0.4804 | 0.3612 |
| 0.0576 | 17.07 | 8500 | 0.5545 | 0.3628 |
| 0.0577 | 18.07 | 9000 | 0.5004 | 0.3557 |
| 0.0481 | 19.08 | 9500 | 0.5341 | 0.3558 |
| 0.0466 | 20.08 | 10000 | 0.5056 | 0.3514 |
| 0.0433 | 21.08 | 10500 | 0.4864 | 0.3481 |
| 0.0362 | 22.09 | 11000 | 0.4994 | 0.3473 |
| 0.0325 | 23.09 | 11500 | 0.5327 | 0.3446 |
| 0.0351 | 24.1 | 12000 | 0.5360 | 0.3445 |
| 0.0284 | 25.1 | 12500 | 0.5085 | 0.3399 |
| 0.027 | 26.1 | 13000 | 0.5344 | 0.3426 |
| 0.0247 | 27.11 | 13500 | 0.5310 | 0.3357 |
| 0.0251 | 28.11 | 14000 | 0.5201 | 0.3355 |
| 0.0228 | 29.12 | 14500 | 0.5353 | 0.3360 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Splend1dchan/xtreme_s_xlsr_300m_mt5-small_minds14.en-US
|
Splend1dchan
| 2022-05-30T12:33:15Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"dataset:xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-05-30T11:47:22Z
|
---
language:
- en-US
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_300m_mt5-small_minds14.en-US
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_mt5-small_minds14.en-US
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.EN-US dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7321
- F1: 0.0154
- Accuracy: 0.0638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.6067 | 3.95 | 20 | 2.6501 | 0.0112 | 0.0851 |
| 2.5614 | 7.95 | 40 | 2.8018 | 0.0133 | 0.0603 |
| 2.2836 | 11.95 | 60 | 3.0786 | 0.0084 | 0.0603 |
| 1.9597 | 15.95 | 80 | 3.2288 | 0.0126 | 0.0638 |
| 1.5566 | 19.95 | 100 | 3.6934 | 0.0178 | 0.0567 |
| 1.3168 | 23.95 | 120 | 3.9135 | 0.0150 | 0.0638 |
| 1.0598 | 27.95 | 140 | 4.2618 | 0.0084 | 0.0603 |
| 0.5721 | 31.95 | 160 | 3.7973 | 0.0354 | 0.0780 |
| 0.4402 | 35.95 | 180 | 4.6233 | 0.0179 | 0.0638 |
| 0.6113 | 39.95 | 200 | 4.6149 | 0.0208 | 0.0674 |
| 0.3938 | 43.95 | 220 | 4.7886 | 0.0159 | 0.0638 |
| 0.2473 | 47.95 | 240 | 4.7321 | 0.0154 | 0.0638 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum
|
nestoralvaro
| 2022-05-30T11:58:22Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-08T22:11:28Z
|
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-xsum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ernestumorga/ppo-seals_Walker2d-v0
|
ernestumorga
| 2022-05-30T10:53:04Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"seals/Walker2d-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-30T10:52:33Z
|
---
library_name: stable-baselines3
tags:
- seals/Walker2d-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1429.13 +/- 411.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Walker2d-v0
type: seals/Walker2d-v0
---
# **PPO** Agent playing **seals/Walker2d-v0**
This is a trained model of a **PPO** agent playing **seals/Walker2d-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env seals/Walker2d-v0 -orga ernestumorga -f logs/
python enjoy.py --algo ppo --env seals/Walker2d-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env seals/Walker2d-v0 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env seals/Walker2d-v0 -f logs/ -orga ernestumorga
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.4),
('ent_coef', 0.00013057334805552262),
('gae_lambda', 0.92),
('gamma', 0.98),
('learning_rate', 3.791707778339674e-05),
('max_grad_norm', 0.6),
('n_envs', 1),
('n_epochs', 5),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[256, 256], '
'vf=[256, 256])])'),
('vf_coef', 0.6167177795726859),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
M47Labs/spanish_news_classification_headlines_untrained
|
M47Labs
| 2022-05-30T10:44:44Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-30T08:26:13Z
|
---
widget:
- text: "El dólar se dispara tras la reunión de la Fed"
---
# Spanish News Classification Headlines
SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/) for text classification. The base model is [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), but it has not been fine-tuned on any dataset; the goal is to show how the model performs when used for inference without any task-specific training.
## Validation Dataset Sample
Dataset size: 1000
Columns: idTask, task content 1, idTag, tag.
|task content|tag|
|------|------|
|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|sociedad|
|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|deportes|
|Un total de 39 personas padecen ELA actualmente en la provincia|sociedad|
|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|deportes|
|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|sociedad|
|El primer ministro sueco pierde una moción de censura|politica|
|El dólar se dispara tras la reunión de la Fed|economia|
## Labels:
* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad
## Example of Use
### Pipeline
```{python}
import torch
from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline
review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines_untrained"
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task = "text-classification",
model = model,
tokenizer = tokenizer)
print(nlp(review_text))
```
```[{'label': 'medio_ambiente', 'score': 0.2834321384291023}]```
### Pytorch
```{python}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = 'M47Labs/spanish_news_classification_headlines_untrained'
MAX_LEN = 32
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"
encoded_review = tokenizer.encode_plus(
    texto,
    max_length=MAX_LEN,
    add_special_tokens=True,
    padding='max_length',
    return_attention_mask=True,
    return_tensors='pt',
)
input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)
print(f'Review text: {texto}')
print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```
```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```
```Sentiment : opinion```
A more in-depth example of how to use the model can be found in this Colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing
## Validation Results
|Full Dataset||
|------|------|
|Accuracy Score|0.362|
|Precision (Macro)|0.21|
|Recall (Macro)|0.22|

|
Misha24-10/TEST2ppo-LunarLander-v6
|
Misha24-10
| 2022-05-30T10:39:37Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-30T10:39:08Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 279.89 +/- 16.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is a guess; check the repository's file list if it differs.
checkpoint = load_from_hub("Misha24-10/TEST2ppo-LunarLander-v6", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
iftekher/bangla_voice
|
iftekher
| 2022-05-30T10:03:21Z
| 7
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-21T04:56:57Z
|
---
tags:
- generated_from_trainer
model-index:
- name: bangla_voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla_voice
This model is a fine-tuned version of [iftekher/bangla_voice](https://huggingface.co/iftekher/bangla_voice) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 208.2614
- Wer: 0.3201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 158.927 | 0.21 | 100 | 81.4025 | 0.3489 |
| 206.3938 | 0.42 | 200 | 117.4497 | 0.3680 |
| 194.8868 | 0.64 | 300 | 473.2094 | 0.3622 |
| 177.3037 | 0.85 | 400 | 81.0834 | 0.3585 |
| 150.9285 | 1.06 | 500 | 397.6080 | 0.3592 |
| 164.899 | 1.27 | 600 | 71.5732 | 0.3476 |
| 157.9872 | 1.48 | 700 | 76.6225 | 0.3560 |
| 139.5956 | 1.69 | 800 | 76.4330 | 0.3512 |
| 132.7378 | 1.91 | 900 | 154.8127 | 0.3378 |
| 137.2875 | 2.12 | 1000 | 275.6554 | 0.3453 |
| 128.1135 | 2.33 | 1100 | 210.1160 | 0.3409 |
| 124.5749 | 2.54 | 1200 | 109.8560 | 0.3400 |
| 115.9728 | 2.75 | 1300 | 165.5507 | 0.3373 |
| 120.9464 | 2.97 | 1400 | 248.8096 | 0.3357 |
| 104.8963 | 3.18 | 1500 | 308.7221 | 0.3361 |
| 115.9144 | 3.39 | 1600 | 214.0615 | 0.3300 |
| 109.0966 | 3.6 | 1700 | 197.1803 | 0.3286 |
| 111.4354 | 3.81 | 1800 | 189.1278 | 0.3245 |
| 111.9318 | 4.03 | 1900 | 191.4921 | 0.3282 |
| 109.2148 | 4.24 | 2000 | 185.1797 | 0.3298 |
| 114.0561 | 4.45 | 2100 | 190.5829 | 0.3229 |
| 105.7045 | 4.66 | 2200 | 209.0799 | 0.3220 |
| 127.4207 | 4.87 | 2300 | 208.2614 | 0.3201 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
stevemobs/deberta-base-combined-squad1-aqa-1epoch-and-newsqa-1epoch
|
stevemobs
| 2022-05-30T09:12:36Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-30T02:46:39Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-combined-squad1-aqa-1epoch-and-newsqa-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-1epoch-and-newsqa-1epoch
This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa-1epoch](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa-1epoch) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6654 | 1.0 | 17307 | 0.6807 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Santarabantoosoo/Clinical-Longformer-MLM-opnote
|
Santarabantoosoo
| 2022-05-30T08:23:25Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-29T22:08:37Z
|
---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-MLM-opnote
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-MLM-opnote
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8286
## Model description
More information needed
## Intended uses & limitations
More information needed
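As an illustration only, a masked-token prediction sketch could look like this; Longformer-style models use the `<mask>` token, and the operative-note-style sentence is invented.
```python
from transformers import pipeline

# Minimal sketch; the sentence is a made-up operative-note-style example.
fill = pipeline("fill-mask", model="Santarabantoosoo/Clinical-Longformer-MLM-opnote")
print(fill("The patient was taken to the <mask> room and prepped and draped in the usual sterile fashion."))
```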
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 266 | 0.9606 |
| 1.1655 | 2.0 | 532 | 0.8677 |
| 1.1655 | 3.0 | 798 | 0.8195 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.10.1
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_nofreeze_bs16_forMINDS.en.all2
|
Splend1dchan
| 2022-05-30T07:38:51Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"speechmix",
"endpoints_compatible",
"region:us"
] | null | 2022-05-30T01:14:14Z
|
wav2vec2 -> t5lephone
- batch size = 16
- dropout = 0.3
- performance: 29%
{
"architectures": [
"SpeechMixEEDT5"
],
"decoder": {
"_name_or_path": "voidful/phoneme_byt5",
"add_cross_attention": true,
"architectures": [
"T5ForConditionalGeneration"
],
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"d_ff": 3584,
"d_kv": 64,
"d_model": 1472,
"decoder_start_token_id": 0,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout_rate": 0.1,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"is_decoder": true,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-06,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "t5",
"no_repeat_ngram_size": 0,
"num_beam_groups": 1,
"num_beams": 1,
"num_decoder_layers": 4,
"num_heads": 6,
"num_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"relative_attention_max_distance": 128,
"relative_attention_num_buckets": 32,
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": false,
"tokenizer_class": "ByT5Tokenizer",
"top_k": 50,
"top_p": 1.0,
"torch_dtype": "float32",
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 384
},
"encoder": {
"_name_or_path": "facebook/wav2vec2-large-lv60",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"add_cross_attention": false,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2ForPreTraining"
],
"attention_dropout": 0.1,
"bad_words_ids": null,
"bos_token_id": 1,
"chunk_size_feed_forward": 0,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": true,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"cross_attention_hidden_size": null,
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"decoder_start_token_id": null,
"diversity_loss_weight": 0.1,
"diversity_penalty": 0.0,
"do_sample": false,
"do_stable_layer_norm": true,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"length_penalty": 1.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"max_length": 20,
"min_length": 0,
"model_type": "wav2vec2",
"no_repeat_ngram_size": 0,
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_size": 1024,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"proj_codevector_dim": 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
},
"is_encoder_decoder": true,
"model_type": "speechmix",
"torch_dtype": "float32",
"transformers_version": null
}
|