modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---
aaraki/marian-finetuned-kde4-en-to-fr | aaraki | 2022-03-02T01:54:57Z | 22 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94560734092563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9456
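The card stops short of a usage example; the following is a minimal inference sketch (an illustrative addition, not part of the original card) using the standard `transformers` pipeline API:
```python
from transformers import pipeline

# Translation pipeline backed by the fine-tuned EN->FR Marian checkpoint
translator = pipeline("translation", model="aaraki/marian-finetuned-kde4-en-to-fr")

# KDE4 is software/UI text, so a technical sentence is a natural smoke test
print(translator("Default to expanded threads")[0]["translation_text"])
```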
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
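For readers reproducing this run, the list above corresponds roughly to the following `Seq2SeqTrainingArguments` (an illustrative sketch: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the `Trainer` defaults, so they need not be set explicitly):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-fr",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```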
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln22 | BigSalmon | 2022-03-01T22:38:59Z | 10 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:04Z |
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln22")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln22")
```
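A generation sketch (an illustrative addition, not from the original card) that pairs the loading code above with the prompt format documented below:
```python
# Build a prompt following the informal-to-formal format shown in the examples below
prompt = (
    "informal english: i am very ready to do just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```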
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with its own set of powers, so as to prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
|
ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19 | ali2066 | 2022-03-01T14:55:36Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- Precision: 0.3373
- Recall: 0.5670
- F1: 0.4230
- Accuracy: 0.8943
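None of the ali2066 token-classification cards in this listing include inference code; here is a minimal sketch (an illustrative addition, using the standard pipeline API) for this checkpoint, which applies equally to the sibling checkpoints below:
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned BERT checkpoint
tagger = pipeline(
    "token-classification",
    model="ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(tagger("I think this argument is quite convincing."))
```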
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3783 | 0.1833 | 0.3975 | 0.2509 | 0.8413 |
| No log | 2.0 | 60 | 0.3021 | 0.3280 | 0.4820 | 0.3904 | 0.8876 |
| No log | 3.0 | 90 | 0.3196 | 0.3504 | 0.5036 | 0.4133 | 0.8918 |
| No log | 4.0 | 120 | 0.3645 | 0.3434 | 0.5306 | 0.4170 | 0.8759 |
| No log | 5.0 | 150 | 0.4027 | 0.3217 | 0.5486 | 0.4056 | 0.8797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
SuperAI2-Machima/mt5-small-thai-qg-v2 | SuperAI2-Machima | 2022-03-01T14:53:52Z | 26 | 2 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "dataset:NSC2018", "dataset:wiki-documents-nsc", "dataset:ThaiQACorpus-DevelopmentDataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
tags:
- question-generation
language:
- thai
- th
datasets:
- NSC2018
- wiki-documents-nsc
- ThaiQACorpus-DevelopmentDataset
widget:
- text: "โรงเรียนบ้านขุนด่าน ตั้งอยู่ที่ขุนด่าน จ.นครนายก </s>"
example_title: "Example 01"
- text: "พลเอก ประยุทธ์ จันทร์โอชา (เกิด 21 มีนาคม พ.ศ. 2497) ชื่อเล่น ตู่ เป็นนักการเมืองและอดีตนายทหารบกชาวไทย </s>"
example_title: "Example 02"
- text: "วันที่ 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น </s>"
example_title: "Example 03"
- text: "กรุงเทพมหานคร เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. </s>"
example_title: "Example 04"
license: mit
---
[SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5), [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Select a GPU when available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
                             num_beams=4,
                             no_repeat_ngram_size=2,
                             max_length=50,
                             early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
```
|
ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47 | ali2066 | 2022-03-01T14:50:16Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14 | ali2066 | 2022-03-01T14:48:43Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0092
- Recall: 0.0403
- F1: 0.0150
- Accuracy: 0.7291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5856 | 0.0012 | 0.0125 | 0.0022 | 0.6950 |
| No log | 2.0 | 20 | 0.5933 | 0.0 | 0.0 | 0.0 | 0.7282 |
| No log | 3.0 | 30 | 0.5729 | 0.0051 | 0.025 | 0.0085 | 0.7155 |
| No log | 4.0 | 40 | 0.6178 | 0.0029 | 0.0125 | 0.0047 | 0.7143 |
| No log | 5.0 | 50 | 0.6707 | 0.0110 | 0.0375 | 0.0170 | 0.7178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29 | ali2066 | 2022-03-01T14:42:27Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.2769
- Recall: 0.4391
- F1: 0.3396
- Accuracy: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 |
| No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 |
| No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.408 | 0.2974 | 0.8827 |
| No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 |
| No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.48 | 0.3797 | 0.8960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24 | ali2066 | 2022-03-01T14:41:24Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6319 | 0.08 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51 | ali2066 | 2022-03-01T14:36:00Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1138
- Precision: 0.5788
- Recall: 0.4712
- F1: 0.5195
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1316 | 0.04 | 0.0021 | 0.0040 | 0.9624 |
| No log | 2.0 | 30 | 0.1016 | 0.6466 | 0.4688 | 0.5435 | 0.9767 |
| No log | 3.0 | 45 | 0.0899 | 0.5873 | 0.4625 | 0.5175 | 0.9757 |
| No log | 4.0 | 60 | 0.0849 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |
| No log | 5.0 | 75 | 0.0835 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16 | ali2066 | 2022-03-01T14:33:46Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Precision: 0.3644
- Recall: 0.4985
- F1: 0.4210
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 |
| No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 |
| No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 |
| No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 |
| No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12 | ali2066 | 2022-03-01T14:25:30Z | 9 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 | ali2066 | 2022-03-01T14:20:06Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44 | ali2066 | 2022-03-01T14:12:43Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3082
- Precision: 0.2796
- Recall: 0.4373
- F1: 0.3411
- Accuracy: 0.8887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5018 | 0.0192 | 0.0060 | 0.0091 | 0.7370 |
| No log | 2.0 | 22 | 0.4066 | 0.1541 | 0.2814 | 0.1992 | 0.8340 |
| No log | 3.0 | 33 | 0.3525 | 0.1768 | 0.3234 | 0.2286 | 0.8612 |
| No log | 4.0 | 44 | 0.3250 | 0.2171 | 0.3503 | 0.2680 | 0.8766 |
| No log | 5.0 | 55 | 0.3160 | 0.2353 | 0.3713 | 0.2880 | 0.8801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 | ali2066 | 2022-03-01T14:05:57Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2903
- Precision: 0.2440
- Recall: 0.4465
- F1: 0.3155
- Accuracy: 0.8706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4378 | 0.0463 | 0.1136 | 0.0658 | 0.7742 |
| No log | 2.0 | 60 | 0.3739 | 0.1472 | 0.3756 | 0.2115 | 0.8284 |
| No log | 3.0 | 90 | 0.3422 | 0.1865 | 0.4330 | 0.2607 | 0.8374 |
| No log | 4.0 | 120 | 0.3286 | 0.2243 | 0.4833 | 0.3064 | 0.8438 |
| No log | 5.0 | 150 | 0.3239 | 0.2356 | 0.4809 | 0.3163 | 0.8490 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35 | ali2066 | 2022-03-01T14:02:32Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1155
- Precision: 0.5720
- Recall: 0.4705
- F1: 0.5163
- Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1256 | 0.04 | 0.0021 | 0.0039 | 0.9624 |
| No log | 2.0 | 30 | 0.0963 | 0.7121 | 0.5711 | 0.6339 | 0.9794 |
| No log | 3.0 | 45 | 0.0844 | 0.6205 | 0.5732 | 0.5959 | 0.9778 |
| No log | 4.0 | 60 | 0.0770 | 0.6201 | 0.5856 | 0.6023 | 0.9778 |
| No log | 5.0 | 75 | 0.0750 | 0.6174 | 0.5856 | 0.6011 | 0.9777 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58 | ali2066 | 2022-03-01T14:00:30Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Precision: 0.3554
- Recall: 0.4884
- F1: 0.4114
- Accuracy: 0.8973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4423 | 0.0261 | 0.0184 | 0.0216 | 0.7728 |
| No log | 2.0 | 22 | 0.3220 | 0.1256 | 0.3129 | 0.1793 | 0.8735 |
| No log | 3.0 | 33 | 0.2561 | 0.2633 | 0.4264 | 0.3255 | 0.9103 |
| No log | 4.0 | 44 | 0.2535 | 0.3303 | 0.4509 | 0.3813 | 0.9115 |
| No log | 5.0 | 55 | 0.2414 | 0.3696 | 0.4693 | 0.4135 | 0.9181 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20 | ali2066 | 2022-03-01T13:46:20Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6113
- Precision: 0.0097
- Recall: 0.0145
- F1: 0.0116
- Accuracy: 0.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 10 | 0.6399 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 2.0 | 20 | 0.6192 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 3.0 | 30 | 0.6133 | 0.0 | 0.0 | 0.0 | 0.6605 |
| No log | 4.0 | 40 | 0.6142 | 0.0 | 0.0 | 0.0 | 0.6617 |
| No log | 5.0 | 50 | 0.6129 | 0.0 | 0.0 | 0.0 | 0.6632 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 | ali2066 | 2022-03-01T13:39:36Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- Precision: 0.1194
- Recall: 0.2563
- F1: 0.1629
- Accuracy: 0.8546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4963 | 0.0223 | 0.0562 | 0.0319 | 0.7461 |
| No log | 2.0 | 60 | 0.4089 | 0.0617 | 0.1359 | 0.0849 | 0.8093 |
| No log | 3.0 | 90 | 0.3919 | 0.1053 | 0.2101 | 0.1403 | 0.8219 |
| No log | 4.0 | 120 | 0.3787 | 0.1202 | 0.2482 | 0.1619 | 0.8270 |
| No log | 5.0 | 150 | 0.3745 | 0.1171 | 0.2391 | 0.1572 | 0.8311 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33 | ali2066 | 2022-03-01T13:35:34Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Precision: 0.1412
- Recall: 0.25
- F1: 0.1805
- Accuracy: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4549 | 0.0228 | 0.0351 | 0.0276 | 0.7734 |
| No log | 2.0 | 60 | 0.3577 | 0.0814 | 0.1260 | 0.0989 | 0.8355 |
| No log | 3.0 | 90 | 0.3116 | 0.1534 | 0.2648 | 0.1943 | 0.8611 |
| No log | 4.0 | 120 | 0.2975 | 0.1792 | 0.2967 | 0.2234 | 0.8690 |
| No log | 5.0 | 150 | 0.2935 | 0.1873 | 0.2998 | 0.2305 | 0.8715 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58 | ali2066 | 2022-03-01T13:33:00Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Precision: 0.3363
- Recall: 0.5110
- F1: 0.4057
- Accuracy: 0.8931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3976 | 0.1405 | 0.3058 | 0.1925 | 0.7921 |
| No log | 2.0 | 60 | 0.3511 | 0.2360 | 0.4038 | 0.2979 | 0.8260 |
| No log | 3.0 | 90 | 0.3595 | 0.1863 | 0.3827 | 0.2506 | 0.8211 |
| No log | 4.0 | 120 | 0.3591 | 0.2144 | 0.4288 | 0.2859 | 0.8299 |
| No log | 5.0 | 150 | 0.3605 | 0.1989 | 0.4212 | 0.2702 | 0.8343 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
nickmuchi/vit-finetuned-cats-dogs | nickmuchi | 2022-03-01T13:15:13Z | 132 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
widget:
- src: https://cdn.pixabay.com/photo/2021/09/19/12/19/animal-6637774_1280.jpg
example_title: Dog
- src: https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg
example_title: Cat
model-index:
- name: vit-finetuned-cats-dogs
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9971014261245728
---
# vit-finetuned-cats-dogs
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
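A minimal inference sketch (an illustrative addition, not part of the original card), reusing the cat example image from the widget metadata:
```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="nickmuchi/vit-finetuned-cats-dogs")

# The pipeline accepts local paths or URLs; this URL is the card's own widget example
url = "https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg"
print(classifier(url))
```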
## Example Images
#### cat

#### dog

|
coastalcph/fairlex-cail-minilm | coastalcph | 2022-03-01T13:12:22Z | 4 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "fill-mask", "legal", "fairlex", "zh", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language: zh
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。"
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 checkpoints (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minilm")
```
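A fill-mask sketch (an illustrative addition, not from the original card), reusing the widget example from the card metadata:
```python
from transformers import pipeline

# Fill-mask pipeline over the Chinese legal MiniLM checkpoint
unmasker = pipeline("fill-mask", model="coastalcph/fairlex-cail-minilm")

# The card's widget example: a Chinese criminal-judgment sentence with one masked span
text = "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。"
print(unmasker(text)[0])
```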
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 | ali2066 | 2022-03-01T13:03:25Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4118
- Accuracy: 0.8446
- F1: 0.8968
- Precision: 0.8740
- Recall: 0.9207
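A minimal inference sketch (an illustrative addition; the label names depend on the fine-tuning configuration and may be generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

# Sentence-level classification with the fine-tuned Twitter-RoBERTa checkpoint
classifier = pipeline(
    "text-classification",
    model="ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11",
)
print(classifier("This is a well-argued point."))
```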
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3532 | 0.8451 | 0.8990 | 0.8997 | 0.8983 |
| 0.4111 | 2.0 | 780 | 0.3381 | 0.8561 | 0.9080 | 0.8913 | 0.9253 |
| 0.3031 | 3.0 | 1170 | 0.3490 | 0.8537 | 0.9034 | 0.9152 | 0.8919 |
| 0.2408 | 4.0 | 1560 | 0.3562 | 0.8671 | 0.9148 | 0.9 | 0.9300 |
| 0.2408 | 5.0 | 1950 | 0.3725 | 0.8659 | 0.9131 | 0.9074 | 0.9189 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 | ali2066 | 2022-03-01T12:17:50Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 |
| 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 |
| 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
huggingtweets/berniesanders-dril
|
huggingtweets
| 2022-03-01T10:13:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Bernie Sanders.
| Data | wint | Bernie Sanders |
| --- | --- | --- |
| Tweets downloaded | 3229 | 3250 |
| Retweets | 473 | 429 |
| Short tweets | 300 | 10 |
| Tweets kept | 2456 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yw6378l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/berniesanders-cnn-dril
|
huggingtweets
| 2022-03-01T09:43:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/berniesanders-cnn-dril/1646127802129/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bernie Sanders & wint & CNN</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-cnn-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bernie Sanders & wint & CNN.
| Data | Bernie Sanders | wint | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3229 | 3250 |
| Retweets | 429 | 473 | 30 |
| Short tweets | 10 | 300 | 6 |
| Tweets kept | 2811 | 2456 | 3214 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yapgpjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-cnn-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-cnn-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
inovex/multi2convai-corona-it-bert
|
inovex
| 2022-03-01T09:20:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Devo indossare una maschera?"
license: mit
language: it
---
# Multi2ConvAI-Corona: finetuned Bert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-it-bert")
````
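A short follow-up sketch for running inference with the loaded tokenizer and model (assuming the checkpoint's config provides an `id2label` mapping):

````python
import torch

inputs = tokenizer("Devo indossare una maschera?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # predicted intent label
````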
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-corona-en-bert
|
inovex
| 2022-03-01T09:20:04Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Do I need to wear a mask?"
license: mit
language: en
---
# Multi2ConvAI-Corona: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-corona-de-bert
|
inovex
| 2022-03-01T09:18:20Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Muss ich eine Maske tragen?"
license: mit
language: de
---
# Multi2ConvAI-Corona: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
hfl/chinese-roberta-wwm-ext-large
|
hfl
| 2022-03-01T09:15:16Z | 5,610 | 196 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use 'Bert' related functions to load this model!
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
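A minimal loading sketch, following the note above to use the BERT classes (the fill-mask example sentence is illustrative only):

```python
from transformers import BertTokenizer, BertModel, pipeline

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

# Masked word prediction via the fill-mask pipeline:
fill_mask = pipeline("fill-mask", model="hfl/chinese-roberta-wwm-ext-large")
print(fill_mask("生活的真谛是[MASK]。")[0]["token_str"])
```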
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
|
huggingtweets/coffee__burger
|
huggingtweets
| 2022-03-01T09:06:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/coffee__burger/1646125569654/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger</div>
<div style="text-align: center; font-size: 14px;">@coffee__burger</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger.
| Data | Coffee Burger |
| --- | --- |
| Tweets downloaded | 2471 |
| Retweets | 525 |
| Short tweets | 337 |
| Tweets kept | 1609 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ad82qis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coffee__burger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coffee__burger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
inovex/multi2convai-quality-it-mbert
|
inovex
| 2022-03-01T09:02:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned MBert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-quality-de-bert
|
inovex
| 2022-03-01T09:00:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-logistics-tr-bert
|
inovex
| 2022-03-01T08:54:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "paketi nereye koyabilirim?"
license: mit
language: tr
---
# Multi2ConvAI-Logistics: finetuned Bert for Turkish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Turkish (tr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-tr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-tr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-logistics-pl-bert
|
inovex
| 2022-03-01T08:54:40Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"pl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "gdzie mogę umieścić paczkę?"
license: mit
language: pl
---
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
aasem/wav2vec2-xls-r-300m-Urdu
|
aasem
| 2022-03-01T08:28:25Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ur
license: mit
library_name: transformers
tags:
- audio
- automatic-speech-recognition
- speech
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-Urdu
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      name: common_voice
      type: common_voice
      args: ur
    metrics:
    - type: wer
      value: 0.2459
    - type: cer
      value: 0.0691
---
Fine-tuning of [Facebook's 300M model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 Urdu dataset.
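A minimal transcription sketch (the audio path is a placeholder; the pipeline expects 16 kHz audio and needs ffmpeg to decode local files):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aasem/wav2vec2-xls-r-300m-Urdu")
print(asr("sample_urdu.wav")["text"])  # "sample_urdu.wav" is a placeholder file
```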
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
|
ali2066
| 2022-03-01T04:37:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4208
- Accuracy: 0.8283
- F1: 0.8915
- Precision: 0.8487
- Recall: 0.9389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 |
| 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 |
| 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 |
| 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 |
| 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Sarahliu186/wav2vec2-base-timit-demo-colab
|
Sarahliu186
| 2022-03-01T04:01:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
|
ali2066
| 2022-03-01T03:51:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
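As a minimal tagging sketch (the label set of this fine-tune is not documented, so the entity types in the output are whatever the training data used):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27",
    aggregation_strategy="simple",  # merge word pieces into whole-word spans
)
print(tagger("The staff was friendly and the room was spotless."))
```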
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
armageddon/albert-squad-v2-covid-qa-deepset
|
armageddon
| 2022-03-01T02:04:26Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_albert_base_squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
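A minimal extractive QA sketch (question and context are illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="armageddon/albert-squad-v2-covid-qa-deepset")
result = qa(
    question="How is the virus transmitted?",
    context="The virus spreads mainly through respiratory droplets produced "
            "when an infected person coughs, sneezes, or talks.",
)
print(result["answer"], result["score"])
```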
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nateraw/cryptopunks-gan
|
nateraw
| 2022-03-01T01:59:49Z | 0 | 3 |
pytorch
|
[
"pytorch",
"tensorboard",
"dcgan",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
library_name: pytorch
tags:
- dcgan
---
# cryptopunks-gan
A DCGAN trained to generate novel Cryptopunks.
Check out the code by Teddy Koker [here](https://github.com/teddykoker/cryptopunks-gan).
## Generated Punks
Here are some punks generated by this model:

## Usage
You can try it out yourself, or you can play with the [demo](https://huggingface.co/spaces/nateraw/cryptopunks-generator).
To use it yourself - make sure you have `torch`, `torchvision`, and `huggingface_hub` installed. Then, run the following to generate a grid of 64 random punks:
```python
import torch
from huggingface_hub import hf_hub_download
from torch import nn
from torchvision.utils import save_image
class Generator(nn.Module):
def __init__(self, nc=4, nz=100, ngf=64):
super(Generator, self).__init__()
self.network = nn.Sequential(
nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
nn.Tanh(),
)
def forward(self, input):
output = self.network(input)
return output
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))
out = model(torch.randn(64, 100, 1, 1))
save_image(out, "punks.png", normalize=True)
```
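To sample a single punk instead of the 8x8 grid, draw one latent vector (the generator's latent dimension is `nz=100`, as defined above):

```python
out = model(torch.randn(1, 100, 1, 1))
save_image(out, "punk.png", normalize=True)
```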
|
armageddon/roberta-base-squad2-covid-qa-deepset
|
armageddon
| 2022-02-28T22:34:27Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_roberta-base-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-base-squad2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Msp/classifier
|
Msp
| 2022-02-28T22:02:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
Chikita1/www_stash_stock
|
Chikita1
| 2022-02-28T19:23:25Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: bsd-3-clause-clear
---
|
akhaliq/YOLOP
|
akhaliq
| 2022-02-28T16:56:50Z | 0 | 0 | null |
[
"object-detection",
"arxiv:2108.11250",
"arxiv:1612.07695",
"arxiv:1606.02147",
"region:us"
] |
object-detection
| 2022-03-02T23:29:05Z |
---
tags:
- object-detection
---
## You Only Look Once for Panoptic Driving Perception
> [**You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250)
>
> by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm)
>
> *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))*
---
### The Illustration of YOLOP

### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection, to save computational costs and reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K` dataset.
* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
| Model | Recall(%) | mAP50(%) | Speed(fps) |
| -------------- | --------- | -------- | ---------- |
| `Multinet` | 81.3 | 60.2 | 8.6 |
| `DLT-Net` | 89.4 | 68.4 | 9.3 |
| `Faster R-CNN` | 77.2 | 55.6 | 5.3 |
| `YOLOv5s` | 86.8 | 77.2 | 82 |
| `YOLOP(ours)` | 89.2 | 76.5 | 41 |
#### Drivable Area Segmentation Result
| Model | mIOU(%) | Speed(fps) |
| ------------- | ------- | ---------- |
| `Multinet` | 71.6 | 8.6 |
| `DLT-Net` | 71.3 | 9.3 |
| `PSPNet` | 89.6 | 11.1 |
| `YOLOP(ours)` | 91.5 | 41 |
#### Lane Detection Result:
| Model | mIOU(%) | IOU(%) |
| ------------- | ------- | ------ |
| `ENet` | 34.12 | 14.64 |
| `SCNN` | 35.79 | 15.84 |
| `ENet-SAD` | 36.56 | 16.02 |
| `YOLOP(ours)` | 70.50 | 26.20 |
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
| --------------- | --------- | ----- | ------- | ----------- | ------ |
| `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 |
| `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 |
| `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 |
| `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 |
| `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 |
#### Ablation Studies 2: Multi-task v.s. Single task:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
| --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- |
| `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 |
| `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 |
| `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 |
| `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 |
**Notes**:
- The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work.
- In table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train the Encoder and Detect head; then freeze them and train the two Segmentation heads; finally train the entire network jointly on all three tasks) can be marked as ED-S-W, and likewise for the others.
---
### Visualization
#### Traffic Object Detection Result

#### Drivable Area Segmentation Result

#### Lane Detection Result

**Notes**:
- The visualization of lane detection result has been post processed by quadratic fitting.
---
### Project Structure
```python
├─inference
│ ├─images # inference images
│ ├─output # inference result
├─lib
│ ├─config/default # configuration of training and validation
│ ├─core
│ │ ├─activations.py # activation function
│ │ ├─evaluate.py # calculation of metric
│ │ ├─function.py # training and validation of model
│ │ ├─general.py # calculation of metric, nms, conversion of data-format, visualization
│ │ ├─loss.py # loss function
│ │ ├─postprocess.py # postprocess(refine da-seg and ll-seg, unrelated to paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py # Superclass dataset, general function
│ │ ├─bdd.py # Subclass dataset, specific function
│ │ ├─hust.py # Subclass dataset(Campus scene, unrelated to paper)
│ │ ├─convect.py
│ │ ├─DemoDataset.py # demo dataset(image, video and stream)
│ ├─models
│ │ ├─YOLOP.py # Setup and Configuration of model
│ │ ├─light.py # Model lightweight(unrelated to paper, zwt)
│ │ ├─commom.py # calculation module
│ ├─utils
│ │ ├─augmentations.py # data augmentation
│ │ ├─autoanchor.py # auto anchor (k-means)
│ │ ├─split_dataset.py # (Campus scene, unrelated to paper)
│ │ ├─utils.py # logging, device selection, time measurement, optimizer selection, model save & initialize, distributed training
│ ├─run
│ │ ├─dataset/training time # Visualization, logging and model_save
├─tools
│ │ ├─demo.py # demo(folder、camera)
│ │ ├─test.py
│ │ ├─train.py
├─toolkits
│ │ ├─depoly # Deployment of model
├─weights # Pretraining model
```
---
### Requirement
This codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:
```
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```
See `requirements.txt` for additional dependencies and version requirements.
```setup
pip install -r requirements.txt
```
### Data preparation
#### Download
- Download the images from [images](https://bdd-data.berkeley.edu/).
- Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing).
- Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing).
- Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing).
We recommend the dataset directory structure to be the following:
```
# Matching ids indicate the correspondence between images and annotations
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
```
Update your dataset path in `./lib/config/default.py`.
### Training
You can set the training configuration in `./lib/config/default.py` (including the loading of the pretrained model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch_size).
If you want to try alternating optimization or to train the model on a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (In the following, all configurations are `False`, which means the multiple tasks are trained end to end.)
```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False # Only train two segmentation branchs
_C.TRAIN.DET_ONLY = False # Only train detection branch
_C.TRAIN.ENC_SEG_ONLY = False # Only train encoder and two segmentation branchs
_C.TRAIN.ENC_DET_ONLY = False # Only train encoder and detection branch
# Single task
_C.TRAIN.DRIVABLE_ONLY = False # Only train da_segmentation task
_C.TRAIN.LANE_ONLY = False # Only train ll_segmentation task
_C.TRAIN.DET_ONLY = False # Only train detection task
```
Start training:
```shell
python tools/train.py
```
### Evaluation
You can set the evaluation configuration in `./lib/config/default.py` (including batch_size and the threshold value for NMS).
Start evaluating:
```shell
python tools/test.py --weights weights/End-to-end.pth
```
### Demo Test
We provide two testing methods.
#### Folder
Store the images or videos in the folder given by `--source`; the inference results will be saved to `--save-dir`.
```shell
python tools/demo.py --source inference/images
```
#### Camera
If a camera is connected to your computer, you can set `--source` to the camera number (the default is 0).
```shell
python tools/demo.py --source 0
```
### Deployment
Our model runs inference in real time on a `Jetson TX2`, using a `Zed Camera` to capture images. We use `TensorRT` for acceleration. Code for model deployment and inference is provided in `./toolkits/deploy`.
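For quick experimentation, here is a sketch of loading the pretrained multi-task model through `torch.hub` (this assumes the repository exposes a hub entry point named `yolop`; verify against the repository before relying on it):

```python
import torch

# Assumption: the hustvl/yolop repository provides a hubconf entry named "yolop".
model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
img = torch.randn(1, 3, 640, 640)  # dummy input at the 640x640 training resolution
det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable-area and lane-line outputs
```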
## Citation
If you find our paper and code useful for your research, please consider giving a star and citation:
```BibTeX
@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
```
|
Visual-Attention-Network/VAN-Large-original
|
Visual-Attention-Network
| 2022-02-28T16:35:18Z | 0 | 0 | null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# VAN-Large
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
Visual-Attention-Network/VAN-Small-original
|
Visual-Attention-Network
| 2022-02-28T16:33:16Z | 0 | 0 | null |
[
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# VAN-Small
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
frahman/distilbert-base-uncased-distilled-clinc
|
frahman
| 2022-02-28T15:54:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9406451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
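As a minimal sketch of intent detection with the distilled checkpoint (clinc_oos consists of short task-oriented queries; the example query is illustrative):

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="frahman/distilbert-base-uncased-distilled-clinc",
)
print(intent_classifier("Transfer $100 from my checking to my savings account."))
```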
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 |
| 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 |
| 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 |
| 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 |
| 0.1245        | 6.0   | 1908 | 0.1110          | 0.9400   |
| 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 |
| 0.1074        | 8.0   | 2544 | 0.1026          | 0.9400   |
| 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 |
| 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
EngNada/wav2vec2-large-xlsr-53-demo-colab
|
EngNada
| 2022-02-28T15:47:56Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9807
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 22.8021 | 1.78 | 80 | 7.9807 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
EMBEDDIA/litlat-bert
|
EMBEDDIA
| 2022-02-28T13:46:36Z | 62 | 5 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"lt",
"lv",
"en",
"multilingual",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- lt
- lv
- en
- multilingual
license: cc-by-sa-4.0
---
# LitLat BERT
LitLat BERT is a trilingual model, using the xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. By focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer that a monolingual model would not.
### Named entity recognition evaluation
We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as the macro F1 score over the 3 named entity classes shared by all three datasets: person, location, organization.
Language | mBERT | XLM-R | LVBERT | LitLat
---|---|---|---|---
Latvian | 0.830 | 0.865 | 0.797 | **0.881**
Lithuanian | 0.797 | 0.817 | / | **0.850**
English | 0.939 | 0.937 | / | **0.943**
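A minimal fill-mask sketch (XLM-RoBERTa-style models use `<mask>` as the mask token; the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")
print(fill_mask("Riga is the capital of <mask>.")[0]["token_str"])
```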
|
inovex/multi2convai-quality-fr-logreg-ft
|
inovex
| 2022-02-28T13:43:14Z | 0 | 0 | null |
[
"text-classification",
"fr",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: fr
---
# Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: French (fr)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and locally available fastText embeddings you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-fr-logreg-ft
>>> Create pipeline for config: multi2convai-quality-fr-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'fr'.
>>>
>>> Enter your text (type 'stop' to end execution): Lancer le programme
>>> 'Lancer le programme' was classified as 'neo.start' (confidence: 0.8943)
````
### How to run the model using multi2convai
After installing `multi2convai` and locally available fastText embeddings you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "fr"
domain = "quality"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/fr/wiki.200k.fr.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/fr/wiki.200k.fr.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Lancer le programme")
label
>>> Label(string='neo.start', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/fr
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fr.vec --output models/fasttext/fr/wiki.fr.vec
python scripts/serialize_fasttext.py -r fasttext/wiki.fr.vec -v fasttext/fr/wiki.200k.fr.vocab -e fasttext/fr/wiki.200k.fr.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-quality-en-logreg-ft
|
inovex
| 2022-02-28T13:42:54Z | 0 | 0 | null |
[
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: en
---
# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: English (en)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-en-logreg-ft
>>> Create pipeline for config: multi2convai-quality-en-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'en'.
>>>
>>> Enter your text (type 'stop' to end execution): Start the program
>>> 'Start the program' was classified as 'neo.start' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "en"
domain = "quality"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Start the program")
label
>>> Label(string='neo.start', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir -p models/fasttext/en
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec
python scripts/serialize_fasttext.py -r fasttext/en/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-quality-it-logreg-ft
|
inovex
| 2022-02-28T13:42:18Z | 0 | 0 | null |
[
"text-classification",
"it",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: it
---
# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: Italian (it)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-it-logreg-ft
>>> Create pipeline for config: multi2convai-quality-it-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'it'.
>>>
>>> Enter your text (type 'stop' to end execution): Avviare il programma
>>> 'Avviare il programma' was classified as 'neo.start' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "it"
domain = "quality"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/it/wiki.200k.it.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/it/wiki.200k.it.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Avviare il programma")
label
>>> Label(string='neo.start', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir -p models/fasttext/it
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.it.vec --output models/fasttext/it/wiki.it.vec
python scripts/serialize_fasttext.py -r fasttext/it/wiki.it.vec -v fasttext/it/wiki.200k.it.vocab -e fasttext/it/wiki.200k.it.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
alina1997/marian_en_de_test
|
alina1997
| 2022-02-28T13:31:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"en",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
- de
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [opus-mt-en-de](https://huggingface.co/opus-mt-en-de) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4519
- Bleu: 27.6198
- Gen Len: 106.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 3 | 1.4519 | 27.6198 | 106.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.0
- Datasets 1.18.3
- Tokenizers 0.10.3
|
inovex/multi2convai-corona-en-logreg-ft
|
inovex
| 2022-02-28T12:33:30Z | 0 | 0 | null |
[
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: en
---
# Multi2ConvAI-Corona: English logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: English (en)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-corona-en-logreg-ft
>>> Create pipeline for config: multi2convai-corona-en-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'corona' and language 'en'.
>>>
>>> Enter your text (type 'stop' to end execution): Do I need to wear a mask?
>>> 'Do I need to wear a mask?' was classified as 'corona.masks' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "en"
domain = "corona"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Do I need to wear a mask?")
label
>>> Label(string='corona.masks', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir -p models/fasttext/en
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec
python scripts/serialize_fasttext.py -r fasttext/en/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
inovex/multi2convai-logistics-de-logreg-ft
|
inovex
| 2022-02-28T12:31:23Z | 0 | 0 | null |
[
"text-classification",
"de",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: de
---
# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-logistics-de-logreg-ft
>>> Create pipeline for config: multi2convai-logistics-de-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'de'.
>>>
>>> Enter your text (type 'stop' to end execution): Wo kann ich das Paket ablegen?
>>> 'Wo kann ich das Paket ablegen?' was classified as 'details.safeplace' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and making the fastText embeddings available locally, you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "de"
domain = "logistics"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/de/wiki.200k.de.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/de/wiki.200k.de.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Wo kann ich das Paket ablegen?")
label
>>> Label(string='details.safeplace', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir -p models/fasttext/de
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec
python scripts/serialize_fasttext.py -r fasttext/de/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
|
cnicu/pegasus-large-booksum
|
cnicu
| 2022-02-28T12:12:37Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"dataset:kmfoda/booksum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- summarization
datasets:
- kmfoda/booksum
---
|
peterhsu/marian-finetuned-kde4-en-to-zh_TW
|
peterhsu
| 2022-02-28T11:26:43Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh_TW
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-zh_TW
metrics:
- name: Bleu
type: bleu
value: 39.086345838465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_TW
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0047
- Bleu: 39.0863
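As a usage sketch not present in the original card (the model id is assumed from the repository name above, and the input sentence is a made-up example), the checkpoint should load with the standard `transformers` translation pipeline:
```python
from transformers import pipeline

# English-to-Traditional-Chinese translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="peterhsu/marian-finetuned-kde4-en-to-zh_TW")
print(translator("Open the file menu and select Save As.")[0]["translation_text"])
```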
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
NbAiLab/roberta_jan_128_scandinavian
|
NbAiLab
| 2022-02-28T11:01:33Z | 50 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: cc-by-sa-4.0
---
|
Theivaprakasham/layoutlmv2-finetuned-sroie_mod
|
Theivaprakasham
| 2022-02-28T09:50:47Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-sroie_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-sroie_mod
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FardinSaboori/bert-finetuned-squad
|
FardinSaboori
| 2022-02-28T06:22:27Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
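A minimal extractive-QA sketch, not part of the original card (the model id is assumed from the repository name, and the question/context pair is a made-up example):
```python
from transformers import pipeline

# SQuAD-style extractive question answering: the answer is a span of the context.
qa = pipeline("question-answering", model="FardinSaboori/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```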
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mipatov/rugpt3_nb_descr
|
mipatov
| 2022-02-27T23:44:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
Based on `sberbank-ai/rugpt3medium_based_on_gpt2`, fine-tuned to generate text descriptions for notebook devices.
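As an illustrative sketch not from the original card (the prompt below is a made-up example), the checkpoint can presumably be loaded like any GPT-2-style causal LM:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the checkpoint follows the GPT-2 layout of its base model.
tokenizer = AutoTokenizer.from_pretrained("mipatov/rugpt3_nb_descr")
model = AutoModelForCausalLM.from_pretrained("mipatov/rugpt3_nb_descr")

# Hypothetical prompt: a notebook model name whose description we want to generate.
inputs = tokenizer("Ноутбук ASUS VivoBook 15:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```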
|
Ayham/distilbert_roberta_summarization_cnn_dailymail
|
Ayham
| 2022-02-27T23:40:20Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: distilbert_roberta_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
MatsUy/wav2vec2-common_voice-nl-demo
|
MatsUy
| 2022-02-27T22:07:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-nl-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-nl-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3523
- Wer: 0.2046
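A hedged transcription sketch, not part of the original card (`sample.wav` is a placeholder for a local Dutch recording; audio decoding requires ffmpeg):
```python
from transformers import pipeline

# CTC-based Dutch speech recognition with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="MatsUy/wav2vec2-common_voice-nl-demo")
print(asr("sample.wav")["text"])
```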
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 |
| 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 |
| 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 |
| 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 |
| 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 |
| 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 |
| 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 |
| 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 |
| 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 |
| 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 |
| 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 |
| 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 |
| 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
|
ali2066
| 2022-02-27T21:41:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6071
- Accuracy: 0.8337
- F1: 0.8922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3920 | 0.7988 | 0.8624 |
| No log | 2.0 | 390 | 0.3873 | 0.8171 | 0.8739 |
| 0.3673 | 3.0 | 585 | 0.4354 | 0.8256 | 0.8835 |
| 0.3673 | 4.0 | 780 | 0.5358 | 0.8293 | 0.8887 |
| 0.3673 | 5.0 | 975 | 0.5616 | 0.8366 | 0.8923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
lighteternal/fact-or-opinion-xlmr-el
|
lighteternal
| 2022-02-27T19:41:57Z | 949 | 21 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"fact-or-opinion",
"en",
"el",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
- el
- multilingual
tags:
- text-classification
- fact-or-opinion
- transformers
widget:
- text: "Ξεχωρίζει η καθηλωτική ερμηνεία του πρωταγωνιστή."
- text: "Η Ελλάδα είναι χώρα της Ευρώπης."
- text: "Tolkien was an English writer"
- text: "Tolkien is my favorite writer."
pipeline_tag: text-classification
license: apache-2.0
---
# Fact vs. opinion binary classifier, trained on a mixed EN-EL annotated corpus.
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is an XLM-RoBERTa-base model with a binary classification head. Given a sentence, it classifies it as either a fact or an opinion based on its content.
You can use this model for the same task in any of the languages XLM-R supports, taking advantage of its zero-shot learning capabilities; note, however, that the model was trained only on English and Greek sentences.
Legend of HuggingFace API labels:
* Label 0: Opinion/Subjective sentence
* Label 1: Fact/Objective sentence
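A minimal classification sketch (not part of the original card) using two of the widget sentences above; the exact label strings returned by the pipeline may differ from the legend:
```python
from transformers import pipeline

# Binary fact-vs-opinion classification; label mapping as in the legend above.
clf = pipeline("text-classification", model="lighteternal/fact-or-opinion-xlmr-el")
print(clf("Tolkien was an English writer"))   # expected: fact (Label 1)
print(clf("Tolkien is my favorite writer."))  # expected: opinion (Label 0)
```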
## Dataset training info
The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained approx. 9,000 annotated sentences (classified as subjective or objective). It was translated into Greek using Google Translate, and the Greek version was then concatenated with the original English one to create the mixed EN-EL dataset.
The model was trained for 5 epochs with a batch size of 8. Detailed metrics and hyperparameters are available on the "Metrics" tab.
## Evaluation Results on test set
| accuracy | precision | recall | f1 |
| ----------- | ----------- | ----------- | ----------- |
|0.952 | 0.945 | 0.960 | 0.952 |
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
|
ali2066/finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
|
ali2066
| 2022-02-27T18:38:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3767
- Accuracy: 0.8638
- F1: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4489 | 0.8309 | 0.8969 |
| No log | 2.0 | 162 | 0.4429 | 0.8272 | 0.8915 |
| No log | 3.0 | 243 | 0.5154 | 0.8529 | 0.9083 |
| No log | 4.0 | 324 | 0.5552 | 0.8309 | 0.8925 |
| No log | 5.0 | 405 | 0.5896 | 0.8309 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
|
ali2066
| 2022-02-27T18:35:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 |
| No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 |
| No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 |
| No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 |
| No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
|
ali2066
| 2022-02-27T18:16:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
|
ali2066
| 2022-02-27T18:11:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
bullmount/hseBert-it-cased
|
bullmount
| 2022-02-27T18:08:11Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: it
license: mit
widget:
- text: "È stata pubblicata la [MASK] di conversione del D.L. 24 dicembre 2021 n. 221 ."
- text: "La legge fornisce l’esatta [MASK] di Green pass base."
- text: "Il datore di lavoro organizza e predispone i posti di lavoro di cui all'articolo 173, in [MASK] ai requisiti minimi di cui all'allegato XXXIV."
- text: "Le principali novità riguardano la quarantena precauzionale e il [MASK] di autosorveglianza."
---
# hseBERT
**hseBert-it-cased** is a BERT model obtained by MLM adaptive-tuning of [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on texts of Italian regulations (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81; Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences in total.
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "bullmount/hseBert-it-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
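For the masked-language widget examples above, a fill-mask sketch (assuming the repository ships the MLM head used during adaptive-tuning) could look like:
```python
from transformers import pipeline

# Top predictions for the masked token in one of the widget sentences.
fill = pipeline("fill-mask", model="bullmount/hseBert-it-cased")
for pred in fill("La legge fornisce l'esatta [MASK] di Green pass base."):
    print(pred["token_str"], round(pred["score"], 3))
```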
|
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
|
ali2066
| 2022-02-27T17:59:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
|
ali2066
| 2022-02-27T17:51:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab
|
nimrah
| 2022-02-27T17:42:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
|
ali2066
| 2022-02-27T17:40:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
|
ali2066
| 2022-02-27T17:29:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
|
ali2066
| 2022-02-27T17:23:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
|
ali2066
| 2022-02-27T17:06:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
|
ali2066
| 2022-02-27T17:01:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
|
ali2066
| 2022-02-27T16:55:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
|
ali2066
| 2022-02-27T16:50:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
|
ali2066
| 2022-02-27T16:44:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
|
ali2066
| 2022-02-27T16:38:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Daryaflp/roberta-retrained_ru_covid
|
Daryaflp
| 2022-02-27T16:18:22Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained_ru_covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_ru_covid
This model is a fine-tuned version of [blinoff/roberta-base-russian-v0](https://huggingface.co/blinoff/roberta-base-russian-v0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
emilyalsentzer/Bio_Discharge_Summary_BERT
|
emilyalsentzer
| 2022-02-27T13:59:50Z | 5,949 | 34 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"en",
"arxiv:1904.03323",
"arxiv:1901.08746",
"license:mit",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- fill-mask
license: mit
---
# ClinicalBERT - Bio + Discharge Summary BERT Model
The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Discharge Summary BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on only discharge summaries from MIMIC.
## Pretraining Data
The `Bio_Discharge_Summary_BERT` model was trained on all discharge summaries from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). Discharge summary notes were drawn from the `NOTEEVENTS` table (which contains ~880M words across all note types).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (the `en_core_sci_md` tokenizer); a short sketch of this step follows.
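As a rough illustration of that sentence-splitting step (assuming the `en_core_sci_md` scispaCy model is installed; the example note text is invented):
```
import spacy

# Sentence-split a clinical note section with scispaCy's en_core_sci_md model
nlp = spacy.load("en_core_sci_md")
doc = nlp("Patient admitted with chest pain. Started on aspirin 81 mg daily.")
sentences = [sent.text for sent in doc.sents]
```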
### Pretraining Procedures
The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5e-5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
```
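As a minimal, illustrative follow-up (the example sentence is invented), contextual embeddings can then be extracted with a standard forward pass:
```
import torch

# Encode a sentence and pull out the final-layer token embeddings
inputs = tokenizer("The patient was discharged home on aspirin.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # shape: (batch, tokens, 768)
```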
## More Information
Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email [email protected] with any questions.
|
nadaAlnada/wav2vec2-base-timit-demo-colab
|
nadaAlnada
| 2022-02-27T13:55:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [anas/wav2vec2-large-xlsr-arabic](https://huggingface.co/anas/wav2vec2-large-xlsr-arabic) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
facebook/wav2vec2-base-mt-voxpopuli-v2
|
facebook
| 2022-02-27T13:15:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"mt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mt
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **mt** speech, using **9.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **mt**; a rough sketch follows. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
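A rough sketch of that starting point, following the linked blog rather than anything stated in this card (the vocabulary size is a placeholder that must come from a tokenizer built on your own **mt** transcripts):
```
from transformers import Wav2Vec2ForCTC

# Hypothetical fine-tuning setup; vocab_size must match your tokenizer
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-mt-voxpopuli-v2",
    vocab_size=40,                  # placeholder
    ctc_loss_reduction="mean",
)
model.freeze_feature_extractor()    # keep the convolutional feature extractor frozen
```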
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-el-voxpopuli-v2
|
facebook
| 2022-02-27T13:15:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"el",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: el
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **el** speech, using **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-lt-voxpopuli-v2
|
facebook
| 2022-02-27T13:15:36Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"lt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: lt
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **lt** speech, using **14.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-fi-voxpopuli-v2
|
facebook
| 2022-02-27T13:15:08Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"fi",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **fi** speech, using **14.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fi**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-pl-voxpopuli-v2
|
facebook
| 2022-02-27T13:14:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"pl",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pl
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **pl** speech, using **21.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **pl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-bg-voxpopuli-v2
|
facebook
| 2022-02-27T13:13:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"bg",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: bg
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **bg** speech, using **17.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **bg**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-da-voxpopuli-v2
|
facebook
| 2022-02-27T13:13:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"da",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: da
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **da** speech, using **13.6k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **da**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-de-voxpopuli-v2
|
facebook
| 2022-02-27T13:13:15Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"de",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: de
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **de** speech, using **23.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **de**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-es-voxpopuli-v2
|
facebook
| 2022-02-27T13:11:53Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"es",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: es
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **es** speech, using **21.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **es**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-mt-voxpopuli-v2
|
facebook
| 2022-02-27T12:51:06Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"mt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: mt
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **mt** speech, using **9.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **mt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-el-voxpopuli-v2
|
facebook
| 2022-02-27T12:48:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"el",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: el
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **el** speech, using **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-slavic-voxpopuli-v2
|
facebook
| 2022-02-27T12:40:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: slavic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **slavic** speech, using **89.0k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **slavic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-north_germanic-voxpopuli-v2
|
facebook
| 2022-02-27T12:37:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: north_germanic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **north_germanic** speech, using **29.9k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **north_germanic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-romance-voxpopuli-v2
|
facebook
| 2022-02-27T12:32:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"dataset:voxpopuli",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: romance
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **romance** speech, using **101.5k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **romance**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|