| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
ChenDRAG/zephyr-NCA-preference
|
ChenDRAG
| 2024-02-08T09:13:03Z
| 5
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-08T09:12:01Z
|
# zephyr-NCA-preference
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3030
- Rewards/chosen: 0.0489
- Rewards/rejected: -0.5399
- Rewards/accuracies: 0.7820
- Rewards/margins: 0.5888
- Verify/constant 1: 1.0
- Verify/constant 1len: 1000.0
- Logps/rejected: -287.1594
- Logps/chosen: -270.2584
- Verify/bz: 1.0
- Verify/gather Bz: 2.0
- Regularization/forward Kl: 0.6109
- Regularization/reverse Kl: 0.4631
- Regularization/policy Data Loss: 1.8007
- Regularization/reference Data Loss: 1.3337
- Regularization/policy Ref Data Loss Gap: 0.4670
- Mask/mask Ratio: 0.4809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
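As a rough illustration only (the card does not include the actual training script), the hyperparameters above map onto a Transformers `TrainingArguments` configuration along these lines; `output_dir` is a placeholder and nothing beyond the listed values is taken from the card:
```python
# Illustrative sketch: expressing the reported hyperparameters with
# transformers.TrainingArguments. Placeholder values are marked as such.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-NCA-preference",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,  # 2 GPUs x 16 accumulation steps -> total batch 32
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```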
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Verify/constant 1 | Verify/constant 1len | Logps/rejected | Logps/chosen | Verify/bz | Verify/gather Bz | Regularization/forward Kl | Regularization/reverse Kl | Regularization/policy Data Loss | Regularization/reference Data Loss | Regularization/policy Ref Data Loss Gap | Mask/mask Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-----------------:|:--------------------:|:--------------:|:------------:|:---------:|:----------------:|:-------------------------:|:-------------------------:|:-------------------------------:|:----------------------------------:|:---------------------------------------:|:---------------:|
| 1.3844 | 0.05 | 100 | 1.3839 | 0.0037 | -0.0061 | 0.7075 | 0.0098 | 1.0 | 1000.0 | -233.7844 | -274.7838 | 1.0 | 2.0 | 0.0009 | 0.0009 | 1.3404 | 1.3337 | 0.0067 | 0.4809 |
| 1.3593 | 0.1 | 200 | 1.3605 | -0.0445 | -0.1811 | 0.7320 | 0.1366 | 1.0 | 1000.0 | -251.2808 | -279.5988 | 1.0 | 2.0 | 0.1063 | 0.0867 | 1.4942 | 1.3337 | 0.1604 | 0.4809 |
| 1.3432 | 0.15 | 300 | 1.3399 | -0.0181 | -0.2809 | 0.7695 | 0.2628 | 1.0 | 1000.0 | -261.2633 | -276.9577 | 1.0 | 2.0 | 0.2787 | 0.2104 | 1.5199 | 1.3337 | 0.1862 | 0.4809 |
| 1.3404 | 0.21 | 400 | 1.3251 | 0.0042 | -0.3854 | 0.7720 | 0.3896 | 1.0 | 1000.0 | -271.7116 | -274.7323 | 1.0 | 2.0 | 0.5454 | 0.4274 | 1.5819 | 1.3337 | 0.2481 | 0.4809 |
| 1.3295 | 0.26 | 500 | 1.3173 | 0.0213 | -0.4300 | 0.7770 | 0.4513 | 1.0 | 1000.0 | -276.1767 | -273.0250 | 1.0 | 2.0 | 0.5684 | 0.4290 | 1.6808 | 1.3337 | 0.3471 | 0.4809 |
| 1.3187 | 0.31 | 600 | 1.3122 | 0.0267 | -0.4649 | 0.7790 | 0.4917 | 1.0 | 1000.0 | -279.6683 | -272.4786 | 1.0 | 2.0 | 0.5839 | 0.4556 | 1.7090 | 1.3337 | 0.3753 | 0.4809 |
| 1.3105 | 0.36 | 700 | 1.3106 | 0.0180 | -0.5079 | 0.7685 | 0.5259 | 1.0 | 1000.0 | -283.9655 | -273.3516 | 1.0 | 2.0 | 0.5818 | 0.4701 | 1.8137 | 1.3337 | 0.4800 | 0.4809 |
| 1.3086 | 0.41 | 800 | 1.3094 | 0.0287 | -0.5003 | 0.7820 | 0.5290 | 1.0 | 1000.0 | -283.2076 | -272.2820 | 1.0 | 2.0 | 0.5724 | 0.4410 | 1.7950 | 1.3337 | 0.4613 | 0.4809 |
| 1.3164 | 0.46 | 900 | 1.3071 | 0.0494 | -0.4863 | 0.7865 | 0.5356 | 1.0 | 1000.0 | -281.7993 | -270.2156 | 1.0 | 2.0 | 0.5937 | 0.4471 | 1.6937 | 1.3337 | 0.3599 | 0.4809 |
| 1.3065 | 0.52 | 1000 | 1.3058 | 0.0442 | -0.5122 | 0.7875 | 0.5564 | 1.0 | 1000.0 | -284.3954 | -270.7371 | 1.0 | 2.0 | 0.6214 | 0.4609 | 1.7262 | 1.3337 | 0.3925 | 0.4809 |
| 1.3274 | 0.57 | 1100 | 1.3097 | 0.0187 | -0.5605 | 0.7765 | 0.5792 | 1.0 | 1000.0 | -289.2202 | -273.2801 | 1.0 | 2.0 | 0.6048 | 0.4467 | 1.9267 | 1.3337 | 0.5930 | 0.4809 |
| 1.3128 | 0.62 | 1200 | 1.3053 | 0.0391 | -0.5393 | 0.7795 | 0.5784 | 1.0 | 1000.0 | -287.1077 | -271.2448 | 1.0 | 2.0 | 0.5974 | 0.4596 | 1.8496 | 1.3337 | 0.5159 | 0.4809 |
| 1.3018 | 0.67 | 1300 | 1.3043 | 0.0370 | -0.5532 | 0.7765 | 0.5902 | 1.0 | 1000.0 | -288.4903 | -271.4501 | 1.0 | 2.0 | 0.6164 | 0.4737 | 1.8233 | 1.3337 | 0.4896 | 0.4809 |
| 1.3137 | 0.72 | 1400 | 1.3040 | 0.0532 | -0.5183 | 0.7790 | 0.5715 | 1.0 | 1000.0 | -285.0031 | -269.8345 | 1.0 | 2.0 | 0.5985 | 0.4642 | 1.7409 | 1.3337 | 0.4072 | 0.4809 |
| 1.304 | 0.77 | 1500 | 1.3034 | 0.0489 | -0.5344 | 0.7815 | 0.5833 | 1.0 | 1000.0 | -286.6187 | -270.2639 | 1.0 | 2.0 | 0.6056 | 0.4668 | 1.7960 | 1.3337 | 0.4623 | 0.4809 |
| 1.3194 | 0.83 | 1600 | 1.3033 | 0.0496 | -0.5367 | 0.7770 | 0.5864 | 1.0 | 1000.0 | -286.8489 | -270.1884 | 1.0 | 2.0 | 0.6093 | 0.4660 | 1.7863 | 1.3337 | 0.4526 | 0.4809 |
| 1.3194 | 0.88 | 1700 | 1.3030 | 0.0498 | -0.5367 | 0.7820 | 0.5865 | 1.0 | 1000.0 | -286.8430 | -270.1689 | 1.0 | 2.0 | 0.6106 | 0.4640 | 1.7905 | 1.3337 | 0.4568 | 0.4809 |
| 1.32 | 0.93 | 1800 | 1.3031 | 0.0475 | -0.5425 | 0.7815 | 0.5901 | 1.0 | 1000.0 | -287.4280 | -270.3985 | 1.0 | 2.0 | 0.6118 | 0.4635 | 1.8042 | 1.3337 | 0.4705 | 0.4809 |
| 1.3119 | 0.98 | 1900 | 1.3030 | 0.0490 | -0.5398 | 0.7810 | 0.5888 | 1.0 | 1000.0 | -287.1560 | -270.2523 | 1.0 | 2.0 | 0.6107 | 0.4630 | 1.8007 | 1.3337 | 0.4670 | 0.4809 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dbmdz/bert-base-historic-english-cased
|
dbmdz
| 2024-02-08T09:11:34Z
| 37
| 2
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: en
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
---
🚨 Notice: After re-checking this model, it seems that it is not working very well. For example, MLM predictions are very likely to return the `[UNK]` token, which is not useful.
We will update this model soon. For now, please use [`bigscience-historical-texts/bert-base-blbooks-cased`](https://huggingface.co/bigscience-historical-texts/bert-base-blbooks-cased) instead, as it was pretrained on the same corpus.
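For anyone who wants to verify the issue before switching, here is a minimal sketch using the standard `fill-mask` pipeline; the prompt is the widget example from this card's metadata, and everything else is an illustrative assumption rather than part of the original card:
```python
# Minimal sketch: check whether top MLM predictions collapse to [UNK].
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-english-cased")

# Widget prompt from the card's metadata.
for pred in fill_mask("and I cannot conceive the reafon why [MASK] hath"):
    print(f"{pred['token_str']!r}  score={pred['score']:.4f}")
# If '[UNK]' dominates the top predictions, the reported issue is confirmed.
```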
|
ChenDRAG/zephyr-infoNCA-reward
|
ChenDRAG
| 2024-02-08T09:11:20Z
| 6
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-08T09:10:14Z
|
# zephyr-infoNCA-reward
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8810
- Loss/mini Gap Loss: 0.8810
- Loss/ori Loss: 1.1137
- Loss/reward Entrophy: 0.2326
- Regularization/forward Kl: 1.5849
- Regularization/reverse Kl: 0.9146
- Regularization/policy Data Loss: 3.2706
- Regularization/reference Data Loss: 1.2660
- Regularization/policy Ref Data Loss Gap: 2.0046
- Mask/mask Ratio: 0.4577
- Reward/reward A0: -0.9007
- Reward/reward A1: -1.2463
- Reward/reward A2: -1.5959
- Reward/reward A3: -2.0882
- Rewards/chosen: -0.9007
- Rewards/rejected: -1.6434
- Rewards/margins: 0.7428
- Reward/a01 Acc: 0.6366
- Reward/a02 Acc: 0.7334
- Reward/a03 Acc: 0.8302
- Rewards/accuracies: 0.7334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Loss/mini Gap Loss | Loss/ori Loss | Loss/reward Entrophy | Regularization/forward Kl | Regularization/reverse Kl | Regularization/policy Data Loss | Regularization/reference Data Loss | Regularization/policy Ref Data Loss Gap | Mask/mask Ratio | Reward/reward A0 | Reward/reward A1 | Reward/reward A2 | Reward/reward A3 | Rewards/chosen | Rewards/rejected | Rewards/margins | Reward/a01 Acc | Reward/a02 Acc | Reward/a03 Acc | Rewards/accuracies |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:-------------:|:--------------------:|:-------------------------:|:-------------------------:|:-------------------------------:|:----------------------------------:|:---------------------------------------:|:---------------:|:----------------:|:----------------:|:----------------:|:----------------:|:--------------:|:----------------:|:---------------:|:--------------:|:--------------:|:--------------:|:------------------:|
| 1.1592 | 0.05 | 100 | 1.1483 | 1.1484 | 1.3811 | 0.2326 | 0.0008 | 0.0008 | 1.2693 | 1.2660 | 0.0033 | 0.4577 | 0.0031 | -0.0005 | -0.0032 | -0.0066 | 0.0031 | -0.0034 | 0.0065 | 0.5864 | 0.6667 | 0.7205 | 0.6579 |
| 1.0838 | 0.11 | 200 | 1.0772 | 1.0773 | 1.3100 | 0.2326 | 0.1510 | 0.1265 | 1.4842 | 1.2660 | 0.2182 | 0.4577 | -0.1490 | -0.2198 | -0.2639 | -0.3185 | -0.1490 | -0.2674 | 0.1184 | 0.6040 | 0.6698 | 0.7081 | 0.6606 |
| 1.0427 | 0.16 | 300 | 1.0091 | 1.0092 | 1.2419 | 0.2326 | 0.5873 | 0.4077 | 1.8854 | 1.2660 | 0.6194 | 0.4577 | -0.4752 | -0.6617 | -0.7889 | -0.9494 | -0.4752 | -0.8000 | 0.3248 | 0.6196 | 0.6744 | 0.7360 | 0.6767 |
| 0.9666 | 0.21 | 400 | 0.9712 | 0.9713 | 1.2039 | 0.2326 | 0.7687 | 0.4464 | 2.2361 | 1.2660 | 0.9701 | 0.4577 | -0.5326 | -0.7468 | -0.9238 | -1.1650 | -0.5326 | -0.9452 | 0.4126 | 0.6289 | 0.7013 | 0.7676 | 0.6993 |
| 0.984 | 0.27 | 500 | 0.9523 | 0.9524 | 1.1850 | 0.2326 | 0.8699 | 0.4759 | 2.4013 | 1.2660 | 1.1353 | 0.4577 | -0.5793 | -0.8081 | -1.0134 | -1.2919 | -0.5793 | -1.0378 | 0.4585 | 0.6242 | 0.7034 | 0.7831 | 0.7036 |
| 1.0017 | 0.32 | 600 | 0.9367 | 0.9368 | 1.1694 | 0.2326 | 1.0544 | 0.6109 | 2.6903 | 1.2660 | 1.4243 | 0.4577 | -0.7541 | -1.0241 | -1.2660 | -1.5769 | -0.7541 | -1.2890 | 0.5350 | 0.6413 | 0.7091 | 0.7836 | 0.7113 |
| 0.9615 | 0.37 | 700 | 0.9338 | 0.9338 | 1.1665 | 0.2326 | 1.2767 | 0.7017 | 3.0578 | 1.2660 | 1.7918 | 0.4577 | -0.9159 | -1.2048 | -1.4643 | -1.7939 | -0.9159 | -1.4877 | 0.5717 | 0.6289 | 0.7133 | 0.7867 | 0.7096 |
| 0.9292 | 0.42 | 800 | 0.9237 | 0.9237 | 1.1564 | 0.2326 | 1.3185 | 0.7646 | 3.1225 | 1.2660 | 1.8565 | 0.4577 | -0.8569 | -1.1333 | -1.4081 | -1.7547 | -0.8569 | -1.4320 | 0.5751 | 0.6284 | 0.7169 | 0.8043 | 0.7165 |
| 0.9366 | 0.48 | 900 | 0.9099 | 0.9100 | 1.1426 | 0.2326 | 1.3334 | 0.7449 | 2.9612 | 1.2660 | 1.6952 | 0.4577 | -0.8158 | -1.1198 | -1.4146 | -1.8111 | -0.8158 | -1.4485 | 0.6327 | 0.6387 | 0.7220 | 0.8121 | 0.7243 |
| 0.8746 | 0.53 | 1000 | 0.9005 | 0.9005 | 1.1332 | 0.2326 | 1.4735 | 0.8523 | 3.0808 | 1.2660 | 1.8148 | 0.4577 | -0.8931 | -1.2235 | -1.5380 | -1.9733 | -0.8931 | -1.5782 | 0.6852 | 0.6392 | 0.7319 | 0.8080 | 0.7264 |
| 0.8941 | 0.58 | 1100 | 0.8952 | 0.8952 | 1.1279 | 0.2326 | 1.4775 | 0.8426 | 3.1270 | 1.2660 | 1.8610 | 0.4577 | -0.9341 | -1.2736 | -1.6024 | -2.0415 | -0.9341 | -1.6392 | 0.7051 | 0.6413 | 0.7340 | 0.8111 | 0.7288 |
| 0.9201 | 0.64 | 1200 | 0.8891 | 0.8891 | 1.1218 | 0.2326 | 1.5023 | 0.8385 | 3.2583 | 1.2660 | 1.9923 | 0.4577 | -0.9362 | -1.2764 | -1.6100 | -2.0560 | -0.9362 | -1.6474 | 0.7112 | 0.6335 | 0.7329 | 0.8245 | 0.7303 |
| 0.8358 | 0.69 | 1300 | 0.8860 | 0.8861 | 1.1187 | 0.2326 | 1.6540 | 0.9301 | 3.2862 | 1.2660 | 2.0202 | 0.4577 | -0.9350 | -1.2850 | -1.6319 | -2.1211 | -0.9350 | -1.6793 | 0.7443 | 0.6423 | 0.7329 | 0.8214 | 0.7322 |
| 0.8829 | 0.74 | 1400 | 0.8846 | 0.8847 | 1.1174 | 0.2326 | 1.4174 | 0.8464 | 3.0760 | 1.2660 | 1.8100 | 0.4577 | -0.8119 | -1.1349 | -1.4591 | -1.9229 | -0.8119 | -1.5056 | 0.6938 | 0.6392 | 0.7381 | 0.8297 | 0.7357 |
| 0.8779 | 0.8 | 1500 | 0.8822 | 0.8823 | 1.1150 | 0.2326 | 1.6183 | 0.9325 | 3.3052 | 1.2660 | 2.0392 | 0.4577 | -0.9158 | -1.2611 | -1.6110 | -2.1030 | -0.9158 | -1.6583 | 0.7425 | 0.6387 | 0.7345 | 0.8261 | 0.7331 |
| 0.9388 | 0.85 | 1600 | 0.8818 | 0.8819 | 1.1145 | 0.2326 | 1.6409 | 0.9388 | 3.3318 | 1.2660 | 2.0658 | 0.4577 | -0.9332 | -1.2823 | -1.6359 | -2.1322 | -0.9332 | -1.6834 | 0.7502 | 0.6361 | 0.7319 | 0.8271 | 0.7317 |
| 0.8319 | 0.9 | 1700 | 0.8811 | 0.8812 | 1.1139 | 0.2326 | 1.5745 | 0.9076 | 3.2655 | 1.2660 | 1.9995 | 0.4577 | -0.8984 | -1.2427 | -1.5909 | -2.0806 | -0.8984 | -1.6380 | 0.7396 | 0.6356 | 0.7350 | 0.8307 | 0.7338 |
| 0.8719 | 0.96 | 1800 | 0.8809 | 0.8810 | 1.1137 | 0.2326 | 1.5827 | 0.9136 | 3.2695 | 1.2660 | 2.0034 | 0.4577 | -0.8998 | -1.2451 | -1.5947 | -2.0870 | -0.8998 | -1.6423 | 0.7424 | 0.6372 | 0.7340 | 0.8307 | 0.7339 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sujith013/tamil-llama-7b-instruct-quantized-ASR-output-fine-tuning
|
sujith013
| 2024-02-08T09:05:03Z
| 5
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"Tamil-ASR, ASR-fine-tuning, Tamil-llama",
"generated_from_trainer",
"ta",
"base_model:abhinand/tamil-llama-7b-instruct-v0.1",
"base_model:adapter:abhinand/tamil-llama-7b-instruct-v0.1",
"license:llama2",
"region:us"
] | null | 2024-02-08T07:03:25Z
|
---
language:
- ta
license: llama2
library_name: peft
tags:
- trl
- sft
- Tamil-ASR, ASR-fine-tuning, Tamil-llama
- generated_from_trainer
base_model: abhinand/tamil-llama-7b-instruct-v0.1
model-index:
- name: tamil-llama-7b-instruct-quantized-ASR-output-fine-tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamil-llama-7b-instruct-quantized-ASR-output-fine-tuning
This model is a fine-tuned version of [abhinand/tamil-llama-7b-instruct-v0.1](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3527
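Since this repo holds a PEFT (LoRA) adapter for the base model named above, a minimal loading sketch might look like the following; the prompt and generation settings are illustrative assumptions, not part of this card:
```python
# Minimal sketch, assuming this repo contains a PEFT LoRA adapter for the
# base model listed in the card. Prompt and settings are illustrative only.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "abhinand/tamil-llama-7b-instruct-v0.1"
adapter_id = "sujith013/tamil-llama-7b-instruct-quantized-ASR-output-fine-tuning"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Example ASR transcript to correct", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```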
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4659 | 2.58 | 500 | 4.2383 |
| 1.9248 | 5.17 | 1000 | 4.6944 |
| 1.4112 | 7.75 | 1500 | 5.3527 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
issuez/ishan
|
issuez
| 2024-02-08T09:01:32Z
| 0
| 0
| null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2024-02-08T09:00:23Z
|
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Gangster with guns in hand -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YashRawal225/New-3-7b-chat-finetune-german500-GGUF
|
YashRawal225
| 2024-02-08T09:00:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T08:57:46Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/existential-there-quantifier-transformer-2
|
CLMBR
| 2024-02-08T08:59:47Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:11:11Z
|
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2284 | 0.03 | 76320 | 4.1985 |
| 4.0216 | 1.03 | 152640 | 4.0292 |
| 3.9117 | 0.03 | 228960 | 3.9547 |
| 3.846 | 1.03 | 305280 | 3.9137 |
| 3.7932 | 0.03 | 381600 | 3.8879 |
| 3.7526 | 1.03 | 457920 | 3.8726 |
| 3.7186 | 0.03 | 534240 | 3.8616 |
| 3.6853 | 1.03 | 610560 | 3.8549 |
| 3.6566 | 0.03 | 686880 | 3.8491 |
| 3.6299 | 1.03 | 763200 | 3.8466 |
| 3.6088 | 0.03 | 839520 | 3.8450 |
| 3.5909 | 1.03 | 915840 | 3.8433 |
| 3.5721 | 0.03 | 992160 | 3.8440 |
| 3.5517 | 1.03 | 1068480 | 3.8438 |
| 3.5396 | 0.03 | 1144800 | 3.8448 |
| 3.5253 | 1.03 | 1221120 | 3.8455 |
| 3.5095 | 0.03 | 1297440 | 3.8461 |
| 3.4965 | 0.03 | 1373760 | 3.8489 |
| 3.4797 | 1.03 | 1450080 | 3.8500 |
| 3.4741 | 0.03 | 1526400 | 3.8496 |
| 3.463 | 1.03 | 1602720 | 3.8523 |
| 3.456 | 0.03 | 1679040 | 3.8542 |
| 3.4458 | 1.03 | 1755360 | 3.8550 |
| 3.433 | 0.03 | 1831680 | 3.8559 |
| 3.4181 | 0.03 | 1908000 | 3.8570 |
| 3.4069 | 1.03 | 1984320 | 3.8597 |
| 3.3962 | 0.03 | 2060640 | 3.8610 |
| 3.3886 | 1.03 | 2136960 | 3.8617 |
| 3.3791 | 0.03 | 2213280 | 3.8636 |
| 3.3653 | 1.03 | 2289600 | 3.8646 |
| 3.3589 | 0.03 | 2365920 | 3.8649 |
| 3.3494 | 1.03 | 2442240 | 3.8656 |
| 3.3363 | 0.03 | 2518560 | 3.8670 |
| 3.3258 | 1.03 | 2594880 | 3.8668 |
| 3.3168 | 0.03 | 2671200 | 3.8669 |
| 3.3126 | 1.03 | 2747520 | 3.8667 |
| 3.3062 | 0.03 | 2823840 | 3.8659 |
| 3.3037 | 1.03 | 2900160 | 3.8657 |
| 3.2966 | 0.03 | 2976480 | 3.8640 |
| 3.2869 | 1.02 | 3052726 | 3.8631 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
litvan/SDXL_finetuned_for_russian_churches
|
litvan
| 2024-02-08T08:58:02Z
| 1
| 0
|
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-06T13:47:35Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_0.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_1.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_2.png"
- text: 'Orthodox church in the style of African buildings of the 6th century'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Orthodox church
license: openrail++
---
# SDXL LoRA DreamBooth - litvan/SDXL_finetuned_for_russian_churches
<Gallery />
## Model description
These are litvan/SDXL_finetuned_for_russian_churches LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The main purpose of the model is to generate Orthodox churches in the cultural and architectural styles of different countries.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
Dataset used for fine-tuning: litvan/russian_churches_with_blip_captioning
Training hardware: 3x NVIDIA A100 (80 GB) GPUs.
## Trigger words
You should use `Orthodox church` to trigger the image generation.
## Download model
You can load the weights with the following lines of code:
```python
from diffusers import DiffusionPipeline

# DiffusionPipeline has no .cuda() method; move the pipeline with .to("cuda").
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to("cuda")
pipeline.load_lora_weights("litvan/SDXL_finetuned_for_russian_churches")
```
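A short usage sketch once the LoRA weights are loaded; the prompt below is one of the widget prompts from this card, and the output filename is a placeholder:
```python
# Illustrative usage of the loaded pipeline with the card's trigger words.
prompt = "Orthodox church in the style of African buildings of the 6th century"
image = pipeline(prompt).images[0]
image.save("church.png")  # placeholder filename
```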
### Using the refiner
```python
import torch
from diffusers import DiffusionPipeline

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=pipeline.text_encoder_2,
    vae=pipeline.vae,
    torch_dtype=torch.float32,
    use_safetensors=True,
).to("cuda")  # pipelines are moved with .to("cuda"), not .cuda()
```
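A hedged two-stage sketch for combining base and refiner; the `denoising_end`/`denoising_start` split below is the common SDXL base-plus-refiner pattern from the diffusers docs, not something this card specifies:
```python
# Base pipeline produces latents, the refiner finishes them; the 0.8 split
# is a common default, not a value taken from this card.
prompt = "Orthodox church in the style of African buildings of the 6th century"
latents = pipeline(prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, denoising_start=0.8, image=latents).images[0]
image.save("church_refined.png")  # placeholder filename
```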
|
gizmo-ai/blip-image-captioning-large
|
gizmo-ai
| 2024-02-08T08:47:59Z
| 11
| 2
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"blip",
"image-text-to-text",
"image-captioning",
"image-to-text",
"arxiv:2201.12086",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-02-08T08:47:59Z
|
---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for image captioning pretrained on the COCO dataset - base architecture (with ViT large backbone).
*Figure (pull figure from the official BLIP repo; image not included here). Image source: https://github.com/salesforce/BLIP*
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and unconditional image captioning.
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
RapidOrc121/BERT_sentiment_analysis
|
RapidOrc121
| 2024-02-08T08:43:36Z
| 10
| 2
|
bertopic
|
[
"bertopic",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:carblacac/twitter-sentiment-analysis",
"region:us"
] |
text-classification
| 2024-01-26T17:56:23Z
|
---
datasets:
- carblacac/twitter-sentiment-analysis
language:
- en
library_name: bertopic
pipeline_tag: text-classification
---
- LABEL_0 = "sadness"
- LABEL_1 = "joy"
- LABEL_2 = "love"
- LABEL_3 = "anger"
- LABEL_4 = "fear"
- LABEL_5 = "surprise"
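A minimal sketch of applying this mapping, assuming the checkpoint works with the standard `text-classification` pipeline (the example sentence is illustrative, not from this card):
```python
# Minimal sketch: map raw LABEL_k outputs to the emotion names listed above.
from transformers import pipeline

id2label = {f"LABEL_{i}": name for i, name in
            enumerate(["sadness", "joy", "love", "anger", "fear", "surprise"])}

clf = pipeline("text-classification", model="RapidOrc121/BERT_sentiment_analysis")
pred = clf("I can't stop smiling today!")[0]
print(id2label.get(pred["label"], pred["label"]), round(pred["score"], 4))
```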
|
arun100/whisper-base-ar-1
|
arun100
| 2024-02-08T08:37:05Z
| 7
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-07T19:03:58Z
|
---
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 ar
type: mozilla-foundation/common_voice_16_0
config: ar
split: test
args: ar
metrics:
- name: Wer
type: wer
value: 80.47772163527792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Arabic
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_16_0 ar dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5856
- Wer: 80.4777
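As a hedged usage sketch (not part of the original card), the checkpoint should work with the standard speech-recognition pipeline; the audio path is a placeholder for a 16 kHz Arabic recording:
```python
# Illustrative sketch: transcribe Arabic speech with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-ar-1")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```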
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7392 | 1.53 | 500 | 0.8623 | 100.8133 |
| 0.5938 | 3.07 | 1000 | 0.7397 | 93.6651 |
| 0.5388 | 4.6 | 1500 | 0.6953 | 92.3005 |
| 0.4982 | 6.13 | 2000 | 0.6682 | 88.9392 |
| 0.4795 | 7.67 | 2500 | 0.6512 | 90.1524 |
| 0.4483 | 9.2 | 3000 | 0.6373 | 87.1234 |
| 0.4374 | 10.74 | 3500 | 0.6261 | 85.3144 |
| 0.4331 | 12.27 | 4000 | 0.6179 | 86.4290 |
| 0.4125 | 13.8 | 4500 | 0.6106 | 83.2865 |
| 0.3984 | 15.34 | 5000 | 0.6059 | 83.0676 |
| 0.4035 | 16.87 | 5500 | 0.6008 | 82.2165 |
| 0.3997 | 18.4 | 6000 | 0.5970 | 81.1195 |
| 0.3878 | 19.94 | 6500 | 0.5941 | 81.7153 |
| 0.3827 | 21.47 | 7000 | 0.5906 | 81.2559 |
| 0.3785 | 23.01 | 7500 | 0.5892 | 81.0506 |
| 0.372 | 24.54 | 8000 | 0.5882 | 81.4248 |
| 0.3655 | 26.07 | 8500 | 0.5865 | 81.0479 |
| 0.3697 | 27.61 | 9000 | 0.5856 | 80.4777 |
| 0.3658 | 29.14 | 9500 | 0.5849 | 80.6128 |
| 0.3539 | 30.67 | 10000 | 0.5848 | 80.6696 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
mertllc/mms-tts-tur-fifties_female
|
mertllc
| 2024-02-08T08:32:14Z
| 18
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T08:29:32Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DrishtiSharma/phi2-english-to-hinglish-translation
|
DrishtiSharma
| 2024-02-08T08:24:31Z
| 4
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-07T20:23:18Z
|
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-english-to-hinglish-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-english-to-hinglish-translation
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3394
- Rouge Scores: {'rouge1': 0.02194963696306387, 'rouge2': 0.017844397420545253, 'rougeL': 0.017985463648805815, 'rougeLsum': 0.02198801722885821}
- Bleu Scores: [0.0141983812922229, 0.013783602019353523, 0.013237039007079092, 0.012647324457245113]
- Gen Len: 2048.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------:|
| 1.6688 | 1.0 | 500 | 1.4150 | {'rouge1': 0.021944939879946292, 'rouge2': 0.017781155558600512, 'rougeL': 0.017866554441667286, 'rougeLsum': 0.02197862373873669} | [0.014214089766333284, 0.013807603949625002, 0.013250971870467268, 0.012646602626664907] | 2048.0 |
| 1.2148 | 2.0 | 1000 | 1.3394 | {'rouge1': 0.02194963696306387, 'rouge2': 0.017844397420545253, 'rougeL': 0.017985463648805815, 'rougeLsum': 0.02198801722885821} | [0.0141983812922229, 0.013783602019353523, 0.013237039007079092, 0.012647324457245113] | 2048.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1
|
DrishtiSharma/mistral-7b-v0.1-english-to-hinglish-translation
|
DrishtiSharma
| 2024-02-08T08:23:29Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-05T19:57:01Z
|
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-v0.1-english-to-hinglish-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-v0.1-english-to-hinglish-translation
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9017
- Rouge Scores: {'rouge1': 0.9052154858930703, 'rouge2': 0.7938118811886605, 'rougeL': 0.8365543601879399, 'rougeLsum': 0.9051011676969527}
- Bleu Scores: [0.9286814242037147, 0.9121661008968365, 0.8907823041130339, 0.8677722819236368]
- Gen Len: 2048.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:-------:|
| 0.9667 | 1.0 | 500 | 0.8997 | {'rouge1': 0.9066197962103982, 'rouge2': 0.7949438120742293, 'rougeL': 0.8365583570941119, 'rougeLsum': 0.906542182776239} | [0.9280923249970773, 0.9116476390859075, 0.8901882800412136, 0.8671907395641425] | 2048.0 |
| 0.5702 | 2.0 | 1000 | 0.9017 | {'rouge1': 0.9052154858930703, 'rouge2': 0.7938118811886605, 'rougeL': 0.8365543601879399, 'rougeLsum': 0.9051011676969527} | [0.9286814242037147, 0.9121661008968365, 0.8907823041130339, 0.8677722819236368] | 2048.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1
|
madhiarasan/hr_qna
|
madhiarasan
| 2024-02-08T08:21:46Z
| 4
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"region:us"
] | null | 2024-02-08T08:21:44Z
|
---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
anish005/mistral-reddit
|
anish005
| 2024-02-08T08:09:39Z
| 60
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-08T06:40:22Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YashRawal225/New-3-7b-chat-finetune-german500
|
YashRawal225
| 2024-02-08T08:05:53Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T07:58:06Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RajuEEE/LlaMa_FineTunedModel
|
RajuEEE
| 2024-02-08T08:00:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T08:00:43Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mertllc/mms-tts-tur-thirties-male
|
mertllc
| 2024-02-08T08:00:04Z
| 22
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T07:57:19Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akashAD/phi-1_5-finetuned-query-classify
|
akashAD
| 2024-02-08T07:55:50Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T07:50:54Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Atozzio/q-FrozenLake-v1-4x4-noSlippery
|
Atozzio
| 2024-02-08T07:48:58Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T07:48:55Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebooks.
model = load_from_hub(repo_id="Atozzio/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
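For completeness, here is a minimal greedy-rollout sketch. It assumes the loaded dict exposes a `"qtable"` entry (as in the Deep RL course helpers) and the Gym >= 0.26 / Gymnasium step API; treat it as illustrative rather than part of the original card.
```python
import numpy as np

# Play one episode by always taking the greedy action from the Q-table.
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. Q-values
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```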
|
yuandli/dogbooth
|
yuandli
| 2024-02-08T07:41:43Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T07:07:47Z
|
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
inference: true
---
# DreamBooth - yuandli/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was not enabled.
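The card ships no usage snippet; below is a minimal sketch of loading the fine-tuned pipeline with diffusers. The prompt reuses the instance prompt above; the float16/CUDA settings and output filename are assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned pipeline directly from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "yuandli/dogbooth", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of [v]dog").images[0]
image.save("dogbooth_sample.png")  # hypothetical output path
```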
|
hossamdaoud/bloomz-1b7-academic-detector
|
hossamdaoud
| 2024-02-08T07:40:40Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-classification",
"mgt-detection",
"ai-detection",
"en",
"dataset:NicolaiSivesind/human-vs-machine",
"dataset:gfissore/arxiv-abstracts-2021",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T17:36:09Z
|
---
license: openrail
widget:
- text: I am totally a human, trust me bro.
example_title: default
- text: >-
In Finnish folklore, all places and things, and also human beings, have a
haltija (a genius, guardian spirit) of their own. One such haltija is called
etiäinen – an image, doppelgänger, or just an impression that goes ahead of a
person, doing things the person in question later does. For example, people
waiting at home might hear the door close or even see a shadow or a
silhouette, only to realize that no one has yet arrived. Etiäinen can also
refer to some kind of a feeling that something is going to happen. Sometimes
it could, for example, warn of a bad year coming. In modern Finnish, the
term has detached from its shamanistic origins and refers to premonition.
Unlike clairvoyance, divination, and similar practices, etiäiset (plural)
are spontaneous and can't be induced. Quite the opposite, they may be
unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic
and may concern everyday events, although ones related to e.g. deaths are
common. As these phenomena are still reported today, they can be considered
a living tradition, as a way to explain the psychological experience of
premonition.
example_title: real wikipedia
- text: >-
In Finnish folklore, all places and things, animate or inanimate, have a
spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms,
but is usually described as a kind, elderly woman with white hair. She is
the guardian of natural places and often helps people in need. Etiäinen has
been a part of Finnish culture for centuries and is still widely believed in
today. Folklorists study etiäinen to understand Finnish traditions and how
they have changed over time.
example_title: generated wikipedia
- text: >-
This paper presents a novel framework for sparsity-certifying graph
decompositions, which are important tools in various areas of computer
science, including algorithm design, complexity theory, and optimization.
Our approach is based on the concept of "cut sparsifiers," which are sparse
graphs that preserve the cut structure of the original graph up to a certain
error bound. We show that cut sparsifiers can be efficiently constructed
using a combination of spectral techniques and random sampling, and we use
them to develop new algorithms for decomposing graphs into sparse subgraphs.
example_title: from ChatGPT
- text: >-
Recent work has demonstrated substantial gains on many NLP tasks and
benchmarks by pre-training on a large corpus of text followed by fine-tuning
on a specific task. While typically task-agnostic in architecture, this
method still requires task-specific fine-tuning datasets of thousands or
tens of thousands of examples. By contrast, humans can generally perform a
new language task from only a few examples or from simple instructions -
something which current NLP systems still largely struggle to do. Here we
show that scaling up language models greatly improves task-agnostic,
few-shot performance, sometimes even reaching competitiveness with prior
state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an
autoregressive language model with 175 billion parameters, 10x more than any
previous non-sparse language model, and test its performance in the few-shot
setting. For all tasks, GPT-3 is applied without any gradient updates or
fine-tuning, with tasks and few-shot demonstrations specified purely via
text interaction with the model. GPT-3 achieves strong performance on many
NLP datasets, including translation, question-answering, and cloze tasks, as
well as several tasks that require on-the-fly reasoning or domain
adaptation, such as unscrambling words, using a novel word in a sentence, or
performing 3-digit arithmetic. At the same time, we also identify some
datasets where GPT-3's few-shot learning still struggles, as well as some
datasets where GPT-3 faces methodological issues related to training on
large web corpora. Finally, we find that GPT-3 can generate samples of news
articles which human evaluators have difficulty distinguishing from articles
written by humans. We discuss broader societal impacts of this finding and
of GPT-3 in general.
example_title: GPT-3 paper
datasets:
- NicolaiSivesind/human-vs-machine
- gfissore/arxiv-abstracts-2021
language:
- en
pipeline_tag: text-classification
tags:
- mgt-detection
- ai-detection
---
Machine-generated text-detection by fine-tuning of language models
===
This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (see [here](https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/3078096)) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*.
It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models achieved an accuracy of 100% on real and *GPT-3*-generated Wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples).
The dataset card for the dataset that was created in relation to this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine).
**NOTE**: the hosted inference on this site only works for the RoBERTa models, not for the Bloomz models. The Bloomz models can otherwise produce wrong predictions when the attention mask from the tokenizer is not explicitly passed to the model at inference time. For the most consistent results, use the [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) API.
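As a minimal sketch of that pipeline-based usage (the example text is invented, and any of the detector repos listed below can be substituted as the model id):
```python
from transformers import pipeline

# The pipeline handles tokenization, attention masks and label mapping for us.
detector = pipeline(
    "text-classification",
    model="andreas122001/bloomz-1b7-academic-detector",
)
print(detector("This abstract was written entirely by a human, trust me."))
```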
## Fine-tuned detectors
This project includes 12 fine-tuned models, based on the RoBERTa-base model and three sizes of the Bloomz models.
| Base-model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b |
|------------|--------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
| Wiki | [roberta-wiki](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) |
| Academic | [roberta-academic](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) |
| Mixed | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) |
### Datasets
The models were trained on selections from [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro) and ChatGPT-Research-Abstracts, and are separated into three types: **wiki**-detectors, **academic**-detectors, and **mixed**-detectors, respectively.
- **Wiki-detectors**:
- Trained on 30'000 datapoints (10%) of GPT-wiki-intros.
- Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%.
- **Academic-detectors**:
- Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts.
- Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%
- **Mixed-detectors**:
- Trained on 15'000 datapoints (5%) of GPT-wiki-intros and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts.
- Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%.
### Hyperparameters
All models were trained using the same hyperparameters:
```python
{
    "num_train_epochs": 1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "batch_size": 8,
    "adam_epsilon": 1e-08,
    "optim": "adamw_torch",         # the optimizer (AdamW)
    "learning_rate": 5e-05,         # (LR)
    "lr_scheduler_type": "linear",  # scheduler type for LR
    "seed": 42,                     # seed for PyTorch RNG-generator
}
```
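For reference, a sketch of how these values map onto `transformers.TrainingArguments`; this is illustrative only, not the authors' exact training script, and the `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="detector-out",       # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=8,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    optim="adamw_torch",             # AdamW
    learning_rate=5e-05,
    lr_scheduler_type="linear",
    seed=42,
)
```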
### Metrics
Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072.
In-domain performance of wiki-detectors:
| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.973 | *1.000 | 0.945 | 0.972 |
| Bloomz-1b7 | 0.972 | *1.000 | 0.945 | 0.972 |
| Bloomz-3b | *1.000 | *1.000 | *1.000 | *1.000 |
| RoBERTa | 0.998 | 0.999 | 0.997 | 0.998 |
In-domain performance of academic-detectors:
| Base model | Accuracy | Precision | Recall | F1-score |
|-------------|----------|-----------|--------|----------|
| Bloomz-560m | 0.964 | 0.963 | 0.965 | 0.964 |
| Bloomz-1b7 | 0.946 | 0.941 | 0.951 | 0.946 |
| Bloomz-3b | *0.984 | *0.983 | 0.985 | *0.984 |
| RoBERTa | 0.982 | 0.968 | *0.997 | 0.982 |
F1-scores of the mixed-detectors on all three datasets:
| Base model | Mixed | Wiki | CRA |
|-------------|--------|--------|--------|
| Bloomz-560m | 0.948 | 0.972 | *0.848 |
| Bloomz-1b7 | 0.929 | 0.964 | 0.816 |
| Bloomz-3b | 0.988 | 0.996 | 0.772 |
| RoBERTa | *0.993 | *0.997 | 0.829 |
## Credits
- [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat
- [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo
- [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience
- [RoBERTa](https://huggingface.co/roberta-base), by Liu et al.
## Citation
Please use the following citation:
```
@misc {sivesind_2023,
author = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} },
title = { Machine-generated text-detection by fine-tuning of language models },
url = { https://huggingface.co/andreas122001/roberta-academic-detector },
year = 2023,
publisher = { Hugging Face }
}
```
|
noza-kit/Adapter_llama2_translate_Q_enjppt_ex2-1epoch
|
noza-kit
| 2024-02-08T07:34:02Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-08T04:46:02Z
|
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
JiajingChen/3
|
JiajingChen
| 2024-02-08T07:19:20Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T21:03:29Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: '3'
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.47 +/- 10.92
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rygielcorpuz/temoc
|
rygielcorpuz
| 2024-02-08T07:12:38Z
| 6
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-12-16T06:40:11Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: temoc flexing
output:
url: images/image3.png
- text: temoc suit
output:
url: images/image2.png
- text: temoc kicking
output:
url: images/image1.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# temoc
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/rygielcorpuz/temoc/tree/main) them in the Files & versions tab.
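A minimal sketch of loading this LoRA on top of its base model with diffusers follows; the float16/CUDA settings are assumptions, and diffusers resolves the Safetensors weight file from the repo automatically.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rygielcorpuz/temoc")

image = pipe("temoc flexing").images[0]
image.save("temoc.png")  # hypothetical output path
```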
|
humung/polyglot-ko-12.8b-vlending-v0.5
|
humung
| 2024-02-08T07:09:28Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T07:09:21Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
souvenger/NLP2Linux
|
souvenger
| 2024-02-08T07:09:20Z
| 6
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-02-08T07:09:07Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Upgrade all installed packages with superuser privileges
- text: Install package 'vim' as superuser
- text: Remove package 'firefox' with superuser privileges
- text: Change permissions of directory 'docs' to writable
- text: Update package lists using superuser privileges
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.0
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
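A minimal sketch of that two-step recipe with the `setfit` trainer API (assuming setfit >= 1.0; the tiny inline dataset is purely illustrative):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# A toy few-shot dataset in the (text, label) format SetFit expects.
train_dataset = Dataset.from_dict({
    "text": ["List all files and directories", "Change to the specified directory"],
    "label": ["ls", "cd"],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=8, num_epochs=1)  # contrastive fine-tuning settings
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning, then step 2: fit the head
```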
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 30 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
| ls | <ul><li>'List all files and directories'</li><li>'Show files in the current directory'</li><li>'Display contents of the current directory'</li></ul> |
| cd | <ul><li>'Change to the specified directory'</li><li>'Move to the home directory'</li><li>'Navigate to the specified directory path'</li></ul> |
| mkdir docs | <ul><li>"Create a new directory named 'docs'"</li></ul> |
| mkdir projects | <ul><li>"Make a directory named 'projects'"</li></ul> |
| mkdir data | <ul><li>"Create a folder called 'data'"</li></ul> |
| mkdir images | <ul><li>"Make a directory named 'images'"</li></ul> |
| mkdir scripts | <ul><li>"Create a new folder named 'scripts'"</li></ul> |
| rm example.txt | <ul><li>"Remove the file named 'example.txt'"</li></ul> |
| rm temp.txt | <ul><li>"Delete the file called 'temp.txt'"</li></ul> |
| rm file1 | <ul><li>"Remove the file named 'file1'"</li></ul> |
| rm file2 | <ul><li>"Delete the file named 'file2'"</li></ul> |
| rm backup.txt | <ul><li>"Remove the file named 'backup.txt'"</li></ul> |
| cp file1 /destination | <ul><li>'Copy file1 to directory /destination'</li></ul> |
| cp file2 /backup | <ul><li>'Duplicate file2 to directory /backup'</li></ul> |
| cp file3 /archive | <ul><li>'Copy file3 to folder /archive'</li></ul> |
| cp file4 /temp | <ul><li>'Duplicate file4 to folder /temp'</li></ul> |
| cp file5 /images | <ul><li>'Copy file5 to directory /images'</li></ul> |
| mv file2 /new_location | <ul><li>'Move file2 to directory /new_location'</li></ul> |
| mv file3 /backup | <ul><li>'Transfer file3 to directory /backup'</li></ul> |
| mv file4 /archive | <ul><li>'Move file4 to folder /archive'</li></ul> |
| mv file5 /temp | <ul><li>'Transfer file5 to folder /temp'</li></ul> |
| mv file6 /images | <ul><li>'Move file6 to directory /images'</li></ul> |
| cat README.md | <ul><li>"Display the contents of file 'README.md'"</li></ul> |
| cat notes.txt | <ul><li>"Show the content of file 'notes.txt'"</li></ul> |
| cat data.csv | <ul><li>"Print the contents of file 'data.csv'"</li></ul> |
| cat script.sh | <ul><li>"Display the content of file 'script.sh'"</li></ul> |
| cat config.ini | <ul><li>"Show the contents of file 'config.ini'"</li></ul> |
| grep 'pattern' data.txt | <ul><li>"Search for 'pattern' in file 'data.txt'"</li></ul> |
| grep 'word' text.txt | <ul><li>"Find occurrences of 'word' in file 'text.txt'"</li></ul> |
| grep 'keyword' document.txt | <ul><li>"Search for 'keyword' in file 'document.txt'"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("souvenger/NLP2Linux")
# Run inference
preds = model("Install package 'vim' as superuser")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 5 | 5.6667 | 9 |
| Label | Training Sample Count |
|:----------------------------|:----------------------|
| cat README.md | 1 |
| cat config.ini | 1 |
| cat data.csv | 1 |
| cat notes.txt | 1 |
| cat script.sh | 1 |
| cd | 10 |
| cp file1 /destination | 1 |
| cp file2 /backup | 1 |
| cp file3 /archive | 1 |
| cp file4 /temp | 1 |
| cp file5 /images | 1 |
| grep 'keyword' document.txt | 1 |
| grep 'pattern' data.txt | 1 |
| grep 'word' text.txt | 1 |
| ls | 10 |
| mkdir data | 1 |
| mkdir docs | 1 |
| mkdir images | 1 |
| mkdir projects | 1 |
| mkdir scripts | 1 |
| mv file2 /new_location | 1 |
| mv file3 /backup | 1 |
| mv file4 /archive | 1 |
| mv file5 /temp | 1 |
| mv file6 /images | 1 |
| rm backup.txt | 1 |
| rm example.txt | 1 |
| rm file1 | 1 |
| rm file2 | 1 |
| rm temp.txt | 1 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.1215 | - |
| 0.2083 | 50 | 0.0232 | - |
| 0.4167 | 100 | 0.01 | - |
| 0.625 | 150 | 0.0044 | - |
| 0.8333 | 200 | 0.0025 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.0
- PyTorch: 2.1.2
- Datasets: 2.1.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ssaryssane/ssary-10.7B-slerp
|
ssaryssane
| 2024-02-08T07:02:07Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sao10K/Fimbulvetr-10.7B-v1",
"base_model:merge:Sao10K/Fimbulvetr-10.7B-v1",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:merge:upstage/SOLAR-10.7B-Instruct-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T00:05:41Z
|
---
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
- Sao10K/Fimbulvetr-10.7B-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
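For reference, SLERP interpolates two weight tensors along the arc of a sphere rather than along a straight line. Schematically, for flattened parameter vectors $p$ and $q$ with interpolation factor $t \in [0, 1]$:
```latex
% Spherical linear interpolation between parameter vectors p and q
\mathrm{slerp}(p, q; t)
  = \frac{\sin\bigl((1 - t)\,\theta\bigr)}{\sin\theta}\, p
  + \frac{\sin(t\,\theta)}{\sin\theta}\, q,
\qquad
\cos\theta = \frac{\langle p, q \rangle}{\lVert p \rVert\,\lVert q \rVert}
```
In the configuration below, `t` varies per layer and per module (`self_attn` vs. `mlp`), so different depths blend the two parents in different proportions.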
### Models Merged
The following models were included in the merge:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sao10K/Fimbulvetr-10.7B-v1
layer_range: [0, 32]
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 32]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
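The card stops at the merge configuration. As a minimal loading sketch (the prompt and generation settings are illustrative; `device_map="auto"` assumes `accelerate` is installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ssaryssane/ssary-10.7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```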
|
gotchu/s8-knarf
|
gotchu
| 2024-02-08T07:01:12Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:ChaiML/season_4_top_solution",
"base_model:finetune:ChaiML/season_4_top_solution",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T06:50:26Z
|
---
base_model:
- ChaiML/season_4_top_solution
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, stacking two overlapping slices of the base model (layers 0–30 and 10–40) into a single deeper network.
### Models Merged
The following models were included in the merge:
* [ChaiML/season_4_top_solution](https://huggingface.co/ChaiML/season_4_top_solution)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 30]
model:
model:
path: ChaiML/season_4_top_solution
- sources:
- layer_range: [10, 40]
model:
model:
path: ChaiML/season_4_top_solution
```
|
GowthamMl/deepseeker-table-identification-v2
|
GowthamMl
| 2024-02-08T06:56:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T06:58:20Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightdude/kanji-lora
|
nightdude
| 2024-02-08T06:56:12Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-08T06:35:22Z
|
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - nightdude/kanji-lora
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the nightdude/sakana-kanji dataset. Some example images are shown below.




|
yoon1000/TrOCR_0208-2
|
yoon1000
| 2024-02-08T06:54:16Z
| 33
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-stage1",
"base_model:finetune:microsoft/trocr-base-stage1",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-02-08T06:51:30Z
|
---
base_model: microsoft/trocr-base-stage1
tags:
- generated_from_trainer
model-index:
- name: TrOCR_0208-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TrOCR_0208-2
This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2584
- Cer: 0.1211
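No usage example is provided; a minimal OCR inference sketch (assuming the repo ships processor files, otherwise load the processor from `microsoft/trocr-base-stage1`; the image path is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("yoon1000/TrOCR_0208-2")
model = VisionEncoderDecoderModel.from_pretrained("yoon1000/TrOCR_0208-2")

image = Image.open("text_line.png").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```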
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3873 | 1.71 | 500 | 1.6813 | 0.2361 |
| 0.8298 | 3.42 | 1000 | 1.7390 | 0.2441 |
| 0.5587 | 5.14 | 1500 | 1.5896 | 0.2090 |
| 0.376 | 6.85 | 2000 | 1.4717 | 0.1775 |
| 0.2847 | 8.56 | 2500 | 1.5528 | 0.1928 |
| 0.2376 | 10.27 | 3000 | 1.4412 | 0.1727 |
| 0.2101 | 11.99 | 3500 | 1.3770 | 0.1592 |
| 0.2551 | 13.7 | 4000 | 1.4311 | 0.1564 |
| 0.226 | 15.41 | 4500 | 1.2536 | 0.1337 |
| 0.1365 | 17.12 | 5000 | 1.2753 | 0.1272 |
| 0.14 | 18.84 | 5500 | 1.2584 | 0.1211 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.13.0
- Tokenizers 0.15.0
|
Jaerim/bloom-7b1-lora-tagger_3
|
Jaerim
| 2024-02-08T06:52:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T06:49:56Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
suvadityamuk/stable-diffusion-japanese-kanji
|
suvadityamuk
| 2024-02-08T06:41:14Z
| 15
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:suvadityamuk/japanese-kanji",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-25T04:48:22Z
|
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
datasets:
- suvadityamuk/japanese-kanji
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - suvadityamuk/stable-diffusion-japanese-kanji
This pipeline was finetuned from **stabilityai/stable-diffusion-2-1** on the **suvadityamuk/japanese-kanji** dataset. Below are some example images generated with the finetuned pipeline, using the prompts ['deep learning', 'elon musk', 'india', 'sakana', 'fish', 'foundation', 'neural network', 'machine learning', 'man', 'woman', 'tokyo', 'mumbai', 'google', 'youtube', 'deepmind', 'attention', 'diffusion', 'stability']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch

# Load the finetuned pipeline in half precision and move it to the GPU
# (fp16 weights are intended for GPU inference)
pipeline = DiffusionPipeline.from_pretrained("suvadityamuk/stable-diffusion-japanese-kanji", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

prompt = "deep learning"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 20
* Learning rate: 0.00025
* Batch size: 128
* Gradient accumulation steps: 4
* Image resolution: 128
* Mixed-precision: bf16
More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/suvadityamuk/sakana-kanji/runs/ymtm4e77).
|
turgutburak01/Pixelcopter-PLE-v0
|
turgutburak01
| 2024-02-08T06:38:32Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T06:38:29Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.70 +/- 13.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Basha738/llama2-supervised-ft-5epochs
|
Basha738
| 2024-02-08T06:34:13Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-08T06:30:17Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arpanl/Fine-Tuned_Model2
|
arpanl
| 2024-02-08T06:29:41Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-08T04:56:44Z
|
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: Fine-Tuned_Model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-Tuned_Model2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
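No usage example is given; a minimal inference sketch with the `transformers` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="arpanl/Fine-Tuned_Model2")
print(classifier("example.jpg"))  # placeholder image path
# -> list of {"label": ..., "score": ...} dicts for the top predicted classes
```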
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
akshayugale/roberta-finetuned-subjqa-movies_2
|
akshayugale
| 2024-02-08T06:28:30Z
| 20
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-08T05:52:21Z
|
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
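No usage example is provided; a minimal extractive question-answering sketch with the `transformers` pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="akshayugale/roberta-finetuned-subjqa-movies_2")
result = qa(
    question="Who directed the movie?",  # illustrative question
    context="The movie was directed by Jane Doe and released in 2020.",
)
print(result["answer"], result["score"])
```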
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
RajuEEE/GPT2_FineTunedModel
|
RajuEEE
| 2024-02-08T06:26:55Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T06:26:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ssoh/llama-2-7b-all-strings
|
ssoh
| 2024-02-08T06:25:09Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T06:20:26Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
umuthopeyildirim/fin-rwkv-169M
|
umuthopeyildirim
| 2024-02-08T06:22:40Z
| 16
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"rwkv",
"text-generation",
"finance",
"en",
"dataset:gbharti/finance-alpaca",
"arxiv:2305.13048",
"arxiv:2307.08621",
"arxiv:2302.10866",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T07:17:22Z
|
---
license: apache-2.0
datasets:
- gbharti/finance-alpaca
language:
- en
library_name: transformers
tags:
- finance
widget:
- text: "Is this headline positive or negative? Headline: Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash."
example_title: "Sentiment analysis"
- text: "Aluminum price per KG is 50$. Forecast max: +1$ min:+0.3$. What should be the current price of aluminum?"
example_title: "Forecast"
---
# Fin-RWKV: Attention-Free Financial Expert (WIP)
Fin-RWKV is an attention-free model designed specifically for financial analysis and prediction. Developed as part of a MindsDB Hackathon, it leverages the simplicity and efficiency of the RWKV architecture to process financial data and produce insights and forecasts. Fin-RWKV is tailored for professionals and enthusiasts in the finance sector who want to integrate deep learning techniques into their financial analyses.
## Use Cases
- Sentiment analysis
- Forecast
- Product Pricing
## Features
- Attention-Free Architecture: Uses the RWKV (Receptance Weighted Key Value) model, which avoids the quadratic cost of attention mechanisms while maintaining strong performance.
- Lower Costs: 10x to 100x+ lower inference cost and 2x to 10x lower training cost.
- Tiny: lightweight enough to run in real time on a CPU, bypassing the GPU entirely; it can run on a laptop today.
- Finance-Specific Training: Trained on the gbharti/finance-alpaca dataset, so the model is tuned for financial data analysis.
- Transformers Library Integration: Built on the popular `transformers` library for easy integration with existing ML pipelines and applications (see the sketch below).
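As a minimal sketch of that integration (the prompt is one of the widget examples above; generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "umuthopeyildirim/fin-rwkv-169M"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = ("Is this headline positive or negative? Headline: "
          "Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash.")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```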
## Competing Against
| Name | Param Count | Training Cost | Inference Cost |
|---------------|-------------|------|----------------|
| Fin-RWKV | 169M | $1.45 | Free on HuggingFace 🤗 & Low-End CPU |
| [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) | 50 Billion | $1.3 million | Enterprise GPUs |
| [FinGPT](https://huggingface.co/FinGPT) | 7 Billion | $302.4 | Consumer GPUs |
| Architecture | Status | Compute Efficiency | Largest Model | Trained Tokens | Link |
|--------------|--------|--------------------|---------------|---------------|------|
| (Fin)RWKV | In Production | O(N) | 14B | 500B++ (the pile+) | [Paper](https://arxiv.org/abs/2305.13048) |
| RetNet (Microsoft) | Research | O(N) | 6.7B | 100B (mixed) | [Paper](https://arxiv.org/abs/2307.08621) |
| State Space (Stanford) | Prototype | O(Log N) | 355M | 15B (the pile, subset) | [Paper](https://arxiv.org/abs/2302.10866) |
| Liquid (MIT) | Research | - | <1M | - | [Paper](https://arxiv.org/abs/2302.10866) |
| Transformer Architecture (included for contrasting reference) | In Production | O(N^2) | 800B (est) | 13T++ (est) | - |
<img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/7vAOYsXH1vhTyh22o6jYB.png" width="500" alt="Inference computational cost vs. Number of tokens">
_Note: Needs more data and training, testing purposes only._
|
Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc
|
Jzuluaga
| 2024-02-08T06:22:15Z
| 32
| 3
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"text",
"sequence-classification",
"en-atc",
"en",
"generated_from_trainer",
"bertraffic",
"audio-classification",
"dataset:Jzuluaga/uwb_atcc",
"arxiv:2211.04054",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-12-05T10:18:56Z
|
---
language: en
license: apache-2.0
tags:
- text
- sequence-classification
- en-atc
- en
- generated_from_trainer
- bert
- bertraffic
- audio-classification
datasets:
- Jzuluaga/uwb_atcc
metrics:
- Precision
- Recall
- Accuracy
- F1
widget:
- text: >-
csa two nine six startup approved mike current qnh one zero one eight time
check one seven
- text: >-
swiss four eight seven november runway three one cleared for takeoff wind
one three zero degrees seven knots
- text: >-
lufthansa five yankee victor runway one three clear to land wind zero seven
zero degrees
- text: austrian seven one zulu hello to you reduce one six zero knots
- text: >-
sky travel one nine two approaching holding point three one ready for
departure
- name: bert-base-speaker-role-atc-en-uwb-atcc
results:
- task:
type: token-classification
name: chunking
dataset:
type: Jzuluaga/uwb_atcc
name: UWB-ATCC corpus (Air Traffic Control Communications)
config: test
split: test
metrics:
- type: F1
value: 0.87
name: TEST F1 (macro)
verified: false
- type: Accuracy
value: 0.91
name: TEST Accuracy
verified: false
- type: Precision
value: 0.86
name: TEST Precision (macro)
verified: false
- type: Recall
value: 0.88
name: TEST Recall (macro)
verified: false
- type: Jaccard Error Rate
value: 0.169
name: TEST Jaccard Error Rate
verified: false
base_model: bert-base-uncased
---
# bert-base-speaker-role-atc-en-uwb-atcc
This model allows detecting speaker roles from text. Normally, this task is done at the acoustic level; here, we propose to perform it at the text level.
We address this challenge with a BERT model fine-tuned on a sequence classification task.
For instance:
- Utterance 1: **lufthansa six two nine charlie tango report when established**
- Utterance 2: **report when established lufthansa six two nine charlie tango**
Based on the word order alone, can you tell the speaker roles? Is Utterance 1 from the air traffic controller or the pilot?
Check the inference API (there are 5 examples)!
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
<a href="https://github.com/idiap/atco2-corpus">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
</a>
It achieves the following results on the evaluation set:
- Loss: 0.6191
- Accuracy: 0.9103
- Precision: 0.9239
- Recall: 0.9161
- F1: 0.9200
**Paper**: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
Authors: Juan Zuluaga-Gomez, Karel Veselý, Igor Szöke, Petr Motlicek, Martin Kocour, Mickael Rigault, Khalid Choukri, Amrutha Prasad and others
Abstract: Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried between an air traffic controller (ATCO) and pilots via very-high frequency radio channels. In order to incorporate these novel technologies into ATC (low-resource domain), large-scale annotated datasets are required to develop the data-driven AI systems. Two examples are automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research on the challenging ATC field, which has lagged behind due to lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotations of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, signal-to-noise ratio estimate and English language detection score per sample. Both available for purchase through ELDA at this http URL. 3) The ATCO2-test-set-1h corpus is a one-hour subset from the original test set corpus, that we are offering for free at this url: https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU not only in the field of ATC communications but also in the general research community.
Code (GitHub repository): https://github.com/idiap/atco2-corpus
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We do not expect it to maintain the same performance on other datasets on which BERT was pre-trained or fine-tuned.
## Training and evaluation data
See Table 7 (page 19) in our paper: [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054), where we describe the data used to fine-tune our sequence classification model.
- We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
- However, do not worry: we have prepared scripts in our repository for preparing this database:
- Dataset preparation folder: https://github.com/idiap/atco2-corpus/tree/main/data/databases/uwb_atcc/
- Prepare the data: https://github.com/idiap/atco2-corpus/blob/main/data/databases/uwb_atcc/data_prepare_uwb_atcc_corpus_other.sh
## Writing your own inference script
The snippet below loads the model and classifies one utterance:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc")
model = AutoModelForSequenceClassification.from_pretrained("Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc")

# Process a text sample (from UWB-ATCC)
nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(nlp("lining up runway three one csa five bravo"))
# [{'label': 'pilot', 'score': 0.9998971223831177}]
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 3.36 | 500 | 0.2346 | 0.9207 | 0.9197 | 0.9413 | 0.9303 |
| 0.2212 | 6.71 | 1000 | 0.3161 | 0.9046 | 0.9260 | 0.9027 | 0.9142 |
| 0.2212 | 10.07 | 1500 | 0.4337 | 0.9065 | 0.9191 | 0.9144 | 0.9167 |
| 0.0651 | 13.42 | 2000 | 0.4743 | 0.9178 | 0.9249 | 0.9295 | 0.9272 |
| 0.0651 | 16.78 | 2500 | 0.5538 | 0.9103 | 0.9196 | 0.9211 | 0.9204 |
| 0.0296 | 20.13 | 3000 | 0.6191 | 0.9103 | 0.9239 | 0.9161 | 0.9200 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan_all-10k-steps
|
blzncz
| 2024-02-08T06:17:44Z
| 21
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-02-07T10:32:00Z
|
---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-4ss1st3r_s3gs3m_24Jan_all-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-4ss1st3r_s3gs3m_24Jan_all-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m_24Jan_all dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3095
- Mean Iou: 0.5513
- Mean Accuracy: 0.7874
- Overall Accuracy: 0.9260
- Accuracy Bg: nan
- Accuracy Fallo cohesivo: 0.9668
- Accuracy Fallo malla: 0.6808
- Accuracy Fallo adhesivo: 0.9727
- Accuracy Fallo burbuja: 0.5291
- Iou Bg: 0.0
- Iou Fallo cohesivo: 0.9167
- Iou Fallo malla: 0.6189
- Iou Fallo adhesivo: 0.7307
- Iou Fallo burbuja: 0.4903
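For inference, the checkpoint loads with the standard SegFormer classes from 🤗 transformers. Below is a minimal sketch, assuming a hypothetical input image `sample.png`; note that id 0 appears to be background here, so check `model.config.id2label` before interpreting class ids.
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

ckpt = "blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan_all-10k-steps"
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("sample.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution, then take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class ids
print(model.config.id2label)
```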
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------------------:|:--------------------:|:-----------------------:|:----------------------:|:------:|:------------------:|:---------------:|:------------------:|:-----------------:|
| 0.1378 | 1.0 | 783 | 0.2677 | 0.4895 | 0.7143 | 0.9122 | nan | 0.9724 | 0.5531 | 0.9663 | 0.3654 | 0.0 | 0.9038 | 0.5327 | 0.6757 | 0.3351 |
| 0.1117 | 2.0 | 1566 | 0.2305 | 0.5289 | 0.7978 | 0.9246 | nan | 0.9507 | 0.7727 | 0.9705 | 0.4974 | 0.0 | 0.9214 | 0.6808 | 0.5876 | 0.4549 |
| 0.0881 | 3.0 | 2349 | 0.2041 | 0.5556 | 0.7867 | 0.9354 | nan | 0.9712 | 0.7391 | 0.9389 | 0.4975 | 0.0 | 0.9273 | 0.6790 | 0.7323 | 0.4394 |
| 0.0878 | 4.0 | 3132 | 0.1984 | 0.5584 | 0.8003 | 0.9346 | nan | 0.9556 | 0.8247 | 0.9602 | 0.4606 | 0.0 | 0.9261 | 0.6935 | 0.7373 | 0.4352 |
| 0.0895 | 5.0 | 3915 | 0.2841 | 0.5246 | 0.8086 | 0.9088 | nan | 0.9137 | 0.8834 | 0.9719 | 0.4652 | 0.0 | 0.8964 | 0.6309 | 0.6593 | 0.4365 |
| 0.0773 | 6.0 | 4698 | 0.2547 | 0.5652 | 0.7823 | 0.9336 | nan | 0.9775 | 0.6843 | 0.9384 | 0.5291 | 0.0 | 0.9251 | 0.6378 | 0.7820 | 0.4813 |
| 0.0667 | 7.0 | 5481 | 0.2726 | 0.5609 | 0.7932 | 0.9295 | nan | 0.9741 | 0.6609 | 0.9689 | 0.5689 | 0.0 | 0.9203 | 0.6202 | 0.7548 | 0.5093 |
| 0.0678 | 8.0 | 6264 | 0.2950 | 0.5276 | 0.8002 | 0.9175 | nan | 0.9443 | 0.7561 | 0.9713 | 0.5292 | 0.0 | 0.9089 | 0.6570 | 0.5900 | 0.4822 |
| 0.0653 | 9.0 | 7047 | 0.2712 | 0.5467 | 0.7682 | 0.9288 | nan | 0.9690 | 0.6971 | 0.9641 | 0.4425 | 0.0 | 0.9189 | 0.6330 | 0.7588 | 0.4228 |
| 0.0646 | 10.0 | 7830 | 0.2841 | 0.5499 | 0.7819 | 0.9272 | nan | 0.9681 | 0.6840 | 0.9688 | 0.5068 | 0.0 | 0.9178 | 0.6243 | 0.7345 | 0.4728 |
| 0.057 | 11.0 | 8613 | 0.3373 | 0.5257 | 0.7782 | 0.9166 | nan | 0.9593 | 0.6555 | 0.9739 | 0.5242 | 0.0 | 0.9075 | 0.6040 | 0.6319 | 0.4848 |
| 0.0591 | 12.0 | 9396 | 0.3082 | 0.5504 | 0.7900 | 0.9247 | nan | 0.9656 | 0.6776 | 0.9705 | 0.5463 | 0.0 | 0.9148 | 0.6172 | 0.7182 | 0.5019 |
| 0.053 | 12.77 | 10000 | 0.3095 | 0.5513 | 0.7874 | 0.9260 | nan | 0.9668 | 0.6808 | 0.9727 | 0.5291 | 0.0 | 0.9167 | 0.6189 | 0.7307 | 0.4903 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
diksha13/arrivae-mod-13
|
diksha13
| 2024-02-08T06:15:43Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-08T03:49:51Z
|
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - diksha13/arrivae-mod-13
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the diksha13/13 dataset. Some example images are shown below.




|
danaleee/Long_rank10_iter500_valprompt_token
|
danaleee
| 2024-02-08T06:13:04Z
| 4
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-08T03:48:36Z
|
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of omd rc_car
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/Long_rank10_iter500_valprompt_token
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of omd rc_car" using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below.




LoRA for the text encoder was enabled: False.
|
umuthopeyildirim/fin-rwkv-1b5
|
umuthopeyildirim
| 2024-02-08T06:13:02Z
| 20
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"rwkv",
"text-generation",
"finance",
"en",
"dataset:gbharti/finance-alpaca",
"arxiv:2305.13048",
"arxiv:2307.08621",
"arxiv:2302.10866",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T12:05:35Z
|
---
license: apache-2.0
datasets:
- gbharti/finance-alpaca
language:
- en
library_name: transformers
tags:
- finance
widget:
- text: >-
user: Hypothetical, can taxes ever cause a net loss on otherwise-profitable stocks?
bot:
example_title: Hypothetical
- text: >-
user: What are some signs that the stock market might crash?
bot:
example_title: Question 2
- text: >-
user: Where should I be investing my money?
bot:
example_title: Question
- text: >-
user: Is this headline positive or negative? Headline: Australian Tycoon
Forrest Shuts Nickel Mines After Prices Crash.
bot:
example_title: Sentiment analysis
- text: >-
user: Aluminum price per KG is 50$. Forecast max: +1$ min:+0.3$. What should
be the current price of aluminum?
bot:
example_title: Forecast
---
# Fin-RWKV: Attention-Free Financial Expert (WIP)
Fin-RWKV is a cutting-edge, attention-free model designed specifically for financial analysis and prediction. Developed as part of a MindsDB Hackathon, this model leverages the simplicity and efficiency of the RWKV architecture to process financial data, providing insights and forecasts with remarkable accuracy. Fin-RWKV is tailored for professionals and enthusiasts in the finance sector who seek to integrate advanced deep learning techniques into their financial analyses.
## Use Cases
- Sentiment analysis
- Forecast
- Product Pricing
## Features
- Attention-Free Architecture: Utilizes the RWKV (Receptance Weighted Key Value) model, which bypasses the complexity of attention mechanisms while maintaining high performance.
- Lower Costs: 10x to 100x+ lower inference cost and 2x to 10x lower training cost
- Tinyyyy: Lightweight enough to run in real time on CPUs, bypassing the GPU entirely; it can run on your laptop today
- Finance-Specific Training: Trained on the gbharti/finance-alpaca dataset, ensuring that the model is finely tuned for financial data analysis.
- Transformers Library Integration: Built on the popular 'transformers' library, ensuring easy integration with existing ML pipelines and applications.
## How to use
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("umuthopeyildirim/fin-rwkv-1b5")
model = AutoModelForCausalLM.from_pretrained("umuthopeyildirim/fin-rwkv-1b5")
prompt = "user: Is this headline positive or negative? Headline: Australian Tycoon Forrest Shuts Nickel Mines After Prices Crash\nbot:"
# Tokenize the input
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Generate a response
output = model.generate(input_ids, max_length=333, num_return_sequences=1)
# Decode the output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
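To stream tokens as they are generated (handy for chat-style UIs), the same model and tokenizer can be wrapped with `TextIteratorStreamer` plus a background thread. A minimal sketch, reusing `model` and `tokenizer` from above:
```py
from threading import Thread
from transformers import TextIteratorStreamer

streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "user: What are some signs that the stock market might crash?\nbot:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# generate() blocks, so run it in a background thread and consume the streamer here
thread = Thread(target=model.generate, kwargs={"input_ids": input_ids, "streamer": streamer, "max_new_tokens": 128})
thread.start()
for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```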
## Competing Against
| Name | Param Count | Cost | Inference Cost |
|---------------|-------------|------|----------------|
| Fin-RWKV | 1B5 | $3 | Free on HuggingFace 🤗 & Low-End CPU |
| [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) | 50 Billion | $1.3 million | Enterprise GPUs |
| [FinGPT](https://huggingface.co/FinGPT) | 7 Billion | $302.4 | Consumer GPUs |

| Architecture | Status | Compute Efficiency | Largest Model | Trained Token | Link |
|--------------|--------|--------------------|---------------|---------------|------|
| (Fin)RWKV | In Production | O ( N ) | 14B | 500B++ (the pile+) | [Paper](https://arxiv.org/abs/2305.13048) |
| Ret Net (Microsoft) | Research | O ( N ) | 6.7B | 100B (mixed) | [Paper](https://arxiv.org/abs/2307.08621) |
| State Space (Stanford) | Prototype | O ( Log N ) | 355M | 15B (the pile, subset) | [Paper](https://arxiv.org/abs/2302.10866) |
| Liquid (MIT) | Research | - | <1M | - | [Paper](https://arxiv.org/abs/2302.10866) |
| Transformer Architecture (included for contrasting reference) | In Production | O ( N^2 ) | 800B (est) | 13T++ (est) | - |
<img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/7vAOYsXH1vhTyh22o6jYB.png" width="500" alt="Inference computational cost vs. Number of tokens">
## Stats for nerds
### Training Config
- n_epoch: 100
- epoch_save_frequency: 10
- batch_size: 5
- ctx_len: 2000
- T_MAX: 384
- RWKV_FLOAT_MODE: fp16
- RWKV_DEEPSPEED: 0
### Loss
<img src="https://cdn-uploads.huggingface.co/production/uploads/631ea4247beada30465fa606/NvPKCBlbVhiVeeMpUAv2C.png" width="500" alt="Loss">
_Note: needs more data and training; for testing purposes only. Not recommended for production-level deployment._
[Presentation](https://docs.google.com/presentation/d/1vNQ8Y5wwR0WXlO60fsXjkru5R9I0ZgykTmgag0B3Ato/edit?usp=sharing)
|
Amanaccessassist/adhar
|
Amanaccessassist
| 2024-02-08T05:58:30Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T05:58:03Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TesterGG/act_classifier
|
TesterGG
| 2024-02-08T05:57:42Z
| 46
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-08T05:28:51Z
|
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: TesterGG/act_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TesterGG/act_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3861
- Validation Loss: 0.5086
- Train Accuracy: 0.8073
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5062 | 0.5242 | 0.7969 | 0 |
| 0.4096 | 0.5086 | 0.8073 | 1 |
| 0.3861 | 0.5086 | 0.8073 | 2 |
### Framework versions
- Transformers 4.38.0.dev0
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
joowon99/SOLAR-10.7B-ko_alpaca
|
joowon99
| 2024-02-08T05:50:48Z
| 2
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"llama-factory",
"lora",
"generated_from_trainer",
"pytorch",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-4.0",
"region:us"
] | null | 2024-02-07T05:01:55Z
|
---
license: cc-by-4.0
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
- pytorch
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: solar_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar_model
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the ko_alpaca_style_dataset dataset.
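Only the LoRA adapter weights are published here, so inference loads the base model first and attaches the adapter with PEFT. A minimal sketch; the Alpaca-style prompt format is an assumption based on the training data name:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "joowon99/SOLAR-10.7B-ko_alpaca")  # attach the LoRA adapter

prompt = "### Instruction:\n한국의 수도는 어디인가요?\n\n### Response:\n"  # assumed Alpaca-style format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```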
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
humung/polyglot-ko-12.8b-vlending-v0.4
|
humung
| 2024-02-08T05:36:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T05:36:19Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
firqaaa/indo-setfit-bert-base-p3
|
firqaaa
| 2024-02-08T05:28:10Z
| 6
| 0
|
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:firqaaa/indo-sentence-bert-base",
"base_model:finetune:firqaaa/indo-sentence-bert-base",
"model-index",
"region:us"
] |
text-classification
| 2024-02-08T04:41:47Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Aku sudah lebih tua dan hidupku sangat berbeda. Aku bisa merasakan betapa
takjubnya aku pagi itu
- text: Saya merasa cukup href http kata-kata yang tak terucapkan disimpan di dalam
- text: Aku melihat ke dalam dompetku dan aku merasakan hawa dingin
- text: Aku menurunkan Erik dengan perasaan agak tidak puas dengan malam itu
- text: Aku bertanya-tanya apa yang siswa lain di kelasku rasakan ketika aku tidak
takut untuk memberikan jawaban di luar sana
pipeline_tag: text-classification
inference: true
base_model: firqaaa/indo-sentence-bert-base
model-index:
- name: SetFit with firqaaa/indo-sentence-bert-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: firqaaa/emotion-bahasa
type: unknown
split: test
metrics:
- type: accuracy
value: 0.718
name: Accuracy
---
# SetFit with firqaaa/indo-sentence-bert-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [firqaaa/indo-sentence-bert-base](https://huggingface.co/firqaaa/indo-sentence-bert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [firqaaa/indo-sentence-bert-base](https://huggingface.co/firqaaa/indo-sentence-bert-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kesedihan | <ul><li>'Saya merasa agak kecewa, saya rasa harus menyerahkan sesuatu yang tidak menarik hanya untuk memenuhi tenggat waktu'</li><li>'Aku merasa seperti aku telah cukup lalai terhadap blogku dan aku hanya mengatakan bahwa kita di sini hidup dan bahagia'</li><li>'Aku tahu dan aku selalu terkoyak karenanya karena aku merasa tidak berdaya dan tidak berguna'</li></ul> |
| sukacita | <ul><li>'aku mungkin tidak merasa begitu keren'</li><li>'saya merasa baik-baik saja'</li><li>'saya merasa seperti saya seorang ibu dengan mengorbankan produktivitas'</li></ul> |
| cinta | <ul><li>'aku merasa mencintaimu'</li><li>'aku akan merasa sangat nostalgia di usia yang begitu muda'</li><li>'Saya merasa diberkati bahwa saya tinggal di Amerika memiliki keluarga yang luar biasa dan Dorothy Kelsey adalah bagian dari hidup saya'</li></ul> |
| amarah | <ul><li>'Aku terlalu memikirkan cara dudukku, suaraku terdengar jika ada makanan di mulutku, dan perasaan bahwa aku harus berjalan ke semua orang agar tidak bersikap kasar'</li><li>'aku merasa memberontak sedikit kesal gila terkurung'</li><li>'Aku merasakan perasaan itu muncul kembali dari perasaan paranoid dan cemburu yang penuh kebencian yang selalu menyiksaku tanpa henti'</li></ul> |
| takut | <ul><li>'aku merasa seperti diserang oleh landak titanium'</li><li>'Aku membiarkan diriku memikirkan perilakuku terhadapmu saat kita masih kecil. Aku merasakan campuran aneh antara rasa bersalah dan kekaguman atas ketangguhanmu'</li><li>'saya marah karena majikan saya tidak berinvestasi pada kami sama sekali, gaji pelatihan, kenaikan hari libur bank dan rasanya seperti ketidakadilan sehingga saya merasa tidak berdaya'</li></ul> |
| kejutan | <ul><li>'Aku membaca bagian ol feefyefo Aku merasa takjub melihat betapa aku bisa mengoceh dan betapa transparannya aku dalam hidupku'</li><li>'saya menemukan seni di sisi lain saya merasa sangat terkesan dengan karya saya'</li><li>'aku merasa penasaran, bersemangat dan tidak sabar'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.718 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("firqaaa/indo-setfit-bert-base-p3")
# Run inference
preds = model("Aku melihat ke dalam dompetku dan aku merasakan hawa dingin")
```
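Because the classification head is a `LogisticRegression`, the model can also return per-class probabilities, which helps with thresholding ambiguous inputs. A short sketch, assuming the `labels` attribute is populated for this checkpoint:
```python
# Per-class probabilities, in the order of model.labels
probs = model.predict_proba(["Aku melihat ke dalam dompetku dan aku merasakan hawa dingin"])
for label, p in zip(model.labels, probs[0]):
    print(f"{label}: {float(p):.3f}")
```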
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 16.7928 | 56 |
| Label | Training Sample Count |
|:----------|:----------------------|
| kesedihan | 300 |
| sukacita | 300 |
| cinta | 300 |
| amarah | 300 |
| takut | 300 |
| kejutan | 300 |
### Training Hyperparameters
- batch_size: (128, 128)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2927 | - |
| 0.0024 | 50 | 0.2605 | - |
| 0.0047 | 100 | 0.2591 | - |
| 0.0071 | 150 | 0.2638 | - |
| 0.0095 | 200 | 0.245 | - |
| 0.0119 | 250 | 0.226 | - |
| 0.0142 | 300 | 0.222 | - |
| 0.0166 | 350 | 0.1968 | - |
| 0.0190 | 400 | 0.1703 | - |
| 0.0213 | 450 | 0.1703 | - |
| 0.0237 | 500 | 0.1587 | - |
| 0.0261 | 550 | 0.1087 | - |
| 0.0284 | 600 | 0.1203 | - |
| 0.0308 | 650 | 0.0844 | - |
| 0.0332 | 700 | 0.0696 | - |
| 0.0356 | 750 | 0.0606 | - |
| 0.0379 | 800 | 0.0333 | - |
| 0.0403 | 850 | 0.0453 | - |
| 0.0427 | 900 | 0.033 | - |
| 0.0450 | 950 | 0.0142 | - |
| 0.0474 | 1000 | 0.004 | - |
| 0.0498 | 1050 | 0.0097 | - |
| 0.0521 | 1100 | 0.0065 | - |
| 0.0545 | 1150 | 0.0081 | - |
| 0.0569 | 1200 | 0.0041 | - |
| 0.0593 | 1250 | 0.0044 | - |
| 0.0616 | 1300 | 0.0013 | - |
| 0.0640 | 1350 | 0.0024 | - |
| 0.0664 | 1400 | 0.001 | - |
| 0.0687 | 1450 | 0.0012 | - |
| 0.0711 | 1500 | 0.0013 | - |
| 0.0735 | 1550 | 0.0006 | - |
| 0.0759 | 1600 | 0.0033 | - |
| 0.0782 | 1650 | 0.0006 | - |
| 0.0806 | 1700 | 0.0013 | - |
| 0.0830 | 1750 | 0.0008 | - |
| 0.0853 | 1800 | 0.0006 | - |
| 0.0877 | 1850 | 0.0008 | - |
| 0.0901 | 1900 | 0.0004 | - |
| 0.0924 | 1950 | 0.0005 | - |
| 0.0948 | 2000 | 0.0004 | - |
| 0.0972 | 2050 | 0.0002 | - |
| 0.0996 | 2100 | 0.0002 | - |
| 0.1019 | 2150 | 0.0003 | - |
| 0.1043 | 2200 | 0.0006 | - |
| 0.1067 | 2250 | 0.0005 | - |
| 0.1090 | 2300 | 0.0003 | - |
| 0.1114 | 2350 | 0.0018 | - |
| 0.1138 | 2400 | 0.0003 | - |
| 0.1161 | 2450 | 0.0002 | - |
| 0.1185 | 2500 | 0.0018 | - |
| 0.1209 | 2550 | 0.0003 | - |
| 0.1233 | 2600 | 0.0008 | - |
| 0.1256 | 2650 | 0.0002 | - |
| 0.1280 | 2700 | 0.0007 | - |
| 0.1304 | 2750 | 0.006 | - |
| 0.1327 | 2800 | 0.0002 | - |
| 0.1351 | 2850 | 0.0001 | - |
| 0.1375 | 2900 | 0.0001 | - |
| 0.1399 | 2950 | 0.0001 | - |
| 0.1422 | 3000 | 0.0001 | - |
| 0.1446 | 3050 | 0.0001 | - |
| 0.1470 | 3100 | 0.0001 | - |
| 0.1493 | 3150 | 0.0001 | - |
| 0.1517 | 3200 | 0.0002 | - |
| 0.1541 | 3250 | 0.0003 | - |
| 0.1564 | 3300 | 0.0004 | - |
| 0.1588 | 3350 | 0.0001 | - |
| 0.1612 | 3400 | 0.0001 | - |
| 0.1636 | 3450 | 0.0014 | - |
| 0.1659 | 3500 | 0.0005 | - |
| 0.1683 | 3550 | 0.0003 | - |
| 0.1707 | 3600 | 0.0001 | - |
| 0.1730 | 3650 | 0.0001 | - |
| 0.1754 | 3700 | 0.0001 | - |
| 0.1778 | 3750 | 0.0001 | - |
| 0.1801 | 3800 | 0.0001 | - |
| 0.1825 | 3850 | 0.0001 | - |
| 0.1849 | 3900 | 0.0001 | - |
| 0.1873 | 3950 | 0.0001 | - |
| 0.1896 | 4000 | 0.0001 | - |
| 0.1920 | 4050 | 0.0001 | - |
| 0.1944 | 4100 | 0.0003 | - |
| 0.1967 | 4150 | 0.0006 | - |
| 0.1991 | 4200 | 0.0001 | - |
| 0.2015 | 4250 | 0.0 | - |
| 0.2038 | 4300 | 0.0 | - |
| 0.2062 | 4350 | 0.0001 | - |
| 0.2086 | 4400 | 0.0 | - |
| 0.2110 | 4450 | 0.0 | - |
| 0.2133 | 4500 | 0.0001 | - |
| 0.2157 | 4550 | 0.0002 | - |
| 0.2181 | 4600 | 0.0003 | - |
| 0.2204 | 4650 | 0.0018 | - |
| 0.2228 | 4700 | 0.0003 | - |
| 0.2252 | 4750 | 0.0145 | - |
| 0.2276 | 4800 | 0.0001 | - |
| 0.2299 | 4850 | 0.0006 | - |
| 0.2323 | 4900 | 0.0001 | - |
| 0.2347 | 4950 | 0.0007 | - |
| 0.2370 | 5000 | 0.0001 | - |
| 0.2394 | 5050 | 0.0 | - |
| 0.2418 | 5100 | 0.0 | - |
| 0.2441 | 5150 | 0.0001 | - |
| 0.2465 | 5200 | 0.0003 | - |
| 0.2489 | 5250 | 0.0 | - |
| 0.2513 | 5300 | 0.0 | - |
| 0.2536 | 5350 | 0.0 | - |
| 0.2560 | 5400 | 0.0 | - |
| 0.2584 | 5450 | 0.0004 | - |
| 0.2607 | 5500 | 0.0 | - |
| 0.2631 | 5550 | 0.0 | - |
| 0.2655 | 5600 | 0.0 | - |
| 0.2678 | 5650 | 0.0 | - |
| 0.2702 | 5700 | 0.0 | - |
| 0.2726 | 5750 | 0.0002 | - |
| 0.2750 | 5800 | 0.0 | - |
| 0.2773 | 5850 | 0.0 | - |
| 0.2797 | 5900 | 0.0 | - |
| 0.2821 | 5950 | 0.0 | - |
| 0.2844 | 6000 | 0.0 | - |
| 0.2868 | 6050 | 0.0 | - |
| 0.2892 | 6100 | 0.0 | - |
| 0.2916 | 6150 | 0.0 | - |
| 0.2939 | 6200 | 0.0 | - |
| 0.2963 | 6250 | 0.0 | - |
| 0.2987 | 6300 | 0.0001 | - |
| 0.3010 | 6350 | 0.0003 | - |
| 0.3034 | 6400 | 0.0048 | - |
| 0.3058 | 6450 | 0.0 | - |
| 0.3081 | 6500 | 0.0 | - |
| 0.3105 | 6550 | 0.0 | - |
| 0.3129 | 6600 | 0.0 | - |
| 0.3153 | 6650 | 0.0 | - |
| 0.3176 | 6700 | 0.0 | - |
| 0.3200 | 6750 | 0.0 | - |
| 0.3224 | 6800 | 0.0 | - |
| 0.3247 | 6850 | 0.0 | - |
| 0.3271 | 6900 | 0.0 | - |
| 0.3295 | 6950 | 0.0 | - |
| 0.3318 | 7000 | 0.0 | - |
| 0.3342 | 7050 | 0.0 | - |
| 0.3366 | 7100 | 0.0 | - |
| 0.3390 | 7150 | 0.0011 | - |
| 0.3413 | 7200 | 0.0002 | - |
| 0.3437 | 7250 | 0.0 | - |
| 0.3461 | 7300 | 0.0 | - |
| 0.3484 | 7350 | 0.0001 | - |
| 0.3508 | 7400 | 0.0001 | - |
| 0.3532 | 7450 | 0.0002 | - |
| 0.3556 | 7500 | 0.0 | - |
| 0.3579 | 7550 | 0.0 | - |
| 0.3603 | 7600 | 0.0 | - |
| 0.3627 | 7650 | 0.0 | - |
| 0.3650 | 7700 | 0.0 | - |
| 0.3674 | 7750 | 0.0 | - |
| 0.3698 | 7800 | 0.0001 | - |
| 0.3721 | 7850 | 0.0 | - |
| 0.3745 | 7900 | 0.0 | - |
| 0.3769 | 7950 | 0.0 | - |
| 0.3793 | 8000 | 0.0 | - |
| 0.3816 | 8050 | 0.0 | - |
| 0.3840 | 8100 | 0.0 | - |
| 0.3864 | 8150 | 0.0 | - |
| 0.3887 | 8200 | 0.0 | - |
| 0.3911 | 8250 | 0.0 | - |
| 0.3935 | 8300 | 0.0 | - |
| 0.3958 | 8350 | 0.0 | - |
| 0.3982 | 8400 | 0.0 | - |
| 0.4006 | 8450 | 0.0 | - |
| 0.4030 | 8500 | 0.0 | - |
| 0.4053 | 8550 | 0.0001 | - |
| 0.4077 | 8600 | 0.0001 | - |
| 0.4101 | 8650 | 0.0008 | - |
| 0.4124 | 8700 | 0.0001 | - |
| 0.4148 | 8750 | 0.0 | - |
| 0.4172 | 8800 | 0.0 | - |
| 0.4196 | 8850 | 0.0001 | - |
| 0.4219 | 8900 | 0.0 | - |
| 0.4243 | 8950 | 0.0 | - |
| 0.4267 | 9000 | 0.0 | - |
| 0.4290 | 9050 | 0.0 | - |
| 0.4314 | 9100 | 0.0 | - |
| 0.4338 | 9150 | 0.0 | - |
| 0.4361 | 9200 | 0.0 | - |
| 0.4385 | 9250 | 0.0 | - |
| 0.4409 | 9300 | 0.0 | - |
| 0.4433 | 9350 | 0.0 | - |
| 0.4456 | 9400 | 0.0 | - |
| 0.4480 | 9450 | 0.0 | - |
| 0.4504 | 9500 | 0.0 | - |
| 0.4527 | 9550 | 0.0 | - |
| 0.4551 | 9600 | 0.0 | - |
| 0.4575 | 9650 | 0.0 | - |
| 0.4598 | 9700 | 0.0 | - |
| 0.4622 | 9750 | 0.0001 | - |
| 0.4646 | 9800 | 0.0 | - |
| 0.4670 | 9850 | 0.0 | - |
| 0.4693 | 9900 | 0.0 | - |
| 0.4717 | 9950 | 0.0 | - |
| 0.4741 | 10000 | 0.0 | - |
| 0.4764 | 10050 | 0.0 | - |
| 0.4788 | 10100 | 0.0006 | - |
| 0.4812 | 10150 | 0.0 | - |
| 0.4835 | 10200 | 0.0 | - |
| 0.4859 | 10250 | 0.0 | - |
| 0.4883 | 10300 | 0.0 | - |
| 0.4907 | 10350 | 0.0 | - |
| 0.4930 | 10400 | 0.0 | - |
| 0.4954 | 10450 | 0.0 | - |
| 0.4978 | 10500 | 0.0 | - |
| 0.5001 | 10550 | 0.0 | - |
| 0.5025 | 10600 | 0.0 | - |
| 0.5049 | 10650 | 0.0 | - |
| 0.5073 | 10700 | 0.0 | - |
| 0.5096 | 10750 | 0.0 | - |
| 0.5120 | 10800 | 0.0 | - |
| 0.5144 | 10850 | 0.0 | - |
| 0.5167 | 10900 | 0.0 | - |
| 0.5191 | 10950 | 0.0 | - |
| 0.5215 | 11000 | 0.0 | - |
| 0.5238 | 11050 | 0.0 | - |
| 0.5262 | 11100 | 0.0 | - |
| 0.5286 | 11150 | 0.0 | - |
| 0.5310 | 11200 | 0.0 | - |
| 0.5333 | 11250 | 0.0 | - |
| 0.5357 | 11300 | 0.0 | - |
| 0.5381 | 11350 | 0.0 | - |
| 0.5404 | 11400 | 0.0 | - |
| 0.5428 | 11450 | 0.0 | - |
| 0.5452 | 11500 | 0.0 | - |
| 0.5475 | 11550 | 0.0 | - |
| 0.5499 | 11600 | 0.0 | - |
| 0.5523 | 11650 | 0.0001 | - |
| 0.5547 | 11700 | 0.0 | - |
| 0.5570 | 11750 | 0.0043 | - |
| 0.5594 | 11800 | 0.0 | - |
| 0.5618 | 11850 | 0.0 | - |
| 0.5641 | 11900 | 0.0 | - |
| 0.5665 | 11950 | 0.0 | - |
| 0.5689 | 12000 | 0.0 | - |
| 0.5713 | 12050 | 0.0 | - |
| 0.5736 | 12100 | 0.0 | - |
| 0.5760 | 12150 | 0.0 | - |
| 0.5784 | 12200 | 0.0 | - |
| 0.5807 | 12250 | 0.0029 | - |
| 0.5831 | 12300 | 0.0 | - |
| 0.5855 | 12350 | 0.0 | - |
| 0.5878 | 12400 | 0.0 | - |
| 0.5902 | 12450 | 0.0 | - |
| 0.5926 | 12500 | 0.0 | - |
| 0.5950 | 12550 | 0.0 | - |
| 0.5973 | 12600 | 0.0 | - |
| 0.5997 | 12650 | 0.0 | - |
| 0.6021 | 12700 | 0.0 | - |
| 0.6044 | 12750 | 0.0 | - |
| 0.6068 | 12800 | 0.0 | - |
| 0.6092 | 12850 | 0.0 | - |
| 0.6115 | 12900 | 0.0 | - |
| 0.6139 | 12950 | 0.0 | - |
| 0.6163 | 13000 | 0.0 | - |
| 0.6187 | 13050 | 0.0 | - |
| 0.6210 | 13100 | 0.0 | - |
| 0.6234 | 13150 | 0.0001 | - |
| 0.6258 | 13200 | 0.0 | - |
| 0.6281 | 13250 | 0.0 | - |
| 0.6305 | 13300 | 0.0 | - |
| 0.6329 | 13350 | 0.0 | - |
| 0.6353 | 13400 | 0.0001 | - |
| 0.6376 | 13450 | 0.0 | - |
| 0.6400 | 13500 | 0.0 | - |
| 0.6424 | 13550 | 0.0 | - |
| 0.6447 | 13600 | 0.0 | - |
| 0.6471 | 13650 | 0.0 | - |
| 0.6495 | 13700 | 0.0 | - |
| 0.6518 | 13750 | 0.0 | - |
| 0.6542 | 13800 | 0.0 | - |
| 0.6566 | 13850 | 0.0 | - |
| 0.6590 | 13900 | 0.0 | - |
| 0.6613 | 13950 | 0.0 | - |
| 0.6637 | 14000 | 0.0 | - |
| 0.6661 | 14050 | 0.0 | - |
| 0.6684 | 14100 | 0.0 | - |
| 0.6708 | 14150 | 0.0 | - |
| 0.6732 | 14200 | 0.0 | - |
| 0.6755 | 14250 | 0.0 | - |
| 0.6779 | 14300 | 0.0 | - |
| 0.6803 | 14350 | 0.0 | - |
| 0.6827 | 14400 | 0.0 | - |
| 0.6850 | 14450 | 0.0 | - |
| 0.6874 | 14500 | 0.0 | - |
| 0.6898 | 14550 | 0.0 | - |
| 0.6921 | 14600 | 0.0 | - |
| 0.6945 | 14650 | 0.0 | - |
| 0.6969 | 14700 | 0.0 | - |
| 0.6993 | 14750 | 0.0 | - |
| 0.7016 | 14800 | 0.0 | - |
| 0.7040 | 14850 | 0.0 | - |
| 0.7064 | 14900 | 0.0 | - |
| 0.7087 | 14950 | 0.0 | - |
| 0.7111 | 15000 | 0.0 | - |
| 0.7135 | 15050 | 0.0 | - |
| 0.7158 | 15100 | 0.0 | - |
| 0.7182 | 15150 | 0.0 | - |
| 0.7206 | 15200 | 0.0 | - |
| 0.7230 | 15250 | 0.0 | - |
| 0.7253 | 15300 | 0.0 | - |
| 0.7277 | 15350 | 0.0 | - |
| 0.7301 | 15400 | 0.0 | - |
| 0.7324 | 15450 | 0.0 | - |
| 0.7348 | 15500 | 0.0 | - |
| 0.7372 | 15550 | 0.0 | - |
| 0.7395 | 15600 | 0.0 | - |
| 0.7419 | 15650 | 0.0 | - |
| 0.7443 | 15700 | 0.0 | - |
| 0.7467 | 15750 | 0.0 | - |
| 0.7490 | 15800 | 0.0 | - |
| 0.7514 | 15850 | 0.0 | - |
| 0.7538 | 15900 | 0.0 | - |
| 0.7561 | 15950 | 0.0 | - |
| 0.7585 | 16000 | 0.0 | - |
| 0.7609 | 16050 | 0.0 | - |
| 0.7633 | 16100 | 0.0 | - |
| 0.7656 | 16150 | 0.0 | - |
| 0.7680 | 16200 | 0.0 | - |
| 0.7704 | 16250 | 0.0 | - |
| 0.7727 | 16300 | 0.0 | - |
| 0.7751 | 16350 | 0.0 | - |
| 0.7775 | 16400 | 0.0 | - |
| 0.7798 | 16450 | 0.0 | - |
| 0.7822 | 16500 | 0.0 | - |
| 0.7846 | 16550 | 0.0 | - |
| 0.7870 | 16600 | 0.0 | - |
| 0.7893 | 16650 | 0.0 | - |
| 0.7917 | 16700 | 0.0 | - |
| 0.7941 | 16750 | 0.0 | - |
| 0.7964 | 16800 | 0.0 | - |
| 0.7988 | 16850 | 0.0 | - |
| 0.8012 | 16900 | 0.0 | - |
| 0.8035 | 16950 | 0.0 | - |
| 0.8059 | 17000 | 0.0 | - |
| 0.8083 | 17050 | 0.0 | - |
| 0.8107 | 17100 | 0.0 | - |
| 0.8130 | 17150 | 0.0 | - |
| 0.8154 | 17200 | 0.0 | - |
| 0.8178 | 17250 | 0.0 | - |
| 0.8201 | 17300 | 0.0 | - |
| 0.8225 | 17350 | 0.0 | - |
| 0.8249 | 17400 | 0.0 | - |
| 0.8272 | 17450 | 0.0 | - |
| 0.8296 | 17500 | 0.0 | - |
| 0.8320 | 17550 | 0.0 | - |
| 0.8344 | 17600 | 0.0 | - |
| 0.8367 | 17650 | 0.0 | - |
| 0.8391 | 17700 | 0.0 | - |
| 0.8415 | 17750 | 0.0 | - |
| 0.8438 | 17800 | 0.0 | - |
| 0.8462 | 17850 | 0.0 | - |
| 0.8486 | 17900 | 0.0 | - |
| 0.8510 | 17950 | 0.0 | - |
| 0.8533 | 18000 | 0.0 | - |
| 0.8557 | 18050 | 0.0 | - |
| 0.8581 | 18100 | 0.0 | - |
| 0.8604 | 18150 | 0.0 | - |
| 0.8628 | 18200 | 0.0 | - |
| 0.8652 | 18250 | 0.0 | - |
| 0.8675 | 18300 | 0.0 | - |
| 0.8699 | 18350 | 0.0 | - |
| 0.8723 | 18400 | 0.0 | - |
| 0.8747 | 18450 | 0.0 | - |
| 0.8770 | 18500 | 0.0 | - |
| 0.8794 | 18550 | 0.0 | - |
| 0.8818 | 18600 | 0.0 | - |
| 0.8841 | 18650 | 0.0 | - |
| 0.8865 | 18700 | 0.0 | - |
| 0.8889 | 18750 | 0.0 | - |
| 0.8912 | 18800 | 0.0 | - |
| 0.8936 | 18850 | 0.0 | - |
| 0.8960 | 18900 | 0.0 | - |
| 0.8984 | 18950 | 0.0 | - |
| 0.9007 | 19000 | 0.0 | - |
| 0.9031 | 19050 | 0.0 | - |
| 0.9055 | 19100 | 0.0 | - |
| 0.9078 | 19150 | 0.0 | - |
| 0.9102 | 19200 | 0.0 | - |
| 0.9126 | 19250 | 0.0 | - |
| 0.9150 | 19300 | 0.0 | - |
| 0.9173 | 19350 | 0.0 | - |
| 0.9197 | 19400 | 0.0 | - |
| 0.9221 | 19450 | 0.0 | - |
| 0.9244 | 19500 | 0.0 | - |
| 0.9268 | 19550 | 0.0 | - |
| 0.9292 | 19600 | 0.0 | - |
| 0.9315 | 19650 | 0.0 | - |
| 0.9339 | 19700 | 0.0 | - |
| 0.9363 | 19750 | 0.0 | - |
| 0.9387 | 19800 | 0.0 | - |
| 0.9410 | 19850 | 0.0 | - |
| 0.9434 | 19900 | 0.0 | - |
| 0.9458 | 19950 | 0.0 | - |
| 0.9481 | 20000 | 0.0 | - |
| 0.9505 | 20050 | 0.0 | - |
| 0.9529 | 20100 | 0.0 | - |
| 0.9552 | 20150 | 0.0 | - |
| 0.9576 | 20200 | 0.0 | - |
| 0.9600 | 20250 | 0.0 | - |
| 0.9624 | 20300 | 0.0 | - |
| 0.9647 | 20350 | 0.0 | - |
| 0.9671 | 20400 | 0.0 | - |
| 0.9695 | 20450 | 0.0 | - |
| 0.9718 | 20500 | 0.0 | - |
| 0.9742 | 20550 | 0.0 | - |
| 0.9766 | 20600 | 0.0 | - |
| 0.9790 | 20650 | 0.0 | - |
| 0.9813 | 20700 | 0.0 | - |
| 0.9837 | 20750 | 0.0 | - |
| 0.9861 | 20800 | 0.0 | - |
| 0.9884 | 20850 | 0.0 | - |
| 0.9908 | 20900 | 0.0 | - |
| 0.9932 | 20950 | 0.0 | - |
| 0.9955 | 21000 | 0.0 | - |
| 0.9979 | 21050 | 0.0 | - |
| **1.0** | **21094** | **-** | **0.2251** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.2
- PyTorch: 2.1.2+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
hotsuyuki/gpt_0.125B_global_step4000_openassistant
|
hotsuyuki
| 2024-02-08T05:23:01Z
| 89
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T05:22:30Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PostsDesert/segformer-b5-finetuned-segments-instryde-foot-test
|
PostsDesert
| 2024-02-08T05:22:31Z
| 173
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-02-07T16:47:37Z
|
---
license: other
base_model: nvidia/mit-b5
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-finetuned-segments-instryde-foot-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-segments-instryde-foot-test
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the inStryde/inStrydeSegmentationFoot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0149
- Mean Iou: 0.4800
- Mean Accuracy: 0.9599
- Overall Accuracy: 0.9599
- Per Category Iou: [0.0, 0.9599216842864238]
- Per Category Accuracy: [nan, 0.9599216842864238]
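The per-category metrics above imply a two-class setup (background plus foot). A minimal post-processing sketch that turns the model output into a binary foot mask; the assumption that label id 1 is the foot class follows from the metrics layout and should be checked against `model.config.id2label`:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

ckpt = "PostsDesert/segformer-b5-finetuned-segments-instryde-foot-test"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("foot_scan.png").convert("RGB")  # hypothetical input image
with torch.no_grad():
    outputs = model(**processor(images=image, return_tensors="pt"))

# Resizes logits back to (H, W) and argmaxes per pixel
seg = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
foot_mask = seg == 1  # assumption: label id 1 is the foot class
```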
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------:|:-------------------------:|
| 0.1024 | 0.27 | 20 | 0.2085 | 0.4534 | 0.9067 | 0.9067 | [0.0, 0.9067344993758137] | [nan, 0.9067344993758137] |
| 0.0431 | 0.53 | 40 | 0.0487 | 0.4604 | 0.9207 | 0.9207 | [0.0, 0.9207331455341442] | [nan, 0.9207331455341442] |
| 0.0354 | 0.8 | 60 | 0.0319 | 0.4577 | 0.9155 | 0.9155 | [0.0, 0.9154662028576415] | [nan, 0.9154662028576415] |
| 0.0389 | 1.07 | 80 | 0.0276 | 0.4629 | 0.9257 | 0.9257 | [0.0, 0.9257162800419576] | [nan, 0.9257162800419576] |
| 0.0208 | 1.33 | 100 | 0.0244 | 0.4702 | 0.9404 | 0.9404 | [0.0, 0.9403945317069335] | [nan, 0.9403945317069335] |
| 0.0241 | 1.6 | 120 | 0.0212 | 0.4703 | 0.9406 | 0.9406 | [0.0, 0.9406131407017349] | [nan, 0.9406131407017349] |
| 0.0167 | 1.87 | 140 | 0.0208 | 0.4761 | 0.9521 | 0.9521 | [0.0, 0.9521215619420916] | [nan, 0.9521215619420916] |
| 0.0156 | 2.13 | 160 | 0.0205 | 0.4612 | 0.9224 | 0.9224 | [0.0, 0.9224359945462809] | [nan, 0.9224359945462809] |
| 0.0156 | 2.4 | 180 | 0.0208 | 0.4734 | 0.9468 | 0.9468 | [0.0, 0.9467575875538612] | [nan, 0.9467575875538612] |
| 0.0167 | 2.67 | 200 | 0.0182 | 0.4833 | 0.9667 | 0.9667 | [0.0, 0.9666659635383208] | [nan, 0.9666659635383208] |
| 0.0145 | 2.93 | 220 | 0.0243 | 0.4351 | 0.8702 | 0.8702 | [0.0, 0.8702122233110058] | [nan, 0.8702122233110058] |
| 0.0114 | 3.2 | 240 | 0.0176 | 0.4686 | 0.9373 | 0.9373 | [0.0, 0.93726765603217] | [nan, 0.93726765603217] |
| 0.0155 | 3.47 | 260 | 0.0161 | 0.4770 | 0.9541 | 0.9541 | [0.0, 0.9540767701096305] | [nan, 0.9540767701096305] |
| 0.0158 | 3.73 | 280 | 0.0169 | 0.4684 | 0.9368 | 0.9368 | [0.0, 0.9368239181251786] | [nan, 0.9368239181251786] |
| 0.0114 | 4.0 | 300 | 0.0162 | 0.4777 | 0.9554 | 0.9554 | [0.0, 0.9554348305492647] | [nan, 0.9554348305492647] |
| 0.0112 | 4.27 | 320 | 0.0159 | 0.4839 | 0.9678 | 0.9678 | [0.0, 0.9677532556440432] | [nan, 0.9677532556440432] |
| 0.0131 | 4.53 | 340 | 0.0154 | 0.4811 | 0.9622 | 0.9622 | [0.0, 0.9622032718479555] | [nan, 0.9622032718479555] |
| 0.0101 | 4.8 | 360 | 0.0156 | 0.4683 | 0.9367 | 0.9367 | [0.0, 0.9366846987126999] | [nan, 0.9366846987126999] |
| 0.0102 | 5.07 | 380 | 0.0152 | 0.4758 | 0.9517 | 0.9517 | [0.0, 0.9516509773164403] | [nan, 0.9516509773164403] |
| 0.0101 | 5.33 | 400 | 0.0169 | 0.4884 | 0.9768 | 0.9768 | [0.0, 0.9768393358121804] | [nan, 0.9768393358121804] |
| 0.0082 | 5.6 | 420 | 0.0150 | 0.4761 | 0.9522 | 0.9522 | [0.0, 0.9522462074215836] | [nan, 0.9522462074215836] |
| 0.01 | 5.87 | 440 | 0.0152 | 0.4788 | 0.9576 | 0.9576 | [0.0, 0.9575745140264517] | [nan, 0.9575745140264517] |
| 0.0098 | 6.13 | 460 | 0.0148 | 0.4783 | 0.9565 | 0.9565 | [0.0, 0.9565489693736469] | [nan, 0.9565489693736469] |
| 0.0088 | 6.4 | 480 | 0.0153 | 0.4795 | 0.9591 | 0.9591 | [0.0, 0.959051850601846] | [nan, 0.959051850601846] |
| 0.0091 | 6.67 | 500 | 0.0152 | 0.4828 | 0.9656 | 0.9656 | [0.0, 0.965590177169167] | [nan, 0.965590177169167] |
| 0.0102 | 6.93 | 520 | 0.0149 | 0.4800 | 0.9599 | 0.9599 | [0.0, 0.9599216842864238] | [nan, 0.9599216842864238] |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.1
|
TesterGG/sequence_classification_model
|
TesterGG
| 2024-02-08T05:20:40Z
| 46
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"dataset:daily_dialog",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T16:04:25Z
|
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: TesterGG/sequence_classification_model
results: []
datasets:
- daily_dialog
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TesterGG/sequence_classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the 'act' classification labels in the 'daily_dialog' dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3870
- Validation Loss: 0.5128
- Train Accuracy: 0.8059
- Epoch: 2
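As a minimal inference sketch of how such a checkpoint is typically used (TF weights are assumed from the repo's `tf` tag, and the example sentence and label handling are illustrative assumptions):
```python
# Minimal TensorFlow inference sketch; assumes the checkpoint ships TF weights.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "TesterGG/sequence_classification_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Would you like to join us for dinner?", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred)  # index into the daily_dialog 'act' label set
```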
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5076 | 0.5274 | 0.7987 | 0 |
| 0.4112 | 0.5128 | 0.8059 | 1 |
| 0.3870 | 0.5128 | 0.8059 | 2 |
### Framework versions
- Transformers 4.38.0.dev0
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
lmg-anon/vntl-13b-v0.2-qlora
|
lmg-anon
| 2024-02-08T05:08:47Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"base_model:adapter:unsloth/llama-2-13b-bnb-4bit",
"region:us"
] | null | 2024-02-08T04:52:45Z
|
---
library_name: peft
base_model: unsloth/llama-2-13b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
GobinathR/language-training
|
GobinathR
| 2024-02-08T05:08:28Z
| 0
| 0
|
keras
|
[
"keras",
"code",
"text2text-generation",
"en",
"ta",
"dataset:HuggingFaceM4/WebSight",
"region:us"
] |
text2text-generation
| 2024-02-07T04:24:40Z
|
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2272
num_examples: 8
download_size: 3903
dataset_size: 2272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
datasets:
- HuggingFaceM4/WebSight
language:
- en
- ta
metrics:
- character
library_name: keras
pipeline_tag: text2text-generation
tags:
- code
---
|
apatidar0/chat_style_phi-2
|
apatidar0
| 2024-02-08T04:57:13Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-06T08:05:59Z
|
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rityakh/realitium-finetune
|
rityakh
| 2024-02-08T04:56:21Z
| 85
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"region:us"
] |
text-to-image
| 2024-02-05T01:39:15Z
|
---
library_name: diffusers
pipeline_tag: text-to-image
---
# Realitium finetune models
### Here you can find only pure, trained models, without mixing.
|
LoneStriker/Everyone-Coder-33b-v2-Base-8.0bpw-h8-exl2
|
LoneStriker
| 2024-02-08T04:56:08Z
| 5
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T04:38:20Z
|
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-v2-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
This Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which brings major gains in coding performance compared to the ties method. You should find this version has much better coding performance than Version 1, without any of the negative effects that merging can have on the integrity of the model.
Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
The models that were used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merge can be found below:
```yaml
models:
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
weight: 1
- model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
weight: 1
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-33b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
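Assuming the standard mergekit CLI, the config above can be applied with `mergekit-yaml config.yml ./Everyone-Coder-33b-v2-Base`, with the listed model names pointing at local copies of the checkpoints (the local-path reading of the underscored names in the config is an assumption).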
|
SagarKeshave/wizard_math_
|
SagarKeshave
| 2024-02-08T04:50:47Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-08T04:50:47Z
|
---
inference: false
language:
- en
pipeline_tag: text-generation
---
## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
<p style="font-size:28px;" align="center">
🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p>
<p align="center">
📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
[12/19/2023] 🔥 We released **WizardMath-7B-V1.1**, trained from Mistral-7B, the **SOTA 7B math LLM**, which achieves **83.2 pass@1** on GSM8k and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it.
[12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1.
[12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5** and **Gemini Pro**, and surpasses **Mixtral MOE**, on MATH pass@1.
| Model | Checkpoint | Paper | GSM8k | MATH | Demo|
| ----- |------| ---- |------|-------|-------|
| **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |[[**Demo**](http://47.103.63.15:50083/)] |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** ||
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** ||
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | |
## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open-source 7B-size math LLMs.
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| MPT-7B | 6.8 | 3.0 |
|Llama 1-7B | 11.0 | 2.9 |
|Llama 2-7B|12.3 |2.8 |
|Yi-6b| 32.6 |5.8 |
|Mistral-7B|37.8 |9.1 |
|Qwen-7b|47.8 |9.3 |
| RFT-7B | 50.3 | -- |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| WizardMath-7B-V1.0 | 54.9 | 10.7 |
|Abel-7B-001 |59.7 |13 |
| MetaMath-7B | 66.5 | 19.8 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
|MetaMath-Mistral-7B|77.7 |28.2 |
|Abel-7B-002 | 80.4 | 29.5 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |
## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open-source (30B~70B) LLMs.
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| Llemma-34B | 51.5 | 25.0 |
| Minerva-62B | 52.4 | 27.6 |
| Llama 2-70B | 56.8 | 13.5 |
| DeepSeek 67B | 63.4 | -- |
| Gork 33B | 62.9 | 23.9 |
| MAmmoTH-70B | 72.4 | 21.1 |
| Yi-34B | 67.9 | 15.9 |
| Mixtral 8x7B | 74.4 | 28.4 |
| MetaMath-70B | 82.3 | 26.6 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |
## ❗ Data Contamination Check:
Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the GSM8k and MATH test sets.
🔥
❗<b>Note for model system prompts usage:</b>
Please use **exactly the same system prompts** as we do, and note that we do not guarantee the accuracy of the **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
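For quick experimentation outside the official demo script, a minimal generation sketch with the default prompt might look as follows (the repo id, dtype, and greedy decoding are assumptions, not part of the official instructions):
```python
# Sketch: greedy generation with the default WizardMath prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SagarKeshave/wizard_math_"  # this model page; assumed to load as a causal LM
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

instruction = "James buys 5 packs of beef that are 4 pounds each. How many pounds of beef did he buy?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```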
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
## Citation
Please cite the repo if you use the data, methods, or code from it.
```
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}
```
|
BrauuHdzM/GPT-J-finetuned-noticias
|
BrauuHdzM
| 2024-02-08T04:43:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T04:22:29Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lulygavri/berto-subj
|
lulygavri
| 2024-02-08T04:40:58Z
| 7
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T20:17:13Z
|
---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_keras_callback
model-index:
- name: lulygavri/berto-subj
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lulygavri/berto-subj
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2648
- Validation Loss: 0.2302
- Train Accuracy: 0.8400
- Train Precision: [0.9935821 0.39460253]
- Train Precision W: 0.9301
- Train Recall: [0.82643237 0.95494063]
- Train Recall W: 0.8400
- Train F1: [0.90233174 0.55844377]
- Train F1 W: 0.8659
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18106, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 500, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:-----------------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.2648 | 0.2302 | 0.8400 | [0.9935821 0.39460253] | 0.9301 | [0.82643237 0.95494063] | 0.8400 | [0.90233174 0.55844377] | 0.8659 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nightdude/kanji-lora-conv
|
nightdude
| 2024-02-08T04:40:09Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-08T03:37:14Z
|
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - nightdude/kanji-lora-conv
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the nightdude/sakana-kanji dataset. You can find some example images in the following.




|
LoneStriker/Everyone-Coder-33b-v2-Base-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-08T04:38:18Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T04:28:03Z
|
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-v2-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
This Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which brings major gains in coding performance compared to the ties method. You should find this version has much better coding performance than Version 1, without any of the negative effects that merging can have on the integrity of the model.
Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
The models that were used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merge can be found below:
```yaml
models:
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
weight: 1
- model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
weight: 1
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-33b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
LoneStriker/Everyone-Coder-33b-v2-Base-5.0bpw-h6-exl2
|
LoneStriker
| 2024-02-08T04:28:02Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T04:19:26Z
|
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-v2-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
This Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which brings major gains in coding performance compared to the ties method. You should find this version has much better coding performance than Version 1, without any of the negative effects that merging can have on the integrity of the model.
Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
The models that were used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merge can be found below:
```yaml
models:
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
weight: 1
- model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
weight: 1
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-33b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
LoneStriker/Everyone-Coder-33b-v2-Base-4.65bpw-h6-exl2
|
LoneStriker
| 2024-02-08T04:19:24Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T04:11:25Z
|
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-v2-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
This Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which brings major gains in coding performance compared to the ties method. You should find this version has much better coding performance than Version 1, without any of the negative effects that merging can have on the integrity of the model.
Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
The models that were used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merge can be found below:
```yaml
models:
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
weight: 1
- model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
weight: 1
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-33b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
VanoInvestigations/bertin-gpt-j-6B_8bit_13
|
VanoInvestigations
| 2024-02-08T04:08:04Z
| 3
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bertin-project/bertin-gpt-j-6B",
"base_model:adapter:bertin-project/bertin-gpt-j-6B",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T21:45:35Z
|
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bertin-project/bertin-gpt-j-6B
model-index:
- name: bertin-gpt-j-6B_8bit_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-gpt-j-6B_8bit_13
This model is a fine-tuned version of [bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
hivaze/AAQG-QA-QG-FRED-T5-1.7B
|
hivaze
| 2024-02-08T04:01:46Z
| 13
| 4
|
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"ru",
"dataset:hivaze/ru-AAQG-QA-QG",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-08T02:57:28Z
|
---
language:
- ru
license: apache-2.0
library_name: transformers
datasets:
- hivaze/ru-AAQG-QA-QG
pipeline_tag: text2text-generation
---
## Description
This is the **ai-forever/FRED-T5-1.7B** model trained on the **Question-Answering** (QA), **Question-Generation** (QG) and **Answer-Aware Question Generation** (AAQG) tasks on a Russian dataset (**hivaze/ru-AAQG-QA-QG**).
### Prompts
```python
# English translations of the Russian prompts:
#   AAQG: "Generate a question about the text, using the known answer. Text: '{context}'. Answer: '{answer}'."
#   QG:   "Generate a question about the text. Text: '{context}'."
#   QA:   "Generate an answer to the question about the text. Text: '{context}'. Question: '{question}'."
AAQG_PROMPT = "Сгенерируй вопрос по тексту, используя известный ответ. Текст: '{context}'. Ответ: '{answer}'."
QG_PROMPT = "Сгенерируй вопрос по тексту. Текст: '{context}'."
QA_PROMPT = "Сгенерируй ответ на вопрос по тексту. Текст: '{context}'. Вопрос: '{question}'."
```
### Examples and code
```python
from functools import partial

from transformers import AutoTokenizer, T5ForConditionalGeneration

saved_checkpoint = 'hivaze/AAQG-QA-QG-FRED-T5-1.7B'
tokenizer = AutoTokenizer.from_pretrained(saved_checkpoint)
model = T5ForConditionalGeneration.from_pretrained(saved_checkpoint).cuda()

def generate_text(prompt, tokenizer, model, n=1, temperature=0.8, num_beams=3):
    encoded_input = tokenizer.encode_plus(prompt, return_tensors='pt')
    encoded_input = {k: v.to(model.device) for k, v in encoded_input.items()}
    resulted_tokens = model.generate(**encoded_input,
                                     max_new_tokens=64,
                                     do_sample=True,
                                     num_beams=num_beams,
                                     num_return_sequences=n,
                                     temperature=temperature,
                                     top_p=0.9,
                                     top_k=50)
    resulted_texts = tokenizer.batch_decode(resulted_tokens, skip_special_tokens=True)
    return resulted_texts

generate_text = partial(generate_text, tokenizer=tokenizer, model=model)

# Translation: "Traveler Fedor Konyukhov and pilot Igor Potapkin set the world altitude
# record for powered paraglider flight, climbing to 4728 meters" (per Konyukhov's website).
test_context = "Путешественник Федор Конюхов и пилот Игорь Потапкин установили мировой рекорд высоты полета на паралёте, поднявшись на высоту 4728 метров — сайт Конюхова"
```
#### AAQG
```python
generate_text(AAQG_PROMPT.format(
    context=test_context,
    answer='на паралёте'
), n=1)
```
> "ΠΠ° ΡΠ΅ΠΌ ΠΏΡΡΠ΅ΡΠ΅ΡΡΠ²Π΅Π½Π½ΠΈΠΊ Π€Π΅Π΄ΠΎΡ ΠΠΎΠ½ΡΡ
ΠΎΠ² ΠΈ ΠΏΠΈΠ»ΠΎΡ ΠΠ³ΠΎΡΡ ΠΠΎΡΠ°ΠΏΠΊΠΈΠ½ ΡΡΡΠ°Π½ΠΎΠ²ΠΈΠ»ΠΈ ΠΌΠΈΡΠΎΠ²ΠΎΠΉ ΡΠ΅ΠΊΠΎΡΠ΄ Π²ΡΡΠΎΡΡ ΠΏΠΎΠ»Π΅ΡΠ°?"
```python
generate_text(AAQG_PROMPT.format(
    context=test_context,
    answer='рекорд высоты полета'
), n=1)
```
> "Π§ΡΠΎ ΡΡΡΠ°Π½ΠΎΠ²ΠΈΠ»ΠΈ ΠΏΡΡΠ΅ΡΠ΅ΡΡΠ²Π΅Π½Π½ΠΈΠΊ Π€Π΅Π΄ΠΎΡ ΠΠΎΠ½ΡΡ
ΠΎΠ² ΠΈ ΠΏΠΈΠ»ΠΎΡ ΠΠ³ΠΎΡΡ ΠΠΎΡΠ°ΠΏΠΊΠΈΠ½?"
#### QA
```python
generate_text(QA_PROMPT.format(
    context=test_context,
    question='Что установили путешественник Федор Конюхов и пилот Игорь Потапкин?'
), n=1)
```
> "ΠΠΈΡΠΎΠ²ΠΎΠΉ ΡΠ΅ΠΊΠΎΡΠ΄ Π²ΡΡΠΎΡΡ ΠΏΠΎΠ»Π΅ΡΠ° Π½Π° ΠΏΠ°ΡΠ°Π»ΡΡΠ΅"
#### QG
```python
generate_text(QG_PROMPT.format(context=test_context), n=1)
```
> "ΠΡΠΎ ΡΡΡΠ°Π½ΠΎΠ²ΠΈΠ» ΠΌΠΈΡΠΎΠ²ΠΎΠΉ ΡΠ΅ΠΊΠΎΡΠ΄ Π²ΡΡΠΎΡΡ ΠΏΠΎΠ»Π΅ΡΠ° Π½Π° ΠΏΠ°ΡΠ°Π»ΡΡΠ΅?"
## Metrics
| Step | Training Loss | Validation Loss | SacreBLEU | chrF | Rouge1 | Rouge2 | RougeL |
|---|---|---|---|---|---|---|---|
| 500 | 1.020500 | 1.059296 | 41.556000 | 66.391100 | 0.104200 | 0.033700 | 0.104200 |
| 1000 | 1.050200 | 0.998357 | 43.035900 | 66.376800 | 0.105100 | 0.034100 | 0.105200 |
| 1500 | 0.994000 | 0.966051 | 43.692200 | 66.597600 | 0.106300 | 0.034400 | 0.106400 |
| 2000 | 0.947800 | 0.953637 | 44.012400 | 66.711100 | 0.106600 | 0.034900 | 0.106800 |
| 2500 | 0.978200 | 0.944621 | 44.027900 | 66.657400 | 0.106500 | 0.034600 | 0.106500 |
## Authors
- Sergei Bratchikov (https://t.me/nlpwanderer)
|
bdpc/SciBERT_twowayloss_25K_bs64
|
bdpc
| 2024-02-08T03:57:34Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-06T14:37:28Z
|
---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: SciBERT_TwoWayLoss_25K_bs64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT_TwoWayLoss_25K_bs64
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7117
- Accuracy: 0.7367
- Precision: 0.0357
- Recall: 0.9994
- F1: 0.0689
- Hamming: 0.2633
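Since no usage example is given, here is a minimal inference sketch (the multi-label reading of the metrics, the 0.5 threshold, and the example sentence are assumptions):
```python
# Sketch: multi-label style inference; the high-recall/low-precision and Hamming
# metrics above suggest a multi-label setup, so sigmoid + threshold is assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "bdpc/SciBERT_twowayloss_25K_bs64"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("We study transformer models for scientific text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]
predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(predicted)  # indices of predicted labels
```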
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 25000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 6.7538 | 0.47 | 5000 | 6.4722 | 0.7208 | 0.0337 | 0.9987 | 0.0652 | 0.2792 |
| 6.1625 | 0.95 | 10000 | 6.0293 | 0.7311 | 0.0350 | 0.9991 | 0.0676 | 0.2689 |
| 5.7863 | 1.42 | 15000 | 5.8415 | 0.7362 | 0.0356 | 0.9992 | 0.0688 | 0.2638 |
| 5.6995 | 1.9 | 20000 | 5.7343 | 0.7366 | 0.0357 | 0.9994 | 0.0689 | 0.2634 |
| 5.4711 | 2.37 | 25000 | 5.7117 | 0.7367 | 0.0357 | 0.9994 | 0.0689 | 0.2633 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.14.1
|
hivaze/ru-e5-large
|
hivaze
| 2024-02-08T03:52:20Z
| 12
| 4
|
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"ru",
"uk",
"kk",
"be",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-01T14:06:36Z
|
---
library_name: transformers
language:
- ru
- uk
- kk
- be
---
## About model creation
This is a smaller version of **intfloat/multilingual-e5-large**, with only the needed Russian (Cyrillic in general) tokens and a reduced set of English tokens (and their embeddings) left.
The model was created in a similar way to the one described in [this Towards Data Science post](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90).
The **CulturaX** dataset was used to find the required tokens. As a result, of the original model's 250k tokens, only the **69,382** that were required were kept.
## Was the model trained in any way?
No. Only the tokenizer has been modified; all changes to token identifiers were compensated for by moving the embeddings in the model's word_embeddings module to their new positions, so **the quality of this model** on Cyrillic (and English) text **is exactly the same** as the original one.
## Why do we need this?
This allows you to use significantly less memory during training, and it also greatly reduces the size of the model on disk.
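Embeddings can be computed exactly as with the original multilingual-e5-large; the sketch below follows the upstream E5 usage (the "query:"/"passage:" prefixes, mean pooling, and example texts are assumptions carried over from that card):
```python
# Sketch: sentence embeddings with E5-style prefixes and mean pooling.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

repo = "hivaze/ru-e5-large"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

texts = ["query: привет мир",  # "hello world" in Russian
         "passage: hello world"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# Mean pooling over non-padded tokens, as in the upstream E5 usage example.
mask = batch["attention_mask"].unsqueeze(-1).bool()
hidden = out.last_hidden_state.masked_fill(~mask, 0.0)
emb = hidden.sum(dim=1) / batch["attention_mask"].sum(dim=1, keepdim=True)
emb = F.normalize(emb, p=2, dim=1)
print(emb.shape)  # (2, 1024)
```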
## Authors
- Sergei Bratchikov (https://t.me/nlpwanderer)
|
andysalerno/rainbowfish-v7-lora-adapter
|
andysalerno
| 2024-02-08T03:47:54Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:andysalerno/mistral-sft-v3",
"base_model:adapter:andysalerno/mistral-sft-v3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-07T05:23:23Z
|
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: andysalerno/mistral-sft-v3
model-index:
- name: rainbowfish-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: andysalerno/mistral-sft-v3
model_type: AutoModelForCausalLM
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: andysalerno/rainbowfish-v1
type:
system_prompt: ""
field_system: system
field_instruction: input
field_output: output
format: "{instruction}"
no_input_format: "{instruction}"
dataset_prepared_path: last_run_prepared
val_set_size: 0.005
output_dir: ./lora-out-rainbow7
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: false # was true
eval_sample_packing: false
pad_to_sequence_len: false
padding_side: left
lora_r: 64
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
# early_stopping_patience: 3
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
hub_strategy: "every_save"
hub_model_id: andysalerno/rainbowfish-v7
num_epochs: 2
warmup_steps: 100
# warmup_ratio: 0.1
eval_steps: 200
eval_table_size:
eval_table_max_new_tokens: 128
# save_steps: 5
# max_steps: 400
saves_per_epoch: 2
debug:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|im_start|>"
eos_token: "<|im_end|>"
unk_token: "<unk>"
```
</details><br>
# rainbowfish-v7
This model is a fine-tuned version of [andysalerno/mistral-sft-v3](https://huggingface.co/andysalerno/mistral-sft-v3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6464
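This repo holds the LoRA adapter itself, so a minimal loading sketch (assuming standard PEFT usage on top of the base model, and that the base tokenizer already carries the ChatML special tokens from the config above) could look like this:
```python
# Sketch: apply this LoRA adapter to the base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("andysalerno/mistral-sft-v3")
model = PeftModel.from_pretrained(base, "andysalerno/rainbowfish-v7-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("andysalerno/mistral-sft-v3")

# ChatML-style markers, matching the special_tokens in the axolotl config above.
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0]))
```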
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6514 | 0.18 | 200 | 0.6828 |
| 0.6875 | 0.37 | 400 | 0.6691 |
| 0.6626 | 0.55 | 600 | 0.6625 |
| 0.688 | 0.74 | 800 | 0.6558 |
| 0.7143 | 0.92 | 1000 | 0.6520 |
| 0.5243 | 1.11 | 1200 | 0.6495 |
| 0.6205 | 1.29 | 1400 | 0.6482 |
| 0.6159 | 1.47 | 1600 | 0.6469 |
| 0.6287 | 1.66 | 1800 | 0.6465 |
| 0.6606 | 1.84 | 2000 | 0.6464 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
YutoNishimura-v2/text-to-kanji
|
YutoNishimura-v2
| 2024-02-08T03:41:43Z
| 22
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-08T03:40:27Z
|
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cvzion/mistral-dqg-v5
|
cvzion
| 2024-02-08T03:26:18Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T03:24:38Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
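Until the card is completed, here is a minimal sketch; the causal-LM class and the question-generation prompt are assumptions based only on the repo name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a causal LM, as the "mistral" in the repo name suggests.
model_id = "cvzion/mistral-dqg-v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Generate a question about the following passage:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```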
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chattiori/AnyOrangeMix
|
Chattiori
| 2024-02-08T03:23:51Z
| 18
| 4
|
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-19T10:33:30Z
|
---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# **AnyOrangeMix**
_____
AnyOrangeMix is a merge model of Anything v4.5 and AbyssOrangeMix3.
CivitAI: https://civitai.com/models/21503/anyorangemix-anything-v45-abyssorangemix-3
# Merge Source:
Anything v4.5 (0.5) + AbyssOrangeMix 3A1B (0.5) Weighted Sum
# Recommended Settings:
* Sampler: "DPM++ SDE Karras" recommended.
* Steps: 20 or more
* Clip skip: 1 or 2
* CFG Scale: 7 or higher recommended.
* VAE: anything_v4.0.vae.pt
# Recommended Prompt:
Prompt: masterpiece, best quality,
Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, extra legs, extra feet, extra arms, extra fingers, missing legs, missing arms, ugly, huge breasts, monochrome
# Recommended Embeds:
* bad prompt
* bad hands
* bad artist
* Easy Negative
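For reference, a minimal diffusers sketch of the settings above; the scheduler mapping and the assumption that the repo ships diffusers-format weights are mine, so check the file list first:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

# Assumes diffusers-format weights; for a single .safetensors checkpoint use
# StableDiffusionPipeline.from_single_file(...) instead.
pipe = StableDiffusionPipeline.from_pretrained(
    "Chattiori/AnyOrangeMix", torch_dtype=torch.float16
).to("cuda")

# "DPM++ SDE Karras" corresponds to DPMSolverSDEScheduler with Karras sigmas.
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="masterpiece, best quality, 1girl, cherry blossoms",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=20,
    guidance_scale=7.0,
    clip_skip=2,
).images[0]
image.save("anyorangemix_sample.png")
```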
|
Smoorf2022/TiniKatia
|
Smoorf2022
| 2024-02-08T03:09:57Z
| 0
| 0
| null |
[
"dataset:HuggingFaceM4/WebSight",
"license:cc",
"region:us"
] | null | 2024-02-08T03:01:38Z
|
---
license: cc
datasets:
- HuggingFaceM4/WebSight
metrics:
- character
---
|
neozhang2003/ppo-LunarLander-v2
|
neozhang2003
| 2024-02-08T03:01:48Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T03:01:30Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.89 +/- 29.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Checkpoint filename is an assumption (the usual "<algo>-<env>.zip" convention).
checkpoint = load_from_hub(repo_id="neozhang2003/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CLMBR/existential-there-quantifier-transformer-0
|
CLMBR
| 2024-02-08T03:01:26Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T09:49:57Z
|
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-0
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.226 | 0.03 | 76320 | 4.1970 |
| 4.0196 | 1.03 | 152640 | 4.0272 |
| 3.9103 | 0.03 | 228960 | 3.9532 |
| 3.842 | 1.03 | 305280 | 3.9117 |
| 3.7902 | 0.03 | 381600 | 3.8860 |
| 3.7496 | 1.03 | 457920 | 3.8709 |
| 3.7142 | 0.03 | 534240 | 3.8605 |
| 3.6843 | 1.03 | 610560 | 3.8533 |
| 3.6562 | 0.03 | 686880 | 3.8494 |
| 3.6294 | 1.03 | 763200 | 3.8464 |
| 3.6054 | 0.03 | 839520 | 3.8448 |
| 3.5872 | 1.03 | 915840 | 3.8442 |
| 3.5719 | 0.03 | 992160 | 3.8433 |
| 3.5494 | 1.03 | 1068480 | 3.8438 |
| 3.5361 | 0.03 | 1144800 | 3.8453 |
| 3.5229 | 1.03 | 1221120 | 3.8448 |
| 3.5091 | 0.03 | 1297440 | 3.8469 |
| 3.4962 | 0.03 | 1373760 | 3.8474 |
| 3.4817 | 0.03 | 1450080 | 3.8502 |
| 3.4739 | 1.03 | 1526400 | 3.8508 |
| 3.4641 | 0.03 | 1602720 | 3.8521 |
| 3.455 | 1.03 | 1679040 | 3.8532 |
| 3.4471 | 0.03 | 1755360 | 3.8544 |
| 3.4338 | 1.03 | 1831680 | 3.8554 |
| 3.4207 | 0.03 | 1908000 | 3.8572 |
| 3.4107 | 1.03 | 1984320 | 3.8577 |
| 3.3968 | 0.03 | 2060640 | 3.8601 |
| 3.3889 | 0.03 | 2136960 | 3.8605 |
| 3.3808 | 1.03 | 2213280 | 3.8612 |
| 3.364 | 0.03 | 2289600 | 3.8615 |
| 3.3563 | 1.03 | 2365920 | 3.8631 |
| 3.3506 | 0.03 | 2442240 | 3.8637 |
| 3.3402 | 1.03 | 2518560 | 3.8635 |
| 3.328 | 0.03 | 2594880 | 3.8644 |
| 3.3179 | 0.03 | 2671200 | 3.8645 |
| 3.3121 | 1.03 | 2747520 | 3.8638 |
| 3.3051 | 0.03 | 2823840 | 3.8637 |
| 3.3015 | 1.03 | 2900160 | 3.8633 |
| 3.2959 | 0.03 | 2976480 | 3.8622 |
| 3.2885 | 0.02 | 3052726 | 3.8606 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LULab/myNLP-Tagging-models
|
LULab
| 2024-02-08T02:45:50Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-30T16:32:07Z
|
---
{}
---
### POS Tagging and NER Tagging models for the Myanmar language
|
mathreader/q-Taxi-v3-v2-large
|
mathreader
| 2024-02-08T02:37:52Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T02:37:48Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v2-large
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id, filename):  # assumed helper: download + unpickle the Q-table dict
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)
model = load_from_hub(repo_id="mathreader/q-Taxi-v3-v2-large", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # check if you need to add additional attributes (is_slippery=False etc)
```
|
rujengelal/ner-model
|
rujengelal
| 2024-02-08T02:21:59Z
| 11
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-04T15:52:51Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
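Until the authors add their own snippet, a minimal sketch; the tags mark this as a DistilBERT token-classification model, so the standard NER pipeline is assumed to apply:

```python
from transformers import pipeline

# Assumes the standard token-classification pipeline works for this checkpoint.
ner = pipeline("token-classification", model="rujengelal/ner-model", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```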
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mathreader/q-Taxi-v3
|
mathreader
| 2024-02-08T02:17:05Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T02:17:02Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id, filename):  # assumed helper: download + unpickle the Q-table dict
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)
model = load_from_hub(repo_id="mathreader/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # check if you need to add additional attributes (is_slippery=False etc)
```
|
Anguuuuus/laryngitis
|
Anguuuuus
| 2024-02-08T01:55:54Z
| 145
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-02-08T01:31:07Z
|
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: laryngitis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laryngitis
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7828
- Accuracy: 0.5455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4888 | 1.0 | 6 | 0.7395 | 0.4091 |
| 0.4714 | 2.0 | 12 | 0.7492 | 0.4545 |
| 0.4298 | 3.0 | 18 | 0.7774 | 0.5 |
| 0.3732 | 4.0 | 24 | 0.7864 | 0.5 |
| 0.352 | 5.0 | 30 | 0.7903 | 0.5 |
| 0.3147 | 6.0 | 36 | 0.8435 | 0.5 |
| 0.2969 | 7.0 | 42 | 0.7719 | 0.5 |
| 0.2902 | 8.0 | 48 | 0.7035 | 0.5909 |
| 0.238 | 9.0 | 54 | 0.7546 | 0.5909 |
| 0.2654 | 10.0 | 60 | 0.7828 | 0.5455 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tomashs/multiple_choice_cowese_betoLDA_2
|
tomashs
| 2024-02-08T01:51:52Z
| 19
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T01:51:28Z
|
---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
model-index:
- name: multiple_choice_cowese_betoLDA_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiple_choice_cowese_betoLDA_2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
yunyangx/EfficientSAM
|
yunyangx
| 2024-02-08T01:40:42Z
| 0
| 3
| null |
[
"onnx",
"arxiv:2312.00863",
"license:apache-2.0",
"region:us"
] | null | 2024-02-08T01:21:52Z
|
---
license: apache-2.0
---
# EfficientSAM
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
## Online Demo & Examples
[Online demo](https://huggingface.co/spaces/yunyangx/EfficientSAM) and examples can be found on the [project page](https://yformer.github.io/efficient-sam/).
If you're using EfficientSAM in your research or applications, please cite using this BibTeX:
```bibtex
@article{xiong2023efficientsam,
title={EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything},
  author={Yunyang Xiong and Bala Varadarajan and Lemeng Wu and Xiaoyu Xiang and Fanyi Xiao and Chenchen Zhu and Xiaoliang Dai and Dilin Wang and Fei Sun and Forrest Iandola and Raghuraman Krishnamoorthi and Vikas Chandra},
journal={arXiv:2312.00863},
year={2023}
}
```
|
jeiku/Pasta-PrimaMaid-7b_GGUF
|
jeiku
| 2024-02-08T01:35:26Z
| 2
| 1
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:Nitral-Archive/Kunocchini-7b",
"base_model:quantized:Nitral-Archive/Kunocchini-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T00:48:12Z
|
---
base_model:
- Test157t/Kunocchini-7b
- Test157t/Pasta-Made_7b
library_name: transformers
tags:
- mergekit
- merge
---
This is a merge created by https://huggingface.co/Test157t; I have merely quantized the model into GGUF. Please visit https://huggingface.co/Test157t/Kunocchini-7b for the original weights. The original description is as follows:
# mergedmodel
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
Quants from @s3nh! https://huggingface.co/s3nh/Pasta-PrimaMaid-7b-GGUF
### Models Merged
The following models were included in the merge:
* [Test157t/Kunocchini-7b](https://huggingface.co/Test157t/Kunocchini-7b)
* [Test157t/Pasta-Made_7b](https://huggingface.co/Test157t/Pasta-Made_7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Test157t/Kunocchini-7b
layer_range: [0, 32]
- model: Test157t/Pasta-Made_7b
layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Kunocchini-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
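Since this repo hosts GGUF quantizations, a minimal llama-cpp-python sketch may help; the quant filename below is a guess, so check the repo's file list:

```python
from llama_cpp import Llama

# Hypothetical filename; list the repo files to find the actual quant you want.
llm = Llama(model_path="Pasta-PrimaMaid-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```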
|
kevinautomation/TinyLlama-1.1B-intermediate-step-1431k-3T_reddit_expert_model
|
kevinautomation
| 2024-02-08T01:27:57Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-08T01:27:09Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
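As a placeholder, a minimal sketch; the tags mark this as a 4-bit bitsandbytes llama checkpoint, so `bitsandbytes` and `accelerate` are assumed to be installed:

```python
from transformers import pipeline

# Assumes the stored quantization config loads via bitsandbytes on a GPU.
generator = pipeline(
    "text-generation",
    model="kevinautomation/TinyLlama-1.1B-intermediate-step-1431k-3T_reddit_expert_model",
    device_map="auto",
)
print(generator("What do Redditors think about home automation?", max_new_tokens=64)[0]["generated_text"])
```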
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
valurank/seo-headline
|
valurank
| 2024-02-08T01:26:06Z
| 15
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-06T16:58:52Z
|
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: seo-headline_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seo-headline_2
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8031 | 1.29 | 500 | 0.7142 |
| 0.6117 | 2.58 | 1000 | 0.5948 |
| 0.5568 | 3.86 | 1500 | 0.5755 |
| 0.5219 | 5.15 | 2000 | 0.5682 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
christinacdl/XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New
|
christinacdl
| 2024-02-08T01:25:37Z
| 99
| 0
|
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-06T16:37:13Z
|
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5873
- Micro F1: 0.9065
- Macro F1: 0.9050
- Accuracy: 0.9065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.15.0
|
IB13/t5_ppo_model_withoutkl
|
IB13
| 2024-02-08T01:17:02Z
| 2
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:IB13/sft_t5_base_processed_model",
"base_model:adapter:IB13/sft_t5_base_processed_model",
"region:us"
] | null | 2024-02-08T01:16:57Z
|
---
library_name: peft
base_model: IB13/sft_t5_base_processed_model
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
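Until this is filled in, a minimal PEFT sketch; the seq2seq head and the example prompt are assumptions based on the T5-style base model named in the metadata:

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

# Assumption: the adapter sits on a T5-style base, so a seq2seq LM head applies.
model = AutoPeftModelForSeq2SeqLM.from_pretrained("IB13/t5_ppo_model_withoutkl")
tokenizer = AutoTokenizer.from_pretrained("IB13/sft_t5_base_processed_model")

inputs = tokenizer("summarize: PPO fine-tuning without a KL penalty.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```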
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
mathreader/q-FrozenLake-v1-4x4-noSlippery
|
mathreader
| 2024-02-08T01:10:16Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-08T01:10:13Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id, filename):  # assumed helper: download + unpickle the Q-table dict
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)
model = load_from_hub(repo_id="mathreader/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # check if you need to add additional attributes (is_slippery=False etc)
```
|
davisalex22/Llama2TurismEC-7b-hf-ft
|
davisalex22
| 2024-02-08T01:00:27Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T00:56:29Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
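In the meantime, a minimal sketch; the tags mark this as a llama text-generation checkpoint, and the Spanish tourism prompt is only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes a standard llama causal LM; the prompt format is a guess from the repo name.
model_id = "davisalex22/Llama2TurismEC-7b-hf-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Recomiéndame lugares turísticos en Ecuador:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```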
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeiku/Konocchini-7B_GGUF
|
jeiku
| 2024-02-08T00:55:22Z
| 18
| 2
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"base_model:Epiculous/Fett-uccine-7B",
"base_model:merge:Epiculous/Fett-uccine-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T23:58:56Z
|
---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Epiculous/Fett-uccine-7B
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
---
This is a merge created by https://huggingface.co/Test157t; I have merely quantized the model into GGUF. Please visit https://huggingface.co/Test157t/Kunocchini-7b for the original weights. The original description is as follows:
Thanks to @Epiculous for the dope model / help with LLM backends and support overall.
I'd like to also thank @kalomaze for the dope sampler additions to ST.
@SanjiWatsuki Thank you very much for the help, and the model!
SillyTavern (ST) users can find the TextGenPreset in the folder labeled as such.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)
Quants: Thank you @bartowski! https://huggingface.co/bartowski/Kunocchini-exl2
# mergedmodel
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: Epiculous/Fett-uccine-7B
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Tommidi/st_vit_trained-8epoch-ucf101-subset
|
Tommidi
| 2024-02-08T00:33:03Z
| 18
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"st_vit",
"generated_from_trainer",
"base_model:Tommidi/st_vit_untrained",
"base_model:finetune:Tommidi/st_vit_untrained",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T21:31:16Z
|
---
base_model: Tommidi/st_vit_untrained
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: st_vit_trained-8epoch-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# st_vit_trained-8epoch-ucf101-subset
This model is a fine-tuned version of [Tommidi/st_vit_untrained](https://huggingface.co/Tommidi/st_vit_untrained) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0648
- Accuracy: 0.9733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 296
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6314 | 0.13 | 38 | 0.1264 | 0.9333 |
| 0.3547 | 1.13 | 76 | 0.0077 | 1.0 |
| 0.0189 | 2.13 | 114 | 0.5103 | 0.9333 |
| 0.0611 | 3.13 | 152 | 0.1508 | 0.9333 |
| 0.0027 | 4.13 | 190 | 0.0018 | 1.0 |
| 0.0812 | 5.13 | 228 | 0.0943 | 0.9333 |
| 0.0005 | 6.13 | 266 | 0.0635 | 0.9667 |
| 0.3035 | 7.1 | 296 | 0.0530 | 0.9667 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
rasyosef/bert-amharic-tokenizer
|
rasyosef
| 2024-02-08T00:31:39Z
| 0
| 2
|
transformers
|
[
"transformers",
"am",
"dataset:oscar",
"dataset:mc4",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-08T00:10:43Z
|
---
license: mit
datasets:
- oscar
- mc4
language:
- am
library_name: transformers
---
# Amharic WordPiece Tokenizer
This repo contains a **WordPiece** tokenizer trained on the **Amharic** subset of the [oscar](https://huggingface.co/datasets/oscar) and [mc4](https://huggingface.co/datasets/mc4) datasets. It's the same as the **BERT** tokenizer but trained from scratch on an Amharic dataset with a vocabulary size of `30522`.
# How to use
You can load the tokenizer from the Hugging Face Hub as follows.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("rasyosef/bert-amharic-tokenizer")
tokenizer.tokenize("α¨αααα αα αα» ααα΅ αα΅ααα΅ α΅ααα΅α αααΈαα α αα°α¨αα α΅αα α αα± α αα αα£αͺα« ααα αα»α α₯α α¨αααααα΅ αα³α ααα’")
```
Output:
```python
['α¨ααα', '##α αα', 'αα»', 'ααα΅', 'αα΅ααα΅', 'α΅ααα΅α', 'αααΈαα', 'α αα°α¨αα', 'α΅αα', 'α αα±', 'α αα', 'αα£αͺα«', 'ααα', 'αα»α', 'α₯α', 'α¨αααααα΅', 'αα³α', 'αα', 'α’']
```
|
Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context-GGUF
|
Epiculous
| 2024-02-08T00:28:38Z
| 43
| 6
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T20:37:41Z
|
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Fett-uccine-Long-Noodle-7B-120k-Context
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
A merge with Fett-uccine and Mistral Yarn 120k ctx.
Credit to Nitral for the merge script and idea.
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Z:\ModelColdStorage\Yarn-Mistral-7b-128k
* Z:\ModelColdStorage\Fett-uccine-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Z:\ModelColdStorage\Fett-uccine-7B
layer_range: [0, 32]
- model: Z:\ModelColdStorage\Yarn-Mistral-7b-128k
layer_range: [0, 32]
merge_method: slerp
base_model: Z:\ModelColdStorage\Fett-uccine-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
smotoc/foxy_mistral7B_unsloth_4k
|
smotoc
| 2024-02-08T00:26:25Z
| 14
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gguf",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-08T00:09:47Z
|
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** smotoc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp
|
celik-muhammed
| 2024-02-08T00:21:13Z
| 5
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tflite",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-08T00:11:24Z
|
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 794 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 989 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1.2800000000000005e-10
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 80.0,
"weight_decay": 0.1
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': True, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Dense({'in_features': 3072, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
arieridwans/phi_2-finetuned-lyrics
|
arieridwans
| 2024-02-07T23:59:58Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T23:55:11Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
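For now, a minimal sketch; the `custom_code` tag suggests `trust_remote_code=True` is needed for this Phi checkpoint, and the lyric prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The custom_code tag implies the repo ships its own modeling code.
model_id = "arieridwans/phi_2-finetuned-lyrics"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Write a verse about dancing in the rain:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```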
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sosoai/codellama-korean-merged
|
sosoai
| 2024-02-07T23:58:25Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T23:52:25Z
|
This is a Korean version of CodeLlama, created by merging the LoRA adapter from hariqueen/code-llama-korean with the base model TinyPixel/CodeLlama-7B-Python-bf16-sharded.
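A hedged sketch of how such an adapter merge is typically performed with `peft`; the card does not document the exact procedure, and the output directory is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyPixel/CodeLlama-7B-Python-bf16-sharded"
adapter_id = "hariqueen/code-llama-korean"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the LoRA weights into the base model and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("codellama-korean-merged")  # assumed output directory
AutoTokenizer.from_pretrained(base_id).save_pretrained("codellama-korean-merged")
```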
|