modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-07 18:30:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 544 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-07 18:30:28) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
NasimB/bnc-rarity-no-cut-shuffled
|
NasimB
| 2023-07-16T06:24:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T04:27:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc-rarity-no-cut-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc-rarity-no-cut-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3207
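The remaining sections are placeholders, so here is a minimal usage sketch (the Hub id comes from this repository; the prompt and generation settings are illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub and sample a continuation.
generator = pipeline("text-generation", model="NasimB/bnc-rarity-no-cut-shuffled")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```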
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7157 | 0.29 | 500 | 5.6437 |
| 5.3513 | 0.58 | 1000 | 5.2021 |
| 5.0016 | 0.88 | 1500 | 4.9595 |
| 4.7286 | 1.17 | 2000 | 4.8122 |
| 4.5693 | 1.46 | 2500 | 4.6857 |
| 4.4647 | 1.75 | 3000 | 4.5770 |
| 4.3308 | 2.05 | 3500 | 4.5068 |
| 4.1402 | 2.34 | 4000 | 4.4574 |
| 4.1123 | 2.63 | 4500 | 4.3983 |
| 4.0711 | 2.92 | 5000 | 4.3468 |
| 3.8657 | 3.22 | 5500 | 4.3414 |
| 3.8086 | 3.51 | 6000 | 4.3099 |
| 3.7977 | 3.8 | 6500 | 4.2728 |
| 3.6947 | 4.09 | 7000 | 4.2729 |
| 3.5188 | 4.39 | 7500 | 4.2684 |
| 3.5211 | 4.68 | 8000 | 4.2523 |
| 3.5159 | 4.97 | 8500 | 4.2387 |
| 3.3414 | 5.26 | 9000 | 4.2532 |
| 3.3357 | 5.56 | 9500 | 4.2520 |
| 3.328 | 5.85 | 10000 | 4.2517 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lovelyxs/rl_course_vizdoom_health_gathering_supreme
|
lovelyxs
| 2023-07-16T05:56:49Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T05:56:44Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.28 +/- 4.85
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r lovelyxs/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The auto-generated card recorded the Colab kernel launcher here; the intended module (assumed from Sample-Factory 2.0) is the ViZDoom enjoy script:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the intended module (assumed from Sample-Factory 2.0) is the ViZDoom train script:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment resumes from the step count at which it previously stopped.
|
Sucial/so-vits-svc4.1-Tim_Cook
|
Sucial
| 2023-07-16T05:45:34Z | 3 | 2 |
transformers
|
[
"transformers",
"so-vits-svc",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-16T05:42:08Z |
---
license: cc-by-nc-sa-4.0
tags:
- so-vits-svc
---
# so-vits-svc4.1-Tim_Cook
## Official project: https://github.com/svc-develop-team/so-vits-svc
## How to use?
1. Install the requirements
2. Download the pretrained model [checkpoint_best_legacy_500.pt](https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr) and put it into `./pretrain`
3. Put `Tim_Cook.pth`, `feature_and_index.pkl`, `kmeans_10000.pt` into `./logs/44k`
4. Put `config.json` into `./config`
5. Enjoy!
## The following is quoted from the official documentation
## Inference
Use [inference_main.py](inference_main.py)
```shell
# Example
python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -n "君の知らない物語-src.wav" -t 0 -s "nen"
```
Required arguments:
+ `-m` | `--model_path`: path to the model
+ `-c` | `--config_path`: path to the config file
+ `-n` | `--clean_names`: list of wav file names, placed in the raw folder
+ `-t` | `--trans`: pitch shift, positive or negative (in semitones)
+ `-s` | `--spk_list`: name of the target speaker for synthesis
+ `-cl` | `--clip`: forced audio slicing; the default 0 means automatic slicing; unit: seconds
Optional arguments (see the next section for details on some of them):
+ `-lg` | `--linear_gradient`: cross-fade length between two audio slices; adjust this value if the vocals sound discontinuous after forced slicing, otherwise keep the default 0; unit: seconds
+ `-f0p` | `--f0_predictor`: F0 predictor to use; options are crepe, pm, dio, harvest; default is pm (note: crepe applies a mean filter to the original F0)
+ `-a` | `--auto_predict_f0`: automatically predict pitch for speech conversion; do not enable this when converting singing, as it will go badly out of tune
+ `-cm` | `--cluster_model_path`: path to the clustering model or feature retrieval index; if neither was trained, any value will do
+ `-cr` | `--cluster_infer_ratio`: ratio of the clustering or feature retrieval scheme, range 0-1; leave at the default 0 if no clustering model or feature retrieval index was trained
+ `-eh` | `--enhance`: whether to use the NSF_HIFIGAN enhancer; it can improve audio quality somewhat for models trained on little data, but has a negative effect on well-trained models; off by default
+ `-shd` | `--shallow_diffusion`: whether to use shallow diffusion, which can fix some metallic artifacts; off by default; when enabled, the NSF_HIFIGAN enhancer is disabled
+ `-usm` | `--use_spk_mix`: whether to use speaker fusion / dynamic voice mixing
+ `-lea` | `--loudness_envelope_adjustment`: ratio for replacing the output loudness envelope with the input source's loudness envelope; the closer to 1, the more the output loudness envelope is used
+ `-fr` | `--feature_retrieval`: whether to use feature retrieval; if enabled, the clustering model is disabled, and the cm and cr arguments become the index path and mixing ratio for feature retrieval
Shallow diffusion settings:
+ `-dm` | `--diffusion_model_path`: path to the diffusion model
+ `-dc` | `--diffusion_config_path`: path to the diffusion model config file
+ `-ks` | `--k_step`: number of diffusion steps; larger values get closer to the diffusion model's result; default 100
+ `-od` | `--only_diffusion`: pure diffusion mode; the sovits model is not loaded and inference runs on the diffusion model only
+ `-se` | `--second_encoding`: second encoding; the original audio is encoded a second time before shallow diffusion; a hit-or-miss option that sometimes helps and sometimes hurts
### Note
If you use the `whisper-ppg` speech encoder for inference, set `--clip` to 25 and `-lg` to 1; otherwise inference will not work properly.
## 🤔 Optional features
If you are already satisfied with the results above, or cannot follow what comes next, you can ignore everything below; it does not affect using the model. (These options have a relatively small impact; they may help on some specific data, but in most cases the difference is barely noticeable.)
### Automatic f0 prediction
Training a 4.0 model also trains an f0 predictor. For speech conversion you can enable automatic pitch prediction, or fall back to manual pitch if the results are poor. Do not enable this when converting singing, as it will go badly out of tune!
+ Set auto_predict_f0 to true in inference_main
### Clustering-based timbre leakage control
Introduction: the clustering scheme can reduce timbre leakage, making the output sound more like the target timbre (although the effect is not especially obvious), but clustering on its own degrades articulation (making the output noticeably slurred). This model uses a fusion approach that linearly controls the ratio between the clustering and non-clustering schemes, so you can manually tune the trade-off between "sounds like the target timbre" and "clear articulation" to find a suitable compromise.
No changes are needed to the earlier steps when using clustering; you only need to train an extra clustering model. The effect is fairly limited, but the training cost is also low.
+ Training:
  + Train on a machine with a reasonably good CPU; in my experience, each speaker takes about 4 minutes on a 6-core Tencent Cloud CPU.
  + Run `python cluster/train_cluster.py`; the model output will be at `logs/44k/kmeans_10000.pt`
  + The clustering model can now also be trained on GPU by running `python cluster/train_cluster.py --gpu`
+ Inference:
  + Specify `cluster_model_path` in `inference_main.py`
  + Specify `cluster_infer_ratio` in `inference_main.py`: `0` means no clustering at all, `1` means clustering only; `0.5` is usually sufficient
### Feature retrieval
Introduction: like the clustering scheme, feature retrieval can reduce timbre leakage; articulation is slightly better than with clustering, but inference is slower. It also uses the fusion approach, so the ratio between feature retrieval and non-feature-retrieval can be controlled linearly.
+ Training:
First, after generating hubert and f0, run:
```shell
python train_index.py -c configs/config.json
```
The model output will be at `logs/44k/feature_and_index.pkl`
+ Inference:
  + First specify `--feature_retrieval`; the clustering scheme then automatically switches to the feature retrieval scheme
  + Set `cluster_model_path` in `inference_main.py` to the model output file
  + Specify `cluster_infer_ratio` in `inference_main.py`: `0` means no feature retrieval at all, `1` means feature retrieval only; `0.5` is usually sufficient
### Static voice mixing
**See the static voice fusion feature under Tools / Experimental features in `webUI.py`.**
Introduction: this feature can merge multiple voice models into a single one (as a convex or linear combination of the models' parameters), creating voices that do not exist in reality.
**Note:**
1. This feature only supports single-speaker models.
2. If you force it onto multi-speaker models, all models must have the same number of speakers, so that voices under the same SpeakerID can be mixed.
3. Make sure the model field in config.json is identical across all models to be mixed.
4. The resulting mixed model can use the config.json of any of the source models, but their clustering models cannot be used.
5. When uploading models in a batch, it is best to put them in one folder and select them together.
6. Mixing ratios are recommended to be between 0 and 100; other values are possible, but unknown effects may occur in linear combination mode.
7. After mixing, the file is saved in the project root directory as output.pth.
8. Convex combination mode applies Softmax to the mixing ratios so that they sum to 1, whereas linear combination mode does not.
### Dynamic voice mixing
**See the description of dynamic voice mixing in `spkmix.py`.**
Rules for writing a character mixing track:
Character ID : \[\[start time 1, end time 1, start value 1, end value 1], [start time 2, end time 2, start value 2, end value 2]]
Each start time must equal the previous segment's end time; the first start time must be 0 and the last end time must be 1 (times range from 0 to 1).
All characters must be filled in; for unused characters, simply use \[\[0., 1., 0., 0.]].
The fusion values can be arbitrary; within each specified time span they change linearly from the start value to the end value. The linear combination is automatically normalized internally so that it sums to 1 (the convex combination condition), so they can be used freely.
Use the `--use_spk_mix` flag at inference time to enable dynamic voice mixing.
## 📚 Some legal references
#### Any country, region, organization, or individual using this project must comply with the following laws
#### Civil Code of the People's Republic of China
##### Article 1019
No organization or individual may infringe upon another person's portrait rights by defacing or defaming the portrait, or by forging it through information technology or other means. Without the consent of the portrait rights holder, no one may produce, use, or publish the holder's portrait, except as otherwise provided by law. Without the holder's consent, the rights holder of a portrait work may not use or publish the portrait by publishing, reproducing, distributing, renting, exhibiting, or other means. The protection of a natural person's voice is governed, by reference, by the relevant provisions on portrait rights.
##### Article 1024
[Right to reputation] Civil subjects enjoy the right to reputation. No organization or individual may infringe upon another person's right to reputation by insult, defamation, or other means.
##### Article 1027
[Works infringing the right to reputation] Where a published literary or artistic work describes real people and events or a specific person and contains insulting or defamatory content that infringes another person's right to reputation, the injured party has the right to request, in accordance with the law, that the author bear civil liability. Where a published literary or artistic work does not describe a specific person and merely contains plot elements similar to that person's circumstances, no civil liability is borne.
#### [Constitution of the People's Republic of China](http://www.gov.cn/guoqing/2018-03/22/content_5276318.htm)
#### [Criminal Law of the People's Republic of China](http://gongbao.court.gov.cn/Details/f8e30d0689b23f57bfc782d21035c3.html?sw=中华人民共和国刑法)
#### [Civil Code of the People's Republic of China](http://gongbao.court.gov.cn/Details/51eb6750b8361f79be8f90d09bc202.html)
|
Vasanth/distilbert-stock-tweet-sentiment-analysis
|
Vasanth
| 2023-07-16T05:26:06Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T05:15:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.782
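The description sections below are still stubs, so here is a minimal inference sketch (the returned label names depend on the model's config and are not documented in this card):
```python
from transformers import pipeline

# Score a stock-related tweet with the fine-tuned DistilBERT classifier.
classifier = pipeline("text-classification", model="Vasanth/distilbert-stock-tweet-sentiment-analysis")
print(classifier("The company beat earnings expectations this quarter."))
```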
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.686 | 1.0 | 1000 | 0.5916 | 0.7745 |
| 0.4804 | 2.0 | 2000 | 0.5635 | 0.7812 |
| 0.3644 | 3.0 | 3000 | 0.6075 | 0.782 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ltmai/morgan-embed-bio-clinical-bert-ddi
|
ltmai
| 2023-07-16T05:24:59Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-07-15T18:38:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: morgan-embed-bio-clinical-bert-ddi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# morgan-embed-bio-clinical-bert-ddi
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000628
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
diogopaes10/007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
|
diogopaes10
| 2023-07-16T05:23:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T04:56:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8060
- F1: 0.7514
- Accuracy: 0.7552
- Precision: 0.7512
- Recall: 0.7552
- System Ram Used: 4.1778
- System Ram Total: 83.4807
- Gpu Ram Allocated: 2.0903
- Gpu Ram Cached: 34.3125
- Gpu Ram Total: 39.5640
- Gpu Utilization: 44
- Disk Space Used: 36.0258
- Disk Space Total: 78.1898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System Ram Used | System Ram Total | Gpu Ram Allocated | Gpu Ram Cached | Gpu Ram Total | Gpu Utilization | Disk Space Used | Disk Space Total |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|:---------------:|:----------------:|:-----------------:|:--------------:|:-------------:|:---------------:|:---------------:|:----------------:|
| 1.3512 | 0.15 | 375 | 0.9418 | 0.7160 | 0.7189 | 0.7210 | 0.7189 | 3.9586 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 24.9904 | 78.1898 |
| 0.9581 | 0.3 | 750 | 0.8981 | 0.7232 | 0.7298 | 0.7301 | 0.7298 | 3.9108 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9906 | 78.1898 |
| 0.9184 | 0.45 | 1125 | 0.8941 | 0.7248 | 0.7316 | 0.7301 | 0.7316 | 3.8717 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9910 | 78.1898 |
| 0.8716 | 0.6 | 1500 | 0.8481 | 0.7368 | 0.7391 | 0.7414 | 0.7391 | 3.9030 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9913 | 78.1898 |
| 0.8564 | 0.75 | 1875 | 0.8394 | 0.7379 | 0.7440 | 0.7423 | 0.7440 | 3.8964 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 24.9915 | 78.1898 |
| 0.8359 | 0.9 | 2250 | 0.8371 | 0.7347 | 0.7403 | 0.7417 | 0.7403 | 3.8917 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 48 | 24.9917 | 78.1898 |
| 0.7896 | 1.05 | 2625 | 0.8277 | 0.7369 | 0.7435 | 0.7461 | 0.7435 | 4.1488 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8274 | 78.1898 |
| 0.7368 | 1.2 | 3000 | 0.8204 | 0.7426 | 0.7473 | 0.7468 | 0.7473 | 4.1447 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8276 | 78.1898 |
| 0.72 | 1.35 | 3375 | 0.8199 | 0.7455 | 0.7486 | 0.7467 | 0.7486 | 3.9562 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8279 | 78.1898 |
| 0.7333 | 1.5 | 3750 | 0.7991 | 0.7488 | 0.7524 | 0.7496 | 0.7524 | 3.9475 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8282 | 78.1898 |
| 0.7116 | 1.65 | 4125 | 0.8149 | 0.7470 | 0.7499 | 0.7497 | 0.7499 | 3.9456 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8285 | 78.1898 |
| 0.7177 | 1.8 | 4500 | 0.7880 | 0.7523 | 0.7558 | 0.7529 | 0.7558 | 3.9296 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8287 | 78.1898 |
| 0.7151 | 1.95 | 4875 | 0.7949 | 0.7509 | 0.7540 | 0.7507 | 0.7540 | 3.9427 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 29.8294 | 78.1898 |
| 0.657 | 2.1 | 5250 | 0.8097 | 0.7500 | 0.7537 | 0.7506 | 0.7537 | 4.1520 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 33.9634 | 78.1898 |
| 0.6218 | 2.25 | 5625 | 0.8049 | 0.7485 | 0.7528 | 0.7484 | 0.7528 | 4.1390 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 33.9635 | 78.1898 |
| 0.6185 | 2.4 | 6000 | 0.8093 | 0.7511 | 0.7543 | 0.7513 | 0.7543 | 3.9715 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9637 | 78.1898 |
| 0.6271 | 2.55 | 6375 | 0.8019 | 0.7517 | 0.7550 | 0.7521 | 0.7550 | 3.9697 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9638 | 78.1898 |
| 0.6103 | 2.7 | 6750 | 0.8026 | 0.7519 | 0.7554 | 0.7523 | 0.7554 | 3.9622 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9639 | 78.1898 |
| 0.6111 | 2.85 | 7125 | 0.8056 | 0.7507 | 0.7546 | 0.7511 | 0.7546 | 3.9783 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 33.9640 | 78.1898 |
| 0.6015 | 3.0 | 7500 | 0.8060 | 0.7514 | 0.7552 | 0.7512 | 0.7552 | 3.9702 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9642 | 78.1898 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kojitakahiro/webui
|
kojitakahiro
| 2023-07-16T05:21:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T07:09:31Z |
---
license: creativeml-openrail-m
---
|
Denilah/distilbert-base-uncased-finetuned-emotion
|
Denilah
| 2023-07-16T05:15:46Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T03:24:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.937
- name: F1
type: f1
value: 0.9373121473490384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1565
- Accuracy: 0.937
- F1: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4774 | 1.0 | 1000 | 0.1971 | 0.923 | 0.9226 |
| 0.147 | 2.0 | 2000 | 0.1565 | 0.937 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ahwaztime/Ahwazt
|
Ahwaztime
| 2023-07-16T04:43:19Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-16T04:43:19Z |
---
license: bigscience-openrail-m
---
|
LeoLyu/finetuning-sentiment-model-3000-samples
|
LeoLyu
| 2023-07-16T04:39:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-04T01:18:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.880794701986755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2903
- Accuracy: 0.88
- F1: 0.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
j-hyeok/taxi-v3
|
j-hyeok
| 2023-07-16T04:27:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T04:27:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="j-hyeok/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
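The snippet above does not define `load_from_hub`; in the Deep RL course it is a small pickle-based helper along the following lines (a sketch under that assumption, not code shipped with this repository):
```python
import pickle

import gymnasium as gym  # or `import gym`, depending on your setup
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="j-hyeok/taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```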
|
laserchalk/kangaroo-training-part-7
|
laserchalk
| 2023-07-16T04:15:03Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T04:04:01Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Kangaroo-training-part-7 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
NasimB/guten-rarity-all-no-cut-shuffled
|
NasimB
| 2023-07-16T04:02:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T02:00:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-no-cut-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-no-cut-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7098 | 0.29 | 500 | 5.6383 |
| 5.3461 | 0.59 | 1000 | 5.1998 |
| 5.0069 | 0.88 | 1500 | 4.9558 |
| 4.7285 | 1.17 | 2000 | 4.8116 |
| 4.5719 | 1.46 | 2500 | 4.6858 |
| 4.4638 | 1.76 | 3000 | 4.5832 |
| 4.3437 | 2.05 | 3500 | 4.5081 |
| 4.145 | 2.34 | 4000 | 4.4640 |
| 4.1225 | 2.63 | 4500 | 4.4066 |
| 4.0778 | 2.93 | 5000 | 4.3542 |
| 3.8706 | 3.22 | 5500 | 4.3487 |
| 3.8204 | 3.51 | 6000 | 4.3185 |
| 3.8077 | 3.8 | 6500 | 4.2826 |
| 3.7002 | 4.1 | 7000 | 4.2849 |
| 3.5345 | 4.39 | 7500 | 4.2807 |
| 3.5332 | 4.68 | 8000 | 4.2650 |
| 3.5096 | 4.97 | 8500 | 4.2535 |
| 3.3568 | 5.27 | 9000 | 4.2678 |
| 3.3403 | 5.56 | 9500 | 4.2672 |
| 3.3398 | 5.85 | 10000 | 4.2659 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
blackmount8/falcon-7b-instruct-ct2-int8_float16
|
blackmount8
| 2023-07-16T03:36:52Z | 1 | 0 |
transformers
|
[
"transformers",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"region:us"
] | null | 2023-07-15T16:58:47Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# blackmount8/falcon-7b-instruct-ct2-int8_float16
Int8_float16 version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct), quantized using CTranslate2.
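The rest of this card reproduces the original Falcon-7B-Instruct card, whose `transformers` snippets apply to the original weights rather than to this CTranslate2 conversion. A minimal loading sketch for the converted model might look like this (the download step, device, and generation settings are assumptions):
```python
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

# Fetch the converted CTranslate2 files and reuse the original Falcon tokenizer.
model_dir = snapshot_download("blackmount8/falcon-7b-instruct-ct2-int8_float16")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8_float16")

prompt = "Write a short poem about giraffes."
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=200, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```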
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected]
|
KonekoSushi/Ado
|
KonekoSushi
| 2023-07-16T03:36:21Z | 0 | 2 | null |
[
"rvc",
"rvc2",
"japanese artist",
"artist ",
"ja",
"en",
"region:us"
] | null | 2023-07-15T23:01:30Z |
---
language:
- ja
- en
tags:
- rvc
- rvc2
- japanese artist
- 'artist '
---
|
OptimalScale/robin-7b-v2-delta
|
OptimalScale
| 2023-07-16T03:14:44Z | 1,548 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-28T02:41:29Z |
---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality datasets.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily intended for research on large language models and chatbots, serving users who specialize in natural language processing, machine learning, and artificial intelligence research.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we have deployed our instruction-tuned LLaMA online for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: 50K English and 10K Chinese examples randomly sampled from ShareGPT.
- GPT-4-LLM: 52K English examples from GPT-4-LLM.
- BELLE: 80K Chinese examples randomly sampled from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
```
|
ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
|
ALM-AHME
| 2023-07-16T03:13:16Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-15T00:35:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9900990099009901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0353
- Accuracy: 0.9901
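The description sections below are placeholders; a minimal inference sketch could look like the following (the image path is a hypothetical local file):
```python
from transformers import pipeline

# Classify a histopathology image with the fine-tuned ConvNeXt V2 model.
classifier = pipeline(
    "image-classification",
    model="ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20",
)
print(classifier("sample_slide.png"))  # hypothetical image file
```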
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5207 | 1.0 | 199 | 0.4745 | 0.8887 |
| 0.2029 | 2.0 | 398 | 0.2072 | 0.9401 |
| 0.1615 | 3.0 | 597 | 0.1489 | 0.9547 |
| 0.1662 | 4.0 | 796 | 0.1312 | 0.9562 |
| 0.1986 | 5.0 | 995 | 0.1026 | 0.9698 |
| 0.0854 | 6.0 | 1194 | 0.0583 | 0.9802 |
| 0.0538 | 7.0 | 1393 | 0.0568 | 0.9835 |
| 0.0977 | 8.0 | 1592 | 0.0654 | 0.9793 |
| 0.6971 | 9.0 | 1791 | 0.6821 | 0.5450 |
| 0.211 | 10.0 | 1990 | 0.1654 | 0.9326 |
| 0.1775 | 11.0 | 2189 | 0.0859 | 0.9665 |
| 0.0042 | 12.0 | 2388 | 0.0353 | 0.9901 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GarbageCollector/EFX2
|
GarbageCollector
| 2023-07-16T03:07:37Z | 0 | 0 | null |
[
"stable-diffusion",
"safetensors",
"text-to-image",
"license:unknown",
"region:us"
] |
text-to-image
| 2023-07-16T02:27:12Z |
---
tags:
- stable-diffusion
- safetensors
pipeline_tag: text-to-image
license: unknown
---
<p>this place is my garbage collection.<br>
some models are not better than others.</p>
<p>___SAMPLES___</p>
<p>LOOMER<br>
<img src="https://huggingface.co/GarbageCollector/EFX2/resolve/main/samples/LOOMER.jpg"/>
</p>
|
OptimalScale/robin-65b-v2-delta
|
OptimalScale
| 2023-07-16T02:48:33Z | 1,534 | 12 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-11T06:48:38Z |
---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality datasets.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily intended for research on large language models and chatbots, serving users who specialize in natural language processing, machine learning, and artificial intelligence research.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we have deployed our instruction-tuned LLaMA online for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: 50K English and 10K Chinese examples randomly sampled from ShareGPT.
- GPT-4-LLM: 52K English examples from GPT-4-LLM.
- BELLE: 80K Chinese examples randomly sampled from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
```
|
Pamela153/ppo-LunarLander-v2
|
Pamela153
| 2023-07-16T02:47:00Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T02:44:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.70 +/- 12.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
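Until the TODO above is filled in, a minimal loading sketch might look like this (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` convention):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not documented here.
checkpoint = load_from_hub(repo_id="Pamela153/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```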
|
PeterBrendan/pbjsGPT2v2
|
PeterBrendan
| 2023-07-16T02:32:02Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T15:07:20Z |
---
license: mit
widget:
- text: bidderTimeout
- text: Usebidcache
- text: bidderSequence
- text: customPriceBucket
---
## Model: GPT-2
### Model name: pbjsGPT2v2
### Model description:
This fine-tuned version of the GPT-2 model was trained on a subset of 1100+ publisher domains' Prebid config files. Its focus is on sophisticated Prebid publishers. The model provides insights into how these publishers configure their Prebid settings. By inputting a Prebid config setting, such as ***bidderTimeout***, the model generates sample Prebid configuration settings based on the collected data. It aims to assist publishers in understanding different configurations used by sophisticated publishers.
### Intended uses:
This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for gaining insights into common configurations, best practices, and different approaches used by top publishers across various domains.
### Limitations:
The generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules.
### How to use:
To use this model, provide a Prebid config setting, such as ***bidderSequence***. The model will generate a sample Prebid configuration related to that input based on the collected data.
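For example, a minimal generation sketch (generation settings are illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned GPT-2 and generate a sample Prebid configuration for a setting.
tokenizer = AutoTokenizer.from_pretrained("PeterBrendan/pbjsGPT2v2")
model = AutoModelForCausalLM.from_pretrained("PeterBrendan/pbjsGPT2v2")

inputs = tokenizer("bidderTimeout", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```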
### Training data:
This model was trained on a subset of 1100+ publisher domains Prebid config files. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
### Training procedure:
The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.
### Evaluation results:
The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
### Safety and bias considerations:
The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
|
monideep2255/spell_correction_M04_V3
|
monideep2255
| 2023-07-16T02:10:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T00:59:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: spell_correction_M04_V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spell_correction_M04_V3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 269 | 0.2687 |
| 1.8467 | 2.0 | 538 | 0.0361 |
| 1.8467 | 3.0 | 807 | 0.0241 |
| 0.0357 | 4.0 | 1076 | 0.0198 |
| 0.0357 | 5.0 | 1345 | 0.0199 |
| 0.0159 | 6.0 | 1614 | 0.0175 |
| 0.0159 | 7.0 | 1883 | 0.0179 |
| 0.0077 | 8.0 | 2152 | 0.0189 |
| 0.0077 | 9.0 | 2421 | 0.0183 |
| 0.006 | 10.0 | 2690 | 0.0183 |
| 0.006 | 11.0 | 2959 | 0.0191 |
| 0.0044 | 12.0 | 3228 | 0.0186 |
| 0.0044 | 13.0 | 3497 | 0.0192 |
| 0.0033 | 14.0 | 3766 | 0.0189 |
| 0.0024 | 15.0 | 4035 | 0.0173 |
| 0.0024 | 16.0 | 4304 | 0.0171 |
| 0.0026 | 17.0 | 4573 | 0.0183 |
| 0.0026 | 18.0 | 4842 | 0.0181 |
| 0.0021 | 19.0 | 5111 | 0.0177 |
| 0.0021 | 20.0 | 5380 | 0.0174 |
| 0.0015 | 21.0 | 5649 | 0.0173 |
| 0.0015 | 22.0 | 5918 | 0.0174 |
| 0.0016 | 23.0 | 6187 | 0.0178 |
| 0.0016 | 24.0 | 6456 | 0.0180 |
| 0.0018 | 25.0 | 6725 | 0.0175 |
| 0.0018 | 26.0 | 6994 | 0.0171 |
| 0.0017 | 27.0 | 7263 | 0.0175 |
| 0.0014 | 28.0 | 7532 | 0.0177 |
| 0.0014 | 29.0 | 7801 | 0.0178 |
| 0.0013 | 30.0 | 8070 | 0.0178 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
manmyung/ppo-SnowballTarget
|
manmyung
| 2023-07-16T02:08:22Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-16T02:08:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: manmyung/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
WasuratS/whisper-small-da
|
WasuratS
| 2023-07-16T02:07:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"da",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-15T15:11:37Z |
---
language:
- da
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Da - WasuratS
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: da
split: test
args: da
metrics:
- name: Wer
type: wer
value: 23.39882224190943
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Da - WasuratS
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Danish portion of the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6393
- Wer Ortho: 29.0926
- Wer: 23.3988
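A minimal transcription sketch (the audio path is a placeholder for a local Danish recording):
```python
from transformers import pipeline

# Transcribe Danish speech with the fine-tuned Whisper model.
asr = pipeline("automatic-speech-recognition", model="WasuratS/whisper-small-da")
print(asr("danish_sample.wav"))  # hypothetical audio file
```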
## Model description
[openai/whisper-small](https://huggingface.co/openai/whisper-small)
## Training and evaluation data
[mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.218 | 1.61 | 500 | 0.4724 | 30.2496 | 24.7069 |
| 0.0628 | 3.22 | 1000 | 0.4825 | 28.8946 | 23.3154 |
| 0.0289 | 4.82 | 1500 | 0.5311 | 29.3376 | 23.4666 |
| 0.0078 | 6.43 | 2000 | 0.5740 | 29.4627 | 23.6542 |
| 0.0032 | 8.04 | 2500 | 0.6070 | 29.0613 | 23.2790 |
| 0.0025 | 9.65 | 3000 | 0.6274 | 29.1187 | 23.4770 |
| 0.0012 | 11.25 | 3500 | 0.6335 | 29.0978 | 23.3623 |
| 0.0011 | 12.86 | 4000 | 0.6393 | 29.0926 | 23.3988 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023
|
mitra-mir
| 2023-07-16T02:00:04Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-16T01:59:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 115 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 230,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NasimB/guten_rarity_all_cut_19k_shuffled
|
NasimB
| 2023-07-16T01:54:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T23:59:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten_rarity_all_cut_19k_shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten_rarity_all_cut_19k_shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3157
## Model description
More information needed
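As a usage sketch (not part of the original card), the checkpoint loads like any GPT-2 causal language model; the prompt below is only an illustrative placeholder:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/guten_rarity_all_cut_19k_shuffled")

# Placeholder prompt; the model was trained on a story-like, child-directed corpus
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```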
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6912 | 0.29 | 500 | 5.6363 |
| 5.3342 | 0.59 | 1000 | 5.1999 |
| 4.9978 | 0.88 | 1500 | 4.9467 |
| 4.7092 | 1.17 | 2000 | 4.7986 |
| 4.5524 | 1.47 | 2500 | 4.6740 |
| 4.4477 | 1.76 | 3000 | 4.5737 |
| 4.3238 | 2.05 | 3500 | 4.4934 |
| 4.1271 | 2.35 | 4000 | 4.4404 |
| 4.1 | 2.64 | 4500 | 4.3886 |
| 4.0602 | 2.93 | 5000 | 4.3370 |
| 3.8454 | 3.23 | 5500 | 4.3333 |
| 3.8039 | 3.52 | 6000 | 4.3005 |
| 3.7844 | 3.81 | 6500 | 4.2628 |
| 3.6706 | 4.11 | 7000 | 4.2667 |
| 3.5198 | 4.4 | 7500 | 4.2607 |
| 3.5089 | 4.69 | 8000 | 4.2466 |
| 3.4958 | 4.99 | 8500 | 4.2321 |
| 3.3358 | 5.28 | 9000 | 4.2473 |
| 3.3204 | 5.57 | 9500 | 4.2460 |
| 3.3125 | 5.87 | 10000 | 4.2451 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
akraieski/taxi-v3
|
akraieski
| 2023-07-16T01:06:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T01:06:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.36 +/- 2.88
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="akraieski/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yzzhong/RL_q_tax_v2
|
yzzhong
| 2023-07-16T01:03:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T00:46:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_q_tax_v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="yzzhong/RL_q_tax_v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
|
tyavika
| 2023-07-16T00:31:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T18:38:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid
This model is a fine-tuned version of [tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid](https://huggingface.co/tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM128NoBid) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s108_v3_prefix200_manual
|
KingKazma
| 2023-07-16T00:15:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T00:15:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Liduvina/LLM_A1
|
Liduvina
| 2023-07-15T23:36:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T23:36:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
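A minimal sketch of how the 8-bit values listed above map onto a `transformers` `BitsAndBytesConfig` when loading the base model; the base model id is a hypothetical placeholder, since the card does not name it:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the non-default 8-bit settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)

# "base-model-id" is a placeholder; replace it with the adapter's actual base model
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
```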
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/cbt-guten-log-rarity-all-no-cut
|
NasimB
| 2023-07-15T23:32:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T21:37:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6947 | 0.29 | 500 | 5.6397 |
| 5.3475 | 0.58 | 1000 | 5.2031 |
| 4.991 | 0.87 | 1500 | 4.9524 |
| 4.7228 | 1.17 | 2000 | 4.8034 |
| 4.563 | 1.46 | 2500 | 4.6832 |
| 4.446 | 1.75 | 3000 | 4.5709 |
| 4.3323 | 2.04 | 3500 | 4.4920 |
| 4.1314 | 2.33 | 4000 | 4.4447 |
| 4.1022 | 2.62 | 4500 | 4.3948 |
| 4.059 | 2.91 | 5000 | 4.3383 |
| 3.8712 | 3.21 | 5500 | 4.3368 |
| 3.8024 | 3.5 | 6000 | 4.3008 |
| 3.7855 | 3.79 | 6500 | 4.2702 |
| 3.6976 | 4.08 | 7000 | 4.2655 |
| 3.5207 | 4.37 | 7500 | 4.2612 |
| 3.5156 | 4.66 | 8000 | 4.2501 |
| 3.5001 | 4.95 | 8500 | 4.2351 |
| 3.357 | 5.24 | 9000 | 4.2478 |
| 3.3255 | 5.54 | 9500 | 4.2467 |
| 3.3217 | 5.83 | 10000 | 4.2455 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Jonathaniu/alpaca-breast-cancer-13b-mix_data
|
Jonathaniu
| 2023-07-15T23:30:49Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T23:30:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s6789_v3_manual
|
KingKazma
| 2023-07-15T23:19:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T23:19:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/cbt-log-rarity-all-no-cut
|
NasimB
| 2023-07-15T23:15:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T21:20:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6895 | 0.29 | 500 | 5.6304 |
| 5.3369 | 0.58 | 1000 | 5.2048 |
| 4.9919 | 0.87 | 1500 | 4.9517 |
| 4.7188 | 1.16 | 2000 | 4.8039 |
| 4.5541 | 1.46 | 2500 | 4.6726 |
| 4.4401 | 1.75 | 3000 | 4.5700 |
| 4.333 | 2.04 | 3500 | 4.4973 |
| 4.122 | 2.33 | 4000 | 4.4425 |
| 4.0972 | 2.62 | 4500 | 4.3886 |
| 4.0567 | 2.91 | 5000 | 4.3345 |
| 3.8616 | 3.2 | 5500 | 4.3307 |
| 3.7938 | 3.49 | 6000 | 4.2967 |
| 3.7866 | 3.79 | 6500 | 4.2664 |
| 3.6955 | 4.08 | 7000 | 4.2620 |
| 3.5098 | 4.37 | 7500 | 4.2572 |
| 3.5009 | 4.66 | 8000 | 4.2436 |
| 3.4957 | 4.95 | 8500 | 4.2324 |
| 3.3439 | 5.24 | 9000 | 4.2435 |
| 3.3139 | 5.53 | 9500 | 4.2430 |
| 3.3107 | 5.82 | 10000 | 4.2420 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MohamedExperio/layoutxlm-finetuned-xfund-fr
|
MohamedExperio
| 2023-07-15T23:14:01Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-15T22:52:20Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
seny1004/wav2vec2-large-mms-1b-korean-colab
|
seny1004
| 2023-07-15T22:55:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/mms-1b-l1107",
"base_model:finetune:facebook/mms-1b-l1107",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-14T06:47:50Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-l1107
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-korean-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: ko
split: test
args: ko
metrics:
- name: Wer
type: wer
value: 0.9929506545820745
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-korean-colab
This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8135
- Wer: 0.9930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.9747 | 2.63 | 100 | 7.8812 | 0.9990 |
| 5.9431 | 5.26 | 200 | 8.2212 | 0.9960 |
| 5.7372 | 7.89 | 300 | 8.1054 | 0.9930 |
| 5.2582 | 10.53 | 400 | 8.2347 | 0.9940 |
| 3.8725 | 13.16 | 500 | 7.7536 | 0.9940 |
| 3.4454 | 15.79 | 600 | 7.7220 | 0.9930 |
| 2.5989 | 18.42 | 700 | 7.8135 | 0.9930 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s108_v3_manual
|
KingKazma
| 2023-07-15T22:55:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T22:55:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
kfahn/speecht5_finetuned_voxpopuli_es
|
kfahn
| 2023-07-15T22:38:37Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"es",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-15T19:48:30Z |
---
language:
- es
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_es
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi Spanish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
## Model description
More information needed
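A minimal synthesis sketch (not part of the original card) with the `transformers` SpeechT5 API; the zero speaker embedding is only a placeholder, since real x-vectors come from a speaker-verification model:
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("kfahn/speecht5_finetuned_voxpopuli_es")
model = SpeechT5ForTextToSpeech.from_pretrained("kfahn/speecht5_finetuned_voxpopuli_es")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hola, esto es una prueba.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # waveform samples at 16 kHz
```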
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5184 | 1.89 | 1000 | 0.4695 |
| 0.4984 | 3.77 | 2000 | 0.4548 |
| 0.4922 | 5.66 | 3000 | 0.4504 |
| 0.4848 | 7.54 | 4000 | 0.4488 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crcdng/q-Taxi-v3
|
crcdng
| 2023-07-15T22:35:04Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T19:49:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="crcdng/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/Arima_Kana_V1-000003
|
LarryAIDraw
| 2023-07-15T22:16:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T22:11:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/55346/arima-kanaoshi-no-ko
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s108_v3_manual
|
KingKazma
| 2023-07-15T22:16:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T22:16:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-15T21:43:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T21:43:49Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
merthacioglu/roberta-finetuned-subjqa-movies_2
|
merthacioglu
| 2023-07-15T21:39:57Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T14:30:17Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
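A usage sketch (not from the original card) with the question-answering pipeline; the question and context are illustrative placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="merthacioglu/roberta-finetuned-subjqa-movies_2")

result = qa(
    question="How was the acting?",  # placeholder question
    context="The movie was slow at times, but the acting was superb and carried every scene.",
)
print(result["answer"], result["score"])
```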
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ArthurBaia/albertina-squad-v1.1-pt.br
|
ArthurBaia
| 2023-07-15T21:32:25Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"dataset:ArthurBaia/squad_v1_pt_br",
"base_model:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac",
"base_model:finetune:PORTULAN/albertina-900m-portuguese-ptbr-encoder-brwac",
"license:other",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T02:00:06Z |
---
license: other
base_model: PORTULAN/albertina-ptbr
tags:
- generated_from_trainer
datasets:
- ArthurBaia/squad_v1_pt_br
model-index:
- name: albertina
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albertina
This model is a fine-tuned version of [PORTULAN/albertina-ptbr](https://huggingface.co/PORTULAN/albertina-ptbr) on the ArthurBaia/squad_v1_pt_br dataset.
## Model description
More information needed
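A usage sketch (not from the original card) loading the checkpoint explicitly and running extractive QA; the Portuguese question and context are placeholders:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("ArthurBaia/albertina-squad-v1.1-pt.br")
model = AutoModelForQuestionAnswering.from_pretrained("ArthurBaia/albertina-squad-v1.1-pt.br")
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Placeholder example in Portuguese
print(qa(question="Onde fica a Torre de Belém?",
         context="A Torre de Belém fica em Lisboa, Portugal."))
```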
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
Evaluation metrics at epoch 3.0:
- Exact match: 76.9631
- F1: 87.8237
- Eval runtime: 189.7132 s
- Eval samples: 10977
- Eval samples per second: 57.861
- Eval steps per second: 7.237
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lovelyxs/ppo-LunarLander-v2-2
|
lovelyxs
| 2023-07-15T21:23:37Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T20:27:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 133.96 +/- 135.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.0003
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.25
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'lovelyxs/ppo-LunarLander-v2-2'
'batch_size': 512
'minibatch_size': 128}
```
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_16_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-15T21:12:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T21:12:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-concat-wiki-rarity-no-cut
|
NasimB
| 2023-07-15T21:10:22Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T19:08:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-wiki-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-wiki-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7051 | 0.29 | 500 | 5.6378 |
| 5.3367 | 0.58 | 1000 | 5.1972 |
| 4.9867 | 0.87 | 1500 | 4.9538 |
| 4.7104 | 1.16 | 2000 | 4.8093 |
| 4.5621 | 1.46 | 2500 | 4.6885 |
| 4.4544 | 1.75 | 3000 | 4.5808 |
| 4.3353 | 2.04 | 3500 | 4.5031 |
| 4.1291 | 2.33 | 4000 | 4.4542 |
| 4.1138 | 2.62 | 4500 | 4.3959 |
| 4.0612 | 2.91 | 5000 | 4.3429 |
| 3.8709 | 3.2 | 5500 | 4.3403 |
| 3.8046 | 3.49 | 6000 | 4.3115 |
| 3.7892 | 3.78 | 6500 | 4.2732 |
| 3.7056 | 4.07 | 7000 | 4.2679 |
| 3.5187 | 4.37 | 7500 | 4.2666 |
| 3.5135 | 4.66 | 8000 | 4.2503 |
| 3.5039 | 4.95 | 8500 | 4.2386 |
| 3.3508 | 5.24 | 9000 | 4.2509 |
| 3.324 | 5.53 | 9500 | 4.2505 |
| 3.3217 | 5.82 | 10000 | 4.2496 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AnupamShankar/anupamshankar
|
AnupamShankar
| 2023-07-15T21:07:27Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-15T20:56:30Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# AnupamShankar/anupamshankar
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("AnupamShankar/anupamshankar")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
0sunfire0/Pixelcopter_train_01
|
0sunfire0
| 2023-07-15T21:01:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T21:01:01Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter_train_01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.00 +/- 26.76
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nolanaatama/kylbrflvsksthprkrvcv2300pchrhys
|
nolanaatama
| 2023-07-15T20:57:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T20:54:39Z |
---
license: creativeml-openrail-m
---
|
NasimB/guten-mod-rarity-all-end-est-19k
|
NasimB
| 2023-07-15T20:51:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T18:49:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-mod-rarity-all-end-est-19k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-mod-rarity-all-end-est-19k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6905 | 0.29 | 500 | 5.6474 |
| 5.341 | 0.59 | 1000 | 5.2080 |
| 4.9929 | 0.88 | 1500 | 4.9578 |
| 4.716 | 1.17 | 2000 | 4.8093 |
| 4.5529 | 1.47 | 2500 | 4.6791 |
| 4.4478 | 1.76 | 3000 | 4.5686 |
| 4.32 | 2.05 | 3500 | 4.4927 |
| 4.133 | 2.35 | 4000 | 4.4466 |
| 4.1021 | 2.64 | 4500 | 4.3862 |
| 4.0551 | 2.93 | 5000 | 4.3333 |
| 3.8497 | 3.23 | 5500 | 4.3300 |
| 3.8038 | 3.52 | 6000 | 4.2997 |
| 3.7766 | 3.81 | 6500 | 4.2648 |
| 3.6682 | 4.11 | 7000 | 4.2638 |
| 3.5163 | 4.4 | 7500 | 4.2577 |
| 3.5129 | 4.69 | 8000 | 4.2423 |
| 3.502 | 4.99 | 8500 | 4.2289 |
| 3.3286 | 5.28 | 9000 | 4.2431 |
| 3.3215 | 5.58 | 9500 | 4.2421 |
| 3.3231 | 5.87 | 10000 | 4.2414 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_16_e-1_s108_v3_manual
|
KingKazma
| 2023-07-15T20:36:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T20:36:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
peft-internal-testing/opt-350m-lora-pickle
|
peft-internal-testing
| 2023-07-15T19:58:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T19:58:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Umer1542/llama-7b-hf-task-b
|
Umer1542
| 2023-07-15T19:44:26Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T19:44:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
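A sketch of the equivalent `BitsAndBytesConfig` for the values above, plus attaching this adapter with PEFT; the LLaMA-7B base repo id is an assumption inferred from the adapter name, not stated in the card:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 4-bit NF4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "decapoda-research/llama-7b-hf" is an assumed base model; adjust to the actual one
base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Umer1542/llama-7b-hf-task-b")
```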
### Framework versions
- PEFT 0.4.0.dev0
|
akar49/mri_classifier
|
akar49
| 2023-07-15T19:42:47Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-15T17:47:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: akar49/mri_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# akar49/mri_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1032
- Validation Loss: 0.1556
- Train Accuracy: 0.9367
- Epoch: 14
## Model description
More information needed
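A hedged inference sketch for the exported TensorFlow checkpoint; the image path is a placeholder, and if the repo lacks a preprocessor config the image processor can instead be loaded from the base `google/vit-base-patch16-224-in21k` checkpoint:
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("akar49/mri_classifier")
model = TFAutoModelForImageClassification.from_pretrained("akar49/mri_classifier")

image = Image.open("mri_slice.png").convert("RGB")  # placeholder MRI slice
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred_id, pred_id))
```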
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'momentum': 0.0, 'nesterov': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6447 | 0.6133 | 0.7004 | 0 |
| 0.5405 | 0.5010 | 0.8256 | 1 |
| 0.4181 | 0.3917 | 0.8650 | 2 |
| 0.3122 | 0.3189 | 0.9058 | 3 |
| 0.2474 | 0.3069 | 0.8875 | 4 |
| 0.2021 | 0.2733 | 0.9044 | 5 |
| 0.1745 | 0.2455 | 0.9100 | 6 |
| 0.1591 | 0.2203 | 0.9212 | 7 |
| 0.1450 | 0.2350 | 0.9142 | 8 |
| 0.1397 | 0.2122 | 0.9198 | 9 |
| 0.1227 | 0.2098 | 0.9212 | 10 |
| 0.1169 | 0.1754 | 0.9325 | 11 |
| 0.1080 | 0.1782 | 0.9339 | 12 |
| 0.0971 | 0.1705 | 0.9353 | 13 |
| 0.1032 | 0.1556 | 0.9367 | 14 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
madoe001/ppo-Pyramids
|
madoe001
| 2023-07-15T19:42:21Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-15T19:40:09Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: madoe001/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MOONBOW2/EVA
|
MOONBOW2
| 2023-07-15T19:39:36Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"nl",
"en",
"li",
"dataset:openchat/openchat_sharegpt4_dataset",
"license:mit",
"region:us"
] | null | 2023-07-15T19:31:40Z |
---
license: mit
datasets:
- openchat/openchat_sharegpt4_dataset
language:
- nl
- en
- li
metrics:
- character
- code_eval
library_name: adapter-transformers
tags:
- code
---
|
jsavva/my_awesome_billsum_model
|
jsavva
| 2023-07-15T19:36:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T19:34:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5003
- Rouge1: 0.1401
- Rouge2: 0.047
- Rougel: 0.1145
- Rougelsum: 0.1146
- Gen Len: 19.0
## Model description
More information needed
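A summarization sketch (not part of the original card); the bill text is a placeholder, and the `summarize:` prefix follows the usual T5 fine-tuning recipe for billsum:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jsavva/my_awesome_billsum_model")

# Placeholder input; in practice this would be a full bill section from billsum's ca_test split
bill_text = "An act to amend Section 123 of the Health and Safety Code, relating to public health."
print(summarizer("summarize: " + bill_text, max_length=60)[0]["summary_text"])
```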
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7893 | 0.1304 | 0.0398 | 0.1089 | 0.1086 | 19.0 |
| No log | 2.0 | 124 | 2.5795 | 0.1368 | 0.0481 | 0.1155 | 0.1155 | 19.0 |
| No log | 3.0 | 186 | 2.5177 | 0.1403 | 0.0478 | 0.1146 | 0.1147 | 19.0 |
| No log | 4.0 | 248 | 2.5003 | 0.1401 | 0.047 | 0.1145 | 0.1146 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MichaelKonu/MoneyMike
|
MichaelKonu
| 2023-07-15T19:29:54Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-07-15T19:21:09Z |
---
tags:
- fastai
---
# Model card
## Model description
Classifies three types of bears: teddy, black, grizzly
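A loading sketch under the assumption that the repo was pushed with the standard fastai/Hub integration; the image path is a placeholder:
```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("MichaelKonu/MoneyMike")

# "bear.jpg" is a placeholder photo of a bear
pred_class, pred_idx, probs = learner.predict("bear.jpg")
print(pred_class, float(probs[pred_idx]))
```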
## Intended uses & limitations
For fun
## Training and evaluation data
Images gathered via DuckDuckGo (ddg) image search.
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_manual
|
KingKazma
| 2023-07-15T19:27:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T19:27:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
IAyoub/finetuning-sentiment-model-base-zero-shot
|
IAyoub
| 2023-07-15T19:21:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T17:13:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-base-zero-shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-base-zero-shot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5560
- Accuracy: 0.8015
- F1: 0.5511
## Model description
More information needed
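A usage sketch (not from the original card) with the text-classification pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="IAyoub/finetuning-sentiment-model-base-zero-shot")

# Placeholder review text
print(classifier("The delivery was quick and the product works as described."))
```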
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.02 | 10 | 0.8518 | 0.6738 | 0.2684 |
| No log | 0.03 | 20 | 0.7875 | 0.6738 | 0.2684 |
| No log | 0.05 | 30 | 0.7443 | 0.6738 | 0.2684 |
| No log | 0.07 | 40 | 0.7358 | 0.6746 | 0.2706 |
| No log | 0.08 | 50 | 0.7233 | 0.6742 | 0.2695 |
| No log | 0.1 | 60 | 0.6832 | 0.7148 | 0.3657 |
| No log | 0.12 | 70 | 0.6272 | 0.7735 | 0.4807 |
| No log | 0.13 | 80 | 0.5994 | 0.7910 | 0.4960 |
| No log | 0.15 | 90 | 0.5908 | 0.7898 | 0.5113 |
| No log | 0.17 | 100 | 0.5985 | 0.7982 | 0.5031 |
| No log | 0.18 | 110 | 0.5920 | 0.7965 | 0.5006 |
| No log | 0.2 | 120 | 0.5661 | 0.8053 | 0.5186 |
| No log | 0.22 | 130 | 0.5900 | 0.8015 | 0.5092 |
| No log | 0.23 | 140 | 0.5671 | 0.8023 | 0.5189 |
| No log | 0.25 | 150 | 0.6000 | 0.8044 | 0.5114 |
| No log | 0.27 | 160 | 0.5931 | 0.7785 | 0.5122 |
| No log | 0.28 | 170 | 0.5477 | 0.8065 | 0.5220 |
| No log | 0.3 | 180 | 0.5573 | 0.8107 | 0.5206 |
| No log | 0.32 | 190 | 0.5586 | 0.7961 | 0.5206 |
| No log | 0.34 | 200 | 0.5498 | 0.8107 | 0.5247 |
| No log | 0.35 | 210 | 0.5829 | 0.8036 | 0.5082 |
| No log | 0.37 | 220 | 0.5731 | 0.7843 | 0.5124 |
| No log | 0.39 | 230 | 0.5704 | 0.7915 | 0.5179 |
| No log | 0.4 | 240 | 0.5409 | 0.8070 | 0.5217 |
| No log | 0.42 | 250 | 0.5486 | 0.8120 | 0.5237 |
| No log | 0.44 | 260 | 0.5640 | 0.8082 | 0.5179 |
| No log | 0.45 | 270 | 0.5525 | 0.8086 | 0.5182 |
| No log | 0.47 | 280 | 0.5426 | 0.8086 | 0.5260 |
| No log | 0.49 | 290 | 0.5599 | 0.8040 | 0.5090 |
| No log | 0.5 | 300 | 0.5504 | 0.8124 | 0.5244 |
| No log | 0.52 | 310 | 0.5561 | 0.8074 | 0.5149 |
| No log | 0.54 | 320 | 0.5511 | 0.8061 | 0.5198 |
| No log | 0.55 | 330 | 0.5574 | 0.8082 | 0.5194 |
| No log | 0.57 | 340 | 0.5468 | 0.8099 | 0.5228 |
| No log | 0.59 | 350 | 0.5518 | 0.7990 | 0.5262 |
| No log | 0.6 | 360 | 0.5482 | 0.8099 | 0.5301 |
| No log | 0.62 | 370 | 0.5409 | 0.8111 | 0.5364 |
| No log | 0.64 | 380 | 0.5495 | 0.8103 | 0.5378 |
| No log | 0.65 | 390 | 0.5508 | 0.8111 | 0.5362 |
| No log | 0.67 | 400 | 0.5618 | 0.8011 | 0.5275 |
| No log | 0.69 | 410 | 0.5490 | 0.8103 | 0.5306 |
| No log | 0.7 | 420 | 0.5476 | 0.8116 | 0.5238 |
| No log | 0.72 | 430 | 0.5414 | 0.8090 | 0.5306 |
| No log | 0.74 | 440 | 0.5293 | 0.8153 | 0.5293 |
| No log | 0.75 | 450 | 0.5595 | 0.8141 | 0.5339 |
| No log | 0.77 | 460 | 0.5298 | 0.8132 | 0.5384 |
| No log | 0.79 | 470 | 0.5309 | 0.8132 | 0.5359 |
| No log | 0.8 | 480 | 0.5329 | 0.8132 | 0.5238 |
| No log | 0.82 | 490 | 0.5305 | 0.8132 | 0.5314 |
| 0.5831 | 0.84 | 500 | 0.5560 | 0.8015 | 0.5511 |
| 0.5831 | 0.85 | 510 | 0.5207 | 0.8162 | 0.5393 |
| 0.5831 | 0.87 | 520 | 0.5607 | 0.8070 | 0.5481 |
| 0.5831 | 0.89 | 530 | 0.5321 | 0.8120 | 0.5317 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Umer1542/llama-7b-hf-task-c
|
Umer1542
| 2023-07-15T19:18:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T19:18:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
sudheer997/lilt-en-funsd-8
|
sudheer997
| 2023-07-15T19:08:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-15T18:25:37Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd-8
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2967
- Other: {'precision': 0.9514893617021276, 'recall': 0.9414736842105264, 'f1': 0.9464550264550264, 'number': 2375}
- Billing Address: {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 25}
- Credits: {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5}
- Currency: {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4}
- Delivery Date: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- Due Date: {'precision': 0.8947368421052632, 'recall': 0.9714285714285714, 'f1': 0.9315068493150684, 'number': 35}
- Invoice Date: {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58}
- Invoice Number: {'precision': 0.9056603773584906, 'recall': 0.9795918367346939, 'f1': 0.9411764705882353, 'number': 49}
- Line Amount: {'precision': 0.8839285714285714, 'recall': 0.9519230769230769, 'f1': 0.9166666666666665, 'number': 104}
- Line Catlog Number: {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11}
- Line Item Name: {'precision': 0.6410256410256411, 'recall': 0.7731958762886598, 'f1': 0.7009345794392524, 'number': 97}
- Line Other Item Name: {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15}
- Line Quantity: {'precision': 0.8918918918918919, 'recall': 0.8571428571428571, 'f1': 0.8741721854304636, 'number': 77}
- Line Rate: {'precision': 0.8089887640449438, 'recall': 0.935064935064935, 'f1': 0.8674698795180723, 'number': 77}
- Order Date: {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18}
- Other Charges: {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14}
- Payment Terms: {'precision': 0.9736842105263158, 'recall': 0.9487179487179487, 'f1': 0.9610389610389611, 'number': 39}
- Po Number: {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26}
- Remit Address: {'precision': 0.7058823529411765, 'recall': 1.0, 'f1': 0.8275862068965517, 'number': 12}
- Shipping Address: {'precision': 0.7058823529411765, 'recall': 0.8571428571428571, 'f1': 0.7741935483870968, 'number': 14}
- Shipping Terms: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- Subtotal: {'precision': 0.7407407407407407, 'recall': 0.9523809523809523, 'f1': 0.8333333333333334, 'number': 21}
- Tax: {'precision': 0.76, 'recall': 0.7307692307692307, 'f1': 0.7450980392156863, 'number': 26}
- Total Amount: {'precision': 0.8769230769230769, 'recall': 0.890625, 'f1': 0.883720930232558, 'number': 64}
- Vendor Address: {'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 24}
- Vendor Name: {'precision': 0.7321428571428571, 'recall': 0.8723404255319149, 'f1': 0.7961165048543688, 'number': 47}
- Overall Precision: 0.9178
- Overall Recall: 0.9234
- Overall F1: 0.9206
- Overall Accuracy: 0.9543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Other | Billing Address | Credits | Currency | Delivery Date | Due Date | Invoice Date | Invoice Number | Line Amount | Line Catlog Number | Line Item Name | Line Other Item Name | Line Quantity | Line Rate | Order Date | Other Charges | Payment Terms | Po Number | Remit Address | Shipping Address | Shipping Terms | Subtotal | Tax | Total Amount | Vendor Address | Vendor Name | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|:---------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.3403 | 1.56 | 100 | 0.6902 | {'precision': 0.7414893617021276, 'recall': 0.880421052631579, 'f1': 0.8050048123195379, 'number': 2375} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 35} | {'precision': 0.5, 'recall': 0.1724137931034483, 'f1': 0.25641025641025644, 'number': 58} | {'precision': 1.0, 'recall': 0.02040816326530612, 'f1': 0.039999999999999994, 'number': 49} | {'precision': 0.5338345864661654, 'recall': 0.6826923076923077, 'f1': 0.5991561181434599, 'number': 104} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.32666666666666666, 'recall': 0.5051546391752577, 'f1': 0.3967611336032389, 'number': 97} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 15} | {'precision': 1.0, 'recall': 0.4155844155844156, 'f1': 0.5871559633027523, 'number': 77} | {'precision': 0.6111111111111112, 'recall': 0.14285714285714285, 'f1': 0.23157894736842108, 'number': 77} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 18} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 39} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 26} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.013888888888888888, 'recall': 0.07142857142857142, 'f1': 0.023255813953488372, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 21} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 26} | {'precision': 0.5833333333333334, 'recall': 0.21875, 'f1': 0.31818181818181823, 'number': 64} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 24} | {'precision': 0.34615384615384615, 'recall': 0.19148936170212766, 'f1': 0.2465753424657534, 'number': 47} | 0.6772 | 0.7067 | 0.6916 | 0.7934 |
| 0.485 | 3.12 | 200 | 0.3301 | {'precision': 0.8824006488240065, 'recall': 0.9162105263157895, 'f1': 0.8989878124354472, 'number': 2375} | {'precision': 0.35714285714285715, 'recall': 0.6, 'f1': 0.44776119402985076, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 0.08571428571428572, 'f1': 0.15789473684210528, 'number': 35} | {'precision': 0.47619047619047616, 'recall': 0.8620689655172413, 'f1': 0.6134969325153374, 'number': 58} | {'precision': 0.6031746031746031, 'recall': 0.7755102040816326, 'f1': 0.6785714285714285, 'number': 49} | {'precision': 0.822429906542056, 'recall': 0.8461538461538461, 'f1': 0.8341232227488151, 'number': 104} | {'precision': 1.0, 'recall': 0.18181818181818182, 'f1': 0.3076923076923077, 'number': 11} | {'precision': 0.49122807017543857, 'recall': 0.5773195876288659, 'f1': 0.5308056872037914, 'number': 97} | {'precision': 0.5, 'recall': 0.5333333333333333, 'f1': 0.5161290322580646, 'number': 15} | {'precision': 0.8309859154929577, 'recall': 0.7662337662337663, 'f1': 0.7972972972972973, 'number': 77} | {'precision': 0.6666666666666666, 'recall': 0.7532467532467533, 'f1': 0.7073170731707318, 'number': 77} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 18} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.825, 'recall': 0.8461538461538461, 'f1': 0.8354430379746836, 'number': 39} | {'precision': 0.9090909090909091, 'recall': 0.38461538461538464, 'f1': 0.5405405405405405, 'number': 26} | {'precision': 0.2777777777777778, 'recall': 0.4166666666666667, 'f1': 0.33333333333333337, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.3333333333333333, 'recall': 0.09523809523809523, 'f1': 0.14814814814814814, 'number': 21} | {'precision': 1.0, 'recall': 0.038461538461538464, 'f1': 0.07407407407407407, 'number': 26} | {'precision': 0.627906976744186, 'recall': 0.84375, 'f1': 0.72, 'number': 64} | {'precision': 0.5882352941176471, 'recall': 0.4166666666666667, 'f1': 0.48780487804878053, 'number': 24} | {'precision': 0.6363636363636364, 'recall': 0.7446808510638298, 'f1': 0.6862745098039216, 'number': 47} | 0.8107 | 0.8345 | 0.8225 | 0.9029 |
| 0.223 | 4.69 | 300 | 0.2859 | {'precision': 0.9231094212082805, 'recall': 0.92, 'f1': 0.9215520877266976, 'number': 2375} | {'precision': 0.56, 'recall': 0.56, 'f1': 0.56, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7, 'recall': 0.8, 'f1': 0.7466666666666666, 'number': 35} | {'precision': 0.6891891891891891, 'recall': 0.8793103448275862, 'f1': 0.7727272727272727, 'number': 58} | {'precision': 0.6984126984126984, 'recall': 0.8979591836734694, 'f1': 0.7857142857142857, 'number': 49} | {'precision': 0.7265625, 'recall': 0.8942307692307693, 'f1': 0.8017241379310346, 'number': 104} | {'precision': 0.9, 'recall': 0.8181818181818182, 'f1': 0.8571428571428572, 'number': 11} | {'precision': 0.5785123966942148, 'recall': 0.7216494845360825, 'f1': 0.6422018348623852, 'number': 97} | {'precision': 0.8333333333333334, 'recall': 0.6666666666666666, 'f1': 0.7407407407407408, 'number': 15} | {'precision': 0.918918918918919, 'recall': 0.8831168831168831, 'f1': 0.9006622516556292, 'number': 77} | {'precision': 0.6774193548387096, 'recall': 0.8181818181818182, 'f1': 0.7411764705882352, 'number': 77} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 18} | {'precision': 0.4375, 'recall': 0.5, 'f1': 0.4666666666666667, 'number': 14} | {'precision': 0.95, 'recall': 0.9743589743589743, 'f1': 0.9620253164556962, 'number': 39} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 26} | {'precision': 0.5294117647058824, 'recall': 0.75, 'f1': 0.6206896551724139, 'number': 12} | {'precision': 0.3333333333333333, 'recall': 0.7857142857142857, 'f1': 0.4680851063829786, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5384615384615384, 'recall': 0.3333333333333333, 'f1': 0.41176470588235287, 'number': 21} | {'precision': 0.75, 'recall': 0.11538461538461539, 'f1': 0.19999999999999998, 'number': 26} | {'precision': 0.704225352112676, 'recall': 0.78125, 'f1': 0.7407407407407407, 'number': 64} | {'precision': 0.6086956521739131, 'recall': 0.5833333333333334, 'f1': 0.5957446808510638, 'number': 24} | {'precision': 0.5573770491803278, 'recall': 0.723404255319149, 'f1': 0.6296296296296297, 'number': 47} | 0.8550 | 0.8719 | 0.8633 | 0.9205 |
| 0.1297 | 6.25 | 400 | 0.2666 | {'precision': 0.9432809773123909, 'recall': 0.9103157894736842, 'f1': 0.9265052496250268, 'number': 2375} | {'precision': 0.8148148148148148, 'recall': 0.88, 'f1': 0.8461538461538461, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7560975609756098, 'recall': 0.8857142857142857, 'f1': 0.8157894736842105, 'number': 35} | {'precision': 0.7971014492753623, 'recall': 0.9482758620689655, 'f1': 0.8661417322834646, 'number': 58} | {'precision': 0.6818181818181818, 'recall': 0.9183673469387755, 'f1': 0.782608695652174, 'number': 49} | {'precision': 0.8495575221238938, 'recall': 0.9230769230769231, 'f1': 0.8847926267281105, 'number': 104} | {'precision': 0.6428571428571429, 'recall': 0.8181818181818182, 'f1': 0.7200000000000001, 'number': 11} | {'precision': 0.6186440677966102, 'recall': 0.7525773195876289, 'f1': 0.6790697674418604, 'number': 97} | {'precision': 0.8333333333333334, 'recall': 0.6666666666666666, 'f1': 0.7407407407407408, 'number': 15} | {'precision': 0.7448979591836735, 'recall': 0.948051948051948, 'f1': 0.8342857142857143, 'number': 77} | {'precision': 0.6966292134831461, 'recall': 0.8051948051948052, 'f1': 0.7469879518072291, 'number': 77} | {'precision': 0.8181818181818182, 'recall': 0.5, 'f1': 0.6206896551724137, 'number': 18} | {'precision': 0.4782608695652174, 'recall': 0.7857142857142857, 'f1': 0.5945945945945946, 'number': 14} | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 0.8888888888888888, 'recall': 0.6153846153846154, 'f1': 0.7272727272727274, 'number': 26} | {'precision': 0.4782608695652174, 'recall': 0.9166666666666666, 'f1': 0.6285714285714286, 'number': 12} | {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5, 'recall': 0.5714285714285714, 'f1': 0.5333333333333333, 'number': 21} | {'precision': 0.5625, 'recall': 0.34615384615384615, 'f1': 0.4285714285714286, 'number': 26} | {'precision': 0.691358024691358, 'recall': 0.875, 'f1': 0.7724137931034484, 'number': 64} | {'precision': 0.5555555555555556, 'recall': 0.625, 'f1': 0.5882352941176471, 'number': 24} | {'precision': 0.7333333333333333, 'recall': 0.9361702127659575, 'f1': 0.822429906542056, 'number': 47} | 0.8753 | 0.8867 | 0.8810 | 0.9378 |
| 0.0888 | 7.81 | 500 | 0.2430 | {'precision': 0.9316239316239316, 'recall': 0.9178947368421052, 'f1': 0.9247083775185577, 'number': 2375} | {'precision': 0.7916666666666666, 'recall': 0.76, 'f1': 0.7755102040816326, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 35} | {'precision': 0.9454545454545454, 'recall': 0.896551724137931, 'f1': 0.920353982300885, 'number': 58} | {'precision': 0.8269230769230769, 'recall': 0.8775510204081632, 'f1': 0.8514851485148514, 'number': 49} | {'precision': 0.9111111111111111, 'recall': 0.7884615384615384, 'f1': 0.845360824742268, 'number': 104} | {'precision': 0.9, 'recall': 0.8181818181818182, 'f1': 0.8571428571428572, 'number': 11} | {'precision': 0.7096774193548387, 'recall': 0.6804123711340206, 'f1': 0.6947368421052632, 'number': 97} | {'precision': 0.55, 'recall': 0.7333333333333333, 'f1': 0.6285714285714286, 'number': 15} | {'precision': 0.825, 'recall': 0.8571428571428571, 'f1': 0.8407643312101911, 'number': 77} | {'precision': 0.7582417582417582, 'recall': 0.8961038961038961, 'f1': 0.8214285714285714, 'number': 77} | {'precision': 0.7333333333333333, 'recall': 0.6111111111111112, 'f1': 0.6666666666666666, 'number': 18} | {'precision': 0.3333333333333333, 'recall': 0.6428571428571429, 'f1': 0.43902439024390244, 'number': 14} | {'precision': 0.925, 'recall': 0.9487179487179487, 'f1': 0.9367088607594937, 'number': 39} | {'precision': 1.0, 'recall': 0.7307692307692307, 'f1': 0.8444444444444443, 'number': 26} | {'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 12} | {'precision': 0.6666666666666666, 'recall': 0.8571428571428571, 'f1': 0.75, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.5517241379310345, 'recall': 0.7619047619047619, 'f1': 0.64, 'number': 21} | {'precision': 0.4444444444444444, 'recall': 0.46153846153846156, 'f1': 0.4528301886792453, 'number': 26} | {'precision': 0.7349397590361446, 'recall': 0.953125, 'f1': 0.8299319727891157, 'number': 64} | {'precision': 0.6333333333333333, 'recall': 0.7916666666666666, 'f1': 0.7037037037037038, 'number': 24} | {'precision': 0.7254901960784313, 'recall': 0.7872340425531915, 'f1': 0.7551020408163265, 'number': 47} | 0.8848 | 0.8867 | 0.8857 | 0.9430 |
| 0.0647 | 9.38 | 600 | 0.2311 | {'precision': 0.931426167437947, 'recall': 0.9322105263157895, 'f1': 0.9318181818181818, 'number': 2375} | {'precision': 0.8181818181818182, 'recall': 0.72, 'f1': 0.7659574468085107, 'number': 25} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8648648648648649, 'recall': 0.9142857142857143, 'f1': 0.888888888888889, 'number': 35} | {'precision': 0.9152542372881356, 'recall': 0.9310344827586207, 'f1': 0.923076923076923, 'number': 58} | {'precision': 0.8333333333333334, 'recall': 0.9183673469387755, 'f1': 0.8737864077669903, 'number': 49} | {'precision': 0.8389830508474576, 'recall': 0.9519230769230769, 'f1': 0.8918918918918919, 'number': 104} | {'precision': 1.0, 'recall': 0.9090909090909091, 'f1': 0.9523809523809523, 'number': 11} | {'precision': 0.6101694915254238, 'recall': 0.7422680412371134, 'f1': 0.6697674418604651, 'number': 97} | {'precision': 0.75, 'recall': 0.6, 'f1': 0.6666666666666665, 'number': 15} | {'precision': 0.8554216867469879, 'recall': 0.922077922077922, 'f1': 0.8875, 'number': 77} | {'precision': 0.7654320987654321, 'recall': 0.8051948051948052, 'f1': 0.7848101265822786, 'number': 77} | {'precision': 0.9166666666666666, 'recall': 0.6111111111111112, 'f1': 0.7333333333333334, 'number': 18} | {'precision': 0.8333333333333334, 'recall': 0.7142857142857143, 'f1': 0.7692307692307692, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8461538461538461, 'f1': 0.9166666666666666, 'number': 26} | {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} | {'precision': 0.4782608695652174, 'recall': 0.7857142857142857, 'f1': 0.5945945945945946, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8888888888888888, 'recall': 0.7619047619047619, 'f1': 0.8205128205128205, 'number': 21} | {'precision': 0.5925925925925926, 'recall': 0.6153846153846154, 'f1': 0.6037735849056604, 'number': 26} | {'precision': 0.8636363636363636, 'recall': 0.890625, 'f1': 0.8769230769230768, 'number': 64} | {'precision': 0.5151515151515151, 'recall': 0.7083333333333334, 'f1': 0.5964912280701754, 'number': 24} | {'precision': 0.7321428571428571, 'recall': 0.8723404255319149, 'f1': 0.7961165048543688, 'number': 47} | 0.8906 | 0.9071 | 0.8987 | 0.9479 |
| 0.044 | 10.94 | 700 | 0.2745 | {'precision': 0.948018747337026, 'recall': 0.9368421052631579, 'f1': 0.9423972892842015, 'number': 2375} | {'precision': 0.84, 'recall': 0.84, 'f1': 0.8399999999999999, 'number': 25} | {'precision': 1.0, 'recall': 0.2, 'f1': 0.33333333333333337, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7333333333333333, 'recall': 0.9428571428571428, 'f1': 0.8250000000000001, 'number': 35} | {'precision': 0.9152542372881356, 'recall': 0.9310344827586207, 'f1': 0.923076923076923, 'number': 58} | {'precision': 0.8867924528301887, 'recall': 0.9591836734693877, 'f1': 0.9215686274509803, 'number': 49} | {'precision': 0.8620689655172413, 'recall': 0.9615384615384616, 'f1': 0.9090909090909091, 'number': 104} | {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11} | {'precision': 0.6548672566371682, 'recall': 0.7628865979381443, 'f1': 0.7047619047619047, 'number': 97} | {'precision': 0.7692307692307693, 'recall': 0.6666666666666666, 'f1': 0.7142857142857142, 'number': 15} | {'precision': 0.9, 'recall': 0.935064935064935, 'f1': 0.9171974522292993, 'number': 77} | {'precision': 0.7613636363636364, 'recall': 0.8701298701298701, 'f1': 0.8121212121212121, 'number': 77} | {'precision': 1.0, 'recall': 0.6111111111111112, 'f1': 0.7586206896551725, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8461538461538461, 'f1': 0.9166666666666666, 'number': 26} | {'precision': 0.7692307692307693, 'recall': 0.8333333333333334, 'f1': 0.8, 'number': 12} | {'precision': 0.8, 'recall': 0.8571428571428571, 'f1': 0.8275862068965518, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8695652173913043, 'recall': 0.9523809523809523, 'f1': 0.909090909090909, 'number': 21} | {'precision': 0.72, 'recall': 0.6923076923076923, 'f1': 0.7058823529411765, 'number': 26} | {'precision': 0.8870967741935484, 'recall': 0.859375, 'f1': 0.8730158730158729, 'number': 64} | {'precision': 0.8, 'recall': 0.8333333333333334, 'f1': 0.816326530612245, 'number': 24} | {'precision': 0.7307692307692307, 'recall': 0.8085106382978723, 'f1': 0.7676767676767676, 'number': 47} | 0.9130 | 0.9173 | 0.9151 | 0.9509 |
| 0.0352 | 12.5 | 800 | 0.2702 | {'precision': 0.9464438731790917, 'recall': 0.9301052631578948, 'f1': 0.9382034402208538, 'number': 2375} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 25} | {'precision': 1.0, 'recall': 0.2, 'f1': 0.33333333333333337, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9428571428571428, 'recall': 0.9428571428571428, 'f1': 0.9428571428571428, 'number': 35} | {'precision': 0.9310344827586207, 'recall': 0.9310344827586207, 'f1': 0.9310344827586207, 'number': 58} | {'precision': 0.8867924528301887, 'recall': 0.9591836734693877, 'f1': 0.9215686274509803, 'number': 49} | {'precision': 0.875, 'recall': 0.9423076923076923, 'f1': 0.9074074074074073, 'number': 104} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 0.6446280991735537, 'recall': 0.8041237113402062, 'f1': 0.7155963302752294, 'number': 97} | {'precision': 0.8333333333333334, 'recall': 0.6666666666666666, 'f1': 0.7407407407407408, 'number': 15} | {'precision': 0.8554216867469879, 'recall': 0.922077922077922, 'f1': 0.8875, 'number': 77} | {'precision': 0.7640449438202247, 'recall': 0.8831168831168831, 'f1': 0.8192771084337349, 'number': 77} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8076923076923077, 'f1': 0.8936170212765957, 'number': 26} | {'precision': 0.6875, 'recall': 0.9166666666666666, 'f1': 0.7857142857142857, 'number': 12} | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1': 0.6250000000000001, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8, 'recall': 0.9523809523809523, 'f1': 0.8695652173913043, 'number': 21} | {'precision': 0.75, 'recall': 0.6923076923076923, 'f1': 0.7199999999999999, 'number': 26} | {'precision': 0.9193548387096774, 'recall': 0.890625, 'f1': 0.9047619047619047, 'number': 64} | {'precision': 0.5517241379310345, 'recall': 0.6666666666666666, 'f1': 0.6037735849056604, 'number': 24} | {'precision': 0.7777777777777778, 'recall': 0.8936170212765957, 'f1': 0.8316831683168316, 'number': 47} | 0.9093 | 0.9129 | 0.9111 | 0.9514 |
| 0.0261 | 14.06 | 900 | 0.2707 | {'precision': 0.9474141677531508, 'recall': 0.9178947368421052, 'f1': 0.932420872540633, 'number': 2375} | {'precision': 0.8518518518518519, 'recall': 0.92, 'f1': 0.8846153846153846, 'number': 25} | {'precision': 1.0, 'recall': 0.2, 'f1': 0.33333333333333337, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.85, 'recall': 0.9714285714285714, 'f1': 0.9066666666666667, 'number': 35} | {'precision': 0.9298245614035088, 'recall': 0.9137931034482759, 'f1': 0.9217391304347825, 'number': 58} | {'precision': 0.8245614035087719, 'recall': 0.9591836734693877, 'f1': 0.8867924528301887, 'number': 49} | {'precision': 0.8695652173913043, 'recall': 0.9615384615384616, 'f1': 0.91324200913242, 'number': 104} | {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11} | {'precision': 0.6607142857142857, 'recall': 0.7628865979381443, 'f1': 0.7081339712918661, 'number': 97} | {'precision': 0.9090909090909091, 'recall': 0.6666666666666666, 'f1': 0.7692307692307692, 'number': 15} | {'precision': 0.8295454545454546, 'recall': 0.948051948051948, 'f1': 0.8848484848484849, 'number': 77} | {'precision': 0.7777777777777778, 'recall': 0.9090909090909091, 'f1': 0.8383233532934132, 'number': 77} | {'precision': 1.0, 'recall': 0.6111111111111112, 'f1': 0.7586206896551725, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8461538461538461, 'f1': 0.9166666666666666, 'number': 26} | {'precision': 0.5217391304347826, 'recall': 1.0, 'f1': 0.6857142857142856, 'number': 12} | {'precision': 0.5625, 'recall': 0.6428571428571429, 'f1': 0.6000000000000001, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8333333333333334, 'recall': 0.9523809523809523, 'f1': 0.888888888888889, 'number': 21} | {'precision': 0.7037037037037037, 'recall': 0.7307692307692307, 'f1': 0.7169811320754716, 'number': 26} | {'precision': 0.8656716417910447, 'recall': 0.90625, 'f1': 0.8854961832061069, 'number': 64} | {'precision': 0.8333333333333334, 'recall': 0.8333333333333334, 'f1': 0.8333333333333334, 'number': 24} | {'precision': 0.7608695652173914, 'recall': 0.7446808510638298, 'f1': 0.7526881720430109, 'number': 47} | 0.9094 | 0.9052 | 0.9073 | 0.9506 |
| 0.019 | 15.62 | 1000 | 0.2902 | {'precision': 0.9488054607508533, 'recall': 0.9364210526315789, 'f1': 0.9425725789362153, 'number': 2375} | {'precision': 0.875, 'recall': 0.84, 'f1': 0.8571428571428572, 'number': 25} | {'precision': 1.0, 'recall': 0.2, 'f1': 0.33333333333333337, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.918918918918919, 'recall': 0.9714285714285714, 'f1': 0.9444444444444445, 'number': 35} | {'precision': 0.8870967741935484, 'recall': 0.9482758620689655, 'f1': 0.9166666666666667, 'number': 58} | {'precision': 0.8627450980392157, 'recall': 0.8979591836734694, 'f1': 0.8799999999999999, 'number': 49} | {'precision': 0.8761061946902655, 'recall': 0.9519230769230769, 'f1': 0.9124423963133641, 'number': 104} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 0.7027027027027027, 'recall': 0.8041237113402062, 'f1': 0.7499999999999999, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8604651162790697, 'recall': 0.961038961038961, 'f1': 0.9079754601226995, 'number': 77} | {'precision': 0.7777777777777778, 'recall': 0.9090909090909091, 'f1': 0.8383233532934132, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8461538461538461, 'f1': 0.9166666666666666, 'number': 26} | {'precision': 0.5789473684210527, 'recall': 0.9166666666666666, 'f1': 0.7096774193548387, 'number': 12} | {'precision': 0.6875, 'recall': 0.7857142857142857, 'f1': 0.7333333333333334, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.6206896551724138, 'recall': 0.6923076923076923, 'f1': 0.6545454545454545, 'number': 26} | {'precision': 0.8307692307692308, 'recall': 0.84375, 'f1': 0.8372093023255814, 'number': 64} | {'precision': 0.76, 'recall': 0.7916666666666666, 'f1': 0.7755102040816326, 'number': 24} | {'precision': 0.7543859649122807, 'recall': 0.9148936170212766, 'f1': 0.8269230769230769, 'number': 47} | 0.9130 | 0.9207 | 0.9168 | 0.9510 |
| 0.0166 | 17.19 | 1100 | 0.2622 | {'precision': 0.9491670226398975, 'recall': 0.9355789473684211, 'f1': 0.9423240033927057, 'number': 2375} | {'precision': 0.875, 'recall': 0.84, 'f1': 0.8571428571428572, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8947368421052632, 'recall': 0.9714285714285714, 'f1': 0.9315068493150684, 'number': 35} | {'precision': 0.9152542372881356, 'recall': 0.9310344827586207, 'f1': 0.923076923076923, 'number': 58} | {'precision': 0.8545454545454545, 'recall': 0.9591836734693877, 'f1': 0.9038461538461537, 'number': 49} | {'precision': 0.8761061946902655, 'recall': 0.9519230769230769, 'f1': 0.9124423963133641, 'number': 104} | {'precision': 0.6666666666666666, 'recall': 0.9090909090909091, 'f1': 0.7692307692307692, 'number': 11} | {'precision': 0.6810344827586207, 'recall': 0.8144329896907216, 'f1': 0.7417840375586854, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8846153846153846, 'recall': 0.8961038961038961, 'f1': 0.8903225806451613, 'number': 77} | {'precision': 0.8, 'recall': 0.935064935064935, 'f1': 0.8622754491017963, 'number': 77} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 18} | {'precision': 0.8181818181818182, 'recall': 0.6428571428571429, 'f1': 0.7200000000000001, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8461538461538461, 'f1': 0.9166666666666666, 'number': 26} | {'precision': 0.7333333333333333, 'recall': 0.9166666666666666, 'f1': 0.8148148148148148, 'number': 12} | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1': 0.6250000000000001, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.6896551724137931, 'recall': 0.9523809523809523, 'f1': 0.7999999999999999, 'number': 21} | {'precision': 0.7916666666666666, 'recall': 0.7307692307692307, 'f1': 0.76, 'number': 26} | {'precision': 0.9032258064516129, 'recall': 0.875, 'f1': 0.8888888888888888, 'number': 64} | {'precision': 0.64, 'recall': 0.6666666666666666, 'f1': 0.6530612244897959, 'number': 24} | {'precision': 0.7258064516129032, 'recall': 0.9574468085106383, 'f1': 0.8256880733944956, 'number': 47} | 0.9124 | 0.9200 | 0.9162 | 0.9553 |
| 0.0131 | 18.75 | 1200 | 0.2735 | {'precision': 0.9440203562340967, 'recall': 0.9372631578947368, 'f1': 0.9406296218043525, 'number': 2375} | {'precision': 0.8461538461538461, 'recall': 0.88, 'f1': 0.8627450980392156, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.918918918918919, 'recall': 0.9714285714285714, 'f1': 0.9444444444444445, 'number': 35} | {'precision': 0.9818181818181818, 'recall': 0.9310344827586207, 'f1': 0.9557522123893805, 'number': 58} | {'precision': 0.8703703703703703, 'recall': 0.9591836734693877, 'f1': 0.912621359223301, 'number': 49} | {'precision': 0.8672566371681416, 'recall': 0.9423076923076923, 'f1': 0.9032258064516129, 'number': 104} | {'precision': 0.8333333333333334, 'recall': 0.9090909090909091, 'f1': 0.8695652173913043, 'number': 11} | {'precision': 0.7027027027027027, 'recall': 0.8041237113402062, 'f1': 0.7499999999999999, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.9090909090909091, 'recall': 0.7792207792207793, 'f1': 0.8391608391608392, 'number': 77} | {'precision': 0.7473684210526316, 'recall': 0.922077922077922, 'f1': 0.8255813953488372, 'number': 77} | {'precision': 0.9333333333333333, 'recall': 0.7777777777777778, 'f1': 0.8484848484848485, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7692307692307693, 'recall': 0.9523809523809523, 'f1': 0.8510638297872339, 'number': 21} | {'precision': 0.7307692307692307, 'recall': 0.7307692307692307, 'f1': 0.7307692307692306, 'number': 26} | {'precision': 0.8656716417910447, 'recall': 0.90625, 'f1': 0.8854961832061069, 'number': 64} | {'precision': 0.7391304347826086, 'recall': 0.7083333333333334, 'f1': 0.723404255319149, 'number': 24} | {'precision': 0.7142857142857143, 'recall': 0.851063829787234, 'f1': 0.7766990291262136, 'number': 47} | 0.9138 | 0.9191 | 0.9164 | 0.9535 |
| 0.0101 | 20.31 | 1300 | 0.2810 | {'precision': 0.9521162890123984, 'recall': 0.9376842105263158, 'f1': 0.9448451421298261, 'number': 2375} | {'precision': 0.8076923076923077, 'recall': 0.84, 'f1': 0.8235294117647058, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 35} | {'precision': 0.9473684210526315, 'recall': 0.9310344827586207, 'f1': 0.9391304347826087, 'number': 58} | {'precision': 0.94, 'recall': 0.9591836734693877, 'f1': 0.9494949494949495, 'number': 49} | {'precision': 0.8547008547008547, 'recall': 0.9615384615384616, 'f1': 0.9049773755656108, 'number': 104} | {'precision': 0.8333333333333334, 'recall': 0.9090909090909091, 'f1': 0.8695652173913043, 'number': 11} | {'precision': 0.672566371681416, 'recall': 0.7835051546391752, 'f1': 0.7238095238095238, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.9041095890410958, 'recall': 0.8571428571428571, 'f1': 0.88, 'number': 77} | {'precision': 0.7954545454545454, 'recall': 0.9090909090909091, 'f1': 0.8484848484848484, 'number': 77} | {'precision': 0.9166666666666666, 'recall': 0.6111111111111112, 'f1': 0.7333333333333334, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.8, 'recall': 0.8571428571428571, 'f1': 0.8275862068965518, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8333333333333334, 'recall': 0.9523809523809523, 'f1': 0.888888888888889, 'number': 21} | {'precision': 0.76, 'recall': 0.7307692307692307, 'f1': 0.7450980392156863, 'number': 26} | {'precision': 0.9047619047619048, 'recall': 0.890625, 'f1': 0.8976377952755906, 'number': 64} | {'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 24} | {'precision': 0.6440677966101694, 'recall': 0.8085106382978723, 'f1': 0.7169811320754716, 'number': 47} | 0.9192 | 0.9197 | 0.9194 | 0.9546 |
| 0.0079 | 21.88 | 1400 | 0.2989 | {'precision': 0.9542148053059478, 'recall': 0.9389473684210526, 'f1': 0.9465195246179965, 'number': 2375} | {'precision': 0.7857142857142857, 'recall': 0.88, 'f1': 0.830188679245283, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8717948717948718, 'recall': 0.9714285714285714, 'f1': 0.9189189189189189, 'number': 35} | {'precision': 0.9152542372881356, 'recall': 0.9310344827586207, 'f1': 0.923076923076923, 'number': 58} | {'precision': 0.9215686274509803, 'recall': 0.9591836734693877, 'f1': 0.9400000000000001, 'number': 49} | {'precision': 0.868421052631579, 'recall': 0.9519230769230769, 'f1': 0.908256880733945, 'number': 104} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 0.6410256410256411, 'recall': 0.7731958762886598, 'f1': 0.7009345794392524, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.9027777777777778, 'recall': 0.8441558441558441, 'f1': 0.87248322147651, 'number': 77} | {'precision': 0.7912087912087912, 'recall': 0.935064935064935, 'f1': 0.8571428571428572, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 0.96, 'recall': 0.9230769230769231, 'f1': 0.9411764705882353, 'number': 26} | {'precision': 0.8, 'recall': 1.0, 'f1': 0.888888888888889, 'number': 12} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.6896551724137931, 'recall': 0.9523809523809523, 'f1': 0.7999999999999999, 'number': 21} | {'precision': 0.7307692307692307, 'recall': 0.7307692307692307, 'f1': 0.7307692307692306, 'number': 26} | {'precision': 0.8888888888888888, 'recall': 0.875, 'f1': 0.8818897637795274, 'number': 64} | {'precision': 0.72, 'recall': 0.75, 'f1': 0.7346938775510204, 'number': 24} | {'precision': 0.7547169811320755, 'recall': 0.851063829787234, 'f1': 0.8, 'number': 47} | 0.9173 | 0.9216 | 0.9195 | 0.9543 |
| 0.0061 | 23.44 | 1500 | 0.3160 | {'precision': 0.9498284734133791, 'recall': 0.9326315789473684, 'f1': 0.9411514765243255, 'number': 2375} | {'precision': 0.9130434782608695, 'recall': 0.84, 'f1': 0.8749999999999999, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9444444444444444, 'recall': 0.9714285714285714, 'f1': 0.9577464788732395, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.9056603773584906, 'recall': 0.9795918367346939, 'f1': 0.9411764705882353, 'number': 49} | {'precision': 0.868421052631579, 'recall': 0.9519230769230769, 'f1': 0.908256880733945, 'number': 104} | {'precision': 0.7692307692307693, 'recall': 0.9090909090909091, 'f1': 0.8333333333333333, 'number': 11} | {'precision': 0.6578947368421053, 'recall': 0.7731958762886598, 'f1': 0.7109004739336494, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8904109589041096, 'recall': 0.8441558441558441, 'f1': 0.8666666666666666, 'number': 77} | {'precision': 0.7727272727272727, 'recall': 0.8831168831168831, 'f1': 0.8242424242424242, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9736842105263158, 'recall': 0.9487179487179487, 'f1': 0.9610389610389611, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.8, 'recall': 1.0, 'f1': 0.888888888888889, 'number': 12} | {'precision': 0.8, 'recall': 0.8571428571428571, 'f1': 0.8275862068965518, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8, 'recall': 0.9523809523809523, 'f1': 0.8695652173913043, 'number': 21} | {'precision': 0.7142857142857143, 'recall': 0.7692307692307693, 'f1': 0.7407407407407408, 'number': 26} | {'precision': 0.8636363636363636, 'recall': 0.890625, 'f1': 0.8769230769230768, 'number': 64} | {'precision': 0.8333333333333334, 'recall': 0.8333333333333334, 'f1': 0.8333333333333334, 'number': 24} | {'precision': 0.8181818181818182, 'recall': 0.7659574468085106, 'f1': 0.7912087912087913, 'number': 47} | 0.9199 | 0.9151 | 0.9175 | 0.9505 |
| 0.006 | 25.0 | 1600 | 0.2967 | {'precision': 0.9514893617021276, 'recall': 0.9414736842105264, 'f1': 0.9464550264550264, 'number': 2375} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8947368421052632, 'recall': 0.9714285714285714, 'f1': 0.9315068493150684, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.9056603773584906, 'recall': 0.9795918367346939, 'f1': 0.9411764705882353, 'number': 49} | {'precision': 0.8839285714285714, 'recall': 0.9519230769230769, 'f1': 0.9166666666666665, 'number': 104} | {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11} | {'precision': 0.6410256410256411, 'recall': 0.7731958762886598, 'f1': 0.7009345794392524, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8918918918918919, 'recall': 0.8571428571428571, 'f1': 0.8741721854304636, 'number': 77} | {'precision': 0.8089887640449438, 'recall': 0.935064935064935, 'f1': 0.8674698795180723, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9736842105263158, 'recall': 0.9487179487179487, 'f1': 0.9610389610389611, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.7058823529411765, 'recall': 1.0, 'f1': 0.8275862068965517, 'number': 12} | {'precision': 0.7058823529411765, 'recall': 0.8571428571428571, 'f1': 0.7741935483870968, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7407407407407407, 'recall': 0.9523809523809523, 'f1': 0.8333333333333334, 'number': 21} | {'precision': 0.76, 'recall': 0.7307692307692307, 'f1': 0.7450980392156863, 'number': 26} | {'precision': 0.8769230769230769, 'recall': 0.890625, 'f1': 0.883720930232558, 'number': 64} | {'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 24} | {'precision': 0.7321428571428571, 'recall': 0.8723404255319149, 'f1': 0.7961165048543688, 'number': 47} | 0.9178 | 0.9234 | 0.9206 | 0.9543 |
| 0.0046 | 26.56 | 1700 | 0.2848 | {'precision': 0.9465422146796776, 'recall': 0.9393684210526316, 'f1': 0.9429416737109045, 'number': 2375} | {'precision': 0.84, 'recall': 0.84, 'f1': 0.8399999999999999, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.918918918918919, 'recall': 0.9714285714285714, 'f1': 0.9444444444444445, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.9038461538461539, 'recall': 0.9591836734693877, 'f1': 0.9306930693069307, 'number': 49} | {'precision': 0.8608695652173913, 'recall': 0.9519230769230769, 'f1': 0.9041095890410958, 'number': 104} | {'precision': 0.7692307692307693, 'recall': 0.9090909090909091, 'f1': 0.8333333333333333, 'number': 11} | {'precision': 0.6578947368421053, 'recall': 0.7731958762886598, 'f1': 0.7109004739336494, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8947368421052632, 'recall': 0.8831168831168831, 'f1': 0.8888888888888888, 'number': 77} | {'precision': 0.8068181818181818, 'recall': 0.922077922077922, 'f1': 0.8606060606060606, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7857142857142857, 'f1': 0.88, 'number': 14} | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.75, 'recall': 1.0, 'f1': 0.8571428571428571, 'number': 12} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8, 'recall': 0.9523809523809523, 'f1': 0.8695652173913043, 'number': 21} | {'precision': 0.7307692307692307, 'recall': 0.7307692307692307, 'f1': 0.7307692307692306, 'number': 26} | {'precision': 0.8636363636363636, 'recall': 0.890625, 'f1': 0.8769230769230768, 'number': 64} | {'precision': 0.72, 'recall': 0.75, 'f1': 0.7346938775510204, 'number': 24} | {'precision': 0.7551020408163265, 'recall': 0.7872340425531915, 'f1': 0.7708333333333333, 'number': 47} | 0.9154 | 0.9216 | 0.9185 | 0.9542 |
| 0.0046 | 28.12 | 1800 | 0.2978 | {'precision': 0.9498714652956298, 'recall': 0.9334736842105263, 'f1': 0.941601189212147, 'number': 2375} | {'precision': 0.875, 'recall': 0.84, 'f1': 0.8571428571428572, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.918918918918919, 'recall': 0.9714285714285714, 'f1': 0.9444444444444445, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.94, 'recall': 0.9591836734693877, 'f1': 0.9494949494949495, 'number': 49} | {'precision': 0.8839285714285714, 'recall': 0.9519230769230769, 'f1': 0.9166666666666665, 'number': 104} | {'precision': 0.7692307692307693, 'recall': 0.9090909090909091, 'f1': 0.8333333333333333, 'number': 11} | {'precision': 0.6147540983606558, 'recall': 0.7731958762886598, 'f1': 0.6849315068493151, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8888888888888888, 'recall': 0.8311688311688312, 'f1': 0.8590604026845637, 'number': 77} | {'precision': 0.7888888888888889, 'recall': 0.922077922077922, 'f1': 0.8502994011976047, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 12} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8333333333333334, 'recall': 0.9523809523809523, 'f1': 0.888888888888889, 'number': 21} | {'precision': 0.7307692307692307, 'recall': 0.7307692307692307, 'f1': 0.7307692307692306, 'number': 26} | {'precision': 0.8636363636363636, 'recall': 0.890625, 'f1': 0.8769230769230768, 'number': 64} | {'precision': 0.76, 'recall': 0.7916666666666666, 'f1': 0.7755102040816326, 'number': 24} | {'precision': 0.7142857142857143, 'recall': 0.851063829787234, 'f1': 0.7766990291262136, 'number': 47} | 0.9158 | 0.9163 | 0.9160 | 0.9528 |
| 0.0038 | 29.69 | 1900 | 0.2931 | {'precision': 0.9496587030716723, 'recall': 0.9372631578947368, 'f1': 0.9434202161474888, 'number': 2375} | {'precision': 0.875, 'recall': 0.84, 'f1': 0.8571428571428572, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8947368421052632, 'recall': 0.9714285714285714, 'f1': 0.9315068493150684, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.94, 'recall': 0.9591836734693877, 'f1': 0.9494949494949495, 'number': 49} | {'precision': 0.868421052631579, 'recall': 0.9519230769230769, 'f1': 0.908256880733945, 'number': 104} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 0.625, 'recall': 0.7731958762886598, 'f1': 0.6912442396313364, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8904109589041096, 'recall': 0.8441558441558441, 'f1': 0.8666666666666666, 'number': 77} | {'precision': 0.8068181818181818, 'recall': 0.922077922077922, 'f1': 0.8606060606060606, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 12} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.8, 'recall': 0.9523809523809523, 'f1': 0.8695652173913043, 'number': 21} | {'precision': 0.7307692307692307, 'recall': 0.7307692307692307, 'f1': 0.7307692307692306, 'number': 26} | {'precision': 0.890625, 'recall': 0.890625, 'f1': 0.890625, 'number': 64} | {'precision': 0.7916666666666666, 'recall': 0.7916666666666666, 'f1': 0.7916666666666666, 'number': 24} | {'precision': 0.7192982456140351, 'recall': 0.8723404255319149, 'f1': 0.7884615384615385, 'number': 47} | 0.9163 | 0.9197 | 0.9180 | 0.9537 |
| 0.004 | 31.25 | 2000 | 0.2942 | {'precision': 0.950406156477127, 'recall': 0.936, 'f1': 0.9431480695799747, 'number': 2375} | {'precision': 0.875, 'recall': 0.84, 'f1': 0.8571428571428572, 'number': 25} | {'precision': 1.0, 'recall': 0.4, 'f1': 0.5714285714285715, 'number': 5} | {'precision': 0.5, 'recall': 0.75, 'f1': 0.6, 'number': 4} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.918918918918919, 'recall': 0.9714285714285714, 'f1': 0.9444444444444445, 'number': 35} | {'precision': 0.9642857142857143, 'recall': 0.9310344827586207, 'f1': 0.9473684210526316, 'number': 58} | {'precision': 0.94, 'recall': 0.9591836734693877, 'f1': 0.9494949494949495, 'number': 49} | {'precision': 0.8761061946902655, 'recall': 0.9519230769230769, 'f1': 0.9124423963133641, 'number': 104} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 0.6495726495726496, 'recall': 0.7835051546391752, 'f1': 0.7102803738317757, 'number': 97} | {'precision': 1.0, 'recall': 0.6666666666666666, 'f1': 0.8, 'number': 15} | {'precision': 0.8918918918918919, 'recall': 0.8571428571428571, 'f1': 0.8741721854304636, 'number': 77} | {'precision': 0.8068181818181818, 'recall': 0.922077922077922, 'f1': 0.8606060606060606, 'number': 77} | {'precision': 0.9285714285714286, 'recall': 0.7222222222222222, 'f1': 0.8125000000000001, 'number': 18} | {'precision': 1.0, 'recall': 0.7142857142857143, 'f1': 0.8333333333333333, 'number': 14} | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 0.8846153846153846, 'f1': 0.9387755102040816, 'number': 26} | {'precision': 0.7857142857142857, 'recall': 0.9166666666666666, 'f1': 0.8461538461538461, 'number': 12} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 14} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.7692307692307693, 'recall': 0.9523809523809523, 'f1': 0.8510638297872339, 'number': 21} | {'precision': 0.7407407407407407, 'recall': 0.7692307692307693, 'f1': 0.7547169811320754, 'number': 26} | {'precision': 0.890625, 'recall': 0.890625, 'f1': 0.890625, 'number': 64} | {'precision': 0.76, 'recall': 0.7916666666666666, 'f1': 0.7755102040816326, 'number': 24} | {'precision': 0.7454545454545455, 'recall': 0.8723404255319149, 'f1': 0.803921568627451, 'number': 47} | 0.9186 | 0.9197 | 0.9192 | 0.9535 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.2.dev0
- Tokenizers 0.13.3
|
oknashar/arabertAutoModelForMaskedLM
|
oknashar
| 2023-07-15T19:04:04Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-15T18:26:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: arabertAutoModelForMaskedLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabertAutoModelForMaskedLM
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0000
- eval_runtime: 0.1606
- eval_samples_per_second: 24.901
- eval_steps_per_second: 6.225
- epoch: 4.0
- step: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
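For reference, a minimal sketch of how the configuration above maps onto a `transformers` masked-LM fine-tuning setup (the output directory is illustrative, and the undocumented train/eval datasets are omitted):
```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabert")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True)  # standard MLM masking

training_args = TrainingArguments(
    output_dir="arabertAutoModelForMaskedLM",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=150,
    lr_scheduler_type="linear",
    seed=42,
)
# These objects would then be passed to `Trainer` together with the (undocumented) datasets.
```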
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
monideep2255/spell_correction_M04_verification
|
monideep2255
| 2023-07-15T19:01:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T18:10:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: spell_correction_M04_verification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spell_correction_M04_verification
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
## Model description
More information needed
## Intended uses & limitations
More information needed
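In the absence of further documentation, one plausible way to query the checkpoint is through the text2text-generation pipeline (the misspelled input below is made up for illustration; the expected input format is not documented here):
```python
from transformers import pipeline

corrector = pipeline("text2text-generation",
                     model="monideep2255/spell_correction_M04_verification")
# Hypothetical example input
print(corrector("teh resturant was exellent", max_new_tokens=32))
```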
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 269 | 0.3070 |
| 1.8826 | 2.0 | 538 | 0.0769 |
| 1.8826 | 3.0 | 807 | 0.0592 |
| 0.0711 | 4.0 | 1076 | 0.0577 |
| 0.0711 | 5.0 | 1345 | 0.0563 |
| 0.04 | 6.0 | 1614 | 0.0562 |
| 0.04 | 7.0 | 1883 | 0.0560 |
| 0.0265 | 8.0 | 2152 | 0.0544 |
| 0.0265 | 9.0 | 2421 | 0.0540 |
| 0.0196 | 10.0 | 2690 | 0.0534 |
| 0.0196 | 11.0 | 2959 | 0.0548 |
| 0.015 | 12.0 | 3228 | 0.0552 |
| 0.015 | 13.0 | 3497 | 0.0578 |
| 0.0123 | 14.0 | 3766 | 0.0591 |
| 0.0116 | 15.0 | 4035 | 0.0578 |
| 0.0116 | 16.0 | 4304 | 0.0580 |
| 0.0091 | 17.0 | 4573 | 0.0592 |
| 0.0091 | 18.0 | 4842 | 0.0596 |
| 0.0088 | 19.0 | 5111 | 0.0605 |
| 0.0088 | 20.0 | 5380 | 0.0569 |
| 0.0074 | 21.0 | 5649 | 0.0598 |
| 0.0074 | 22.0 | 5918 | 0.0587 |
| 0.0078 | 23.0 | 6187 | 0.0589 |
| 0.0078 | 24.0 | 6456 | 0.0586 |
| 0.0068 | 25.0 | 6725 | 0.0588 |
| 0.0068 | 26.0 | 6994 | 0.0591 |
| 0.0076 | 27.0 | 7263 | 0.0590 |
| 0.0072 | 28.0 | 7532 | 0.0587 |
| 0.0072 | 29.0 | 7801 | 0.0587 |
| 0.0059 | 30.0 | 8070 | 0.0588 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
said10/classification_model_hotel_demo
|
said10
| 2023-07-15T18:56:41Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T18:50:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: said10/classification_model_hotel_demo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# said10/classification_model_hotel_demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5752
- Validation Loss: 0.5130
- Train Accuracy: 0.94
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 115, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
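For readability, the serialized optimizer above corresponds roughly to the following Keras construction (a reconstruction from the config dump, not the original training code):
```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=115,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```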
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1384 | 0.9133 | 0.8 | 0 |
| 0.7682 | 0.6438 | 0.88 | 1 |
| 0.5752 | 0.5130 | 0.94 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
san94/tiny-random-GPT2LMHeadModel-finetuned-corpus
|
san94
| 2023-07-15T18:32:11Z | 154 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-04T12:41:52Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: tiny-random-GPT2LMHeadModel-finetuned-corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-random-GPT2LMHeadModel-finetuned-corpus
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4497
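Since this is a causal-LM fine-tune, the evaluation loss can be read as a perplexity, assuming the reported value is the usual mean token-level cross-entropy:
```python
import math

eval_loss = 4.4497
perplexity = math.exp(eval_loss)  # ≈ 85.6
print(f"perplexity = {perplexity:.1f}")
```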
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4433 | 1.0 | 1063 | 4.2789 |
| 3.7013 | 2.0 | 2126 | 4.2512 |
| 3.0412 | 3.0 | 3189 | 4.4497 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Naruke/ppo-LunarLander-v2
|
Naruke
| 2023-07-15T18:25:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T18:24:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.25 +/- 14.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; check the repository files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify it against the files in this repository
checkpoint = load_from_hub(repo_id="Naruke/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
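A short evaluation sketch, reusing `model` loaded above and assuming the classic `gym` LunarLander-v2 environment used for training (requires `gym[box2d]`):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```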
|
NasimB/aggregate-all-best-so-far
|
NasimB
| 2023-07-15T18:23:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T16:26:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aggregate-all-best-so-far
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aggregate-all-best-so-far
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
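The cosine schedule with warmup listed above is what `Trainer` builds internally when `lr_scheduler_type` is `cosine`; a standalone sketch with an illustrative stand-in model and step count:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in; the real run optimizes the GPT-2 parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1000,      # warmup steps listed above
    num_training_steps=10_000,  # illustrative; the real value is the total number of optimizer steps
)
```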
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.686 | 0.3 | 500 | 5.6397 |
| 5.3431 | 0.6 | 1000 | 5.2192 |
| 5.0064 | 0.89 | 1500 | 4.9772 |
| 4.7469 | 1.19 | 2000 | 4.8431 |
| 4.5938 | 1.49 | 2500 | 4.7258 |
| 4.4972 | 1.79 | 3000 | 4.6345 |
| 4.3601 | 2.08 | 3500 | 4.5766 |
| 4.2 | 2.38 | 4000 | 4.5205 |
| 4.1717 | 2.68 | 4500 | 4.4612 |
| 4.1257 | 2.98 | 5000 | 4.4102 |
| 3.8873 | 3.28 | 5500 | 4.4068 |
| 3.8774 | 3.57 | 6000 | 4.3738 |
| 3.8522 | 3.87 | 6500 | 4.3392 |
| 3.6911 | 4.17 | 7000 | 4.3476 |
| 3.5905 | 4.47 | 7500 | 4.3367 |
| 3.5827 | 4.76 | 8000 | 4.3230 |
| 3.5304 | 5.06 | 8500 | 4.3246 |
| 3.3915 | 5.36 | 9000 | 4.3290 |
| 3.4003 | 5.66 | 9500 | 4.3258 |
| 3.3934 | 5.96 | 10000 | 4.3253 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nlp-lab-2023-seq2seq/R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77
|
nlp-lab-2023-seq2seq
| 2023-07-15T18:23:21Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T11:11:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- sacrebleu
- bleu
- rouge
model-index:
- name: R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1506
- Sacrebleu: 7.6134
- Bleu: 0.0761
- Rouge1: 0.3006
- Rouge2: 0.1038
- Rougel: 0.2079
- Sari: 39.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Bleu | Rouge1 | Rouge2 | Rougel | Sari |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:------:|:------:|:-------:|
| 6.9721 | 0.25 | 100 | 4.1739 | 1.8048 | 0.0180 | 0.1980 | 0.0611 | 0.1541 | 37.1235 |
| 3.8977 | 0.5 | 200 | 4.0984 | 1.2756 | 0.0128 | 0.2076 | 0.0678 | 0.1581 | 37.6186 |
| 4.035 | 0.75 | 300 | 4.0622 | 2.6499 | 0.0265 | 0.2271 | 0.0740 | 0.1741 | 38.1373 |
| 8.2055 | 0.99 | 400 | 4.0561 | 2.7363 | 0.0274 | 0.2332 | 0.0804 | 0.1716 | 38.0851 |
| 3.6957 | 1.24 | 500 | 4.0262 | 3.5110 | 0.0351 | 0.2560 | 0.0852 | 0.1852 | 37.9403 |
| 3.0846 | 1.49 | 600 | 4.0121 | 3.2967 | 0.0330 | 0.2471 | 0.0815 | 0.1799 | 37.5590 |
| 3.283 | 1.74 | 700 | 4.0510 | 3.8512 | 0.0385 | 0.2602 | 0.0917 | 0.1951 | 38.0037 |
| 4.7429 | 1.99 | 800 | 4.0048 | 3.4891 | 0.0349 | 0.2524 | 0.0850 | 0.1877 | 38.0324 |
| 3.024 | 2.24 | 900 | 3.9860 | 3.9202 | 0.0392 | 0.2633 | 0.0844 | 0.1891 | 37.9931 |
| 5.6861 | 2.49 | 1000 | 4.0493 | 4.4801 | 0.0448 | 0.2622 | 0.0878 | 0.1926 | 38.2052 |
| 3.6185 | 2.74 | 1100 | 4.0394 | 3.6710 | 0.0367 | 0.2608 | 0.0857 | 0.1866 | 37.9620 |
| 3.3582 | 2.98 | 1200 | 4.0004 | 5.1257 | 0.0513 | 0.2695 | 0.0922 | 0.1956 | 38.4845 |
| 5.0036 | 3.23 | 1300 | 4.0223 | 5.3256 | 0.0533 | 0.2752 | 0.0938 | 0.1975 | 38.6943 |
| 3.9904 | 3.48 | 1400 | 4.0040 | 5.0070 | 0.0501 | 0.2744 | 0.0927 | 0.1951 | 38.5338 |
| 3.1496 | 3.73 | 1500 | 4.0282 | 5.9234 | 0.0592 | 0.2803 | 0.0907 | 0.2002 | 38.2119 |
| 3.9604 | 3.98 | 1600 | 4.0253 | 5.1875 | 0.0519 | 0.2658 | 0.0864 | 0.1920 | 38.2336 |
| 2.9813 | 4.23 | 1700 | 4.0148 | 5.9589 | 0.0596 | 0.2891 | 0.0976 | 0.2028 | 38.8216 |
| 3.5448 | 4.48 | 1800 | 4.0071 | 5.2759 | 0.0528 | 0.2736 | 0.0867 | 0.1894 | 37.8800 |
| 3.6836 | 4.72 | 1900 | 4.0105 | 5.1414 | 0.0514 | 0.2750 | 0.0894 | 0.1982 | 38.3898 |
| 4.0471 | 4.97 | 2000 | 3.9788 | 5.5747 | 0.0557 | 0.2792 | 0.0932 | 0.1973 | 38.5705 |
| 3.3437 | 5.22 | 2100 | 4.0057 | 5.3969 | 0.0540 | 0.2827 | 0.0926 | 0.1978 | 38.3453 |
| 3.1657 | 5.47 | 2200 | 4.0439 | 5.4820 | 0.0548 | 0.2861 | 0.0946 | 0.2071 | 38.4004 |
| 2.5486 | 5.72 | 2300 | 4.0315 | 6.1738 | 0.0617 | 0.2896 | 0.0966 | 0.2048 | 38.5404 |
| 3.6148 | 5.97 | 2400 | 4.0056 | 6.5570 | 0.0656 | 0.2941 | 0.1046 | 0.2072 | 39.0698 |
| 3.1477 | 6.22 | 2500 | 4.0612 | 6.2221 | 0.0622 | 0.2806 | 0.0932 | 0.1998 | 38.5211 |
| 3.175 | 6.47 | 2600 | 4.0126 | 6.6920 | 0.0669 | 0.2916 | 0.1037 | 0.2122 | 39.1438 |
| 4.6616 | 6.71 | 2700 | 4.0467 | 6.0344 | 0.0603 | 0.2804 | 0.0953 | 0.1983 | 38.4171 |
| 3.109 | 6.96 | 2800 | 4.0420 | 5.8656 | 0.0587 | 0.2864 | 0.0983 | 0.2034 | 38.7225 |
| 3.0659 | 7.21 | 2900 | 4.0613 | 5.6029 | 0.0560 | 0.2839 | 0.0938 | 0.1980 | 38.7136 |
| 2.658 | 7.46 | 3000 | 4.0726 | 6.2791 | 0.0628 | 0.2824 | 0.0947 | 0.1972 | 38.6330 |
| 3.178 | 7.71 | 3100 | 4.0437 | 6.4351 | 0.0644 | 0.2924 | 0.0956 | 0.2032 | 38.6577 |
| 4.0606 | 7.96 | 3200 | 4.0644 | 6.6271 | 0.0663 | 0.2966 | 0.1019 | 0.2088 | 39.1513 |
| 3.664 | 8.21 | 3300 | 4.0615 | 6.3354 | 0.0634 | 0.2961 | 0.0981 | 0.2024 | 38.6904 |
| 2.8457 | 8.46 | 3400 | 4.0861 | 7.4278 | 0.0743 | 0.2975 | 0.1025 | 0.2017 | 39.0452 |
| 3.3883 | 8.7 | 3500 | 4.1037 | 6.4498 | 0.0645 | 0.2826 | 0.0955 | 0.2008 | 38.5961 |
| 5.4189 | 8.95 | 3600 | 4.1099 | 6.0065 | 0.0601 | 0.2946 | 0.0952 | 0.2020 | 38.6177 |
| 3.2093 | 9.2 | 3700 | 4.1074 | 6.2514 | 0.0625 | 0.2933 | 0.0942 | 0.2014 | 38.7227 |
| 3.9625 | 9.45 | 3800 | 4.0937 | 6.6653 | 0.0667 | 0.2912 | 0.0970 | 0.2020 | 38.4853 |
| 2.7172 | 9.7 | 3900 | 4.1130 | 6.1736 | 0.0617 | 0.2860 | 0.0898 | 0.1948 | 38.5064 |
| 2.4973 | 9.95 | 4000 | 4.0737 | 7.4889 | 0.0749 | 0.2986 | 0.1023 | 0.2060 | 39.2124 |
| 2.7371 | 10.2 | 4100 | 4.1032 | 6.4897 | 0.0649 | 0.2985 | 0.0990 | 0.2031 | 38.3514 |
| 3.9244 | 10.44 | 4200 | 4.0880 | 6.7268 | 0.0673 | 0.2906 | 0.1006 | 0.2012 | 38.6404 |
| 3.2153 | 10.69 | 4300 | 4.0961 | 6.7780 | 0.0678 | 0.2953 | 0.0977 | 0.2008 | 38.7091 |
| 3.0715 | 10.94 | 4400 | 4.1005 | 7.1435 | 0.0714 | 0.2870 | 0.0937 | 0.1950 | 38.5542 |
| 2.7833 | 11.19 | 4500 | 4.1112 | 7.5856 | 0.0759 | 0.3008 | 0.1037 | 0.2063 | 38.8659 |
| 5.6278 | 11.44 | 4600 | 4.0988 | 7.8870 | 0.0789 | 0.2962 | 0.1019 | 0.2025 | 38.8174 |
| 4.3557 | 11.69 | 4700 | 4.1049 | 7.9121 | 0.0791 | 0.3105 | 0.1076 | 0.2106 | 39.2476 |
| 3.4938 | 11.94 | 4800 | 4.1067 | 7.1602 | 0.0716 | 0.2961 | 0.1009 | 0.2039 | 38.9165 |
| 5.6848 | 12.19 | 4900 | 4.1140 | 7.8746 | 0.0787 | 0.2951 | 0.0996 | 0.2005 | 38.7719 |
| 3.4738 | 12.43 | 5000 | 4.0969 | 7.8672 | 0.0787 | 0.3055 | 0.1087 | 0.2092 | 39.0808 |
| 2.9039 | 12.68 | 5100 | 4.1185 | 7.6696 | 0.0767 | 0.3033 | 0.1071 | 0.2092 | 39.0788 |
| 4.4091 | 12.93 | 5200 | 4.1346 | 7.9896 | 0.0799 | 0.3014 | 0.1046 | 0.2070 | 39.2032 |
| 3.102 | 13.18 | 5300 | 4.1308 | 7.2969 | 0.0730 | 0.3030 | 0.1032 | 0.2039 | 39.1031 |
| 2.9972 | 13.43 | 5400 | 4.1518 | 7.7779 | 0.0778 | 0.3017 | 0.1053 | 0.2090 | 39.4092 |
| 2.7672 | 13.68 | 5500 | 4.1515 | 7.7545 | 0.0775 | 0.3010 | 0.1079 | 0.2091 | 39.0093 |
| 3.7358 | 13.93 | 5600 | 4.1360 | 7.5980 | 0.0760 | 0.2970 | 0.1036 | 0.2080 | 39.0873 |
| 3.4363 | 14.17 | 5700 | 4.1367 | 7.2901 | 0.0729 | 0.3013 | 0.1057 | 0.2084 | 39.3389 |
| 3.3451 | 14.42 | 5800 | 4.1500 | 7.5605 | 0.0756 | 0.2984 | 0.0979 | 0.2074 | 39.0107 |
| 2.8616 | 14.67 | 5900 | 4.1447 | 7.8204 | 0.0782 | 0.3020 | 0.1059 | 0.2127 | 39.7465 |
| 3.1149 | 14.92 | 6000 | 4.1506 | 7.6134 | 0.0761 | 0.3006 | 0.1038 | 0.2079 | 39.5909 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/LLaMa-7B-GGML
|
TheBloke
| 2023-07-15T18:15:35Z | 90 | 71 |
transformers
|
[
"transformers",
"llama",
"license:other",
"region:us"
] | null | 2023-05-17T12:59:21Z |
---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 7b GGML
These files are GGML format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB| 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB| 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB| 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB| 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB| 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB| 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB| 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB| 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
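## How to run from Python with `llama-cpp-python`
The GGML files can also be loaded from Python. The sketch below is illustrative only; it assumes a 2023 (pre-GGUF) release of [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) that still reads GGML v3 files, with the q4_0 file already downloaded locally:
```python
from llama_cpp import Llama

# model_path points at a locally downloaded quantisation from this repo.
llm = Llama(model_path="llama-7b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```
Set `n_gpu_layers=0` if you have no GPU acceleration, mirroring the `-ngl` flag above.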
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 7b
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
|
FabbriSimo01/Bloom_1b_Quantized
|
FabbriSimo01
| 2023-07-15T17:44:29Z | 1,552 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-15T17:37:01Z |
---
license: bigscience-bloom-rail-1.0
---
|
Hedayat-Abrishami/Reinforce-1
|
Hedayat-Abrishami
| 2023-07-15T17:39:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-08T22:25:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
infiniterik/desc-detoxify-sicon
|
infiniterik
| 2023-07-15T17:36:34Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T15:23:29Z |
---
license: apache-2.0
language:
- en
---
# `infiniterik/desc-detoxify-sicon`
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned instance of [T5-Large](https://huggingface.co/t5-large) for detoxifying discourse surrounding the abortion debate.
Implementation and ethical considerations are listed in the paper [Detoxifying Online Discourse: A Guided Response Generation Approach for Reducing Toxicity in User-Generated Text](https://github.com/infiniterik/detoxify/blob/main/pdfs/detoxify-paper.pdf).
The GitHub repository can be found [here](https://www.github.com/infiniterik/detoxify).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@inproceedings{bose-etal-2023-detoxifying,
title = "Detoxifying Online Discourse: A Guided Response Generation Approach for Reducing Toxicity in User-Generated Text",
author = "Bose, Ritwik and Perera, Ian and Dorr, Bonnie",
booktitle = "Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.sicon-1.2",
pages = "9--14"
}
```
|
Tarel-HuggingFace/distilbert-base-uncased-finetuned-emotion
|
Tarel-HuggingFace
| 2023-07-15T17:35:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T14:22:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264675219632655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9265
- F1: 0.9265
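For quick inference, the fine-tuned checkpoint can be loaded through the `text-classification` pipeline. This is a minimal sketch; the example sentence is arbitrary:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Tarel-HuggingFace/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy to see you again!"))
```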
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8607 | 1.0 | 250 | 0.3269 | 0.9065 | 0.9033 |
| 0.2575 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asafaya/albert-xlarge-arabic
|
asafaya
| 2023-07-15T17:16:23Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"ar",
"masked-lm",
"dataset:oscar",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Xlarge
Arabic edition of ALBERT Xlarge pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's GitHub [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models' performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing them directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-xlarge-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-xlarge-arabic")
```
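For a quick sanity check, the masked-LM head can also be exercised through the `fill-mask` pipeline. This is an illustrative sketch; the example sentence is arbitrary:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="kuisailab/albert-xlarge-arabic")
print(fill_mask("عاصمة فرنسا هي [MASK]."))
```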
## Acknowledgement
Thanks to Google for providing a free TPU for the training process, and to Hugging Face for hosting these models on their servers 😊
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_v3
|
phatjk
| 2023-07-15T17:12:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T17:12:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TheBloke/LLaMa-13B-GGML
|
TheBloke
| 2023-07-15T17:09:15Z | 27 | 19 |
transformers
|
[
"transformers",
"llama",
"license:other",
"region:us"
] | null | 2023-05-17T12:59:31Z |
---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 13b GGML
These files are GGML format model files for [Meta's LLaMA 13b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-13b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-13b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB| 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB| 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB| 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB| 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| llama-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB| 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB| 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB| 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB| 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-13b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
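## How to run from Python with `ctransformers`
As an alternative Python route, the sketch below uses [ctransformers](https://github.com/marella/ctransformers); it is illustrative only and assumes a 2023 release of the library with GGML LLaMA support:
```python
from ctransformers import AutoModelForCausalLM

# Downloads the chosen quantisation file from this repo on first use.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/LLaMa-13B-GGML",
    model_file="llama-13b.ggmlv3.q4_0.bin",
    model_type="llama",
    gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

print(llm("### Instruction: Write a story about llamas\n### Response:", max_new_tokens=256))
```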
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 13b
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
|
steventrouble/EfficientZeroRemastered
|
steventrouble
| 2023-07-15T17:05:31Z | 0 | 1 | null |
[
"reinforcement-learning",
"arxiv:2111.00210",
"license:openrail",
"region:us"
] |
reinforcement-learning
| 2023-07-15T16:48:21Z |
---
license: openrail
pipeline_tag: reinforcement-learning
---
# EfficientZero Remastered
This repo contains the pre-trained models for the EfficientZero Remastered
project from Gigglebit Studios, a project to stabilize the training process
for the state-of-the-art EfficientZero model.
* [Training source code](https://github.com/steventrouble/EfficientZero)
* [About the project](https://www.gigglebit.net/blog/efficientzero.html)
* [About EfficientZero](https://arxiv.org/abs/2111.00210)
* [About Gigglebit](https://www.gigglebit.net/)
Huge thanks to [Stability AI](https://stability.ai/) for providing the compute
for this project!
---
## How to use these files
Download the model that you want to test, then run test.py to test the model.
_Note: We've only productionized the training process. If you want to use these
for inference in production, you'll need to write your own inference logic.
If you do, send us a PR and we'll add it to the repo!_
Files are labeled as follows:
```
{gym_env}-s{seed}-e{env_steps}-t{train_steps}
```
Where:
* `gym_env`: The string ID of the gym environment this model was trained on.
E.g. Breakout-v5
* `seed`: The seed that was used to train this model. Usually 0.
* `env_steps`: The total number of steps in the environment that this model
observed, usually 100k.
* `train_steps`: The total number of training epochs the model underwent.
Note that `env_steps` can differ from `train_steps` because the model can
continue fine-tuning using its replay buffer. In the paper, the last 20k
epochs are done in this manner. This isn't necessary outside of benchmarks
and in theory better performance should be attainable by getting more samples
from the env.
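As a concrete illustration of this naming scheme, the small sketch below pulls the fields out of a checkpoint name (the example filename is hypothetical):
```python
import re

# Mirrors the {gym_env}-s{seed}-e{env_steps}-t{train_steps} scheme described above.
PATTERN = re.compile(r"^(?P<gym_env>.+)-s(?P<seed>\d+)-e(?P<env_steps>\d+)-t(?P<train_steps>\d+)$")

fields = PATTERN.match("Breakout-v5-s0-e100000-t120000").groupdict()
print(fields)
# {'gym_env': 'Breakout-v5', 'seed': '0', 'env_steps': '100000', 'train_steps': '120000'}
```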
---
## Findings
Our primary goal in this project was to test out EfficientZero and see its capabilities.
We were amazed by the model overall, especially on Breakout, where it far outperformed
the human baseline. The overall cost was only about $50 per fully trained model, compared
to the hundreds of thousands of dollars needed to train MuZero.
Though the trained models achieved impressive scores in Atari, they didn't reach the
stellar scores demonstrated in the paper. This could be because we used different hardware
and dependencies or because ML research papers tend to cherry-pick models and environments
to showcase good results.
Additionally, the models tended to hit a performance wall between 75k and 100k steps. While we
don't have enough data to know why or how often this happens, it's not surprising: the model
was tuned specifically for data efficiency, so it hasn't been tested at larger scales. A
model like MuZero might be more appropriate if you have a large budget.
Training times seemed longer than those reported in the EfficientZero paper. The paper
stated that they could train a model to completion in 7 hours, while in practice, we've found
that it takes an A100 with 32 cores between 1 and 2 days to train a model to completion. This
is likely because the training process uses more CPU than other models and therefore does not
perform well on the low-frequency, many-core CPUs found in GPU clusters.
|
gfsggdg88677/lora
|
gfsggdg88677
| 2023-07-15T17:02:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T11:14:40Z |
---
license: creativeml-openrail-m
---
|
BrainTheos/whisper-tiny-ln-ojpl-2
|
BrainTheos
| 2023-07-15T16:57:22Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:BrainTheos/ojpl",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-15T15:32:49Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- BrainTheos/ojpl
metrics:
- wer
model-index:
- name: whisper-tiny-ln-ojpl-2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: BrainTheos/ojpl
type: BrainTheos/ojpl
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.4351648351648352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ln-ojpl-2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the BrainTheos/ojpl dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2661
- Wer Ortho: 50.1855
- Wer: 0.4352
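For quick inference, the checkpoint can be loaded through the `automatic-speech-recognition` pipeline. This is a minimal sketch; the audio path is only a placeholder:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="BrainTheos/whisper-tiny-ln-ojpl-2")
print(asr("sample.wav"))  # path to a local audio file
```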
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.1767 | 11.36 | 500 | 0.9122 | 52.1142 | 0.4579 |
| 0.0191 | 22.73 | 1000 | 1.0786 | 53.7463 | 0.4538 |
| 0.0059 | 34.09 | 1500 | 1.1891 | 53.2641 | 0.4766 |
| 0.0019 | 45.45 | 2000 | 1.2661 | 50.1855 | 0.4352 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jeremyvictor/mt5-base-gramatika-final-e8-b16
|
jeremyvictor
| 2023-07-15T16:50:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T15:56:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-gramatika-final-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-gramatika-final-e8-b16
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2117
- Rouge1: 66.7567
- Rouge2: 59.3343
- Rougel: 66.4993
- Rougelsum: 66.5275
- Gen Len: 18.5566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9122 | 0.37 | 300 | 0.3395 | 63.1315 | 53.1537 | 62.8285 | 62.8152 | 18.5833 |
| 0.4611 | 0.73 | 600 | 0.2870 | 64.8744 | 56.0545 | 64.604 | 64.6011 | 18.5676 |
| 0.3866 | 1.1 | 900 | 0.2690 | 65.2446 | 56.534 | 64.9389 | 64.9484 | 18.5414 |
| 0.2833 | 1.46 | 1200 | 0.2424 | 65.6718 | 57.2619 | 65.4044 | 65.4076 | 18.5566 |
| 0.2633 | 1.83 | 1500 | 0.2240 | 65.7057 | 57.6829 | 65.4464 | 65.4601 | 18.5524 |
| 0.2126 | 2.2 | 1800 | 0.2350 | 66.1634 | 58.4004 | 65.9254 | 65.9147 | 18.5582 |
| 0.1787 | 2.56 | 2100 | 0.2176 | 66.4508 | 58.8845 | 66.1886 | 66.199 | 18.5571 |
| 0.175 | 2.93 | 2400 | 0.2151 | 66.1987 | 58.632 | 65.9844 | 65.995 | 18.5603 |
| 0.1231 | 3.29 | 2700 | 0.2227 | 66.6365 | 59.1886 | 66.4067 | 66.4293 | 18.5571 |
| 0.1195 | 3.66 | 3000 | 0.2117 | 66.7567 | 59.3343 | 66.4993 | 66.5275 | 18.5566 |
| 0.1146 | 4.02 | 3300 | 0.2197 | 66.9385 | 59.8666 | 66.7575 | 66.7651 | 18.5556 |
| 0.0757 | 4.39 | 3600 | 0.2235 | 66.8918 | 59.768 | 66.7208 | 66.7282 | 18.5608 |
| 0.0772 | 4.76 | 3900 | 0.2270 | 67.0955 | 59.9474 | 66.8681 | 66.8905 | 18.5566 |
| 0.0688 | 5.12 | 4200 | 0.2431 | 67.2444 | 60.2703 | 67.0501 | 67.0676 | 18.5550 |
| 0.0512 | 5.49 | 4500 | 0.2439 | 67.198 | 60.2026 | 67.0128 | 67.0433 | 18.5535 |
| 0.0523 | 5.85 | 4800 | 0.2362 | 67.3463 | 60.4479 | 67.1385 | 67.1792 | 18.5592 |
| 0.0408 | 6.22 | 5100 | 0.2587 | 67.4973 | 60.7533 | 67.305 | 67.3418 | 18.5624 |
| 0.0324 | 6.59 | 5400 | 0.2502 | 67.6102 | 60.905 | 67.428 | 67.4547 | 18.5566 |
| 0.0336 | 6.95 | 5700 | 0.2583 | 67.531 | 60.7718 | 67.355 | 67.3762 | 18.5587 |
| 0.0236 | 7.32 | 6000 | 0.2710 | 67.5641 | 60.7633 | 67.3445 | 67.3835 | 18.5603 |
| 0.0222 | 7.68 | 6300 | 0.2729 | 67.5898 | 60.8587 | 67.3926 | 67.4234 | 18.5608 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NAB1108/BITS_ClockTower
|
NAB1108
| 2023-07-15T16:40:30Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-15T14:59:57Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
# BITS Pilani Clock Tower Model by Nitin Birur
<!-- Provide a quick summary of what the model is/does. -->
This is a Stable Diffusion model fine-tuned on the iconic BITS Pilani Clock Tower.
It can be used by including the instance prompt: bitsclck tower
## Model Details
### Model Description
Use the BITS Pilani Clock Tower in any image.
- **Developed by:** Nitin Birur
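A minimal generation sketch with `diffusers` (the prompt wording beyond the instance prompt is just an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("NAB1108/BITS_ClockTower", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of bitsclck tower at sunset").images[0]
image.save("bits_clock_tower.png")
```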
|
0sunfire0/ppo-PyramidsTraining_00
|
0sunfire0
| 2023-07-15T16:36:05Z | 20 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-15T16:36:02Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 0sunfire0/ppo-PyramidsTraining_00
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chainsurfer/q-FrozenLake-v1-4x4-noSlippery
|
chainsurfer
| 2023-07-15T16:23:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T16:23:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="chainsurfer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NotAgain0/ppo-LunarLander-v2
|
NotAgain0
| 2023-07-15T16:21:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T16:21:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -166.20 +/- 21.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename assumes the usual `huggingface_sb3` naming convention, so check the repo's file list if it differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and load it with SB3.
checkpoint = load_from_hub(repo_id="NotAgain0/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
digiplay/Remedy
|
digiplay
| 2023-07-15T16:04:57Z | 320 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-15T00:58:04Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/87025
Original Author's DEMO images:




Sample image I made through Hugging Face's API:

|
NasimB/children-rarity-all-guten-rarity-all-2p5k
|
NasimB
| 2023-07-15T16:00:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T14:01:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children-rarity-all-guten-rarity-all-2p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children-rarity-all-guten-rarity-all-2p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7039 | 0.29 | 500 | 5.6480 |
| 5.3358 | 0.59 | 1000 | 5.2080 |
| 4.9956 | 0.88 | 1500 | 4.9573 |
| 4.7225 | 1.17 | 2000 | 4.8060 |
| 4.5557 | 1.47 | 2500 | 4.6798 |
| 4.4478 | 1.76 | 3000 | 4.5744 |
| 4.3246 | 2.05 | 3500 | 4.4978 |
| 4.133 | 2.35 | 4000 | 4.4463 |
| 4.107 | 2.64 | 4500 | 4.3935 |
| 4.0654 | 2.93 | 5000 | 4.3409 |
| 3.8576 | 3.23 | 5500 | 4.3368 |
| 3.8053 | 3.52 | 6000 | 4.3112 |
| 3.7871 | 3.81 | 6500 | 4.2678 |
| 3.6811 | 4.11 | 7000 | 4.2724 |
| 3.5209 | 4.4 | 7500 | 4.2658 |
| 3.5172 | 4.69 | 8000 | 4.2488 |
| 3.4981 | 4.99 | 8500 | 4.2384 |
| 3.3366 | 5.28 | 9000 | 4.2518 |
| 3.3255 | 5.57 | 9500 | 4.2501 |
| 3.3248 | 5.87 | 10000 | 4.2492 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lrthomps/LunarLander-v2
|
lrthomps
| 2023-07-15T15:59:35Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T15:52:54Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -176.92 +/- 108.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'lrthomps/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
oknashar/my_awesome_eli5_clm-model
|
oknashar
| 2023-07-15T15:55:46Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T15:46:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Python/ACROSS-m2o-eng-small
|
Python
| 2023-07-15T15:53:27Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T10:19:10Z |
# ACROSS-m2o-eng-small
## How to use
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer
model = MT5ForConditionalGeneration.from_pretrained('Python/ACROSS-m2o-eng-small')
tokenizer = AutoTokenizer.from_pretrained('Python/ACROSS-m2o-eng-small', use_fast=False)
input_text = '冈山县的倉敷市整个泡在泥水之中,数千户人家停水停电 这是日本近30多年来因为降雨而造成的死亡人数最多的一次水灾。究竟为何如此严重?仍然是每个人心中的疑问。 日本一向被视为是“防灾强国”,日本人对地震、台风、海啸等自然灾难绝对不陌生。 但这次暴雨引发水灾和土石流,竟然出现如此惊人的天灾死亡人数,也令许多人感到震惊。 短短几日的降雨量达到整个7月正常降雨量的三倍之多 超大降雨 究其原因,首先是短时间之内的超大降雨。 日本气象厅上周对西日本多个地方发布“大雨特别警报”,警告西部地方会受到“数十年一遇”的豪大雨,结果一共有93个观测站录得史上雨量第一的纪录。 从上周四开始的短短几日之内,日本西部地区多个地方的降雨量达到整个7月正常降雨量的三倍之多。 日本此次降雨多个地方超过上千毫米,日本气象厅也将这次豪雨正式命名为“平成30年7月豪雨”。 一共有7万多人参与救灾工作 河川溃堤 此外,超大豪雨超过河川疏洪承受度,短时间涌入巨大水量造成河川溃堤,沿岸市镇整个泡在泥水之中。 日本《每日新闻》报道说,冈山县的小田川溃堤,至少4600户都被洪水淹没,许多长者逃生不及淹死在自己家中。 暴雨过后被毁坏的家园 回水现象 据《产经新闻》报导,冈山县仓敷市真备町内的高梁川各支流共有5处溃堤,是因为大雨让河川主流水位上升,导致原本要和主流汇集的的支流无法流入,因此溃堤淹没附近区域,这样的状况被称之为“回水现象”。 有专家指出,“回水现象”也是这次豪雨水灾如此严重的原因之一。 救难人员抓紧时间在土石堆和残垣断壁下搜寻抢救生还者 山体滑坡 除了超大豪雨之外,日本地形多山,还有板块和花岗岩地质层,不少民宅都建筑在山坡地,一旦遇上大雨容易发生山体滑坡现象。 《日本经济新闻》报道说,这次日本暴雨灾难,多个地方发生大规模山体滑坡灾害,导致遇难人数增加。 受灾区的15个县有大约12000人安置到学校和体育馆等避难中心 该报引述京都大学防灾研究所的应用地质学教授千木良雅弘分析说,灾区是花岗岩的分布地区,其表层由“风化花岗岩”砂土覆盖,一旦降雨,表层滑坡就成为土石流,涌入住宅区。 专家也指出,表层滑坡导致的灾害近年来频频发生,原因多半是局部性暴雨所导致,需要检讨是否要在可能发生表层滑坡的地区建设住宅。'
inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors='pt')
# Beam-search decoding; length_penalty < 1.0 nudges the model toward shorter outputs.
generate_ids = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
num_beams=5,
min_length=10,
length_penalty=0.8,
max_length=84
)
print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))
```
|
zhdwwf/roomtype
|
zhdwwf
| 2023-07-15T15:45:14Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-15T15:45:06Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomtype
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8181818127632141
---
# roomtype
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### architecture

#### empty room

#### indoor

|
efainman/q-Taxi-v3
|
efainman
| 2023-07-15T15:43:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T15:43:53Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the env id stored in the pickle is a Gym/Gymnasium id

# load_from_hub is assumed to be the Q-table loading helper from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="efainman/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
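Continuing from the snippet above, a short greedy-evaluation sketch; it assumes the pickled dictionary also exposes the learned table under a `qtable` key (that key name is an assumption, not documented in this card):
```python
import numpy as np

# Roll out one episode, always taking the greedy action from the (assumed) Q-table.
state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```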
|
Skiro/falcon-mini-ggml
|
Skiro
| 2023-07-15T15:42:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-15T01:07:21Z |
---
license: apache-2.0
---
# Model description
This model is the ggml v3 version of [falcon-mini-shakespeare](https://huggingface.co/jploski/falcon-mini-shakespeare) and [falcon40b-mini-shakespeare](https://huggingface.co/jploski/falcon40b-mini-shakespeare).
# Intended uses & limitations
This model is intended only to aid debugging of a GGML port of Falcon.
|
efainman/q-FrozenLake-v1-4x4-noSlippery
|
efainman
| 2023-07-15T15:39:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T15:39:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the env id stored in the pickle is a Gym/Gymnasium id

# load_from_hub is assumed to be the Q-table loading helper from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="efainman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
roborovski/phi-2-classifier
|
roborovski
| 2023-07-15T15:37:43Z | 28 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T00:07:23Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: phi-2-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-classifier
This model is a fine-tuned version of [bigcode/starencoder](https://huggingface.co/bigcode/starencoder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4538
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3098 | 1.0 | 1485 | 0.3670 | 0.89 |
| 0.4251 | 2.0 | 2970 | 0.3698 | 0.88 |
| 0.226 | 3.0 | 4455 | 0.4538 | 0.875 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
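The card does not document how the classifier is meant to be called; as an illustrative sketch (not part of the original card), the standard 🤗 text-classification pipeline should load the checkpoint, assuming it ships its classification head and label mapping:
```python
from transformers import pipeline

# Hypothetical usage; the expected input domain and label set are not documented above.
classifier = pipeline("text-classification", model="roborovski/phi-2-classifier")
print(classifier("def add(a, b):\n    return a + b"))  # placeholder input string
```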
|
jeremyvictor/mt5-large-gramatika-final-e8-b16
|
jeremyvictor
| 2023-07-15T15:35:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-15T13:43:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-large-gramatika-final-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-gramatika-final-e8-b16
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Rouge1: 66.308
- Rouge2: 58.8739
- Rougel: 66.1027
- Rougelsum: 66.1039
- Gen Len: 18.5592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.8846 | 0.37 | 300 | 0.2954 | 64.6179 | 55.294 | 64.2807 | 64.2792 | 18.5597 |
| 0.3711 | 0.73 | 600 | 0.2474 | 65.6388 | 57.2663 | 65.3219 | 65.3365 | 18.5592 |
| 0.2874 | 1.1 | 900 | 0.2193 | 65.8689 | 57.6871 | 65.5424 | 65.5719 | 18.5603 |
| 0.1953 | 1.46 | 1200 | 0.2131 | 66.0438 | 57.8166 | 65.7565 | 65.7705 | 18.5409 |
| 0.1919 | 1.83 | 1500 | 0.1999 | 66.308 | 58.8739 | 66.1027 | 66.1039 | 18.5592 |
| 0.1487 | 2.2 | 1800 | 0.2034 | 66.5939 | 59.0628 | 66.3361 | 66.3475 | 18.5592 |
| 0.1132 | 2.56 | 2100 | 0.2010 | 67.0441 | 59.8117 | 66.8455 | 66.8562 | 18.5487 |
| 0.1087 | 2.93 | 2400 | 0.2001 | 67.0048 | 59.7807 | 66.7885 | 66.7972 | 18.5535 |
| 0.0681 | 3.29 | 2700 | 0.2143 | 67.2327 | 60.2527 | 67.0047 | 67.0106 | 18.5556 |
| 0.0621 | 3.66 | 3000 | 0.2093 | 67.357 | 60.51 | 67.1561 | 67.1709 | 18.5466 |
| 0.062 | 4.02 | 3300 | 0.2157 | 67.4353 | 60.7193 | 67.2526 | 67.2554 | 18.5624 |
| 0.036 | 4.39 | 3600 | 0.2208 | 67.5469 | 60.8111 | 67.3457 | 67.3472 | 18.5503 |
| 0.0351 | 4.76 | 3900 | 0.2282 | 67.3835 | 60.4009 | 67.138 | 67.1612 | 18.5561 |
| 0.0297 | 5.12 | 4200 | 0.2370 | 67.4004 | 60.5787 | 67.2004 | 67.2087 | 18.5603 |
| 0.0193 | 5.49 | 4500 | 0.2446 | 67.5339 | 60.6808 | 67.3484 | 67.3737 | 18.5577 |
| 0.0185 | 5.85 | 4800 | 0.2483 | 67.5055 | 60.8104 | 67.3217 | 67.3443 | 18.5566 |
| 0.0134 | 6.22 | 5100 | 0.2563 | 67.5748 | 60.9475 | 67.3996 | 67.4081 | 18.5597 |
| 0.0114 | 6.59 | 5400 | 0.2585 | 67.6337 | 61.0146 | 67.4553 | 67.472 | 18.5482 |
| 0.0099 | 6.95 | 5700 | 0.2622 | 67.6613 | 61.037 | 67.4761 | 67.4843 | 18.5498 |
| 0.0067 | 7.32 | 6000 | 0.2728 | 67.7996 | 61.2206 | 67.6194 | 67.6282 | 18.5561 |
| 0.0052 | 7.68 | 6300 | 0.2802 | 67.8009 | 61.2862 | 67.6178 | 67.6357 | 18.5545 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
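Usage is not documented above; a minimal sketch, assuming the checkpoint is called like any mT5 seq2seq model (the example sentence and generation settings are placeholders):
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_id = "jeremyvictor/mt5-large-gramatika-final-e8-b16"
model = MT5ForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder input; the card does not specify whether a task prefix is required.
inputs = tokenizer("She go to school yesterday.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```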
|
Mistermango24/yiffymix_3.1V
|
Mistermango24
| 2023-07-15T15:05:10Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T14:58:42Z |
---
license: creativeml-openrail-m
---
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-oversampling-augmented
|
hafidikhsan
| 2023-07-15T15:04:25Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-15T15:03:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-oversampling-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-oversampling-augmented
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8966
- Accuracy: 0.7608
- F1: 0.7592
- Precision: 0.7591
- Recall: 0.7608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0067 | 1.0 | 313 | 1.0838 | 0.4368 | 0.3815 | 0.4622 | 0.4368 |
| 0.6538 | 2.0 | 626 | 1.1189 | 0.532 | 0.4887 | 0.5314 | 0.532 |
| 0.5285 | 3.0 | 939 | 0.6705 | 0.7184 | 0.7150 | 0.7153 | 0.7184 |
| 0.396 | 4.0 | 1252 | 0.7915 | 0.7416 | 0.7374 | 0.7397 | 0.7416 |
| 0.1296 | 5.0 | 1565 | 0.9171 | 0.7592 | 0.7565 | 0.7569 | 0.7592 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
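Usage is not documented above; a minimal sketch with the standard 🤗 audio-classification pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-oversampling-augmented",
)
print(classifier("sample_utterance.wav"))  # placeholder path to a spoken-English recording
```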
|