| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Jclementg/dqn-SpaceInvadersNoFrameskip-v4
|
Jclementg
| 2023-03-23T22:58:52Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T22:58:29Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 577.50 +/- 142.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Jclementg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Jclementg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Jclementg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
renbtt/distilbert-base-uncased-finetuned-alerts
|
renbtt
| 2023-03-23T22:32:43Z
| 103
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-21T19:22:39Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-alerts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-alerts
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0282
- Accuracy: 0.9875
- F1: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
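In the absence of documented usage, here is a minimal inference sketch; the repo id comes from this card, while the example input and the meaning of the output labels are assumptions:
```python
from transformers import pipeline

# The label set depends on the (unknown) fine-tuning data
classifier = pipeline("text-classification", model="renbtt/distilbert-base-uncased-finetuned-alerts")
print(classifier("Disk usage exceeded 90% on the production server"))
```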
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 10 | 0.0107 | 1.0 | 1.0 |
| No log | 2.0 | 20 | 0.0282 | 0.9875 | 0.9875 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
bertranddecoster/unit4
|
bertranddecoster
| 2023-03-23T22:31:27Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T22:24:54Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 458.30 +/- 94.32
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ElementBrawlerAI/tqc-PandaReachDense-v2
|
ElementBrawlerAI
| 2023-03-23T22:30:37Z
| 3
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T20:42:37Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.68 +/- 0.18
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaReachDense-v2**
This is a trained model of a **TQC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; note that TQC ships in the SB3-Contrib package, and the checkpoint filename follows the usual `<algo>-<env>.zip` convention, which is an assumption rather than something documented in this card:
```python
from sb3_contrib import TQC  # TQC lives in SB3-Contrib, not core stable-baselines3
from huggingface_sb3 import load_from_hub

# Assumed filename convention: "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="ElementBrawlerAI/tqc-PandaReachDense-v2", filename="tqc-PandaReachDense-v2.zip")
model = TQC.load(checkpoint)
```
|
YoanG/Pixelcopter-PLE-v0
|
YoanG
| 2023-03-23T22:28:35Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T21:40:00Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.40 +/- 15.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Absie/Reinforce-CartPole-v1
|
Absie
| 2023-03-23T21:55:45Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T21:55:36Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sanak/a2c-PandaReachDense-v2
|
sanak
| 2023-03-23T21:50:01Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T17:45:23Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.62 +/- 0.51
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention, which is an assumption rather than something documented in this card:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename convention: "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="sanak/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
NourEldin-Osama/t5-small-finetuned-text-simplification
|
NourEldin-Osama
| 2023-03-23T21:32:11Z
| 12
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wiki_auto",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-23T17:55:33Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiki_auto
model-index:
- name: t5-small-finetuned-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-text-simplification
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wiki_auto dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9119
- Sari: 57.2334
## Model description
More information needed
## Intended uses & limitations
More information needed
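In the absence of documented usage, a minimal inference sketch; whether the checkpoint expects a T5-style task prefix is not documented, so the raw input below is an assumption:
```python
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="NourEldin-Osama/t5-small-finetuned-text-simplification")
print(simplifier("The committee convened to deliberate upon the proposed amendments.")[0]["generated_text"])
```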
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sari |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.6567 | 1.0 | 23363 | 4.5102 | 58.1853 |
| 3.7655 | 2.0 | 46726 | 4.9119 | 57.2334 |
| 3.7498 | 3.0 | 70089 | 4.9119 | 57.2334 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
radames/FALdetector
|
radames
| 2023-03-23T21:15:52Z
| 0
| 0
| null |
[
"arxiv:1906.05856",
"license:apache-2.0",
"region:us"
] | null | 2023-03-17T19:22:57Z
|
---
license: apache-2.0
---
https://arxiv.org/abs/1906.05856
Important Note from: [https://peterwang512.github.io/FALdetector/](https://peterwang512.github.io/FALdetector/)
> # How to interpret the results
>
> Welcome! Computer vision algorithms often work well on some images, but fail on others. Ours is like this too. We believe our work is a significant step forward in detecting and undoing facial warping by image editing tools. However, there are still many hard cases, and this is by no means a solved problem.
>
> This is partly because our algorithm is trained on faces warped by the Face-aware Liquify tool in Photoshop, and will thus work well for these types of images, but not necessarily for others. We call this the "dataset bias" problem. Please see the paper for more details on this issue.
>
> While we trained our models with various data augmentation to be more robust to downstream operations such as resizing, jpeg compression and saturation/brightness changes, there are many other retouches (e.g. airbrushing) that can alter the low-level statistics of the images to make the detection a really hard one.
The weight files can be downloaded with the commands from https://github.com/PeterWang512/FALdetector/blob/master/weights/download_weights.sh:
```bash
wget https://www.dropbox.com/s/rb8zpvrbxbbutxc/global.pth?dl=0 -O ./weights/global.pth
wget https://www.dropbox.com/s/pby9dhpr6cqziyl/local.pth?dl=0 -O ./weights/local.pth
```
```
@inproceedings{wang2019detecting,
title={Detecting Photoshopped Faces by Scripting Photoshop},
author={Wang, Sheng-Yu and Wang, Oliver and Owens, Andrew and Zhang, Richard and Efros, Alexei A},
booktitle={ICCV},
year={2019}
}
```
|
KoRo8888/nikolai_gogol_bsd_locon
|
KoRo8888
| 2023-03-23T21:12:18Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-03-23T20:53:02Z
|
recommend to use "hard" version
activation Token "Nikolai" or "nikolai gogol"
Special thanks to CTD aka "closertodeath#1703" for testing the LoCon


|
Rooshan/Rooshan-mbart-large50_finetuned_it_en-it_es
|
Rooshan
| 2023-03-23T21:12:17Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-23T15:43:43Z
|
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Rooshan-mbart-large50_finetuned_it_en-it_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rooshan-mbart-large50_finetuned_it_en-it_es
This model is a fine-tuned version of [Rooshan/Rooshan-mbart-large50-finetuned-it-to-en](https://huggingface.co/Rooshan/Rooshan-mbart-large50-finetuned-it-to-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4150
- Bleu: 66.8317
- Gen Len: 23.6063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.4305 | 1.0 | 13632 | 0.4150 | 66.8317 | 23.6063 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Ellipsoul/ppo-SnowballTarget
|
Ellipsoul
| 2023-03-23T21:08:28Z
| 3
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-23T21:08:23Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: Ellipsoul/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
artbreguez/q-Taxi-v3
|
artbreguez
| 2023-03-23T20:51:48Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T20:51:42Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="artbreguez/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
thanhnguyenvn/distilbert-base-uncased-finetuned-cola
|
thanhnguyenvn
| 2023-03-23T20:51:21Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-23T19:44:31Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5523738743137101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8232
- Matthews Correlation: 0.5524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
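For reference, a hedged sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` and anything not listed above are assumptions left at their defaults:
```python
from transformers import TrainingArguments

# Mirrors only the hyperparameters reported in this card
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```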
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5373 | 0.4270 |
| 0.3452 | 2.0 | 1070 | 0.5037 | 0.4948 |
| 0.2281 | 3.0 | 1605 | 0.5574 | 0.5286 |
| 0.1693 | 4.0 | 2140 | 0.8080 | 0.5299 |
| 0.1285 | 5.0 | 2675 | 0.8232 | 0.5524 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
greg-szopinski/ppo-LunarLander-v2
|
greg-szopinski
| 2023-03-23T20:46:41Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T20:46:20Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.38 +/- 19.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention, which is an assumption rather than something documented in this card:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename convention: "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="greg-szopinski/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mnoukhov/gpt2-imdb-sentiment-classifier
|
mnoukhov
| 2023-03-23T20:44:51Z
| 364
| 6
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-23T19:21:49Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: gpt2-imdb-sentiment-classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-imdb-sentiment-classifier
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
This is comparable to [distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) and was trained with exactly the same [script](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb).
It achieves slightly lower loss (0.1703 vs. 0.1903) and slightly higher accuracy (0.9394 vs. 0.928).
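A minimal inference sketch; the repo id comes from this card, while the example review is an assumption:
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="mnoukhov/gpt2-imdb-sentiment-classifier")
print(sentiment("A surprisingly moving film with a terrific lead performance."))
```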
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1967 | 1.0 | 1563 | 0.1703 | 0.9394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.12.1
|
AbhirupGhosh/opus-mt-finetuned-en-hi
|
AbhirupGhosh
| 2023-03-23T20:22:38Z
| 30
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"Hindi",
"generated_from_keras_callback",
"en",
"hi",
"multilingual",
"dataset:HindiEnglishCorpora",
"arxiv:1706.03762",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-07-16T11:27:09Z
|
---
language:
- en
- hi
- multilingual
license: apache-2.0
tags:
- translation
- Hindi
- generated_from_keras_callback
datasets:
- HindiEnglishCorpora
---
# opus-mt-finetuned-en-hi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on the [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora).
## Model description
The model is a transformer similar to the architecture defined in [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs) by Vaswani et al.
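A hedged usage sketch follows; the translation direction (English to Hindi) is inferred from the repository name, so treat it as an assumption:
```python
from transformers import pipeline

# Direction assumed from the repo name (en -> hi)
translator = pipeline("translation", model="AbhirupGhosh/opus-mt-finetuned-en-hi")
print(translator("How are you today?")[0]["translation_text"])
```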
## Training and evaluation data
More information needed
## Training procedure
The model was trained on two NVIDIA Tesla A100 GPUs on Google's Vertex AI platform.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SAL83/ppo-SnowballTarget
|
SAL83
| 2023-03-23T20:11:25Z
| 1
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-22T23:14:57Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: SAL83/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
dimi1357/Reinforce-Pixelcopter-PLE-v0
|
dimi1357
| 2023-03-23T20:01:12Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T21:40:43Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 52.70 +/- 42.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JosuMSC/fake-news-detector
|
JosuMSC
| 2023-03-23T19:36:33Z
| 14
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"fake-news",
"sentence-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-26T12:19:08Z
|
---
license: apache-2.0
language:
- en
metrics:
- f1
tags:
- fake-news
- sentence-classification
---
|
Borismile/anime-diffusion-hypernetwork
|
Borismile
| 2023-03-23T19:33:26Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-03-23T10:08:02Z
|
---
license: apache-2.0
---
This is my first hypernetwork.


This hypernetwork was trained on a dataset of 1,655 images from anime films by Hayao Miyazaki.
It generates images at 512 x 512 resolution and is best used with the webui.
This model can be used for both image2image and text2image.


The advantages of this model are its small size and fast generation speed.
|
ElementBrawlerAI/a2c-PandaReachDense-v2
|
ElementBrawlerAI
| 2023-03-23T19:28:36Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T19:26:08Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.09 +/- 0.96
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention, which is an assumption rather than something documented in this card:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename convention: "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="ElementBrawlerAI/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
lora-library/dragon-ball-wufan
|
lora-library
| 2023-03-23T19:24:59Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-23T16:33:23Z
|
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: wufan
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - dragon-ball-wufan
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "wufan" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: wufan




|
Absie/dqn-SpaceInvadersNoFrameskip-v4
|
Absie
| 2023-03-23T19:20:16Z
| 2
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T08:01:54Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 561.50 +/- 45.06
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Absie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Absie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Absie
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.025),
('frame_stack', 5),
('gradient_steps', 1),
('learning_rate', 5e-05),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 8),
('normalize', False)])
```
|
jkeruotis/LitBERTa-uncased
|
jkeruotis
| 2023-03-23T19:18:47Z
| 15
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"lt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: lt
tags:
- exbert
license: mit
---
# LitBERTa uncased model
Not the best model, because of limited resources (trained on ~4.7 GB of data on an RTX 2070 8 GB for ~10 days), but it covers the special Lithuanian characters `ąčęėįšųūž`. A 128K vocabulary was chosen because the language has many word forms.
## How to use
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='jkeruotis/LitBERTa-uncased')
unmasker('lietuvių kalba yra viena iš <mask> kalbų pasaulyje.')
[{'sequence': 'lietuvių kalba yra viena iš populiariausių kalbų pasaulyje.',
'score': 0.13887910544872284,
'token': 9404,
'token_str': ' populiariausių'},
{'sequence': 'lietuvių kalba yra viena iš pirmaujančių kalbų pasaulyje.',
'score': 0.13532795011997223,
'token': 27431,
'token_str': ' pirmaujančių'},
{'sequence': 'lietuvių kalba yra viena iš seniausių kalbų pasaulyje.',
'score': 0.1184583529829979,
'token': 14775,
'token_str': ' seniausių'},
{'sequence': 'lietuvių kalba yra viena iš geriausių kalbų pasaulyje.',
'score': 0.09306756407022476,
'token': 5617,
'token_str': ' geriausių'},
{'sequence': 'lietuvių kalba yra viena iš nedaugelio kalbų pasaulyje.',
'score': 0.08187634497880936,
'token': 28150,
 'token_str': ' nedaugelio'}]
```
|
Buseak/canine_2303
|
Buseak
| 2023-03-23T19:16:00Z
| 714
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-23T18:36:40Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: canine_2303
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine_2303
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 0.9987
- Recall: 0.9982
- F1: 0.9985
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
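In the absence of documented usage, a hedged inference sketch; the label set comes from the undocumented fine-tuning data, and the example input is an assumption:
```python
from transformers import pipeline

# CANINE is character-level, so predictions come back per character span
tagger = pipeline("token-classification", model="Buseak/canine_2303")
print(tagger("Bu bir deneme cümlesidir."))
```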
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 244 | 0.0025 | 0.9819 | 0.9924 | 0.9871 | 0.9993 |
| No log | 2.0 | 488 | 0.0018 | 0.9855 | 0.9925 | 0.9890 | 0.9995 |
| 0.0382 | 3.0 | 732 | 0.0014 | 0.9923 | 0.9891 | 0.9907 | 0.9996 |
| 0.0382 | 4.0 | 976 | 0.0009 | 0.9930 | 0.9931 | 0.9931 | 0.9997 |
| 0.0017 | 5.0 | 1220 | 0.0009 | 0.9922 | 0.9949 | 0.9936 | 0.9997 |
| 0.0017 | 6.0 | 1464 | 0.0007 | 0.9940 | 0.9952 | 0.9946 | 0.9998 |
| 0.0012 | 7.0 | 1708 | 0.0005 | 0.9947 | 0.9952 | 0.9949 | 0.9998 |
| 0.0012 | 8.0 | 1952 | 0.0005 | 0.9947 | 0.9955 | 0.9951 | 0.9998 |
| 0.0009 | 9.0 | 2196 | 0.0003 | 0.9959 | 0.9960 | 0.9959 | 0.9998 |
| 0.0009 | 10.0 | 2440 | 0.0003 | 0.9958 | 0.9963 | 0.9961 | 0.9998 |
| 0.0007 | 11.0 | 2684 | 0.0003 | 0.9971 | 0.9958 | 0.9965 | 0.9999 |
| 0.0007 | 12.0 | 2928 | 0.0003 | 0.9971 | 0.9962 | 0.9967 | 0.9999 |
| 0.0005 | 13.0 | 3172 | 0.0002 | 0.9974 | 0.9967 | 0.9971 | 0.9999 |
| 0.0005 | 14.0 | 3416 | 0.0002 | 0.9980 | 0.9972 | 0.9976 | 0.9999 |
| 0.0004 | 15.0 | 3660 | 0.0002 | 0.9982 | 0.9980 | 0.9981 | 0.9999 |
| 0.0004 | 16.0 | 3904 | 0.0002 | 0.9984 | 0.9974 | 0.9979 | 0.9999 |
| 0.0004 | 17.0 | 4148 | 0.0001 | 0.9984 | 0.9975 | 0.9979 | 0.9999 |
| 0.0004 | 18.0 | 4392 | 0.0001 | 0.9988 | 0.9982 | 0.9985 | 0.9999 |
| 0.0003 | 19.0 | 4636 | 0.0001 | 0.9987 | 0.9982 | 0.9985 | 0.9999 |
| 0.0003 | 20.0 | 4880 | 0.0001 | 0.9987 | 0.9982 | 0.9985 | 0.9999 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Stokrotka/poca-SoccerTwos
|
Stokrotka
| 2023-03-23T19:04:25Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-23T19:04:11Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Stokrotka/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
mrm8488/RuPERTa-base-finetuned-ner
|
mrm8488
| 2023-03-23T18:57:22Z
| 59
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"token-classification",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
language: es
thumbnail:
---
# RuPERTa-base (Spanish RoBERTa) + NER 🎃🏷
This model is a version of [RuPERTa-base](https://huggingface.co/mrm8488/RuPERTa-base) fine-tuned on [NER-C](https://www.kaggle.com/nltkdata/conll-corpora) for the **NER** downstream task.
## Details of the downstream task (NER) - Dataset
- [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora) 📚
| Dataset | # Examples |
| ---------------------- | ----- |
| Train | 329 K |
| Dev | 40 K |
- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py)
- Labels covered:
```
B-LOC
B-MISC
B-ORG
B-PER
I-LOC
I-MISC
I-ORG
I-PER
O
```
## Metrics on evaluation set 🧾
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **77.55** |
| Precision | **75.53** |
| Recall | **79.68** |
## Model in action 🔨
Example of usage:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
id2label = {
"0": "B-LOC",
"1": "B-MISC",
"2": "B-ORG",
"3": "B-PER",
"4": "I-LOC",
"5": "I-MISC",
"6": "I-ORG",
"7": "I-PER",
"8": "O"
}

# Load the fine-tuned tokenizer and model from this repository
tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
text ="Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states = outputs[0]
for m in last_hidden_states:
for index, n in enumerate(m):
if(index > 0 and index <= len(text.split(" "))):
print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())])
'''
Output:
--------
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''
```
Yeah! Not too bad 🎉
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
adorkin/xlm-roberta-en-ru-emoji
|
adorkin
| 2023-03-23T18:42:15Z
| 17
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:tweet_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
---
language:
- en
- ru
datasets:
- tweet_eval
model_index:
- name: xlm-roberta-en-ru-emoji
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: Tweet Eval
type: tweet_eval
args: emoji
widget:
- text: "Отлично!"
- text: "Awesome!"
- text: "lol"
---
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification
|
SmilingWolf/wd-v1-4-convnext-tagger-v2
|
SmilingWolf
| 2023-03-23T18:33:36Z
| 3,488
| 25
|
tf-keras
|
[
"tf-keras",
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-01-21T11:05:40Z
|
---
license: apache-2.0
---
# WD 1.4 ConvNext Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with fewer than 10 general tags were filtered out.
Tags with fewer than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3685, F1 = 0.6810`
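To use this operating point, keep every tag whose sigmoid score reaches the threshold. A hedged sketch with hypothetical tag names and scores (the actual preprocessing and inference pipeline are not covered by this card):
```python
import numpy as np

tag_names = ["1girl", "solo", "outdoors"]  # hypothetical tag vocabulary
scores = np.array([0.91, 0.47, 0.12])      # hypothetical sigmoid outputs

# P=R operating point reported above
threshold = 0.3685
predicted = [t for t, s in zip(tag_names, scores) if s >= threshold]
print(predicted)  # ['1girl', 'solo']
```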
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo.
|
SmilingWolf/wd-v1-4-vit-tagger-v2
|
SmilingWolf
| 2023-03-23T18:33:21Z
| 73
| 57
|
tf-keras
|
[
"tf-keras",
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-01-21T11:05:59Z
|
---
license: apache-2.0
---
# WD 1.4 ViT Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with fewer than 10 general tags were filtered out.
Tags with fewer than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3537, F1 = 0.6770`
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo.
|
rng0x17/qTable-Taxi-v3
|
rng0x17
| 2023-03-23T18:21:30Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T17:53:35Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qTable-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="grinsepilz/qTable-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
helling100/Regression_albert_5
|
helling100
| 2023-03-23T18:21:25Z
| 61
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-23T16:17:51Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_albert_5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_albert_5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1548
- Train Mae: 0.2765
- Train Mse: 0.1336
- Train R2-score: 0.7547
- Train Accuracy: 0.7462
- Validation Loss: 0.1908
- Validation Mae: 0.3787
- Validation Mse: 0.1894
- Validation R2-score: 0.8458
- Validation Accuracy: 0.4595
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Train Accuracy | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Validation Accuracy | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-------------------:|:-----:|
| 0.5723 | 0.3984 | 0.2343 | 0.4755 | 0.5923 | 0.1856 | 0.3686 | 0.1843 | 0.8559 | 0.4324 | 0 |
| 0.1822 | 0.2906 | 0.1403 | 0.7246 | 0.6538 | 0.1577 | 0.3485 | 0.1561 | 0.8714 | 0.9459 | 1 |
| 0.1765 | 0.2865 | 0.1376 | 0.6770 | 0.6538 | 0.1356 | 0.3325 | 0.1337 | 0.8808 | 0.9459 | 2 |
| 0.1959 | 0.2945 | 0.1383 | 0.6806 | 0.7308 | 0.2115 | 0.4054 | 0.2104 | 0.8366 | 0.3243 | 3 |
| 0.1698 | 0.2906 | 0.1408 | 0.7195 | 0.6231 | 0.1489 | 0.3371 | 0.1472 | 0.8726 | 0.9459 | 4 |
| 0.2081 | 0.2687 | 0.1178 | 0.7632 | 0.8385 | 0.2547 | 0.4572 | 0.2539 | 0.8046 | 0.3243 | 5 |
| 0.1806 | 0.3087 | 0.1554 | 0.7168 | 0.6462 | 0.1477 | 0.3401 | 0.1460 | 0.8757 | 0.9459 | 6 |
| 0.1910 | 0.3102 | 0.1559 | 0.7295 | 0.6308 | 0.1726 | 0.3544 | 0.1711 | 0.8602 | 0.8919 | 7 |
| 0.1697 | 0.2609 | 0.1132 | 0.7876 | 0.8538 | 0.1856 | 0.3694 | 0.1843 | 0.8537 | 0.5946 | 8 |
| 0.1548 | 0.2765 | 0.1336 | 0.7547 | 0.7462 | 0.1908 | 0.3787 | 0.1894 | 0.8458 | 0.4595 | 9 |
### Framework versions
- Transformers 4.27.2
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
LarryAIDraw/murkysLegsUpLora_1
|
LarryAIDraw
| 2023-03-23T18:18:27Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T18:17:09Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/14247/murkys-legs-up-lora
|
LarryAIDraw/facesittingGirlSitting_v1
|
LarryAIDraw
| 2023-03-23T18:15:55Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T18:06:43Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/11271/facesitting-girl-sitting-on-face
|
LarryAIDraw/doggystyleFromSide_dsv02
|
LarryAIDraw
| 2023-03-23T18:15:07Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T18:07:44Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/12961/doggystyle-from-side-view
|
LarryAIDraw/murkysAfterSexLying_1
|
LarryAIDraw
| 2023-03-23T18:14:23Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T18:08:25Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/18194/murkys-after-sex-lying-lora
|
abhilash1910/financial_roberta
|
abhilash1910
| 2023-03-23T18:11:53Z
| 20
| 5
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"fill-mask",
"finance",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
tags:
- finance
---
# Roberta Masked Language Model Trained On Financial Phrasebank Corpus
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a Financial Phrasebank Corpus.
The model is built using Huggingface transformers.
The model can be found at :[Financial_Roberta](https://huggingface.co/abhilash1910/financial_roberta)
## Specifications
The corpus for training is taken from the [Financial Phrasebank (Malo et al.)](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=56000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained using RobertaConfig from the transformers package.
The model is trained for 10 epochs with a per-GPU batch size of 64; the corresponding configuration is sketched below.
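For illustration, a hedged sketch of the configuration described above; only the listed values are set, and everything else stays at the library defaults:
```python
from transformers import RobertaConfig

# Matches the five-point specification list in this card
config = RobertaConfig(
    vocab_size=56000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
```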
## Usage Specifications
To use this model, first import the AutoTokenizer and AutoModelWithLMHead modules from transformers.
Then specify the pre-trained model, which in this case is 'abhilash1910/financial_roberta', for both the tokenizer and the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/financial_roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/financial_roberta")
```
After this the model will be downloaded; it may take some time to fetch all the model files.
To test the model, import the pipeline module from transformers and create a fill-mask model for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/financial_roberta')
model_mask("The company had a <mask> of 20% in 2020.")
```
Some examples with generic financial statements are also provided:
Example 1:
```python
model_mask("The company had a <mask> of 20% in 2020.")
```
Output:
```bash
[{'sequence': '<s>The company had a profit of 20% in 2020.</s>',
'score': 0.023112965747714043,
'token': 421,
'token_str': 'Ġprofit'},
{'sequence': '<s>The company had a loss of 20% in 2020.</s>',
'score': 0.021379893645644188,
'token': 616,
'token_str': 'Ġloss'},
{'sequence': '<s>The company had a year of 20% in 2020.</s>',
'score': 0.0185744296759367,
'token': 443,
'token_str': 'Ġyear'},
{'sequence': '<s>The company had a sales of 20% in 2020.</s>',
'score': 0.018143286928534508,
'token': 428,
'token_str': 'Ġsales'},
{'sequence': '<s>The company had a value of 20% in 2020.</s>',
'score': 0.015319528989493847,
'token': 776,
'token_str': 'Ġvalue'}]
```
Example 2:
```python
model_mask("The <mask> is listed under NYSE")
```
Output:
```bash
[{'sequence': '<s>The company is listed under NYSE</s>',
'score': 0.1566661298274994,
'token': 359,
'token_str': 'Ġcompany'},
{'sequence': '<s>The total is listed under NYSE</s>',
'score': 0.05542507395148277,
'token': 522,
'token_str': 'Ġtotal'},
{'sequence': '<s>The value is listed under NYSE</s>',
'score': 0.04729423299431801,
'token': 776,
'token_str': 'Ġvalue'},
{'sequence': '<s>The order is listed under NYSE</s>',
'score': 0.02533523552119732,
'token': 798,
'token_str': 'Ġorder'},
{'sequence': '<s>The contract is listed under NYSE</s>',
'score': 0.02087237872183323,
'token': 635,
'token_str': 'Ġcontract'}]
```
## Resources
For all resources, please look at the [HuggingFace](https://huggingface.co/) site and the [repositories](https://github.com/huggingface).
|
abhilash1910/french-roberta
|
abhilash1910
| 2023-03-23T18:11:39Z
| 20
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language:
- fr
tags:
- fill-mask
license: apache-2.0
---
# Roberta Trained Model For Masked Language Model On French Corpus :robot:
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a small French News Corpus(Leipzig corpora).
The model is built using Huggingface transformers.
The model can be found at :[French-Roberta](https://huggingface.co/abhilash1910/french-roberta)
## Specifications
The corpus for training is taken from the Leipzig Corpora (French News), and the model is trained on a small subset of the corpus (300K).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=32000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained using RobertaConfig from the transformers package. Total training parameters: 68,124,416.
The model is trained for 100 epochs with a per-GPU batch size of 64.
More details for building custom models can be found at the [HuggingFace Blog](https://huggingface.co/blog/how-to-train)
## Usage Specifications
To use this model, first import the AutoTokenizer and AutoModelWithLMHead modules from transformers.
Then specify the pre-trained model, which in this case is 'abhilash1910/french-roberta', for both the tokenizer and the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/french-roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/french-roberta")
```
After this the model will be downloaded; it may take some time to fetch all the model files.
To test the model, import the pipeline module from transformers and create a fill-mask model for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/french-roberta')
model_mask("Le tweet <mask>.")
```
Some examples with generic French sentences are also provided:
Example 1:
```python
model_mask("À ce jour, <mask> projet a entraîné")
```
Output:
```bash
[{'sequence': '<s>À ce jour, belles projet a entraîné</s>',
'score': 0.18685665726661682,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': '<s>À ce jour,- projet a entraîné</s>',
'score': 0.0005200508167035878,
'token': 17,
'token_str': '-'},
{'sequence': '<s>À ce jour, de projet a entraîné</s>',
'score': 0.00045729897101409733,
'token': 268,
'token_str': 'Ġde'},
{'sequence': '<s>À ce jour, du projet a entraîné</s>',
'score': 0.0004307595663703978,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': '<s>À ce jour," projet a entraîné</s>',
'score': 0.0004219160182401538,
'token': 6,
'token_str': '"'}]
```
Example 2:
```python
model_mask("C'est un <mask>")
```
Output:
```bash
[{'sequence': "<s>C'est un belles</s>",
'score': 0.16440927982330322,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': "<s>C'est un de</s>",
'score': 0.0005495127406902611,
'token': 268,
'token_str': 'Ġde'},
{'sequence': "<s>C'est un du</s>",
'score': 0.00044988933950662613,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': "<s>C'est un-</s>",
'score': 0.00044542422983795404,
'token': 17,
'token_str': '-'},
{'sequence': "<s>C'est un </s>",
'score': 0.00037563967634923756,
'token': 202,
'token_str': 'ĉ'}]
```
## Resources
For all resources, please look at the [HuggingFace](https://huggingface.co/) site and the [repositories](https://github.com/huggingface).
|
Piquimachay/pikimachay02
|
Piquimachay
| 2023-03-23T18:11:10Z
| 33
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-23T18:06:52Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Pikimachay02 Dreambooth model trained by Piquimachay with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
ShreyasM/vizdoom_defend_the_line
|
ShreyasM
| 2023-03-23T18:05:45Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T18:03:56Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_line
type: doom_defend_the_line
metrics:
- type: mean_reward
value: 12.60 +/- 5.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_line** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ShreyasM/vizdoom_defend_the_line
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# module path assumed; the original card recorded a notebook launcher here
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=vizdoom_defend_the_line
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# module path assumed; the original card recorded a notebook launcher here
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=vizdoom_defend_the_line --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000
|
vocabtrimmer
| 2023-03-23T17:52:08Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T19:28:53Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-fr | vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 93,725,955 |
| parameter_size_embedding | 192,001,536 | 7,681,536 |
| vocab_size | 250,002 | 10,002 |
| compression_rate_full | 100.0 | 33.71 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 10000 | 2 |
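The card provides no usage snippet. As a minimal inference sketch with `transformers` (assuming the trimmed tokenizer ships with the repo and the standard text-classification interface applies):

```python
from transformers import pipeline

# Minimal sketch -- assumes the trimmed tokenizer is bundled with the repo.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000",
)
print(classifier("J'adore ce film !"))  # e.g. [{'label': ..., 'score': ...}]
```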
|
ShreyasM/vizdoom_defend_the_center
|
ShreyasM
| 2023-03-23T17:39:13Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T17:38:41Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_center
type: doom_defend_the_center
metrics:
- type: mean_reward
value: 13.70 +/- 3.26
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_center** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ShreyasM/vizdoom_defend_the_center
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# module path assumed; the original card recorded a notebook launcher here
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_defend_the_center --train_dir=./train_dir --experiment=vizdoom_defend_the_center
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# module path assumed; the original card recorded a notebook launcher here
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_defend_the_center --train_dir=./train_dir --experiment=vizdoom_defend_the_center --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
nousr/robo-diffusion-2-base
|
nousr
| 2023-03-23T17:31:19Z
| 83
| 188
|
diffusers
|
[
"diffusers",
"robots",
"stable-diffusion",
"aiart",
"text-to-image",
"en",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T20:36:50Z
|
---
language:
- en
thumbnail: "https://huggingface.co/nousr/robo-diffusion/resolve/main/robo_example.png"
tags:
- robots
- stable-diffusion
- aiart
- text-to-image
license: "openrail++"
---
# Robo-Diffusion 2 (base)
A DreamBooth-method finetune of Stable Diffusion that outputs cool-looking robots when prompted.
<img src="https://huggingface.co/nousr/robo-diffusion-2-base/resolve/main/example_grid.png"/>
# Usage
Keep the words `nousr robot` towards the beginning of your prompt to invoke the finetuned style. Use negative prompts to achieve the best result.
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
scheduler = EulerDiscreteScheduler.from_pretrained("nousr/robo-diffusion-2-base", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained("nousr/robo-diffusion-2-base", scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "A realistic photograph of a 3d nousr robot in a modern city. A glossy white and orange nousr robot."
negative_prompt = "black and white robot, picture frame, a children's drawing in crayon. #Wholesale, Abstract Metal Sculpture. i'm leaving a bad review."
image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=32, guidance_scale=5.0).images[0]
image.save("robo.png")
```
# Original Model
The original model, based on Stable Diffusion 1.4, can be found [here](https://huggingface.co/nousr/robo-diffusion).
# Socials
Use the #robodiffusion hashtag so I can see the cool stuff you make!
If you enjoy the model, I'd appreciate a follow on [twitter](https://twitter.com/nousr_).
If you are feeling especially generous, you can sponsor me on [github](https://github.com/nousr).
---
*NOTE: ensure you have read the license and agree to the terms.*
|
wooseoko/clip-roberta-finetuned_GQA
|
wooseoko
| 2023-03-23T17:30:35Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-03-23T09:36:41Z
|
---
tags:
- generated_from_trainer
datasets:
- ./GQA_script.py
model-index:
- name: clip-roberta-finetuned_GQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned_GQA
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the ./GQA_script.py relation dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
SmilingWolf/wd-v1-4-convnextv2-tagger-v2
|
SmilingWolf
| 2023-03-23T17:09:39Z
| 142
| 40
|
tf-keras
|
[
"tf-keras",
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-03-19T11:19:38Z
|
---
license: apache-2.0
---
# WD 1.4 ConvNextV2 Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with fewer than 10 general tags were filtered out.
Tags with fewer than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3710, F1 = 0.6862`
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo.
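The card gives no inference snippet. A rough `onnxruntime` sketch follows; the 448×448 input size, BGR channel order, NHWC layout, and threshold usage are assumptions carried over from the sibling WD 1.4 taggers and should be verified against the repo files.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Assumed preprocessing: 448x448, float32 in 0-255, BGR, NHWC -- verify against the repo.
session = ort.InferenceSession("model.onnx")
image = Image.open("example.png").convert("RGB").resize((448, 448))
arr = np.asarray(image, dtype=np.float32)[:, :, ::-1]  # RGB -> BGR (assumed)
batch = np.ascontiguousarray(arr)[None]                # shape (1, 448, 448, 3)
probs = session.run(None, {session.get_inputs()[0].name: batch})[0][0]
keep = np.where(probs >= 0.3710)[0]  # P=R threshold reported above
```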
|
butchland/a2c-PandaReachDense-v2
|
butchland
| 2023-03-23T16:52:57Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T13:58:42Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -18.40 +/- 3.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
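As a starting point for the TODO above, a hypothetical loading sketch with `huggingface_sb3`; the checkpoint filename inside the repo is an assumption, so check the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="butchland/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```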
|
JamexX90/Cyclops_girl_LoRA
|
JamexX90
| 2023-03-23T16:46:59Z
| 0
| 3
| null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-01-31T07:57:10Z
|
---
license: cc-by-nc-4.0
---
https://civitai.com/models/5973
Don't sell anything using my LoRA
-
Don't claim it to be yours
-
At least credit me if you used it (my ego is fragile)
-
Do not create anything illegal with my LoRA (-_-)
-
and
good luck using my LoRA :D
have a good day to anyone reading this
-


|
mnavas/roberta-finetuned-WebClassification
|
mnavas
| 2023-03-23T16:42:27Z
| 5
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-22T11:28:59Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-WebClassification
results: []
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-WebClassification
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Web Classification Dataset](https://www.kaggle.com/datasets/hetulmehta/website-classification).
It achieves the following results on the evaluation set:
- Loss: 0.3473
- Accuracy: 0.9504
- F1: 0.9504
- Precision: 0.9504
- Recall: 0.9504
## Model description
The model classifies websites into the following categories:
- "0": "Adult",
- "1": "Business/Corporate",
- "2": "Computers and Technology",
- "3": "E-Commerce",
- "4": "Education",
- "5": "Food",
- "6": "Forums",
- "7": "Games",
- "8": "Health and Fitness",
- "9": "Law and Government",
- "10": "News",
- "11": "Photography",
- "12": "Social Networking and Messaging",
- "13": "Sports",
- "14": "Streaming Services",
- "15": "Travel"
## Intended uses & limitations
Web classification in English (for now).
## Training and evaluation data
Trained and tested on a 80/20 split of the [Web Classification Dataset](https://www.kaggle.com/datasets/hetulmehta/website-classification).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 141 | 0.9315 | 0.8617 | 0.8617 | 0.8617 | 0.8617 |
| No log | 2.0 | 282 | 0.4956 | 0.9007 | 0.9007 | 0.9007 | 0.9007 |
| No log | 3.0 | 423 | 0.4142 | 0.9184 | 0.9184 | 0.9184 | 0.9184 |
| 0.9036 | 4.0 | 564 | 0.3998 | 0.9255 | 0.9255 | 0.9255 | 0.9255 |
| 0.9036 | 5.0 | 705 | 0.3235 | 0.9397 | 0.9397 | 0.9397 | 0.9397 |
| 0.9036 | 6.0 | 846 | 0.3631 | 0.9397 | 0.9397 | 0.9397 | 0.9397 |
| 0.9036 | 7.0 | 987 | 0.3705 | 0.9362 | 0.9362 | 0.9362 | 0.9362 |
| 0.0898 | 8.0 | 1128 | 0.3469 | 0.9468 | 0.9468 | 0.9468 | 0.9468 |
| 0.0898 | 9.0 | 1269 | 0.3657 | 0.9326 | 0.9326 | 0.9326 | 0.9326 |
| 0.0898 | 10.0 | 1410 | 0.3473 | 0.9504 | 0.9504 | 0.9504 | 0.9504 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sanak/a2c-AntBulletEnv-v0
|
sanak
| 2023-03-23T16:32:59Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T16:31:53Z
|
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1736.23 +/- 84.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
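Pending the TODO above, a hypothetical loading sketch with `huggingface_sb3` (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="sanak/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```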
|
hoanglongvn/unit61
|
hoanglongvn
| 2023-03-23T16:26:17Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T16:25:05Z
|
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1207.05 +/- 141.76
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
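Pending the TODO above, a hypothetical loading sketch with `huggingface_sb3` (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="hoanglongvn/unit61",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```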
|
nikaashpuri/gpt-expt-sp-v3-K-600-MA-actions-kmeans-v2
|
nikaashpuri
| 2023-03-23T16:21:56Z
| 12
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-18T19:39:13Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt-expt-sp-v3-K-600-MA-actions-kmeans-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-expt-sp-v3-K-600-MA-actions-kmeans-v2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 0.1578 | 19.08 | 5000 | 0.0784 |
| 0.0432 | 38.17 | 10000 | 0.0289 |
| 0.1022 | 57.25 | 15000 | 0.0259 |
| 0.0257 | 76.34 | 20000 | 0.0203 |
| 0.0216 | 95.42 | 25000 | 0.0184 |
| 0.0196 | 114.5 | 30000 | 0.0177 |
| 0.0183 | 133.59 | 35000 | 0.0180 |
| 0.0178 | 152.67 | 40000 | 0.0171 |
| 0.0176 | 171.76 | 45000 | 0.0170 |
| 0.0174 | 190.84 | 50000 | 0.0169 |
| 0.0172 | 209.92 | 55000 | 0.0168 |
| 0.0171 | 229.01 | 60000 | 0.0168 |
| 0.017 | 248.09 | 65000 | 0.0167 |
| 0.0169 | 267.18 | 70000 | 0.0167 |
| 0.0169 | 286.26 | 75000 | 0.0166 |
| 0.0168 | 305.34 | 80000 | 0.0166 |
| 0.0168 | 324.43 | 85000 | 0.0166 |
| 0.0167 | 343.51 | 90000 | 0.0166 |
| 0.0167 | 362.6 | 95000 | 0.0165 |
| 0.0166 | 381.68 | 100000 | 0.0165 |
| 0.0166 | 400.76 | 105000 | 0.0165 |
| 0.0166 | 419.85 | 110000 | 0.0165 |
| 0.0165 | 438.93 | 115000 | 0.0165 |
| 0.0165 | 458.02 | 120000 | 0.0165 |
| 0.0165 | 477.1 | 125000 | 0.0165 |
| 0.0165 | 496.18 | 130000 | 0.0165 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Yasbok/Alpaca_instruction_fine_tune_Arabic
|
Yasbok
| 2023-03-23T16:10:08Z
| 0
| 11
|
transformers
|
[
"transformers",
"Alpaca",
"Instruction-fine-tuning",
"NLP",
"Instruct Alpaca",
"PEFT",
"LoRA",
"Instruction tuning",
"Pytorch",
"ar",
"dataset:Yasbok/Alpaca_arabic_instruct",
"endpoints_compatible",
"region:us"
] | null | 2023-03-19T00:41:32Z
|
---
datasets:
- Yasbok/Alpaca_arabic_instruct
language:
- ar
library_name: transformers
tags:
- Alpaca
- Instruction-fine-tuning
- NLP
- Instruct Alpaca
- PEFT
- LoRA
- Instruction tuning
- Pytorch
---
## How to use🦙:
```py
import torch
import bitsandbytes as bnb
from peft import PeftModel, PeftConfig, prepare_model_for_int8_training, LoraConfig, get_peft_model
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
peft_model_id = "Yasbok/Alpaca_instruction_fine_tune_Arabic"
# config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto",)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
# Based on the inference code by `tloen/alpaca-lora`
def generate_prompt(instruction, input=None):
    # Arabic Alpaca-style templates: "Below is an instruction that describes a task
    # (with an input providing further context). Write a response that appropriately
    # completes the request."
    if input:
        return f"""يوجد أدناه تعليمات تصف مهمة ، إلى جانب إدخال يوفر المزيد من السياق. اكتب ردًا يكمل الطلب بشكل مناسب.
### تعليمات:
{instruction}
### مدخل:
{input}
### انتاج:"""
    else:
        return f"""يوجد أدناه إرشادات تصف مهمة. يُرجى كتابة رد يكمل الطلب بشكل مناسب.
### تعليمات:
{instruction}
### انتاج:"""
# Inputs to instantiate the model:
generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.75,
    num_beams=4,
)
# Evaluate the model:
def evaluate(instruction, input=None):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        # "### انتاج:" marks the output section of the prompt
        print("انتاج:", output.split("### انتاج:")[1].strip())

evaluate(input("تعليمات: "))
```
|
emmuzoo/dqn-SpaceInvadersNoFrameskip-v4
|
emmuzoo
| 2023-03-23T16:08:41Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T15:24:59Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 467.50 +/- 191.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga emmuzoo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga emmuzoo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga emmuzoo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease
|
sarahmiller137
| 2023-03-23T15:57:02Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"named-entity-recognition",
"en",
"dataset:ncbi_disease",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-22T16:06:00Z
|
---
language: en
license: cc
tags:
- named-entity-recognition
- token-classification
task:
- named-entity-recognition
- token-classification
datasets: ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: " The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T patients and has long been associated with chromosomal instability."
---
## Model information:
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext model finetuned using the ncbi_disease dataset from the datasets library.
## Intended uses:
This model is intended to be used for named entity recognition tasks. It will identify disease entities in text and predict labels based on the NCBI-disease dataset; please see the dataset information for details.
## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs; it is recommended that the dataset paper and the base model card be reviewed before using the model -
- [NCBI Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf)
- [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)
## Widget text:
The text displayed in the example widget was taken from one of the NCBI dataset's abstracts.
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease")
model = AutoModel.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease")
```
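For end-to-end entity extraction (rather than raw features from `AutoModel`), a token-classification pipeline is a reasonable sketch; the aggregation strategy below is a generic choice, not something the card prescribes:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("The risk of cancer is substantially elevated in A-T patients."))
```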
|
sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease
|
sarahmiller137
| 2023-03-23T15:56:40Z
| 21
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"named-entity-recognition",
"entity_extraction",
"multi_class_classification",
"en",
"dataset:ncbi_disease",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-22T15:28:52Z
|
---
language: en
license: cc
tags:
- named-entity-recognition
- token-classification
- entity_extraction
- multi_class_classification
task:
- multi_class_classification
- entity_extraction
- named-entity-recognition
- token-classification
datasets: ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: " The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T patients and has long been associated with chromosomal instability."
---
## Model information:
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract model finetuned using the ncbi_disease dataset from the datasets library.
## Intended uses:
This model is intended to be used for named entity recognition tasks. It will identify disease entities in text and predict labels based on the NCBI-disease dataset; please see the dataset information for details.
## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs; it is recommended that the dataset paper and the base model card be reviewed before using the model -
- [NCBI Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf)
- [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract)
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease")
model = AutoModel.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease")
```
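As with the sibling model above, entity extraction is easiest through a token-classification pipeline; a sketch (the aggregation strategy is a generic choice, not prescribed by the card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("The risk of cancer is substantially elevated in A-T patients."))
```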
|
Lowkey17/cas
|
Lowkey17
| 2023-03-23T15:45:35Z
| 32
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-23T15:32:14Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### cas Dreambooth model trained by Lowkey17 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
aszfcxcgszdx/t5-large-en-de
|
aszfcxcgszdx
| 2023-03-23T15:37:12Z
| 13
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"translation",
"en",
"de",
"dataset:aszfcxcgszdx/autotrain-data-translator",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-13T19:48:55Z
|
---
tags:
- autotrain
- translation
language:
- en
- de
datasets:
- aszfcxcgszdx/autotrain-data-translator
co2_eq_emissions:
emissions: 4.2211417553362205
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Finetuned from: t5-large
- Model ID: 40847105640
- CO2 Emissions (in grams): 4.2211
## Validation Metrics
- Loss: 0.994
- SacreBLEU: 10.222
- Gen len: 16.562
|
vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg
|
vocabtrimmer
| 2023-03-23T15:35:29Z
| 106
| 0
|
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-19T02:04:19Z
|
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc."
example_title: "Question Generation Example 1"
- text: "Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la <hl> Grande Guerre <hl> de 14-18, ou son rejet par l'électorat en juillet 1945."
example_title: "Question Generation Example 2"
- text: "contre <hl> Normie Smith <hl> et 15 000 dollars le 28 novembre 1938."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 6.88
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 26.45
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 15.82
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 78.82
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 55.11
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-fr-30000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-30000) for the question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-fr-30000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-30000)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg")
# model prediction
questions = model.generate_q(list_context="Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.", list_answer="le Suprême Berger")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg")
output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 78.82 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 26.41 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 15 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 9.91 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 6.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 15.82 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 55.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 26.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-fr-30000
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-30000-frquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
SirVeggie/wlop_lora
|
SirVeggie
| 2023-03-23T15:25:22Z
| 0
| 5
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T14:30:45Z
|
---
license: creativeml-openrail-m
---
# WLOP lora
Original artist: https://www.patreon.com/wlop
### Lora details
I haven't tested extensively yet, but sv_wlop is the 14-epoch version that I've mainly used, and it works quite well.
The LoRA works pretty well at weight 1 and below.
A good negative is helpful for optimal results. Here are some options:
```
(low quality, worst quality:1.4), (bad anatomy), extra digit, fewer digits, (extra arms:1.2), bad hands, by (bad-artist:0.6), bad-image-v2-39000
[low quality easynegative|worst quality 3d]
```
https://huggingface.co/datasets/gsdf/EasyNegative \
https://huggingface.co/nick-x-hacker/bad-artist \
https://huggingface.co/Xynon/models/tree/main/experimentals/TI
### Keywords
wlop, aeolian, yan, jade
The keywords other than `wlop` don't have much effect, but describing the characters will make them appear to a degree.
## Images





|
BobMcDear/swin_base_window6_simmim_in1k_100ep_ft_in1k_192
|
BobMcDear
| 2023-03-23T15:18:41Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-03-23T15:15:28Z
|
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/swin_base_window7_simmim_in1k_100ep_ft_in1k_224
|
BobMcDear
| 2023-03-23T15:18:31Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-03-23T15:15:29Z
|
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/swin_base_window6_simmim_in1k_800ep_192
|
BobMcDear
| 2023-03-23T15:17:49Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-03-23T15:15:30Z
|
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
KoRo8888/Ohayou_Locon
|
KoRo8888
| 2023-03-23T15:01:31Z
| 0
| 6
| null |
[
"region:us"
] | null | 2023-03-23T14:35:31Z
|
512x512 with HiRes is most recommended
-
activation token is "Ohayou"
special thanks to "Wes#9704","Luna 🌙#9999","홍차세잔#9115","Vil#0404" for testing it!

|
TerryYH/ppo-LunarLander-v2
|
TerryYH
| 2023-03-23T14:57:54Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T14:57:33Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.30 +/- 14.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
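As a starting point for the TODO above, a hypothetical loading sketch with `huggingface_sb3` (the checkpoint filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="TerryYH/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```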
|
micromind/CIFAR-10
|
micromind
| 2023-03-23T14:53:05Z
| 0
| 0
| null |
[
"image-classification",
"en",
"dataset:cifar10",
"license:mit",
"region:us"
] |
image-classification
| 2023-02-22T09:40:22Z
|
---
license: mit
datasets:
- cifar10
language:
- en
pipeline_tag: image-classification
---
# micromind checkpoints for CIFAR-10
This repository contains checkpoints for the CIFAR-10 dataset for the following networks:
| Model | Top 1 Accuracy | Top 5 Accuracy |
| ------------------ |---------------- | -------------- |
| `PhiNet(alpha=3, beta=0.75, t_zero=6, num_layers=7, resolution=160)` | 93.61% | 99.77% |
| `PhiNet(alpha=0.75, beta=1, t_zero=6, num_layers=5, resolution=160)` | 86.8% | 99.5% |
| `PhiNet(alpha=0.35, beta=1, t_zero=6, num_layers=7, resolution=160)` | 88.08% | 99.48% |
| `PhiNet(alpha=0.25, beta=1, t_zero=6, num_layers=7, resolution=160)` | 84.97% | 99.3% |
| `PhiNet(alpha=0.25, beta=1, t_zero=5, num_layers=7, resolution=160)` | 83.01% | 99.2% |
To download and use this repo:
```python
from micromind import PhiNet
model = PhiNet.from_pretrained("CIFAR-10", alpha=3.0, beta=0.75, t_zero=6, num_layers=7, num_classes=10, resolution=160)
```
## Authors
- [@fpaissan](https://www.github.com/fpaissan)
- [@matteobeltrami](https://www.github.com/matteobeltrami)
|
unagui/taxiv0
|
unagui
| 2023-03-23T14:36:35Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T14:36:33Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiv0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="unagui/taxiv0", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Vaibhavoutat/ppo-Huggy
|
Vaibhavoutat
| 2023-03-23T14:20:31Z
| 9
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-03-23T14:20:23Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: Vaibhavoutat/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Periramm/ppo-PyramidsTraining
|
Periramm
| 2023-03-23T14:18:45Z
| 6
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-23T14:18:26Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Periramm/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kikijiki/q-FrozenLake-v1-4x4-noSlippery
|
kikijiki
| 2023-03-23T14:05:46Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T14:05:42Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="kikijiki/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hidude562/Wiki-Complexity
|
hidude562
| 2023-03-23T13:52:40Z
| 27
| 4
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:hidude562/autotrain-data-SimpleDetect",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-07T19:37:14Z
|
---
tags: autotrain
language: en
widget:
- text: "I quite enjoy using AutoTrain due to its simplicity."
datasets:
- hidude562/autotrain-data-SimpleDetect
co2_eq_emissions: 0.21691606119445225
---
# Model Description
This model detects whether you are writing in a style closer to Simple English Wikipedia or English Wikipedia. This can be extended to applications beyond Wikipedia and, to some extent, to other languages.
Please also note there is a major bias toward special characters (mainly the hyphen, but others as well), so I recommend removing them from your input text.
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 837726721
- CO2 Emissions (in grams): 0.21691606119445225
## Validation Metrics
- Loss: 0.010096958838403225
- Accuracy: 0.996223414828066
- Macro F1: 0.996179398826373
- Micro F1: 0.996223414828066
- Weighted F1: 0.996223414828066
- Macro Precision: 0.996179398826373
- Micro Precision: 0.996223414828066
- Weighted Precision: 0.996223414828066
- Macro Recall: 0.996179398826373
- Micro Recall: 0.996223414828066
- Weighted Recall: 0.996223414828066
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I quite enjoy using AutoTrain due to its simplicity."}' https://api-inference.huggingface.co/models/hidude562/Wiki-Complexity
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
inputs = tokenizer("I quite enjoy using AutoTrain due to its simplicity.", return_tensors="pt")
outputs = model(**inputs)
```
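Continuing from the snippet above, the logits can be mapped to a class label along these lines (a sketch; the exact label names come from the model config):

```python
import torch

pred = torch.argmax(outputs.logits, dim=-1).item()
print(model.config.id2label[pred])  # class index -> label name
```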
|
shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher
|
shahrukhx01
| 2023-03-23T13:38:20Z
| 3,214
| 10
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mpnet",
"feature-extraction",
"fuzzy-matching",
"fuzzy-search",
"entity-resolution",
"record-linking",
"structured-data-search",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z
|
---
tags:
- fuzzy-matching
- fuzzy-search
- entity-resolution
- record-linking
- structured-data-search
---
A Siamese BERT architecture trained on character-level tokens for embedding-based fuzzy matching.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
word1 = "fuzzformer"
word1 = " ".join([char for char in word1]) ## divide the word to char level to fuzzy match
word2 = "fizzformer"
word2 = " ".join([char for char in word2]) ## divide the word to char level to fuzzy match
words = [word1, word2]
model = SentenceTransformer('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
fuzzy_embeddings = model.encode(words)
print("Fuzzy Match score:")
print(util.cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
## Usage (HuggingFace Transformers)
```python
import torch
from transformers import AutoTokenizer, AutoModel
from torch import Tensor, device
def cos_sim(a: Tensor, b: Tensor):
    """
    borrowed from sentence transformers repo
    Computes the cosine similarity cos_sim(a[i], b[j]) for all i and j.
    :return: Matrix with res[i][j] = cos_sim(a[i], b[j])
    """
    if not isinstance(a, torch.Tensor):
        a = torch.tensor(a)
    if not isinstance(b, torch.Tensor):
        b = torch.tensor(b)
    if len(a.shape) == 1:
        a = a.unsqueeze(0)
    if len(b.shape) == 1:
        b = b.unsqueeze(0)
    a_norm = torch.nn.functional.normalize(a, p=2, dim=1)
    b_norm = torch.nn.functional.normalize(b, p=2, dim=1)
    return torch.mm(a_norm, b_norm.transpose(0, 1))
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Words we want fuzzy embeddings for
word1 = "fuzzformer"
word1 = " ".join([char for char in word1]) ## divide the word to char level to fuzzy match
word2 = "fizzformer"
word2 = " ".join([char for char in word2]) ## divide the word to char level to fuzzy match
words = [word1, word2]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
model = AutoModel.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
# Tokenize sentences
encoded_input = tokenizer(words, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
fuzzy_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Fuzzy Match score:")
print(cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
## ACKNOWLEDGEMENT
A big thank you to [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) as their implementation really expedited the implementation of Fuzzformer.
## Citation
To cite FuzzTransformer in your work, please use the following BibTeX reference:
```
@misc{shahrukhkhan2021fuzzTransformer,
  author = {Shahrukh Khan},
  title = {FuzzTransformer: A character level embedding based Siamese transformer for fuzzy string matching.},
  year = 2021,
  publisher = {Coming soon},
  doi = {Coming soon},
  url = {Coming soon}
}
```
|
kebei/q-FrozenLake-v1-4x4-noSlippery
|
kebei
| 2023-03-23T13:38:13Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T13:38:05Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="kebei/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
obsei-ai/sell-buy-intent-classifier-bert-mini
|
obsei-ai
| 2023-03-23T13:38:00Z
| 30
| 3
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"buy-intent",
"sell-intent",
"consumer-intent",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language: "en"
tags:
- buy-intent
- sell-intent
- consumer-intent
widget:
- text: "Can you please share pictures for Face Shields ? We are looking for large quantity pcs"
---
# Buy vs Sell Intent Classifier
| Train Loss | Validation Acc.| Test Acc.|
| ------------- |:-------------: | -----: |
| 0.013 | 0.988 | 0.992 |
# Sample Intents for Testing
LABEL_0 => **"SELLING_INTENT"** <br/>
LABEL_1 => **"BUYING_INTENT"**
## Buying Intents
- I am interested in this style of PGN-ES-D-6150 /Direct drive energy saving servo motor price and in doing business with you. Could you please send me the quotation
- Hi, I am looking for a supplier of calcium magnesium carbonate fertilizer. Can you send 1 bag sample via air freight to the USA?
- I am looking for the purple ombre dress with floral bodice in a size 12 for my wedding in June this year
- we are interested in your Corned Beef. do you have any quality assurance certificates? looking forward to hearing from you.
- I would like to know if pet nail clippers are of high quality. And if you would send a free sample?
## Selling Intents
- Black full body massage chair for sale.
- Boiler over 7 years old
- Polyester trousers black, size 24.
- Oliver Twist £1, German Dictionary 50p (Cold War s0ld), Penguin Plays £1, post by arrangement. The bundle price is £2. Will separate (Twelfth Night and Sketch B&W Sold)
- Brand new Royal Doulton bone China complete Dinner Service comprising 55 pieces including coffee pot and cups. (6 PLACE SETTING) ! 'Diana' design delicate pattern.
## Usage in Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("obsei-ai/sell-buy-intent-classifier-bert-mini")
model = AutoModelForSequenceClassification.from_pretrained("obsei-ai/sell-buy-intent-classifier-bert-mini")
```
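A minimal inference sketch continuing from the loading code above, using the label mapping listed earlier (LABEL_0 = SELLING_INTENT, LABEL_1 = BUYING_INTENT):

```python
import torch

text = "Can you please share pictures for Face Shields? We are looking for large quantity pcs"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted intent label
```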
## <p style='color:red'>Due to privacy reasons, I unfortunately can't share the dataset and its splits.</p>
|
ankandrew/ppo-SnowballTarget
|
ankandrew
| 2023-03-23T13:17:51Z
| 4
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-23T13:17:45Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: ankandrew/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
enlyth/baj-tts
|
enlyth
| 2023-03-23T13:16:04Z
| 0
| 6
| null |
[
"tts",
"vits",
"license:openrail",
"region:us"
] | null | 2023-02-13T19:51:05Z
|
---
license: openrail
tags:
- tts
- vits
---
Pretrained VITS Text-to-Speech models for some popular personalities or celebrities.
Forsen, XQC, Juice WRLD, Donald Trump, David Attenborough, Obi-Wan Kenobi (Alec Guinness)
https://github.com/enlyth/baj-tts
|
Mehtap/base_10
|
Mehtap
| 2023-03-23T13:08:25Z
| 78
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-22T09:09:47Z
|
---
language:
- tr
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: base Turkish Whisper (bTW)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base Turkish Whisper (bTW)
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ermetal Meetings dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8564
- Wer: 1.2482
- Cer: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.6604 | 2.86 | 100 | 1.9378 | 1.1296 | 0.6334 |
| 0.6453 | 5.71 | 200 | 1.4655 | 0.9878 | 0.5974 |
| 0.3912 | 8.57 | 300 | 1.4669 | 1.2543 | 0.7557 |
| 0.2081 | 11.43 | 400 | 1.4622 | 0.8203 | 0.5123 |
| 0.094 | 14.29 | 500 | 1.6592 | 0.9535 | 0.6367 |
| 0.039 | 17.14 | 600 | 1.6946 | 0.9658 | 0.5706 |
| 0.0172 | 20.0 | 700 | 1.8271 | 1.4046 | 1.0027 |
| 0.0086 | 22.86 | 800 | 1.8149 | 1.2567 | 0.7530 |
| 0.0064 | 25.71 | 900 | 1.8478 | 1.2311 | 0.7279 |
| 0.0061 | 28.57 | 1000 | 1.8564 | 1.2482 | 0.7381 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
|
enlyth/tresh-tortoise
|
enlyth
| 2023-03-23T13:02:48Z
| 0
| 0
| null |
[
"license:openrail",
"region:us"
] | null | 2023-03-23T12:40:53Z
|
---
license: openrail
---
Tresh's voice, from the Forsen stream.
Model and dataset, to be used with https://git.ecker.tech/mrq/ai-voice-cloning
Sample:
https://soundcloud.com/enlyth/tresh-00016
|
arrandi/rl_course_vizdoom_health_gathering_supreme
|
arrandi
| 2023-03-23T12:52:24Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T10:54:23Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 18.04 +/- 4.66
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r arrandi/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
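For example (a sketch; the flags follow the Sample-Factory Hub integration, and the repository name is this model's):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=arrandi/rl_course_vizdoom_health_gathering_supreme
```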
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
Krzysiek111/lunar_lander_v1
|
Krzysiek111
| 2023-03-23T12:39:09Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T12:29:57Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.74 +/- 16.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is a placeholder)
checkpoint = load_from_hub("Krzysiek111/lunar_lander_v1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
harikc456/poca-SoccerTwos
|
harikc456
| 2023-03-23T12:17:48Z
| 151
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-20T14:59:26Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: harikc456/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
tobiasc/segformer-b0-finetuned-segments-sidewalk
|
tobiasc
| 2023-03-23T12:09:17Z
| 1,039
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-03T16:41:19Z
|
---
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic.
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to segment an image from the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
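Since the logits come out at a quarter of the input resolution, a common post-processing sketch upsamples them and takes the per-pixel argmax:
```python
import torch.nn.functional as F

# Upsample logits to the original image size (PIL's size is (width, height))
upsampled_logits = F.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
pred_seg = upsampled_logits.argmax(dim=1)[0]  # (height, width) map of class indices
```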
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
rng0x17/ppo-LunarLander-v2
|
rng0x17
| 2023-03-23T12:00:45Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-22T22:20:24Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.19 +/- 20.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is a placeholder)
checkpoint = load_from_hub("rng0x17/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
EddyWebb/vit-base-patch16-224-finetuned-flower
|
EddyWebb
| 2023-03-23T11:57:00Z
| 18
| 0
|
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-23T11:49:38Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
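No usage snippet is included; a minimal classification sketch with the `transformers` pipeline (the image path is a placeholder) could look like:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="EddyWebb/vit-base-patch16-224-finetuned-flower")

print(classifier("flower.jpg"))  # "flower.jpg" is a placeholder path
```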
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SpookyWooky5/q-FrozenLake-v1-4x4-noSlippery
|
SpookyWooky5
| 2023-03-23T11:54:49Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:54:46Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook; it downloads
# and unpickles the saved dictionary from the Hub
model = load_from_hub(repo_id="SpookyWooky5/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
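A minimal evaluation sketch building on the snippet above (the `qtable` key follows the course's pickle format, and the classic Gym step API is assumed):
```python
import numpy as np

# Greedy rollout with the learned Q-table
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```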
|
rossHuggingMay/ppo-SnowballTarget
|
rossHuggingMay
| 2023-03-23T11:38:55Z
| 5
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:38:49Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: rossHuggingMay/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000
|
vocabtrimmer
| 2023-03-23T11:29:55Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T22:09:18Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-de | vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 15000 | 2 |
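The trimmed checkpoint is a drop-in replacement for the original; a minimal loading sketch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000")
model = AutoModelForSequenceClassification.from_pretrained("vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-15000")
```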
|
Periramm/ppo-SnowballTarget
|
Periramm
| 2023-03-23T11:27:45Z
| 6
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:27:40Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: Periramm/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
marimurta/q-Taxi-v3
|
marimurta
| 2023-03-23T11:27:10Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:27:09Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook; it downloads
# and unpickles the saved dictionary from the Hub
model = load_from_hub(repo_id="marimurta/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
hruslen/Reinforce-CartPole-v1
|
hruslen
| 2023-03-23T11:23:32Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:23:22Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-5000
|
vocabtrimmer
| 2023-03-23T11:22:08Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T22:01:07Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-5000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-de](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-de | vocabtrimmer/xlm-roberta-base-tweet-sentiment-de-trimmed-de-5000 |
|:---------------------------|:-------------------------------------------------|:-------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 89,885,955 |
| parameter_size_embedding | 192,001,536 | 3,841,536 |
| vocab_size | 250,002 | 5,002 |
| compression_rate_full | 100.0 | 32.33 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 5000 | 2 |
|
quilaquedi/Reinforce-Pixelcopter-PLE-v0
|
quilaquedi
| 2023-03-23T11:15:06Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T11:14:59Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.30 +/- 25.75
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rootacess/distilbert-base-uncased-finetuned-mathQA
|
rootacess
| 2023-03-23T11:14:46Z
| 15
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-06T13:29:55Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mathQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mathQA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0752
- Accuracy: 0.9857
- F1: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
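A minimal inference sketch (the input question is hypothetical; the task's label set is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rootacess/distilbert-base-uncased-finetuned-mathQA")

print(classifier("A train travels 60 km in 1.5 hours. What is its average speed?"))
```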
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3155 | 1.0 | 1865 | 0.0997 | 0.9727 | 0.9727 |
| 0.0726 | 2.0 | 3730 | 0.0813 | 0.9826 | 0.9825 |
| 0.0292 | 3.0 | 5595 | 0.0752 | 0.9857 | 0.9857 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
berchielli/cabrita-7b-pt-br
|
berchielli
| 2023-03-23T11:09:02Z
| 0
| 0
| null |
[
"region:us"
] | null | 2023-03-23T10:57:09Z
|
Model based on https://github.com/22-hours/cabrita
Install dependencies:
```
!pip install -q datasets loralib sentencepiece
!pip uninstall transformers -y
!pip install git+https://github.com/huggingface/transformers.git
!pip -q install git+https://github.com/huggingface/peft.git
!pip -q install bitsandbytes
```
Imports:
```
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
import textwrap
```
Define the model:
```
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "berchielli/cabrita-7b-pt-br")
```
Use the model for inference:
```
generation_config = GenerationConfig(
    temperature=0.9,
    top_p=0.75,
    num_beams=4,
)
prompt = "..."  # placeholder -- the original prompt text was elided
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=256,
)
# Decode the generated tokens back to text
for seq in generation_output.sequences:
    print(tokenizer.decode(seq))
```
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-60000
|
vocabtrimmer
| 2023-03-23T11:06:10Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T21:44:14Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-60000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-es | vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-60000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 132,125,955 |
| parameter_size_embedding | 192,001,536 | 46,081,536 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 47.52 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 60000 | 2 |
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-30000
|
vocabtrimmer
| 2023-03-23T11:00:43Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T21:37:41Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-es | vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 30000 | 2 |
|
Ryosei0304/q-FrozenLake-v1-4x4-noSlippery
|
Ryosei0304
| 2023-03-23T10:58:55Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T10:58:48Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook; it downloads
# and unpickles the saved dictionary from the Hub
model = load_from_hub(repo_id="Ryosei0304/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
makaveli10/q-FrozenLake-v1-4x4-noSlippery
|
makaveli10
| 2023-03-23T10:57:50Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T10:57:39Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook; it downloads
# and unpickles the saved dictionary from the Hub
model = load_from_hub(repo_id="makaveli10/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-15000
|
vocabtrimmer
| 2023-03-23T10:57:13Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T21:32:55Z
|
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-es](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-es), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-es | vocabtrimmer/xlm-roberta-base-tweet-sentiment-es-trimmed-es-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 15000 | 2 |
|
karolill/mbert_LR3e-05_WR0.1_OPTIMadamw_hf_WD0.1
|
karolill
| 2023-03-23T10:56:43Z
| 91
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-23T10:52:11Z
|
---
license: mit
---
This is a [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) model fine-tuned on 4,000 examples from the
[NoReC dataset](https://github.com/ltgoslo/norec), where reviews scored 1–2 were labeled negative and reviews scored 5–6 were labeled positive.
The model was fine-tuned for 2 epochs with the following parameters:
- learning_rate = 3e-05
- warmup_ratio = 0.1
- optim = 'adamw_hf'
- weight_decay = 0.1
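A sketch of the corresponding `transformers.TrainingArguments` (the output directory and any unlisted values are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mbert-norec-sentiment",  # placeholder
    num_train_epochs=2,
    learning_rate=3e-05,
    warmup_ratio=0.1,
    optim="adamw_hf",
    weight_decay=0.1,
)
```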
|