modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-173 | kyleeasterly | 2023-08-09T08:07:50Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:46:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
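For reference, this list corresponds roughly to the following `transformers` `BitsAndBytesConfig` (a sketch; the base model repository is an assumption, since the card does not name it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Assumption: the adapter sits on top of OpenLLaMA-7B; the exact base repo is not stated in the card.
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kyleeasterly/openllama-7b_purple-aerospace-v2-200-173")
```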
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-128 | kyleeasterly | 2023-08-09T08:07:40Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:46:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-102 | kyleeasterly | 2023-08-09T08:07:06Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:46:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
mohsin-riad/SD-joepenna-4-people | mohsin-riad | 2023-08-09T08:05:43Z | 0 | 0 | null | ["text-to-image", "stable-diffusion", "dreambooth", "en", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-08-02T19:06:57Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- dreambooth
language:
- en
---
### SD 1.5 model trained by mohsin-riad
**Names of the persons and their corresponding tokens:**
- Anthony -> anthony man
- Liza -> liza woman
- AJ -> aj man
- Michael -> michael boy
**Try a prompt such as:**
```
RAW photo, close up portrait of michael boy wearing sunglass with blue eyes, detailed eyes, smiling, teeth, floral clothes, nature background, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
```
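The prompt above can be tried with 🤗 Diffusers, for example (a sketch; it assumes the repository ships Diffusers-format SD 1.5 weights; if it only contains a `.ckpt`, convert it first or load it in a Stable Diffusion UI):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo provides Diffusers-format SD 1.5 weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "mohsin-riad/SD-joepenna-4-people", torch_dtype=torch.float16
).to("cuda")

prompt = ("RAW photo, close up portrait of michael boy wearing sunglass with blue eyes, "
          "detailed eyes, smiling, teeth, floral clothes, nature background, 8k uhd, dslr, "
          "soft lighting, high quality, film grain, Fujifilm XT3")
image = pipe(prompt).images[0]
image.save("michael.png")
```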
---
## Model Details
- Developed by: Robin Rombach, Patrick Esser
- Trainable tweaks by: Joe Penna
- Finetuned by: Mohsin Riad
- Model type: Diffusion-based text-to-image generation model
- Language(s): English
---
> Happy inferencing!
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-96 | kyleeasterly | 2023-08-09T08:05:26Z | 4 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:46:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-72 | kyleeasterly | 2023-08-09T08:03:18Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:46:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jakezou/ppo-SnowballTarget | jakezou | 2023-08-09T07:51:52Z | 5 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2023-08-09T07:51:50Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jakezou/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
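To inspect the checkpoint locally, you can first pull it from the Hub (a sketch; assumes the Hugging Face Hub integration that ships with recent `ml-agents` releases, and an arbitrary `--local-dir`):
```bash
# Download the trained SnowballTarget run from the Hub (local directory is arbitrary)
mlagents-load-from-hf --repo-id="jakezou/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```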
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-18 | kyleeasterly | 2023-08-09T07:51:19Z | 2 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-17 | kyleeasterly | 2023-08-09T07:50:48Z | 3 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-15 | kyleeasterly | 2023-08-09T07:50:07Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-14 | kyleeasterly | 2023-08-09T07:49:52Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-12 | kyleeasterly | 2023-08-09T07:49:17Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-11 | kyleeasterly | 2023-08-09T07:48:58Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:44:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-9 | kyleeasterly | 2023-08-09T07:48:37Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:43:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-8 | kyleeasterly | 2023-08-09T07:48:25Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:43:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-2 | kyleeasterly | 2023-08-09T07:47:13Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:43:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
arminmrm93/a2c-PandaReachDense-v3-v2 | arminmrm93 | 2023-08-09T07:45:16Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-09T07:39:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is assumed).
checkpoint = load_from_hub("arminmrm93/a2c-PandaReachDense-v3-v2", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
annaovesnaatatt/ppo-lunarlander-v2 | annaovesnaatatt | 2023-08-09T07:43:54Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-09T07:43:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.39 +/- 19.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed).
checkpoint = load_from_hub("annaovesnaatatt/ppo-lunarlander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-115 | kyleeasterly | 2023-08-09T07:41:39Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:34:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-104 | kyleeasterly | 2023-08-09T07:41:06Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:34:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-96 | kyleeasterly | 2023-08-09T07:40:18Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:34:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-64 | kyleeasterly | 2023-08-09T07:38:09Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:33:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-30 | kyleeasterly | 2023-08-09T07:37:20Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:33:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-18 | kyleeasterly | 2023-08-09T07:33:13Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:31:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-10 | kyleeasterly | 2023-08-09T07:31:01Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:26:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-8 | kyleeasterly | 2023-08-09T07:30:59Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:26:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-6 | kyleeasterly | 2023-08-09T07:29:59Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:26:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-2 | kyleeasterly | 2023-08-09T07:28:23Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:26:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-4 | kyleeasterly | 2023-08-09T07:28:23Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-09T07:26:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
peterandrew987/results | peterandrew987 | 2023-08-09T07:25:13Z | 104 | 0 | transformers | ["transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "base_model:indobenchmark/indobart-v2", "base_model:finetune:indobenchmark/indobart-v2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-08-08T11:45:45Z |
---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: results
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train[:1000]
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 16.2693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5998
- Rouge1: 16.2693
- Rouge2: 14.9952
- Rougel: 16.233
- Rougelsum: 16.2741
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- label_smoothing_factor: 0.1
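These settings map roughly onto 🤗 `Seq2SeqTrainingArguments` as sketched below (`output_dir` and anything not listed above are assumptions; the Adam betas and epsilon are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirror of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="results",             # assumption: matches the model name
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=1,
    label_smoothing_factor=0.1,
)
```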
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.4819 | 1.0 | 200 | 1.5998 | 16.2693 | 14.9952 | 16.233 | 16.2741 | 20.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
jakezou/dqn-SpaceInvadersNoFrameskip-v4 | jakezou | 2023-08-09T07:11:26Z | 9 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-09T07:10:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 690.50 +/- 356.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jakezou
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
maikaarda/gte-base-ggml | maikaarda | 2023-08-09T07:02:49Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2023-08-09T05:24:01Z |
---
license: mit
---
These are ggml files of [thenlper/gte-base](https://huggingface.co/thenlper/gte-base).
You can use them with https://github.com/skeskinen/bert.cpp.
### gte-base
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8571 | 38.98 | 0.5087 | 69.09 |
| f16 | 0.8571 | 33.06 | 0.5086 | 53.57 |
| q4_0 | 0.8580 | 25.28 | 0.5171 | 69.32 |
| q4_1 | 0.8581 | 28.12 | 0.5113 | 66.38 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
Anonymou3/sd-class-butterflies-32 | Anonymou3 | 2023-08-09T06:54:46Z | 31 | 0 | diffusers | ["diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2023-08-09T06:54:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Anonymou3/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
maikaarda/gte-large-ggml | maikaarda | 2023-08-09T06:52:31Z | 0 | 1 | null | ["license:mit", "region:us"] | null | 2023-08-09T05:26:15Z |
---
license: mit
---
These are ggml files of [thenlper/gte-large](https://huggingface.co/thenlper/gte-large).
You can use them with https://github.com/skeskinen/bert.cpp.
### gte-large
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8606 | 127.58 | 0.5060 | 199.61 |
| f16 | 0.8606 | 103.89 | 0.5060 | 169.68 |
| q4_0 | 0.8589 | 80.85 | 0.5037 | 157.05 |
| q4_1 | 0.8605 | 90.13 | 0.5107 | 162.59 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
whywynn/Reinforce-CartPole-v1 | whywynn | 2023-08-09T06:46:02Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-09T06:45:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CyberHarem/yuzuriha_jigokuraku | CyberHarem | 2023-08-09T06:43:55Z | 0 | 0 | null | ["art", "text-to-image", "dataset:CyberHarem/yuzuriha_jigokuraku", "license:mit", "region:us"] | text-to-image | 2023-08-09T06:40:17Z |
---
license: mit
datasets:
- CyberHarem/yuzuriha_jigokuraku
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yuzuriha_jigokuraku
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/yuzuriha_jigokuraku.pt` as the embedding and `1500/yuzuriha_jigokuraku.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `yuzuriha_jigokuraku`.**
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/yuzuriha_jigokuraku.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/yuzuriha_jigokuraku.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/yuzuriha_jigokuraku.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/yuzuriha_jigokuraku.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/yuzuriha_jigokuraku.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/yuzuriha_jigokuraku.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/yuzuriha_jigokuraku.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/yuzuriha_jigokuraku.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/yuzuriha_jigokuraku.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/yuzuriha_jigokuraku.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/yuzuriha_jigokuraku.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/yuzuriha_jigokuraku.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/yuzuriha_jigokuraku.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/yuzuriha_jigokuraku.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/yuzuriha_jigokuraku.zip) |
|
redstonehero/pastel-mix | redstonehero | 2023-08-09T06:42:00Z | 21 | 0 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:57:41Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/mixprov4_v4 | redstonehero | 2023-08-09T06:41:40Z | 21 | 0 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:57:23Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/yozorav1origin | redstonehero | 2023-08-09T06:41:26Z | 21 | 0 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:56:49Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/fantexiv09beta | redstonehero | 2023-08-09T06:41:21Z | 21 | 1 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:50:21Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
rabede/Jupy | rabede | 2023-08-09T06:40:40Z | 0 | 0 | null | ["arxiv:1910.09700", "region:us"] | null | 2023-08-09T06:33:11Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
redstonehero/lazymix_real_amateur_nudes_v30b | redstonehero | 2023-08-09T06:39:41Z | 119 | 3 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:48:44Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/meichidarkv4 | redstonehero | 2023-08-09T06:39:36Z | 26 | 1 | diffusers | ["diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-09T03:48:30Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
ariobsessedwithai/axel | ariobsessedwithai | 2023-08-09T06:38:14Z | 0 | 0 | null | ["arxiv:1910.09700", "license:unknown", "region:us"] | null | 2023-08-09T06:30:13Z |
---
license: unknown
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
peterandrew987/modified-qa | peterandrew987 | 2023-08-09T06:34:19Z | 107 | 0 | transformers | ["transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "base_model:indobenchmark/indobart-v2", "base_model:finetune:indobenchmark/indobart-v2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-08-09T05:55:30Z |
---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: modified-qa
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train[:1000]
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 13.4458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modified-qa
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9723
- Rouge1: 13.4458
- Rouge2: 6.819
- Rougel: 11.2064
- Rougelsum: 12.5476
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.436 | 1.0 | 200 | 3.9723 | 13.4458 | 6.819 | 11.2064 | 12.5476 | 20.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
TinToTin/dqn-space-invader-noframesskip-v4-rl | TinToTin | 2023-08-09T06:33:54Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-09T06:33:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 534.00 +/- 94.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Thineshan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Thineshan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Thineshan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
maikaarda/bge-small-en-ggml | maikaarda | 2023-08-09T06:31:04Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2023-08-09T05:29:43Z |
---
license: mit
---
These are ggml files of [bge-small-en](https://huggingface.co/BAAI/bge-small-en).
You can use them with https://github.com/skeskinen/bert.cpp.
### bge-small-en
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8654 | 12.81 | 0.5111 | 26.28 |
| f16 | 0.8654 | 12.02 | 0.5112 | 19.39 |
| q4_0 | 0.8637 | 10.07 | 0.5073 | 44.53 |
| q4_1 | 0.8645 | 11.04 | 0.5087 | 39.58 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
maikaarda/bge-large-en-ggml | maikaarda | 2023-08-09T06:30:57Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2023-08-09T05:28:59Z |
---
license: mit
---
These are ggml files of [bge-large-en](https://huggingface.co/BAAI/bge-large-en).
You can use them with https://github.com/skeskinen/bert.cpp.
### bge-large-en
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8807 | 129.10 | 0.5715 | 202.67 |
| f16 | 0.8807 | 107.80 | 0.5712 | 177.37 |
| q4_0 | 0.8798 | 81.91 | 0.5689 | 159.30 |
| q4_1 | 0.8792 | 91.66 | 0.5709 | 164.45 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
maikaarda/gte-small-ggml | maikaarda | 2023-08-09T06:30:29Z | 0 | 1 | null | ["license:mit", "region:us"] | null | 2023-08-09T05:27:17Z |
---
license: mit
---
These are ggml files of [thenlper/gte-small](https://huggingface.co/thenlper/gte-small).
You can use them with https://github.com/skeskinen/bert.cpp.
### gte-small
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8554 | 12.40 | 0.4808 | 26.39 |
| f16 | 0.8555 | 11.29 | 0.4808 | 18.48 |
| q4_0 | 0.8537 | 9.22 | 0.4860 | 43.92 |
| q4_1 | 0.8543 | 10.01 | 0.4832 | 38.33 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
luistakahashi/my-awesome-setfit-pear-2 | luistakahashi | 2023-08-09T06:26:11Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-08-09T06:15:17Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luistakahashi/my-awesome-setfit-pear-2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-pear-2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
chriskim2273/IOTNation_QA_Model_1.95_DistilBert_UNK_DATASET_NO_PARAPHRASE
|
chriskim2273
| 2023-08-09T06:14:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-09T06:03:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_QA_Model_1.95_DistilBert_UNK_DATASET_NO_PARAPHRASE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_QA_Model_1.95_DistilBert_UNK_DATASET_NO_PARAPHRASE
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6735
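A minimal inference sketch with the `question-answering` pipeline (the context and question below are illustrative placeholders, not taken from the training data):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_QA_Model_1.95_DistilBert_UNK_DATASET_NO_PARAPHRASE",
)

# Illustrative example only; replace with your own context and question.
context = "Acme Robotics raised a $12M Series A in 2021 and is headquartered in Austin, Texas."
result = qa(question="Where is Acme Robotics headquartered?", context=context)
print(result["answer"], result["score"])
```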
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
stanfordnlp/stanza-swl
|
stanfordnlp
| 2023-08-09T06:11:18Z | 2 | 0 |
stanza
|
[
"stanza",
"token-classification",
"swl",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: swl
license: apache-2.0
---
# Stanza model for Swedish_Sign_Language (swl)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-08-09 06:11:15.351
|
luistakahashi/my-awesome-setfit-pear-4
|
luistakahashi
| 2023-08-09T06:08:59Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-09T05:57:25Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luistakahashi/my-awesome-setfit-pear-4
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-pear-4")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Shadman-Rohan/llama2-qlora-finetunined-french
|
Shadman-Rohan
| 2023-08-09T06:08:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T06:08:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
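As a minimal sketch, the 4-bit configuration listed above could be recreated at inference time roughly as follows (the base model ID is an assumption inferred from the repository name and is not confirmed by this card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumption: the Llama-2 base the adapter was trained on
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base_model, "Shadman-Rohan/llama2-qlora-finetunined-french")
```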
### Framework versions
- PEFT 0.5.0.dev0
|
dkimds/Reinforce-CartPole-v1
|
dkimds
| 2023-08-09T06:07:57Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T06:07:53Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 154.90 +/- 5.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kimnt93/mt-seed-task-cls
|
kimnt93
| 2023-08-09T05:55:42Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-09T03:12:04Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# kimnt93/vi_seed_task_cls
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("kimnt93/vi_seed_task_cls")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
divyeshrajpura/speecht5-finetuned-voxpopuli-sl
|
divyeshrajpura
| 2023-08-09T05:46:03Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"sl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-09T04:29:07Z |
---
language:
- sl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-voxpopuli-sl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-voxpopuli-sl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4598
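A minimal synthesis sketch (the x-vector source and the Slovenian sentence are illustrative assumptions; any 512-dimensional speaker embedding will do):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-sl")
model = SpeechT5ForTextToSpeech.from_pretrained("divyeshrajpura/speecht5-finetuned-voxpopuli-sl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: reuse a public x-vector as the speaker embedding.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dober dan, dobrodošli.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```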
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6473 | 3.39 | 100 | 0.5703 |
| 0.5709 | 6.78 | 200 | 0.4998 |
| 0.5339 | 10.17 | 300 | 0.4802 |
| 0.5158 | 13.56 | 400 | 0.4733 |
| 0.5275 | 16.95 | 500 | 0.4691 |
| 0.4983 | 20.34 | 600 | 0.4671 |
| 0.499 | 23.73 | 700 | 0.4638 |
| 0.5003 | 27.12 | 800 | 0.4610 |
| 0.496 | 30.51 | 900 | 0.4610 |
| 0.4935 | 33.9 | 1000 | 0.4598 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mohammedfazilvamos/gpt2adapter
|
mohammedfazilvamos
| 2023-08-09T05:41:36Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T05:38:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
sanitas/sac-PandaPickAndPlace-v3
|
sanitas
| 2023-08-09T05:40:15Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T05:34:52Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -45.00 +/- 15.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **SAC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
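Until the TODO above is filled in, a loading sketch along these lines should work (the checkpoint filename is an assumption — check the repository's file list):
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

# Assumption: the zip filename follows the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="sanitas/sac-PandaPickAndPlace-v3",
    filename="sac-PandaPickAndPlace-v3.zip",
)
model = SAC.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, info = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```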
|
polejowska/detr-r50-cd45rb-8ah-12l-corrected
|
polejowska
| 2023-08-09T05:33:13Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-08T17:18:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-12l-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-12l-corrected
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7298
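A minimal inference sketch with the `object-detection` pipeline (the image path is a placeholder for a CD45RB-stained tile of your own):
```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="polejowska/detr-r50-cd45rb-8ah-12l-corrected",
)

# Placeholder path: point this at one of your own CD45RB image tiles.
detections = detector("cd45rb_tile.png", threshold=0.5)
for det in detections:
    print(det["label"], round(det["score"], 3), det["box"])
```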
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1161 | 1.0 | 4606 | 2.2386 |
| 2.7777 | 2.0 | 9212 | 2.0665 |
| 2.6042 | 3.0 | 13818 | 1.9954 |
| 2.5082 | 4.0 | 18424 | 1.8991 |
| 2.4529 | 5.0 | 23030 | 1.9228 |
| 2.3944 | 6.0 | 27636 | 1.8829 |
| 2.3405 | 7.0 | 32242 | 1.8134 |
| 2.3082 | 8.0 | 36848 | 1.7851 |
| 2.2684 | 9.0 | 41454 | 1.7471 |
| 2.2422 | 10.0 | 46060 | 1.7298 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
maikaarda/e5-large-v2-ggml
|
maikaarda
| 2023-08-09T05:12:15Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:11:11Z |
---
license: mit
---
ggml files of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2)
You can use this ggml for https://github.com/skeskinen/bert.cpp
|
openerotica/Qwen-7B-Chat-GPTQ
|
openerotica
| 2023-08-09T04:41:52Z | 13 | 4 |
transformers
|
[
"transformers",
"pytorch",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2305.08322",
"arxiv:2009.03300",
"arxiv:2305.05280",
"arxiv:2210.03629",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-09T04:06:36Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
---
# Qwen-7B-Chat
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo.jpg" width="400"/>
<p>
<br>
<p align="center">
Qwen-7B <a href="https://modelscope.cn/models/qwen/Qwen-7B/summary">🤖 </a> | <a href="https://huggingface.co/Qwen/Qwen-7B">🤗</a>  | Qwen-7B-Chat <a href="https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary">🤖 </a>| <a href="https://huggingface.co/Qwen/Qwen-7B-Chat">🤗</a>  |  <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>  |  <a href="https://github.com/QwenLM/Qwen-7B/blob/main/tech_memo.md">Report</a>
</p>
<br>
## 介绍(Introduction)
**通义千问-7B(Qwen-7B)**是阿里云研发的通义千问大模型系列的70亿参数规模的模型。Qwen-7B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-7B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-7B-Chat。本仓库为Qwen-7B-Chat的仓库。
如果您想了解更多关于通义千问-7B开源模型的细节,我们建议您参阅[Github代码库](https://github.com/QwenLM/Qwen-7B)。
**Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for Qwen-7B-Chat.
For more details about the open-source model of Qwen-7B, please refer to the [Github](https://github.com/QwenLM/Qwen-7B) code repository.
## 要求(Requirements)
* python 3.8及以上版本
* pytorch 1.12及以上版本,推荐2.0及以上版本
* 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项)
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
## 依赖项(Dependency)
运行Qwen-7B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库
To run Qwen-7B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.
```bash
pip install transformers==4.31.0 accelerate tiktoken einops
```
另外,推荐安装`flash-attention`库,以实现更高的效率和更低的显存占用。
In addition, it is recommended to install the `flash-attention` library for higher efficiency and lower memory usage.
```bash
git clone -b v1.0.8 https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# 下方安装可选,安装可能比较缓慢。
# Below are optional. Installing them might be slow.
pip install csrc/layer_norm
pip install csrc/rotary
```
## 快速使用(Quickstart)
下面我们展示了一个使用Qwen-7B-Chat模型,进行多轮对话交互的样例:
We show an example of multi-turn interaction with Qwen-7B-Chat in the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()
# Specify hyperparameters for generation
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
# 第一轮对话 1st dialogue turn
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好!很高兴为你提供帮助。
# 第二轮对话 2nd dialogue turn
response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history)
print(response)
# 这是一个关于一个年轻人奋斗创业最终取得成功的故事。
# 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。
# 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。
# 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。
# 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。
# 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。
# 第三轮对话 3rd dialogue turn
response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history)
print(response)
# 《奋斗创业:一个年轻人的成功之路》
```
关于更多的使用说明,请参考我们的[Github repo](https://github.com/QwenLM/Qwen-7B)获取更多信息。
For more information, please refer to our [Github repo](https://github.com/QwenLM/Qwen-7B).
## Tokenizer
> 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。
基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen-7B/blob/main/tokenization_note_zh.md)。
Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen-7B/blob/main/tokenization_note.md).
## 模型细节(Model)
与Qwen-7B预训练模型相同,Qwen-7B-Chat模型规模基本情况如下所示
The details of the model architecture of Qwen-7B-Chat are listed as follows
| Hyperparameter | Value |
|:------|:------|
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 151851 |
| sequence length | 2048 |
在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法,
即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。
在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-7B-Chat使用了约15万token大小的词表。
该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。
词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。
For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-7B-Chat uses a vocabulary of over 150K tokens.
It prioritizes efficient encoding of Chinese, English, and code data, and is also friendlier to other languages, enabling users to directly enhance the capability for certain languages without expanding the vocabulary.
It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
## 评测效果(Evaluation)
对于Qwen-7B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-7B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。
提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。
For Qwen-7B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as the benchmark evaluation for long-context understanding, and tool usage.
Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible.
### 中文评测(Chinese Evaluation)
#### C-Eval
在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-7B-Chat模型的zero-shot准确率
We demonstrate the zero-shot accuracy of Qwen-7B-Chat on C-Eval validation set
| Model | Avg. Acc. |
|:--------------|:------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 40.6 |
| Chinese-Alpaca-2-7B | 41.3 |
| Chinese-Alpaca-Plus-13B | 43.3 |
| Baichuan-13B-Chat | 50.4 |
| ChatGLM2-6B-Chat | 50.7 |
| InternLM-7B-Chat | 53.2 |
| **Qwen-7B-Chat** | **54.2** |
C-Eval测试集上,Qwen-7B-Chat模型的zero-shot准确率结果如下:
The zero-shot accuracy of Qwen-7B-Chat on C-Eval testing set is provided below:
| Model | Avg. | STEM | Social Sciences | Humanities | Others |
|:--------------|:------:|:------:|:------:|:------:|:------:|
| Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 |
| Chinese-Alpaca-2-7B | 40.3 | - | - | - | - |
| ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |
| Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |
| **Qwen-7B-Chat** | **54.6** | 47.8 | 67.6 | 59.3 | 50.6 |
在7B规模模型上,经过人类指令对齐的Qwen-7B-Chat模型,准确率在同类相近规模模型中仍然处于前列。
Compared with other pretrained models with comparable model size, the human-aligned Qwen-7B-Chat performs well in C-Eval accuracy.
### 英文评测(English Evaluation)
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-7B-Chat模型的zero-shot准确率如下,效果同样在同类对齐模型中同样表现较优。
The zero-shot accuracy of Qwen-7B-Chat on MMLU is provided below.
The performance of Qwen-7B-Chat is still among the best of human-aligned models of comparable size.
| Model | Avg. Acc. |
|:--------------|:------:|
| ChatGLM2-6B-Chat | 45.5 |
| LLaMA2-7B-Chat | 47.0 |
| InternLM-7B-Chat | 50.8 |
| Baichuan-13B-Chat | 52.1 |
| ChatGLM2-12B-Chat | 52.1 |
| **Qwen-7B-Chat** | **53.9** |
### 代码评测(Coding Evaluation)
Qwen-7B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下
The zero-shot Pass@1 of Qwen-7B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below
| Model | Pass@1 |
|:--------------|:------:|
| LLaMA2-7B-Chat | 12.2 |
| InternLM-7B-Chat | 14.0 |
| Baichuan-13B-Chat | 16.5 |
| LLaMA2-13B-Chat | 18.9 |
| **Qwen-7B-Chat** | **24.4** |
### 数学评测(Mathematics Evaluation)
在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-7B-Chat的准确率结果如下
The accuracy of Qwen-7B-Chat on GSM8K is shown below
| Model | Zero-shot Acc. | 4-shot Acc. |
|:--------------|:------:|:------:|
| ChatGLM2-6B-Chat | - | 28.0 |
| LLaMA2-7B-Chat | 20.4 | 28.2 |
| LLaMA2-13B-Chat | 29.4 | 36.7 |
| InternLM-7B-Chat | 32.6 | 34.5 |
| Baichuan-13B-Chat | - | 36.3 |
| ChatGLM2-12B-Chat | - | 38.1 |
| **Qwen-7B-Chat** | **41.1** | **43.5** |
### 长序列评测(Long-Context Understanding)
通过NTK插值,LogN注意力缩放可以扩展Qwen-7B-Chat的上下文长度。在长文本摘要数据集[VCSUM](https://arxiv.org/abs/2305.05280)上(文本平均长度在15K左右),Qwen-7B-Chat的Rouge-L结果如下:
**(若要启用这些技巧,请将config.json里的`use_dynamic_ntk`和`use_logn_attn`设置为true)**
We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-7B-Chat. The Rouge-L results of Qwen-7B-Chat on the long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (average document length around 15K tokens) are shown below:
**(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**
| Model | VCSUM (zh) |
|:----------------|:-------:|
| GPT-3.5-Turbo-16k | 16.0 |
| LLama2-7B-Chat | 0.2 |
| InternLM-7B-Chat | 13.0 |
| ChatGLM2-6B-Chat | 16.3 |
| **Qwen-7B-Chat** | **16.6** |
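A minimal sketch of enabling these flags from Python rather than editing `config.json` by hand (the attribute names are taken from the note above; their presence on the remote config class is an assumption):
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
config.use_dynamic_ntk = True   # NTK-aware interpolation for longer contexts
config.use_logn_attn = True     # LogN attention scaling

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat", config=config, device_map="auto", trust_remote_code=True
).eval()
```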
### 工具使用能力的评测(Tool Usage)
#### ReAct Prompting
千问支持通过 [ReAct Prompting](https://arxiv.org/abs/2210.03629) 调用插件/工具/API。ReAct 也是 [LangChain](https://python.langchain.com/) 框架采用的主要方式之一。在我们开源的、用于评估工具使用能力的评测基准上,千问的表现如下:
Qwen-7B-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. In our evaluation benchmark for assessing tool usage capabilities, Qwen-7B-Chat's performance is as follows:
| Model | Tool Selection (Acc.↑) | Tool Input (Rouge-L↑) | False Positive Error↓ |
|:-----------------|:----------------------:|:---------------------:|:---------------------:|
| GPT-4 | 95% | **0.90** | 15% |
| GPT-3.5 | 85% | 0.88 | 75% |
| **Qwen-7B-Chat** | **99%** | 0.89 | **9.7%** |
> 评测基准中出现的插件均没有出现在千问的训练集中。该基准评估了模型在多个候选插件中选择正确插件的准确率、传入插件的参数的合理性、以及假阳率。假阳率(False Positive)定义:在处理不该调用插件的请求时,错误地调用了插件。
> The plugins that appear in the evaluation set do not appear in the training set of Qwen-7B-Chat. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidate plugins, the rationality of the parameters passed into the plugin, and the false positive rate. False positive: incorrectly invoking a plugin when responding to a query that did not require one.
关于 ReAct Prompting 的 prompt 怎么写、怎么使用,请参考 [ReAct 样例说明](examples/react_prompt.md)。使用工具能使模型更好地完成任务。基于千问的工具使用能力,我们能实现下图所展示的效果:
For how to write and use prompts for ReAct Prompting, please refer to [the ReAct examples](examples/react_prompt.md). The use of tools can enable the model to better perform tasks, as shown in the following figures:


#### Huggingface Agent
千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下:
Qwen-7B-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows:
| Model | Tool Selection↑ | Tool Used↑ | Code↑ |
|:-|:-:|:-:|:-:|
|GPT-4 | **100** | **100** | **97.41** |
|GPT-3.5 | 95.37 | 96.30 | 87.04 |
|StarCoder-15.5B | 87.04 | 87.96 | 68.89 |
| **Qwen-7B** | 90.74 | 92.59 | 74.07 |
## 量化(Quantization)
如希望使用更低精度的量化模型,如4比特和8比特的模型,我们提供了简单的示例来说明如何快速使用量化模型。在开始前,确保你已经安装了`bitsandbytes`。请注意,`bitsandbytes`的安装要求是:
We provide examples to show how to load models in `NF4` and `Int8`. To get started, make sure you have installed `bitsandbytes`. Note that the requirements for `bitsandbytes` are:
```
**Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
```
Windows用户需安装特定版本的`bitsandbytes`,可选项包括[bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。
Windows users should install a Windows-specific build instead, such as [bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels).
你只需要在`AutoModelForCausalLM.from_pretrained`中添加你的量化配置,即可使用量化模型。如下所示:
Then you only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained`. See the example below:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16
)
# quantization configuration for Int8 (8 bits); this assignment overrides the NF4 config above, so keep only the one you want
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
# optional per-device memory cap, e.g. {0: "20GiB", "cpu": "30GiB"}
max_memory = None
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="cuda:0",
    quantization_config=quantization_config,
    max_memory=max_memory,
    trust_remote_code=True,
).eval()
```
上述方法可以让我们将模型量化成`NF4`和`Int8`精度的模型进行读取,帮助我们节省显存开销。我们也提供了相关性能数据。我们发现尽管模型在效果上存在损失,但模型的显存开销大幅降低。
With this method, you can load Qwen-7B-Chat in `NF4` and `Int8`, which saves memory. We provide related statistics of model performance below. We find that quantization slightly degrades effectiveness but significantly improves inference efficiency and reduces memory costs.
| Precision | MMLU | Memory |
| :---------| :-------: | :-----: |
| BF16 | 56.7 | 16.2G |
| Int8 | 52.8 | 10.1G |
| NF4 | 48.9 | 7.4G |
## 使用协议(License Agreement)
我们的代码和模型权重对学术研究完全开放,并支持商用。请查看LICENSE了解具体的开源协议细节。
Our code and checkpoints are open for research purposes, and they are also allowed for commercial use. Check [LICENSE](LICENSE) for more details about the license.
## 联系我们(Contact Us)
如果你想给我们的研发团队和产品团队留言,请通过邮件([email protected])联系我们。
If you are interested to leave a message to either our research team or product team, feel free to send an email to [email protected].
|
asenella/MMVAEPlus_beta_5_scale_False_seed_1
|
asenella
| 2023-08-09T04:31:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:09:35Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
carolinacalce/MiModeloCatsDogs
|
carolinacalce
| 2023-08-09T04:12:08Z | 220 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-08T02:06:33Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
model-index:
- name: MiModeloCatsDogs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiModeloCatsDogs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
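A minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="carolinacalce/MiModeloCatsDogs")

# Placeholder path: any cat or dog photo.
preds = classifier("my_pet.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```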
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rgarcia/my_awesome_food_model
|
rgarcia
| 2023-08-09T03:56:19Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-06T00:44:16Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5827
- Accuracy: 0.895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6833 | 0.99 | 62 | 2.4863 | 0.839 |
| 1.8076 | 2.0 | 125 | 1.7471 | 0.883 |
| 1.5823 | 2.98 | 186 | 1.5827 | 0.895 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
salohnana2018/OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune
|
salohnana2018
| 2023-08-09T03:45:03Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:ahmedabdelali/bert-base-qarib",
"base_model:finetune:ahmedabdelali/bert-base-qarib",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-09T03:39:33Z |
---
base_model: qarib/bert-base-qarib
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune
This model is a fine-tuned version of [qarib/bert-base-qarib](https://huggingface.co/qarib/bert-base-qarib) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- Precision: 0.7488
- Recall: 0.7723
- F1: 0.7604
- Accuracy: 0.9532
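A minimal inference sketch for opinion target extraction with the `token-classification` pipeline (the Arabic review sentence is an illustrative placeholder):
```python
from transformers import pipeline

ote = pipeline(
    "token-classification",
    model="salohnana2018/OTE-NoDapt-ABSA-bert-base-qarib-OrginalHP-FineTune",
    aggregation_strategy="simple",
)

# Illustrative Arabic review: "The food is delicious but the service is slow."
sentence = "الطعام لذيذ لكن الخدمة بطيئة"
for entity in ote(sentence):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```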
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1656 | 1.0 | 61 | 0.1196 | 0.7299 | 0.7932 | 0.7603 | 0.9528 |
| 0.08 | 2.0 | 122 | 0.1176 | 0.7561 | 0.7678 | 0.7619 | 0.9543 |
| 0.0501 | 3.0 | 183 | 0.1348 | 0.7488 | 0.7723 | 0.7604 | 0.9532 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
iioSnail/bert-base-chinese-word-classifier
|
iioSnail
| 2023-08-09T03:40:04Z | 110 | 9 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zh",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T08:10:00Z |
---
license: afl-3.0
language:
- zh
---
# Chinese Word Classification
This model performs multi-label classification of Chinese words. A given Chinese word is assigned one or more of the following categories:
```
"1": "人文科学",
"2": "农林渔畜",
"3": "医学",
"4": "城市信息大全",
"5": "娱乐",
"6": "工程与应用科学",
"7": "生活",
"8": "电子游戏",
"9": "社会科学",
"10": "自然科学",
"11": "艺术",
"12": "运动休闲"
```
> The categories come from the [Sogou lexicon category list](https://pinyin.sogou.com/dict/cate/index/167)
# Usage Example
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification
model_path = "iioSnail/bert-base-chinese-word-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path)
words = ["2型糖尿病", "太古里", "跑跑卡丁车", "河豚"]
inputs = tokenizer(words, return_tensors='pt', padding=True)
outputs = model(**inputs).logits
outputs = outputs.sigmoid()
preds = outputs > 0.5
for i, pred in enumerate(preds):
    pred = torch.argwhere(pred).view(-1)
    labels = [model.config.id2label[int(id)] for id in pred]
    print(words[i], ":", labels)
```
Output:
```
2型糖尿病 : ['医学']
太古里 : ['城市信息大全']
跑跑卡丁车 : ['电子游戏']
河豚 : ['人文科学', '娱乐', '电子游戏', '自然科学']
```
|
nayanika/test_model
|
nayanika
| 2023-08-09T03:37:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T03:37:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
sanitas/a2c-PandaPickAndPlace-v3
|
sanitas
| 2023-08-09T03:36:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T03:31:02Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
asenella/incomplete_mhd_MVTCAE_beta_5_scale_False_seed_1
|
asenella
| 2023-08-09T03:30:24Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-09T03:30:14Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
kanixwang/eth-setfit-payment-model_4epoch
|
kanixwang
| 2023-08-09T03:27:48Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-09T03:27:37Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 26915 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 26915,
"warmup_steps": 2692,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
phucleh2/controlnet_mask2cloth
|
phucleh2
| 2023-08-09T03:09:01Z | 5 | 13 |
diffusers
|
[
"diffusers",
"controlnet",
"stable-diffusion",
"image-to-image",
"mask-to-cloth",
"en",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
image-to-image
| 2023-08-04T09:07:10Z |
---
base_model: runwayml/stable-diffusion-v1-5
tags:
- controlnet
- stable-diffusion
- image-to-image
- mask-to-cloth
language:
- en
library_name: diffusers
widget:
- src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask1.jpg
prompt: a red top with a lace - trimmed neckline
- src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask2.jpg
prompt: a long sleeved top - black
- src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask3.jpg
prompt: a blue blouse with a tie neck and a floral prints
- src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask4.jpg
prompt: a light pink blouse with rose patterns
- src: https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask5.jpg
prompt: t-shirt - black
---
# Mask + Prompt → Cloth
- This model is still under development to improve the realism of its outputs. Stay tuned 🤗
- This model is a fine-tuned version of ControlNet, tailored to utilize a black-and-white outline of an upper garment (mask) and a descriptive prompt to generate a garment that aligns with the mask's outline.
- It's important to note that this model exclusively operates with upper garments. Please refrain from inputting masks for pants or jeans, as this could yield unexpected outcomes.
- Input mask size: 384 x 512
- Each time the model is executed, **even with the same mask and prompt**, it will generate a distinct output.
- This model is a fine-tuned version of [ControlNet](https://arxiv.org/abs/2302.05543), trained on the [VITON-HD](https://github.com/shadow2496/VITON-HD) dataset
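A minimal inference sketch, assuming the standard diffusers ControlNet API and reusing the first example mask and prompt from the widget section of this card:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("phucleh2/controlnet_mask2cloth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Example mask and prompt taken from the widget section of this card.
mask = load_image(
    "https://huggingface.co/phucleh2/controlnet_mask2cloth/resolve/main/Examples/mask1.jpg"
).resize((384, 512))
image = pipe("a red top with a lace - trimmed neckline", image=mask, num_inference_steps=30).images[0]
image.save("cloth.png")
```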
|
Pixel390/TORTOISE
|
Pixel390
| 2023-08-09T03:07:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stablediffusionapi/disney-pixar-cartoon",
"base_model:adapter:stablediffusionapi/disney-pixar-cartoon",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-09T01:35:22Z |
---
license: creativeml-openrail-m
base_model: stablediffusionapi/disney-pixar-cartoon
instance_prompt: a uxz tortoise
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Pixel390/TORTOISE
These are LoRA adaption weights for stablediffusionapi/disney-pixar-cartoon. The weights were trained on a uxz tortoise using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: True.
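A minimal inference sketch, assuming the standard diffusers LoRA-loading API and the instance prompt from this card:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/disney-pixar-cartoon", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pixel390/TORTOISE")

# Instance prompt from this card: "a uxz tortoise".
image = pipe("a uxz tortoise in a lush garden", num_inference_steps=30).images[0]
image.save("tortoise.png")
```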
|
Zphyr/TRCVAE
|
Zphyr
| 2023-08-09T02:52:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T02:52:19Z |
---
license: creativeml-openrail-m
---
|
cjtonde/my_awesome_model
|
cjtonde
| 2023-08-09T02:51:08Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T22:11:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
cabranch/distilgpt2-finetuned-wikitext2
|
cabranch
| 2023-08-09T02:41:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-08T14:37:24Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_keras_callback
model-index:
- name: cabranch/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cabranch/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8577
- Validation Loss: 3.6756
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8577 | 3.6756 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
thenewcompany/ppo-PyramidsRND
|
thenewcompany
| 2023-08-09T02:38:43Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-09T02:38:40Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: thenewcompany/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sanitas/a2c-PandaReachDense-v3
|
sanitas
| 2023-08-09T02:33:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T02:19:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
asenella/incomplete_mhd_MVTCAE_beta_5_scale_False_seed_0
|
asenella
| 2023-08-09T02:31:08Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-09T02:30:59Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
saurabh2086/Reinforce-CartPole-8
|
saurabh2086
| 2023-08-09T02:12:21Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T02:12:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ittailup/distilgender-es-2M
|
ittailup
| 2023-08-09T01:58:07Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"es",
"dataset:ittailup/issste",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T01:36:11Z |
---
license: apache-2.0
datasets:
- ittailup/issste
language:
- es
metrics:
- accuracy: 0.9951
widget:
- text: AGATA
- text: GABRIEL
---
## Model Card
### Overview
This model card provides details about a trained model, its training process, and evaluation metrics. This information ensures transparency and assists users in understanding the model's performance and behavior.
### Training Details
- **Training Epochs**: The model was trained for 2 epochs.
- **Training Steps**: The model underwent 1856 training steps.
- **Training Runtime**: The model's training runtime was approximately 2680.184 seconds.
- **Training Speed**: The model trained at a rate of 0.692 steps per second and processed approximately 1417.813 samples per second.
- **Learning Rate**: The learning rate during training was approximately 0.0000095905.
- **Training Loss**: The average training loss recorded was approximately 0.0184, with a specific loss value of 0.023423514232553285.
### Evaluation Details
- **Evaluation Loss**: The model achieved an evaluation loss of 0.017659155651926994.
- **Evaluation Runtime**: The evaluation process took approximately 23.8414 seconds.
- **Evaluation Speed**: The model was evaluated at a rate of 2.055 steps per second, processing approximately 4194.378 samples per second.
### Performance Metrics
- **Accuracy**: The model achieved an accuracy of 0.9951 during evaluation.
- **Precision**: The precision of the model is approximately 0.9957234121187588.
- **Recall**: The model's recall is approximately 0.9956533216014078.
- **F1-Score**: The F1-Score for the model is approximately 0.995688365626595.
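A minimal inference sketch with the `text-classification` pipeline, reusing the two widget examples above:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ittailup/distilgender-es-2M")

# Widget examples from this card.
print(classifier(["AGATA", "GABRIEL"]))
```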
|
Aztects222002/Ehsush
|
Aztects222002
| 2023-08-09T01:54:45Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-09T01:54:45Z |
---
license: bigscience-openrail-m
---
|
henilp105/wav2vec2-large-xls-r-300m-telugu-asr
|
henilp105
| 2023-08-09T01:35:24Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-17T14:03:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-telugu-asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-telugu-asr
This model is a fine-tuned version of [henilp105/wav2vec2-large-xls-r-300m-telugu-asr](https://huggingface.co/henilp105/wav2vec2-large-xls-r-300m-telugu-asr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1050
- Wer: 0.6656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.0506 | 2.3 | 200 | 0.8841 | 0.7564 |
| 0.6354 | 4.59 | 400 | 0.7448 | 0.6912 |
| 0.3934 | 6.89 | 600 | 0.8321 | 0.6929 |
| 0.2652 | 9.19 | 800 | 0.9529 | 0.6984 |
| 0.2022 | 11.49 | 1000 | 0.9490 | 0.6979 |
| 0.1514 | 13.79 | 1200 | 1.0025 | 0.6869 |
| 0.124 | 16.09 | 1400 | 1.0367 | 0.6799 |
| 0.1007 | 18.39 | 1600 | 1.0658 | 0.6734 |
| 0.0875 | 20.69 | 1800 | 1.0758 | 0.6779 |
| 0.0838 | 22.98 | 2000 | 1.0999 | 0.6701 |
| 0.0745 | 25.29 | 2200 | 1.1020 | 0.6708 |
| 0.0641 | 27.58 | 2400 | 1.1140 | 0.6683 |
| 0.0607 | 29.88 | 2600 | 1.1050 | 0.6656 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
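### Inference example
A minimal transcription sketch, assuming a 16 kHz mono audio file; the file path below is a hypothetical placeholder.
```python
# Minimal sketch: transcribe a Telugu audio clip with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="henilp105/wav2vec2-large-xls-r-300m-telugu-asr",
)
print(asr("sample_telugu.wav")["text"])  # "sample_telugu.wav" is a placeholder path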
|
simonycl/bert-base-uncased-sst-2-16-13-30
|
simonycl
| 2023-08-09T00:59:40Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T00:58:29Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst-2-16-13-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst-2-16-13-30
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5710
- Accuracy: 0.7188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6730 | 0.5938 |
| No log | 2.0 | 2 | 0.6718 | 0.625 |
| No log | 3.0 | 3 | 0.6692 | 0.6562 |
| No log | 4.0 | 4 | 0.6657 | 0.6875 |
| No log | 5.0 | 5 | 0.6616 | 0.6562 |
| No log | 6.0 | 6 | 0.6567 | 0.7188 |
| No log | 7.0 | 7 | 0.6514 | 0.6875 |
| No log | 8.0 | 8 | 0.6462 | 0.75 |
| No log | 9.0 | 9 | 0.6407 | 0.75 |
| 0.6558 | 10.0 | 10 | 0.6354 | 0.75 |
| 0.6558 | 11.0 | 11 | 0.6311 | 0.6562 |
| 0.6558 | 12.0 | 12 | 0.6277 | 0.625 |
| 0.6558 | 13.0 | 13 | 0.6244 | 0.5938 |
| 0.6558 | 14.0 | 14 | 0.6203 | 0.5938 |
| 0.6558 | 15.0 | 15 | 0.6158 | 0.5938 |
| 0.6558 | 16.0 | 16 | 0.6109 | 0.5938 |
| 0.6558 | 17.0 | 17 | 0.6066 | 0.5938 |
| 0.6558 | 18.0 | 18 | 0.6016 | 0.5938 |
| 0.6558 | 19.0 | 19 | 0.5968 | 0.5938 |
| 0.4973 | 20.0 | 20 | 0.5924 | 0.6562 |
| 0.4973 | 21.0 | 21 | 0.5882 | 0.6875 |
| 0.4973 | 22.0 | 22 | 0.5843 | 0.6875 |
| 0.4973 | 23.0 | 23 | 0.5812 | 0.6875 |
| 0.4973 | 24.0 | 24 | 0.5785 | 0.7188 |
| 0.4973 | 25.0 | 25 | 0.5762 | 0.7188 |
| 0.4973 | 26.0 | 26 | 0.5744 | 0.7188 |
| 0.4973 | 27.0 | 27 | 0.5730 | 0.7188 |
| 0.4973 | 28.0 | 28 | 0.5720 | 0.7188 |
| 0.4973 | 29.0 | 29 | 0.5713 | 0.7188 |
| 0.3872 | 30.0 | 30 | 0.5710 | 0.7188 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
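### Inference example
A minimal sketch of loading the checkpoint for inference; the example sentence is illustrative, and the predicted label name follows whatever `id2label` mapping is stored in the config.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "simonycl/bert-base-uncased-sst-2-16-13-30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A genuinely moving and well acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names come from the saved config
```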
|
Dani100/Ultron
|
Dani100
| 2023-08-09T00:59:36Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-09T00:55:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
simonycl/roberta-base-sst-2-64-13-30
|
simonycl
| 2023-08-09T00:58:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T00:55:16Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-sst-2-64-13-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-sst-2-64-13-30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6400
- Accuracy: 0.8984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.6936 | 0.5 |
| No log | 2.0 | 8 | 0.6928 | 0.5156 |
| 0.6938 | 3.0 | 12 | 0.6921 | 0.6328 |
| 0.6938 | 4.0 | 16 | 0.6911 | 0.6328 |
| 0.6895 | 5.0 | 20 | 0.6894 | 0.5859 |
| 0.6895 | 6.0 | 24 | 0.6866 | 0.625 |
| 0.6895 | 7.0 | 28 | 0.6818 | 0.6641 |
| 0.6758 | 8.0 | 32 | 0.6727 | 0.6953 |
| 0.6758 | 9.0 | 36 | 0.6495 | 0.7656 |
| 0.615 | 10.0 | 40 | 0.5773 | 0.8125 |
| 0.615 | 11.0 | 44 | 0.4229 | 0.875 |
| 0.615 | 12.0 | 48 | 0.3311 | 0.8906 |
| 0.3514 | 13.0 | 52 | 0.3047 | 0.8906 |
| 0.3514 | 14.0 | 56 | 0.3420 | 0.8828 |
| 0.0929 | 15.0 | 60 | 0.4113 | 0.8906 |
| 0.0929 | 16.0 | 64 | 0.4550 | 0.8906 |
| 0.0929 | 17.0 | 68 | 0.5299 | 0.8906 |
| 0.0206 | 18.0 | 72 | 0.6554 | 0.8594 |
| 0.0206 | 19.0 | 76 | 0.7213 | 0.8594 |
| 0.007 | 20.0 | 80 | 0.7860 | 0.8516 |
| 0.007 | 21.0 | 84 | 0.8466 | 0.8438 |
| 0.007 | 22.0 | 88 | 0.8522 | 0.8516 |
| 0.0037 | 23.0 | 92 | 0.8023 | 0.8516 |
| 0.0037 | 24.0 | 96 | 0.6670 | 0.8828 |
| 0.0028 | 25.0 | 100 | 0.6224 | 0.8984 |
| 0.0028 | 26.0 | 104 | 0.6283 | 0.8906 |
| 0.0028 | 27.0 | 108 | 0.6333 | 0.8906 |
| 0.0026 | 28.0 | 112 | 0.6307 | 0.8906 |
| 0.0026 | 29.0 | 116 | 0.6348 | 0.8984 |
| 0.003 | 30.0 | 120 | 0.6400 | 0.8984 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
simonycl/roberta-base-sst-2-16-13-30
|
simonycl
| 2023-08-09T00:53:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T00:45:22Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-sst-2-16-13-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-sst-2-16-13-30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6585
- Accuracy: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6934 | 0.5 |
| No log | 2.0 | 2 | 0.6933 | 0.5 |
| No log | 3.0 | 3 | 0.6933 | 0.5 |
| No log | 4.0 | 4 | 0.6929 | 0.5 |
| No log | 5.0 | 5 | 0.6925 | 0.5 |
| No log | 6.0 | 6 | 0.6920 | 0.5 |
| No log | 7.0 | 7 | 0.6914 | 0.5 |
| No log | 8.0 | 8 | 0.6909 | 0.6875 |
| No log | 9.0 | 9 | 0.6904 | 0.625 |
| 0.6897 | 10.0 | 10 | 0.6899 | 0.5 |
| 0.6897 | 11.0 | 11 | 0.6894 | 0.5 |
| 0.6897 | 12.0 | 12 | 0.6888 | 0.5 |
| 0.6897 | 13.0 | 13 | 0.6880 | 0.5312 |
| 0.6897 | 14.0 | 14 | 0.6871 | 0.5312 |
| 0.6897 | 15.0 | 15 | 0.6860 | 0.5312 |
| 0.6897 | 16.0 | 16 | 0.6849 | 0.6562 |
| 0.6897 | 17.0 | 17 | 0.6836 | 0.7188 |
| 0.6897 | 18.0 | 18 | 0.6821 | 0.6875 |
| 0.6897 | 19.0 | 19 | 0.6805 | 0.6875 |
| 0.6642 | 20.0 | 20 | 0.6788 | 0.6875 |
| 0.6642 | 21.0 | 21 | 0.6768 | 0.7188 |
| 0.6642 | 22.0 | 22 | 0.6746 | 0.7188 |
| 0.6642 | 23.0 | 23 | 0.6723 | 0.7188 |
| 0.6642 | 24.0 | 24 | 0.6696 | 0.7188 |
| 0.6642 | 25.0 | 25 | 0.6670 | 0.6875 |
| 0.6642 | 26.0 | 26 | 0.6644 | 0.6875 |
| 0.6642 | 27.0 | 27 | 0.6622 | 0.7188 |
| 0.6642 | 28.0 | 28 | 0.6604 | 0.7188 |
| 0.6642 | 29.0 | 29 | 0.6592 | 0.6875 |
| 0.5945 | 30.0 | 30 | 0.6585 | 0.6875 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
Jenniferkmc/controlnet-model2
|
Jenniferkmc
| 2023-08-09T00:52:49Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-08T15:34:54Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Jenniferkmc/controlnet-model2
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background

prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality

prompt: Portrait of a clown face, oil on canvas, bittersweet expression

|
pixelparty/pixel-party-xl
|
pixelparty
| 2023-08-09T00:35:28Z | 22 | 15 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-08T21:57:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: . in pixel art style
widget:
- text: cute dragon. in pixel art style
---
# Pixel Party XL
This is a full model fine-tune of SDXL, trained for better pixel-art adherence. Feel free to use this model for your own projects, but please do not host it.

We are building tools for indie game development and currently offer tools for:
- Map tiles
- Movement animations
- Attack animations
- Inpainting
- Character reshaping
- Animation interpolation
And have much more planned! :D
If you want to support us or check out our other pixel art models, you can find us at [PixelLab](https://www.pixellab.ai) or on [Discord](https://discord.gg/pBeyTBF8T7).
## How to use
- Append ". in pixel art style" to your prompt. E.g. "cute dragon. in pixel art style"
- Downsize the image 8x using nearest neighbor
- Init images are very helpful
- The model works best at around a 128x128 canvas size, but still excels at creating smaller items, characters, and other assets
- Use a VAE with fixed fp16 support: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
- Do not use refiner
### Diffusers
```python
from diffusers import DiffusionPipeline, UNet2DConditionModel
import torch
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
unet=UNet2DConditionModel.from_pretrained("pixelparty/pixel-party-xl", torch_dtype=torch.float16),
use_safetensors=True,
variant="fp16",
)
pipe.to("cuda")
torch.manual_seed(11215)
prompt = "cute dragon. in pixel art style"
negative_prompt = "mixels. amateur. multiple"
image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=25).images[0]
```
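As suggested in the usage notes above, the generated image should be downscaled 8x with nearest-neighbor sampling. A small sketch of that step, assuming Pillow is installed and `image` is the PIL image returned by the pipeline:
```python
from PIL import Image

# 8x nearest-neighbor downscale to reach the intended pixel-art resolution.
pixel_image = image.resize(
    (image.width // 8, image.height // 8), resample=Image.NEAREST
)
pixel_image.save("dragon_pixel.png")
```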
## License
Please do not host this model. It is otherwise licensed under CreativeML-OpenRail-M.
|
wellecks/llmstep-mathlib4-pythia2.8b
|
wellecks
| 2023-08-09T00:16:28Z | 412 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2102.06203",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-08T22:42:44Z |
---
license: mit
---
### llmstep: [L]LM proofstep suggestions in Lean
https://github.com/wellecks/llmstep
This model is a Pythia-2.8b-deduped language model fine-tuned on [LeanDojo Benchmark 4](https://zenodo.org/record/8040110).
The model is fine-tuned on sequences of the form:
```bash
[GOAL]tactic-state[PROOFSTEP]next-tactic<|endoftext|>
```
This format corresponds to the proofstep objective from [Han et al., ICLR 2022](https://arxiv.org/abs/2102.06203).\
The [python/train](python/train) directory in the repository shows how the model was fine-tuned.
Please see the repository for more details.
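As an illustration of the prompt format above, here is a minimal generation sketch; the tactic state is an invented example and the decoding settings are assumptions, not the repository's defaults.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wellecks/llmstep-mathlib4-pythia2.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical Lean tactic state, wrapped in the [GOAL]...[PROOFSTEP] format.
tactic_state = "n : ℕ\n⊢ n + 0 = n"
prompt = f"[GOAL]{tactic_state}[PROOFSTEP]"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
suggestion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(suggestion)
```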
```
@misc{llmstep,
author = {Sean Welleck},
title = {llmstep: LLM proofstep suggestions in Lean},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/wellecks/llmstep}},
}
```
|
RomyMy/dqn-SpaceInvadersNoFrameskip-v4
|
RomyMy
| 2023-08-09T00:15:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T00:15:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 657.50 +/- 342.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RomyMy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RomyMy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
thisiskeithkwan/cantomed7
|
thisiskeithkwan
| 2023-08-09T00:02:28Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-08T17:57:46Z |
---
language:
- yue
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper medium 1/10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium 1/10
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
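### Inference example
A minimal transcription sketch; the audio path is a placeholder and the checkpoint is used as a standard Whisper ASR model without any language forcing.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisiskeithkwan/cantomed7",
    chunk_length_s=30,  # chunked decoding for clips longer than 30 s
)
print(asr("cantonese_sample.wav")["text"])  # placeholder path
```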
|
C-Lo/masked-dataset
|
C-Lo
| 2023-08-08T23:45:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T23:41:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: masked-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
agustinl/reinforce-cartpole-v1
|
agustinl
| 2023-08-08T23:37:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T23:37:36Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
C-Lo/neutral-dataset
|
C-Lo
| 2023-08-08T23:31:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T23:27:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: neutral-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neutral-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
C-Lo/gendered-dataset
|
C-Lo
| 2023-08-08T23:26:05Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T23:22:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: gendered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gendered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
patonw/ppo-SnowballTarget
|
patonw
| 2023-08-08T23:13:34Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-08T23:13:29Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: patonw/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
iamnambiar/Reinforce-CartPole-v1
|
iamnambiar
| 2023-08-08T22:21:22Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T22:21:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
danorel/dqn-SpaceInvadersNoFrameskip-v4
|
danorel
| 2023-08-08T22:06:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-08T22:06:01Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 620.50 +/- 135.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danorel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danorel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga danorel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
divyeshrajpura/speecht5-finetuned-voxpopuli-nl
|
divyeshrajpura
| 2023-08-08T21:53:36Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-08T18:46:09Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-voxpopuli-nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-voxpopuli-nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5157 | 4.3 | 1000 | 0.4752 |
| 0.4994 | 8.6 | 2000 | 0.4619 |
| 0.5002 | 12.9 | 3000 | 0.4578 |
| 0.4968 | 17.2 | 4000 | 0.4556 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
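### Inference example
A minimal synthesis sketch following the standard SpeechT5 recipe; the speaker x-vector source, the Dutch example sentence, and the assumption that the processor was saved with this checkpoint (otherwise load it from microsoft/speecht5_tts) are not specified by this card.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "divyeshrajpura/speecht5-finetuned-voxpopuli-nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Example x-vector speaker embedding, as in the SpeechT5 TTS documentation.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Goedemiddag, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```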
|