objects76/zephyr-7b-dpo-qlora
Tags: PEFT · TensorBoard · Safetensors · HuggingFaceH4/ultrafeedback_binarized · mistral · alignment-handbook · trl · dpo · Generated from Trainer · 4-bit precision · bitsandbytes
License: apache-2.0
zephyr-7b-dpo-qlora
1 contributor · History: 11 commits
Latest commit 13990f9 (verified) by objects76 — "Training in progress, step 400" — 10 months ago
Files:

  runs/                        —          Training in progress, step 400   10 months ago
  .gitattributes               1.52 kB    initial commit                   10 months ago
  README.md                    9.22 kB    Model save                       10 months ago
  adapter_config.json          657 Bytes  Training in progress, step 400   10 months ago
  adapter_model.safetensors    671 MB     Training in progress, step 400   10 months ago   (LFS)
  all_results.json             192 Bytes  Model save                       10 months ago
  special_tokens_map.json      551 Bytes  Training in progress, step 900   10 months ago
  tokenizer.json               1.8 MB     Training in progress, step 900   10 months ago
  tokenizer_config.json        1.39 kB    Training in progress, step 900   10 months ago
  train_results.json           192 Bytes  Model save                       10 months ago
  trainer_state.json           216 kB     Model save                       10 months ago
  training_args.bin            5.05 kB    Training in progress, step 400   10 months ago   (LFS, pickle)

Detected Pickle imports (9) in training_args.bin:
  "alignment.configs.DPOConfig",
  "transformers.trainer_utils.IntervalStrategy",
  "transformers.trainer_utils.HubStrategy",
  "transformers.trainer_utils.SchedulerType",
  "transformers.trainer_pt_utils.AcceleratorConfig",
  "torch.device",
  "accelerate.utils.dataclasses.DistributedType",
  "transformers.training_args.OptimizerNames",
  "accelerate.state.PartialState"
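The pickle-import list for training_args.bin comes from the Hub's security scanner, which enumerates the globals a pickle would import when unpickled, without actually executing it. A minimal sketch of the same idea using only the standard-library pickletools on a toy payload; the helper name detect_pickle_imports is my own, not a Hub API, and this simplified version only handles the common GLOBAL/INST/STACK_GLOBAL opcodes (real scanners simulate the full pickle stack):

```python
import datetime
import pickle
import pickletools

def detect_pickle_imports(data: bytes) -> set[str]:
    """List the module.attr globals a pickle references, without unpickling it."""
    imports = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name in ("GLOBAL", "INST"):
            # arg is "module name" as a single space-separated string
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # module and name are the two most recent string constants pushed;
            # a simplification — MEMOIZE/FRAME args are not strings, so they
            # are filtered out here
            strs = [a for _, a, _ in ops[:i] if isinstance(a, str)]
            if len(strs) >= 2:
                imports.add(f"{strs[-2]}.{strs[-1]}")
    return imports

# Toy payload: a pickle that references datetime.date
payload = pickle.dumps(datetime.date(2024, 1, 1))
print(detect_pickle_imports(payload))
```

Inspecting opcodes this way is why the Hub can flag classes like alignment.configs.DPOConfig up front: any of the listed globals would be imported (and, for callables, invoked) on load, which is the reason pickled files from untrusted sources deserve this scrutiny while safetensors files do not.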