Model save

- README.md +0 -2
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -15,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # clip-finetuned
 
 This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset.
-It achieves the following results on the evaluation set:
-- Loss: 3.4175
 
 ## Model description
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:67bf0646c496f7e34a108e7da0bcd68fb76bbcc6e2aaa4c5e43a03f2d10cdfb5
 size 605156676
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b99a22485a5a0cdb3e9be9f352aa0d6b95cd3d0dd4574567dc4916c66dda4166
 size 4728
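The model.safetensors and training_args.bin entries above are Git LFS pointer files, not the binary contents themselves: each is three `key value` lines (`version`, `oid`, `size`), and the real file lives in LFS storage keyed by the sha256 digest. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper written for illustration, not part of any official LFS tooling):

```python
# Sketch: parse a Git LFS pointer file like the ones in this commit.
# A pointer is three "key value" lines: version, oid, size.
# parse_lfs_pointer is a hypothetical helper, not official LFS tooling.

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer into a dict of its fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is "<algorithm>:<hex digest>"; size is the real file's size in bytes
    algo, _, digest = fields["oid"].partition(":")
    fields["oid_algo"] = algo
    fields["oid_digest"] = digest
    fields["size"] = int(fields["size"])
    return fields

# The new model.safetensors pointer from the diff above
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:67bf0646c496f7e34a108e7da0bcd68fb76bbcc6e2aaa4c5e43a03f2d10cdfb5
size 605156676
"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])
```

Because the pointer carries the expected sha256 digest, a downloader can verify the fetched blob by hashing it and comparing against `oid_digest`.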