valhalla committed
Commit 4207976
Parent: 9fc8cc8

End of training
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ image_control.png filter=lfs diff=lfs merge=lfs -text
+ images_0.png filter=lfs diff=lfs merge=lfs -text
+ images_1.png filter=lfs diff=lfs merge=lfs -text
+ images_2.png filter=lfs diff=lfs merge=lfs -text
+ images_3.png filter=lfs diff=lfs merge=lfs -text
418646_accelerate_config.yaml.autogenerated ADDED
@@ -0,0 +1,11 @@
+ compute_environment: LOCAL_MACHINE
+ deepspeed_config: {}
+ distributed_type: MULTI_GPU
+ fsdp_config: {}
+ machine_rank: 0
+ main_process_ip: ip-26-0-144-35
+ main_process_port: 6000
+ main_training_function: main
+ num_machines: 1
+ num_processes: 1
+ use_cpu: false
418651_accelerate_config.yaml.autogenerated ADDED
@@ -0,0 +1,11 @@
+ compute_environment: LOCAL_MACHINE
+ deepspeed_config: {}
+ distributed_type: MULTI_GPU
+ fsdp_config: {}
+ machine_rank: 0
+ main_process_ip: ip-26-0-144-35
+ main_process_port: 6000
+ main_training_function: main
+ num_machines: 1
+ num_processes: 1
+ use_cpu: false
418661_accelerate_config.yaml.autogenerated ADDED
@@ -0,0 +1,11 @@
+ compute_environment: LOCAL_MACHINE
+ deepspeed_config: {}
+ distributed_type: MULTI_GPU
+ fsdp_config: {}
+ machine_rank: 0
+ main_process_ip: ip-26-0-144-35
+ main_process_port: 6000
+ main_training_function: main
+ num_machines: 1
+ num_processes: 1
+ use_cpu: false
418664_accelerate_config.yaml.autogenerated ADDED
@@ -0,0 +1,11 @@
+ compute_environment: LOCAL_MACHINE
+ deepspeed_config: {}
+ distributed_type: MULTI_GPU
+ fsdp_config: {}
+ machine_rank: 0
+ main_process_ip: ip-26-0-144-35
+ main_process_port: 6000
+ main_training_function: main
+ num_machines: 1
+ num_processes: 1
+ use_cpu: false
418665_accelerate_config.yaml.autogenerated ADDED
@@ -0,0 +1,11 @@
+ compute_environment: LOCAL_MACHINE
+ deepspeed_config: {}
+ distributed_type: MULTI_GPU
+ fsdp_config: {}
+ machine_rank: 0
+ main_process_ip: ip-26-0-144-35
+ main_process_port: 6000
+ main_training_function: main
+ num_machines: 1
+ num_processes: 1
+ use_cpu: false
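The five autogenerated configs above carry identical `accelerate launch` settings (single machine, one process, MULTI_GPU type). As a minimal sketch only, not the repo's `train_t2i_adapter.py`, this is roughly how a script launched under such a config reads the resulting runtime state; the `mixed_precision="fp16"` value is taken from the training log further down, and the launch-time keys (num_processes, main_process_ip, ...) are consumed by `accelerate launch` itself rather than by the script:

```python
# Illustrative sketch: what the training process sees once launched with
# one of the autogenerated accelerate configs in this commit.
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # fp16 per the log below
print(accelerator.state)   # distributed type, backend, num_processes
print(accelerator.device)  # e.g. cuda:0
```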
README.md ADDED
@@ -0,0 +1,26 @@
+
+ ---
+ license: creativeml-openrail-m
+ base_model: stabilityai/stable-diffusion-xl-base-1.0
+ tags:
+ - stable-diffusion-xl
+ - stable-diffusion-xl-diffusers
+ - text-to-image
+ - diffusers
+ - t2iadapter
+ inference: true
+ ---
+
+ # t2iadapter-valhalla/t2i-style
+
+ These are t2iadapter weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
+ You can find some example images below.
+ prompt: a picture of a cat, 4k photo, highly detailed
+ ![images_0](./images_0.png)
+ prompt: a jungle, 4k photo, highly detailed
+ ![images_1](./images_1.png)
+ prompt: a truck, 4k photo, highly detailed
+ ![images_2](./images_2.png)
+ prompt: a digital painting of a lion, highly detailed
+ ![images_3](./images_3.png)
+
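The card stops at example images, so here is a hedged usage sketch. It assumes the repo id implied by the title (`valhalla/t2i-style`) and a diffusers release that ships `StableDiffusionXLAdapterPipeline`; adjust both to match the actual repository.

```python
# Illustrative only: load the committed t2iadapter weights on top of SDXL base.
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained("valhalla/t2i-style", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

control = load_image("./image_control.png")  # the conditioning image committed above
image = pipe("a picture of a cat, 4k photo, highly detailed", image=control).images[0]
image.save("out.png")
```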
image_control.png ADDED

Git LFS Details

  • SHA256: fa096512e91e6cd8201453c8b77057e0350d19095fda9d7039a508dc4b59b1d2
  • Pointer size: 132 Bytes
  • Size of remote file: 1.3 MB
images_0.png ADDED

Git LFS Details

  • SHA256: 3ea97dc5f882cdb233e7fae84650f7f42be2e5c9a51e99c609e34531c8150182
  • Pointer size: 132 Bytes
  • Size of remote file: 9.02 MB
images_1.png ADDED

Git LFS Details

  • SHA256: ce161778551f902cde1ed0afde855f028f1580f04537a8a2438633f6231da173
  • Pointer size: 132 Bytes
  • Size of remote file: 9.37 MB
images_2.png ADDED

Git LFS Details

  • SHA256: 687b8183b6845529d41b2d970728ff31290384fd77faaa345cb04b5ee8ad40b5
  • Pointer size: 132 Bytes
  • Size of remote file: 7.46 MB
images_3.png ADDED

Git LFS Details

  • SHA256: 8eef0056f309ec2f86d2ee03f50a847597b4df6fae22063071da78de8466628b
  • Pointer size: 132 Bytes
  • Size of remote file: 9.25 MB
main_log.txt ADDED
@@ -0,0 +1,218 @@
+ The following values were not passed to `accelerate launch` and had defaults used instead:
+ `--dynamo_backend` was set to a value of `'no'`
+ To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
+ A matching Triton is not available, some optimizations will not be enabled.
+ Error caught was: No module named 'triton'
+ 09/05/2023 22:46:04 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
+ Num processes: 1
+ Process index: 0
+ Local process index: 0
+ Device: cuda:0
+
+ Mixed precision type: fp16
+
+ You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
+ You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
+ {'attention_type'} was not found in config. Values will be initialized to default values.
+ 09/05/2023 22:46:17 - INFO - __main__ - Initializing t2iadapter weights from unet
+ Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.7.self_attn.v_proj.bias', 'text_model.encoder.layers.4.self_attn.k_proj.bias', 'text_model.encoder.layers.4.self_attn.q_proj.bias', 'text_model.encoder.layers.6.mlp.fc2.bias', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.7.self_attn.k_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.0.self_attn.out_proj.bias', 'text_model.encoder.layers.1.self_attn.out_proj.bias', 'text_model.encoder.layers.8.self_attn.q_proj.bias', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.bias', 'text_model.encoder.layers.1.mlp.fc1.bias', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.k_proj.bias', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.0.mlp.fc2.bias', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.3.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.final_layer_norm.weight', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.bias', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.2.mlp.fc1.bias', 'text_model.encoder.layers.5.self_attn.out_proj.bias', 'text_model.encoder.layers.5.self_attn.v_proj.bias', 'text_model.encoder.layers.7.mlp.fc2.bias', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.7.self_attn.out_proj.bias', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.embeddings.token_embedding.weight', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.4.mlp.fc1.bias', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 
'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.9.self_attn.q_proj.bias', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.8.self_attn.out_proj.bias', 'text_model.encoder.layers.1.mlp.fc2.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.q_proj.bias', 'text_model.encoder.layers.7.mlp.fc1.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.9.mlp.fc2.bias', 'text_model.encoder.layers.8.self_attn.k_proj.bias', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.7.self_attn.q_proj.bias', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc2.bias', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.k_proj.bias', 'text_model.final_layer_norm.bias', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.3.self_attn.v_proj.bias', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.bias', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.9.self_attn.k_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.bias', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.2.self_attn.v_proj.bias', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.embeddings.position_ids', 
'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.8.mlp.fc1.bias', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'visual_projection.weight', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.v_proj.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.2.self_attn.k_proj.bias', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.out_proj.bias', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.q_proj.bias', 'text_model.encoder.layers.3.mlp.fc1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.k_proj.bias', 'text_model.encoder.layers.3.self_attn.q_proj.bias', 'text_model.encoder.layers.1.self_attn.k_proj.bias', 'text_model.encoder.layers.5.mlp.fc1.bias', 'text_model.encoder.layers.0.mlp.fc1.bias', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.0.self_attn.k_proj.bias', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_projection.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'logit_scale', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.9.self_attn.out_proj.bias', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.4.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.v_proj.bias', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.6.mlp.fc1.bias', 'text_model.encoder.layers.8.self_attn.v_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.2.self_attn.q_proj.bias', 'text_model.encoder.layers.2.mlp.fc2.bias', 'text_model.encoder.layers.8.mlp.fc2.bias', 'text_model.encoder.layers.1.self_attn.k_proj.weight']
+ - This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
+ - This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
+ /admin/home/suraj/code/muse-experiments/ctrlnet/train_t2i_adapter.py:1331: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
+ logger.warn(
+ 09/05/2023 22:46:22 - WARNING - __main__ - xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details.
+ wandb: Currently logged in as: psuraj. Use `wandb login --relogin` to force relogin
+ wandb: wandb version 0.15.9 is available! To upgrade, please run:
+ wandb: $ pip install wandb --upgrade
+ wandb: Tracking run with wandb version 0.12.21
+ wandb: Run data is saved locally in /admin/home/suraj/code/muse-experiments/ctrlnet/wandb/run-20230905_224634-25abmfuh
+ wandb: Run `wandb offline` to turn off syncing.
+ wandb: Syncing run worthy-wave-64
+ wandb: ⭐️ View project at https://wandb.ai/psuraj/sd_xl_train_t2iadapter
+ wandb: 🚀 View run at https://wandb.ai/psuraj/sd_xl_train_t2iadapter/runs/25abmfuh
+ 09/05/2023 22:46:38 - INFO - __main__ - ***** Running training *****
+ 09/05/2023 22:46:38 - INFO - __main__ - Num batches each epoch = 187504
+ 09/05/2023 22:46:38 - INFO - __main__ - Num Epochs = 1
+ 09/05/2023 22:46:38 - INFO - __main__ - Instantaneous batch size per device = 16
+ 09/05/2023 22:46:38 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 16
+ 09/05/2023 22:46:38 - INFO - __main__ - Gradient Accumulation steps = 1
+ 09/05/2023 22:46:38 - INFO - __main__ - Total optimization steps = 10
+ Checkpoint 'latest' does not exist. Starting a new training run.
+
+ Loaded tokenizer as CLIPTokenizer from `tokenizer` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
+ Loaded tokenizer_2 as CLIPTokenizer from `tokenizer_2` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
+
+ Loaded text_encoder as CLIPTextModel from `text_encoder` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
+
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81fe263ec3f137b76ab8f3708f2861f6e9aa169e8dc04e3fd7840a6635d61d8e
+ size 159591023
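pytorch_model.bin is committed as a Git LFS pointer (roughly 160 MB of adapter weights). A small sketch for sanity-checking the checkpoint, assuming a local clone with the LFS object pulled:

```python
# Illustrative inspection of the committed adapter checkpoint.
import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
print(f"{len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```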