The following values were not passed to `accelerate launch` and had defaults used instead:
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
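The warning above only means `accelerate launch` fell back to its defaults for the unlisted options. A minimal sketch of silencing it, assuming the script name that appears later in this log and omitting every other training argument:

```bash
# Pass the flag explicitly instead of relying on the default,
# or persist a choice once with `accelerate config`.
accelerate launch --dynamo_backend no train_t2i_adapter.py
```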
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
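This is an optional dependency being skipped, not a failure. If the Triton-backed optimizations are wanted, installing the package removes the warning (Linux builds only; pick a release that matches the installed PyTorch):

```bash
# Optional: install Triton so the optional kernels mentioned above can load.
pip install triton
```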
09/05/2023 22:46:04 - INFO - __main__ - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'attention_type'} was not found in config. Values will be initialized to default values.
09/05/2023 22:46:17 - INFO - __main__ - Initializing t2iadapter weights from unet
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.7.self_attn.v_proj.bias', 'text_model.encoder.layers.4.self_attn.k_proj.bias', 'text_model.encoder.layers.4.self_attn.q_proj.bias', 'text_model.encoder.layers.6.mlp.fc2.bias', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.7.self_attn.k_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.0.self_attn.out_proj.bias', 'text_model.encoder.layers.1.self_attn.out_proj.bias', 'text_model.encoder.layers.8.self_attn.q_proj.bias', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.bias', 'text_model.encoder.layers.1.mlp.fc1.bias', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.k_proj.bias', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.0.mlp.fc2.bias', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.3.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.final_layer_norm.weight', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.bias', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.2.mlp.fc1.bias', 'text_model.encoder.layers.5.self_attn.out_proj.bias', 'text_model.encoder.layers.5.self_attn.v_proj.bias', 'text_model.encoder.layers.7.mlp.fc2.bias', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.7.self_attn.out_proj.bias', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.embeddings.token_embedding.weight', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.4.mlp.fc1.bias', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 
'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.9.self_attn.q_proj.bias', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.8.self_attn.out_proj.bias', 'text_model.encoder.layers.1.mlp.fc2.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.q_proj.bias', 'text_model.encoder.layers.7.mlp.fc1.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.9.mlp.fc2.bias', 'text_model.encoder.layers.8.self_attn.k_proj.bias', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.7.self_attn.q_proj.bias', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc2.bias', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.k_proj.bias', 'text_model.final_layer_norm.bias', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.3.self_attn.v_proj.bias', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.bias', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.9.self_attn.k_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.bias', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.2.self_attn.v_proj.bias', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.embeddings.position_ids', 
'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.8.mlp.fc1.bias', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'visual_projection.weight', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.v_proj.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.2.self_attn.k_proj.bias', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.out_proj.bias', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.q_proj.bias', 'text_model.encoder.layers.3.mlp.fc1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.k_proj.bias', 'text_model.encoder.layers.3.self_attn.q_proj.bias', 'text_model.encoder.layers.1.self_attn.k_proj.bias', 'text_model.encoder.layers.5.mlp.fc1.bias', 'text_model.encoder.layers.0.mlp.fc1.bias', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.0.self_attn.k_proj.bias', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_projection.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'logit_scale', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.9.self_attn.out_proj.bias', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.4.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.v_proj.bias', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.6.mlp.fc1.bias', 'text_model.encoder.layers.8.self_attn.v_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.2.self_attn.q_proj.bias', 'text_model.encoder.layers.2.mlp.fc2.bias', 'text_model.encoder.layers.8.mlp.fc2.bias', 'text_model.encoder.layers.1.self_attn.k_proj.weight'] |
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
/admin/home/suraj/code/muse-experiments/ctrlnet/train_t2i_adapter.py:1331: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  logger.warn(
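The deprecation is raised by the training script itself (path and line number in the warning), not by a library. A hedged sketch of the one-line rename, using the script file name from the warning; review the change before committing:

```bash
# Locate the deprecated calls, then rename logger.warn(...) to logger.warning(...).
grep -n "logger.warn(" train_t2i_adapter.py
sed -i 's/logger\.warn(/logger.warning(/g' train_t2i_adapter.py
```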
09/05/2023 22:46:22 - WARNING - __main__ - xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details.
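If that xFormers warning turns into real training problems, upgrading to the version it names is the suggested fix; make sure the wheel matches the installed CUDA/PyTorch build:

```bash
# Upgrade xFormers to at least 0.0.17, as the warning recommends.
pip install -U "xformers>=0.0.17"
```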
wandb: Currently logged in as: psuraj. Use `wandb login --relogin` to force relogin
wandb: wandb version 0.15.9 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.12.21
wandb: Run data is saved locally in /admin/home/suraj/code/muse-experiments/ctrlnet/wandb/run-20230905_224634-25abmfuh
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run worthy-wave-64
wandb: ⭐️ View project at https://wandb.ai/psuraj/sd_xl_train_t2iadapter
wandb: 🚀 View run at https://wandb.ai/psuraj/sd_xl_train_t2iadapter/runs/25abmfuh
09/05/2023 22:46:38 - INFO - __main__ - ***** Running training *****
09/05/2023 22:46:38 - INFO - __main__ - Num batches each epoch = 187504
09/05/2023 22:46:38 - INFO - __main__ - Num Epochs = 1
09/05/2023 22:46:38 - INFO - __main__ - Instantaneous batch size per device = 16
09/05/2023 22:46:38 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 16
09/05/2023 22:46:38 - INFO - __main__ - Gradient Accumulation steps = 1
09/05/2023 22:46:38 - INFO - __main__ - Total optimization steps = 10
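For reference, the totals above follow directly from the run configuration: total train batch size = per-device batch size x number of processes x gradient-accumulation steps, and the 10 optimization steps show the run was deliberately capped (e.g. with a max-train-steps style option) rather than iterating over all 187,504 batches. A quick sanity check with this run's values:

```bash
# total train batch size = 16 (per device) * 1 (process) * 1 (grad accumulation step)
echo $(( 16 * 1 * 1 ))   # prints 16, matching the log line above
```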
Checkpoint 'latest' does not exist. Starting a new training run.
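That message only says there is nothing to resume from yet, so a fresh run starts. A hedged sketch of resuming a later run from the newest saved checkpoint, assuming the `--resume_from_checkpoint` option that the diffusers example scripts expose (all other arguments omitted):

```bash
# Resume from the most recent checkpoint in the output directory on a subsequent run.
accelerate launch train_t2i_adapter.py --resume_from_checkpoint latest
```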
Steps: 0% 0/10 [00:00<?, ?it/s]
Steps: 10% 1/10 [00:12<01:54, 12.70s/it, loss=0.041, lr=1e-5]
09/05/2023 22:46:52 - INFO - torch.nn.parallel.distributed - Reducer buckets have been rebuilt in this iteration.
Steps: 20% 2/10 [00:16<01:01, 7.73s/it, loss=0.0519, lr=1e-5]
Steps: 30% 3/10 [00:21<00:43, 6.14s/it, loss=0.0302, lr=1e-5]
Steps: 40% 4/10 [00:25<00:32, 5.40s/it, loss=0.0355, lr=1e-5]
Steps: 50% 5/10 [00:29<00:24, 4.98s/it, loss=0.0418, lr=1e-5]
Steps: 60% 6/10 [00:33<00:18, 4.73s/it, loss=0.033, lr=1e-5]
Steps: 70% 7/10 [00:38<00:13, 4.58s/it, loss=0.0901, lr=1e-5]
Steps: 80% 8/10 [00:42<00:08, 4.47s/it, loss=0.0388, lr=1e-5]
Steps: 90% 9/10 [00:46<00:04, 4.40s/it, loss=0.0824, lr=1e-5]
Steps: 100% 10/10 [00:50<00:00, 4.35s/it, loss=0.0824, lr=1e-5]
09/05/2023 22:47:29 - INFO - __main__ - Running validation...
Loaded scheduler as EulerDiscreteScheduler from `scheduler` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
Loaded tokenizer as CLIPTokenizer from `tokenizer` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
Loaded tokenizer_2 as CLIPTokenizer from `tokenizer_2` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
Loaded text_encoder_2 as CLIPTextModelWithProjection from `text_encoder_2` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
Loaded text_encoder as CLIPTextModel from `text_encoder` subfolder of stabilityai/stable-diffusion-xl-base-1.0.
Loading pipeline components...: 100% 7/7 [00:03<00:00, 2.31it/s]
Steps: 100% 10/10 [01:56<00:00, 4.35s/it, loss=0.0269, lr=1e-5]
image_control.png: 100% 1.30M/1.30M [00:00<00:00, 1.32MB/s]
images_2.png: 100% 7.46M/7.46M [00:01<00:00, 6.20MB/s]
images_0.png: 100% 9.02M/9.02M [00:01<00:00, 7.38MB/s]
images_1.png: 100% 9.37M/9.37M [00:01<00:00, 7.56MB/s]
images_3.png: 100% 9.25M/9.25M [00:01<00:00, 6.94MB/s]
pytorch_model.bin: 100% 160M/160M [00:11<00:00, 14.3MB/s]
Upload 6 LFS files: 100% 6/6 [00:12<00:00, 2.02s/it]
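The LFS upload at the end is the script pushing the validation images and the trained adapter weights (`pytorch_model.bin`) to the Hugging Face Hub. A minimal prerequisite sketch, assuming a push-to-hub style option was enabled for this run:

```bash
# A Hub session with a write token is needed before launching;
# `whoami` shows which account the artifacts will be pushed to.
huggingface-cli login
huggingface-cli whoami
```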