GindaChen committed on
Commit cb745f8 · verified · 1 Parent(s): eed20d6

Upload folder using huggingface_hub
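A minimal sketch of the kind of call that produces a commit like this one via the huggingface_hub API; the folder path and repo id below are hypothetical placeholders, not values taken from this commit:

    from huggingface_hub import upload_folder

    # Upload a local directory of SLURM log files as a single commit.
    # folder_path and repo_id are assumed values for illustration only.
    upload_folder(
        folder_path="./attnserver-logs",
        repo_id="GindaChen/attnserver-logs",
        repo_type="model",  # or "dataset", depending on the target repo
        commit_message="Upload folder using huggingface_hub",
    )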

attnserver.run_attnserver.slurm.sh.343188.out.log CHANGED
@@ -124805,3 +124805,16 @@ batch tensor after cp: labels torch.Size([1, 16384])
124805
  batch tensor after cp: loss_mask torch.Size([1, 16384])
124806
  batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124807
  batch tensor after cp: position_ids torch.Size([1, 16384])
124808
+ batch tensor: tokens torch.Size([1, 131072])
124809
+ batch tensor: labels torch.Size([1, 131072])
124810
+ batch tensor: loss_mask torch.Size([1, 131072])
124811
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124812
+ batch tensor: position_ids torch.Size([1, 131072])
124813
+ batch tensor after cp: tokens torch.Size([1, 16384])
124814
+ batch tensor after cp: labels torch.Size([1, 16384])
124815
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124816
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124817
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124818
+ Start exporting trace 8
124819
+ Done exporting trace 8
124820
+ [2025-06-21 21:18:21] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 128291.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
attnserver.run_attnserver.slurm.sh.343195.out.log CHANGED
@@ -67730,3 +67730,36 @@ batch tensor after cp: labels torch.Size([1, 32768])
67730
  batch tensor after cp: loss_mask torch.Size([1, 32768])
67731
  batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67732
  batch tensor after cp: position_ids torch.Size([1, 32768])
67733
+ batch tensor: tokens torch.Size([1, 131072])
67734
+ batch tensor: labels torch.Size([1, 131072])
67735
+ batch tensor: loss_mask torch.Size([1, 131072])
67736
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67737
+ batch tensor: position_ids torch.Size([1, 131072])
67738
+ batch tensor after cp: tokens torch.Size([1, 32768])
67739
+ batch tensor after cp: labels torch.Size([1, 32768])
67740
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67741
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67742
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67743
+ batch tensor: tokens torch.Size([1, 131072])
67744
+ batch tensor: labels torch.Size([1, 131072])
67745
+ batch tensor: loss_mask torch.Size([1, 131072])
67746
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67747
+ batch tensor: position_ids torch.Size([1, 131072])
67748
+ batch tensor after cp: tokens torch.Size([1, 32768])
67749
+ batch tensor after cp: labels torch.Size([1, 32768])
67750
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67751
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67752
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67753
+ batch tensor: tokens torch.Size([1, 131072])
67754
+ batch tensor: labels torch.Size([1, 131072])
67755
+ batch tensor: loss_mask torch.Size([1, 131072])
67756
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67757
+ batch tensor: position_ids torch.Size([1, 131072])
67758
+ batch tensor after cp: tokens torch.Size([1, 32768])
67759
+ batch tensor after cp: labels torch.Size([1, 32768])
67760
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67761
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67762
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67763
+ Start exporting trace 6
67764
+ Done exporting trace 6
67765
+ [2025-06-21 21:18:27] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 152588.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
attnserver.run_attnserver.slurm.sh.343196.out.log CHANGED
@@ -50443,3 +50443,676 @@ batch tensor after cp: labels torch.Size([2, 24576])
50443
  batch tensor after cp: loss_mask torch.Size([2, 24576])
50444
  batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50445
  batch tensor after cp: position_ids torch.Size([2, 24576])
50446
+ Start exporting trace 0
50447
+ Done exporting trace 0
50448
+ Number of parameters in transformer block in billions: 0.35
50449
+ Number of parameters in embedding layers in billions: 0.21
50450
+ Total number of parameters in billions: 0.56
50451
+ Number of parameters in most loaded shard in billions: 0.0703
50452
+ Theoretical memory footprints: weight and optimizer=1206.09 MB
50453
+ [Rank 18] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50454
+ [Rank 19] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50455
+ [Rank 17] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50456
+ [2025-06-21 21:17:42] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 52208.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
50457
+ [Rank 26] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50458
+ [Rank 29] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50459
+ [Rank 28] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0[Rank 27] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50460
+
50461
+ [Rank 1] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50462
+ [Rank 10] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0[Rank 15] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50463
+
50464
+ [Rank 11] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50465
+ [Rank 14] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50466
+ [Rank 12] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50467
+ [Rank 13] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50468
+ [Rank 21] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50469
+ [Rank 20] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50470
+ [Rank 30] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50471
+ [Rank 25] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50472
+ [Rank 31] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50473
+ [Rank 24] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 33968.0 | max reserved: 33968.0
50474
+ [Rank 2] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50475
+ [Rank 9] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34352.0 | max reserved: 34352.0
50476
+ [Rank 22] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50477
+ [Rank 6] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50478
+ [Rank 3] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50479
+ [Rank 8] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 33968.0 | max reserved: 33968.0
50480
+ [Rank 23] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34412.0 | max reserved: 34412.0
50481
+ [Rank 5] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50482
+ [Rank 16] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34028.0 | max reserved: 34028.0
50483
+ [Rank 4] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50484
+ [Rank 0] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 33944.0 | max reserved: 33944.0
50485
+ [Rank 7] (after 1 iterations) memory (MB) | allocated: 26294.28955078125 | max allocated: 32799.96435546875 | reserved: 34328.0 | max reserved: 34328.0
50486
+ batch tensor: tokens torch.Size([2, 98304])
50487
+ batch tensor: labels torch.Size([2, 98304])
50488
+ batch tensor: loss_mask torch.Size([2, 98304])
50489
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50490
+ batch tensor: position_ids torch.Size([2, 98304])
50491
+ batch tensor after cp: tokens torch.Size([2, 24576])
50492
+ batch tensor after cp: labels torch.Size([2, 24576])
50493
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50494
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50495
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50496
+ batch tensor: tokens torch.Size([2, 98304])
50497
+ batch tensor: labels torch.Size([2, 98304])
50498
+ batch tensor: loss_mask torch.Size([2, 98304])
50499
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50500
+ batch tensor: position_ids torch.Size([2, 98304])
50501
+ batch tensor after cp: tokens torch.Size([2, 24576])
50502
+ batch tensor after cp: labels torch.Size([2, 24576])
50503
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50504
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50505
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50506
+ batch tensor: tokens torch.Size([2, 98304])
50507
+ batch tensor: labels torch.Size([2, 98304])
50508
+ batch tensor: loss_mask torch.Size([2, 98304])
50509
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50510
+ batch tensor: position_ids torch.Size([2, 98304])
50511
+ batch tensor after cp: tokens torch.Size([2, 24576])
50512
+ batch tensor after cp: labels torch.Size([2, 24576])
50513
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50514
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50515
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50516
+ batch tensor: tokens torch.Size([2, 98304])
50517
+ batch tensor: labels torch.Size([2, 98304])
50518
+ batch tensor: loss_mask torch.Size([2, 98304])
50519
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50520
+ batch tensor: position_ids torch.Size([2, 98304])
50521
+ batch tensor after cp: tokens torch.Size([2, 24576])
50522
+ batch tensor after cp: labels torch.Size([2, 24576])
50523
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50524
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50525
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50526
+ batch tensor: tokens torch.Size([2, 98304])
50527
+ batch tensor: labels torch.Size([2, 98304])
50528
+ batch tensor: loss_mask torch.Size([2, 98304])
50529
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50530
+ batch tensor: position_ids torch.Size([2, 98304])
50531
+ batch tensor after cp: tokens torch.Size([2, 24576])
50532
+ batch tensor after cp: labels torch.Size([2, 24576])
50533
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50534
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50535
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50536
+ batch tensor: tokens torch.Size([2, 98304])
50537
+ batch tensor: labels torch.Size([2, 98304])
50538
+ batch tensor: loss_mask torch.Size([2, 98304])
50539
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50540
+ batch tensor: position_ids torch.Size([2, 98304])
50541
+ batch tensor after cp: tokens torch.Size([2, 24576])
50542
+ batch tensor after cp: labels torch.Size([2, 24576])
50543
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50544
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50545
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50546
+ batch tensor: tokens torch.Size([2, 98304])
50547
+ batch tensor: labels torch.Size([2, 98304])
50548
+ batch tensor: loss_mask torch.Size([2, 98304])
50549
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50550
+ batch tensor: position_ids torch.Size([2, 98304])
50551
+ batch tensor after cp: tokens torch.Size([2, 24576])
50552
+ batch tensor after cp: labels torch.Size([2, 24576])
50553
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50554
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50555
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50556
+ batch tensor: tokens torch.Size([2, 98304])
50557
+ batch tensor: labels torch.Size([2, 98304])
50558
+ batch tensor: loss_mask torch.Size([2, 98304])
50559
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50560
+ batch tensor: position_ids torch.Size([2, 98304])
50561
+ batch tensor after cp: tokens torch.Size([2, 24576])
50562
+ batch tensor after cp: labels torch.Size([2, 24576])
50563
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50564
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50565
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50566
+ batch tensor: tokens torch.Size([2, 98304])
50567
+ batch tensor: labels torch.Size([2, 98304])
50568
+ batch tensor: loss_mask torch.Size([2, 98304])
50569
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50570
+ batch tensor: position_ids torch.Size([2, 98304])
50571
+ batch tensor after cp: tokens torch.Size([2, 24576])
50572
+ batch tensor after cp: labels torch.Size([2, 24576])
50573
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50574
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50575
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50576
+ batch tensor: tokens torch.Size([2, 98304])
50577
+ batch tensor: labels torch.Size([2, 98304])
50578
+ batch tensor: loss_mask torch.Size([2, 98304])
50579
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50580
+ batch tensor: position_ids torch.Size([2, 98304])
50581
+ batch tensor after cp: tokens torch.Size([2, 24576])
50582
+ batch tensor after cp: labels torch.Size([2, 24576])
50583
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50584
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50585
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50586
+ batch tensor: tokens torch.Size([2, 98304])
50587
+ batch tensor: labels torch.Size([2, 98304])
50588
+ batch tensor: loss_mask torch.Size([2, 98304])
50589
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50590
+ batch tensor: position_ids torch.Size([2, 98304])
50591
+ batch tensor after cp: tokens torch.Size([2, 24576])
50592
+ batch tensor after cp: labels torch.Size([2, 24576])
50593
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50594
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50595
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50596
+ batch tensor: tokens torch.Size([2, 98304])
50597
+ batch tensor: labels torch.Size([2, 98304])
50598
+ batch tensor: loss_mask torch.Size([2, 98304])
50599
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50600
+ batch tensor: position_ids torch.Size([2, 98304])
50601
+ batch tensor after cp: tokens torch.Size([2, 24576])
50602
+ batch tensor after cp: labels torch.Size([2, 24576])
50603
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50604
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50605
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50606
+ batch tensor: tokens torch.Size([2, 98304])
50607
+ batch tensor: labels torch.Size([2, 98304])
50608
+ batch tensor: loss_mask torch.Size([2, 98304])
50609
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50610
+ batch tensor: position_ids torch.Size([2, 98304])
50611
+ batch tensor after cp: tokens torch.Size([2, 24576])
50612
+ batch tensor after cp: labels torch.Size([2, 24576])
50613
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50614
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50615
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50616
+ batch tensor: tokens torch.Size([2, 98304])
50617
+ batch tensor: labels torch.Size([2, 98304])
50618
+ batch tensor: loss_mask torch.Size([2, 98304])
50619
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50620
+ batch tensor: position_ids torch.Size([2, 98304])
50621
+ batch tensor after cp: tokens torch.Size([2, 24576])
50622
+ batch tensor after cp: labels torch.Size([2, 24576])
50623
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50624
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50625
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50626
+ batch tensor: tokens torch.Size([2, 98304])
50627
+ batch tensor: labels torch.Size([2, 98304])
50628
+ batch tensor: loss_mask torch.Size([2, 98304])
50629
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50630
+ batch tensor: position_ids torch.Size([2, 98304])
50631
+ batch tensor after cp: tokens torch.Size([2, 24576])
50632
+ batch tensor after cp: labels torch.Size([2, 24576])
50633
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50634
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50635
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50636
+ batch tensor: tokens torch.Size([2, 98304])
50637
+ batch tensor: labels torch.Size([2, 98304])
50638
+ batch tensor: loss_mask torch.Size([2, 98304])
50639
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50640
+ batch tensor: position_ids torch.Size([2, 98304])
50641
+ batch tensor after cp: tokens torch.Size([2, 24576])
50642
+ batch tensor after cp: labels torch.Size([2, 24576])
50643
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50644
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50645
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50646
+ batch tensor: tokens torch.Size([2, 98304])
50647
+ batch tensor: labels torch.Size([2, 98304])
50648
+ batch tensor: loss_mask torch.Size([2, 98304])
50649
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50650
+ batch tensor: position_ids torch.Size([2, 98304])
50651
+ batch tensor after cp: tokens torch.Size([2, 24576])
50652
+ batch tensor after cp: labels torch.Size([2, 24576])
50653
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50654
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50655
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50656
+ batch tensor: tokens torch.Size([2, 98304])
50657
+ batch tensor: labels torch.Size([2, 98304])
50658
+ batch tensor: loss_mask torch.Size([2, 98304])
50659
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50660
+ batch tensor: position_ids torch.Size([2, 98304])
50661
+ batch tensor after cp: tokens torch.Size([2, 24576])
50662
+ batch tensor after cp: labels torch.Size([2, 24576])
50663
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50664
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50665
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50666
+ batch tensor: tokens torch.Size([2, 98304])
50667
+ batch tensor: labels torch.Size([2, 98304])
50668
+ batch tensor: loss_mask torch.Size([2, 98304])
50669
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50670
+ batch tensor: position_ids torch.Size([2, 98304])
50671
+ batch tensor after cp: tokens torch.Size([2, 24576])
50672
+ batch tensor after cp: labels torch.Size([2, 24576])
50673
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50674
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50675
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50676
+ batch tensor: tokens torch.Size([2, 98304])
50677
+ batch tensor: labels torch.Size([2, 98304])
50678
+ batch tensor: loss_mask torch.Size([2, 98304])
50679
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50680
+ batch tensor: position_ids torch.Size([2, 98304])
50681
+ batch tensor after cp: tokens torch.Size([2, 24576])
50682
+ batch tensor after cp: labels torch.Size([2, 24576])
50683
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50684
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50685
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50686
+ batch tensor: tokens torch.Size([2, 98304])
50687
+ batch tensor: labels torch.Size([2, 98304])
50688
+ batch tensor: loss_mask torch.Size([2, 98304])
50689
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50690
+ batch tensor: position_ids torch.Size([2, 98304])
50691
+ batch tensor after cp: tokens torch.Size([2, 24576])
50692
+ batch tensor after cp: labels torch.Size([2, 24576])
50693
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50694
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50695
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50696
+ batch tensor: tokens torch.Size([2, 98304])
50697
+ batch tensor: labels torch.Size([2, 98304])
50698
+ batch tensor: loss_mask torch.Size([2, 98304])
50699
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50700
+ batch tensor: position_ids torch.Size([2, 98304])
50701
+ batch tensor after cp: tokens torch.Size([2, 24576])
50702
+ batch tensor after cp: labels torch.Size([2, 24576])
50703
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50704
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50705
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50706
+ batch tensor: tokens torch.Size([2, 98304])
50707
+ batch tensor: labels torch.Size([2, 98304])
50708
+ batch tensor: loss_mask torch.Size([2, 98304])
50709
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50710
+ batch tensor: position_ids torch.Size([2, 98304])
50711
+ batch tensor after cp: tokens torch.Size([2, 24576])
50712
+ batch tensor after cp: labels torch.Size([2, 24576])
50713
+ batch tensor: tokens torch.Size([2, 98304])
50714
+ batch tensor: labels torch.Size([2, 98304])
50715
+ batch tensor: loss_mask torch.Size([2, 98304])
50716
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50717
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50718
+ batch tensor: position_ids torch.Size([2, 98304])
50719
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50720
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50721
+ batch tensor after cp: tokens torch.Size([2, 24576])
50722
+ batch tensor after cp: labels torch.Size([2, 24576])
50723
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50724
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50725
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50726
+ batch tensor: tokens torch.Size([2, 98304])
50727
+ batch tensor: labels torch.Size([2, 98304])
50728
+ batch tensor: loss_mask torch.Size([2, 98304])
50729
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50730
+ batch tensor: position_ids torch.Size([2, 98304])
50731
+ batch tensor after cp: tokens torch.Size([2, 24576])
50732
+ batch tensor after cp: labels torch.Size([2, 24576])
50733
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50734
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50735
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50736
+ batch tensor: tokens torch.Size([2, 98304])
50737
+ batch tensor: labels torch.Size([2, 98304])
50738
+ batch tensor: loss_mask torch.Size([2, 98304])
50739
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50740
+ batch tensor: position_ids torch.Size([2, 98304])
50741
+ batch tensor after cp: tokens torch.Size([2, 24576])
50742
+ batch tensor after cp: labels torch.Size([2, 24576])
50743
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50744
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50745
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50746
+ batch tensor: tokens torch.Size([2, 98304])
50747
+ batch tensor: labels torch.Size([2, 98304])
50748
+ batch tensor: loss_mask torch.Size([2, 98304])
50749
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50750
+ batch tensor: position_ids torch.Size([2, 98304])
50751
+ batch tensor after cp: tokens torch.Size([2, 24576])
50752
+ batch tensor after cp: labels torch.Size([2, 24576])
50753
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50754
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50755
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50756
+ batch tensor: tokens torch.Size([2, 98304])
50757
+ batch tensor: labels torch.Size([2, 98304])
50758
+ batch tensor: loss_mask torch.Size([2, 98304])
50759
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50760
+ batch tensor: position_ids torch.Size([2, 98304])
50761
+ batch tensor after cp: tokens torch.Size([2, 24576])
50762
+ batch tensor after cp: labels torch.Size([2, 24576])
50763
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50764
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50765
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50766
+ batch tensor: tokens torch.Size([2, 98304])
50767
+ batch tensor: labels torch.Size([2, 98304])
50768
+ batch tensor: loss_mask torch.Size([2, 98304])
50769
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50770
+ batch tensor: position_ids torch.Size([2, 98304])
50771
+ batch tensor after cp: tokens torch.Size([2, 24576])
50772
+ batch tensor after cp: labels torch.Size([2, 24576])
50773
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50774
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50775
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50776
+ batch tensor: tokens torch.Size([2, 98304])
50777
+ batch tensor: labels torch.Size([2, 98304])
50778
+ batch tensor: loss_mask torch.Size([2, 98304])
50779
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50780
+ batch tensor: position_ids torch.Size([2, 98304])
50781
+ batch tensor after cp: tokens torch.Size([2, 24576])
50782
+ batch tensor after cp: labels torch.Size([2, 24576])
50783
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50784
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50785
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50786
+ batch tensor: tokens torch.Size([2, 98304])
50787
+ batch tensor: labels torch.Size([2, 98304])
50788
+ batch tensor: loss_mask torch.Size([2, 98304])
50789
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50790
+ batch tensor: position_ids torch.Size([2, 98304])
50791
+ batch tensor after cp: tokens torch.Size([2, 24576])
50792
+ batch tensor after cp: labels torch.Size([2, 24576])
50793
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50794
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50795
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50796
+ batch tensor: tokens torch.Size([2, 98304])
50797
+ batch tensor: labels torch.Size([2, 98304])
50798
+ batch tensor: loss_mask torch.Size([2, 98304])
50799
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50800
+ batch tensor: position_ids torch.Size([2, 98304])
50801
+ batch tensor after cp: tokens torch.Size([2, 24576])
50802
+ batch tensor after cp: labels torch.Size([2, 24576])
50803
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50804
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50805
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50806
+ Start exporting trace 1
50807
+ Done exporting trace 1
50808
+ [2025-06-21 21:18:15] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 33071.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
50809
+ batch tensor: tokens torch.Size([2, 98304])
50810
+ batch tensor: labels torch.Size([2, 98304])
50811
+ batch tensor: loss_mask torch.Size([2, 98304])
50812
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50813
+ batch tensor: position_ids torch.Size([2, 98304])
50814
+ batch tensor after cp: tokens torch.Size([2, 24576])
50815
+ batch tensor after cp: labels torch.Size([2, 24576])
50816
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50817
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50818
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50819
+ batch tensor: tokens torch.Size([2, 98304])
50820
+ batch tensor: labels torch.Size([2, 98304])
50821
+ batch tensor: loss_mask torch.Size([2, 98304])
50822
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50823
+ batch tensor: position_ids torch.Size([2, 98304])
50824
+ batch tensor after cp: tokens torch.Size([2, 24576])
50825
+ batch tensor after cp: labels torch.Size([2, 24576])
50826
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50827
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50828
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50829
+ batch tensor: tokens torch.Size([2, 98304])
50830
+ batch tensor: labels torch.Size([2, 98304])
50831
+ batch tensor: loss_mask torch.Size([2, 98304])
50832
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50833
+ batch tensor: position_ids torch.Size([2, 98304])
50834
+ batch tensor after cp: tokens torch.Size([2, 24576])
50835
+ batch tensor after cp: labels torch.Size([2, 24576])
50836
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50837
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50838
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50839
+ batch tensor: tokens torch.Size([2, 98304])
50840
+ batch tensor: labels torch.Size([2, 98304])
50841
+ batch tensor: loss_mask torch.Size([2, 98304])
50842
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50843
+ batch tensor: position_ids torch.Size([2, 98304])
50844
+ batch tensor after cp: tokens torch.Size([2, 24576])
50845
+ batch tensor after cp: labels torch.Size([2, 24576])
50846
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50847
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50848
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50849
+ batch tensor: tokens torch.Size([2, 98304])
50850
+ batch tensor: labels torch.Size([2, 98304])
50851
+ batch tensor: loss_mask torch.Size([2, 98304])
50852
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50853
+ batch tensor: position_ids torch.Size([2, 98304])
50854
+ batch tensor after cp: tokens torch.Size([2, 24576])
50855
+ batch tensor after cp: labels torch.Size([2, 24576])
50856
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50857
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50858
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50859
+ batch tensor: tokens torch.Size([2, 98304])
50860
+ batch tensor: labels torch.Size([2, 98304])
50861
+ batch tensor: loss_mask torch.Size([2, 98304])
50862
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50863
+ batch tensor: position_ids torch.Size([2, 98304])
50864
+ batch tensor after cp: tokens torch.Size([2, 24576])
50865
+ batch tensor after cp: labels torch.Size([2, 24576])
50866
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50867
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50868
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50869
+ batch tensor: tokens torch.Size([2, 98304])
50870
+ batch tensor: labels torch.Size([2, 98304])
50871
+ batch tensor: loss_mask torch.Size([2, 98304])
50872
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50873
+ batch tensor: position_ids torch.Size([2, 98304])
50874
+ batch tensor after cp: tokens torch.Size([2, 24576])
50875
+ batch tensor after cp: labels torch.Size([2, 24576])
50876
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50877
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50878
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50879
+ batch tensor: tokens torch.Size([2, 98304])
50880
+ batch tensor: labels torch.Size([2, 98304])
50881
+ batch tensor: loss_mask torch.Size([2, 98304])
50882
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50883
+ batch tensor: position_ids torch.Size([2, 98304])
50884
+ batch tensor after cp: tokens torch.Size([2, 24576])
50885
+ batch tensor after cp: labels torch.Size([2, 24576])
50886
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50887
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50888
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50889
+ batch tensor: tokens torch.Size([2, 98304])
50890
+ batch tensor: labels torch.Size([2, 98304])
50891
+ batch tensor: loss_mask torch.Size([2, 98304])
50892
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50893
+ batch tensor: position_ids torch.Size([2, 98304])
50894
+ batch tensor after cp: tokens torch.Size([2, 24576])
50895
+ batch tensor after cp: labels torch.Size([2, 24576])
50896
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50897
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50898
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50899
+ batch tensor: tokens torch.Size([2, 98304])
50900
+ batch tensor: labels torch.Size([2, 98304])
50901
+ batch tensor: loss_mask torch.Size([2, 98304])
50902
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50903
+ batch tensor: position_ids torch.Size([2, 98304])
50904
+ batch tensor after cp: tokens torch.Size([2, 24576])
50905
+ batch tensor after cp: labels torch.Size([2, 24576])
50906
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50907
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50908
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50909
+ batch tensor: tokens torch.Size([2, 98304])
50910
+ batch tensor: labels torch.Size([2, 98304])
50911
+ batch tensor: loss_mask torch.Size([2, 98304])
50912
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50913
+ batch tensor: position_ids torch.Size([2, 98304])
50914
+ batch tensor after cp: tokens torch.Size([2, 24576])
50915
+ batch tensor after cp: labels torch.Size([2, 24576])
50916
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50917
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50918
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50919
+ batch tensor: tokens torch.Size([2, 98304])
50920
+ batch tensor: labels torch.Size([2, 98304])
50921
+ batch tensor: loss_mask torch.Size([2, 98304])
50922
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50923
+ batch tensor: position_ids torch.Size([2, 98304])
50924
+ batch tensor after cp: tokens torch.Size([2, 24576])
50925
+ batch tensor after cp: labels torch.Size([2, 24576])
50926
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50927
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50928
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50929
+ batch tensor: tokens torch.Size([2, 98304])
50930
+ batch tensor: labels torch.Size([2, 98304])
50931
+ batch tensor: loss_mask torch.Size([2, 98304])
50932
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50933
+ batch tensor: position_ids torch.Size([2, 98304])
50934
+ batch tensor after cp: tokens torch.Size([2, 24576])
50935
+ batch tensor after cp: labels torch.Size([2, 24576])
50936
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50937
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50938
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50939
+ batch tensor: tokens torch.Size([2, 98304])
50940
+ batch tensor: labels torch.Size([2, 98304])
50941
+ batch tensor: loss_mask torch.Size([2, 98304])
50942
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50943
+ batch tensor: position_ids torch.Size([2, 98304])
50944
+ batch tensor after cp: tokens torch.Size([2, 24576])
50945
+ batch tensor after cp: labels torch.Size([2, 24576])
50946
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50947
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50948
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50949
+ batch tensor: tokens torch.Size([2, 98304])
50950
+ batch tensor: labels torch.Size([2, 98304])
50951
+ batch tensor: loss_mask torch.Size([2, 98304])
50952
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50953
+ batch tensor: position_ids torch.Size([2, 98304])
50954
+ batch tensor after cp: tokens torch.Size([2, 24576])
50955
+ batch tensor after cp: labels torch.Size([2, 24576])
50956
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50957
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50958
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50959
+ batch tensor: tokens torch.Size([2, 98304])
50960
+ batch tensor: labels torch.Size([2, 98304])
50961
+ batch tensor: loss_mask torch.Size([2, 98304])
50962
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50963
+ batch tensor: position_ids torch.Size([2, 98304])
50964
+ batch tensor after cp: tokens torch.Size([2, 24576])
50965
+ batch tensor after cp: labels torch.Size([2, 24576])
50966
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50967
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50968
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50969
+ batch tensor: tokens torch.Size([2, 98304])
50970
+ batch tensor: labels torch.Size([2, 98304])
50971
+ batch tensor: loss_mask torch.Size([2, 98304])
50972
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50973
+ batch tensor: position_ids torch.Size([2, 98304])
50974
+ batch tensor after cp: tokens torch.Size([2, 24576])
50975
+ batch tensor after cp: labels torch.Size([2, 24576])
50976
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50977
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50978
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50979
+ batch tensor: tokens torch.Size([2, 98304])
50980
+ batch tensor: labels torch.Size([2, 98304])
50981
+ batch tensor: loss_mask torch.Size([2, 98304])
50982
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50983
+ batch tensor: position_ids torch.Size([2, 98304])
50984
+ batch tensor after cp: tokens torch.Size([2, 24576])
50985
+ batch tensor after cp: labels torch.Size([2, 24576])
50986
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50987
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50988
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50989
+ batch tensor: tokens torch.Size([2, 98304])
50990
+ batch tensor: labels torch.Size([2, 98304])
50991
+ batch tensor: loss_mask torch.Size([2, 98304])
50992
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
50993
+ batch tensor: position_ids torch.Size([2, 98304])
50994
+ batch tensor after cp: tokens torch.Size([2, 24576])
50995
+ batch tensor after cp: labels torch.Size([2, 24576])
50996
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
50997
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
50998
+ batch tensor after cp: position_ids torch.Size([2, 24576])
50999
+ batch tensor: tokens torch.Size([2, 98304])
51000
+ batch tensor: labels torch.Size([2, 98304])
51001
+ batch tensor: loss_mask torch.Size([2, 98304])
51002
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51003
+ batch tensor: position_ids torch.Size([2, 98304])
51004
+ batch tensor after cp: tokens torch.Size([2, 24576])
51005
+ batch tensor after cp: labels torch.Size([2, 24576])
51006
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51007
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51008
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51009
+ batch tensor: tokens torch.Size([2, 98304])
51010
+ batch tensor: labels torch.Size([2, 98304])
51011
+ batch tensor: loss_mask torch.Size([2, 98304])
51012
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51013
+ batch tensor: position_ids torch.Size([2, 98304])
51014
+ batch tensor after cp: tokens torch.Size([2, 24576])
51015
+ batch tensor after cp: labels torch.Size([2, 24576])
51016
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51017
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51018
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51019
+ batch tensor: tokens torch.Size([2, 98304])
51020
+ batch tensor: labels torch.Size([2, 98304])
51021
+ batch tensor: loss_mask torch.Size([2, 98304])
51022
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51023
+ batch tensor: position_ids torch.Size([2, 98304])
51024
+ batch tensor after cp: tokens torch.Size([2, 24576])
51025
+ batch tensor after cp: labels torch.Size([2, 24576])
51026
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51027
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51028
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51029
+ batch tensor: tokens torch.Size([2, 98304])
51030
+ batch tensor: labels torch.Size([2, 98304])
51031
+ batch tensor: loss_mask torch.Size([2, 98304])
51032
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51033
+ batch tensor: position_ids torch.Size([2, 98304])
51034
+ batch tensor after cp: tokens torch.Size([2, 24576])
51035
+ batch tensor after cp: labels torch.Size([2, 24576])
51036
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51037
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51038
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51039
+ batch tensor: tokens torch.Size([2, 98304])
51040
+ batch tensor: labels torch.Size([2, 98304])
51041
+ batch tensor: loss_mask torch.Size([2, 98304])
51042
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51043
+ batch tensor: position_ids torch.Size([2, 98304])
51044
+ batch tensor after cp: tokens torch.Size([2, 24576])
51045
+ batch tensor after cp: labels torch.Size([2, 24576])
51046
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51047
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51048
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51049
+ batch tensor: tokens torch.Size([2, 98304])
51050
+ batch tensor: labels torch.Size([2, 98304])
51051
+ batch tensor: loss_mask torch.Size([2, 98304])
51052
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51053
+ batch tensor: position_ids torch.Size([2, 98304])
51054
+ batch tensor after cp: tokens torch.Size([2, 24576])
51055
+ batch tensor after cp: labels torch.Size([2, 24576])
51056
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51057
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51058
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51059
+ batch tensor: tokens torch.Size([2, 98304])
51060
+ batch tensor: labels torch.Size([2, 98304])
51061
+ batch tensor: loss_mask torch.Size([2, 98304])
51062
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51063
+ batch tensor: position_ids torch.Size([2, 98304])
51064
+ batch tensor after cp: tokens torch.Size([2, 24576])
51065
+ batch tensor after cp: labels torch.Size([2, 24576])
51066
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51067
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51068
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51069
+ batch tensor: tokens torch.Size([2, 98304])
51070
+ batch tensor: labels torch.Size([2, 98304])
51071
+ batch tensor: loss_mask torch.Size([2, 98304])
51072
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51073
+ batch tensor: position_ids torch.Size([2, 98304])
51074
+ batch tensor after cp: tokens torch.Size([2, 24576])
51075
+ batch tensor after cp: labels torch.Size([2, 24576])
51076
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51077
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51078
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51079
+ batch tensor: tokens torch.Size([2, 98304])
51080
+ batch tensor: labels torch.Size([2, 98304])
51081
+ batch tensor: loss_mask torch.Size([2, 98304])
51082
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51083
+ batch tensor: position_ids torch.Size([2, 98304])
51084
+ batch tensor after cp: tokens torch.Size([2, 24576])
51085
+ batch tensor after cp: labels torch.Size([2, 24576])
51086
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51087
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51088
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51089
+ batch tensor: tokens torch.Size([2, 98304])
51090
+ batch tensor: labels torch.Size([2, 98304])
51091
+ batch tensor: loss_mask torch.Size([2, 98304])
51092
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51093
+ batch tensor: position_ids torch.Size([2, 98304])
51094
+ batch tensor after cp: tokens torch.Size([2, 24576])
51095
+ batch tensor after cp: labels torch.Size([2, 24576])
51096
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51097
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51098
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51099
+ batch tensor: tokens torch.Size([2, 98304])
51100
+ batch tensor: labels torch.Size([2, 98304])
51101
+ batch tensor: loss_mask torch.Size([2, 98304])
51102
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51103
+ batch tensor: position_ids torch.Size([2, 98304])
51104
+ batch tensor after cp: tokens torch.Size([2, 24576])
51105
+ batch tensor after cp: labels torch.Size([2, 24576])
51106
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51107
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51108
+ batch tensor after cp: position_ids torch.Size([2, 24576])
51109
+ batch tensor: tokens torch.Size([2, 98304])
51110
+ batch tensor: labels torch.Size([2, 98304])
51111
+ batch tensor: loss_mask torch.Size([2, 98304])
51112
+ batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
51113
+ batch tensor: position_ids torch.Size([2, 98304])
51114
+ batch tensor after cp: tokens torch.Size([2, 24576])
51115
+ batch tensor after cp: labels torch.Size([2, 24576])
51116
+ batch tensor after cp: loss_mask torch.Size([2, 24576])
51117
+ batch tensor after cp: attention_mask torch.Size([2, 1, 24576, 98304])
51118
+ batch tensor after cp: position_ids torch.Size([2, 24576])
attnserver.run_attnserver.slurm.sh.343200.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343200.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343202.err.log CHANGED
@@ -6746,3 +6746,39 @@ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766] ******
6746
  warnings.warn(
6747
  /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6748
  warnings.warn(
6749
+ [rank0]: Traceback (most recent call last):
6750
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
6751
+ [rank0]: pretrain(
6752
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
6753
+ [rank0]: save_checkpoint(
6754
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
6755
+ [rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
6756
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6757
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
6758
+ [rank0]: sharded_strategy.save(sharded_state_dict, checkpoint_dir)
6759
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
6760
+ [rank0]: return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
6761
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6762
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
6763
+ [rank0]: async_calls.maybe_finalize_async_calls(blocking=True)
6764
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
6765
+ [rank0]: finalize_fn()
6766
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
6767
+ [rank0]: save_state_dict_async_finalize(*save_state_dict_ret)
6768
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 243, in save_state_dict_async_finalize
6769
+ [rank0]: storage_writer.finish(global_metadata, all_results)
6770
+ [rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 483, in finish
6771
+ [rank0]: super().finish(metadata, results)
6772
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 697, in finish
6773
+ [rank0]: with self.fs.create_stream(tmp_path, "wb") as metadata_file:
6774
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6775
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/contextlib.py", line 137, in __enter__
6776
+ [rank0]: return next(self.gen)
6777
+ [rank0]: ^^^^^^^^^^^^^^
6778
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 476, in create_stream
6779
+ [rank0]: with path.open(mode) as stream:
6780
+ [rank0]: ^^^^^^^^^^^^^^^
6781
+ [rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/pathlib.py", line 1013, in open
6782
+ [rank0]: return io.open(self, mode, buffering, encoding, errors, newline)
6783
+ [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6784
+ [rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/.metadata.tmp'
attnserver.run_attnserver.slurm.sh.343202.out.log CHANGED
@@ -29045,3 +29045,879 @@ batch tensor after cp: position_ids torch.Size([2, 65536])
29045
  Start exporting trace 6
29046
  Done exporting trace 6
29047
  [2025-06-21 21:17:21] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 21813.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
29048
+ batch tensor: tokens torch.Size([2, 131072])
29049
+ batch tensor: labels torch.Size([2, 131072])
29050
+ batch tensor: loss_mask torch.Size([2, 131072])
29051
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29052
+ batch tensor: position_ids torch.Size([2, 131072])
29053
+ batch tensor after cp: tokens torch.Size([2, 65536])
29054
+ batch tensor after cp: labels torch.Size([2, 65536])
29055
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29056
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29057
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29058
+ batch tensor: tokens torch.Size([2, 131072])
29059
+ batch tensor: labels torch.Size([2, 131072])
29060
+ batch tensor: loss_mask torch.Size([2, 131072])
29061
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29062
+ batch tensor: position_ids torch.Size([2, 131072])
29063
+ batch tensor after cp: tokens torch.Size([2, 65536])
29064
+ batch tensor after cp: labels torch.Size([2, 65536])
29065
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29066
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29067
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29068
+ batch tensor: tokens torch.Size([2, 131072])
29069
+ batch tensor: labels torch.Size([2, 131072])
29070
+ batch tensor: loss_mask torch.Size([2, 131072])
29071
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29072
+ batch tensor: position_ids torch.Size([2, 131072])
29073
+ batch tensor after cp: tokens torch.Size([2, 65536])
29074
+ batch tensor after cp: labels torch.Size([2, 65536])
29075
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29076
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29077
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29078
+ batch tensor: tokens torch.Size([2, 131072])
29079
+ batch tensor: labels torch.Size([2, 131072])
29080
+ batch tensor: loss_mask torch.Size([2, 131072])
29081
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29082
+ batch tensor: position_ids torch.Size([2, 131072])
29083
+ batch tensor after cp: tokens torch.Size([2, 65536])
29084
+ batch tensor after cp: labels torch.Size([2, 65536])
29085
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29086
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29087
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29088
+ batch tensor: tokens torch.Size([2, 131072])
29089
+ batch tensor: labels torch.Size([2, 131072])
29090
+ batch tensor: loss_mask torch.Size([2, 131072])
29091
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29092
+ batch tensor: position_ids torch.Size([2, 131072])
29093
+ batch tensor after cp: tokens torch.Size([2, 65536])
29094
+ batch tensor after cp: labels torch.Size([2, 65536])
29095
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29096
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29097
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29098
+ batch tensor: tokens torch.Size([2, 131072])
29099
+ batch tensor: labels torch.Size([2, 131072])
29100
+ batch tensor: loss_mask torch.Size([2, 131072])
29101
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29102
+ batch tensor: position_ids torch.Size([2, 131072])
29103
+ batch tensor after cp: tokens torch.Size([2, 65536])
29104
+ batch tensor after cp: labels torch.Size([2, 65536])
29105
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29106
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29107
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29108
+ batch tensor: tokens torch.Size([2, 131072])
29109
+ batch tensor: labels torch.Size([2, 131072])
29110
+ batch tensor: loss_mask torch.Size([2, 131072])
29111
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29112
+ batch tensor: position_ids torch.Size([2, 131072])
29113
+ batch tensor after cp: tokens torch.Size([2, 65536])
29114
+ batch tensor after cp: labels torch.Size([2, 65536])
29115
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29116
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29117
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29118
+ batch tensor: tokens torch.Size([2, 131072])
29119
+ batch tensor: labels torch.Size([2, 131072])
29120
+ batch tensor: loss_mask torch.Size([2, 131072])
29121
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29122
+ batch tensor: position_ids torch.Size([2, 131072])
29123
+ batch tensor after cp: tokens torch.Size([2, 65536])
29124
+ batch tensor after cp: labels torch.Size([2, 65536])
29125
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29126
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29127
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29128
+ batch tensor: tokens torch.Size([2, 131072])
29129
+ batch tensor: labels torch.Size([2, 131072])
29130
+ batch tensor: loss_mask torch.Size([2, 131072])
29131
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29132
+ batch tensor: position_ids torch.Size([2, 131072])
29133
+ batch tensor after cp: tokens torch.Size([2, 65536])
29134
+ batch tensor after cp: labels torch.Size([2, 65536])
29135
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29136
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29137
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29138
+ batch tensor: tokens torch.Size([2, 131072])
29139
+ batch tensor: labels torch.Size([2, 131072])
29140
+ batch tensor: loss_mask torch.Size([2, 131072])
29141
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29142
+ batch tensor: position_ids torch.Size([2, 131072])
29143
+ batch tensor after cp: tokens torch.Size([2, 65536])
29144
+ batch tensor after cp: labels torch.Size([2, 65536])
29145
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29146
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29147
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29148
+ batch tensor: tokens torch.Size([2, 131072])
29149
+ batch tensor: labels torch.Size([2, 131072])
29150
+ batch tensor: loss_mask torch.Size([2, 131072])
29151
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29152
+ batch tensor: position_ids torch.Size([2, 131072])
29153
+ batch tensor after cp: tokens torch.Size([2, 65536])
29154
+ batch tensor after cp: labels torch.Size([2, 65536])
29155
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29156
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29157
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29158
+ batch tensor: tokens torch.Size([2, 131072])
29159
+ batch tensor: labels torch.Size([2, 131072])
29160
+ batch tensor: loss_mask torch.Size([2, 131072])
29161
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29162
+ batch tensor: position_ids torch.Size([2, 131072])
29163
+ batch tensor after cp: tokens torch.Size([2, 65536])
29164
+ batch tensor after cp: labels torch.Size([2, 65536])
29165
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29166
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29167
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29168
+ batch tensor: tokens torch.Size([2, 131072])
29169
+ batch tensor: labels torch.Size([2, 131072])
29170
+ batch tensor: loss_mask torch.Size([2, 131072])
29171
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29172
+ batch tensor: position_ids torch.Size([2, 131072])
29173
+ batch tensor after cp: tokens torch.Size([2, 65536])
29174
+ batch tensor after cp: labels torch.Size([2, 65536])
29175
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29176
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29177
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29178
+ batch tensor: tokens torch.Size([2, 131072])
29179
+ batch tensor: labels torch.Size([2, 131072])
29180
+ batch tensor: loss_mask torch.Size([2, 131072])
29181
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29182
+ batch tensor: position_ids torch.Size([2, 131072])
29183
+ batch tensor after cp: tokens torch.Size([2, 65536])
29184
+ batch tensor after cp: labels torch.Size([2, 65536])
29185
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29186
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29187
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29188
+ batch tensor: tokens torch.Size([2, 131072])
29189
+ batch tensor: labels torch.Size([2, 131072])
29190
+ batch tensor: loss_mask torch.Size([2, 131072])
29191
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29192
+ batch tensor: position_ids torch.Size([2, 131072])
29193
+ batch tensor after cp: tokens torch.Size([2, 65536])
29194
+ batch tensor after cp: labels torch.Size([2, 65536])
29195
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29196
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29197
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29198
+ batch tensor: tokens torch.Size([2, 131072])
29199
+ batch tensor: labels torch.Size([2, 131072])
29200
+ batch tensor: loss_mask torch.Size([2, 131072])
29201
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29202
+ batch tensor: position_ids torch.Size([2, 131072])
29203
+ batch tensor after cp: tokens torch.Size([2, 65536])
29204
+ batch tensor after cp: labels torch.Size([2, 65536])
29205
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29206
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29207
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29208
+ Start exporting trace 7
29209
+ Done exporting trace 7
29210
+ [2025-06-21 21:17:42] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 21609.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
29211
+ batch tensor: tokens torch.Size([2, 131072])
29212
+ batch tensor: labels torch.Size([2, 131072])
29213
+ batch tensor: loss_mask torch.Size([2, 131072])
29214
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29215
+ batch tensor: position_ids torch.Size([2, 131072])
29216
+ batch tensor after cp: tokens torch.Size([2, 65536])
29217
+ batch tensor after cp: labels torch.Size([2, 65536])
29218
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29219
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29220
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29221
+ batch tensor: tokens torch.Size([2, 131072])
29222
+ batch tensor: labels torch.Size([2, 131072])
29223
+ batch tensor: loss_mask torch.Size([2, 131072])
29224
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29225
+ batch tensor: position_ids torch.Size([2, 131072])
29226
+ batch tensor after cp: tokens torch.Size([2, 65536])
29227
+ batch tensor after cp: labels torch.Size([2, 65536])
29228
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29229
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29230
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29231
+ batch tensor: tokens torch.Size([2, 131072])
29232
+ batch tensor: labels torch.Size([2, 131072])
29233
+ batch tensor: loss_mask torch.Size([2, 131072])
29234
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29235
+ batch tensor: position_ids torch.Size([2, 131072])
29236
+ batch tensor after cp: tokens torch.Size([2, 65536])
29237
+ batch tensor after cp: labels torch.Size([2, 65536])
29238
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29239
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29240
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29241
+ batch tensor: tokens torch.Size([2, 131072])
29242
+ batch tensor: labels torch.Size([2, 131072])
29243
+ batch tensor: loss_mask torch.Size([2, 131072])
29244
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29245
+ batch tensor: position_ids torch.Size([2, 131072])
29246
+ batch tensor: tokens torch.Size([2, 131072])
29247
+ batch tensor: labels torch.Size([2, 131072])
29248
+ batch tensor after cp: tokens torch.Size([2, 65536])
29249
+ batch tensor after cp: labels torch.Size([2, 65536])
29250
+ batch tensor: loss_mask torch.Size([2, 131072])
29251
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29252
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29253
+ batch tensor: position_ids torch.Size([2, 131072])
29254
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29255
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29256
+ batch tensor after cp: tokens torch.Size([2, 65536])
29257
+ batch tensor after cp: labels torch.Size([2, 65536])
29258
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29259
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29260
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29261
+ batch tensor: tokens torch.Size([2, 131072])
29262
+ batch tensor: labels torch.Size([2, 131072])
29263
+ batch tensor: loss_mask torch.Size([2, 131072])
29264
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29265
+ batch tensor: position_ids torch.Size([2, 131072])
29266
+ batch tensor after cp: tokens torch.Size([2, 65536])
29267
+ batch tensor after cp: labels torch.Size([2, 65536])
29268
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29269
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29270
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29271
+ batch tensor: tokens torch.Size([2, 131072])
29272
+ batch tensor: labels torch.Size([2, 131072])
29273
+ batch tensor: loss_mask torch.Size([2, 131072])
29274
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29275
+ batch tensor: position_ids torch.Size([2, 131072])
29276
+ batch tensor after cp: tokens torch.Size([2, 65536])
29277
+ batch tensor after cp: labels torch.Size([2, 65536])
29278
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29279
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29280
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29281
+ batch tensor: tokens torch.Size([2, 131072])
29282
+ batch tensor: labels torch.Size([2, 131072])
29283
+ batch tensor: loss_mask torch.Size([2, 131072])
29284
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29285
+ batch tensor: position_ids torch.Size([2, 131072])
29286
+ batch tensor after cp: tokens torch.Size([2, 65536])
29287
+ batch tensor after cp: labels torch.Size([2, 65536])
29288
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29289
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29290
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29291
+ batch tensor: tokens torch.Size([2, 131072])
29292
+ batch tensor: labels torch.Size([2, 131072])
29293
+ batch tensor: loss_mask torch.Size([2, 131072])
29294
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29295
+ batch tensor: position_ids torch.Size([2, 131072])
29296
+ batch tensor after cp: tokens torch.Size([2, 65536])
29297
+ batch tensor after cp: labels torch.Size([2, 65536])
29298
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29299
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29300
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29301
+ batch tensor: tokens torch.Size([2, 131072])
29302
+ batch tensor: labels torch.Size([2, 131072])
29303
+ batch tensor: loss_mask torch.Size([2, 131072])
29304
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29305
+ batch tensor: position_ids torch.Size([2, 131072])
29306
+ batch tensor after cp: tokens torch.Size([2, 65536])
29307
+ batch tensor after cp: labels torch.Size([2, 65536])
29308
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29309
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29310
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29311
+ batch tensor: tokens torch.Size([2, 131072])
29312
+ batch tensor: labels torch.Size([2, 131072])
29313
+ batch tensor: loss_mask torch.Size([2, 131072])
29314
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29315
+ batch tensor: position_ids torch.Size([2, 131072])
29316
+ batch tensor after cp: tokens torch.Size([2, 65536])
29317
+ batch tensor after cp: labels torch.Size([2, 65536])
29318
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29319
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29320
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29321
+ batch tensor: tokens torch.Size([2, 131072])
29322
+ batch tensor: labels torch.Size([2, 131072])
29323
+ batch tensor: loss_mask torch.Size([2, 131072])
29324
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29325
+ batch tensor: position_ids torch.Size([2, 131072])
29326
+ batch tensor after cp: tokens torch.Size([2, 65536])
29327
+ batch tensor after cp: labels torch.Size([2, 65536])
29328
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29329
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29330
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29331
+ batch tensor: tokens torch.Size([2, 131072])
29332
+ batch tensor: labels torch.Size([2, 131072])
29333
+ batch tensor: loss_mask torch.Size([2, 131072])
29334
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29335
+ batch tensor: position_ids torch.Size([2, 131072])
29336
+ batch tensor after cp: tokens torch.Size([2, 65536])
29337
+ batch tensor after cp: labels torch.Size([2, 65536])
29338
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29339
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29340
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29341
+ batch tensor: tokens torch.Size([2, 131072])
29342
+ batch tensor: labels torch.Size([2, 131072])
29343
+ batch tensor: loss_mask torch.Size([2, 131072])
29344
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29345
+ batch tensor: position_ids torch.Size([2, 131072])
29346
+ batch tensor after cp: tokens torch.Size([2, 65536])
29347
+ batch tensor after cp: labels torch.Size([2, 65536])
29348
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29349
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29350
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29351
+ batch tensor: tokens torch.Size([2, 131072])
29352
+ batch tensor: labels torch.Size([2, 131072])
29353
+ batch tensor: loss_mask torch.Size([2, 131072])
29354
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29355
+ batch tensor: position_ids torch.Size([2, 131072])
29356
+ batch tensor after cp: tokens torch.Size([2, 65536])
29357
+ batch tensor after cp: labels torch.Size([2, 65536])
29358
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29359
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29360
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29361
+ batch tensor: tokens torch.Size([2, 131072])
29362
+ batch tensor: labels torch.Size([2, 131072])
29363
+ batch tensor: loss_mask torch.Size([2, 131072])
29364
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29365
+ batch tensor: position_ids torch.Size([2, 131072])
29366
+ batch tensor after cp: tokens torch.Size([2, 65536])
29367
+ batch tensor after cp: labels torch.Size([2, 65536])
29368
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29369
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29370
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29371
+ Start exporting trace 8
29372
+ Done exporting trace 8
29373
+ [2025-06-21 21:18:04] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 21669.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
29374
+ batch tensor: tokens torch.Size([2, 131072])
29375
+ batch tensor: labels torch.Size([2, 131072])
29376
+ batch tensor: loss_mask torch.Size([2, 131072])
29377
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29378
+ batch tensor: position_ids torch.Size([2, 131072])
29379
+ batch tensor after cp: tokens torch.Size([2, 65536])
29380
+ batch tensor after cp: labels torch.Size([2, 65536])
29381
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29382
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29383
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29384
+ batch tensor: tokens torch.Size([2, 131072])
29385
+ batch tensor: labels torch.Size([2, 131072])
29386
+ batch tensor: loss_mask torch.Size([2, 131072])
29387
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29388
+ batch tensor: position_ids torch.Size([2, 131072])
29389
+ batch tensor after cp: tokens torch.Size([2, 65536])
29390
+ batch tensor after cp: labels torch.Size([2, 65536])
29391
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29392
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29393
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29394
+ batch tensor: tokens torch.Size([2, 131072])
29395
+ batch tensor: labels torch.Size([2, 131072])
29396
+ batch tensor: loss_mask torch.Size([2, 131072])
29397
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29398
+ batch tensor: position_ids torch.Size([2, 131072])
29399
+ batch tensor after cp: tokens torch.Size([2, 65536])
29400
+ batch tensor after cp: labels torch.Size([2, 65536])
29401
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29402
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29403
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29404
+ batch tensor: tokens torch.Size([2, 131072])
29405
+ batch tensor: labels torch.Size([2, 131072])
29406
+ batch tensor: loss_mask torch.Size([2, 131072])
29407
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29408
+ batch tensor: position_ids torch.Size([2, 131072])
29409
+ batch tensor after cp: tokens torch.Size([2, 65536])
29410
+ batch tensor after cp: labels torch.Size([2, 65536])
29411
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29412
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29413
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29414
+ batch tensor: tokens torch.Size([2, 131072])
29415
+ batch tensor: labels torch.Size([2, 131072])
29416
+ batch tensor: loss_mask torch.Size([2, 131072])
29417
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29418
+ batch tensor: position_ids torch.Size([2, 131072])
29419
+ batch tensor after cp: tokens torch.Size([2, 65536])
29420
+ batch tensor after cp: labels torch.Size([2, 65536])
29421
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29422
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29423
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29424
+ batch tensor: tokens torch.Size([2, 131072])
29425
+ batch tensor: labels torch.Size([2, 131072])
29426
+ batch tensor: loss_mask torch.Size([2, 131072])
29427
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29428
+ batch tensor: position_ids torch.Size([2, 131072])
29429
+ batch tensor after cp: tokens torch.Size([2, 65536])
29430
+ batch tensor after cp: labels torch.Size([2, 65536])
29431
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29432
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29433
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29434
+ batch tensor: tokens torch.Size([2, 131072])
29435
+ batch tensor: labels torch.Size([2, 131072])
29436
+ batch tensor: loss_mask torch.Size([2, 131072])
29437
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29438
+ batch tensor: position_ids torch.Size([2, 131072])
29439
+ batch tensor after cp: tokens torch.Size([2, 65536])
29440
+ batch tensor after cp: labels torch.Size([2, 65536])
29441
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29442
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29443
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29444
+ batch tensor: tokens torch.Size([2, 131072])
29445
+ batch tensor: labels torch.Size([2, 131072])
29446
+ batch tensor: loss_mask torch.Size([2, 131072])
29447
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29448
+ batch tensor: position_ids torch.Size([2, 131072])
29449
+ batch tensor after cp: tokens torch.Size([2, 65536])
29450
+ batch tensor after cp: labels torch.Size([2, 65536])
29451
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29452
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29453
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29454
+ batch tensor: tokens torch.Size([2, 131072])
29455
+ batch tensor: labels torch.Size([2, 131072])
29456
+ batch tensor: loss_mask torch.Size([2, 131072])
29457
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29458
+ batch tensor: position_ids torch.Size([2, 131072])
29459
+ batch tensor after cp: tokens torch.Size([2, 65536])
29460
+ batch tensor after cp: labels torch.Size([2, 65536])
29461
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29462
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29463
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29464
+ batch tensor: tokens torch.Size([2, 131072])
29465
+ batch tensor: labels torch.Size([2, 131072])
29466
+ batch tensor: loss_mask torch.Size([2, 131072])
29467
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29468
+ batch tensor: position_ids torch.Size([2, 131072])
29469
+ batch tensor after cp: tokens torch.Size([2, 65536])
29470
+ batch tensor after cp: labels torch.Size([2, 65536])
29471
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29472
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29473
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29474
+ batch tensor: tokens torch.Size([2, 131072])
29475
+ batch tensor: labels torch.Size([2, 131072])
29476
+ batch tensor: loss_mask torch.Size([2, 131072])
29477
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29478
+ batch tensor: position_ids torch.Size([2, 131072])
29479
+ batch tensor after cp: tokens torch.Size([2, 65536])
29480
+ batch tensor after cp: labels torch.Size([2, 65536])
29481
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29482
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29483
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29484
+ batch tensor: tokens torch.Size([2, 131072])
29485
+ batch tensor: labels torch.Size([2, 131072])
29486
+ batch tensor: loss_mask torch.Size([2, 131072])
29487
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29488
+ batch tensor: position_ids torch.Size([2, 131072])
29489
+ batch tensor after cp: tokens torch.Size([2, 65536])
29490
+ batch tensor after cp: labels torch.Size([2, 65536])
29491
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29492
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29493
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29494
+ batch tensor: tokens torch.Size([2, 131072])
29495
+ batch tensor: labels torch.Size([2, 131072])
29496
+ batch tensor: loss_mask torch.Size([2, 131072])
29497
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29498
+ batch tensor: position_ids torch.Size([2, 131072])
29499
+ batch tensor: tokens torch.Size([2, 131072])
29500
+ batch tensor after cp: tokens torch.Size([2, 65536])
29501
+ batch tensor after cp: labels torch.Size([2, 65536])
29502
+ batch tensor: labels torch.Size([2, 131072])
29503
+ batch tensor: loss_mask torch.Size([2, 131072])
29504
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29505
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29506
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29507
+ batch tensor: position_ids torch.Size([2, 131072])
29508
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29509
+ batch tensor after cp: tokens torch.Size([2, 65536])
29510
+ batch tensor after cp: labels torch.Size([2, 65536])
29511
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29512
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29513
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29514
+ batch tensor: tokens torch.Size([2, 131072])
29515
+ batch tensor: labels torch.Size([2, 131072])
29516
+ batch tensor: loss_mask torch.Size([2, 131072])
29517
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29518
+ batch tensor: position_ids torch.Size([2, 131072])
29519
+ batch tensor after cp: tokens torch.Size([2, 65536])
29520
+ batch tensor after cp: labels torch.Size([2, 65536])
29521
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29522
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29523
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29524
+ batch tensor: tokens torch.Size([2, 131072])
29525
+ batch tensor: labels torch.Size([2, 131072])
29526
+ batch tensor: loss_mask torch.Size([2, 131072])
29527
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
29528
+ batch tensor: position_ids torch.Size([2, 131072])
29529
+ batch tensor after cp: tokens torch.Size([2, 65536])
29530
+ batch tensor after cp: labels torch.Size([2, 65536])
29531
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
29532
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
29533
+ batch tensor after cp: position_ids torch.Size([2, 65536])
29534
+ Start exporting trace 9
29535
+ Done exporting trace 9
29536
+ [2025-06-21 21:18:25] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 20595.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
29537
+ [after training is done] datetime: 2025-06-21 21:18:25
29538
+ saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
29539
+ DEBUG:megatron.training.checkpointing:rank: 3, takes 0.030599117279052734 to prepare state dict for ckpt
29540
+ DEBUG:megatron.training.checkpointing:rank: 6, takes 0.030607223510742188 to prepare state dict for ckpt
29541
+ DEBUG:megatron.training.checkpointing:rank: 4, takes 0.030637502670288086 to prepare state dict for ckpt
29542
+ DEBUG:megatron.training.checkpointing:rank: 1, takes 0.03064894676208496 to prepare state dict for ckpt
29543
+ DEBUG:megatron.training.checkpointing:rank: 5, takes 0.030666351318359375 to prepare state dict for ckpt
29544
+ DEBUG:megatron.training.checkpointing:rank: 7, takes 0.0306704044342041 to prepare state dict for ckpt
29545
+ DEBUG:megatron.training.checkpointing:rank: 2, takes 0.030683279037475586 to prepare state dict for ckpt
29546
+ DEBUG:megatron.training.checkpointing:rank: 9, takes 0.03452897071838379 to prepare state dict for ckpt
29547
+ DEBUG:megatron.training.checkpointing:rank: 0, takes 0.03490161895751953 to prepare state dict for ckpt
29548
+ DEBUG:megatron.training.checkpointing:rank: 12, takes 0.034743547439575195 to prepare state dict for ckpt
29549
+ DEBUG:megatron.training.checkpointing:rank: 13, takes 0.03479146957397461 to prepare state dict for ckpt
29550
+ DEBUG:megatron.training.checkpointing:rank: 8, takes 0.03490424156188965 to prepare state dict for ckpt
29551
+ DEBUG:megatron.training.checkpointing:rank: 10, takes 0.034996986389160156 to prepare state dict for ckpt
29552
+ DEBUG:megatron.training.checkpointing:rank: 15, takes 0.03499650955200195 to prepare state dict for ckpt
29553
+ DEBUG:megatron.training.checkpointing:rank: 14, takes 0.03571033477783203 to prepare state dict for ckpt
29554
+ DEBUG:megatron.training.checkpointing:rank: 11, takes 0.039114952087402344 to prepare state dict for ckpt
29555
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29556
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29557
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29558
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29559
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29560
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29561
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29562
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29563
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29564
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29565
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29566
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29567
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29568
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29569
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29570
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29571
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29572
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29573
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29574
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29575
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29576
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29577
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29578
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29579
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29580
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29581
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29582
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29583
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(209748992), 0), (np.int64(211812352), 1)]
29584
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
29585
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(1073741824), 0), (np.int64(958776320), 1)]
29586
+ DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(1073741824), 0), (np.int64(958776320), 1)]
29587
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.1365416049957275
29588
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.136698961257935
29589
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.170587062835693
29590
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.143614768981934
29591
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.143584966659546
29592
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.137044191360474
29593
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.145409822463989
29594
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.137248277664185
29595
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.146100997924805
29596
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.13724946975708
29597
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.144410133361816
29598
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.137635231018066
29599
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.143915176391602
29600
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.137681245803833
29601
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 5.144082069396973
29602
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, starting state dict save
29603
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, starting state dict save
29604
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, starting state dict save
29605
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, starting state dict save
29606
+ DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:parallel save sharding, time: 0.015091419219970703
29607
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, starting state dict save
29608
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, starting state dict save
29609
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29610
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29611
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29612
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29613
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29614
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29615
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, starting state dict save
29616
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29617
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29618
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29619
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29620
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29621
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29622
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29623
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, starting state dict save
29624
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29625
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, starting state dict save
29626
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, starting state dict save
29627
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, starting state dict save
29628
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29629
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, starting state dict save
29630
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29631
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, starting state dict save
29632
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29633
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29634
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29635
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29636
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, starting state dict save
29637
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29638
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29639
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, starting state dict save
29640
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29641
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29642
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29643
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29644
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29645
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29646
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, starting state dict save
29647
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29648
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 13, plan time: 0.008120536804199219
29649
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 12, plan time: 0.007681846618652344
29650
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29651
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 14, plan time: 0.007982254028320312
29652
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 15, plan time: 0.009308576583862305
29653
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:verifying reuse of global metadata
29654
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 8, plan time: 0.004968166351318359
29655
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.470245
29656
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:loaded global metadata reuse verification: no loaded plans passed
29657
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 10, plan time: 0.008288383483886719
29658
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 9, plan time: 0.007191896438598633
29659
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 7, plan time: 0.003808259963989258
29660
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 5, plan time: 0.00435948371887207
29661
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 2, plan time: 0.004293680191040039
29662
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 3, plan time: 0.0031213760375976562
29663
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 1, plan time: 0.0039310455322265625
29664
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 4, plan time: 0.0029730796813964844
29665
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702554
29666
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702635
29667
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702652
29668
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 11, plan time: 0.008260726928710938
29669
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4707708
29670
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.470777
29671
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.470776
29672
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4707768
29673
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4707768
29674
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4707775
29675
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702742
29676
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.987022399902344e-05
29677
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.058547973632812e-05
29678
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.320808410644531e-05
29679
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.249282836914062e-05
29680
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.130073547363281e-05
29681
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.368492126464844e-05
29682
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702814
29683
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.470285
29684
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 6, plan time: 0.0023756027221679688
29685
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4702938
29686
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.470955
29687
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.462501525878906e-05
29688
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.487701416015625e-05
29689
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 6.747245788574219e-05
29690
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.058547973632812e-05
29691
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.534027099609375e-05
29692
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.557868957519531e-05
29693
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 7.319450378417969e-05
29694
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 8.440017700195312e-05
29695
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 0.00012683868408203125
29696
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:rank: 0, plan time: 0.009725332260131836
29697
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:thread_count: 2, time: 1750540710.4762506
29698
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:bucket_prep, time: 5.364418029785156e-05
29699
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04866313934326172
29700
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.049065589904785156
29701
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5198934 rank: 5, write(async) time: 0.049120187759399414
29702
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.049143075942993164
29703
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5197566 rank: 12, write(async) time: 0.04950380325317383
29704
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5198126 rank: 13, write(async) time: 0.04956817626953125
29705
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.04955911636352539
29706
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.049558162689208984
29707
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0499417781829834
29708
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.520258 rank: 10, write(async) time: 0.04997444152832031
29709
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.521151 rank: 3, write(async) time: 0.05037212371826172
29710
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5202801 rank: 9, write(async) time: 0.04999256134033203
29711
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05165386199951172
29712
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5228791 rank: 2, write(async) time: 0.052101850509643555
29713
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.0526432991027832
29714
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05301833152770996
29715
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5240407 rank: 6, write(async) time: 0.05308675765991211
29716
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5237007 rank: 15, write(async) time: 0.05343365669250488
29717
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.053147077560424805
29718
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.053880929946899414
29719
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5238378 rank: 14, write(async) time: 0.0535738468170166
29720
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.525106 rank: 4, write(async) time: 0.05432558059692383
29721
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05367851257324219
29722
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5244467 rank: 8, write(async) time: 0.054169654846191406
29723
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.054453372955322266
29724
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05446195602416992
29725
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5256987 rank: 1, write(async) time: 0.05492067337036133
29726
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.52575 rank: 7, write(async) time: 0.05497598648071289
29727
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05733776092529297
29728
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.5280285 rank: 11, write(async) time: 0.05773162841796875
29729
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:D2H and push, time: 0.05939149856567383
29730
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540710.536108 rank: 0, write(async) time: 0.059854984283447266
29731
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 1.9788742065429688e-05 to finish D2H
29732
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 1.7642974853515625e-05 to finish D2H
29733
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 1.6450881958007812e-05 to finish D2H
29734
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 1.621246337890625e-05 to finish D2H
29735
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 1.6927719116210938e-05 to finish D2H
29736
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 1.7642974853515625e-05 to finish D2H
29737
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 1.4781951904296875e-05 to finish D2H
29738
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 1.71661376953125e-05 to finish D2H
29739
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 1.8596649169921875e-05 to finish D2H
29740
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 2.2411346435546875e-05 to finish D2H
29741
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 2.0265579223632812e-05 to finish D2H
29742
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 1.8835067749023438e-05 to finish D2H
29743
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 1.8596649169921875e-05 to finish D2H
29744
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 1.8596649169921875e-05 to finish D2H
29745
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, takes 0.028268098831176758 to schedule async ckpt
29746
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, takes 0.027981996536254883 to schedule async ckpt
29747
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, takes 0.028491497039794922 to schedule async ckpt
29748
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, takes 0.028436899185180664 to schedule async ckpt
29749
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, takes 0.03314328193664551 to schedule async ckpt
29750
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, takes 0.03297257423400879 to schedule async ckpt
29751
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, takes 0.03707122802734375 to schedule async ckpt
29752
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, takes 0.03021383285522461 to schedule async ckpt
29753
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, takes 0.02933502197265625 to schedule async ckpt
29754
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, takes 0.03126120567321777 to schedule async ckpt
29755
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, takes 0.03090667724609375 to schedule async ckpt
29756
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, takes 0.03205680847167969 to schedule async ckpt
29757
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, takes 0.03353571891784668 to schedule async ckpt
29758
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29759
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29760
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, takes 0.04225039482116699 to schedule async ckpt
29761
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29762
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29763
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29764
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29765
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29766
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29767
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29768
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29769
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29770
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29771
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29772
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29773
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29774
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29775
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29776
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29777
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29778
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29779
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29780
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29781
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29782
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29783
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29784
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29785
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29786
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29787
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29788
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29789
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29790
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29791
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29792
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29793
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29794
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29795
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29796
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29797
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29798
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29799
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29800
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29801
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 1.52587890625e-05 to finish D2H
29802
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, takes 0.03443336486816406 to schedule async ckpt
29803
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29804
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109649920, before: 1695543296, after: 1805193216
29805
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109772800, before: 1706926080, after: 1816698880
29806
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 112164864, before: 1706807296, after: 1818972160
29807
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 112361472, before: 1702477824, after: 1814839296
29808
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29809
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29810
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109907968, before: 1709613056, after: 1819521024
29811
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109580288, before: 1716420608, after: 1826000896
29812
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109486080, before: 1714417664, after: 1823903744
29813
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 111648768, before: 1698844672, after: 1810493440
29814
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109064192, before: 1699131392, after: 1808195584
29815
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 111783936, before: 1705713664, after: 1817497600
29816
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 2.86102294921875e-05 to finish D2H
29817
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109539328, before: 1698820096, after: 1808359424
29818
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 112291840, before: 1699897344, after: 1812189184
29819
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 110182400, before: 1706807296, after: 1816989696
29820
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109723648, before: 1695543296, after: 1805266944
29821
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109838336, before: 1699897344, after: 1809735680
29822
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109957120, before: 1698832384, after: 1808789504
29823
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 112312320, before: 1699119104, after: 1811431424
29824
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109424640, before: 1714417664, after: 1823842304
29825
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109797376, before: 1716420608, after: 1826217984
29826
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109686784, before: 1700237312, after: 1809924096
29827
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109772800, before: 1698820096, after: 1808592896
29828
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 112128000, before: 1709613056, after: 1821741056
29829
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109576192, before: 1694912512, after: 1804488704
29830
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, takes 0.0375056266784668 to schedule async ckpt
29831
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 0, joining self.process
29832
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 9, joining self.process
29833
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 8, joining self.process
29834
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 1, joining self.process
29835
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 2, joining self.process
29836
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 3, joining self.process
29837
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 10, joining self.process
29838
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 4, joining self.process
29839
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 11, joining self.process
29840
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 5, joining self.process
29841
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 12, joining self.process
29842
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 6, joining self.process
29843
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 7, joining self.process
29844
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 13, joining self.process
29845
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29846
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 15, joining self.process
29847
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:rank: 14, joining self.process
29848
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0303245, rank: 5, write(sync,parallel): 0.3796372413635254
29849
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29850
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29851
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0302293, rank: 13, write(sync,parallel): 0.3942713737487793
29852
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0310073, rank: 15, write(sync,parallel): 0.38779783248901367
29853
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109617152, before: 1700237312, after: 1809854464
29854
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29855
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29856
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0376275, rank: 11, write(sync,parallel): 0.3883233070373535
29857
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 110215168, before: 1705709568, after: 1815924736
29858
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0408192, rank: 4, write(sync,parallel): 0.380814790725708
29859
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29860
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 109510656, before: 1694912512, after: 1804423168
29861
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 109854720, before: 1706926080, after: 1816780800
29862
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.043519, rank: 14, write(sync,parallel): 0.3982844352722168
29863
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 110186496, before: 1702412288, after: 1812598784
29864
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29865
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0499644, rank: 7, write(sync,parallel): 0.3940567970275879
29866
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29867
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29868
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0528767, rank: 12, write(sync,parallel): 0.4158616065979004
29869
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0522892, rank: 3, write(sync,parallel): 0.4005098342895508
29870
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.45s from forking
29871
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking
29872
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 started
29873
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking
29874
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking
29875
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.46s from forking
29876
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29877
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29878
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0828059, rank: 9, write(sync,parallel): 0.4452991485595703
29879
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.082717, rank: 1, write(sync,parallel): 0.4134373664855957
29880
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.48s from forking
29881
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29882
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29883
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.085831, rank: 2, write(sync,parallel): 0.43056535720825195
29884
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0857573, rank: 10, write(sync,parallel): 0.44857358932495117
29885
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.47s from forking
29886
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29887
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.49s from forking
29888
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.0925114, rank: 6, write(sync,parallel): 0.4342350959777832
29889
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.48s from forking
29890
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collecting worker results...
29891
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 started
29892
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 147456, before: 2005639168, after: 2005786624
29893
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.51s from forking
29894
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.51s from forking
29895
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.50s from forking
29896
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.50s from forking
29897
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 0.51s from forking
29898
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1 consumed: 427352064, before: 1740451840, after: 2167803904
29899
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 542007296, before: 1740451840, after: 2282459136
29900
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29901
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540711.8749943, rank: 8, write(sync,parallel): 0.9843263626098633
29902
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.06s from forking
29903
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:0 consumed: 1078128640, before: 2005639168, after: 3083767808
29904
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:FileSystemWriterAsync: collected worker results successfully
29905
+ DEBUG:megatron.core.dist_checkpointing.strategies.filesystem_async:1750540712.876399, rank: 0, write(sync,parallel): 1.8495192527770996
29906
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:TemporalAsyncCaller: Async process join finished after 1.96s from forking
29907
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9502618, 6, gather: 1.8163981437683105
29908
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9504297, 2, gather: 1.8253560066223145
29909
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9503453, 5, gather: 1.883301019668579
29910
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9502494, 9, gather: 1.8324296474456787
29911
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9504378, 4, gather: 1.8708419799804688
29912
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9503853, 13, gather: 1.881472110748291
29913
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9503906, 12, gather: 1.858473300933838
29914
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.950525, 3, gather: 1.8542518615722656
29915
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.950411, 11, gather: 1.872950792312622
29916
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9504201, 10, gather: 1.82920241355896
29917
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.950467, 7, gather: 1.85914945602417
29918
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9505475, 14, gather: 1.8659534454345703
29919
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9507422, 8, gather: 1.0384495258331299
29920
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9511492, 1, gather: 1.8232641220092773
29921
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9507585, 15, gather: 1.8799049854278564
29922
+ DEBUG:megatron.core.dist_checkpointing.strategies.state_dict_saver:1750540712.9546258, 0, gather: 0.007202863693237305
29923
+ DEBUG:megatron.core.dist_checkpointing.strategies.async_utils:finalize took 0.0099s
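The DEBUG trace above follows one asynchronous checkpoint save end to end: the device-to-host copy, scheduling the async write, the forked workers' write(sync,parallel) phase, the result gather, and the final finalize step. A minimal sketch of that general fork-and-write pattern is shown below; it is illustrative only and is not Megatron's actual FileSystemWriterAsync implementation.

import os
import time
import tempfile
import torch
import torch.multiprocessing as mp

def _write_shard(cpu_state, path):
    # Runs in the forked worker: the "write(sync,parallel)" phase in the log.
    torch.save(cpu_state, path)

def async_save(state_dict, path):
    t0 = time.time()
    # "D2H and push": copy tensors to host memory before forking the writer.
    cpu_state = {k: v.detach().cpu() for k, v in state_dict.items()}
    print(f"D2H took {time.time() - t0:.6f}s")
    ctx = mp.get_context("fork")
    proc = ctx.Process(target=_write_shard, args=(cpu_state, path))
    proc.start()  # "schedule async ckpt": training continues while the worker writes
    return proc

if __name__ == "__main__":
    p = async_save({"w": torch.randn(4, 4)}, os.path.join(tempfile.gettempdir(), "shard.pt"))
    p.join()  # "joining self.process" / "Async process join finished"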
attnserver.run_attnserver.slurm.sh.343203.err.log CHANGED
@@ -560,3 +560,138 @@ W0621 21:17:15.874000 755570 site-packages/torch/distributed/run.py:766]
560
  W0621 21:17:15.874000 755570 site-packages/torch/distributed/run.py:766] *****************************************
561
  W0621 21:17:15.874000 755570 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
562
  W0621 21:17:15.874000 755570 site-packages/torch/distributed/run.py:766] *****************************************
563
+ [rank8]:[W621 21:17:39.807640759 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
564
+ [rank10]:[W621 21:17:40.128637473 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
565
+ [rank2]:[W621 21:17:40.469786999 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
566
+ [rank0]:[W621 21:17:40.491472929 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
567
+ [rank11]:[W621 21:17:40.156252974 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
568
+ [rank3]:[W621 21:17:40.496091577 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
569
+ [rank15]:[W621 21:17:40.159377300 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
570
+ [rank7]:[W621 21:17:40.498655977 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
571
+ [rank12]:[W621 21:17:40.165476282 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
572
+ [rank4]:[W621 21:17:40.507136515 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
573
+ [rank14]:[W621 21:17:40.169105437 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
574
+ [rank6]:[W621 21:17:40.511296861 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
575
+ [rank5]:[W621 21:17:40.514242174 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
576
+ [rank1]:[W621 21:17:40.514556635 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
577
+ [rank13]:[W621 21:17:40.175710494 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
578
+ [rank9]:[W621 21:17:40.178955361 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
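Every rank prints this warning because the NCCL process group was created before the rank-to-GPU mapping was made explicit. A hedged sketch of the remedy the message points at, assuming a recent PyTorch release whose init_process_group accepts device_id and a torchrun-style launch that sets LOCAL_RANK and the rendezvous variables:

import os
import torch
import torch.distributed as dist

# Launch with torchrun so LOCAL_RANK, RANK, WORLD_SIZE, MASTER_ADDR/PORT are set.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(
    backend="nccl",
    device_id=torch.device(f"cuda:{local_rank}"),  # makes the rank-to-GPU binding explicit
)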
579
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
580
+ warnings.warn(
581
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
582
+ warnings.warn(
583
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
584
+ warnings.warn(
585
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
586
+ warnings.warn(
587
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
588
+ warnings.warn(
589
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
590
+ warnings.warn(
591
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
592
+ warnings.warn(
593
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
594
+ warnings.warn(
595
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
596
+ warnings.warn(
597
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
598
+ warnings.warn(
599
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
600
+ warnings.warn(
601
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
602
+ warnings.warn(
603
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
604
+ warnings.warn(
605
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
606
+ warnings.warn(
607
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
608
+ warnings.warn(
609
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
610
+ warnings.warn(
611
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
612
+ warnings.warn(
613
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
614
+ warnings.warn(
615
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
616
+ warnings.warn(
617
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
618
+ warnings.warn(
619
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
620
+ warnings.warn(
621
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
622
+ warnings.warn(
623
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
624
+ warnings.warn(
625
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
626
+ warnings.warn(
627
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
628
+ warnings.warn(
629
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
630
+ warnings.warn(
631
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
632
+ warnings.warn(
633
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
634
+ warnings.warn(
635
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
636
+ warnings.warn(
637
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
638
+ warnings.warn(
639
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
640
+ warnings.warn(
641
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
642
+ warnings.warn(
643
+ [rank0]:[W621 21:18:12.746685017 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
644
+ [rank7]:[W621 21:18:12.753953917 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
645
+ [rank2]:[W621 21:18:12.850162563 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
646
+ [rank1]:[W621 21:18:12.853369691 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
647
+ [rank4]:[W621 21:18:12.898370089 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
648
+ [rank3]:[W621 21:18:12.927309566 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
649
+ [rank6]:[W621 21:18:12.939298933 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
650
+ [rank5]:[W621 21:18:12.964087544 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
651
+ [rank13]:[W621 21:18:13.968139063 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
652
+ [rank11]:[W621 21:18:13.066932158 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
653
+ [rank15]:[W621 21:18:13.187760979 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
654
+ [rank10]:[W621 21:18:13.188025094 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
655
+ [rank14]:[W621 21:18:13.206550379 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
656
+ [rank9]:[W621 21:18:13.255674863 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
657
+ [rank8]:[W621 21:18:13.347988832 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
658
+ [rank12]:[W621 21:18:13.423960392 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
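These shutdown warnings mean the training script returned without tearing down the process group. A minimal sketch of the cleanup the warning asks for, under the same torchrun-style launch assumption:

import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    try:
        pass  # training / profiling loop
    finally:
        dist.destroy_process_group()  # explicit teardown silences the warning

if __name__ == "__main__":
    main()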
659
+ + set +x
660
+ + set +x
661
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
662
+ + export PROF_CTX_LENGTH=12288
663
+ + PROF_CTX_LENGTH=12288
664
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp8.cp2.bs4.json'
665
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp8.cp2.bs4.json' ']'
666
+ + echo 'Running ctx_length=12288, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=4'
667
+ + srun bash ./attnserver.sh
668
+ + which python3
669
+ + which python3
670
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343203 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 12288 --max-position-embeddings 12288 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
671
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343203 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 12288 --max-position-embeddings 12288 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
672
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
673
+ and will be removed in future. Use torchrun.
674
+ Note that --use-env is set by default in torchrun.
675
+ If your script expects `--local-rank` argument to be set, please
676
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
677
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
678
+ further instructions
679
+
680
+ main()
681
+ W0621 21:18:19.188000 1036067 site-packages/torch/distributed/run.py:766]
682
+ W0621 21:18:19.188000 1036067 site-packages/torch/distributed/run.py:766] *****************************************
683
+ W0621 21:18:19.188000 1036067 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
684
+ W0621 21:18:19.188000 1036067 site-packages/torch/distributed/run.py:766] *****************************************
685
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
686
+ and will be removed in future. Use torchrun.
687
+ Note that --use-env is set by default in torchrun.
688
+ If your script expects `--local-rank` argument to be set, please
689
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
690
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
691
+ further instructions
692
+
693
+ main()
694
+ W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766]
695
+ W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] *****************************************
696
+ W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
697
+ W0621 21:18:19.257000 758676 site-packages/torch/distributed/run.py:766] *****************************************
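The FutureWarning emitted by both launchers above notes that torch.distributed.launch is deprecated in favor of torchrun, which exports LOCAL_RANK rather than passing a --local-rank argument. A small sketch of the script-side migration it suggests:

import os
import torch

# Under torchrun, --use-env behavior is the default: read the rank from the environment.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)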
attnserver.run_attnserver.slurm.sh.343203.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343204.err.log CHANGED
@@ -5100,3 +5100,262 @@ W0621 21:17:21.418000 720494 site-packages/torch/distributed/elastic/multiproces
5100
  W0621 21:17:21.421000 720494 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 720572 closing signal SIGTERM
5101
  W0621 21:17:21.436000 720494 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 720573 closing signal SIGTERM
5102
  W0621 21:17:21.439000 720494 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 720574 closing signal SIGTERM
5103
+ E0621 21:17:22.819000 720494 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 720567) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
5104
+ Traceback (most recent call last):
5105
+ File "<frozen runpy>", line 198, in _run_module_as_main
5106
+ File "<frozen runpy>", line 88, in _run_code
5107
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
5108
+ main()
5109
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
5110
+ return arg(*args, **kwargs)
5111
+ ^^^^^^^^^^^^^^^^^^^^
5112
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
5113
+ launch(args)
5114
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
5115
+ run(args)
5116
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
5117
+ elastic_launch(
5118
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
5119
+ return launch_agent(self._config, self._entrypoint, list(args))
5120
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5121
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
5122
+ raise ChildFailedError(
5123
+ torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
5124
+ ============================================================
5125
+ ./pretrain_gpt_profile.py FAILED
5126
+ ------------------------------------------------------------
5127
+ Failures:
5128
+ <NO_OTHER_FAILURES>
5129
+ ------------------------------------------------------------
5130
+ Root Cause (first observed failure):
5131
+ [0]:
5132
+ time : 2025-06-21_21:17:21
5133
+ host : fs-mbz-gpu-600
5134
+ rank : 0 (local_rank: 0)
5135
+ exitcode : 1 (pid: 720567)
5136
+ error_file: <N/A>
5137
+ traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
5138
+ ============================================================
5139
+ W0621 21:17:23.048000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702341 closing signal SIGTERM
5140
+ W0621 21:17:23.051000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702342 closing signal SIGTERM
5141
+ W0621 21:17:23.052000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702343 closing signal SIGTERM
5142
+ W0621 21:17:23.056000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702344 closing signal SIGTERM
5143
+ W0621 21:17:23.071000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702345 closing signal SIGTERM
5144
+ W0621 21:17:23.078000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702346 closing signal SIGTERM
5145
+ W0621 21:17:23.080000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702347 closing signal SIGTERM
5146
+ W0621 21:17:23.084000 1702270 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1702348 closing signal SIGTERM
5147
+ + set +x
5148
+ [W621 21:17:23.868715535 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-717]:48024, remote=[fs-mbz-gpu-600]:29500): Broken pipe
5149
+ Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
5150
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1507e6d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
5151
+ frame #1: <unknown function> + 0x5ba8afe (0x1507cfc5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5152
+ frame #2: <unknown function> + 0x5baa358 (0x1507cfc5c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5153
+ frame #3: <unknown function> + 0x5babb3e (0x1507cfc5db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5154
+ frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1507cfc57ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5155
+ frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1507cfc57ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5156
+ frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1507cfc58f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5157
+ frame #7: <unknown function> + 0xc0f526 (0x1507def8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5158
+ frame #8: <unknown function> + 0x37f17d (0x1507de6fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5159
+ <omitting python frames>
5160
+ frame #17: <unknown function> + 0x94ac3 (0x1507e7e50ac3 in /lib/x86_64-linux-gnu/libc.so.6)
5161
+ frame #18: <unknown function> + 0x126850 (0x1507e7ee2850 in /lib/x86_64-linux-gnu/libc.so.6)
5162
+
5163
+ W0621 21:17:23.980000 1702270 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-717_1702270_0' has failed to send a keep-alive heartbeat to the rendezvous '343204' due to an error of type RendezvousConnectionError.
5164
+ [W621 21:17:24.735056148 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-717]:48024, remote=[fs-mbz-gpu-600]:29500): Broken pipe
5165
+ Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
5166
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1507e6d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
5167
+ frame #1: <unknown function> + 0x5ba8afe (0x1507cfc5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5168
+ frame #2: <unknown function> + 0x5baa358 (0x1507cfc5c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5169
+ frame #3: <unknown function> + 0x5babb3e (0x1507cfc5db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5170
+ frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1507cfc57ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5171
+ frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1507cfc57ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5172
+ frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1507cfc58f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5173
+ frame #7: <unknown function> + 0xc0f526 (0x1507def8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5174
+ frame #8: <unknown function> + 0x37f17d (0x1507de6fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5175
+ <omitting python frames>
5176
+ frame #26: <unknown function> + 0x29d90 (0x1507e7de5d90 in /lib/x86_64-linux-gnu/libc.so.6)
5177
+ frame #27: __libc_start_main + 0x80 (0x1507e7de5e40 in /lib/x86_64-linux-gnu/libc.so.6)
5178
+
5179
+ W0621 21:17:24.850000 1702270 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-717_1702270_0' has failed to shutdown the rendezvous '343204' due to an error of type RendezvousConnectionError.
5180
+ [W621 21:17:24.749979375 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-717]:48024, remote=[fs-mbz-gpu-600]:29500): Broken pipe
5181
+ Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
5182
+ frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1507e6d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
5183
+ frame #1: <unknown function> + 0x5ba8afe (0x1507cfc5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5184
+ frame #2: <unknown function> + 0x5baa358 (0x1507cfc5c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5185
+ frame #3: <unknown function> + 0x5babb3e (0x1507cfc5db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5186
+ frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1507cfc57ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5187
+ frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1507cfc57ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5188
+ frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1507cfc58f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
5189
+ frame #7: <unknown function> + 0xc0f526 (0x1507def8b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5190
+ frame #8: <unknown function> + 0x37f17d (0x1507de6fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
5191
+ <omitting python frames>
5192
+ frame #26: <unknown function> + 0x29d90 (0x1507e7de5d90 in /lib/x86_64-linux-gnu/libc.so.6)
5193
+ frame #27: __libc_start_main + 0x80 (0x1507e7de5e40 in /lib/x86_64-linux-gnu/libc.so.6)
5194
+
5195
+ W0621 21:17:24.862000 1702270 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-717_1702270_0' has failed to shutdown the rendezvous '343204' due to an error of type RendezvousConnectionError.
5196
+ Traceback (most recent call last):
5197
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
5198
+ return getattr(self._store, store_op)(*args, **kwargs)
5199
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5200
+ torch.distributed.DistNetworkError: failed to recv, got 0 bytes
5201
+
5202
+ The above exception was the direct cause of the following exception:
5203
+
5204
+ Traceback (most recent call last):
5205
+ File "<frozen runpy>", line 198, in _run_module_as_main
5206
+ File "<frozen runpy>", line 88, in _run_code
5207
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
5208
+ main()
5209
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
5210
+ return arg(*args, **kwargs)
5211
+ ^^^^^^^^^^^^^^^^^^^^
5212
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
5213
+ launch(args)
5214
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
5215
+ run(args)
5216
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
5217
+ elastic_launch(
5218
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
5219
+ return launch_agent(self._config, self._entrypoint, list(args))
5220
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5221
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
5222
+ result = agent.run()
5223
+ ^^^^^^^^^^^
5224
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
5225
+ result = f(*args, **kwargs)
5226
+ ^^^^^^^^^^^^^^^^^^
5227
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
5228
+ result = self._invoke_run(role)
5229
+ ^^^^^^^^^^^^^^^^^^^^^^
5230
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
5231
+ num_nodes_waiting = rdzv_handler.num_nodes_waiting()
5232
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5233
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
5234
+ self._state_holder.sync()
5235
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
5236
+ get_response = self._backend.get_state()
5237
+ ^^^^^^^^^^^^^^^^^^^^^^^^^
5238
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
5239
+ base64_state: bytes = self._call_store("get", self._key)
5240
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5241
+ File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
5242
+ raise RendezvousConnectionError(
5243
+ torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
5244
+ + set +x
5245
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
5246
+ + export PROF_CTX_LENGTH=8192
5247
+ + PROF_CTX_LENGTH=8192
5248
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L8192*tp8.cp2.bs8.json'
5249
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L8192*tp8.cp2.bs8.json' ']'
5250
+ + echo 'Running ctx_length=8192, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=8'
5251
+ + srun bash ./attnserver.sh
5252
+ + which python3
5253
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343204 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 8192 --max-position-embeddings 8192 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
5254
+ + which python3
5255
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343204 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 8192 --max-position-embeddings 8192 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
5256
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
5257
+ and will be removed in future. Use torchrun.
5258
+ Note that --use-env is set by default in torchrun.
5259
+ If your script expects `--local-rank` argument to be set, please
5260
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
5261
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
5262
+ further instructions
5263
+
5264
+ main()
5265
+ W0621 21:17:28.005000 1704865 site-packages/torch/distributed/run.py:766]
5266
+ W0621 21:17:28.005000 1704865 site-packages/torch/distributed/run.py:766] *****************************************
5267
+ W0621 21:17:28.005000 1704865 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to 1 by default. To avoid overloading your system, please further tune the variable for optimal performance in your application as needed.
5268
+ W0621 21:17:28.005000 1704865 site-packages/torch/distributed/run.py:766] *****************************************
5269
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
5270
+ and will be removed in future. Use torchrun.
5271
+ Note that --use-env is set by default in torchrun.
5272
+ If your script expects `--local-rank` argument to be set, please
5273
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
5274
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
5275
+ further instructions
5276
+
5277
+ main()
5278
+ W0621 21:17:28.010000 723166 site-packages/torch/distributed/run.py:766]
5279
+ W0621 21:17:28.010000 723166 site-packages/torch/distributed/run.py:766] *****************************************
5280
+ W0621 21:17:28.010000 723166 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to 1 by default. To avoid overloading your system, please further tune the variable for optimal performance in your application as needed.
5281
+ W0621 21:17:28.010000 723166 site-packages/torch/distributed/run.py:766] *****************************************
5282
+ [rank2]:[W621 21:17:50.503583304 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5283
+ [rank10]:[W621 21:17:50.098178415 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5284
+ [rank11]:[W621 21:17:50.225173888 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5285
+ [rank8]:[W621 21:17:50.295557738 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5286
+ [rank3]:[W621 21:17:50.640182182 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5287
+ [rank14]:[W621 21:17:50.442314027 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5288
+ [rank6]:[W621 21:17:50.858109613 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5289
+ [rank0]:[W621 21:17:50.890359594 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5290
+ [rank1]:[W621 21:17:50.896970478 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5291
+ [rank9]:[W621 21:17:50.487824039 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5292
+ [rank15]:[W621 21:17:50.493958030 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5293
+ [rank7]:[W621 21:17:50.904497461 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5294
+ [rank12]:[W621 21:17:50.498492911 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5295
+ [rank4]:[W621 21:17:50.904650488 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5296
+ [rank5]:[W621 21:17:50.909090567 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5297
+ [rank13]:[W621 21:17:50.501167640 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
5298
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5299
+ warnings.warn(
5300
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5301
+ warnings.warn(
5302
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5303
+ warnings.warn(
5304
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5305
+ warnings.warn(
5306
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5307
+ warnings.warn(
5308
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5309
+ warnings.warn(
5310
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5311
+ warnings.warn(
5312
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5313
+ warnings.warn(
5314
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5315
+ warnings.warn(
5316
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5317
+ warnings.warn(
5318
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5319
+ warnings.warn(
5320
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5321
+ warnings.warn(
5322
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5323
+ warnings.warn(
5324
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5325
+ warnings.warn(
5326
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5327
+ warnings.warn(
5328
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
5329
+ warnings.warn(
5330
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5331
+ warnings.warn(
5332
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5333
+ warnings.warn(
5334
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5335
+ warnings.warn(
5336
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5337
+ warnings.warn(
5338
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5339
+ warnings.warn(
5340
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5341
+ warnings.warn(
5342
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5343
+ warnings.warn(
5344
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5345
+ warnings.warn(
5346
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5347
+ warnings.warn(
5348
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5349
+ warnings.warn(
5350
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5351
+ warnings.warn(
5352
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5353
+ warnings.warn(
5354
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5355
+ warnings.warn(
5356
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5357
+ warnings.warn(
5358
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5359
+ warnings.warn(
5360
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
5361
+ warnings.warn(
attnserver.run_attnserver.slurm.sh.343204.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343205.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343206.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343206.out.log CHANGED
The diff for this file is too large to render. See raw diff