GindaChen committed on
Commit c80861c · verified · 1 Parent(s): 5240706

Upload folder using huggingface_hub
attnserver.run_attnserver.slurm.sh.343188.out.log CHANGED
@@ -123499,3 +123499,666 @@ batch tensor after cp: labels torch.Size([1, 16384])
123499
  batch tensor after cp: loss_mask torch.Size([1, 16384])
123500
  batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123501
  batch tensor after cp: position_ids torch.Size([1, 16384])
123502
+ batch tensor: tokens torch.Size([1, 131072])
123503
+ batch tensor: labels torch.Size([1, 131072])
123504
+ batch tensor: loss_mask torch.Size([1, 131072])
123505
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123506
+ batch tensor: position_ids torch.Size([1, 131072])
123507
+ batch tensor after cp: tokens torch.Size([1, 16384])
123508
+ batch tensor after cp: labels torch.Size([1, 16384])
123509
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123510
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123511
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123512
+ batch tensor: tokens torch.Size([1, 131072])
123513
+ batch tensor: labels torch.Size([1, 131072])
123514
+ batch tensor: loss_mask torch.Size([1, 131072])
123515
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123516
+ batch tensor: position_ids torch.Size([1, 131072])
123517
+ batch tensor after cp: tokens torch.Size([1, 16384])
123518
+ batch tensor after cp: labels torch.Size([1, 16384])
123519
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123520
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123521
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123522
+ batch tensor: tokens torch.Size([1, 131072])
123523
+ batch tensor: labels torch.Size([1, 131072])
123524
+ batch tensor: loss_mask torch.Size([1, 131072])
123525
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123526
+ batch tensor: position_ids torch.Size([1, 131072])
123527
+ batch tensor after cp: tokens torch.Size([1, 16384])
123528
+ batch tensor after cp: labels torch.Size([1, 16384])
123529
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123530
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123531
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123532
+ Start exporting trace 6
123533
+ Done exporting trace 6
123534
+ [2025-06-21 21:14:01] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 121597.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
123535
+ batch tensor: tokens torch.Size([1, 131072])
123536
+ batch tensor: labels torch.Size([1, 131072])
123537
+ batch tensor: loss_mask torch.Size([1, 131072])
123538
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123539
+ batch tensor: position_ids torch.Size([1, 131072])
123540
+ batch tensor after cp: tokens torch.Size([1, 16384])
123541
+ batch tensor after cp: labels torch.Size([1, 16384])
123542
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123543
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123544
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123545
+ batch tensor: tokens torch.Size([1, 131072])
123546
+ batch tensor: labels torch.Size([1, 131072])
123547
+ batch tensor: loss_mask torch.Size([1, 131072])
123548
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123549
+ batch tensor: position_ids torch.Size([1, 131072])
123550
+ batch tensor after cp: tokens torch.Size([1, 16384])
123551
+ batch tensor after cp: labels torch.Size([1, 16384])
123552
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123553
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123554
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123555
+ batch tensor: tokens torch.Size([1, 131072])
123556
+ batch tensor: labels torch.Size([1, 131072])
123557
+ batch tensor: loss_mask torch.Size([1, 131072])
123558
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123559
+ batch tensor: position_ids torch.Size([1, 131072])
123560
+ batch tensor after cp: tokens torch.Size([1, 16384])
123561
+ batch tensor after cp: labels torch.Size([1, 16384])
123562
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123563
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123564
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123565
+ batch tensor: tokens torch.Size([1, 131072])
123566
+ batch tensor: labels torch.Size([1, 131072])
123567
+ batch tensor: loss_mask torch.Size([1, 131072])
123568
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123569
+ batch tensor: position_ids torch.Size([1, 131072])
123570
+ batch tensor after cp: tokens torch.Size([1, 16384])
123571
+ batch tensor after cp: labels torch.Size([1, 16384])
123572
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123573
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123574
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123575
+ batch tensor: tokens torch.Size([1, 131072])
123576
+ batch tensor: labels torch.Size([1, 131072])
123577
+ batch tensor: loss_mask torch.Size([1, 131072])
123578
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123579
+ batch tensor: position_ids torch.Size([1, 131072])
123580
+ batch tensor after cp: tokens torch.Size([1, 16384])
123581
+ batch tensor after cp: labels torch.Size([1, 16384])
123582
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123583
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123584
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123585
+ batch tensor: tokens torch.Size([1, 131072])
123586
+ batch tensor: labels torch.Size([1, 131072])
123587
+ batch tensor: loss_mask torch.Size([1, 131072])
123588
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123589
+ batch tensor: position_ids torch.Size([1, 131072])
123590
+ batch tensor after cp: tokens torch.Size([1, 16384])
123591
+ batch tensor after cp: labels torch.Size([1, 16384])
123592
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123593
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123594
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123595
+ batch tensor: tokens torch.Size([1, 131072])
123596
+ batch tensor: labels torch.Size([1, 131072])
123597
+ batch tensor: loss_mask torch.Size([1, 131072])
123598
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123599
+ batch tensor: position_ids torch.Size([1, 131072])
123600
+ batch tensor after cp: tokens torch.Size([1, 16384])
123601
+ batch tensor after cp: labels torch.Size([1, 16384])
123602
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123603
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123604
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123605
+ batch tensor: tokens torch.Size([1, 131072])
123606
+ batch tensor: labels torch.Size([1, 131072])
123607
+ batch tensor: loss_mask torch.Size([1, 131072])
123608
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123609
+ batch tensor: position_ids torch.Size([1, 131072])
123610
+ batch tensor after cp: tokens torch.Size([1, 16384])
123611
+ batch tensor after cp: labels torch.Size([1, 16384])
123612
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123613
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123614
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123615
+ batch tensor: tokens torch.Size([1, 131072])
123616
+ batch tensor: labels torch.Size([1, 131072])
123617
+ batch tensor: loss_mask torch.Size([1, 131072])
123618
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123619
+ batch tensor: position_ids torch.Size([1, 131072])
123620
+ batch tensor after cp: tokens torch.Size([1, 16384])
123621
+ batch tensor after cp: labels torch.Size([1, 16384])
123622
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123623
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123624
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123625
+ batch tensor: tokens torch.Size([1, 131072])
123626
+ batch tensor: labels torch.Size([1, 131072])
123627
+ batch tensor: loss_mask torch.Size([1, 131072])
123628
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123629
+ batch tensor: position_ids torch.Size([1, 131072])
123630
+ batch tensor after cp: tokens torch.Size([1, 16384])
123631
+ batch tensor after cp: labels torch.Size([1, 16384])
123632
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123633
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123634
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123635
+ batch tensor: tokens torch.Size([1, 131072])
123636
+ batch tensor: labels torch.Size([1, 131072])
123637
+ batch tensor: loss_mask torch.Size([1, 131072])
123638
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123639
+ batch tensor: position_ids torch.Size([1, 131072])
123640
+ batch tensor after cp: tokens torch.Size([1, 16384])
123641
+ batch tensor after cp: labels torch.Size([1, 16384])
123642
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123643
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123644
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123645
+ batch tensor: tokens torch.Size([1, 131072])
123646
+ batch tensor: labels torch.Size([1, 131072])
123647
+ batch tensor: loss_mask torch.Size([1, 131072])
123648
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123649
+ batch tensor: position_ids torch.Size([1, 131072])
123650
+ batch tensor after cp: tokens torch.Size([1, 16384])
123651
+ batch tensor after cp: labels torch.Size([1, 16384])
123652
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123653
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123654
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123655
+ batch tensor: tokens torch.Size([1, 131072])
123656
+ batch tensor: labels torch.Size([1, 131072])
123657
+ batch tensor: loss_mask torch.Size([1, 131072])
123658
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123659
+ batch tensor: position_ids torch.Size([1, 131072])
123660
+ batch tensor after cp: tokens torch.Size([1, 16384])
123661
+ batch tensor after cp: labels torch.Size([1, 16384])
123662
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123663
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123664
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123665
+ batch tensor: tokens torch.Size([1, 131072])
123666
+ batch tensor: labels torch.Size([1, 131072])
123667
+ batch tensor: loss_mask torch.Size([1, 131072])
123668
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123669
+ batch tensor: position_ids torch.Size([1, 131072])
123670
+ batch tensor after cp: tokens torch.Size([1, 16384])
123671
+ batch tensor after cp: labels torch.Size([1, 16384])
123672
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123673
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123674
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123675
+ batch tensor: tokens torch.Size([1, 131072])
123676
+ batch tensor: labels torch.Size([1, 131072])
123677
+ batch tensor: loss_mask torch.Size([1, 131072])
123678
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123679
+ batch tensor: position_ids torch.Size([1, 131072])
123680
+ batch tensor after cp: tokens torch.Size([1, 16384])
123681
+ batch tensor after cp: labels torch.Size([1, 16384])
123682
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123683
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123684
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123685
+ batch tensor: tokens torch.Size([1, 131072])
123686
+ batch tensor: labels torch.Size([1, 131072])
123687
+ batch tensor: loss_mask torch.Size([1, 131072])
123688
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123689
+ batch tensor: position_ids torch.Size([1, 131072])
123690
+ batch tensor after cp: tokens torch.Size([1, 16384])
123691
+ batch tensor after cp: labels torch.Size([1, 16384])
123692
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123693
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123694
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123695
+ batch tensor: tokens torch.Size([1, 131072])
123696
+ batch tensor: labels torch.Size([1, 131072])
123697
+ batch tensor: loss_mask torch.Size([1, 131072])
123698
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123699
+ batch tensor: position_ids torch.Size([1, 131072])
123700
+ batch tensor after cp: tokens torch.Size([1, 16384])
123701
+ batch tensor after cp: labels torch.Size([1, 16384])
123702
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123703
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123704
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123705
+ batch tensor: tokens torch.Size([1, 131072])
123706
+ batch tensor: labels torch.Size([1, 131072])
123707
+ batch tensor: loss_mask torch.Size([1, 131072])
123708
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123709
+ batch tensor: position_ids torch.Size([1, 131072])
123710
+ batch tensor after cp: tokens torch.Size([1, 16384])
123711
+ batch tensor after cp: labels torch.Size([1, 16384])
123712
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123713
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123714
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123715
+ batch tensor: tokens torch.Size([1, 131072])
123716
+ batch tensor: labels torch.Size([1, 131072])
123717
+ batch tensor: loss_mask torch.Size([1, 131072])
123718
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123719
+ batch tensor: position_ids torch.Size([1, 131072])
123720
+ batch tensor after cp: tokens torch.Size([1, 16384])
123721
+ batch tensor after cp: labels torch.Size([1, 16384])
123722
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123723
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123724
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123725
+ batch tensor: tokens torch.Size([1, 131072])
123726
+ batch tensor: labels torch.Size([1, 131072])
123727
+ batch tensor: loss_mask torch.Size([1, 131072])
123728
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123729
+ batch tensor: position_ids torch.Size([1, 131072])
123730
+ batch tensor after cp: tokens torch.Size([1, 16384])
123731
+ batch tensor after cp: labels torch.Size([1, 16384])
123732
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123733
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123734
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123735
+ batch tensor: tokens torch.Size([1, 131072])
123736
+ batch tensor: labels torch.Size([1, 131072])
123737
+ batch tensor: loss_mask torch.Size([1, 131072])
123738
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123739
+ batch tensor: position_ids torch.Size([1, 131072])
123740
+ batch tensor after cp: tokens torch.Size([1, 16384])
123741
+ batch tensor after cp: labels torch.Size([1, 16384])
123742
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123743
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123744
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123745
+ batch tensor: tokens torch.Size([1, 131072])
123746
+ batch tensor: labels torch.Size([1, 131072])
123747
+ batch tensor: loss_mask torch.Size([1, 131072])
123748
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123749
+ batch tensor: position_ids torch.Size([1, 131072])
123750
+ batch tensor after cp: tokens torch.Size([1, 16384])
123751
+ batch tensor after cp: labels torch.Size([1, 16384])
123752
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123753
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123754
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123755
+ batch tensor: tokens torch.Size([1, 131072])
123756
+ batch tensor: labels torch.Size([1, 131072])
123757
+ batch tensor: loss_mask torch.Size([1, 131072])
123758
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123759
+ batch tensor: position_ids torch.Size([1, 131072])
123760
+ batch tensor after cp: tokens torch.Size([1, 16384])
123761
+ batch tensor after cp: labels torch.Size([1, 16384])
123762
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123763
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123764
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123765
+ batch tensor: tokens torch.Size([1, 131072])
123766
+ batch tensor: labels torch.Size([1, 131072])
123767
+ batch tensor: loss_mask torch.Size([1, 131072])
123768
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123769
+ batch tensor: position_ids torch.Size([1, 131072])
123770
+ batch tensor after cp: tokens torch.Size([1, 16384])
123771
+ batch tensor after cp: labels torch.Size([1, 16384])
123772
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123773
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123774
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123775
+ batch tensor: tokens torch.Size([1, 131072])
123776
+ batch tensor: labels torch.Size([1, 131072])
123777
+ batch tensor: loss_mask torch.Size([1, 131072])
123778
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123779
+ batch tensor: position_ids torch.Size([1, 131072])
123780
+ batch tensor after cp: tokens torch.Size([1, 16384])
123781
+ batch tensor after cp: labels torch.Size([1, 16384])
123782
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123783
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123784
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123785
+ batch tensor: tokens torch.Size([1, 131072])
123786
+ batch tensor: labels torch.Size([1, 131072])
123787
+ batch tensor: loss_mask torch.Size([1, 131072])
123788
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123789
+ batch tensor: position_ids torch.Size([1, 131072])
123790
+ batch tensor after cp: tokens torch.Size([1, 16384])
123791
+ batch tensor after cp: labels torch.Size([1, 16384])
123792
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123793
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123794
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123795
+ batch tensor: tokens torch.Size([1, 131072])
123796
+ batch tensor: labels torch.Size([1, 131072])
123797
+ batch tensor: loss_mask torch.Size([1, 131072])
123798
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123799
+ batch tensor: position_ids torch.Size([1, 131072])
123800
+ batch tensor after cp: tokens torch.Size([1, 16384])
123801
+ batch tensor after cp: labels torch.Size([1, 16384])
123802
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123803
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123804
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123805
+ batch tensor: tokens torch.Size([1, 131072])
123806
+ batch tensor: labels torch.Size([1, 131072])
123807
+ batch tensor: loss_mask torch.Size([1, 131072])
123808
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123809
+ batch tensor: position_ids torch.Size([1, 131072])
123810
+ batch tensor after cp: tokens torch.Size([1, 16384])
123811
+ batch tensor after cp: labels torch.Size([1, 16384])
123812
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123813
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123814
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123815
+ batch tensor: tokens torch.Size([1, 131072])
123816
+ batch tensor: labels torch.Size([1, 131072])
123817
+ batch tensor: loss_mask torch.Size([1, 131072])
123818
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123819
+ batch tensor: position_ids torch.Size([1, 131072])
123820
+ batch tensor after cp: tokens torch.Size([1, 16384])
123821
+ batch tensor after cp: labels torch.Size([1, 16384])
123822
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123823
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123824
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123825
+ batch tensor: tokens torch.Size([1, 131072])
123826
+ batch tensor: labels torch.Size([1, 131072])
123827
+ batch tensor: loss_mask torch.Size([1, 131072])
123828
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123829
+ batch tensor: position_ids torch.Size([1, 131072])
123830
+ batch tensor after cp: tokens torch.Size([1, 16384])
123831
+ batch tensor after cp: labels torch.Size([1, 16384])
123832
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123833
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123834
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123835
+ batch tensor: tokens torch.Size([1, 131072])
123836
+ batch tensor: labels torch.Size([1, 131072])
123837
+ batch tensor: loss_mask torch.Size([1, 131072])
123838
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123839
+ batch tensor: position_ids torch.Size([1, 131072])
123840
+ batch tensor after cp: tokens torch.Size([1, 16384])
123841
+ batch tensor after cp: labels torch.Size([1, 16384])
123842
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123843
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123844
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123845
+ batch tensor: tokens torch.Size([1, 131072])
123846
+ batch tensor: labels torch.Size([1, 131072])
123847
+ batch tensor: loss_mask torch.Size([1, 131072])
123848
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123849
+ batch tensor: position_ids torch.Size([1, 131072])
123850
+ batch tensor after cp: tokens torch.Size([1, 16384])
123851
+ batch tensor after cp: labels torch.Size([1, 16384])
123852
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123853
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123854
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123855
+ batch tensor: tokens torch.Size([1, 131072])
123856
+ batch tensor: labels torch.Size([1, 131072])
123857
+ batch tensor: loss_mask torch.Size([1, 131072])
123858
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123859
+ batch tensor: position_ids torch.Size([1, 131072])
123860
+ batch tensor after cp: tokens torch.Size([1, 16384])
123861
+ batch tensor after cp: labels torch.Size([1, 16384])
123862
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123863
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123864
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123865
+ batch tensor: tokens torch.Size([1, 131072])
123866
+ batch tensor: labels torch.Size([1, 131072])
123867
+ batch tensor: loss_mask torch.Size([1, 131072])
123868
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123869
+ batch tensor: position_ids torch.Size([1, 131072])
123870
+ batch tensor after cp: tokens torch.Size([1, 16384])
123871
+ batch tensor after cp: labels torch.Size([1, 16384])
123872
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123873
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123874
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123875
+ batch tensor: tokens torch.Size([1, 131072])
123876
+ batch tensor: labels torch.Size([1, 131072])
123877
+ batch tensor: loss_mask torch.Size([1, 131072])
123878
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123879
+ batch tensor: position_ids torch.Size([1, 131072])
123880
+ batch tensor after cp: tokens torch.Size([1, 16384])
123881
+ batch tensor after cp: labels torch.Size([1, 16384])
123882
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123883
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123884
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123885
+ batch tensor: tokens torch.Size([1, 131072])
123886
+ batch tensor: labels torch.Size([1, 131072])
123887
+ batch tensor: loss_mask torch.Size([1, 131072])
123888
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123889
+ batch tensor: position_ids torch.Size([1, 131072])
123890
+ batch tensor after cp: tokens torch.Size([1, 16384])
123891
+ batch tensor after cp: labels torch.Size([1, 16384])
123892
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123893
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123894
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123895
+ batch tensor: tokens torch.Size([1, 131072])
123896
+ batch tensor: labels torch.Size([1, 131072])
123897
+ batch tensor: loss_mask torch.Size([1, 131072])
123898
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123899
+ batch tensor: position_ids torch.Size([1, 131072])
123900
+ batch tensor after cp: tokens torch.Size([1, 16384])
123901
+ batch tensor after cp: labels torch.Size([1, 16384])
123902
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123903
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123904
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123905
+ batch tensor: tokens torch.Size([1, 131072])
123906
+ batch tensor: labels torch.Size([1, 131072])
123907
+ batch tensor: loss_mask torch.Size([1, 131072])
123908
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123909
+ batch tensor: position_ids torch.Size([1, 131072])
123910
+ batch tensor after cp: tokens torch.Size([1, 16384])
123911
+ batch tensor after cp: labels torch.Size([1, 16384])
123912
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123913
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123914
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123915
+ batch tensor: tokens torch.Size([1, 131072])
123916
+ batch tensor: labels torch.Size([1, 131072])
123917
+ batch tensor: loss_mask torch.Size([1, 131072])
123918
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123919
+ batch tensor: position_ids torch.Size([1, 131072])
123920
+ batch tensor after cp: tokens torch.Size([1, 16384])
123921
+ batch tensor after cp: labels torch.Size([1, 16384])
123922
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123923
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123924
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123925
+ batch tensor: tokens torch.Size([1, 131072])
123926
+ batch tensor: labels torch.Size([1, 131072])
123927
+ batch tensor: loss_mask torch.Size([1, 131072])
123928
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123929
+ batch tensor: position_ids torch.Size([1, 131072])
123930
+ batch tensor after cp: tokens torch.Size([1, 16384])
123931
+ batch tensor after cp: labels torch.Size([1, 16384])
123932
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123933
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123934
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123935
+ batch tensor: tokens torch.Size([1, 131072])
123936
+ batch tensor: labels torch.Size([1, 131072])
123937
+ batch tensor: loss_mask torch.Size([1, 131072])
123938
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123939
+ batch tensor: position_ids torch.Size([1, 131072])
123940
+ batch tensor after cp: tokens torch.Size([1, 16384])
123941
+ batch tensor after cp: labels torch.Size([1, 16384])
123942
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123943
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123944
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123945
+ batch tensor: tokens torch.Size([1, 131072])
123946
+ batch tensor: labels torch.Size([1, 131072])
123947
+ batch tensor: loss_mask torch.Size([1, 131072])
123948
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123949
+ batch tensor: position_ids torch.Size([1, 131072])
123950
+ batch tensor after cp: tokens torch.Size([1, 16384])
123951
+ batch tensor after cp: labels torch.Size([1, 16384])
123952
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123953
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123954
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123955
+ batch tensor: tokens torch.Size([1, 131072])
123956
+ batch tensor: labels torch.Size([1, 131072])
123957
+ batch tensor: loss_mask torch.Size([1, 131072])
123958
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123959
+ batch tensor: position_ids torch.Size([1, 131072])
123960
+ batch tensor after cp: tokens torch.Size([1, 16384])
123961
+ batch tensor after cp: labels torch.Size([1, 16384])
123962
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123963
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123964
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123965
+ batch tensor: tokens torch.Size([1, 131072])
123966
+ batch tensor: labels torch.Size([1, 131072])
123967
+ batch tensor: loss_mask torch.Size([1, 131072])
123968
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123969
+ batch tensor: position_ids torch.Size([1, 131072])
123970
+ batch tensor after cp: tokens torch.Size([1, 16384])
123971
+ batch tensor after cp: labels torch.Size([1, 16384])
123972
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123973
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123974
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123975
+ batch tensor: tokens torch.Size([1, 131072])
123976
+ batch tensor: labels torch.Size([1, 131072])
123977
+ batch tensor: loss_mask torch.Size([1, 131072])
123978
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123979
+ batch tensor: position_ids torch.Size([1, 131072])
123980
+ batch tensor after cp: tokens torch.Size([1, 16384])
123981
+ batch tensor after cp: labels torch.Size([1, 16384])
123982
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123983
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123984
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123985
+ batch tensor: tokens torch.Size([1, 131072])
123986
+ batch tensor: labels torch.Size([1, 131072])
123987
+ batch tensor: loss_mask torch.Size([1, 131072])
123988
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123989
+ batch tensor: position_ids torch.Size([1, 131072])
123990
+ batch tensor after cp: tokens torch.Size([1, 16384])
123991
+ batch tensor after cp: labels torch.Size([1, 16384])
123992
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
123993
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
123994
+ batch tensor after cp: position_ids torch.Size([1, 16384])
123995
+ batch tensor: tokens torch.Size([1, 131072])
123996
+ batch tensor: labels torch.Size([1, 131072])
123997
+ batch tensor: loss_mask torch.Size([1, 131072])
123998
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
123999
+ batch tensor: position_ids torch.Size([1, 131072])
124000
+ batch tensor after cp: tokens torch.Size([1, 16384])
124001
+ batch tensor after cp: labels torch.Size([1, 16384])
124002
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124003
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124004
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124005
+ batch tensor: tokens torch.Size([1, 131072])
124006
+ batch tensor: labels torch.Size([1, 131072])
124007
+ batch tensor: loss_mask torch.Size([1, 131072])
124008
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124009
+ batch tensor: position_ids torch.Size([1, 131072])
124010
+ batch tensor after cp: tokens torch.Size([1, 16384])
124011
+ batch tensor after cp: labels torch.Size([1, 16384])
124012
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124013
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124014
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124015
+ batch tensor: tokens torch.Size([1, 131072])
124016
+ batch tensor: labels torch.Size([1, 131072])
124017
+ batch tensor: loss_mask torch.Size([1, 131072])
124018
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124019
+ batch tensor: position_ids torch.Size([1, 131072])
124020
+ batch tensor after cp: tokens torch.Size([1, 16384])
124021
+ batch tensor after cp: labels torch.Size([1, 16384])
124022
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124023
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124024
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124025
+ batch tensor: tokens torch.Size([1, 131072])
124026
+ batch tensor: labels torch.Size([1, 131072])
124027
+ batch tensor: loss_mask torch.Size([1, 131072])
124028
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124029
+ batch tensor: position_ids torch.Size([1, 131072])
124030
+ batch tensor after cp: tokens torch.Size([1, 16384])
124031
+ batch tensor after cp: labels torch.Size([1, 16384])
124032
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124033
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124034
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124035
+ batch tensor: tokens torch.Size([1, 131072])
124036
+ batch tensor: labels torch.Size([1, 131072])
124037
+ batch tensor: loss_mask torch.Size([1, 131072])
124038
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124039
+ batch tensor: position_ids torch.Size([1, 131072])
124040
+ batch tensor after cp: tokens torch.Size([1, 16384])
124041
+ batch tensor after cp: labels torch.Size([1, 16384])
124042
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124043
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124044
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124045
+ batch tensor: tokens torch.Size([1, 131072])
124046
+ batch tensor: labels torch.Size([1, 131072])
124047
+ batch tensor: loss_mask torch.Size([1, 131072])
124048
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124049
+ batch tensor: position_ids torch.Size([1, 131072])
124050
+ batch tensor after cp: tokens torch.Size([1, 16384])
124051
+ batch tensor after cp: labels torch.Size([1, 16384])
124052
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124053
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124054
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124055
+ batch tensor: tokens torch.Size([1, 131072])
124056
+ batch tensor: labels torch.Size([1, 131072])
124057
+ batch tensor: loss_mask torch.Size([1, 131072])
124058
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124059
+ batch tensor: position_ids torch.Size([1, 131072])
124060
+ batch tensor after cp: tokens torch.Size([1, 16384])
124061
+ batch tensor after cp: labels torch.Size([1, 16384])
124062
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124063
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124064
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124065
+ batch tensor: tokens torch.Size([1, 131072])
124066
+ batch tensor: labels torch.Size([1, 131072])
124067
+ batch tensor: loss_mask torch.Size([1, 131072])
124068
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124069
+ batch tensor: position_ids torch.Size([1, 131072])
124070
+ batch tensor after cp: tokens torch.Size([1, 16384])
124071
+ batch tensor after cp: labels torch.Size([1, 16384])
124072
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124073
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124074
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124075
+ batch tensor: tokens torch.Size([1, 131072])
124076
+ batch tensor: labels torch.Size([1, 131072])
124077
+ batch tensor: loss_mask torch.Size([1, 131072])
124078
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124079
+ batch tensor: position_ids torch.Size([1, 131072])
124080
+ batch tensor after cp: tokens torch.Size([1, 16384])
124081
+ batch tensor after cp: labels torch.Size([1, 16384])
124082
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124083
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124084
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124085
+ batch tensor: tokens torch.Size([1, 131072])
124086
+ batch tensor: labels torch.Size([1, 131072])
124087
+ batch tensor: loss_mask torch.Size([1, 131072])
124088
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124089
+ batch tensor: position_ids torch.Size([1, 131072])
124090
+ batch tensor after cp: tokens torch.Size([1, 16384])
124091
+ batch tensor after cp: labels torch.Size([1, 16384])
124092
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124093
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124094
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124095
+ batch tensor: tokens torch.Size([1, 131072])
124096
+ batch tensor: labels torch.Size([1, 131072])
124097
+ batch tensor: loss_mask torch.Size([1, 131072])
124098
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124099
+ batch tensor: position_ids torch.Size([1, 131072])
124100
+ batch tensor after cp: tokens torch.Size([1, 16384])
124101
+ batch tensor after cp: labels torch.Size([1, 16384])
124102
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124103
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124104
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124105
+ batch tensor: tokens torch.Size([1, 131072])
124106
+ batch tensor: labels torch.Size([1, 131072])
124107
+ batch tensor: loss_mask torch.Size([1, 131072])
124108
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124109
+ batch tensor: position_ids torch.Size([1, 131072])
124110
+ batch tensor after cp: tokens torch.Size([1, 16384])
124111
+ batch tensor after cp: labels torch.Size([1, 16384])
124112
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124113
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124114
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124115
+ batch tensor: tokens torch.Size([1, 131072])
124116
+ batch tensor: labels torch.Size([1, 131072])
124117
+ batch tensor: loss_mask torch.Size([1, 131072])
124118
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124119
+ batch tensor: position_ids torch.Size([1, 131072])
124120
+ batch tensor after cp: tokens torch.Size([1, 16384])
124121
+ batch tensor after cp: labels torch.Size([1, 16384])
124122
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124123
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124124
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124125
+ batch tensor: tokens torch.Size([1, 131072])
124126
+ batch tensor: labels torch.Size([1, 131072])
124127
+ batch tensor: loss_mask torch.Size([1, 131072])
124128
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124129
+ batch tensor: position_ids torch.Size([1, 131072])
124130
+ batch tensor after cp: tokens torch.Size([1, 16384])
124131
+ batch tensor after cp: labels torch.Size([1, 16384])
124132
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124133
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124134
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124135
+ batch tensor: tokens torch.Size([1, 131072])
124136
+ batch tensor: labels torch.Size([1, 131072])
124137
+ batch tensor: loss_mask torch.Size([1, 131072])
124138
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124139
+ batch tensor: position_ids torch.Size([1, 131072])
124140
+ batch tensor after cp: tokens torch.Size([1, 16384])
124141
+ batch tensor after cp: labels torch.Size([1, 16384])
124142
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124143
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124144
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124145
+ batch tensor: tokens torch.Size([1, 131072])
124146
+ batch tensor: labels torch.Size([1, 131072])
124147
+ batch tensor: loss_mask torch.Size([1, 131072])
124148
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124149
+ batch tensor: position_ids torch.Size([1, 131072])
124150
+ batch tensor after cp: tokens torch.Size([1, 16384])
124151
+ batch tensor after cp: labels torch.Size([1, 16384])
124152
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124153
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124154
+ batch tensor after cp: position_ids torch.Size([1, 16384])
124155
+ batch tensor: tokens torch.Size([1, 131072])
124156
+ batch tensor: labels torch.Size([1, 131072])
124157
+ batch tensor: loss_mask torch.Size([1, 131072])
124158
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
124159
+ batch tensor: position_ids torch.Size([1, 131072])
124160
+ batch tensor after cp: tokens torch.Size([1, 16384])
124161
+ batch tensor after cp: labels torch.Size([1, 16384])
124162
+ batch tensor after cp: loss_mask torch.Size([1, 16384])
124163
+ batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
124164
+ batch tensor after cp: position_ids torch.Size([1, 16384])
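Note: the shapes appended above are consistent with a context-parallel split of each batch: per-token fields shrink from the full 131072-token sequence to a 16384-token shard per rank (131072 / 16384 = 8, i.e. context-parallel size 8; the second log below shows 32768-token shards, i.e. size 4), while attention_mask keeps its full 131072-column key dimension. A minimal sketch of such a split, using a hypothetical helper and assuming contiguous sharding (not the script's actual code):

```python
import torch

def split_for_context_parallel(batch: dict, cp_rank: int, cp_size: int) -> dict:
    # Hypothetical sketch: shard every per-token field along the sequence
    # dimension and shard only the query dimension of attention_mask,
    # which reproduces the shapes printed in the log
    # (tokens [1, 131072] -> [1, 16384] for cp_size=8,
    #  attention_mask [1, 1, 131072, 131072] -> [1, 1, 16384, 131072]).
    out = {}
    for name, t in batch.items():
        seq_len = t.shape[-2] if name == "attention_mask" else t.shape[-1]
        shard = seq_len // cp_size
        start, end = cp_rank * shard, (cp_rank + 1) * shard
        if name == "attention_mask":
            out[name] = t[..., start:end, :]  # queries sharded, keys kept full
        else:
            out[name] = t[..., start:end]
    return out
```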
attnserver.run_attnserver.slurm.sh.343195.out.log CHANGED
@@ -67117,3 +67117,293 @@ batch tensor after cp: position_ids torch.Size([1, 32768])
67117
  Start exporting trace 4
67118
  Done exporting trace 4
67119
  [2025-06-21 21:13:17] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 158672.1 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
67120
+ batch tensor: tokens torch.Size([1, 131072])
67121
+ batch tensor: labels torch.Size([1, 131072])
67122
+ batch tensor: loss_mask torch.Size([1, 131072])
67123
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67124
+ batch tensor: position_ids torch.Size([1, 131072])
67125
+ batch tensor after cp: tokens torch.Size([1, 32768])
67126
+ batch tensor after cp: labels torch.Size([1, 32768])
67127
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67128
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67129
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67130
+ batch tensor: tokens torch.Size([1, 131072])
67131
+ batch tensor: labels torch.Size([1, 131072])
67132
+ batch tensor: loss_mask torch.Size([1, 131072])
67133
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67134
+ batch tensor: position_ids torch.Size([1, 131072])
67135
+ batch tensor after cp: tokens torch.Size([1, 32768])
67136
+ batch tensor after cp: labels torch.Size([1, 32768])
67137
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67138
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67139
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67140
+ batch tensor: tokens torch.Size([1, 131072])
67141
+ batch tensor: labels torch.Size([1, 131072])
67142
+ batch tensor: loss_mask torch.Size([1, 131072])
67143
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67144
+ batch tensor: position_ids torch.Size([1, 131072])
67145
+ batch tensor after cp: tokens torch.Size([1, 32768])
67146
+ batch tensor after cp: labels torch.Size([1, 32768])
67147
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67148
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67149
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67150
+ batch tensor: tokens torch.Size([1, 131072])
67151
+ batch tensor: labels torch.Size([1, 131072])
67152
+ batch tensor: loss_mask torch.Size([1, 131072])
67153
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67154
+ batch tensor: position_ids torch.Size([1, 131072])
67155
+ batch tensor after cp: tokens torch.Size([1, 32768])
67156
+ batch tensor after cp: labels torch.Size([1, 32768])
67157
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67158
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67159
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67160
+ batch tensor: tokens torch.Size([1, 131072])
67161
+ batch tensor: labels torch.Size([1, 131072])
67162
+ batch tensor: loss_mask torch.Size([1, 131072])
67163
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67164
+ batch tensor: position_ids torch.Size([1, 131072])
67165
+ batch tensor after cp: tokens torch.Size([1, 32768])
67166
+ batch tensor after cp: labels torch.Size([1, 32768])
67167
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67168
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67169
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67170
+ batch tensor: tokens torch.Size([1, 131072])
67171
+ batch tensor: labels torch.Size([1, 131072])
67172
+ batch tensor: loss_mask torch.Size([1, 131072])
67173
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67174
+ batch tensor: position_ids torch.Size([1, 131072])
67175
+ batch tensor after cp: tokens torch.Size([1, 32768])
67176
+ batch tensor after cp: labels torch.Size([1, 32768])
67177
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67178
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67179
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67180
+ batch tensor: tokens torch.Size([1, 131072])
67181
+ batch tensor: labels torch.Size([1, 131072])
67182
+ batch tensor: loss_mask torch.Size([1, 131072])
67183
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67184
+ batch tensor: position_ids torch.Size([1, 131072])
67185
+ batch tensor after cp: tokens torch.Size([1, 32768])
67186
+ batch tensor after cp: labels torch.Size([1, 32768])
67187
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67188
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67189
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67190
+ batch tensor: tokens torch.Size([1, 131072])
67191
+ batch tensor: labels torch.Size([1, 131072])
67192
+ batch tensor: loss_mask torch.Size([1, 131072])
67193
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67194
+ batch tensor: position_ids torch.Size([1, 131072])
67195
+ batch tensor after cp: tokens torch.Size([1, 32768])
67196
+ batch tensor after cp: labels torch.Size([1, 32768])
67197
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67198
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67199
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67200
+ batch tensor: tokens torch.Size([1, 131072])
67201
+ batch tensor: labels torch.Size([1, 131072])
67202
+ batch tensor: loss_mask torch.Size([1, 131072])
67203
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67204
+ batch tensor: position_ids torch.Size([1, 131072])
67205
+ batch tensor after cp: tokens torch.Size([1, 32768])
67206
+ batch tensor after cp: labels torch.Size([1, 32768])
67207
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67208
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67209
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67210
+ batch tensor: tokens torch.Size([1, 131072])
67211
+ batch tensor: labels torch.Size([1, 131072])
67212
+ batch tensor: loss_mask torch.Size([1, 131072])
67213
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67214
+ batch tensor: position_ids torch.Size([1, 131072])
67215
+ batch tensor after cp: tokens torch.Size([1, 32768])
67216
+ batch tensor after cp: labels torch.Size([1, 32768])
67217
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67218
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67219
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67220
+ batch tensor: tokens torch.Size([1, 131072])
67221
+ batch tensor: labels torch.Size([1, 131072])
67222
+ batch tensor: loss_mask torch.Size([1, 131072])
67223
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67224
+ batch tensor: position_ids torch.Size([1, 131072])
67225
+ batch tensor after cp: tokens torch.Size([1, 32768])
67226
+ batch tensor after cp: labels torch.Size([1, 32768])
67227
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67228
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67229
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67230
+ batch tensor: tokens torch.Size([1, 131072])
67231
+ batch tensor: labels torch.Size([1, 131072])
67232
+ batch tensor: loss_mask torch.Size([1, 131072])
67233
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67234
+ batch tensor: position_ids torch.Size([1, 131072])
67235
+ batch tensor after cp: tokens torch.Size([1, 32768])
67236
+ batch tensor after cp: labels torch.Size([1, 32768])
67237
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67238
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67239
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67240
+ batch tensor: tokens torch.Size([1, 131072])
67241
+ batch tensor: labels torch.Size([1, 131072])
67242
+ batch tensor: loss_mask torch.Size([1, 131072])
67243
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67244
+ batch tensor: position_ids torch.Size([1, 131072])
67245
+ batch tensor after cp: tokens torch.Size([1, 32768])
67246
+ batch tensor after cp: labels torch.Size([1, 32768])
67247
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67248
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67249
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67250
+ batch tensor: tokens torch.Size([1, 131072])
67251
+ batch tensor: labels torch.Size([1, 131072])
67252
+ batch tensor: loss_mask torch.Size([1, 131072])
67253
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67254
+ batch tensor: position_ids torch.Size([1, 131072])
67255
+ batch tensor after cp: tokens torch.Size([1, 32768])
67256
+ batch tensor after cp: labels torch.Size([1, 32768])
67257
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67258
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67259
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67260
+ batch tensor: tokens torch.Size([1, 131072])
67261
+ batch tensor: labels torch.Size([1, 131072])
67262
+ batch tensor: loss_mask torch.Size([1, 131072])
67263
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67264
+ batch tensor: position_ids torch.Size([1, 131072])
67265
+ batch tensor after cp: tokens torch.Size([1, 32768])
67266
+ batch tensor after cp: labels torch.Size([1, 32768])
67267
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67268
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67269
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67270
+ batch tensor: tokens torch.Size([1, 131072])
67271
+ batch tensor: labels torch.Size([1, 131072])
67272
+ batch tensor: loss_mask torch.Size([1, 131072])
67273
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67274
+ batch tensor: position_ids torch.Size([1, 131072])
67275
+ batch tensor after cp: tokens torch.Size([1, 32768])
67276
+ batch tensor after cp: labels torch.Size([1, 32768])
67277
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67278
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67279
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67280
+ batch tensor: tokens torch.Size([1, 131072])
67281
+ batch tensor: labels torch.Size([1, 131072])
67282
+ batch tensor: loss_mask torch.Size([1, 131072])
67283
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67284
+ batch tensor: position_ids torch.Size([1, 131072])
67285
+ batch tensor after cp: tokens torch.Size([1, 32768])
67286
+ batch tensor after cp: labels torch.Size([1, 32768])
67287
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67288
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67289
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67290
+ batch tensor: tokens torch.Size([1, 131072])
67291
+ batch tensor: labels torch.Size([1, 131072])
67292
+ batch tensor: loss_mask torch.Size([1, 131072])
67293
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67294
+ batch tensor: position_ids torch.Size([1, 131072])
67295
+ batch tensor after cp: tokens torch.Size([1, 32768])
67296
+ batch tensor after cp: labels torch.Size([1, 32768])
67297
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67298
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67299
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67300
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
+ batch tensor: tokens torch.Size([1, 131072])
+ batch tensor: labels torch.Size([1, 131072])
+ batch tensor: loss_mask torch.Size([1, 131072])
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
+ batch tensor: position_ids torch.Size([1, 131072])
+ batch tensor after cp: tokens torch.Size([1, 32768])
+ batch tensor after cp: labels torch.Size([1, 32768])
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67330
+ batch tensor: tokens torch.Size([1, 131072])
67331
+ batch tensor: labels torch.Size([1, 131072])
67332
+ batch tensor: loss_mask torch.Size([1, 131072])
67333
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67334
+ batch tensor: position_ids torch.Size([1, 131072])
67335
+ batch tensor after cp: tokens torch.Size([1, 32768])
67336
+ batch tensor after cp: labels torch.Size([1, 32768])
67337
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67338
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67339
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67340
+ batch tensor: tokens torch.Size([1, 131072])
67341
+ batch tensor: labels torch.Size([1, 131072])
67342
+ batch tensor: loss_mask torch.Size([1, 131072])
67343
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67344
+ batch tensor: position_ids torch.Size([1, 131072])
67345
+ batch tensor after cp: tokens torch.Size([1, 32768])
67346
+ batch tensor after cp: labels torch.Size([1, 32768])
67347
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67348
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67349
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67350
+ batch tensor: tokens torch.Size([1, 131072])
67351
+ batch tensor: labels torch.Size([1, 131072])
67352
+ batch tensor: loss_mask torch.Size([1, 131072])
67353
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67354
+ batch tensor: position_ids torch.Size([1, 131072])
67355
+ batch tensor after cp: tokens torch.Size([1, 32768])
67356
+ batch tensor after cp: labels torch.Size([1, 32768])
67357
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67358
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67359
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67360
+ batch tensor: tokens torch.Size([1, 131072])
67361
+ batch tensor: labels torch.Size([1, 131072])
67362
+ batch tensor: loss_mask torch.Size([1, 131072])
67363
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67364
+ batch tensor: position_ids torch.Size([1, 131072])
67365
+ batch tensor after cp: tokens torch.Size([1, 32768])
67366
+ batch tensor after cp: labels torch.Size([1, 32768])
67367
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67368
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67369
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67370
+ batch tensor: tokens torch.Size([1, 131072])
67371
+ batch tensor: labels torch.Size([1, 131072])
67372
+ batch tensor: loss_mask torch.Size([1, 131072])
67373
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67374
+ batch tensor: position_ids torch.Size([1, 131072])
67375
+ batch tensor after cp: tokens torch.Size([1, 32768])
67376
+ batch tensor after cp: labels torch.Size([1, 32768])
67377
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67378
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67379
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67380
+ batch tensor: tokens torch.Size([1, 131072])
67381
+ batch tensor: labels torch.Size([1, 131072])
67382
+ batch tensor: loss_mask torch.Size([1, 131072])
67383
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67384
+ batch tensor: position_ids torch.Size([1, 131072])
67385
+ batch tensor after cp: tokens torch.Size([1, 32768])
67386
+ batch tensor after cp: labels torch.Size([1, 32768])
67387
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67388
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67389
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67390
+ batch tensor: tokens torch.Size([1, 131072])
67391
+ batch tensor: labels torch.Size([1, 131072])
67392
+ batch tensor: loss_mask torch.Size([1, 131072])
67393
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67394
+ batch tensor: position_ids torch.Size([1, 131072])
67395
+ batch tensor after cp: tokens torch.Size([1, 32768])
67396
+ batch tensor after cp: labels torch.Size([1, 32768])
67397
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67398
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67399
+ batch tensor after cp: position_ids torch.Size([1, 32768])
67400
+ batch tensor: tokens torch.Size([1, 131072])
67401
+ batch tensor: labels torch.Size([1, 131072])
67402
+ batch tensor: loss_mask torch.Size([1, 131072])
67403
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
67404
+ batch tensor: position_ids torch.Size([1, 131072])
67405
+ batch tensor after cp: tokens torch.Size([1, 32768])
67406
+ batch tensor after cp: labels torch.Size([1, 32768])
67407
+ batch tensor after cp: loss_mask torch.Size([1, 32768])
67408
+ batch tensor after cp: attention_mask torch.Size([1, 1, 32768, 131072])
67409
+ batch tensor after cp: position_ids torch.Size([1, 32768])
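The "after cp" shapes above are what a context-parallel split produces: each of the cp_size = 131072 / 32768 = 4 ranks keeps a 32768-token slice of the sequence, while the attention mask keeps the full 131072-token key dimension. The sketch below shows that kind of slicing with plain contiguous chunks; it is illustrative only (Megatron's real context-parallel split is load-balanced across 2 * cp_size chunks, and the function and dict keys here are stand-ins, not the project's code).

import torch

def split_for_context_parallel(batch, cp_rank, cp_size):
    """Keep this rank's contiguous slice of the sequence dimension (illustrative)."""
    seq_len = batch["tokens"].size(1)          # 131072 in the log above
    chunk = seq_len // cp_size                 # 32768 when cp_size == 4
    lo, hi = cp_rank * chunk, (cp_rank + 1) * chunk
    out = dict(batch)
    for key in ("tokens", "labels", "loss_mask", "position_ids"):
        out[key] = batch[key][:, lo:hi]        # [1, 131072] -> [1, 32768]
    # query rows are sliced, key length stays full:
    # [1, 1, 131072, 131072] -> [1, 1, 32768, 131072]
    out["attention_mask"] = batch["attention_mask"][:, :, lo:hi, :]
    return out

# tiny smoke test with seq_len=8 instead of 131072 so it fits anywhere
example = {
    "tokens": torch.arange(8).unsqueeze(0),
    "labels": torch.arange(8).unsqueeze(0),
    "loss_mask": torch.ones(1, 8),
    "position_ids": torch.arange(8).unsqueeze(0),
    "attention_mask": torch.ones(1, 1, 8, 8, dtype=torch.bool),
}
print({k: v.shape for k, v in split_for_context_parallel(example, cp_rank=1, cp_size=4).items()})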
attnserver.run_attnserver.slurm.sh.343196.err.log CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7b57691e143fa8e49e77bdf2990f89b57fb8b5d0dae348b9778a1a91e63cb072
- size 30593387
+ oid sha256:981e42ff9a48c2631e3df93380753e3ae9fc1cd27622449f48607ac2c0bc2ae5
+ size 30620791
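The entries above are Git LFS pointer files rather than the logs themselves: oid is the SHA-256 digest of the tracked file's contents and size is its length in bytes. A small Python sketch that recomputes those two fields for a local copy (the function name and chunk size are arbitrary):

import hashlib

def lfs_pointer_fields(path):
    """Return (sha256 hex digest, byte count) for a file, streamed in 1 MiB blocks."""
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
            size += len(block)
    return h.hexdigest(), size

# e.g. lfs_pointer_fields("attnserver.run_attnserver.slurm.sh.343196.err.log")
# would be expected to match the new pointer: ("981e42ff...", 30620791)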
attnserver.run_attnserver.slurm.sh.343196.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343198.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343198.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343199.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343199.out.log CHANGED
@@ -19772,3 +19772,294 @@ INFO:megatron.training.initialize:Setting logging level to 0
19772
  INFO:megatron.training.initialize:Setting logging level to 0
19773
  INFO:megatron.training.initialize:Setting logging level to 0
19774
  INFO:megatron.training.initialize:Setting logging level to 0
19775
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
19776
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
19777
+ INFO:megatron.training.initialize:Setting logging level to 0
19778
+ INFO:megatron.training.initialize:Setting logging level to 0
19779
+ INFO:megatron.training.initialize:Setting logging level to 0
19780
+ INFO:megatron.training.initialize:Setting logging level to 0
19781
+ INFO:megatron.training.initialize:Setting logging level to 0
19782
+ INFO:megatron.training.initialize:Setting logging level to 0
19783
+ > initialized tensor model parallel with size 8
19784
+ > initialized pipeline model parallel with size 1
19785
+ > setting random seeds to 1234 ...
19786
+ INFO:megatron.training.initialize:Setting logging level to 0
19787
+ > compiling dataset index builder ...
19788
+ INFO:megatron.training.initialize:Setting logging level to 0
19789
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
19790
+ make: Nothing to be done for 'default'.
19791
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
19792
+ >>> done with dataset index builder. Compilation time: 0.081 seconds
19793
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
19794
+ > compiling and loading fused kernels ...
19795
+ >>> done with compiling and loading fused kernels. Compilation time: 2.838 seconds
19796
+ time to initialize megatron (seconds): 10.241
19797
+ [after megatron is initialized] datetime: 2025-06-21 21:13:48
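The initialization lines above reflect this job's model-parallel layout: 8-way tensor parallelism, no pipeline parallelism, seed 1234. A minimal sketch of the corresponding megatron.core setup; keyword sets vary somewhat across Megatron-LM versions, so treat this as illustrative rather than the script's exact code:

import torch
from megatron.core import parallel_state

def init_parallel_state():
    # rank / world size come from the launcher (srun / torchrun) via env vars
    torch.distributed.init_process_group(backend="nccl")
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=8,    # "> initialized tensor model parallel with size 8"
        pipeline_model_parallel_size=1,  # "> initialized pipeline model parallel with size 1"
    )
    torch.manual_seed(1234)              # "> setting random seeds to 1234 ..."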
19798
+ building GPT model ...
19799
+ >>> embedding
19800
+ >>> decoder
19801
+ >>> output_layer
19802
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
19803
+ >>> embedding
19804
+ >>> decoder
19805
+ >>> output_layer
19806
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
19807
+ >>> embedding
19808
+ >>> decoder
19809
+ >>> output_layer
19810
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
19811
+ >>> embedding
19812
+ >>> decoder
19813
+ >>> output_layer
19814
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
19815
+ >>> embedding
19816
+ >>> decoder
19817
+ >>> output_layer
19818
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
19819
+ >>> embedding
19820
+ >>> decoder
19821
+ >>> output_layer
19822
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
19823
+ >>> embedding
19824
+ >>> decoder
19825
+ >>> output_layer
19826
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
19827
+ >>> embedding
19828
+ >>> decoder
19829
+ >>> output_layer
19830
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
19831
+ >>> embedding
19832
+ >>> decoder
19833
+ >>> output_layer
19834
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
19835
+ >>> embedding
19836
+ >>> decoder
19837
+ >>> output_layer
19838
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
19839
+ >>> embedding
19840
+ >>> decoder
19841
+ >>> output_layer
19842
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
19843
+ >>> embedding
19844
+ >>> decoder
19845
+ >>> output_layer
19846
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
19847
+ >>> embedding
19848
+ >>> decoder
19849
+ >>> output_layer
19850
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
19851
+ >>> embedding
19852
+ >>> decoder
19853
+ >>> output_layer
19854
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
19855
+ >>> embedding
19856
+ >>> decoder
19857
+ >>> output_layer
19858
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
19859
+ >>> embedding
19860
+ >>> decoder
19861
+ >>> output_layer
19862
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
19863
+ >>> embedding
19864
+ >>> decoder
19865
+ >>> output_layer
19866
+ >>> embedding
19867
+ >>> decoder
19868
+ >>> output_layer
19869
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
19870
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
19871
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
19872
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
19873
+ Params for bucket 1 (607188480 elements, 607188480 padded size):
19874
+ module.decoder.layers.1.mlp.linear_fc1.bias
19875
+ module.decoder.layers.0.mlp.linear_fc2.weight
19876
+ module.decoder.layers.0.mlp.linear_fc1.bias
19877
+ module.embedding.position_embeddings.weight
19878
+ module.embedding.word_embeddings.weight
19879
+ module.decoder.final_layernorm.weight
19880
+ module.decoder.layers.1.self_attention.linear_qkv.weight
19881
+ module.decoder.layers.1.self_attention.linear_proj.weight
19882
+ module.decoder.layers.0.self_attention.linear_qkv.bias
19883
+ module.decoder.layers.1.mlp.linear_fc2.weight
19884
+ module.decoder.layers.1.self_attention.linear_proj.bias
19885
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
19886
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
19887
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
19888
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
19889
+ module.decoder.layers.1.self_attention.linear_qkv.bias
19890
+ module.decoder.layers.0.mlp.linear_fc2.bias
19891
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
19892
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
19893
+ module.decoder.layers.1.mlp.linear_fc1.weight
19894
+ module.decoder.layers.0.mlp.linear_fc1.weight
19895
+ module.decoder.layers.1.mlp.linear_fc2.bias
19896
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
19897
+ module.decoder.layers.0.self_attention.linear_qkv.weight
19898
+ module.decoder.layers.0.self_attention.linear_proj.weight
19899
+ module.decoder.final_layernorm.bias
19900
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
19901
+ module.decoder.layers.0.self_attention.linear_proj.bias
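The parameter names above form the single 607188480-element gradient bucket that data-parallel training reduces with one collective per step (the DDP config uses no_shard and no distributed optimizer). A rough illustration of what one bucketed reduction amounts to, assuming a hypothetical flat gradient buffer and an already-initialized data-parallel group, not Megatron's actual buffer code:

import torch
import torch.distributed as dist

def allreduce_gradient_bucket(grad_buffer: torch.Tensor, data_parallel_group):
    """Average one flat gradient bucket across data-parallel ranks with a single all-reduce."""
    dist.all_reduce(grad_buffer, op=dist.ReduceOp.SUM, group=data_parallel_group)
    grad_buffer /= dist.get_world_size(group=data_parallel_group)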
19902
+ >>> embedding
19903
+ >>> decoder
19904
+ >>> output_layer
19905
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x1514d73bdca0>, config_logger_dir='')
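With fp16=True, the loss-scale fields in this config (initial_loss_scale=4294967296, i.e. 2**32, loss_scale_window=1000, hysteresis=2, min_loss_scale=1.0) drive dynamic loss scaling. A compact sketch of that update rule, as an illustration of what the fields mean rather than the exact Megatron implementation:

class DynamicLossScaler:
    """Halve the scale after `hysteresis` consecutive overflow steps,
    double it after `window` consecutive clean steps."""

    def __init__(self, initial_scale=2.0**32, window=1000, hysteresis=2, min_scale=1.0):
        self.scale = initial_scale
        self.window = window
        self.hysteresis = hysteresis
        self.min_scale = min_scale
        self._good_steps = 0
        self._bad_steps = 0

    def update(self, found_overflow: bool) -> float:
        if found_overflow:
            self._good_steps = 0
            self._bad_steps += 1
            if self._bad_steps >= self.hysteresis:
                self.scale = max(self.scale / 2.0, self.min_scale)
                self._bad_steps = 0
        else:
            self._bad_steps = 0
            self._good_steps += 1
            if self._good_steps >= self.window:
                self.scale *= 2.0
                self._good_steps = 0
        return self.scale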
19906
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
19907
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
19908
+ >>> embedding
19909
+ >>> decoder
19910
+ >>> output_layer
19911
+ >>> embedding
19912
+ >>> decoder
19913
+ >>> output_layer
19914
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
19915
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
19916
+ >>> embedding
19917
+ >>> decoder
19918
+ >>> output_layer
19919
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
19920
+ >>> embedding
19921
+ >>> decoder
19922
+ >>> output_layer
19923
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
19924
+ >>> embedding
19925
+ >>> decoder
19926
+ >>> output_layer
19927
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
19928
+ >>> embedding
19929
+ >>> decoder
19930
+ >>> output_layer
19931
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
19932
+ >>> embedding
19933
+ >>> decoder
19934
+ >>> output_layer
19935
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
19936
+ >>> embedding
19937
+ >>> decoder
19938
+ >>> output_layer
19939
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
19940
+ >>> embedding
19941
+ >>> decoder
19942
+ >>> output_layer
19943
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
19944
+ >>> embedding
19945
+ >>> decoder
19946
+ >>> output_layer
19947
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
19948
+ >>> embedding
19949
+ >>> decoder
19950
+ >>> output_layer
19951
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
19952
+ >>> embedding
19953
+ >>> decoder
19954
+ >>> output_layer
19955
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
19956
+ >>> embedding
19957
+ >>> decoder
19958
+ >>> output_layer
19959
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
19960
+ WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
19961
+ will not load any checkpoints and will start from random
19962
+ (min, max) time across ranks (ms):
19963
+ load-checkpoint ................................: (2.60, 3.98)
19964
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:13:56
19965
+ > building train, validation, and test datasets ...
19966
+ > datasets target sizes (minimum size):
19967
+ train: 10
19968
+ validation: 1
19969
+ test: 1
19970
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
19971
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
19972
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
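The split_matrix is just the cumulative form of the split = 1,1,1 weights: train/valid/test each get one third of the mock dataset, expressed as (start, end) fractions. A small sketch of that normalization:

def split_to_matrix(weights):
    """Convert split weights into cumulative (start, end) fractions."""
    total = float(sum(weights))
    bounds, start = [], 0.0
    for w in weights:
        end = start + w / total
        bounds.append((start, end))
        start = end
    return bounds

print(split_to_matrix([1, 1, 1]))
# -> [(0.0, 0.333...), (0.333..., 0.666...), (0.666..., 1.0)]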
19973
+ > building train, validation, and test datasets for GPT ...
19974
+ INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=131072, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x1514d6782690>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
19975
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
19976
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
19977
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
19978
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.007666 seconds
19979
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
19980
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
19981
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
19982
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
19983
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
19984
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001612 seconds
19985
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
19986
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
19987
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
19988
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
19989
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
19990
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001397 seconds
19991
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
19992
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
19993
+ > finished creating GPT datasets ...
19994
+ [after dataloaders are built] datetime: 2025-06-21 21:13:56
19995
+ done with setup ...
19996
+ training ...
19997
+ (min, max) time across ranks (ms):
19998
+ model-and-optimizer-setup ......................: (7280.81, 7310.26)
19999
+ train/valid/test-data-iterators-setup ..........: (21.80, 150.57)
20000
+ Setting rerun_state_machine.current_iteration to 0...
20001
+ [before the start of training step] datetime: 2025-06-21 21:13:56
20002
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20003
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20004
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20005
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20006
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20007
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20008
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20009
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20010
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20011
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20012
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20013
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20014
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20015
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20016
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20017
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20018
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20019
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20020
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20021
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20022
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20023
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20024
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20025
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20026
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20027
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20028
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20029
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20030
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20031
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20032
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20033
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20034
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20035
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20036
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20037
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20038
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20039
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20040
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20041
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20042
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20043
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20044
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20045
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20046
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20047
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20048
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20049
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20050
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20051
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiBWARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20052
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20053
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20054
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20055
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20056
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20057
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.16 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20058
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20059
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20060
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.17 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20061
+ is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20062
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20063
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.14 GiB is free. Including non-PyTorch memory, this process has 7.67 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
20064
+ WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
20065
+ ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.15 GiB is free. Including non-PyTorch memory, this process has 7.65 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
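Side note (editorial sketch, not output from the job above): the OOM originates in the torch.ones call that builds a dense attention mask in setup_batches. A dense [batch, 1, seq, seq] mask grows quadratically with sequence length; the sketch below assumes a float32 mask with batch 1 purely for illustration, so it does not reproduce the exact 65536.00 GiB request, which also depends on dtype and on how many masks setup_batches materializes at once.

```python
# Rough estimate of dense attention-mask memory, assuming a float32 mask of
# shape [batch, 1, seq_len, seq_len] as built by torch.ones in setup_batches.
def dense_mask_gib(batch: int, seq_len: int, bytes_per_elem: int = 4) -> float:
    return batch * seq_len * seq_len * bytes_per_elem / 1024**3

for s in (16_384, 65_536, 131_072):
    print(f"seq_len={s:>7}: {dense_mask_gib(1, s):8.1f} GiB")  # 1.0, 16.0, 64.0 GiB
```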
attnserver.run_attnserver.slurm.sh.343200.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343200.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343202.err.log CHANGED
@@ -6627,3 +6627,122 @@ W0621 21:10:00.022000 3918653 site-packages/torch/distributed/run.py:766] ******
6627
  [rank10]:[W621 21:13:34.257100442 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
6628
  [rank9]:[W621 21:13:35.336997844 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
6629
  [rank8]:[W621 21:13:35.509281143 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
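Side note (editorial sketch): the ProcessGroupNCCL warnings above can be avoided by tearing down the default process group before the script exits, roughly as below (hypothetical helper, not code from pretrain_gpt_profile.py):

```python
import torch.distributed as dist

def shutdown_distributed() -> None:
    # Explicitly destroy the default process group so NCCL resources are
    # released before interpreter exit (silences the warning above).
    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()
```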
6630
+ + set +x
6631
+ + set +x
6632
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
6633
+ + export PROF_CTX_LENGTH=65536
6634
+ + PROF_CTX_LENGTH=65536
6635
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp8.cp2.bs2.json'
6636
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L65536*tp8.cp2.bs2.json' ']'
6637
+ + echo 'Running ctx_length=65536, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=2'
6638
+ + srun bash ./attnserver.sh
6639
+ + which python3
6640
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343202 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-728:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
6641
+ + which python3
6642
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343202 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-728:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 65536 --max-position-embeddings 65536 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
6643
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
6644
+ and will be removed in future. Use torchrun.
6645
+ Note that --use-env is set by default in torchrun.
6646
+ If your script expects `--local-rank` argument to be set, please
6647
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
6648
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
6649
+ further instructions
6650
+
6651
+ main()
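Side note (editorial sketch): the FutureWarning above means the python3 -m torch.distributed.launch invocations further up can be replaced by torchrun with the same --nproc_per_node/--nnodes/--node_rank/--rdzv_* arguments, and the script should then read the local rank from the environment rather than from a --local-rank flag, roughly as follows:

```python
import os

# torchrun (and launch with --use-env, its default) exports LOCAL_RANK per
# process; read it from the environment instead of parsing --local-rank.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
```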
6652
+ W0621 21:13:47.330000 2522629 site-packages/torch/distributed/run.py:766]
6653
+ W0621 21:13:47.330000 2522629 site-packages/torch/distributed/run.py:766] *****************************************
6654
+ W0621 21:13:47.330000 2522629 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
6655
+ W0621 21:13:47.330000 2522629 site-packages/torch/distributed/run.py:766] *****************************************
6656
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
6657
+ and will be removed in future. Use torchrun.
6658
+ Note that --use-env is set by default in torchrun.
6659
+ If your script expects `--local-rank` argument to be set, please
6660
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
6661
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
6662
+ further instructions
6663
+
6664
+ main()
6665
+ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766]
6666
+ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766] *****************************************
6667
+ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
6668
+ W0621 21:13:47.545000 3922086 site-packages/torch/distributed/run.py:766] *****************************************
6669
+ [rank0]:[W621 21:14:10.917895880 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6670
+ [rank6]:[W621 21:14:10.969841440 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6671
+ [rank14]:[W621 21:14:10.504313725 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6672
+ [rank8]:[W621 21:14:10.524287366 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6673
+ [rank4]:[W621 21:14:10.004414749 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6674
+ [rank1]:[W621 21:14:10.004913310 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6675
+ [rank12]:[W621 21:14:10.539119097 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6676
+ [rank5]:[W621 21:14:10.006866612 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6677
+ [rank9]:[W621 21:14:10.540305615 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6678
+ [rank3]:[W621 21:14:10.009074874 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6679
+ [rank13]:[W621 21:14:10.542462813 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6680
+ [rank2]:[W621 21:14:10.010101574 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6681
+ [rank10]:[W621 21:14:10.543118008 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6682
+ [rank7]:[W621 21:14:10.011325301 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6683
+ [rank15]:[W621 21:14:10.544664712 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6684
+ [rank11]:[W621 21:14:10.546167454 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
6685
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6686
+ warnings.warn(
6687
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6688
+ warnings.warn(
6689
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6690
+ warnings.warn(
6691
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6692
+ warnings.warn(
6693
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6694
+ warnings.warn(
6695
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6696
+ warnings.warn(
6697
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6698
+ warnings.warn(
6699
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6700
+ warnings.warn(
6701
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6702
+ warnings.warn(
6703
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6704
+ warnings.warn(
6705
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6706
+ warnings.warn(
6707
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6708
+ warnings.warn(
6709
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6710
+ warnings.warn(
6711
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6712
+ warnings.warn(
6713
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6714
+ warnings.warn(
6715
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
6716
+ warnings.warn(
6717
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6718
+ warnings.warn(
6719
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6720
+ warnings.warn(
6721
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6722
+ warnings.warn(
6723
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6724
+ warnings.warn(
6725
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6726
+ warnings.warn(
6727
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6728
+ warnings.warn(
6729
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6730
+ warnings.warn(
6731
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6732
+ warnings.warn(
6733
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6734
+ warnings.warn(
6735
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6736
+ warnings.warn(
6737
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6738
+ warnings.warn(
6739
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6740
+ warnings.warn(
6741
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6742
+ warnings.warn(
6743
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6744
+ warnings.warn(
6745
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6746
+ warnings.warn(
6747
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
6748
+ warnings.warn(
attnserver.run_attnserver.slurm.sh.343202.out.log CHANGED
@@ -27186,3 +27186,860 @@ WARNING:megatron.core.rerun_state_machine:Setting RerunStateMachine mode RerunMo
27186
  ----------------------------------------------------------------------------------------------------------
27187
  validation loss at iteration 10 on test set | lm loss value: 1.165397E+01 | lm loss PPL: 1.151480E+05 |
27188
  ----------------------------------------------------------------------------------------------------------
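Side note (editorial check): the perplexity printed in the validation summary above is just the exponential of the LM loss, which the two logged numbers confirm:

```python
import math

lm_loss = 1.165397e1       # lm loss value from the summary above
reported_ppl = 1.151480e5  # lm loss PPL from the summary above
assert math.isclose(math.exp(lm_loss), reported_ppl, rel_tol=1e-3)
```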
27189
+ Running ctx_length=65536, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=2
27190
+ Cleaning up checkpoint directory: gpt-checkpoint
27191
+ --------------------------------
27192
+ CTX_LENGTH: 65536
27193
+ TP_SIZE: 8
27194
+ CP_SIZE: 2
27195
+ CHECKPOINT_PATH: gpt-checkpoint
27196
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
27197
+ --------------------------------
27198
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
27199
+ Cleaning up checkpoint directory: gpt-checkpoint
27200
+ --------------------------------
27201
+ CTX_LENGTH: 65536
27202
+ TP_SIZE: 8
27203
+ CP_SIZE: 2
27204
+ CHECKPOINT_PATH: gpt-checkpoint
27205
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
27206
+ --------------------------------
27207
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
27208
+ INFO:megatron.training.initialize:Setting logging level to 0
27209
+ using world size: 16, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
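Side note (editorial check): the parallel layout above is self-consistent for 2 nodes x 8 GPUs; the data-parallel size falls out of the other factors (the formula below is the usual Megatron-style derivation, shown only as a sanity check):

```python
world_size = 16                 # 2 nodes x 8 GPUs per node
tp, cp, pp = 8, 2, 1            # tensor-, context-, pipeline-parallel sizes from this log
dp = world_size // (tp * cp * pp)
assert dp == 1                  # matches "data-parallel size: 1" above
```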
27210
+ Number of virtual stages per pipeline stage: None
27211
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
27212
+ using torch.float16 for parameters ...
27213
+ ------------------------ arguments ------------------------
27214
+ account_for_embedding_in_pipeline_split ......... False
27215
+ account_for_loss_in_pipeline_split .............. False
27216
+ accumulate_allreduce_grads_in_fp32 .............. False
27217
+ adam_beta1 ...................................... 0.9
27218
+ adam_beta2 ...................................... 0.999
27219
+ adam_eps ........................................ 1e-08
27220
+ add_bias_linear ................................. True
27221
+ add_position_embedding .......................... True
27222
+ add_qkv_bias .................................... True
27223
+ adlr_autoresume ................................. False
27224
+ adlr_autoresume_interval ........................ 1000
27225
+ align_grad_reduce ............................... True
27226
+ align_param_gather .............................. False
27227
+ app_tag_run_name ................................ None
27228
+ app_tag_run_version ............................. 0.0.0
27229
+ apply_layernorm_1p .............................. False
27230
+ apply_query_key_layer_scaling ................... False
27231
+ apply_residual_connection_post_layernorm ........ False
27232
+ apply_rope_fusion ............................... False
27233
+ async_save ...................................... None
27234
+ async_tensor_model_parallel_allreduce ........... True
27235
+ attention_backend ............................... AttnBackend.auto
27236
+ attention_dropout ............................... 0.1
27237
+ attention_softmax_in_fp32 ....................... False
27238
+ auto_detect_ckpt_format ......................... False
27239
+ barrier_with_L1_time ............................ True
27240
+ bert_binary_head ................................ True
27241
+ bert_embedder_type .............................. megatron
27242
+ bert_load ....................................... None
27243
+ bf16 ............................................ False
27244
+ bias_dropout_fusion ............................. True
27245
+ bias_gelu_fusion ................................ True
27246
+ bias_swiglu_fusion .............................. True
27247
+ biencoder_projection_dim ........................ 0
27248
+ biencoder_shared_query_context_model ............ False
27249
+ block_data_path ................................. None
27250
+ calc_ft_timeouts ................................ False
27251
+ calculate_per_token_loss ........................ False
27252
+ check_for_large_grads ........................... False
27253
+ check_for_nan_in_loss_and_grad .................. False
27254
+ check_for_spiky_loss ............................ False
27255
+ check_weight_hash_across_dp_replicas_interval ... None
27256
+ ckpt_assume_constant_structure .................. False
27257
+ ckpt_convert_format ............................. None
27258
+ ckpt_convert_save ............................... None
27259
+ ckpt_convert_update_legacy_dist_opt_format ...... False
27260
+ ckpt_format ..................................... torch_dist
27261
+ ckpt_fully_parallel_load ........................ False
27262
+ ckpt_fully_parallel_save ........................ True
27263
+ ckpt_fully_parallel_save_deprecated ............. False
27264
+ ckpt_step ....................................... None
27265
+ classes_fraction ................................ 1.0
27266
+ clip_grad ....................................... 1.0
27267
+ clone_scatter_output_in_embedding ............... True
27268
+ config_logger_dir ...............................
27269
+ consumed_train_samples .......................... 0
27270
+ consumed_valid_samples .......................... 0
27271
+ context_parallel_size ........................... 2
27272
+ cp_comm_type .................................... ['p2p']
27273
+ create_attention_mask_in_dataloader ............. True
27274
+ cross_entropy_fusion_impl ....................... native
27275
+ cross_entropy_loss_fusion ....................... False
27276
+ cuda_graph_scope ................................ full
27277
+ cuda_graph_warmup_steps ......................... 3
27278
+ data_args_path .................................. None
27279
+ data_cache_path ................................. None
27280
+ data_parallel_random_init ....................... False
27281
+ data_parallel_sharding_strategy ................. no_shard
27282
+ data_parallel_size .............................. 1
27283
+ data_path ....................................... None
27284
+ data_per_class_fraction ......................... 1.0
27285
+ data_sharding ................................... True
27286
+ dataloader_type ................................. single
27287
+ ddp_average_in_collective ....................... False
27288
+ ddp_bucket_size ................................. None
27289
+ ddp_num_buckets ................................. None
27290
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
27291
+ decoder_first_pipeline_num_layers ............... None
27292
+ decoder_last_pipeline_num_layers ................ None
27293
+ decoder_num_layers .............................. None
27294
+ decoder_seq_length .............................. None
27295
+ decoupled_lr .................................... None
27296
+ decoupled_min_lr ................................ None
27297
+ decrease_batch_size_if_needed ................... False
27298
+ defer_embedding_wgrad_compute ................... False
27299
+ deprecated_use_mcore_models ..................... False
27300
+ deterministic_mode .............................. False
27301
+ dino_bottleneck_size ............................ 256
27302
+ dino_freeze_last_layer .......................... 1
27303
+ dino_head_hidden_size ........................... 2048
27304
+ dino_local_crops_number ......................... 10
27305
+ dino_local_img_size ............................. 96
27306
+ dino_norm_last_layer ............................ False
27307
+ dino_teacher_temp ............................... 0.07
27308
+ dino_warmup_teacher_temp ........................ 0.04
27309
+ dino_warmup_teacher_temp_epochs ................. 30
27310
+ disable_bf16_reduced_precision_matmul ........... False
27311
+ disable_mamba_mem_eff_path ...................... False
27312
+ disable_straggler_on_startup .................... False
27313
+ dist_ckpt_format_deprecated ..................... None
27314
+ dist_ckpt_strictness ............................ assume_ok_unexpected
27315
+ distribute_saved_activations .................... False
27316
+ distributed_backend ............................. nccl
27317
+ distributed_timeout_minutes ..................... 10
27318
+ embedding_path .................................. None
27319
+ empty_unused_memory_level ....................... 0
27320
+ enable_cuda_graph ............................... False
27321
+ enable_ft_package ............................... False
27322
+ enable_gloo_process_groups ...................... True
27323
+ enable_msc ...................................... True
27324
+ enable_one_logger ............................... True
27325
+ encoder_num_layers .............................. 2
27326
+ encoder_pipeline_model_parallel_size ............ 0
27327
+ encoder_seq_length .............................. 65536
27328
+ encoder_tensor_model_parallel_size .............. 0
27329
+ end_weight_decay ................................ 0.1
27330
+ eod_mask_loss ................................... False
27331
+ error_injection_rate ............................ 0
27332
+ error_injection_type ............................ transient_error
27333
+ eval_interval ................................... 16
27334
+ eval_iters ...................................... 1
27335
+ evidence_data_path .............................. None
27336
+ exit_duration_in_mins ........................... None
27337
+ exit_interval ................................... None
27338
+ exit_on_missing_checkpoint ...................... False
27339
+ exit_signal_handler ............................. False
27340
+ exp_avg_dtype ................................... torch.float32
27341
+ exp_avg_sq_dtype ................................ torch.float32
27342
+ expert_model_parallel_size ...................... 1
27343
+ expert_tensor_parallel_size ..................... 8
27344
+ external_cuda_graph ............................. False
27345
+ ffn_hidden_size ................................. 16384
27346
+ finetune ........................................ False
27347
+ first_last_layers_bf16 .......................... False
27348
+ flash_decode .................................... False
27349
+ fp16 ............................................ True
27350
+ fp16_lm_cross_entropy ........................... False
27351
+ fp32_residual_connection ........................ False
27352
+ fp8 ............................................. None
27353
+ fp8_amax_compute_algo ........................... most_recent
27354
+ fp8_amax_history_len ............................ 1
27355
+ fp8_interval .................................... 1
27356
+ fp8_margin ...................................... 0
27357
+ fp8_param_gather ................................ False
27358
+ fp8_recipe ...................................... delayed
27359
+ fp8_wgrad ....................................... True
27360
+ fsdp_double_buffer .............................. False
27361
+ global_batch_size ............................... 1
27362
+ grad_reduce_in_bf16 ............................. False
27363
+ gradient_accumulation_fusion .................... True
27364
+ gradient_reduce_div_fusion ...................... True
27365
+ group_query_attention ........................... True
27366
+ head_lr_mult .................................... 1.0
27367
+ heterogeneous_layers_config_encoded_json ........ None
27368
+ heterogeneous_layers_config_path ................ None
27369
+ hidden_dropout .................................. 0.1
27370
+ hidden_size ..................................... 4096
27371
+ hierarchical_context_parallel_sizes ............. None
27372
+ high_priority_stream_groups ..................... []
27373
+ hybrid_attention_ratio .......................... 0.0
27374
+ hybrid_mlp_ratio ................................ 0.0
27375
+ hybrid_override_pattern ......................... None
27376
+ hysteresis ...................................... 2
27377
+ ict_head_size ................................... None
27378
+ ict_load ........................................ None
27379
+ img_h ........................................... 224
27380
+ img_w ........................................... 224
27381
+ indexer_batch_size .............................. 128
27382
+ indexer_log_interval ............................ 1000
27383
+ inference_batch_times_seqlen_threshold .......... -1
27384
+ inference_dynamic_batching ...................... False
27385
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
27386
+ inference_dynamic_batching_buffer_overflow_factor None
27387
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
27388
+ inference_dynamic_batching_chunk_size ........... 256
27389
+ inference_dynamic_batching_max_requests_override None
27390
+ inference_dynamic_batching_max_tokens_override .. None
27391
+ inference_max_batch_size ........................ 8
27392
+ inference_max_seq_length ........................ 2560
27393
+ inference_rng_tracker ........................... False
27394
+ init_method_std ................................. 0.02
27395
+ init_method_xavier_uniform ...................... False
27396
+ init_model_with_meta_device ..................... False
27397
+ initial_loss_scale .............................. 4294967296
27398
+ inprocess_active_world_size ..................... 16
27399
+ inprocess_barrier_timeout ....................... 120
27400
+ inprocess_completion_timeout .................... 120
27401
+ inprocess_empty_cuda_cache ...................... False
27402
+ inprocess_granularity ........................... node
27403
+ inprocess_hard_timeout .......................... 90
27404
+ inprocess_heartbeat_interval .................... 30
27405
+ inprocess_heartbeat_timeout ..................... 60
27406
+ inprocess_last_call_wait ........................ 1
27407
+ inprocess_max_iterations ........................ None
27408
+ inprocess_monitor_process_interval .............. 1.0
27409
+ inprocess_monitor_thread_interval ............... 1.0
27410
+ inprocess_progress_watchdog_interval ............ 1.0
27411
+ inprocess_restart ............................... False
27412
+ inprocess_soft_timeout .......................... 60
27413
+ inprocess_termination_grace_time ................ 1
27414
+ is_hybrid_model ................................. False
27415
+ iter_per_epoch .................................. 1250
27416
+ iterations_to_skip .............................. []
27417
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
27418
+ kv_channels ..................................... 64
27419
+ kv_lora_rank .................................... 32
27420
+ lazy_mpu_init ................................... None
27421
+ load ............................................ gpt-checkpoint
27422
+ load_model_opt_format ........................... False
27423
+ local_rank ...................................... 0
27424
+ log_interval .................................... 1
27425
+ log_loss_scale_to_tensorboard ................... True
27426
+ log_memory_to_tensorboard ....................... False
27427
+ log_num_zeros_in_grad ........................... False
27428
+ log_params_norm ................................. False
27429
+ log_progress .................................... False
27430
+ log_straggler ................................... False
27431
+ log_throughput .................................. False
27432
+ log_timers_to_tensorboard ....................... False
27433
+ log_validation_ppl_to_tensorboard ............... False
27434
+ log_world_size_to_tensorboard ................... False
27435
+ logging_level ................................... 0
27436
+ loss_scale ...................................... None
27437
+ loss_scale_window ............................... 1000
27438
+ lr .............................................. 0.0005
27439
+ lr_decay_iters .................................. 150000
27440
+ lr_decay_samples ................................ None
27441
+ lr_decay_style .................................. cosine
27442
+ lr_warmup_fraction .............................. None
27443
+ lr_warmup_init .................................. 0.0
27444
+ lr_warmup_iters ................................. 2
27445
+ lr_warmup_samples ............................... 0
27446
+ lr_wsd_decay_iters .............................. None
27447
+ lr_wsd_decay_samples ............................ None
27448
+ lr_wsd_decay_style .............................. exponential
27449
+ main_grads_dtype ................................ torch.float32
27450
+ main_params_dtype ............................... torch.float32
27451
+ make_vocab_size_divisible_by .................... 128
27452
+ mamba_head_dim .................................. 64
27453
+ mamba_num_groups ................................ 8
27454
+ mamba_num_heads ................................. None
27455
+ mamba_state_dim ................................. 128
27456
+ manual_gc ....................................... False
27457
+ manual_gc_eval .................................. True
27458
+ manual_gc_interval .............................. 0
27459
+ mask_factor ..................................... 1.0
27460
+ mask_prob ....................................... 0.15
27461
+ mask_type ....................................... random
27462
+ masked_softmax_fusion ........................... True
27463
+ max_position_embeddings ......................... 65536
27464
+ max_tokens_to_oom ............................... 12000
27465
+ memory_snapshot_path ............................ snapshot.pickle
27466
+ merge_file ...................................... merges.txt
27467
+ micro_batch_size ................................ 1
27468
+ microbatch_group_size_per_vp_stage .............. None
27469
+ mid_level_dataset_surplus ....................... 0.005
27470
+ min_loss_scale .................................. 1.0
27471
+ min_lr .......................................... 0.0
27472
+ mlp_chunks_for_prefill .......................... 1
27473
+ mmap_bin_files .................................. True
27474
+ mock_data ....................................... True
27475
+ moe_apply_probs_on_input ........................ False
27476
+ moe_aux_loss_coeff .............................. 0.0
27477
+ moe_enable_deepep ............................... False
27478
+ moe_expert_capacity_factor ...................... None
27479
+ moe_extended_tp ................................. False
27480
+ moe_ffn_hidden_size ............................. None
27481
+ moe_grouped_gemm ................................ False
27482
+ moe_input_jitter_eps ............................ None
27483
+ moe_layer_freq .................................. 1
27484
+ moe_layer_recompute ............................. False
27485
+ moe_pad_expert_input_to_capacity ................ False
27486
+ moe_per_layer_logging ........................... False
27487
+ moe_permute_fusion .............................. False
27488
+ moe_router_bias_update_rate ..................... 0.001
27489
+ moe_router_dtype ................................ None
27490
+ moe_router_enable_expert_bias ................... False
27491
+ moe_router_force_load_balancing ................. False
27492
+ moe_router_group_topk ........................... None
27493
+ moe_router_load_balancing_type .................. aux_loss
27494
+ moe_router_num_groups ........................... None
27495
+ moe_router_padding_for_fp8 ...................... False
27496
+ moe_router_pre_softmax .......................... False
27497
+ moe_router_score_function ....................... softmax
27498
+ moe_router_topk ................................. 2
27499
+ moe_router_topk_scaling_factor .................. None
27500
+ moe_shared_expert_intermediate_size ............. None
27501
+ moe_shared_expert_overlap ....................... False
27502
+ moe_token_dispatcher_type ....................... allgather
27503
+ moe_token_drop_policy ........................... probs
27504
+ moe_use_legacy_grouped_gemm ..................... False
27505
+ moe_use_upcycling ............................... False
27506
+ moe_z_loss_coeff ................................ None
27507
+ mrope_section ................................... None
27508
+ mscale .......................................... 1.0
27509
+ mscale_all_dim .................................. 1.0
27510
+ mtp_loss_scaling_factor ......................... 0.1
27511
+ mtp_num_layers .................................. None
27512
+ multi_latent_attention .......................... False
27513
+ nccl_all_reduce_for_prefill ..................... False
27514
+ nccl_communicator_config_path ................... None
27515
+ nccl_ub ......................................... False
27516
+ no_load_optim ................................... None
27517
+ no_load_rng ..................................... None
27518
+ no_persist_layer_norm ........................... False
27519
+ no_rope_freq .................................... None
27520
+ no_save_optim ................................... None
27521
+ no_save_rng ..................................... None
27522
+ non_persistent_ckpt_type ........................ None
27523
+ non_persistent_global_ckpt_dir .................. None
27524
+ non_persistent_local_ckpt_algo .................. fully_parallel
27525
+ non_persistent_local_ckpt_dir ................... None
27526
+ non_persistent_save_interval .................... None
27527
+ norm_epsilon .................................... 1e-05
27528
+ normalization ................................... LayerNorm
27529
+ num_attention_heads ............................. 64
27530
+ num_channels .................................... 3
27531
+ num_classes ..................................... 1000
27532
+ num_dataset_builder_threads ..................... 1
27533
+ num_distributed_optimizer_instances ............. 1
27534
+ num_experts ..................................... None
27535
+ num_layers ...................................... 2
27536
+ num_layers_at_end_in_bf16 ....................... 1
27537
+ num_layers_at_start_in_bf16 ..................... 1
27538
+ num_layers_per_virtual_pipeline_stage ........... None
27539
+ num_query_groups ................................ 16
27540
+ num_virtual_stages_per_pipeline_rank ............ None
27541
+ num_workers ..................................... 2
27542
+ object_storage_cache_path ....................... None
27543
+ one_logger_async ................................ False
27544
+ one_logger_project .............................. megatron-lm
27545
+ one_logger_run_name ............................. None
27546
+ onnx_safe ....................................... None
27547
+ openai_gelu ..................................... False
27548
+ optimizer ....................................... adam
27549
+ optimizer_cpu_offload ........................... False
27550
+ optimizer_offload_fraction ...................... 1.0
27551
+ output_bert_embeddings .......................... False
27552
+ overlap_cpu_optimizer_d2h_h2d ................... False
27553
+ overlap_grad_reduce ............................. False
27554
+ overlap_p2p_comm ................................ False
27555
+ overlap_p2p_comm_warmup_flush ................... False
27556
+ overlap_param_gather ............................ False
27557
+ overlap_param_gather_with_optimizer_step ........ False
27558
+ override_opt_param_scheduler .................... False
27559
+ params_dtype .................................... torch.float16
27560
+ patch_dim ....................................... 16
27561
+ per_split_data_args_path ........................ None
27562
+ perform_initialization .......................... True
27563
+ pin_cpu_grads ................................... True
27564
+ pin_cpu_params .................................. True
27565
+ pipeline_model_parallel_comm_backend ............ None
27566
+ pipeline_model_parallel_size .................... 1
27567
+ pipeline_model_parallel_split_rank .............. None
27568
+ position_embedding_type ......................... learned_absolute
27569
+ pretrained_checkpoint ........................... None
27570
+ profile ......................................... False
27571
+ profile_ranks ................................... [0]
27572
+ profile_step_end ................................ 12
27573
+ profile_step_start .............................. 10
27574
+ q_lora_rank ..................................... None
27575
+ qk_head_dim ..................................... 128
27576
+ qk_l2_norm ...................................... False
27577
+ qk_layernorm .................................... False
27578
+ qk_pos_emb_head_dim ............................. 64
27579
+ query_in_block_prob ............................. 0.1
27580
+ rampup_batch_size ............................... None
27581
+ rank ............................................ 0
27582
+ recompute_granularity ........................... None
27583
+ recompute_method ................................ None
27584
+ recompute_modules ............................... None
27585
+ recompute_num_layers ............................ None
27586
+ record_memory_history ........................... False
27587
+ relative_attention_max_distance ................. 128
27588
+ relative_attention_num_buckets .................. 32
27589
+ replication ..................................... False
27590
+ replication_factor .............................. 2
27591
+ replication_jump ................................ None
27592
+ rerun_mode ...................................... disabled
27593
+ reset_attention_mask ............................ False
27594
+ reset_position_ids .............................. False
27595
+ result_rejected_tracker_filename ................ None
27596
+ retriever_report_topk_accuracies ................ []
27597
+ retriever_score_scaling ......................... False
27598
+ retriever_seq_length ............................ 256
27599
+ retro_add_retriever ............................. False
27600
+ retro_attention_gate ............................ 1
27601
+ retro_cyclic_train_iters ........................ None
27602
+ retro_encoder_attention_dropout ................. 0.1
27603
+ retro_encoder_hidden_dropout .................... 0.1
27604
+ retro_encoder_layers ............................ 2
27605
+ retro_num_neighbors ............................. 2
27606
+ retro_num_retrieved_chunks ...................... 2
27607
+ INFO:megatron.training.initialize:Setting logging level to 0
27608
+ retro_project_dir ............................... None
27609
+ retro_verify_neighbor_count ..................... True
27610
+ rope_scaling_factor ............................. 8.0
27611
+ rotary_base ..................................... 10000
27612
+ rotary_interleaved .............................. False
27613
+ rotary_percent .................................. 1.0
27614
+ rotary_scaling_factor ........................... 1.0
27615
+ rotary_seq_len_interpolation_factor ............. None
27616
+ run_workload_inspector_server ................... False
27617
+ sample_rate ..................................... 1.0
27618
+ save ............................................ gpt-checkpoint
27619
+ save_interval ................................... 16
27620
+ scatter_gather_tensors_in_pipeline .............. True
27621
+ seed ............................................ 1234
27622
+ seq_length ...................................... 65536
27623
+ sequence_parallel ............................... False
27624
+ sgd_momentum .................................... 0.9
27625
+ INFO:megatron.training.initialize:Setting logging level to 0
27626
+ short_seq_prob .................................. 0.1
27627
+ skip_train ...................................... False
27628
+ skipped_train_samples ........................... 0
27629
+ spec ............................................ None
27630
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
27631
+ split ........................................... None
27632
+ squared_relu .................................... False
27633
+ start_weight_decay .............................. 0.1
27634
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
27635
+ straggler_ctrlr_port ............................ 65535
27636
+ straggler_minmax_count .......................... 1
27637
+ suggested_communication_unit_size ............... None
27638
+ swiglu .......................................... False
27639
+ swin_backbone_type .............................. tiny
27640
+ symmetric_ar_type ............................... None
27641
+ INFO:megatron.training.initialize:Setting logging level to 0
27642
+ te_rng_tracker .................................. False
27643
+ tensor_model_parallel_size ...................... 8
27644
+ tensorboard_dir ................................. tensorboard-logs/
27645
+ tensorboard_log_interval ........................ 1
27646
+ tensorboard_queue_size .......................... 1000
27647
+ test_data_path .................................. None
27648
+ test_mode ....................................... False
27649
+ tiktoken_num_special_tokens ..................... 1000
27650
+ tiktoken_pattern ................................ None
27651
+ tiktoken_special_tokens ......................... None
27652
+ timing_log_level ................................ 0
27653
+ timing_log_option ............................... minmax
27654
+ titles_data_path ................................ None
27655
+ tokenizer_model ................................. None
27656
+ tokenizer_type .................................. GPT2BPETokenizer
27657
+ torch_fsdp2_reshard_after_forward ............... True
27658
+ tp_comm_bootstrap_backend ....................... nccl
27659
+ tp_comm_bulk_dgrad .............................. True
27660
+ tp_comm_bulk_wgrad .............................. True
27661
+ tp_comm_overlap ................................. False
27662
+ tp_comm_overlap_ag .............................. True
27663
+ tp_comm_overlap_cfg ............................. None
27664
+ tp_comm_overlap_rs .............................. True
27665
+ tp_comm_overlap_rs_dgrad ........................ False
27666
+ tp_comm_split_ag ................................ True
27667
+ tp_comm_split_rs ................................ True
27668
+ train_data_path ................................. None
27669
+ train_iters ..................................... 10
27670
+ train_samples ................................... None
27671
+ train_sync_interval ............................. None
27672
+ transformer_impl ................................ transformer_engine
27673
+ transformer_pipeline_model_parallel_size ........ 1
27674
+ untie_embeddings_and_output_weights ............. False
27675
+ use_checkpoint_args ............................. False
27676
+ use_checkpoint_opt_param_scheduler .............. False
27677
+ use_cpu_initialization .......................... None
27678
+ use_custom_fsdp ................................. False
27679
+ use_dist_ckpt ................................... True
27680
+ use_dist_ckpt_deprecated ........................ False
27681
+ use_distributed_optimizer ....................... False
27682
+ use_flash_attn .................................. False
27683
+ use_legacy_models ............................... False
27684
+ use_mp_args_from_checkpoint_args ................ False
27685
+ use_one_sent_docs ............................... False
27686
+ use_persistent_ckpt_worker ...................... False
27687
+ use_precision_aware_optimizer ................... False
27688
+ use_pytorch_profiler ............................ False
27689
+ use_ring_exchange_p2p ........................... False
27690
+ use_rope_scaling ................................ False
27691
+ INFO:megatron.training.initialize:Setting logging level to 0
27692
+ use_rotary_position_embeddings .................. False
27693
+ use_sharp ....................................... False
27694
+ use_tokenizer_model_from_checkpoint_args ........ True
27695
+ use_torch_fsdp2 ................................. False
27696
+ INFO:megatron.training.initialize:Setting logging level to 0
27697
+ use_torch_optimizer_for_cpu_offload ............. False
27698
+ use_tp_pp_dp_mapping ............................ False
27699
+ v_head_dim ...................................... 128
27700
+ INFO:megatron.training.initialize:Setting logging level to 0
27701
+ valid_data_path ................................. None
27702
+ variable_seq_lengths ............................ False
27703
+ virtual_pipeline_model_parallel_size ............ None
27704
+ vision_backbone_type ............................ vit
27705
+ vision_pretraining .............................. False
27706
+ vision_pretraining_type ......................... classify
27707
+ vocab_extra_ids ................................. 0
27708
+ INFO:megatron.training.initialize:Setting logging level to 0
27709
+ vocab_file ...................................... vocab.json
27710
+ vocab_size ...................................... None
27711
+ wandb_exp_name ..................................
27712
+ wandb_project ...................................
27713
+ wandb_save_dir ..................................
27714
+ weight_decay .................................... 0.1
27715
+ INFO:megatron.training.initialize:Setting logging level to 0
27716
+ weight_decay_incr_style ......................... constant
27717
+ wgrad_deferral_limit ............................ 0
27718
+ world_size ...................................... 16
27719
+ yaml_cfg ........................................ None
27720
+ -------------------- end of arguments ---------------------
27721
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
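The constant microbatch count above follows directly from the parallelism arguments printed earlier; a minimal sketch of that arithmetic (variable names are mine, not Megatron's):

    world_size = 16
    tensor_model_parallel_size = 8
    pipeline_model_parallel_size = 1
    context_parallel_size = 2
    micro_batch_size, global_batch_size = 1, 1

    # data-parallel size is what remains of the world after TP, PP and CP are factored out
    data_parallel_size = world_size // (
        tensor_model_parallel_size * pipeline_model_parallel_size * context_parallel_size
    )  # 16 // (8 * 1 * 2) = 1
    num_microbatches = global_batch_size // (micro_batch_size * data_parallel_size)  # 1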
27722
+ > building GPT2BPETokenizer tokenizer ...
27723
+ > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
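The 943 dummy tokens follow from rounding the GPT-2 vocab up to a multiple of make_vocab_size_divisible_by times the tensor-parallel size; a minimal sketch, assuming that padding rule:

    import math

    vocab_size = 50257
    make_vocab_size_divisible_by = 128
    tensor_model_parallel_size = 8

    multiple = make_vocab_size_divisible_by * tensor_model_parallel_size   # 1024
    padded_vocab_size = math.ceil(vocab_size / multiple) * multiple        # 51200
    dummy_tokens = padded_vocab_size - vocab_size                          # 943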
27724
+ INFO:megatron.training.initialize:Setting logging level to 0
27725
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
27726
+ > initializing torch distributed ...
27727
+ INFO:megatron.training.initialize:Setting logging level to 0
27728
+ INFO:megatron.training.initialize:Setting logging level to 0
27729
+ INFO:megatron.training.initialize:Setting logging level to 0
27730
+ INFO:megatron.training.initialize:Setting logging level to 0
27731
+ INFO:megatron.training.initialize:Setting logging level to 0
27732
+ INFO:megatron.training.initialize:Setting logging level to 0
27733
+ > initialized tensor model parallel with size 8
27734
+ > initialized pipeline model parallel with size 1
27735
+ > setting random seeds to 1234 ...
27736
+ > compiling dataset index builder ...
27737
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
27738
+ make: Nothing to be done for 'default'.
27739
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
27740
+ >>> done with dataset index builder. Compilation time: 0.047 seconds
27741
+ WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
27742
+ > compiling and loading fused kernels ...
27743
+ >>> done with compiling and loading fused kernels. Compilation time: 2.465 seconds
27744
+ time to initialize megatron (seconds): 7.480
27745
+ [after megatron is initialized] datetime: 2025-06-21 21:14:16
27746
+ building GPT model ...
27747
+ >>> embedding
27748
+ >>> decoder
27749
+ >>> output_layer
27750
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
27751
+ >>> embedding
27752
+ >>> decoder
27753
+ >>> output_layer
27754
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
27755
+ >>> embedding
27756
+ >>> decoder
27757
+ >>> output_layer
27758
+ >>> embedding
27759
+ >>> decoder
27760
+ >>> output_layer
27761
+ > number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
27762
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
27763
+ >>> embedding
27764
+ >>> decoder
27765
+ >>> output_layer
27766
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
27767
+ >>> embedding
27768
+ >>> decoder
27769
+ >>> output_layer
27770
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
27771
+ >>> embedding
27772
+ >>> decoder
27773
+ >>> output_layer
27774
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
27775
+ >>> embedding
27776
+ >>> decoder
27777
+ >>> output_layer
27778
+ > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
27779
+ >>> embedding
27780
+ >>> decoder
27781
+ >>> output_layer
27782
+ > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
27783
+ >>> embedding
27784
+ >>> decoder
27785
+ >>> output_layer
27786
+ > number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
27787
+ >>> embedding
27788
+ >>> decoder
27789
+ >>> output_layer
27790
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
27791
+ >>> embedding
27792
+ >>> decoder
27793
+ >>> output_layer
27794
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
27795
+ >>> embedding
27796
+ >>> decoder
27797
+ >>> output_layer
27798
+ > number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
27799
+ >>> embedding
27800
+ >>> decoder
27801
+ >>> output_layer
27802
+ > number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
27803
+ >>> embedding
27804
+ >>> decoder
27805
+ >>> output_layer
27806
+ > number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
27807
+ INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
27808
+ INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
27809
+ Params for bucket 1 (338753024 elements, 338753024 padded size):
27810
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
27811
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
27812
+ module.embedding.word_embeddings.weight
27813
+ module.decoder.final_layernorm.weight
27814
+ module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
27815
+ module.decoder.layers.1.self_attention.linear_qkv.bias
27816
+ module.decoder.layers.1.mlp.linear_fc1.weight
27817
+ module.decoder.layers.0.mlp.linear_fc1.bias
27818
+ module.decoder.layers.1.mlp.linear_fc2.bias
27819
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
27820
+ module.decoder.layers.0.self_attention.linear_qkv.weight
27821
+ module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
27822
+ module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
27823
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
27824
+ module.decoder.layers.1.mlp.linear_fc1.bias
27825
+ module.decoder.layers.0.mlp.linear_fc2.weight
27826
+ module.decoder.layers.0.self_attention.linear_proj.weight
27827
+ module.decoder.layers.1.self_attention.linear_qkv.weight
27828
+ module.decoder.layers.1.self_attention.linear_proj.weight
27829
+ module.decoder.layers.0.mlp.linear_fc2.bias
27830
+ module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
27831
+ module.decoder.layers.0.self_attention.linear_qkv.bias
27832
+ module.decoder.layers.0.self_attention.linear_proj.bias
27833
+ module.embedding.position_embeddings.weight
27834
+ module.decoder.final_layernorm.bias
27835
+ module.decoder.layers.1.mlp.linear_fc2.weight
27836
+ module.decoder.layers.1.self_attention.linear_proj.bias
27837
+ module.decoder.layers.0.mlp.linear_fc1.weight
27838
+ INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14935fdf1e50>, config_logger_dir='')
27839
+ INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
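With lr 0.0005, lr_warmup_iters 2, lr_decay_iters 150000 and min_lr 0.0 from the arguments above, the schedule is a short linear warmup followed by a cosine decay; a rough sketch of that shape (Megatron's exact warmup indexing may differ slightly):

    import math

    max_lr, min_lr = 5e-4, 0.0
    warmup_iters, decay_iters = 2, 150000

    def lr_at(iteration):
        if iteration < warmup_iters:                        # linear warmup
            return max_lr * (iteration + 1) / warmup_iters
        progress = min((iteration - warmup_iters) / (decay_iters - warmup_iters), 1.0)
        return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))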
27840
+ >>> embedding
27841
+ >>> decoder
27842
+ >>> output_layer
27843
+ > number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
27844
+ WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
27845
+ will not load any checkpoints and will start from random
27846
+ (min, max) time across ranks (ms):
27847
+ load-checkpoint ................................: (2.91, 3.26)
27848
+ [after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:14:19
27849
+ > building train, validation, and test datasets ...
27850
+ > datasets target sizes (minimum size):
27851
+ train: 10
27852
+ validation: 1
27853
+ test: 1
27854
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
27855
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
27856
+ INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
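The split_matrix above is just the even split '1,1,1' turned into cumulative [start, end) fractions; a small sketch of that normalization:

    weights = [1.0, 1.0, 1.0]                 # from split = "1,1,1"
    total = sum(weights)
    split_matrix, start = [], 0.0
    for w in weights:
        end = start + w / total
        split_matrix.append((start, end))
        start = end
    # approximately [(0, 1/3), (1/3, 2/3), (2/3, 1)]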
27857
+ > building train, validation, and test datasets for GPT ...
27858
+ INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=65536, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x149363295520>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
27859
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
27860
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
27861
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
27862
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.004369 seconds
27863
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040
27864
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
27865
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
27866
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
27867
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
27868
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001585 seconds
27869
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1040
27870
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
27871
+ INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
27872
+ DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
27873
+ WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
27874
+ DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001322 seconds
27875
+ INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1041
27876
+ INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
27877
+ > finished creating GPT datasets ...
27878
+ [after dataloaders are built] datetime: 2025-06-21 21:14:20
27879
+ done with setup ...
27880
+ (min, max) time across ranks (ms):
27881
+ model-and-optimizer-setup ......................: (3069.19, 3071.22)
27882
+ train/valid/test-data-iterators-setup ..........: (14.70, 112.75)
27883
+ training ...
27884
+ Setting rerun_state_machine.current_iteration to 0...
27885
+ [before the start of training step] datetime: 2025-06-21 21:14:20
27886
+ batch tensor: tokens torch.Size([2, 131072])
27887
+ batch tensor: labels torch.Size([2, 131072])
27888
+ batch tensor: loss_mask torch.Size([2, 131072])
27889
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27890
+ batch tensor: position_ids torch.Size([2, 131072])
27891
+ batch tensor: tokens torch.Size([2, 131072])
27892
+ batch tensor: labels torch.Size([2, 131072])
27893
+ batch tensor: loss_mask torch.Size([2, 131072])
27894
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27895
+ batch tensor: position_ids torch.Size([2, 131072])
27896
+ batch tensor after cp: tokens torch.Size([2, 65536])
27897
+ batch tensor after cp: labels torch.Size([2, 65536])
27898
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27899
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27900
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27901
+ batch tensor after cp: tokens torch.Size([2, 65536])
27902
+ batch tensor after cp: labels torch.Size([2, 65536])
27903
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27904
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27905
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27906
+ batch tensor: tokens torch.Size([2, 131072])
27907
+ batch tensor: labels torch.Size([2, 131072])
27908
+ batch tensor: loss_mask torch.Size([2, 131072])
27909
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27910
+ batch tensor: position_ids torch.Size([2, 131072])
27911
+ batch tensor after cp: tokens torch.Size([2, 65536])
27912
+ batch tensor after cp: labels torch.Size([2, 65536])
27913
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27914
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27915
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27916
+ batch tensor: tokens torch.Size([2, 131072])
27917
+ batch tensor: labels torch.Size([2, 131072])
27918
+ batch tensor: loss_mask torch.Size([2, 131072])
27919
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27920
+ batch tensor: position_ids torch.Size([2, 131072])
27921
+ batch tensor: tokens torch.Size([2, 131072])
27922
+ batch tensor: labels torch.Size([2, 131072])
27923
+ batch tensor: loss_mask torch.Size([2, 131072])
27924
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27925
+ batch tensor: position_ids torch.Size([2, 131072])
27926
+ batch tensor after cp: tokens torch.Size([2, 65536])
27927
+ batch tensor after cp: labels torch.Size([2, 65536])
27928
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27929
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27930
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27931
+ batch tensor after cp: tokens torch.Size([2, 65536])
27932
+ batch tensor after cp: labels torch.Size([2, 65536])
27933
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27934
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27935
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27936
+ batch tensor: tokens torch.Size([2, 131072])
27937
+ batch tensor: labels torch.Size([2, 131072])
27938
+ batch tensor: loss_mask torch.Size([2, 131072])
27939
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27940
+ batch tensor: position_ids torch.Size([2, 131072])
27941
+ batch tensor: tokens torch.Size([2, 131072])
27942
+ batch tensor: labels torch.Size([2, 131072])
27943
+ batch tensor: loss_mask torch.Size([2, 131072])
27944
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27945
+ batch tensor: position_ids torch.Size([2, 131072])
27946
+ batch tensor: tokens torch.Size([2, 131072])
27947
+ batch tensor: labels torch.Size([2, 131072])
27948
+ batch tensor: loss_mask torch.Size([2, 131072])
27949
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27950
+ batch tensor: position_ids torch.Size([2, 131072])
27951
+ batch tensor after cp: tokens torch.Size([2, 65536])
27952
+ batch tensor after cp: labels torch.Size([2, 65536])
27953
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27954
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27955
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27956
+ batch tensor after cp: tokens torch.Size([2, 65536])
27957
+ batch tensor after cp: labels torch.Size([2, 65536])
27958
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27959
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27960
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27961
+ batch tensor after cp: tokens torch.Size([2, 65536])
27962
+ batch tensor after cp: labels torch.Size([2, 65536])
27963
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27964
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27965
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27966
+ batch tensor: tokens torch.Size([2, 131072])
27967
+ batch tensor: labels torch.Size([2, 131072])
27968
+ batch tensor: loss_mask torch.Size([2, 131072])
27969
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27970
+ batch tensor: position_ids torch.Size([2, 131072])
27971
+ batch tensor: tokens torch.Size([2, 131072])
27972
+ batch tensor: labels torch.Size([2, 131072])
27973
+ batch tensor: loss_mask torch.Size([2, 131072])
27974
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27975
+ batch tensor: position_ids torch.Size([2, 131072])
27976
+ batch tensor: tokens torch.Size([2, 131072])
27977
+ batch tensor: labels torch.Size([2, 131072])
27978
+ batch tensor: loss_mask torch.Size([2, 131072])
27979
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
27980
+ batch tensor: position_ids torch.Size([2, 131072])
27981
+ batch tensor after cp: tokens torch.Size([2, 65536])
27982
+ batch tensor after cp: labels torch.Size([2, 65536])
27983
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27984
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27985
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27986
+ batch tensor after cp: tokens torch.Size([2, 65536])
27987
+ batch tensor after cp: labels torch.Size([2, 65536])
27988
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27989
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27990
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27991
+ batch tensor after cp: tokens torch.Size([2, 65536])
27992
+ batch tensor after cp: labels torch.Size([2, 65536])
27993
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
27994
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
27995
+ batch tensor after cp: position_ids torch.Size([2, 65536])
27996
+ batch tensor: tokens torch.Size([2, 131072])
27997
+ batch tensor: labels torch.Size([2, 131072])
27998
+ batch tensor: loss_mask torch.Size([2, 131072])
27999
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
28000
+ batch tensor: position_ids torch.Size([2, 131072])
28001
+ batch tensor: tokens torch.Size([2, 131072])
28002
+ batch tensor: labels torch.Size([2, 131072])
28003
+ batch tensor: loss_mask torch.Size([2, 131072])
28004
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
28005
+ batch tensor: position_ids torch.Size([2, 131072])
28006
+ batch tensor: tokens torch.Size([2, 131072])
28007
+ batch tensor: labels torch.Size([2, 131072])
28008
+ batch tensor: loss_mask torch.Size([2, 131072])
28009
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
28010
+ batch tensor: position_ids torch.Size([2, 131072])
28011
+ batch tensor after cp: tokens torch.Size([2, 65536])
28012
+ batch tensor after cp: labels torch.Size([2, 65536])
28013
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
28014
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
28015
+ batch tensor after cp: position_ids torch.Size([2, 65536])
28016
+ batch tensor after cp: tokens torch.Size([2, 65536])
28017
+ batch tensor after cp: labels torch.Size([2, 65536])
28018
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
28019
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
28020
+ batch tensor after cp: position_ids torch.Size([2, 65536])
28021
+ batch tensor after cp: tokens torch.Size([2, 65536])
28022
+ batch tensor after cp: labels torch.Size([2, 65536])
28023
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
28024
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
28025
+ batch tensor after cp: position_ids torch.Size([2, 65536])
28026
+ batch tensor: tokens torch.Size([2, 131072])
28027
+ batch tensor: labels torch.Size([2, 131072])
28028
+ batch tensor: loss_mask torch.Size([2, 131072])
28029
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
28030
+ batch tensor: position_ids torch.Size([2, 131072])
28031
+ batch tensor after cp: tokens torch.Size([2, 65536])
28032
+ batch tensor after cp: labels torch.Size([2, 65536])
28033
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
28034
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
28035
+ batch tensor after cp: position_ids torch.Size([2, 65536])
28036
+ batch tensor: tokens torch.Size([2, 131072])
28037
+ batch tensor: labels torch.Size([2, 131072])
28038
+ batch tensor: loss_mask torch.Size([2, 131072])
28039
+ batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
28040
+ batch tensor: position_ids torch.Size([2, 131072])
28041
+ batch tensor after cp: tokens torch.Size([2, 65536])
28042
+ batch tensor after cp: labels torch.Size([2, 65536])
28043
+ batch tensor after cp: loss_mask torch.Size([2, 65536])
28044
+ batch tensor after cp: attention_mask torch.Size([2, 1, 65536, 131072])
28045
+ batch tensor after cp: position_ids torch.Size([2, 65536])
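The "after cp" shapes above come from splitting the sequence (query) dimension across the context-parallel group of size 2, while the attention mask keeps its full key dimension; a simplified illustration with small sizes (real Megatron assigns two interleaved chunks per rank for load balancing rather than one contiguous slice):

    import torch

    cp_size, cp_rank = 2, 0
    batch, seq = 2, 8                          # stands in for the [2, 131072] tensors above

    tokens = torch.arange(batch * seq).view(batch, seq)
    attention_mask = torch.ones(batch, 1, seq, seq, dtype=torch.bool)

    chunk = seq // cp_size                     # each CP rank keeps seq / cp_size positions
    tokens_cp = tokens[:, cp_rank * chunk:(cp_rank + 1) * chunk]
    mask_cp = attention_mask[:, :, cp_rank * chunk:(cp_rank + 1) * chunk, :]  # keys stay full
    print(tokens_cp.shape, mask_cp.shape)      # torch.Size([2, 4]) torch.Size([2, 1, 4, 8])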
attnserver.run_attnserver.slurm.sh.343203.err.log ADDED
@@ -0,0 +1,172 @@
1
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
2
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
3
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
4
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
5
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
6
+ +++ export _CE_M=
7
+ +++ _CE_M=
8
+ +++ export _CE_CONDA=
9
+ +++ _CE_CONDA=
10
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
11
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
12
+ +++ '[' -z x ']'
13
+ ++ conda activate
14
+ ++ local cmd=activate
15
+ ++ case "$cmd" in
16
+ ++ __conda_activate activate
17
+ ++ '[' -n '' ']'
18
+ ++ local ask_conda
19
+ +++ PS1=
20
+ +++ __conda_exe shell.posix activate
21
+ +++ '[' -n '' ']'
22
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
23
+ ++ ask_conda='unset _CE_M
24
+ unset _CE_CONDA
25
+ PS1='\''(base) '\''
26
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
27
+ export CONDA_SHLVL='\''1'\''
28
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
29
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
30
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
31
+ ++ eval 'unset _CE_M
32
+ unset _CE_CONDA
33
+ PS1='\''(base) '\''
34
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
35
+ export CONDA_SHLVL='\''1'\''
36
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
37
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
38
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
39
+ +++ unset _CE_M
40
+ +++ unset _CE_CONDA
41
+ +++ PS1='(base) '
42
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
43
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
44
+ +++ export CONDA_SHLVL=1
45
+ +++ CONDA_SHLVL=1
46
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
47
+ +++ CONDA_PROMPT_MODIFIER='(base) '
48
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
49
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
50
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
51
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
52
+ ++ __conda_hashr
53
+ ++ '[' -n '' ']'
54
+ ++ '[' -n '' ']'
55
+ ++ hash -r
56
+ + conda activate junda-attnserver
57
+ + local cmd=activate
58
+ + case "$cmd" in
59
+ + __conda_activate activate junda-attnserver
60
+ + '[' -n '' ']'
61
+ + local ask_conda
62
+ ++ PS1='(base) '
63
+ ++ __conda_exe shell.posix activate junda-attnserver
64
+ ++ '[' -n '' ']'
65
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
66
+ + ask_conda='unset _CE_M
67
+ unset _CE_CONDA
68
+ PS1='\''(junda-attnserver) '\''
69
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
70
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
71
+ export CONDA_SHLVL='\''2'\''
72
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
73
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
74
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
75
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
76
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
77
+ + eval 'unset _CE_M
78
+ unset _CE_CONDA
79
+ PS1='\''(junda-attnserver) '\''
80
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
81
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
82
+ export CONDA_SHLVL='\''2'\''
83
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
84
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
85
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
86
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
87
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
88
+ ++ unset _CE_M
89
+ ++ unset _CE_CONDA
90
+ ++ PS1='(junda-attnserver) '
91
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
92
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
93
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
94
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
95
+ ++ export CONDA_SHLVL=2
96
+ ++ CONDA_SHLVL=2
97
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
98
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
99
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
100
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
101
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
102
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
103
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
104
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
105
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
106
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
107
+ + __conda_hashr
108
+ + '[' -n '' ']'
109
+ + '[' -n '' ']'
110
+ + hash -r
111
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
112
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
113
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
114
+ + export PROF_TP_SIZE=8
115
+ + PROF_TP_SIZE=8
116
+ + export PROF_CP_SIZE=2
117
+ + PROF_CP_SIZE=2
118
+ + export PROF_BS=4
119
+ + PROF_BS=4
120
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
121
+ + export PROF_CTX_LENGTH=1024
122
+ + PROF_CTX_LENGTH=1024
123
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs4.json'
124
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs4.json' ']'
125
+ + echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=4'
126
+ + srun bash ./attnserver.sh
127
+ + which python3
128
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343203 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
129
+ + which python3
130
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343203 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-274:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
131
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
132
+ and will be removed in future. Use torchrun.
133
+ Note that --use-env is set by default in torchrun.
134
+ If your script expects `--local-rank` argument to be set, please
135
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
136
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
137
+ further instructions
138
+
139
+ main()
140
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
141
+ and will be removed in future. Use torchrun.
142
+ Note that --use-env is set by default in torchrun.
143
+ If your script expects `--local-rank` argument to be set, please
144
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
145
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
146
+ further instructions
147
+
148
+ main()
149
+ W0621 21:14:25.580000 745731 site-packages/torch/distributed/run.py:766]
150
+ W0621 21:14:25.580000 745731 site-packages/torch/distributed/run.py:766] *****************************************
151
+ W0621 21:14:25.580000 745731 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
152
+ W0621 21:14:25.580000 745731 site-packages/torch/distributed/run.py:766] *****************************************
153
+ W0621 21:14:25.581000 1023500 site-packages/torch/distributed/run.py:766]
154
+ W0621 21:14:25.581000 1023500 site-packages/torch/distributed/run.py:766] *****************************************
155
+ W0621 21:14:25.581000 1023500 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
156
+ W0621 21:14:25.581000 1023500 site-packages/torch/distributed/run.py:766] *****************************************
157
+ [rank0]:[W621 21:14:50.312096532 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
158
+ [rank7]:[W621 21:14:50.525353333 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
159
+ [rank8]:[W621 21:14:50.183904831 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
160
+ [rank15]:[W621 21:14:50.186063934 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
161
+ [rank1]:[W621 21:14:50.547624534 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
162
+ [rank5]:[W621 21:14:50.548731566 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
163
+ [rank9]:[W621 21:14:50.211465708 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
164
+ [rank3]:[W621 21:14:50.550947312 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
165
+ [rank13]:[W621 21:14:50.211955200 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
166
+ [rank11]:[W621 21:14:50.214092084 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
167
+ [rank4]:[W621 21:14:50.553329856 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
168
+ [rank2]:[W621 21:14:50.554580721 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
169
+ [rank6]:[W621 21:14:50.555759595 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
170
+ [rank10]:[W621 21:14:50.217879933 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
171
+ [rank14]:[W621 21:14:50.219224621 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
172
+ [rank12]:[W621 21:14:50.219311960 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
attnserver.run_attnserver.slurm.sh.343203.out.log ADDED
@@ -0,0 +1,553 @@
1
+ Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=4
2
+ Cleaning up checkpoint directory: gpt-checkpoint
3
+ Cleaning up checkpoint directory: gpt-checkpoint
4
+ --------------------------------
5
+ CTX_LENGTH: 1024
6
+ TP_SIZE: 8
7
+ CP_SIZE: 2
8
+ CHECKPOINT_PATH: gpt-checkpoint
9
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
10
+ --------------------------------
11
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
12
+ --------------------------------
13
+ CTX_LENGTH: 1024
14
+ TP_SIZE: 8
15
+ CP_SIZE: 2
16
+ CHECKPOINT_PATH: gpt-checkpoint
17
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
18
+ --------------------------------
19
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
20
+ using world size: 16, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
21
+ Number of virtual stages per pipeline stage: None
22
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
23
+ using torch.float16 for parameters ...
24
+ ------------------------ arguments ------------------------
25
+ account_for_embedding_in_pipeline_split ......... False
26
+ account_for_loss_in_pipeline_split .............. False
27
+ accumulate_allreduce_grads_in_fp32 .............. False
28
+ adam_beta1 ...................................... 0.9
29
+ adam_beta2 ...................................... 0.999
30
+ adam_eps ........................................ 1e-08
31
+ add_bias_linear ................................. True
32
+ add_position_embedding .......................... True
33
+ add_qkv_bias .................................... True
34
+ adlr_autoresume ................................. False
35
+ adlr_autoresume_interval ........................ 1000
36
+ align_grad_reduce ............................... True
37
+ align_param_gather .............................. False
38
+ app_tag_run_name ................................ None
39
+ app_tag_run_version ............................. 0.0.0
40
+ apply_layernorm_1p .............................. False
41
+ apply_query_key_layer_scaling ................... False
42
+ apply_residual_connection_post_layernorm ........ False
43
+ apply_rope_fusion ............................... False
44
+ async_save ...................................... None
45
+ async_tensor_model_parallel_allreduce ........... True
46
+ attention_backend ............................... AttnBackend.auto
47
+ attention_dropout ............................... 0.1
48
+ attention_softmax_in_fp32 ....................... False
49
+ auto_detect_ckpt_format ......................... False
50
+ barrier_with_L1_time ............................ True
51
+ bert_binary_head ................................ True
52
+ bert_embedder_type .............................. megatron
53
+ bert_load ....................................... None
54
+ bf16 ............................................ False
55
+ bias_dropout_fusion ............................. True
56
+ bias_gelu_fusion ................................ True
57
+ bias_swiglu_fusion .............................. True
58
+ biencoder_projection_dim ........................ 0
59
+ biencoder_shared_query_context_model ............ False
60
+ block_data_path ................................. None
61
+ calc_ft_timeouts ................................ False
62
+ calculate_per_token_loss ........................ False
63
+ check_for_large_grads ........................... False
64
+ check_for_nan_in_loss_and_grad .................. False
65
+ check_for_spiky_loss ............................ False
66
+ check_weight_hash_across_dp_replicas_interval ... None
67
+ ckpt_assume_constant_structure .................. False
68
+ ckpt_convert_format ............................. None
69
+ ckpt_convert_save ............................... None
70
+ ckpt_convert_update_legacy_dist_opt_format ...... False
71
+ ckpt_format ..................................... torch_dist
72
+ ckpt_fully_parallel_load ........................ False
73
+ ckpt_fully_parallel_save ........................ True
74
+ ckpt_fully_parallel_save_deprecated ............. False
75
+ ckpt_step ....................................... None
76
+ classes_fraction ................................ 1.0
77
+ clip_grad ....................................... 1.0
78
+ clone_scatter_output_in_embedding ............... True
79
+ config_logger_dir ...............................
80
+ consumed_train_samples .......................... 0
81
+ consumed_valid_samples .......................... 0
82
+ context_parallel_size ........................... 2
83
+ cp_comm_type .................................... ['p2p']
84
+ create_attention_mask_in_dataloader ............. True
85
+ cross_entropy_fusion_impl ....................... native
86
+ cross_entropy_loss_fusion ....................... False
87
+ cuda_graph_scope ................................ full
88
+ cuda_graph_warmup_steps ......................... 3
89
+ data_args_path .................................. None
90
+ data_cache_path ................................. None
91
+ data_parallel_random_init ....................... False
92
+ data_parallel_sharding_strategy ................. no_shard
93
+ data_parallel_size .............................. 1
94
+ data_path ....................................... None
95
+ data_per_class_fraction ......................... 1.0
96
+ data_sharding ................................... True
97
+ dataloader_type ................................. single
98
+ ddp_average_in_collective ....................... False
99
+ ddp_bucket_size ................................. None
100
+ ddp_num_buckets ................................. None
101
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
102
+ decoder_first_pipeline_num_layers ............... None
103
+ decoder_last_pipeline_num_layers ................ None
104
+ decoder_num_layers .............................. None
105
+ decoder_seq_length .............................. None
106
+ decoupled_lr .................................... None
107
+ decoupled_min_lr ................................ None
108
+ decrease_batch_size_if_needed ................... False
109
+ defer_embedding_wgrad_compute ................... False
110
+ deprecated_use_mcore_models ..................... False
111
+ deterministic_mode .............................. False
112
+ dino_bottleneck_size ............................ 256
113
+ dino_freeze_last_layer .......................... 1
114
+ dino_head_hidden_size ........................... 2048
115
+ dino_local_crops_number ......................... 10
116
+ dino_local_img_size ............................. 96
117
+ dino_norm_last_layer ............................ False
118
+ dino_teacher_temp ............................... 0.07
119
+ dino_warmup_teacher_temp ........................ 0.04
120
+ dino_warmup_teacher_temp_epochs ................. 30
121
+ disable_bf16_reduced_precision_matmul ........... False
122
+ disable_mamba_mem_eff_path ...................... False
123
+ disable_straggler_on_startup .................... False
124
+ dist_ckpt_format_deprecated ..................... None
125
+ dist_ckpt_strictness ............................ assume_ok_unexpected
126
+ distribute_saved_activations .................... False
127
+ distributed_backend ............................. nccl
128
+ distributed_timeout_minutes ..................... 10
129
+ embedding_path .................................. None
130
+ empty_unused_memory_level ....................... 0
131
+ enable_cuda_graph ............................... False
132
+ enable_ft_package ............................... False
133
+ enable_gloo_process_groups ...................... True
134
+ enable_msc ...................................... True
135
+ enable_one_logger ............................... True
136
+ encoder_num_layers .............................. 2
137
+ encoder_pipeline_model_parallel_size ............ 0
138
+ encoder_seq_length .............................. 1024
139
+ encoder_tensor_model_parallel_size .............. 0
140
+ end_weight_decay ................................ 0.1
141
+ eod_mask_loss ................................... False
142
+ error_injection_rate ............................ 0
143
+ error_injection_type ............................ transient_error
144
+ eval_interval ................................... 16
145
+ eval_iters ...................................... 1
146
+ evidence_data_path .............................. None
147
+ exit_duration_in_mins ........................... None
148
+ exit_interval ................................... None
149
+ exit_on_missing_checkpoint ...................... False
150
+ exit_signal_handler ............................. False
151
+ exp_avg_dtype ................................... torch.float32
152
+ exp_avg_sq_dtype ................................ torch.float32
153
+ expert_model_parallel_size ...................... 1
154
+ expert_tensor_parallel_size ..................... 8
155
+ external_cuda_graph ............................. False
156
+ ffn_hidden_size ................................. 16384
157
+ finetune ........................................ False
158
+ first_last_layers_bf16 .......................... False
159
+ flash_decode .................................... False
160
+ fp16 ............................................ True
161
+ fp16_lm_cross_entropy ........................... False
162
+ fp32_residual_connection ........................ False
163
+ fp8 ............................................. None
164
+ fp8_amax_compute_algo ........................... most_recent
165
+ fp8_amax_history_len ............................ 1
166
+ fp8_interval .................................... 1
167
+ fp8_margin ...................................... 0
168
+ fp8_param_gather ................................ False
169
+ fp8_recipe ...................................... delayed
170
+ fp8_wgrad ....................................... True
171
+ fsdp_double_buffer .............................. False
172
+ global_batch_size ............................... 1
173
+ grad_reduce_in_bf16 ............................. False
174
+ gradient_accumulation_fusion .................... True
175
+ gradient_reduce_div_fusion ...................... True
176
+ group_query_attention ........................... True
177
+ head_lr_mult .................................... 1.0
178
+ heterogeneous_layers_config_encoded_json ........ None
179
+ heterogeneous_layers_config_path ................ None
180
+ hidden_dropout .................................. 0.1
181
+ hidden_size ..................................... 4096
182
+ hierarchical_context_parallel_sizes ............. None
183
+ high_priority_stream_groups ..................... []
184
+ hybrid_attention_ratio .......................... 0.0
185
+ hybrid_mlp_ratio ................................ 0.0
186
+ hybrid_override_pattern ......................... None
187
+ hysteresis ...................................... 2
188
+ ict_head_size ................................... None
189
+ ict_load ........................................ None
190
+ img_h ........................................... 224
191
+ img_w ........................................... 224
192
+ indexer_batch_size .............................. 128
193
+ indexer_log_interval ............................ 1000
194
+ inference_batch_times_seqlen_threshold .......... -1
195
+ inference_dynamic_batching ...................... False
196
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
197
+ inference_dynamic_batching_buffer_overflow_factor None
198
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
199
+ inference_dynamic_batching_chunk_size ........... 256
200
+ inference_dynamic_batching_max_requests_override None
201
+ inference_dynamic_batching_max_tokens_override .. None
202
+ inference_max_batch_size ........................ 8
203
+ inference_max_seq_length ........................ 2560
204
+ inference_rng_tracker ........................... False
205
+ init_method_std ................................. 0.02
206
+ init_method_xavier_uniform ...................... False
207
+ init_model_with_meta_device ..................... False
208
+ initial_loss_scale .............................. 4294967296
209
+ inprocess_active_world_size ..................... 16
210
+ inprocess_barrier_timeout ....................... 120
211
+ inprocess_completion_timeout .................... 120
212
+ inprocess_empty_cuda_cache ...................... False
213
+ inprocess_granularity ........................... node
214
+ inprocess_hard_timeout .......................... 90
215
+ inprocess_heartbeat_interval .................... 30
216
+ inprocess_heartbeat_timeout ..................... 60
217
+ inprocess_last_call_wait ........................ 1
218
+ inprocess_max_iterations ........................ None
219
+ inprocess_monitor_process_interval .............. 1.0
220
+ inprocess_monitor_thread_interval ............... 1.0
221
+ inprocess_progress_watchdog_interval ............ 1.0
222
+ inprocess_restart ............................... False
223
+ inprocess_soft_timeout .......................... 60
224
+ inprocess_termination_grace_time ................ 1
225
+ is_hybrid_model ................................. False
226
+ iter_per_epoch .................................. 1250
227
+ iterations_to_skip .............................. []
228
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
229
+ kv_channels ..................................... 64
230
+ kv_lora_rank .................................... 32
231
+ lazy_mpu_init ................................... None
232
+ load ............................................ gpt-checkpoint
233
+ load_model_opt_format ........................... False
234
+ local_rank ...................................... 0
235
+ log_interval .................................... 1
236
+ log_loss_scale_to_tensorboard ................... True
237
+ log_memory_to_tensorboard ....................... False
238
+ log_num_zeros_in_grad ........................... False
239
+ log_params_norm ................................. False
240
+ log_progress .................................... False
241
+ log_straggler ................................... False
242
+ log_throughput .................................. False
243
+ log_timers_to_tensorboard ....................... False
244
+ log_validation_ppl_to_tensorboard ............... False
245
+ log_world_size_to_tensorboard ................... False
246
+ logging_level ................................... 0
247
+ loss_scale ...................................... None
248
+ loss_scale_window ............................... 1000
249
+ lr .............................................. 0.0005
250
+ lr_decay_iters .................................. 150000
251
+ lr_decay_samples ................................ None
252
+ lr_decay_style .................................. cosine
253
+ lr_warmup_fraction .............................. None
254
+ lr_warmup_init .................................. 0.0
255
+ lr_warmup_iters ................................. 2
256
+ lr_warmup_samples ............................... 0
257
+ lr_wsd_decay_iters .............................. None
258
+ lr_wsd_decay_samples ............................ None
259
+ lr_wsd_decay_style .............................. exponential
260
+ main_grads_dtype ................................ torch.float32
261
+ main_params_dtype ............................... torch.float32
262
+ make_vocab_size_divisible_by .................... 128
263
+ mamba_head_dim .................................. 64
264
+ mamba_num_groups ................................ 8
265
+ mamba_num_heads ................................. None
266
+ mamba_state_dim ................................. 128
267
+ manual_gc ....................................... False
268
+ manual_gc_eval .................................. True
269
+ manual_gc_interval .............................. 0
270
+ mask_factor ..................................... 1.0
271
+ mask_prob ....................................... 0.15
272
+ mask_type ....................................... random
273
+ masked_softmax_fusion ........................... True
274
+ max_position_embeddings ......................... 1024
275
+ max_tokens_to_oom ............................... 12000
276
+ memory_snapshot_path ............................ snapshot.pickle
277
+ merge_file ...................................... merges.txt
278
+ micro_batch_size ................................ 1
279
+ microbatch_group_size_per_vp_stage .............. None
280
+ mid_level_dataset_surplus ....................... 0.005
281
+ min_loss_scale .................................. 1.0
282
+ min_lr .......................................... 0.0
283
+ mlp_chunks_for_prefill .......................... 1
284
+ mmap_bin_files .................................. True
285
+ mock_data ....................................... True
286
+ moe_apply_probs_on_input ........................ False
287
+ moe_aux_loss_coeff .............................. 0.0
288
+ moe_enable_deepep ............................... False
289
+ moe_expert_capacity_factor ...................... None
290
+ moe_extended_tp ................................. False
291
+ moe_ffn_hidden_size ............................. None
292
+ moe_grouped_gemm ................................ False
293
+ moe_input_jitter_eps ............................ None
294
+ moe_layer_freq .................................. 1
295
+ moe_layer_recompute ............................. False
296
+ moe_pad_expert_input_to_capacity ................ False
297
+ moe_per_layer_logging ........................... False
298
+ moe_permute_fusion .............................. False
299
+ moe_router_bias_update_rate ..................... 0.001
300
+ moe_router_dtype ................................ None
301
+ moe_router_enable_expert_bias ................... False
302
+ moe_router_force_load_balancing ................. False
303
+ moe_router_group_topk ........................... None
304
+ moe_router_load_balancing_type .................. aux_loss
305
+ moe_router_num_groups ........................... None
306
+ moe_router_padding_for_fp8 ...................... False
307
+ moe_router_pre_softmax .......................... False
308
+ moe_router_score_function ....................... softmax
309
+ moe_router_topk ................................. 2
310
+ moe_router_topk_scaling_factor .................. None
311
+ moe_shared_expert_intermediate_size ............. None
312
+ moe_shared_expert_overlap ....................... False
313
+ moe_token_dispatcher_type ....................... allgather
314
+ moe_token_drop_policy ........................... probs
315
+ moe_use_legacy_grouped_gemm ..................... False
316
+ moe_use_upcycling ............................... False
317
+ moe_z_loss_coeff ................................ None
318
+ mrope_section ................................... None
319
+ mscale .......................................... 1.0
320
+ mscale_all_dim .................................. 1.0
321
+ mtp_loss_scaling_factor ......................... 0.1
322
+ mtp_num_layers .................................. None
323
+ multi_latent_attention .......................... False
324
+ nccl_all_reduce_for_prefill ..................... False
325
+ nccl_communicator_config_path ................... None
326
+ nccl_ub ......................................... False
327
+ no_load_optim ................................... None
328
+ no_load_rng ..................................... None
329
+ no_persist_layer_norm ........................... False
330
+ no_rope_freq .................................... None
331
+ no_save_optim ................................... None
332
+ no_save_rng ..................................... None
333
+ non_persistent_ckpt_type ........................ None
334
+ non_persistent_global_ckpt_dir .................. None
335
+ non_persistent_local_ckpt_algo .................. fully_parallel
336
+ non_persistent_local_ckpt_dir ................... None
337
+ non_persistent_save_interval .................... None
338
+ norm_epsilon .................................... 1e-05
339
+ normalization ................................... LayerNorm
340
+ num_attention_heads ............................. 64
341
+ num_channels .................................... 3
342
+ num_classes ..................................... 1000
343
+ num_dataset_builder_threads ..................... 1
344
+ num_distributed_optimizer_instances ............. 1
345
+ num_experts ..................................... None
346
+ num_layers ...................................... 2
347
+ num_layers_at_end_in_bf16 ....................... 1
348
+ num_layers_at_start_in_bf16 ..................... 1
349
+ num_layers_per_virtual_pipeline_stage ........... None
350
+ num_query_groups ................................ 16
351
+ num_virtual_stages_per_pipeline_rank ............ None
352
+ num_workers ..................................... 2
353
+ object_storage_cache_path ....................... None
354
+ one_logger_async ................................ False
355
+ one_logger_project .............................. megatron-lm
356
+ one_logger_run_name ............................. None
357
+ onnx_safe ....................................... None
358
+ openai_gelu ..................................... False
359
+ optimizer ....................................... adam
360
+ optimizer_cpu_offload ........................... False
361
+ optimizer_offload_fraction ...................... 1.0
362
+ output_bert_embeddings .......................... False
363
+ overlap_cpu_optimizer_d2h_h2d ................... False
364
+ overlap_grad_reduce ............................. False
365
+ overlap_p2p_comm ................................ False
366
+ overlap_p2p_comm_warmup_flush ................... False
367
+ overlap_param_gather ............................ False
368
+ overlap_param_gather_with_optimizer_step ........ False
369
+ override_opt_param_scheduler .................... False
370
+ params_dtype .................................... torch.float16
371
+ patch_dim ....................................... 16
372
+ per_split_data_args_path ........................ None
373
+ perform_initialization .......................... True
374
+ pin_cpu_grads ................................... True
375
+ pin_cpu_params .................................. True
376
+ pipeline_model_parallel_comm_backend ............ None
377
+ pipeline_model_parallel_size .................... 1
378
+ pipeline_model_parallel_split_rank .............. None
379
+ position_embedding_type ......................... learned_absolute
380
+ pretrained_checkpoint ........................... None
381
+ profile ......................................... False
382
+ profile_ranks ................................... [0]
383
+ profile_step_end ................................ 12
384
+ profile_step_start .............................. 10
385
+ q_lora_rank ..................................... None
386
+ qk_head_dim ..................................... 128
387
+ qk_l2_norm ...................................... False
388
+ qk_layernorm .................................... False
389
+ qk_pos_emb_head_dim ............................. 64
390
+ query_in_block_prob ............................. 0.1
391
+ rampup_batch_size ............................... None
392
+ rank ............................................ 0
393
+ recompute_granularity ........................... None
394
+ recompute_method ................................ None
395
+ recompute_modules ............................... None
396
+ recompute_num_layers ............................ None
397
+ record_memory_history ........................... False
398
+ relative_attention_max_distance ................. 128
399
+ relative_attention_num_buckets .................. 32
400
+ replication ..................................... False
401
+ replication_factor .............................. 2
402
+ replication_jump ................................ None
403
+ rerun_mode ...................................... disabled
404
+ reset_attention_mask ............................ False
405
+ reset_position_ids .............................. False
406
+ result_rejected_tracker_filename ................ None
407
+ retriever_report_topk_accuracies ................ []
408
+ retriever_score_scaling ......................... False
409
+ retriever_seq_length ............................ 256
410
+ retro_add_retriever ............................. False
411
+ retro_attention_gate ............................ 1
412
+ retro_cyclic_train_iters ........................ None
413
+ retro_encoder_attention_dropout ................. 0.1
414
+ retro_encoder_hidden_dropout .................... 0.1
415
+ retro_encoder_layers ............................ 2
416
+ retro_num_neighbors ............................. 2
417
+ retro_num_retrieved_chunks ...................... 2
418
+ retro_project_dir ............................... None
419
+ retro_verify_neighbor_count ..................... True
420
+ rope_scaling_factor ............................. 8.0
421
+ rotary_base ..................................... 10000
422
+ rotary_interleaved .............................. False
423
+ rotary_percent .................................. 1.0
424
+ rotary_scaling_factor ........................... 1.0
425
+ rotary_seq_len_interpolation_factor ............. None
426
+ run_workload_inspector_server ................... False
427
+ sample_rate ..................................... 1.0
428
+ save ............................................ gpt-checkpoint
429
+ save_interval ................................... 16
430
+ scatter_gather_tensors_in_pipeline .............. True
431
+ seed ............................................ 1234
432
+ seq_length ...................................... 1024
433
+ sequence_parallel ............................... False
434
+ sgd_momentum .................................... 0.9
435
+ short_seq_prob .................................. 0.1
436
+ skip_train ...................................... False
437
+ skipped_train_samples ........................... 0
438
+ spec ............................................ None
439
+ split ........................................... None
440
+ squared_relu .................................... False
441
+ start_weight_decay .............................. 0.1
442
+ straggler_ctrlr_port ............................ 65535
443
+ straggler_minmax_count .......................... 1
444
+ suggested_communication_unit_size ............... None
445
+ swiglu .......................................... False
446
+ swin_backbone_type .............................. tiny
447
+ symmetric_ar_type ............................... None
448
+ te_rng_tracker .................................. False
449
+ tensor_model_parallel_size ...................... 8
450
+ tensorboard_dir ................................. tensorboard-logs/
451
+ tensorboard_log_interval ........................ 1
452
+ tensorboard_queue_size .......................... 1000
453
+ test_data_path .................................. None
454
+ test_mode ....................................... False
455
+ tiktoken_num_special_tokens ..................... 1000
456
+ tiktoken_pattern ................................ None
457
+ tiktoken_special_tokens ......................... None
458
+ timing_log_level ................................ 0
459
+ timing_log_option ............................... minmax
460
+ titles_data_path ................................ None
461
+ tokenizer_model ................................. None
462
+ tokenizer_type .................................. GPT2BPETokenizer
463
+ torch_fsdp2_reshard_after_forward ............... True
464
+ tp_comm_bootstrap_backend ....................... nccl
465
+ tp_comm_bulk_dgrad .............................. True
466
+ tp_comm_bulk_wgrad .............................. True
467
+ tp_comm_overlap ................................. False
468
+ tp_comm_overlap_ag .............................. True
469
+ tp_comm_overlap_cfg ............................. None
470
+ tp_comm_overlap_rs .............................. True
471
+ tp_comm_overlap_rs_dgrad ........................ False
472
+ tp_comm_split_ag ................................ True
473
+ tp_comm_split_rs ................................ True
474
+ train_data_path ................................. None
475
+ train_iters ..................................... 10
476
+ train_samples ................................... None
477
+ train_sync_interval ............................. None
478
+ transformer_impl ................................ transformer_engine
479
+ transformer_pipeline_model_parallel_size ........ 1
480
+ untie_embeddings_and_output_weights ............. False
481
+ use_checkpoint_args ............................. False
482
+ use_checkpoint_opt_param_scheduler .............. False
483
+ use_cpu_initialization .......................... None
484
+ use_custom_fsdp ................................. False
485
+ use_dist_ckpt ................................... True
486
+ use_dist_ckpt_deprecated ........................ False
487
+ use_distributed_optimizer ....................... False
488
+ use_flash_attn .................................. False
489
+ use_legacy_models ............................... False
490
+ use_mp_args_from_checkpoint_args ................ False
491
+ use_one_sent_docs ............................... False
492
+ use_persistent_ckpt_worker ...................... False
493
+ use_precision_aware_optimizer ................... False
494
+ use_pytorch_profiler ............................ False
495
+ use_ring_exchange_p2p ........................... False
496
+ use_rope_scaling ................................ False
497
+ use_rotary_position_embeddings .................. False
498
+ use_sharp ....................................... False
499
+ use_tokenizer_model_from_checkpoint_args ........ True
500
+ use_torch_fsdp2 ................................. False
501
+ use_torch_optimizer_for_cpu_offload ............. False
502
+ use_tp_pp_dp_mapping ............................ False
503
+ v_head_dim ...................................... 128
504
+ valid_data_path ................................. None
505
+ variable_seq_lengths ............................ False
506
+ virtual_pipeline_model_parallel_size ............ None
507
+ vision_backbone_type ............................ vit
508
+ vision_pretraining .............................. False
509
+ vision_pretraining_type ......................... classify
510
+ vocab_extra_ids ................................. 0
511
+ vocab_file ...................................... vocab.json
512
+ vocab_size ...................................... None
513
+ wandb_exp_name ..................................
514
+ wandb_project ...................................
515
+ wandb_save_dir ..................................
516
+ weight_decay .................................... 0.1
517
+ weight_decay_incr_style ......................... constant
518
+ wgrad_deferral_limit ............................ 0
519
+ world_size ...................................... 16
520
+ yaml_cfg ........................................ None
521
+ -------------------- end of arguments ---------------------
522
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
523
+ > building GPT2BPETokenizer tokenizer ...
524
+ > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
525
+ INFO:megatron.training.initialize:Setting logging level to 0
526
+ INFO:megatron.training.initialize:Setting logging level to 0
527
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
528
+ > initializing torch distributed ...
529
+ INFO:megatron.training.initialize:Setting logging level to 0
530
+ INFO:megatron.training.initialize:Setting logging level to 0
531
+ INFO:megatron.training.initialize:Setting logging level to 0
532
+ INFO:megatron.training.initialize:Setting logging level to 0
533
+ INFO:megatron.training.initialize:Setting logging level to 0
534
+ INFO:megatron.training.initialize:Setting logging level to 0
535
+ INFO:megatron.training.initialize:Setting logging level to 0
536
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
537
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
538
+ INFO:megatron.training.initialize:Setting logging level to 0
539
+ > initialized tensor model parallel with size 8
540
+ > initialized pipeline model parallel with size 1
541
+ > setting random seeds to 1234 ...
542
+ > compiling dataset index builder ...
543
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
544
+ INFO:megatron.training.initialize:Setting logging level to 0
545
+ INFO:megatron.training.initialize:Setting logging level to 0
546
+ INFO:megatron.training.initialize:Setting logging level to 0
547
+ INFO:megatron.training.initialize:Setting logging level to 0
548
+ INFO:megatron.training.initialize:Setting logging level to 0
549
+ make: Nothing to be done for 'default'.
550
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
551
+ >>> done with dataset index builder. Compilation time: 0.042 seconds
552
+ > compiling and loading fused kernels ...
553
+ INFO:megatron.training.initialize:Setting logging level to 0
attnserver.run_attnserver.slurm.sh.343204.err.log ADDED
@@ -0,0 +1,172 @@
1
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
2
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
3
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
4
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
5
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
6
+ +++ export _CE_M=
7
+ +++ _CE_M=
8
+ +++ export _CE_CONDA=
9
+ +++ _CE_CONDA=
10
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
11
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
12
+ +++ '[' -z x ']'
13
+ ++ conda activate
14
+ ++ local cmd=activate
15
+ ++ case "$cmd" in
16
+ ++ __conda_activate activate
17
+ ++ '[' -n '' ']'
18
+ ++ local ask_conda
19
+ +++ PS1=
20
+ +++ __conda_exe shell.posix activate
21
+ +++ '[' -n '' ']'
22
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
23
+ ++ ask_conda='unset _CE_M
24
+ unset _CE_CONDA
25
+ PS1='\''(base) '\''
26
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
27
+ export CONDA_SHLVL='\''1'\''
28
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
29
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
30
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
31
+ ++ eval 'unset _CE_M
32
+ unset _CE_CONDA
33
+ PS1='\''(base) '\''
34
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
35
+ export CONDA_SHLVL='\''1'\''
36
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
37
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
38
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
39
+ +++ unset _CE_M
40
+ +++ unset _CE_CONDA
41
+ +++ PS1='(base) '
42
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
43
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
44
+ +++ export CONDA_SHLVL=1
45
+ +++ CONDA_SHLVL=1
46
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
47
+ +++ CONDA_PROMPT_MODIFIER='(base) '
48
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
49
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
50
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
51
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
52
+ ++ __conda_hashr
53
+ ++ '[' -n '' ']'
54
+ ++ '[' -n '' ']'
55
+ ++ hash -r
56
+ + conda activate junda-attnserver
57
+ + local cmd=activate
58
+ + case "$cmd" in
59
+ + __conda_activate activate junda-attnserver
60
+ + '[' -n '' ']'
61
+ + local ask_conda
62
+ ++ PS1='(base) '
63
+ ++ __conda_exe shell.posix activate junda-attnserver
64
+ ++ '[' -n '' ']'
65
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
66
+ + ask_conda='unset _CE_M
67
+ unset _CE_CONDA
68
+ PS1='\''(junda-attnserver) '\''
69
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
70
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
71
+ export CONDA_SHLVL='\''2'\''
72
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
73
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
74
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
75
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
76
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
77
+ + eval 'unset _CE_M
78
+ unset _CE_CONDA
79
+ PS1='\''(junda-attnserver) '\''
80
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
81
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
82
+ export CONDA_SHLVL='\''2'\''
83
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
84
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
85
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
86
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
87
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
88
+ ++ unset _CE_M
89
+ ++ unset _CE_CONDA
90
+ ++ PS1='(junda-attnserver) '
91
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
92
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
93
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
94
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
95
+ ++ export CONDA_SHLVL=2
96
+ ++ CONDA_SHLVL=2
97
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
98
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
99
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
100
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
101
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
102
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
103
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
104
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
105
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
106
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
107
+ + __conda_hashr
108
+ + '[' -n '' ']'
109
+ + '[' -n '' ']'
110
+ + hash -r
111
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
112
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
113
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
114
+ + export PROF_TP_SIZE=8
115
+ + PROF_TP_SIZE=8
116
+ + export PROF_CP_SIZE=2
117
+ + PROF_CP_SIZE=2
118
+ + export PROF_BS=8
119
+ + PROF_BS=8
120
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
121
+ + export PROF_CTX_LENGTH=1024
122
+ + PROF_CTX_LENGTH=1024
123
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs8.json'
124
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs8.json' ']'
125
+ + echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=8'
126
+ + srun bash ./attnserver.sh
127
+ + which python3
128
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343204 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
129
+ + which python3
130
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343204 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
131
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
132
+ and will be removed in future. Use torchrun.
133
+ Note that --use-env is set by default in torchrun.
134
+ If your script expects `--local-rank` argument to be set, please
135
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
136
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
137
+ further instructions
138
+
139
+ main()
140
+ W0621 21:14:24.484000 714964 site-packages/torch/distributed/run.py:766]
141
+ W0621 21:14:24.484000 714964 site-packages/torch/distributed/run.py:766] *****************************************
142
+ W0621 21:14:24.484000 714964 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
143
+ W0621 21:14:24.484000 714964 site-packages/torch/distributed/run.py:766] *****************************************
144
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
145
+ and will be removed in future. Use torchrun.
146
+ Note that --use-env is set by default in torchrun.
147
+ If your script expects `--local-rank` argument to be set, please
148
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
149
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
150
+ further instructions
151
+
152
+ main()
153
+ W0621 21:14:24.530000 1696942 site-packages/torch/distributed/run.py:766]
154
+ W0621 21:14:24.530000 1696942 site-packages/torch/distributed/run.py:766] *****************************************
155
+ W0621 21:14:24.530000 1696942 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
156
+ W0621 21:14:24.530000 1696942 site-packages/torch/distributed/run.py:766] *****************************************
157
+ [rank11]:[W621 21:14:47.056171966 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
158
+ [rank3]:[W621 21:14:47.466190298 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
159
+ [rank12]:[W621 21:14:47.078944088 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
160
+ [rank8]:[W621 21:14:47.079965203 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
161
+ [rank4]:[W621 21:14:47.502964203 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
162
+ [rank14]:[W621 21:14:47.237324628 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
163
+ [rank6]:[W621 21:14:47.651750207 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
164
+ [rank0]:[W621 21:14:47.678463549 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
165
+ [rank7]:[W621 21:14:47.698089708 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
166
+ [rank2]:[W621 21:14:47.698584426 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
167
+ [rank15]:[W621 21:14:47.290367699 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
168
+ [rank5]:[W621 21:14:47.700404013 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
169
+ [rank10]:[W621 21:14:47.291263320 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
170
+ [rank1]:[W621 21:14:47.701985784 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
171
+ [rank9]:[W621 21:14:47.294021018 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
172
+ [rank13]:[W621 21:14:47.294471701 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
attnserver.run_attnserver.slurm.sh.343204.out.log ADDED
@@ -0,0 +1,554 @@
1
+ Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=8
2
+ Cleaning up checkpoint directory: gpt-checkpoint
3
+ --------------------------------
4
+ CTX_LENGTH: 1024
5
+ TP_SIZE: 8
6
+ CP_SIZE: 2
7
+ CHECKPOINT_PATH: gpt-checkpoint
8
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
9
+ --------------------------------
10
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
11
+ Cleaning up checkpoint directory: gpt-checkpoint
12
+ --------------------------------
13
+ CTX_LENGTH: 1024
14
+ TP_SIZE: 8
15
+ CP_SIZE: 2
16
+ CHECKPOINT_PATH: gpt-checkpoint
17
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
18
+ --------------------------------
19
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
20
+ INFO:megatron.training.initialize:Setting logging level to 0
21
+ INFO:megatron.training.initialize:Setting logging level to 0
22
+ INFO:megatron.training.initialize:Setting logging level to 0
23
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
24
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
25
+ INFO:megatron.training.initialize:Setting logging level to 0
26
+ INFO:megatron.training.initialize:Setting logging level to 0
27
+ INFO:megatron.training.initialize:Setting logging level to 0
28
+ INFO:megatron.training.initialize:Setting logging level to 0
29
+ INFO:megatron.training.initialize:Setting logging level to 0
30
+ INFO:megatron.training.initialize:Setting logging level to 0
31
+ INFO:megatron.training.initialize:Setting logging level to 0
32
+ INFO:megatron.training.initialize:Setting logging level to 0
33
+ using world size: 16, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
34
+ Number of virtual stages per pipeline stage: None
35
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
36
+ using torch.float16 for parameters ...
37
+ ------------------------ arguments ------------------------
38
+ account_for_embedding_in_pipeline_split ......... False
39
+ account_for_loss_in_pipeline_split .............. False
40
+ accumulate_allreduce_grads_in_fp32 .............. False
41
+ adam_beta1 ...................................... 0.9
42
+ adam_beta2 ...................................... 0.999
43
+ adam_eps ........................................ 1e-08
44
+ add_bias_linear ................................. True
45
+ add_position_embedding .......................... True
46
+ add_qkv_bias .................................... True
47
+ adlr_autoresume ................................. False
48
+ adlr_autoresume_interval ........................ 1000
49
+ align_grad_reduce ............................... True
50
+ align_param_gather .............................. False
51
+ app_tag_run_name ................................ None
52
+ app_tag_run_version ............................. 0.0.0
53
+ apply_layernorm_1p .............................. False
54
+ apply_query_key_layer_scaling ................... False
55
+ apply_residual_connection_post_layernorm ........ False
56
+ apply_rope_fusion ............................... False
57
+ async_save ...................................... None
58
+ async_tensor_model_parallel_allreduce ........... True
59
+ attention_backend ............................... AttnBackend.auto
60
+ attention_dropout ............................... 0.1
61
+ attention_softmax_in_fp32 ....................... False
62
+ auto_detect_ckpt_format ......................... False
63
+ barrier_with_L1_time ............................ True
64
+ bert_binary_head ................................ True
65
+ bert_embedder_type .............................. megatron
66
+ bert_load ....................................... None
67
+ bf16 ............................................ False
68
+ bias_dropout_fusion ............................. True
69
+ bias_gelu_fusion ................................ True
70
+ bias_swiglu_fusion .............................. True
71
+ biencoder_projection_dim ........................ 0
72
+ biencoder_shared_query_context_model ............ False
73
+ block_data_path ................................. None
74
+ calc_ft_timeouts ................................ False
75
+ calculate_per_token_loss ........................ False
76
+ check_for_large_grads ........................... False
77
+ check_for_nan_in_loss_and_grad .................. False
78
+ check_for_spiky_loss ............................ False
79
+ check_weight_hash_across_dp_replicas_interval ... None
80
+ ckpt_assume_constant_structure .................. False
81
+ ckpt_convert_format ............................. None
82
+ ckpt_convert_save ............................... None
83
+ ckpt_convert_update_legacy_dist_opt_format ...... False
84
+ ckpt_format ..................................... torch_dist
85
+ ckpt_fully_parallel_load ........................ False
86
+ ckpt_fully_parallel_save ........................ True
87
+ ckpt_fully_parallel_save_deprecated ............. False
88
+ ckpt_step ....................................... None
89
+ classes_fraction ................................ 1.0
90
+ clip_grad ....................................... 1.0
91
+ clone_scatter_output_in_embedding ............... True
92
+ config_logger_dir ...............................
93
+ consumed_train_samples .......................... 0
94
+ consumed_valid_samples .......................... 0
95
+ context_parallel_size ........................... 2
96
+ cp_comm_type .................................... ['p2p']
97
+ create_attention_mask_in_dataloader ............. True
98
+ cross_entropy_fusion_impl ....................... native
99
+ cross_entropy_loss_fusion ....................... False
100
+ cuda_graph_scope ................................ full
101
+ cuda_graph_warmup_steps ......................... 3
102
+ data_args_path .................................. None
103
+ data_cache_path ................................. None
104
+ data_parallel_random_init ....................... False
105
+ data_parallel_sharding_strategy ................. no_shard
106
+ data_parallel_size .............................. 1
107
+ data_path ....................................... None
108
+ data_per_class_fraction ......................... 1.0
109
+ data_sharding ................................... True
110
+ dataloader_type ................................. single
111
+ ddp_average_in_collective ....................... False
112
+ ddp_bucket_size ................................. None
113
+ ddp_num_buckets ................................. None
114
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
115
+ decoder_first_pipeline_num_layers ............... None
116
+ decoder_last_pipeline_num_layers ................ None
117
+ decoder_num_layers .............................. None
118
+ decoder_seq_length .............................. None
119
+ decoupled_lr .................................... None
120
+ decoupled_min_lr ................................ None
121
+ decrease_batch_size_if_needed ................... False
122
+ defer_embedding_wgrad_compute ................... False
123
+ deprecated_use_mcore_models ..................... False
124
+ deterministic_mode .............................. False
125
+ dino_bottleneck_size ............................ 256
126
+ dino_freeze_last_layer .......................... 1
127
+ dino_head_hidden_size ........................... 2048
128
+ dino_local_crops_number ......................... 10
129
+ dino_local_img_size ............................. 96
130
+ dino_norm_last_layer ............................ False
131
+ dino_teacher_temp ............................... 0.07
132
+ dino_warmup_teacher_temp ........................ 0.04
133
+ dino_warmup_teacher_temp_epochs ................. 30
134
+ disable_bf16_reduced_precision_matmul ........... False
135
+ disable_mamba_mem_eff_path ...................... False
136
+ disable_straggler_on_startup .................... False
137
+ dist_ckpt_format_deprecated ..................... None
138
+ dist_ckpt_strictness ............................ assume_ok_unexpected
139
+ distribute_saved_activations .................... False
140
+ distributed_backend ............................. nccl
141
+ distributed_timeout_minutes ..................... 10
142
+ embedding_path .................................. None
143
+ empty_unused_memory_level ....................... 0
144
+ enable_cuda_graph ............................... False
145
+ enable_ft_package ............................... False
146
+ enable_gloo_process_groups ...................... True
147
+ enable_msc ...................................... True
148
+ enable_one_logger ............................... True
149
+ encoder_num_layers .............................. 2
150
+ encoder_pipeline_model_parallel_size ............ 0
151
+ encoder_seq_length .............................. 1024
152
+ encoder_tensor_model_parallel_size .............. 0
153
+ end_weight_decay ................................ 0.1
154
+ eod_mask_loss ................................... False
155
+ error_injection_rate ............................ 0
156
+ error_injection_type ............................ transient_error
157
+ eval_interval ................................... 16
158
+ eval_iters ...................................... 1
159
+ evidence_data_path .............................. None
160
+ exit_duration_in_mins ........................... None
161
+ exit_interval ................................... None
162
+ exit_on_missing_checkpoint ...................... False
163
+ exit_signal_handler ............................. False
164
+ exp_avg_dtype ................................... torch.float32
165
+ exp_avg_sq_dtype ................................ torch.float32
166
+ expert_model_parallel_size ...................... 1
167
+ expert_tensor_parallel_size ..................... 8
168
+ external_cuda_graph ............................. False
169
+ ffn_hidden_size ................................. 16384
170
+ finetune ........................................ False
171
+ first_last_layers_bf16 .......................... False
172
+ flash_decode .................................... False
173
+ fp16 ............................................ True
174
+ fp16_lm_cross_entropy ........................... False
175
+ fp32_residual_connection ........................ False
176
+ fp8 ............................................. None
177
+ fp8_amax_compute_algo ........................... most_recent
178
+ fp8_amax_history_len ............................ 1
179
+ fp8_interval .................................... 1
180
+ fp8_margin ...................................... 0
181
+ fp8_param_gather ................................ False
182
+ fp8_recipe ...................................... delayed
183
+ fp8_wgrad ....................................... True
184
+ fsdp_double_buffer .............................. False
185
+ global_batch_size ............................... 1
186
+ grad_reduce_in_bf16 ............................. False
187
+ gradient_accumulation_fusion .................... True
188
+ gradient_reduce_div_fusion ...................... True
189
+ group_query_attention ........................... True
190
+ head_lr_mult .................................... 1.0
191
+ heterogeneous_layers_config_encoded_json ........ None
192
+ heterogeneous_layers_config_path ................ None
193
+ hidden_dropout .................................. 0.1
194
+ hidden_size ..................................... 4096
195
+ hierarchical_context_parallel_sizes ............. None
196
+ high_priority_stream_groups ..................... []
197
+ hybrid_attention_ratio .......................... 0.0
198
+ hybrid_mlp_ratio ................................ 0.0
199
+ hybrid_override_pattern ......................... None
200
+ hysteresis ...................................... 2
201
+ ict_head_size ................................... None
202
+ ict_load ........................................ None
203
+ img_h ........................................... 224
204
+ img_w ........................................... 224
205
+ indexer_batch_size .............................. 128
206
+ indexer_log_interval ............................ 1000
207
+ inference_batch_times_seqlen_threshold .......... -1
208
+ inference_dynamic_batching ...................... False
209
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
210
+ inference_dynamic_batching_buffer_overflow_factor None
211
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
212
+ inference_dynamic_batching_chunk_size ........... 256
213
+ inference_dynamic_batching_max_requests_override None
214
+ inference_dynamic_batching_max_tokens_override .. None
215
+ inference_max_batch_size ........................ 8
216
+ inference_max_seq_length ........................ 2560
217
+ inference_rng_tracker ........................... False
218
+ init_method_std ................................. 0.02
219
+ init_method_xavier_uniform ...................... False
220
+ init_model_with_meta_device ..................... False
221
+ initial_loss_scale .............................. 4294967296
222
+ inprocess_active_world_size ..................... 16
223
+ inprocess_barrier_timeout ....................... 120
224
+ inprocess_completion_timeout .................... 120
225
+ inprocess_empty_cuda_cache ...................... False
226
+ inprocess_granularity ........................... node
227
+ inprocess_hard_timeout .......................... 90
228
+ inprocess_heartbeat_interval .................... 30
229
+ inprocess_heartbeat_timeout ..................... 60
230
+ inprocess_last_call_wait ........................ 1
231
+ inprocess_max_iterations ........................ None
232
+ inprocess_monitor_process_interval .............. 1.0
233
+ inprocess_monitor_thread_interval ............... 1.0
234
+ inprocess_progress_watchdog_interval ............ 1.0
235
+ inprocess_restart ............................... False
236
+ inprocess_soft_timeout .......................... 60
237
+ inprocess_termination_grace_time ................ 1
238
+ is_hybrid_model ................................. False
239
+ iter_per_epoch .................................. 1250
240
+ iterations_to_skip .............................. []
241
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
242
+ kv_channels ..................................... 64
243
+ kv_lora_rank .................................... 32
244
+ lazy_mpu_init ................................... None
245
+ load ............................................ gpt-checkpoint
246
+ load_model_opt_format ........................... False
247
+ local_rank ...................................... 0
248
+ log_interval .................................... 1
249
+ log_loss_scale_to_tensorboard ................... True
250
+ log_memory_to_tensorboard ....................... False
251
+ log_num_zeros_in_grad ........................... False
252
+ log_params_norm ................................. False
253
+ log_progress .................................... False
254
+ log_straggler ................................... False
255
+ log_throughput .................................. False
256
+ log_timers_to_tensorboard ....................... False
257
+ log_validation_ppl_to_tensorboard ............... False
258
+ log_world_size_to_tensorboard ................... False
259
+ logging_level ................................... 0
260
+ loss_scale ...................................... None
261
+ loss_scale_window ............................... 1000
262
+ lr .............................................. 0.0005
263
+ lr_decay_iters .................................. 150000
264
+ lr_decay_samples ................................ None
265
+ lr_decay_style .................................. cosine
266
+ lr_warmup_fraction .............................. None
267
+ lr_warmup_init .................................. 0.0
268
+ lr_warmup_iters ................................. 2
269
+ lr_warmup_samples ............................... 0
270
+ lr_wsd_decay_iters .............................. None
271
+ lr_wsd_decay_samples ............................ None
272
+ lr_wsd_decay_style .............................. exponential
273
+ main_grads_dtype ................................ torch.float32
274
+ main_params_dtype ............................... torch.float32
275
+ make_vocab_size_divisible_by .................... 128
276
+ mamba_head_dim .................................. 64
277
+ mamba_num_groups ................................ 8
278
+ mamba_num_heads ................................. None
279
+ mamba_state_dim ................................. 128
280
+ manual_gc ....................................... False
281
+ manual_gc_eval .................................. True
282
+ manual_gc_interval .............................. 0
283
+ mask_factor ..................................... 1.0
284
+ mask_prob ....................................... 0.15
285
+ mask_type ....................................... random
286
+ masked_softmax_fusion ........................... True
287
+ max_position_embeddings ......................... 1024
288
+ max_tokens_to_oom ............................... 12000
289
+ memory_snapshot_path ............................ snapshot.pickle
290
+ merge_file ...................................... merges.txt
291
+ micro_batch_size ................................ 1
292
+ microbatch_group_size_per_vp_stage .............. None
293
+ mid_level_dataset_surplus ....................... 0.005
294
+ min_loss_scale .................................. 1.0
295
+ min_lr .......................................... 0.0
296
+ mlp_chunks_for_prefill .......................... 1
297
+ mmap_bin_files .................................. True
298
+ mock_data ....................................... True
299
+ moe_apply_probs_on_input ........................ False
300
+ moe_aux_loss_coeff .............................. 0.0
301
+ moe_enable_deepep ............................... False
302
+ moe_expert_capacity_factor ...................... None
303
+ moe_extended_tp ................................. False
304
+ moe_ffn_hidden_size ............................. None
305
+ moe_grouped_gemm ................................ False
306
+ moe_input_jitter_eps ............................ None
307
+ moe_layer_freq .................................. 1
308
+ moe_layer_recompute ............................. False
309
+ moe_pad_expert_input_to_capacity ................ False
310
+ moe_per_layer_logging ........................... False
311
+ moe_permute_fusion .............................. False
312
+ moe_router_bias_update_rate ..................... 0.001
313
+ moe_router_dtype ................................ None
314
+ moe_router_enable_expert_bias ................... False
315
+ moe_router_force_load_balancing ................. False
316
+ moe_router_group_topk ........................... None
317
+ moe_router_load_balancing_type .................. aux_loss
318
+ moe_router_num_groups ........................... None
319
+ moe_router_padding_for_fp8 ...................... False
320
+ moe_router_pre_softmax .......................... False
321
+ moe_router_score_function ....................... softmax
322
+ moe_router_topk ................................. 2
323
+ moe_router_topk_scaling_factor .................. None
324
+ moe_shared_expert_intermediate_size ............. None
325
+ moe_shared_expert_overlap ....................... False
326
+ moe_token_dispatcher_type ....................... allgather
327
+ moe_token_drop_policy ........................... probs
328
+ moe_use_legacy_grouped_gemm ..................... False
329
+ moe_use_upcycling ............................... False
330
+ moe_z_loss_coeff ................................ None
331
+ mrope_section ................................... None
332
+ mscale .......................................... 1.0
333
+ mscale_all_dim .................................. 1.0
334
+ mtp_loss_scaling_factor ......................... 0.1
335
+ mtp_num_layers .................................. None
336
+ multi_latent_attention .......................... False
337
+ nccl_all_reduce_for_prefill ..................... False
338
+ nccl_communicator_config_path ................... None
339
+ nccl_ub ......................................... False
340
+ no_load_optim ................................... None
341
+ no_load_rng ..................................... None
342
+ no_persist_layer_norm ........................... False
343
+ no_rope_freq .................................... None
344
+ no_save_optim ................................... None
345
+ no_save_rng ..................................... None
346
+ non_persistent_ckpt_type ........................ None
347
+ non_persistent_global_ckpt_dir .................. None
348
+ non_persistent_local_ckpt_algo .................. fully_parallel
349
+ non_persistent_local_ckpt_dir ................... None
350
+ non_persistent_save_interval .................... None
351
+ norm_epsilon .................................... 1e-05
352
+ normalization ................................... LayerNorm
353
+ num_attention_heads ............................. 64
354
+ num_channels .................................... 3
355
+ num_classes ..................................... 1000
356
+ num_dataset_builder_threads ..................... 1
357
+ num_distributed_optimizer_instances ............. 1
358
+ num_experts ..................................... None
359
+ num_layers ...................................... 2
360
+ num_layers_at_end_in_bf16 ....................... 1
361
+ num_layers_at_start_in_bf16 ..................... 1
362
+ num_layers_per_virtual_pipeline_stage ........... None
363
+ num_query_groups ................................ 16
364
+ num_virtual_stages_per_pipeline_rank ............ None
365
+ num_workers ..................................... 2
366
+ object_storage_cache_path ....................... None
367
+ one_logger_async ................................ False
368
+ one_logger_project .............................. megatron-lm
369
+ one_logger_run_name ............................. None
370
+ onnx_safe ....................................... None
371
+ openai_gelu ..................................... False
372
+ optimizer ....................................... adam
373
+ optimizer_cpu_offload ........................... False
374
+ optimizer_offload_fraction ...................... 1.0
375
+ output_bert_embeddings .......................... False
376
+ overlap_cpu_optimizer_d2h_h2d ................... False
377
+ overlap_grad_reduce ............................. False
378
+ overlap_p2p_comm ................................ False
379
+ overlap_p2p_comm_warmup_flush ................... False
380
+ overlap_param_gather ............................ False
381
+ overlap_param_gather_with_optimizer_step ........ False
382
+ override_opt_param_scheduler .................... False
383
+ params_dtype .................................... torch.float16
384
+ patch_dim ....................................... 16
385
+ per_split_data_args_path ........................ None
386
+ perform_initialization .......................... True
387
+ pin_cpu_grads ................................... True
388
+ pin_cpu_params .................................. True
389
+ pipeline_model_parallel_comm_backend ............ None
390
+ pipeline_model_parallel_size .................... 1
391
+ pipeline_model_parallel_split_rank .............. None
392
+ position_embedding_type ......................... learned_absolute
393
+ pretrained_checkpoint ........................... None
394
+ profile ......................................... False
395
+ profile_ranks ................................... [0]
396
+ profile_step_end ................................ 12
397
+ profile_step_start .............................. 10
398
+ q_lora_rank ..................................... None
399
+ qk_head_dim ..................................... 128
400
+ qk_l2_norm ...................................... False
401
+ qk_layernorm .................................... False
402
+ qk_pos_emb_head_dim ............................. 64
403
+ query_in_block_prob ............................. 0.1
404
+ rampup_batch_size ............................... None
405
+ rank ............................................ 0
406
+ recompute_granularity ........................... None
407
+ recompute_method ................................ None
408
+ recompute_modules ............................... None
409
+ recompute_num_layers ............................ None
410
+ record_memory_history ........................... False
411
+ relative_attention_max_distance ................. 128
412
+ relative_attention_num_buckets .................. 32
413
+ replication ..................................... False
414
+ replication_factor .............................. 2
415
+ replication_jump ................................ None
416
+ rerun_mode ...................................... disabled
417
+ reset_attention_mask ............................ False
418
+ reset_position_ids .............................. False
419
+ result_rejected_tracker_filename ................ None
420
+ retriever_report_topk_accuracies ................ []
421
+ retriever_score_scaling ......................... False
422
+ retriever_seq_length ............................ 256
423
+ retro_add_retriever ............................. False
424
+ retro_attention_gate ............................ 1
425
+ retro_cyclic_train_iters ........................ None
426
+ retro_encoder_attention_dropout ................. 0.1
427
+ retro_encoder_hidden_dropout .................... 0.1
428
+ retro_encoder_layers ............................ 2
429
+ retro_num_neighbors ............................. 2
430
+ retro_num_retrieved_chunks ...................... 2
431
+ retro_project_dir ............................... None
432
+ retro_verify_neighbor_count ..................... True
433
+ rope_scaling_factor ............................. 8.0
434
+ rotary_base ..................................... 10000
435
+ rotary_interleaved .............................. False
436
+ rotary_percent .................................. 1.0
437
+ rotary_scaling_factor ........................... 1.0
438
+ rotary_seq_len_interpolation_factor ............. None
439
+ run_workload_inspector_server ................... False
440
+ sample_rate ..................................... 1.0
441
+ save ............................................ gpt-checkpoint
442
+ save_interval ................................... 16
443
+ scatter_gather_tensors_in_pipeline .............. True
444
+ seed ............................................ 1234
445
+ seq_length ...................................... 1024
446
+ sequence_parallel ............................... False
447
+ sgd_momentum .................................... 0.9
448
+ short_seq_prob .................................. 0.1
449
+ skip_train ...................................... False
450
+ skipped_train_samples ........................... 0
451
+ spec ............................................ None
452
+ split ........................................... None
453
+ squared_relu .................................... False
454
+ start_weight_decay .............................. 0.1
455
+ straggler_ctrlr_port ............................ 65535
456
+ straggler_minmax_count .......................... 1
457
+ suggested_communication_unit_size ............... None
458
+ swiglu .......................................... False
459
+ swin_backbone_type .............................. tiny
460
+ symmetric_ar_type ............................... None
461
+ te_rng_tracker .................................. False
462
+ tensor_model_parallel_size ...................... 8
463
+ tensorboard_dir ................................. tensorboard-logs/
464
+ tensorboard_log_interval ........................ 1
465
+ tensorboard_queue_size .......................... 1000
466
+ test_data_path .................................. None
467
+ test_mode ....................................... False
468
+ tiktoken_num_special_tokens ..................... 1000
469
+ tiktoken_pattern ................................ None
470
+ tiktoken_special_tokens ......................... None
471
+ timing_log_level ................................ 0
472
+ timing_log_option ............................... minmax
473
+ titles_data_path ................................ None
474
+ tokenizer_model ................................. None
475
+ tokenizer_type .................................. GPT2BPETokenizer
476
+ torch_fsdp2_reshard_after_forward ............... True
477
+ tp_comm_bootstrap_backend ....................... nccl
478
+ tp_comm_bulk_dgrad .............................. True
479
+ tp_comm_bulk_wgrad .............................. True
480
+ tp_comm_overlap ................................. False
481
+ tp_comm_overlap_ag .............................. True
482
+ tp_comm_overlap_cfg ............................. None
483
+ tp_comm_overlap_rs .............................. True
484
+ tp_comm_overlap_rs_dgrad ........................ False
485
+ tp_comm_split_ag ................................ True
486
+ tp_comm_split_rs ................................ True
487
+ train_data_path ................................. None
488
+ train_iters ..................................... 10
489
+ train_samples ................................... None
490
+ train_sync_interval ............................. None
491
+ transformer_impl ................................ transformer_engine
492
+ transformer_pipeline_model_parallel_size ........ 1
493
+ untie_embeddings_and_output_weights ............. False
494
+ use_checkpoint_args ............................. False
495
+ use_checkpoint_opt_param_scheduler .............. False
496
+ use_cpu_initialization .......................... None
497
+ use_custom_fsdp ................................. False
498
+ use_dist_ckpt ................................... True
499
+ use_dist_ckpt_deprecated ........................ False
500
+ use_distributed_optimizer ....................... False
501
+ use_flash_attn .................................. False
502
+ use_legacy_models ............................... False
503
+ use_mp_args_from_checkpoint_args ................ False
504
+ use_one_sent_docs ............................... False
505
+ use_persistent_ckpt_worker ...................... False
506
+ use_precision_aware_optimizer ................... False
507
+ use_pytorch_profiler ............................ False
508
+ use_ring_exchange_p2p ........................... False
509
+ use_rope_scaling ................................ False
510
+ use_rotary_position_embeddings .................. False
511
+ use_sharp ....................................... False
512
+ use_tokenizer_model_from_checkpoint_args ........ True
513
+ use_torch_fsdp2 ................................. False
514
+ use_torch_optimizer_for_cpu_offload ............. False
515
+ use_tp_pp_dp_mapping ............................ False
516
+ v_head_dim ...................................... 128
517
+ valid_data_path ................................. None
518
+ variable_seq_lengths ............................ False
519
+ virtual_pipeline_model_parallel_size ............ None
520
+ vision_backbone_type ............................ vit
521
+ vision_pretraining .............................. False
522
+ vision_pretraining_type ......................... classify
523
+ vocab_extra_ids ................................. 0
524
+ vocab_file ...................................... vocab.json
525
+ vocab_size ...................................... None
526
+ wandb_exp_name ..................................
527
+ wandb_project ...................................
528
+ wandb_save_dir ..................................
529
+ weight_decay .................................... 0.1
530
+ weight_decay_incr_style ......................... constant
531
+ wgrad_deferral_limit ............................ 0
532
+ world_size ...................................... 16
533
+ yaml_cfg ........................................ None
534
+ -------------------- end of arguments ---------------------
535
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
536
+ > building GPT2BPETokenizer tokenizer ...
537
+ INFO:megatron.training.initialize:Setting logging level to 0
538
+ INFO:megatron.training.initialize:Setting logging level to 0
539
+ INFO:megatron.training.initialize:Setting logging level to 0
540
+ > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
541
+ INFO:megatron.training.initialize:Setting logging level to 0
542
+ INFO:megatron.training.initialize:Setting logging level to 0
543
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
544
+ > initializing torch distributed ...
545
+ > initialized tensor model parallel with size 8
546
+ > initialized pipeline model parallel with size 1
547
+ > setting random seeds to 1234 ...
548
+ > compiling dataset index builder ...
549
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
550
+ make: Nothing to be done for 'default'.
551
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
552
+ >>> done with dataset index builder. Compilation time: 0.043 seconds
553
+ > compiling and loading fused kernels ...
554
+ >>> done with compiling and loading fused kernels. Compilation time: 2.710 seconds
attnserver.run_attnserver.slurm.sh.343205.err.log ADDED
@@ -0,0 +1,156 @@
1
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
2
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
3
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
4
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
5
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
6
+ +++ export _CE_M=
7
+ +++ _CE_M=
8
+ +++ export _CE_CONDA=
9
+ +++ _CE_CONDA=
10
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
11
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
12
+ +++ '[' -z x ']'
13
+ ++ conda activate
14
+ ++ local cmd=activate
15
+ ++ case "$cmd" in
16
+ ++ __conda_activate activate
17
+ ++ '[' -n '' ']'
18
+ ++ local ask_conda
19
+ +++ PS1=
20
+ +++ __conda_exe shell.posix activate
21
+ +++ '[' -n '' ']'
22
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
23
+ ++ ask_conda='unset _CE_M
24
+ unset _CE_CONDA
25
+ PS1='\''(base) '\''
26
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
27
+ export CONDA_SHLVL='\''1'\''
28
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
29
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
30
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
31
+ ++ eval 'unset _CE_M
32
+ unset _CE_CONDA
33
+ PS1='\''(base) '\''
34
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
35
+ export CONDA_SHLVL='\''1'\''
36
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
37
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
38
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
39
+ +++ unset _CE_M
40
+ +++ unset _CE_CONDA
41
+ +++ PS1='(base) '
42
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
43
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
44
+ +++ export CONDA_SHLVL=1
45
+ +++ CONDA_SHLVL=1
46
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
47
+ +++ CONDA_PROMPT_MODIFIER='(base) '
48
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
49
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
50
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
51
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
52
+ ++ __conda_hashr
53
+ ++ '[' -n '' ']'
54
+ ++ '[' -n '' ']'
55
+ ++ hash -r
56
+ + conda activate junda-attnserver
57
+ + local cmd=activate
58
+ + case "$cmd" in
59
+ + __conda_activate activate junda-attnserver
60
+ + '[' -n '' ']'
61
+ + local ask_conda
62
+ ++ PS1='(base) '
63
+ ++ __conda_exe shell.posix activate junda-attnserver
64
+ ++ '[' -n '' ']'
65
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
66
+ + ask_conda='unset _CE_M
67
+ unset _CE_CONDA
68
+ PS1='\''(junda-attnserver) '\''
69
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
70
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
71
+ export CONDA_SHLVL='\''2'\''
72
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
73
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
74
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
75
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
76
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
77
+ + eval 'unset _CE_M
78
+ unset _CE_CONDA
79
+ PS1='\''(junda-attnserver) '\''
80
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
81
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
82
+ export CONDA_SHLVL='\''2'\''
83
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
84
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
85
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
86
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
87
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
88
+ ++ unset _CE_M
89
+ ++ unset _CE_CONDA
90
+ ++ PS1='(junda-attnserver) '
91
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
92
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
93
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
94
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
95
+ ++ export CONDA_SHLVL=2
96
+ ++ CONDA_SHLVL=2
97
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
98
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
99
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
100
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
101
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
102
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
103
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
104
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
105
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
106
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
107
+ + __conda_hashr
108
+ + '[' -n '' ']'
109
+ + '[' -n '' ']'
110
+ + hash -r
111
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
112
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
113
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
114
+ + export PROF_TP_SIZE=8
115
+ + PROF_TP_SIZE=8
116
+ + export PROF_CP_SIZE=2
117
+ + PROF_CP_SIZE=2
118
+ + export PROF_BS=16
119
+ + PROF_BS=16
120
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
121
+ + export PROF_CTX_LENGTH=1024
122
+ + PROF_CTX_LENGTH=1024
123
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs16.json'
124
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs16.json' ']'
125
+ + echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=16'
126
+ + srun bash ./attnserver.sh
127
+ + which python3
128
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343205 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
129
+ + which python3
130
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343205 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
131
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
132
+ and will be removed in future. Use torchrun.
133
+ Note that --use-env is set by default in torchrun.
134
+ If your script expects `--local-rank` argument to be set, please
135
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
136
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
137
+ further instructions
138
+
139
+ main()
140
+ W0621 21:14:30.977000 2067583 site-packages/torch/distributed/run.py:766]
141
+ W0621 21:14:30.977000 2067583 site-packages/torch/distributed/run.py:766] *****************************************
142
+ W0621 21:14:30.977000 2067583 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
143
+ W0621 21:14:30.977000 2067583 site-packages/torch/distributed/run.py:766] *****************************************
144
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
145
+ and will be removed in future. Use torchrun.
146
+ Note that --use-env is set by default in torchrun.
147
+ If your script expects `--local-rank` argument to be set, please
148
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
149
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
150
+ further instructions
151
+
152
+ main()
153
+ W0621 21:14:31.120000 722898 site-packages/torch/distributed/run.py:766]
154
+ W0621 21:14:31.120000 722898 site-packages/torch/distributed/run.py:766] *****************************************
155
+ W0621 21:14:31.120000 722898 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
156
+ W0621 21:14:31.120000 722898 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343205.out.log ADDED
@@ -0,0 +1,19 @@
1
+ Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=16
2
+ Cleaning up checkpoint directory: gpt-checkpoint
3
+ --------------------------------
4
+ CTX_LENGTH: 1024
5
+ TP_SIZE: 8
6
+ CP_SIZE: 2
7
+ CHECKPOINT_PATH: gpt-checkpoint
8
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
9
+ --------------------------------
10
+ Cleaning up checkpoint directory: gpt-checkpoint
11
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
12
+ --------------------------------
13
+ CTX_LENGTH: 1024
14
+ TP_SIZE: 8
15
+ CP_SIZE: 2
16
+ CHECKPOINT_PATH: gpt-checkpoint
17
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
18
+ --------------------------------
19
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
attnserver.run_attnserver.slurm.sh.343206.err.log ADDED
@@ -0,0 +1,156 @@
1
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
2
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
3
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
4
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
5
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
6
+ +++ export _CE_M=
7
+ +++ _CE_M=
8
+ +++ export _CE_CONDA=
9
+ +++ _CE_CONDA=
10
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
11
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
12
+ +++ '[' -z x ']'
13
+ ++ conda activate
14
+ ++ local cmd=activate
15
+ ++ case "$cmd" in
16
+ ++ __conda_activate activate
17
+ ++ '[' -n '' ']'
18
+ ++ local ask_conda
19
+ +++ PS1=
20
+ +++ __conda_exe shell.posix activate
21
+ +++ '[' -n '' ']'
22
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
23
+ ++ ask_conda='unset _CE_M
24
+ unset _CE_CONDA
25
+ PS1='\''(base) '\''
26
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
27
+ export CONDA_SHLVL='\''1'\''
28
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
29
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
30
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
31
+ ++ eval 'unset _CE_M
32
+ unset _CE_CONDA
33
+ PS1='\''(base) '\''
34
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
35
+ export CONDA_SHLVL='\''1'\''
36
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
37
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
38
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
39
+ +++ unset _CE_M
40
+ +++ unset _CE_CONDA
41
+ +++ PS1='(base) '
42
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
43
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
44
+ +++ export CONDA_SHLVL=1
45
+ +++ CONDA_SHLVL=1
46
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
47
+ +++ CONDA_PROMPT_MODIFIER='(base) '
48
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
49
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
50
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
51
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
52
+ ++ __conda_hashr
53
+ ++ '[' -n '' ']'
54
+ ++ '[' -n '' ']'
55
+ ++ hash -r
56
+ + conda activate junda-attnserver
57
+ + local cmd=activate
58
+ + case "$cmd" in
59
+ + __conda_activate activate junda-attnserver
60
+ + '[' -n '' ']'
61
+ + local ask_conda
62
+ ++ PS1='(base) '
63
+ ++ __conda_exe shell.posix activate junda-attnserver
64
+ ++ '[' -n '' ']'
65
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
66
+ + ask_conda='unset _CE_M
67
+ unset _CE_CONDA
68
+ PS1='\''(junda-attnserver) '\''
69
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
70
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
71
+ export CONDA_SHLVL='\''2'\''
72
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
73
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
74
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
75
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
76
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
77
+ + eval 'unset _CE_M
78
+ unset _CE_CONDA
79
+ PS1='\''(junda-attnserver) '\''
80
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
81
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
82
+ export CONDA_SHLVL='\''2'\''
83
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
84
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
85
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
86
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
87
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
88
+ ++ unset _CE_M
89
+ ++ unset _CE_CONDA
90
+ ++ PS1='(junda-attnserver) '
91
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
92
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
93
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
94
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
95
+ ++ export CONDA_SHLVL=2
96
+ ++ CONDA_SHLVL=2
97
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
98
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
99
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
100
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
101
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
102
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
103
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
104
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
105
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
106
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
107
+ + __conda_hashr
108
+ + '[' -n '' ']'
109
+ + '[' -n '' ']'
110
+ + hash -r
111
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
112
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
113
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
114
+ + export PROF_TP_SIZE=8
115
+ + PROF_TP_SIZE=8
116
+ + export PROF_CP_SIZE=2
117
+ + PROF_CP_SIZE=2
118
+ + export PROF_BS=32
119
+ + PROF_BS=32
120
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
121
+ + export PROF_CTX_LENGTH=1024
122
+ + PROF_CTX_LENGTH=1024
123
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs32.json'
124
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp2.bs32.json' ']'
125
+ + echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=32'
126
+ + srun bash ./attnserver.sh
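The loop above sweeps PROF_CTX_LENGTH over the listed context lengths and, before handing the job to srun, checks whether a Chrome trace for this (ctx_length, TP, CP, batch size) combination already exists under CHROME_TRACE_PREFIX. A minimal Python sketch of that sweep-and-skip logic, illustrative only: it uses a glob-based variant of the file check, and the values mirror the PROF_* exports visible in the trace.

# Illustrative sketch, not the original attnserver script: skip a context
# length when a matching Chrome trace file is already present.
import glob
import os

prefix = os.environ.get(
    "CHROME_TRACE_PREFIX",
    "/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5",
)
tp, cp, bs = 8, 2, 32  # PROF_TP_SIZE, PROF_CP_SIZE, PROF_BS in the trace

for ctx_length in (1024, 2048, 4096, 8192, 12288, 16384, 24576, 32768,
                   40960, 49152, 65536, 81920, 98304, 131072):
    pattern = os.path.join(prefix, f"mytrace.L{ctx_length}*tp{tp}.cp{cp}.bs{bs}.json")
    if glob.glob(pattern):
        continue  # trace already collected for this configuration
    print(f"Running ctx_length={ctx_length}, TP_SIZE={tp}, CP_SIZE={cp}, BATCH_SIZE={bs}")
    # the real run then proceeds via: srun bash ./attnserver.sh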
127
+ + which python3
128
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343206 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-239:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
129
+ + which python3
130
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343206 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-239:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
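The two launcher invocations above differ only in --node_rank: each of the 2 nodes starts 8 processes, and the resulting 16 ranks are fully consumed by tensor parallelism (8) and context parallelism (2), leaving a data-parallel size of 1 (pipeline parallelism is not set and defaults to 1). A small worked sketch of that bookkeeping, using only the flag values visible in the command lines; the variable names are illustrative.

# Rank bookkeeping implied by the launcher flags above (illustrative).
nproc_per_node = 8   # --nproc_per_node
nnodes = 2           # --nnodes
tp = 8               # --tensor-model-parallel-size
cp = 2               # --context-parallel-size
pp = 1               # pipeline parallelism left at its default

world_size = nproc_per_node * nnodes   # 16 ranks in total
model_parallel_size = tp * cp * pp     # 16 ranks per model replica
data_parallel_size = world_size // model_parallel_size
assert world_size % model_parallel_size == 0
print(world_size, model_parallel_size, data_parallel_size)  # 16 16 1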
131
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
132
+ and will be removed in future. Use torchrun.
133
+ Note that --use-env is set by default in torchrun.
134
+ If your script expects `--local-rank` argument to be set, please
135
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
136
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
137
+ further instructions
138
+
139
+ main()
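The FutureWarning above is emitted because the script is still started through the deprecated torch.distributed.launch module; torchrun behaves as if --use-env were set and passes rank information through environment variables rather than a --local-rank argument. A minimal sketch of the environment-based pattern the warning asks for, illustrative only, since the actual argument handling in pretrain_gpt_profile.py is not shown in this log.

# Illustrative only: read launcher-provided rank info from the environment
# instead of parsing a --local-rank command-line argument.
import os
import torch

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
rank = int(os.environ.get("RANK", "0"))
world_size = int(os.environ.get("WORLD_SIZE", "1"))

if torch.cuda.is_available():
    torch.cuda.set_device(local_rank)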
140
+ W0621 21:14:31.024000 966285 site-packages/torch/distributed/run.py:766]
141
+ W0621 21:14:31.024000 966285 site-packages/torch/distributed/run.py:766] *****************************************
142
+ W0621 21:14:31.024000 966285 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
143
+ W0621 21:14:31.024000 966285 site-packages/torch/distributed/run.py:766] *****************************************
144
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
145
+ and will be removed in future. Use torchrun.
146
+ Note that --use-env is set by default in torchrun.
147
+ If your script expects `--local-rank` argument to be set, please
148
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
149
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
150
+ further instructions
151
+
152
+ main()
153
+ W0621 21:14:31.036000 1892432 site-packages/torch/distributed/run.py:766]
154
+ W0621 21:14:31.036000 1892432 site-packages/torch/distributed/run.py:766] *****************************************
155
+ W0621 21:14:31.036000 1892432 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
156
+ W0621 21:14:31.036000 1892432 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343206.out.log ADDED
@@ -0,0 +1,29 @@
1
+ Running ctx_length=1024, TP_SIZE=8, CP_SIZE=2, BATCH_SIZE=32
2
+ Cleaning up checkpoint directory: gpt-checkpoint
3
+ --------------------------------
4
+ CTX_LENGTH: 1024
5
+ TP_SIZE: 8
6
+ CP_SIZE: 2
7
+ CHECKPOINT_PATH: gpt-checkpoint
8
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
9
+ --------------------------------
10
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
11
+ Cleaning up checkpoint directory: gpt-checkpoint
12
+ --------------------------------
13
+ CTX_LENGTH: 1024
14
+ TP_SIZE: 8
15
+ CP_SIZE: 2
16
+ CHECKPOINT_PATH: gpt-checkpoint
17
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
18
+ --------------------------------
19
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
20
+ INFO:megatron.training.initialize:Setting logging level to 0
21
+ INFO:megatron.training.initialize:Setting logging level to 0
22
+ INFO:megatron.training.initialize:Setting logging level to 0
23
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
24
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
25
+ INFO:megatron.training.initialize:Setting logging level to 0
26
+ INFO:megatron.training.initialize:Setting logging level to 0
27
+ INFO:megatron.training.initialize:Setting logging level to 0
28
+ INFO:megatron.training.initialize:Setting logging level to 0
29
+ INFO:megatron.training.initialize:Setting logging level to 0