amang1802/llama-3.1-70B-cpttest_mode2_qna_fulltext
  • Trained with torchtune as a continued pre-training (CPT) test
  • Shows a clear improvement in ground-truth accuracy when trained on a Q&A dataset instead of wiki-like text alone: 37% → 51%
  • Curiously, the model generates content in question/answer format even though the few-shot prompt used standard paragraph-style text (the two data formats are sketched after this list)
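
The contrast between the two data formats is sketched below. This is a hypothetical Python illustration: the templates and field names are assumptions suggested by the "_qna_fulltext" suffix in the model name, not the actual training pipeline.

```python
# Hypothetical sketch of the two CPT data formats compared above.
# Templates and field names are assumptions, not the actual pipeline.

def to_wiki_style(title: str, body: str) -> str:
    """Render a record as plain encyclopedic text (the ~37% baseline)."""
    return f"{title}\n\n{body}"

def to_qna_fulltext(question: str, answer: str) -> str:
    """Render a Q&A pair as a single full-text training example (the ~51% run)."""
    return f"Question: {question}\nAnswer: {answer}"

print(to_wiki_style("Grace Hopper", "Grace Hopper was a pioneer of compiler design..."))
print(to_qna_fulltext("Who was Grace Hopper?",
                      "She was a pioneer of compiler design..."))
```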

Torchtune logs

Step 1 | loss:0.9870861172676086 lr:5e-06 tokens_per_second_per_gpu:349.07958984375 peak_memory_active:27.935279369354248 peak_memory_alloc:19.935279369354248 peak_memory_reserved:40.74609375 
Step 2 | loss:0.9891979098320007 lr:1e-05 tokens_per_second_per_gpu:358.468994140625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 3 | loss:0.9162459969520569 lr:1.5000000000000002e-05 tokens_per_second_per_gpu:139.44334411621094 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 4 | loss:0.8624152541160583 lr:2e-05 tokens_per_second_per_gpu:359.07080078125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 5 | loss:0.8651365637779236 lr:1.9961946980917457e-05 tokens_per_second_per_gpu:117.93551635742188 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 6 | loss:0.8049011826515198 lr:1.9848077530122083e-05 tokens_per_second_per_gpu:357.0572814941406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 7 | loss:0.7778048515319824 lr:1.9659258262890683e-05 tokens_per_second_per_gpu:114.8694839477539 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 8 | loss:0.7539520263671875 lr:1.9396926207859085e-05 tokens_per_second_per_gpu:358.42266845703125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 9 | loss:0.7280842661857605 lr:1.9063077870366504e-05 tokens_per_second_per_gpu:115.60746765136719 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 10 | loss:0.713391125202179 lr:1.866025403784439e-05 tokens_per_second_per_gpu:358.1277770996094 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 11 | loss:0.6915909051895142 lr:1.819152044288992e-05 tokens_per_second_per_gpu:115.99324798583984 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 12 | loss:0.6670817136764526 lr:1.766044443118978e-05 tokens_per_second_per_gpu:358.77874755859375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 13 | loss:0.6580817699432373 lr:1.7071067811865477e-05 tokens_per_second_per_gpu:132.9275360107422 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 14 | loss:0.6470029354095459 lr:1.6427876096865394e-05 tokens_per_second_per_gpu:358.90679931640625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 15 | loss:0.623786449432373 lr:1.573576436351046e-05 tokens_per_second_per_gpu:133.3542938232422 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 16 | loss:0.6137831211090088 lr:1.5000000000000002e-05 tokens_per_second_per_gpu:358.5255126953125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 17 | loss:0.6104069352149963 lr:1.4226182617406996e-05 tokens_per_second_per_gpu:133.6260223388672 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 18 | loss:0.584002673625946 lr:1.342020143325669e-05 tokens_per_second_per_gpu:358.2818908691406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 19 | loss:0.5767210125923157 lr:1.2588190451025209e-05 tokens_per_second_per_gpu:133.82418823242188 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 20 | loss:0.5692858099937439 lr:1.1736481776669307e-05 tokens_per_second_per_gpu:358.17291259765625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 21 | loss:0.5572494864463806 lr:1.0871557427476585e-05 tokens_per_second_per_gpu:133.63356018066406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 22 | loss:0.5557574033737183 lr:1e-05 tokens_per_second_per_gpu:358.4932861328125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 23 | loss:0.5497353672981262 lr:9.128442572523418e-06 tokens_per_second_per_gpu:134.44566345214844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 24 | loss:0.5377053022384644 lr:8.263518223330698e-06 tokens_per_second_per_gpu:357.4734191894531 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 25 | loss:0.5362685322761536 lr:7.411809548974792e-06 tokens_per_second_per_gpu:134.38502502441406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 26 | loss:0.5332332253456116 lr:6.579798566743314e-06 tokens_per_second_per_gpu:358.1392517089844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 27 | loss:0.5264667272567749 lr:5.773817382593008e-06 tokens_per_second_per_gpu:135.4611358642578 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 28 | loss:0.5324499607086182 lr:5.000000000000003e-06 tokens_per_second_per_gpu:358.5559387207031 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 29 | loss:0.5186256766319275 lr:4.264235636489542e-06 tokens_per_second_per_gpu:134.5880889892578 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 30 | loss:0.5276823043823242 lr:3.5721239031346067e-06 tokens_per_second_per_gpu:358.584228515625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 31 | loss:0.5247493982315063 lr:2.9289321881345257e-06 tokens_per_second_per_gpu:135.09722900390625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 32 | loss:0.5132156610488892 lr:2.339555568810221e-06 tokens_per_second_per_gpu:358.1979675292969 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 33 | loss:0.5184270143508911 lr:1.808479557110081e-06 tokens_per_second_per_gpu:134.74534606933594 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 34 | loss:0.5163306593894958 lr:1.339745962155613e-06 tokens_per_second_per_gpu:357.9968566894531 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 35 | loss:0.5152503252029419 lr:9.369221296335007e-07 tokens_per_second_per_gpu:134.68637084960938 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 36 | loss:0.5170396566390991 lr:6.030737921409169e-07 tokens_per_second_per_gpu:358.9433898925781 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 37 | loss:0.5208821892738342 lr:3.4074173710931804e-07 tokens_per_second_per_gpu:133.9708709716797 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 38 | loss:0.5146408081054688 lr:1.519224698779198e-07 tokens_per_second_per_gpu:358.0292053222656 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 39 | loss:0.5154322981834412 lr:3.805301908254455e-08 tokens_per_second_per_gpu:134.77313232421875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 40 | loss:0.5126146674156189 lr:0.0 tokens_per_second_per_gpu:358.4921875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
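
Two things stand out in the trace. Loss falls smoothly from ~0.99 to ~0.51 over 40 steps, and the learning-rate column follows a linear warmup to a 2e-5 peak over the first 4 steps, then a cosine decay to zero over the remaining 36. The snippet below reconstructs that schedule and spot-checks it against values taken directly from the log; the schedule's form is inferred from the logged values, since the torchtune config itself is not shown here.

```python
import math

# LR schedule inferred from the torchtune log above: linear warmup to
# PEAK_LR over the first WARMUP_STEPS steps, then cosine decay to zero
# over the remaining steps. This is a reconstruction from logged values,
# not the actual config.
PEAK_LR = 2e-5
WARMUP_STEPS = 4
TOTAL_STEPS = 40

def lr_at(step: int) -> float:
    """Learning rate at a 1-indexed optimizer step."""
    if step <= WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS  # linear warmup
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1 + math.cos(math.pi * progress))  # cosine decay

# Spot-check against values taken directly from the log.
assert math.isclose(lr_at(3), 1.5e-05)
assert math.isclose(lr_at(5), 1.9961946980917457e-05)
assert math.isclose(lr_at(22), 1e-05)
assert math.isclose(lr_at(39), 3.805301908254455e-08)
assert abs(lr_at(40)) < 1e-20
```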
Format: Safetensors · Model size: 70.6B params · Tensor type: BF16
