Upload folder using huggingface_hub
- attnserver.run_attnserver.slurm.sh.343196.out.log +333 -0
- attnserver.run_attnserver.slurm.sh.343205.err.log +2 -2
- attnserver.run_attnserver.slurm.sh.343205.out.log +710 -0
- attnserver.run_attnserver.slurm.sh.343207.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343207.out.log +1284 -0
- attnserver.run_attnserver.slurm.sh.343208.err.log +40 -0
- attnserver.run_attnserver.slurm.sh.343208.out.log +734 -0
- attnserver.run_attnserver.slurm.sh.343209.err.log +315 -0
- attnserver.run_attnserver.slurm.sh.343209.out.log +654 -0
- attnserver.run_attnserver.slurm.sh.343211.err.log +315 -0
- attnserver.run_attnserver.slurm.sh.343211.out.log +654 -0
- attnserver.run_attnserver.slurm.sh.343213.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343214.err.log +639 -0
- attnserver.run_attnserver.slurm.sh.343214.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343219.err.log +55 -0
- attnserver.run_attnserver.slurm.sh.343219.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343220.out.log +1165 -0
- attnserver.run_attnserver.slurm.sh.343221.err.log +665 -0
- attnserver.run_attnserver.slurm.sh.343221.out.log +753 -0
- attnserver.run_attnserver.slurm.sh.343222.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343222.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343223.err.log +192 -0
- attnserver.run_attnserver.slurm.sh.343223.out.log +1553 -0
- attnserver.run_attnserver.slurm.sh.343224.err.log +156 -0
- attnserver.run_attnserver.slurm.sh.343224.out.log +19 -0
- attnserver.run_attnserver.slurm.sh.343225.err.log +79 -0
- attnserver.run_attnserver.slurm.sh.343225.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343226.err.log +92 -0
- attnserver.run_attnserver.slurm.sh.343226.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343227.err.log +141 -0
- attnserver.run_attnserver.slurm.sh.343227.out.log +10 -0
- attnserver.run_attnserver.slurm.sh.343228.err.log +149 -0
- attnserver.run_attnserver.slurm.sh.343228.out.log +536 -0
attnserver.run_attnserver.slurm.sh.343196.out.log
CHANGED
@@ -55261,3 +55261,336 @@ batch tensor after cp: labels torch.Size([2, 32768])
 batch tensor after cp: loss_mask torch.Size([2, 32768])
 batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
 batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+Start exporting trace 2
+Done exporting trace 2
+[2025-06-21 21:34:06] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 64811.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
+batch tensor: tokens torch.Size([2, 131072])
+batch tensor: labels torch.Size([2, 131072])
+batch tensor: loss_mask torch.Size([2, 131072])
+batch tensor: attention_mask torch.Size([2, 1, 131072, 131072])
+batch tensor: position_ids torch.Size([2, 131072])
+batch tensor after cp: tokens torch.Size([2, 32768])
+batch tensor after cp: labels torch.Size([2, 32768])
+batch tensor after cp: loss_mask torch.Size([2, 32768])
+batch tensor after cp: attention_mask torch.Size([2, 1, 32768, 131072])
+batch tensor after cp: position_ids torch.Size([2, 32768])
attnserver.run_attnserver.slurm.sh.343205.err.log
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:988d900d5b3fae56a38d44aac5fe7e7ac3dae66ea02debff6ebd701ff95eb243
+size 12422535
attnserver.run_attnserver.slurm.sh.343205.out.log
CHANGED
@@ -13123,3 +13123,713 @@ CHECKPOINT_PATH: gpt-checkpoint
 PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
 --------------------------------
 /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
| 13126 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13127 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 13128 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 13129 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13130 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13131 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13132 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13133 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13134 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13135 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13136 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13137 |
+
using world size: 16, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 13138 |
+
Number of virtual stages per pipeline stage: None
|
| 13139 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 13140 |
+
using torch.float16 for parameters ...
|
| 13141 |
+
------------------------ arguments ------------------------
|
| 13142 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 13143 |
+
account_for_loss_in_pipeline_split .............. False
|
| 13144 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 13145 |
+
adam_beta1 ...................................... 0.9
|
| 13146 |
+
adam_beta2 ...................................... 0.999
|
| 13147 |
+
adam_eps ........................................ 1e-08
|
| 13148 |
+
add_bias_linear ................................. True
|
| 13149 |
+
add_position_embedding .......................... True
|
| 13150 |
+
add_qkv_bias .................................... True
|
| 13151 |
+
adlr_autoresume ................................. False
|
| 13152 |
+
adlr_autoresume_interval ........................ 1000
|
| 13153 |
+
align_grad_reduce ............................... True
|
| 13154 |
+
align_param_gather .............................. False
|
| 13155 |
+
app_tag_run_name ................................ None
|
| 13156 |
+
app_tag_run_version ............................. 0.0.0
|
| 13157 |
+
apply_layernorm_1p .............................. False
|
| 13158 |
+
apply_query_key_layer_scaling ................... False
|
| 13159 |
+
apply_residual_connection_post_layernorm ........ False
|
| 13160 |
+
apply_rope_fusion ............................... False
|
| 13161 |
+
async_save ...................................... None
|
| 13162 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 13163 |
+
attention_backend ............................... AttnBackend.auto
|
| 13164 |
+
attention_dropout ............................... 0.1
|
| 13165 |
+
attention_softmax_in_fp32 ....................... False
|
| 13166 |
+
auto_detect_ckpt_format ......................... False
|
| 13167 |
+
barrier_with_L1_time ............................ True
|
| 13168 |
+
bert_binary_head ................................ True
|
| 13169 |
+
bert_embedder_type .............................. megatron
|
| 13170 |
+
bert_load ....................................... None
|
| 13171 |
+
bf16 ............................................ False
|
| 13172 |
+
bias_dropout_fusion ............................. True
|
| 13173 |
+
bias_gelu_fusion ................................ True
|
| 13174 |
+
bias_swiglu_fusion .............................. True
|
| 13175 |
+
biencoder_projection_dim ........................ 0
|
| 13176 |
+
biencoder_shared_query_context_model ............ False
|
| 13177 |
+
block_data_path ................................. None
|
| 13178 |
+
calc_ft_timeouts ................................ False
|
| 13179 |
+
calculate_per_token_loss ........................ False
|
| 13180 |
+
check_for_large_grads ........................... False
|
| 13181 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 13182 |
+
check_for_spiky_loss ............................ False
|
| 13183 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 13184 |
+
ckpt_assume_constant_structure .................. False
|
| 13185 |
+
ckpt_convert_format ............................. None
|
| 13186 |
+
ckpt_convert_save ............................... None
|
| 13187 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 13188 |
+
ckpt_format ..................................... torch_dist
|
| 13189 |
+
ckpt_fully_parallel_load ........................ False
|
| 13190 |
+
ckpt_fully_parallel_save ........................ True
|
| 13191 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 13192 |
+
ckpt_step ....................................... None
|
| 13193 |
+
classes_fraction ................................ 1.0
|
| 13194 |
+
clip_grad ....................................... 1.0
|
| 13195 |
+
clone_scatter_output_in_embedding ............... True
|
| 13196 |
+
config_logger_dir ...............................
|
| 13197 |
+
consumed_train_samples .......................... 0
|
| 13198 |
+
consumed_valid_samples .......................... 0
|
| 13199 |
+
context_parallel_size ........................... 2
|
| 13200 |
+
cp_comm_type .................................... ['p2p']
|
| 13201 |
+
create_attention_mask_in_dataloader ............. True
|
| 13202 |
+
cross_entropy_fusion_impl ....................... native
|
| 13203 |
+
cross_entropy_loss_fusion ....................... False
|
| 13204 |
+
cuda_graph_scope ................................ full
|
| 13205 |
+
cuda_graph_warmup_steps ......................... 3
|
| 13206 |
+
data_args_path .................................. None
|
| 13207 |
+
data_cache_path ................................. None
|
| 13208 |
+
data_parallel_random_init ....................... False
|
| 13209 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 13210 |
+
data_parallel_size .............................. 1
|
| 13211 |
+
data_path ....................................... None
|
| 13212 |
+
data_per_class_fraction ......................... 1.0
|
| 13213 |
+
data_sharding ................................... True
|
| 13214 |
+
dataloader_type ................................. single
|
| 13215 |
+
ddp_average_in_collective ....................... False
|
| 13216 |
+
ddp_bucket_size ................................. None
|
| 13217 |
+
ddp_num_buckets ................................. None
|
| 13218 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 13219 |
+
decoder_first_pipeline_num_layers ............... None
|
| 13220 |
+
decoder_last_pipeline_num_layers ................ None
|
| 13221 |
+
decoder_num_layers .............................. None
|
| 13222 |
+
decoder_seq_length .............................. None
|
| 13223 |
+
decoupled_lr .................................... None
|
| 13224 |
+
decoupled_min_lr ................................ None
|
| 13225 |
+
decrease_batch_size_if_needed ................... False
|
| 13226 |
+
defer_embedding_wgrad_compute ................... False
|
| 13227 |
+
deprecated_use_mcore_models ..................... False
|
| 13228 |
+
deterministic_mode .............................. False
|
| 13229 |
+
dino_bottleneck_size ............................ 256
|
| 13230 |
+
dino_freeze_last_layer .......................... 1
|
| 13231 |
+
dino_head_hidden_size ........................... 2048
|
| 13232 |
+
dino_local_crops_number ......................... 10
|
| 13233 |
+
dino_local_img_size ............................. 96
|
| 13234 |
+
dino_norm_last_layer ............................ False
|
| 13235 |
+
dino_teacher_temp ............................... 0.07
|
| 13236 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 13237 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 13238 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 13239 |
+
disable_mamba_mem_eff_path ...................... False
|
| 13240 |
+
disable_straggler_on_startup .................... False
|
| 13241 |
+
dist_ckpt_format_deprecated ..................... None
|
| 13242 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 13243 |
+
distribute_saved_activations .................... False
|
| 13244 |
+
distributed_backend ............................. nccl
|
| 13245 |
+
distributed_timeout_minutes ..................... 10
|
| 13246 |
+
embedding_path .................................. None
|
| 13247 |
+
empty_unused_memory_level ....................... 0
|
| 13248 |
+
enable_cuda_graph ............................... False
|
| 13249 |
+
enable_ft_package ............................... False
|
| 13250 |
+
enable_gloo_process_groups ...................... True
|
| 13251 |
+
enable_msc ...................................... True
|
| 13252 |
+
enable_one_logger ............................... True
|
| 13253 |
+
encoder_num_layers .............................. 2
|
| 13254 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 13255 |
+
encoder_seq_length .............................. 131072
|
| 13256 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 13257 |
+
end_weight_decay ................................ 0.1
|
| 13258 |
+
eod_mask_loss ................................... False
|
| 13259 |
+
error_injection_rate ............................ 0
|
| 13260 |
+
error_injection_type ............................ transient_error
|
| 13261 |
+
eval_interval ................................... 16
|
| 13262 |
+
eval_iters ...................................... 1
|
| 13263 |
+
evidence_data_path .............................. None
|
| 13264 |
+
exit_duration_in_mins ........................... None
|
| 13265 |
+
exit_interval ................................... None
|
| 13266 |
+
exit_on_missing_checkpoint ...................... False
|
| 13267 |
+
exit_signal_handler ............................. False
|
| 13268 |
+
exp_avg_dtype ................................... torch.float32
|
| 13269 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 13270 |
+
expert_model_parallel_size ...................... 1
|
| 13271 |
+
expert_tensor_parallel_size ..................... 8
|
| 13272 |
+
external_cuda_graph ............................. False
|
| 13273 |
+
ffn_hidden_size ................................. 16384
|
| 13274 |
+
finetune ........................................ False
|
| 13275 |
+
first_last_layers_bf16 .......................... False
|
| 13276 |
+
flash_decode .................................... False
|
| 13277 |
+
fp16 ............................................ True
|
| 13278 |
+
fp16_lm_cross_entropy ........................... False
|
| 13279 |
+
fp32_residual_connection ........................ False
|
| 13280 |
+
fp8 ............................................. None
|
| 13281 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 13282 |
+
fp8_amax_history_len ............................ 1
|
| 13283 |
+
fp8_interval .................................... 1
|
| 13284 |
+
fp8_margin ...................................... 0
|
| 13285 |
+
fp8_param_gather ................................ False
|
| 13286 |
+
fp8_recipe ...................................... delayed
|
| 13287 |
+
fp8_wgrad ....................................... True
|
| 13288 |
+
fsdp_double_buffer .............................. False
|
| 13289 |
+
global_batch_size ............................... 1
|
| 13290 |
+
grad_reduce_in_bf16 ............................. False
|
| 13291 |
+
gradient_accumulation_fusion .................... True
|
| 13292 |
+
gradient_reduce_div_fusion ...................... True
|
| 13293 |
+
group_query_attention ........................... True
|
| 13294 |
+
head_lr_mult .................................... 1.0
|
| 13295 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 13296 |
+
heterogeneous_layers_config_path ................ None
|
| 13297 |
+
hidden_dropout .................................. 0.1
|
| 13298 |
+
hidden_size ..................................... 4096
|
| 13299 |
+
hierarchical_context_parallel_sizes ............. None
|
| 13300 |
+
high_priority_stream_groups ..................... []
|
| 13301 |
+
hybrid_attention_ratio .......................... 0.0
|
| 13302 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 13303 |
+
hybrid_override_pattern ......................... None
|
| 13304 |
+
hysteresis ...................................... 2
|
| 13305 |
+
ict_head_size ................................... None
|
| 13306 |
+
ict_load ........................................ None
|
| 13307 |
+
img_h ........................................... 224
|
| 13308 |
+
img_w ........................................... 224
|
| 13309 |
+
indexer_batch_size .............................. 128
|
| 13310 |
+
indexer_log_interval ............................ 1000
|
| 13311 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 13312 |
+
inference_dynamic_batching ...................... False
|
| 13313 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 13314 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 13315 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 13316 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 13317 |
+
inference_dynamic_batching_max_requests_override None
|
| 13318 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 13319 |
+
inference_max_batch_size ........................ 8
|
| 13320 |
+
inference_max_seq_length ........................ 2560
|
| 13321 |
+
inference_rng_tracker ........................... False
|
| 13322 |
+
init_method_std ................................. 0.02
|
| 13323 |
+
init_method_xavier_uniform ...................... False
|
| 13324 |
+
init_model_with_meta_device ..................... False
|
| 13325 |
+
initial_loss_scale .............................. 4294967296
|
| 13326 |
+
inprocess_active_world_size ..................... 16
|
| 13327 |
+
inprocess_barrier_timeout ....................... 120
|
| 13328 |
+
inprocess_completion_timeout .................... 120
|
| 13329 |
+
inprocess_empty_cuda_cache ...................... False
|
| 13330 |
+
inprocess_granularity ........................... node
|
| 13331 |
+
inprocess_hard_timeout .......................... 90
|
| 13332 |
+
inprocess_heartbeat_interval .................... 30
|
| 13333 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 13334 |
+
inprocess_last_call_wait ........................ 1
|
| 13335 |
+
inprocess_max_iterations ........................ None
|
| 13336 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 13337 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 13338 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 13339 |
+
inprocess_restart ............................... False
|
| 13340 |
+
inprocess_soft_timeout .......................... 60
|
| 13341 |
+
inprocess_termination_grace_time ................ 1
|
| 13342 |
+
is_hybrid_model ................................. False
|
| 13343 |
+
iter_per_epoch .................................. 1250
|
| 13344 |
+
iterations_to_skip .............................. []
|
| 13345 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 13346 |
+
kv_channels ..................................... 64
|
| 13347 |
+
kv_lora_rank .................................... 32
|
| 13348 |
+
lazy_mpu_init ................................... None
|
| 13349 |
+
load ............................................ gpt-checkpoint
|
| 13350 |
+
load_model_opt_format ........................... False
|
| 13351 |
+
local_rank ...................................... 0
|
| 13352 |
+
log_interval .................................... 1
|
| 13353 |
+
log_loss_scale_to_tensorboard ................... True
|
| 13354 |
+
log_memory_to_tensorboard ....................... False
|
| 13355 |
+
log_num_zeros_in_grad ........................... False
|
| 13356 |
+
log_params_norm ................................. False
|
| 13357 |
+
log_progress .................................... False
|
| 13358 |
+
log_straggler ................................... False
|
| 13359 |
+
log_throughput .................................. False
|
| 13360 |
+
log_timers_to_tensorboard ....................... False
|
| 13361 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 13362 |
+
log_world_size_to_tensorboard ................... False
|
| 13363 |
+
logging_level ................................... 0
|
| 13364 |
+
loss_scale ...................................... None
|
| 13365 |
+
loss_scale_window ............................... 1000
|
| 13366 |
+
lr .............................................. 0.0005
|
| 13367 |
+
lr_decay_iters .................................. 150000
|
| 13368 |
+
lr_decay_samples ................................ None
|
| 13369 |
+
lr_decay_style .................................. cosine
|
| 13370 |
+
lr_warmup_fraction .............................. None
|
| 13371 |
+
lr_warmup_init .................................. 0.0
|
| 13372 |
+
lr_warmup_iters ................................. 2
|
| 13373 |
+
lr_warmup_samples ............................... 0
|
| 13374 |
+
lr_wsd_decay_iters .............................. None
|
| 13375 |
+
lr_wsd_decay_samples ............................ None
|
| 13376 |
+
lr_wsd_decay_style .............................. exponential
|
| 13377 |
+
main_grads_dtype ................................ torch.float32
|
| 13378 |
+
main_params_dtype ............................... torch.float32
|
| 13379 |
+
make_vocab_size_divisible_by .................... 128
|
| 13380 |
+
mamba_head_dim .................................. 64
|
| 13381 |
+
mamba_num_groups ................................ 8
|
| 13382 |
+
mamba_num_heads ................................. None
|
| 13383 |
+
mamba_state_dim ................................. 128
|
| 13384 |
+
manual_gc ....................................... False
|
| 13385 |
+
manual_gc_eval .................................. True
|
| 13386 |
+
manual_gc_interval .............................. 0
|
| 13387 |
+
mask_factor ..................................... 1.0
|
| 13388 |
+
mask_prob ....................................... 0.15
|
| 13389 |
+
mask_type ....................................... random
|
| 13390 |
+
masked_softmax_fusion ........................... True
|
| 13391 |
+
max_position_embeddings ......................... 131072
|
| 13392 |
+
max_tokens_to_oom ............................... 12000
|
| 13393 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 13394 |
+
merge_file ...................................... merges.txt
|
| 13395 |
+
micro_batch_size ................................ 1
|
| 13396 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 13397 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 13398 |
+
min_loss_scale .................................. 1.0
|
| 13399 |
+
min_lr .......................................... 0.0
|
| 13400 |
+
mlp_chunks_for_prefill .......................... 1
|
| 13401 |
+
mmap_bin_files .................................. True
|
| 13402 |
+
mock_data ....................................... True
|
| 13403 |
+
moe_apply_probs_on_input ........................ False
|
| 13404 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 13405 |
+
moe_enable_deepep ............................... False
|
| 13406 |
+
moe_expert_capacity_factor ...................... None
|
| 13407 |
+
moe_extended_tp ................................. False
|
| 13408 |
+
moe_ffn_hidden_size ............................. None
|
| 13409 |
+
moe_grouped_gemm ................................ False
|
| 13410 |
+
moe_input_jitter_eps ............................ None
|
| 13411 |
+
moe_layer_freq .................................. 1
|
| 13412 |
+
moe_layer_recompute ............................. False
|
| 13413 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 13414 |
+
moe_per_layer_logging ........................... False
|
| 13415 |
+
moe_permute_fusion .............................. False
|
| 13416 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 13417 |
+
moe_router_dtype ................................ None
moe_router_enable_expert_bias ................... False
moe_router_force_load_balancing ................. False
moe_router_group_topk ........................... None
moe_router_load_balancing_type .................. aux_loss
moe_router_num_groups ........................... None
moe_router_padding_for_fp8 ...................... False
moe_router_pre_softmax .......................... False
moe_router_score_function ....................... softmax
moe_router_topk ................................. 2
moe_router_topk_scaling_factor .................. None
moe_shared_expert_intermediate_size ............. None
moe_shared_expert_overlap ....................... False
moe_token_dispatcher_type ....................... allgather
moe_token_drop_policy ........................... probs
moe_use_legacy_grouped_gemm ..................... False
moe_use_upcycling ............................... False
moe_z_loss_coeff ................................ None
mrope_section ................................... None
mscale .......................................... 1.0
mscale_all_dim .................................. 1.0
mtp_loss_scaling_factor ......................... 0.1
mtp_num_layers .................................. None
multi_latent_attention .......................... False
nccl_all_reduce_for_prefill ..................... False
nccl_communicator_config_path ................... None
nccl_ub ......................................... False
no_load_optim ................................... None
no_load_rng ..................................... None
no_persist_layer_norm ........................... False
no_rope_freq .................................... None
no_save_optim ................................... None
no_save_rng ..................................... None
non_persistent_ckpt_type ........................ None
non_persistent_global_ckpt_dir .................. None
non_persistent_local_ckpt_algo .................. fully_parallel
non_persistent_local_ckpt_dir ................... None
non_persistent_save_interval .................... None
norm_epsilon .................................... 1e-05
normalization ................................... LayerNorm
num_attention_heads ............................. 64
num_channels .................................... 3
num_classes ..................................... 1000
num_dataset_builder_threads ..................... 1
num_distributed_optimizer_instances ............. 1
num_experts ..................................... None
num_layers ...................................... 2
num_layers_at_end_in_bf16 ....................... 1
num_layers_at_start_in_bf16 ..................... 1
num_layers_per_virtual_pipeline_stage ........... None
num_query_groups ................................ 16
num_virtual_stages_per_pipeline_rank ............ None
num_workers ..................................... 2
object_storage_cache_path ....................... None
one_logger_async ................................ False
one_logger_project .............................. megatron-lm
one_logger_run_name ............................. None
onnx_safe ....................................... None
openai_gelu ..................................... False
optimizer ....................................... adam
optimizer_cpu_offload ........................... False
optimizer_offload_fraction ...................... 1.0
output_bert_embeddings .......................... False
overlap_cpu_optimizer_d2h_h2d ................... False
overlap_grad_reduce ............................. False
overlap_p2p_comm ................................ False
overlap_p2p_comm_warmup_flush ................... False
overlap_param_gather ............................ False
overlap_param_gather_with_optimizer_step ........ False
override_opt_param_scheduler .................... False
params_dtype .................................... torch.float16
patch_dim ....................................... 16
per_split_data_args_path ........................ None
perform_initialization .......................... True
pin_cpu_grads ................................... True
pin_cpu_params .................................. True
pipeline_model_parallel_comm_backend ............ None
pipeline_model_parallel_size .................... 1
pipeline_model_parallel_split_rank .............. None
position_embedding_type ......................... learned_absolute
pretrained_checkpoint ........................... None
profile ......................................... False
profile_ranks ................................... [0]
profile_step_end ................................ 12
profile_step_start .............................. 10
q_lora_rank ..................................... None
qk_head_dim ..................................... 128
qk_l2_norm ...................................... False
qk_layernorm .................................... False
qk_pos_emb_head_dim ............................. 64
query_in_block_prob ............................. 0.1
rampup_batch_size ............................... None
rank ............................................ 0
recompute_granularity ........................... None
recompute_method ................................ None
recompute_modules ............................... None
recompute_num_layers ............................ None
record_memory_history ........................... False
relative_attention_max_distance ................. 128
relative_attention_num_buckets .................. 32
replication ..................................... False
replication_factor .............................. 2
replication_jump ................................ None
rerun_mode ...................................... disabled
reset_attention_mask ............................ False
reset_position_ids .............................. False
result_rejected_tracker_filename ................ None
retriever_report_topk_accuracies ................ []
retriever_score_scaling ......................... False
retriever_seq_length ............................ 256
retro_add_retriever ............................. False
retro_attention_gate ............................ 1
retro_cyclic_train_iters ........................ None
retro_encoder_attention_dropout ................. 0.1
retro_encoder_hidden_dropout .................... 0.1
retro_encoder_layers ............................ 2
retro_num_neighbors ............................. 2
retro_num_retrieved_chunks ...................... 2
retro_project_dir ............................... None
retro_verify_neighbor_count ..................... True
rope_scaling_factor ............................. 8.0
rotary_base ..................................... 10000
rotary_interleaved .............................. False
rotary_percent .................................. 1.0
rotary_scaling_factor ........................... 1.0
rotary_seq_len_interpolation_factor ............. None
run_workload_inspector_server ................... False
sample_rate ..................................... 1.0
save ............................................ gpt-checkpoint
save_interval ................................... 16
scatter_gather_tensors_in_pipeline .............. True
seed ............................................ 1234
seq_length ...................................... 131072
sequence_parallel ............................... False
sgd_momentum .................................... 0.9
short_seq_prob .................................. 0.1
skip_train ...................................... False
skipped_train_samples ........................... 0
spec ............................................ None
split ........................................... None
squared_relu .................................... False
start_weight_decay .............................. 0.1
straggler_ctrlr_port ............................ 65535
straggler_minmax_count .......................... 1
suggested_communication_unit_size ............... None
swiglu .......................................... False
swin_backbone_type .............................. tiny
symmetric_ar_type ............................... None
te_rng_tracker .................................. False
tensor_model_parallel_size ...................... 8
tensorboard_dir ................................. tensorboard-logs/
tensorboard_log_interval ........................ 1
tensorboard_queue_size .......................... 1000
test_data_path .................................. None
test_mode ....................................... False
tiktoken_num_special_tokens ..................... 1000
tiktoken_pattern ................................ None
tiktoken_special_tokens ......................... None
timing_log_level ................................ 0
timing_log_option ............................... minmax
titles_data_path ................................ None
tokenizer_model ................................. None
tokenizer_type .................................. GPT2BPETokenizer
torch_fsdp2_reshard_after_forward ............... True
tp_comm_bootstrap_backend ....................... nccl
tp_comm_bulk_dgrad .............................. True
tp_comm_bulk_wgrad .............................. True
tp_comm_overlap ................................. False
tp_comm_overlap_ag .............................. True
tp_comm_overlap_cfg ............................. None
tp_comm_overlap_rs .............................. True
tp_comm_overlap_rs_dgrad ........................ False
tp_comm_split_ag ................................ True
tp_comm_split_rs ................................ True
train_data_path ................................. None
train_iters ..................................... 10
train_samples ................................... None
train_sync_interval ............................. None
transformer_impl ................................ transformer_engine
transformer_pipeline_model_parallel_size ........ 1
untie_embeddings_and_output_weights ............. False
use_checkpoint_args ............................. False
use_checkpoint_opt_param_scheduler .............. False
use_cpu_initialization .......................... None
use_custom_fsdp ................................. False
use_dist_ckpt ................................... True
use_dist_ckpt_deprecated ........................ False
use_distributed_optimizer ....................... False
use_flash_attn .................................. False
use_legacy_models ............................... False
use_mp_args_from_checkpoint_args ................ False
use_one_sent_docs ............................... False
use_persistent_ckpt_worker ...................... False
use_precision_aware_optimizer ................... False
use_pytorch_profiler ............................ False
use_ring_exchange_p2p ........................... False
use_rope_scaling ................................ False
use_rotary_position_embeddings .................. False
use_sharp ....................................... False
use_tokenizer_model_from_checkpoint_args ........ True
use_torch_fsdp2 ................................. False
use_torch_optimizer_for_cpu_offload ............. False
use_tp_pp_dp_mapping ............................ False
v_head_dim ...................................... 128
valid_data_path ................................. None
variable_seq_lengths ............................ False
virtual_pipeline_model_parallel_size ............ None
vision_backbone_type ............................ vit
vision_pretraining .............................. False
vision_pretraining_type ......................... classify
vocab_extra_ids ................................. 0
vocab_file ...................................... vocab.json
vocab_size ...................................... None
wandb_exp_name ..................................
wandb_project ...................................
wandb_save_dir ..................................
weight_decay .................................... 0.1
weight_decay_incr_style ......................... constant
wgrad_deferral_limit ............................ 0
world_size ...................................... 16
yaml_cfg ........................................ None
-------------------- end of arguments ---------------------
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
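Editor's note on the padded-vocab line above: Megatron rounds the vocabulary up so every tensor-parallel rank holds an equally sized shard. Below is a minimal sketch of that arithmetic, assuming the default make-vocab-size-divisible-by of 128 together with the tensor-parallel size of 8 reported in this run; the helper name is illustrative, not the library's API.

# Hedged sketch (assumed defaults, illustrative helper name): why 50257 -> 51200.
def padded_vocab_size(orig_vocab_size: int,
                      make_vocab_size_divisible_by: int = 128,
                      tensor_model_parallel_size: int = 8) -> int:
    multiple = make_vocab_size_divisible_by * tensor_model_parallel_size  # 1024 here
    # Round up to the next multiple so each TP rank gets the same shard size.
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

assert padded_vocab_size(50257) == 51200            # 943 dummy tokens, as logged above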
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> initialized tensor model parallel with size 8
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
INFO:megatron.training.initialize:Setting logging level to 0
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
INFO:megatron.training.initialize:Setting logging level to 0
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.038 seconds
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.517 seconds
time to initialize megatron (seconds): 9.194
[after megatron is initialized] datetime: 2025-06-21 21:34:16
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (607188480 elements, 607188480 padded size):
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.1.self_attention.linear_qkv.bias
module.decoder.layers.0.mlp.linear_fc2.bias
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
module.decoder.final_layernorm.weight
module.decoder.layers.1.mlp.linear_fc1.weight
module.decoder.layers.0.mlp.linear_fc1.weight
module.decoder.layers.1.mlp.linear_fc2.bias
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
module.decoder.layers.0.self_attention.linear_qkv.weight
module.decoder.layers.0.self_attention.linear_proj.weight
module.embedding.word_embeddings.weight
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
module.decoder.layers.0.self_attention.linear_proj.bias
module.embedding.position_embeddings.weight
module.decoder.layers.1.mlp.linear_fc1.bias
module.decoder.layers.0.mlp.linear_fc2.weight
module.decoder.layers.0.mlp.linear_fc1.bias
module.decoder.layers.1.self_attention.linear_qkv.weight
module.decoder.layers.1.self_attention.linear_proj.weight
module.decoder.layers.0.self_attention.linear_qkv.bias
module.decoder.layers.1.mlp.linear_fc2.weight
module.decoder.layers.1.self_attention.linear_proj.bias
module.decoder.final_layernorm.bias
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14ab483f68d0>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
load-checkpoint ................................: (3.70, 4.37)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:24
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
train: 10
validation: 1
test: 1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=131072, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14ab4880a8a0>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.007208 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001747 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001477 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 21:34:24
done with setup ...
(min, max) time across ranks (ms):
model-and-optimizer-setup ......................: (7671.11, 7677.53)
train/valid/test-data-iterators-setup ..........: (20.05, 123.01)
training ...
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 21:34:24
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.18 GiB is free. Including non-PyTorch memory, this process has 7.63 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.19 GiB is free. Including non-PyTorch memory, this process has 7.61 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
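Editor's note on the out-of-memory failures above: with seq_length 131072 and create_attention_mask=True, setup_batches in pretrain_gpt_profile.py materializes a dense (seq x seq) mask via torch.ones, and that mask grows quadratically with sequence length. The sketch below only estimates the footprint; the batch size and dtype used here are illustrative assumptions, not values read from the failing job.

# Hedged sketch (assumed shape and dtype): rough memory cost of a dense attention mask.
def mask_gib(batch: int, seq_len: int, bytes_per_elem: int) -> float:
    # Mask of shape (batch, 1, seq_len, seq_len), as a naive torch.ones(...) would allocate it.
    return batch * 1 * seq_len * seq_len * bytes_per_elem / 2**30

seq = 131072
print(mask_gib(1, seq, 1))   # ~16 GiB for a single boolean mask
print(mask_gib(1, seq, 4))   # ~64 GiB for the same mask in float32
# A few such masks already exceed the ~140 GiB per GPU reported above; the 65536.00 GiB
# request in the traceback is presumably this same quadratic term scaled by the script's
# batch dimension. Avoiding the materialized mask (e.g. fused or flash attention) or
# shortening the sequence addresses the failure, whereas
# PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True only mitigates fragmentation.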
attnserver.run_attnserver.slurm.sh.343207.err.log
CHANGED
The diff for this file is too large to render.
See raw diff
attnserver.run_attnserver.slurm.sh.343207.out.log
CHANGED
@@ -14558,3 +14558,1287 @@ CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 14561 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 14562 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 14563 |
+
Number of virtual stages per pipeline stage: None
|
| 14564 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 14565 |
+
using torch.float16 for parameters ...
|
| 14566 |
+
------------------------ arguments ------------------------
|
| 14567 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 14568 |
+
account_for_loss_in_pipeline_split .............. False
|
| 14569 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 14570 |
+
adam_beta1 ...................................... 0.9
|
| 14571 |
+
adam_beta2 ...................................... 0.999
|
| 14572 |
+
adam_eps ........................................ 1e-08
|
| 14573 |
+
add_bias_linear ................................. True
|
| 14574 |
+
add_position_embedding .......................... True
|
| 14575 |
+
add_qkv_bias .................................... True
|
| 14576 |
+
adlr_autoresume ................................. False
|
| 14577 |
+
adlr_autoresume_interval ........................ 1000
|
| 14578 |
+
align_grad_reduce ............................... True
|
| 14579 |
+
align_param_gather .............................. False
|
| 14580 |
+
app_tag_run_name ................................ None
|
| 14581 |
+
app_tag_run_version ............................. 0.0.0
|
| 14582 |
+
apply_layernorm_1p .............................. False
|
| 14583 |
+
apply_query_key_layer_scaling ................... False
|
| 14584 |
+
apply_residual_connection_post_layernorm ........ False
|
| 14585 |
+
apply_rope_fusion ............................... False
|
| 14586 |
+
async_save ...................................... None
|
| 14587 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 14588 |
+
attention_backend ............................... AttnBackend.auto
|
| 14589 |
+
attention_dropout ............................... 0.1
|
| 14590 |
+
attention_softmax_in_fp32 ....................... False
|
| 14591 |
+
auto_detect_ckpt_format ......................... False
|
| 14592 |
+
barrier_with_L1_time ............................ True
|
| 14593 |
+
bert_binary_head ................................ True
|
| 14594 |
+
bert_embedder_type .............................. megatron
|
| 14595 |
+
bert_load ....................................... None
|
| 14596 |
+
bf16 ............................................ False
|
| 14597 |
+
bias_dropout_fusion ............................. True
|
| 14598 |
+
bias_gelu_fusion ................................ True
|
| 14599 |
+
bias_swiglu_fusion .............................. True
|
| 14600 |
+
biencoder_projection_dim ........................ 0
|
| 14601 |
+
biencoder_shared_query_context_model ............ False
|
| 14602 |
+
block_data_path ................................. None
|
| 14603 |
+
calc_ft_timeouts ................................ False
|
| 14604 |
+
calculate_per_token_loss ........................ False
|
| 14605 |
+
check_for_large_grads ........................... False
|
| 14606 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 14607 |
+
check_for_spiky_loss ............................ False
|
| 14608 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 14609 |
+
ckpt_assume_constant_structure .................. False
|
| 14610 |
+
ckpt_convert_format ............................. None
|
| 14611 |
+
ckpt_convert_save ............................... None
|
| 14612 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 14613 |
+
ckpt_format ..................................... torch_dist
|
| 14614 |
+
ckpt_fully_parallel_load ........................ False
|
| 14615 |
+
ckpt_fully_parallel_save ........................ True
|
| 14616 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 14617 |
+
ckpt_step ....................................... None
|
| 14618 |
+
classes_fraction ................................ 1.0
|
| 14619 |
+
clip_grad ....................................... 1.0
|
| 14620 |
+
clone_scatter_output_in_embedding ............... True
|
| 14621 |
+
config_logger_dir ...............................
|
| 14622 |
+
consumed_train_samples .......................... 0
|
| 14623 |
+
consumed_valid_samples .......................... 0
|
| 14624 |
+
context_parallel_size ........................... 1
|
| 14625 |
+
cp_comm_type .................................... ['p2p']
|
| 14626 |
+
create_attention_mask_in_dataloader ............. True
|
| 14627 |
+
cross_entropy_fusion_impl ....................... native
|
| 14628 |
+
cross_entropy_loss_fusion ....................... False
|
| 14629 |
+
cuda_graph_scope ................................ full
|
| 14630 |
+
cuda_graph_warmup_steps ......................... 3
|
| 14631 |
+
data_args_path .................................. None
|
| 14632 |
+
data_cache_path ................................. None
|
| 14633 |
+
data_parallel_random_init ....................... False
|
| 14634 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 14635 |
+
data_parallel_size .............................. 1
|
| 14636 |
+
data_path ....................................... None
|
| 14637 |
+
data_per_class_fraction ......................... 1.0
|
| 14638 |
+
data_sharding ................................... True
|
| 14639 |
+
dataloader_type ................................. single
|
| 14640 |
+
ddp_average_in_collective ....................... False
|
| 14641 |
+
ddp_bucket_size ................................. None
|
| 14642 |
+
ddp_num_buckets ................................. None
|
| 14643 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 14644 |
+
decoder_first_pipeline_num_layers ............... None
|
| 14645 |
+
decoder_last_pipeline_num_layers ................ None
|
| 14646 |
+
decoder_num_layers .............................. None
|
| 14647 |
+
decoder_seq_length .............................. None
|
| 14648 |
+
decoupled_lr .................................... None
|
| 14649 |
+
decoupled_min_lr ................................ None
|
| 14650 |
+
decrease_batch_size_if_needed ................... False
|
| 14651 |
+
defer_embedding_wgrad_compute ................... False
|
| 14652 |
+
deprecated_use_mcore_models ..................... False
|
| 14653 |
+
deterministic_mode .............................. False
|
| 14654 |
+
dino_bottleneck_size ............................ 256
|
| 14655 |
+
dino_freeze_last_layer .......................... 1
|
| 14656 |
+
dino_head_hidden_size ........................... 2048
|
| 14657 |
+
dino_local_crops_number ......................... 10
|
| 14658 |
+
dino_local_img_size ............................. 96
|
| 14659 |
+
dino_norm_last_layer ............................ False
|
| 14660 |
+
dino_teacher_temp ............................... 0.07
|
| 14661 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 14662 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 14663 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 14664 |
+
disable_mamba_mem_eff_path ...................... False
|
| 14665 |
+
disable_straggler_on_startup .................... False
|
| 14666 |
+
dist_ckpt_format_deprecated ..................... None
|
| 14667 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 14668 |
+
distribute_saved_activations .................... False
|
| 14669 |
+
distributed_backend ............................. nccl
|
| 14670 |
+
distributed_timeout_minutes ..................... 10
|
| 14671 |
+
embedding_path .................................. None
|
| 14672 |
+
empty_unused_memory_level ....................... 0
|
| 14673 |
+
enable_cuda_graph ............................... False
|
| 14674 |
+
enable_ft_package ............................... False
|
| 14675 |
+
enable_gloo_process_groups ...................... True
|
| 14676 |
+
enable_msc ...................................... True
|
| 14677 |
+
enable_one_logger ............................... True
|
| 14678 |
+
encoder_num_layers .............................. 2
|
| 14679 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 14680 |
+
encoder_seq_length .............................. 65536
|
| 14681 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 14682 |
+
end_weight_decay ................................ 0.1
|
| 14683 |
+
eod_mask_loss ................................... False
|
| 14684 |
+
error_injection_rate ............................ 0
|
| 14685 |
+
error_injection_type ............................ transient_error
|
| 14686 |
+
eval_interval ................................... 16
|
| 14687 |
+
eval_iters ...................................... 1
|
| 14688 |
+
evidence_data_path .............................. None
|
| 14689 |
+
exit_duration_in_mins ........................... None
|
| 14690 |
+
exit_interval ................................... None
|
| 14691 |
+
exit_on_missing_checkpoint ...................... False
|
| 14692 |
+
exit_signal_handler ............................. False
|
| 14693 |
+
exp_avg_dtype ................................... torch.float32
|
| 14694 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 14695 |
+
expert_model_parallel_size ...................... 1
|
| 14696 |
+
expert_tensor_parallel_size ..................... 8
|
| 14697 |
+
external_cuda_graph ............................. False
|
| 14698 |
+
ffn_hidden_size ................................. 16384
|
| 14699 |
+
finetune ........................................ False
|
| 14700 |
+
first_last_layers_bf16 .......................... False
|
| 14701 |
+
flash_decode .................................... False
|
| 14702 |
+
fp16 ............................................ True
|
| 14703 |
+
fp16_lm_cross_entropy ........................... False
|
| 14704 |
+
fp32_residual_connection ........................ False
|
| 14705 |
+
fp8 ............................................. None
|
| 14706 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 14707 |
+
fp8_amax_history_len ............................ 1
|
| 14708 |
+
fp8_interval .................................... 1
|
| 14709 |
+
fp8_margin ...................................... 0
|
| 14710 |
+
fp8_param_gather ................................ False
|
| 14711 |
+
fp8_recipe ...................................... delayed
|
| 14712 |
+
fp8_wgrad ....................................... True
|
| 14713 |
+
fsdp_double_buffer .............................. False
|
| 14714 |
+
global_batch_size ............................... 1
|
| 14715 |
+
grad_reduce_in_bf16 ............................. False
|
| 14716 |
+
gradient_accumulation_fusion .................... True
|
| 14717 |
+
gradient_reduce_div_fusion ...................... True
|
| 14718 |
+
group_query_attention ........................... True
|
| 14719 |
+
head_lr_mult .................................... 1.0
|
| 14720 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 14721 |
+
heterogeneous_layers_config_path ................ None
|
| 14722 |
+
hidden_dropout .................................. 0.1
|
| 14723 |
+
hidden_size ..................................... 4096
|
| 14724 |
+
hierarchical_context_parallel_sizes ............. None
|
| 14725 |
+
high_priority_stream_groups ..................... []
|
| 14726 |
+
hybrid_attention_ratio .......................... 0.0
|
| 14727 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 14728 |
+
hybrid_override_pattern ......................... None
|
| 14729 |
+
hysteresis ...................................... 2
|
| 14730 |
+
ict_head_size ................................... None
|
| 14731 |
+
ict_load ........................................ None
|
| 14732 |
+
img_h ........................................... 224
|
| 14733 |
+
img_w ........................................... 224
|
| 14734 |
+
indexer_batch_size .............................. 128
|
| 14735 |
+
indexer_log_interval ............................ 1000
|
| 14736 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 14737 |
+
inference_dynamic_batching ...................... False
|
| 14738 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 14739 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 14740 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 14741 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 14742 |
+
inference_dynamic_batching_max_requests_override None
|
| 14743 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 14744 |
+
inference_max_batch_size ........................ 8
|
| 14745 |
+
inference_max_seq_length ........................ 2560
|
| 14746 |
+
inference_rng_tracker ........................... False
|
| 14747 |
+
init_method_std ................................. 0.02
|
| 14748 |
+
init_method_xavier_uniform ...................... False
|
| 14749 |
+
init_model_with_meta_device ..................... False
|
| 14750 |
+
initial_loss_scale .............................. 4294967296
|
| 14751 |
+
inprocess_active_world_size ..................... 8
|
| 14752 |
+
inprocess_barrier_timeout ....................... 120
|
| 14753 |
+
inprocess_completion_timeout .................... 120
|
| 14754 |
+
inprocess_empty_cuda_cache ...................... False
|
| 14755 |
+
inprocess_granularity ........................... node
|
| 14756 |
+
inprocess_hard_timeout .......................... 90
|
| 14757 |
+
inprocess_heartbeat_interval .................... 30
|
| 14758 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 14759 |
+
inprocess_last_call_wait ........................ 1
|
| 14760 |
+
inprocess_max_iterations ........................ None
|
| 14761 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 14762 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 14763 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 14764 |
+
inprocess_restart ............................... False
|
| 14765 |
+
inprocess_soft_timeout .......................... 60
|
| 14766 |
+
inprocess_termination_grace_time ................ 1
|
| 14767 |
+
is_hybrid_model ................................. False
|
| 14768 |
+
iter_per_epoch .................................. 1250
|
| 14769 |
+
iterations_to_skip .............................. []
|
| 14770 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 14771 |
+
kv_channels ..................................... 64
|
| 14772 |
+
kv_lora_rank .................................... 32
|
| 14773 |
+
lazy_mpu_init ................................... None
|
| 14774 |
+
load ............................................ gpt-checkpoint
|
| 14775 |
+
load_model_opt_format ........................... False
|
| 14776 |
+
local_rank ...................................... 0
|
| 14777 |
+
log_interval .................................... 1
|
| 14778 |
+
log_loss_scale_to_tensorboard ................... True
|
| 14779 |
+
log_memory_to_tensorboard ....................... False
|
| 14780 |
+
log_num_zeros_in_grad ........................... False
|
| 14781 |
+
log_params_norm ................................. False
|
| 14782 |
+
log_progress .................................... False
|
| 14783 |
+
log_straggler ................................... False
|
| 14784 |
+
log_throughput .................................. False
|
| 14785 |
+
log_timers_to_tensorboard ....................... False
|
| 14786 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 14787 |
+
log_world_size_to_tensorboard ................... False
|
| 14788 |
+
logging_level ................................... 0
|
| 14789 |
+
loss_scale ...................................... None
|
| 14790 |
+
loss_scale_window ............................... 1000
|
| 14791 |
+
lr .............................................. 0.0005
|
| 14792 |
+
lr_decay_iters .................................. 150000
|
| 14793 |
+
lr_decay_samples ................................ None
|
| 14794 |
+
lr_decay_style .................................. cosine
|
| 14795 |
+
lr_warmup_fraction .............................. None
|
| 14796 |
+
lr_warmup_init .................................. 0.0
|
| 14797 |
+
lr_warmup_iters ................................. 2
|
| 14798 |
+
lr_warmup_samples ............................... 0
|
| 14799 |
+
lr_wsd_decay_iters .............................. None
|
| 14800 |
+
lr_wsd_decay_samples ............................ None
|
| 14801 |
+
lr_wsd_decay_style .............................. exponential
|
| 14802 |
+
main_grads_dtype ................................ torch.float32
|
| 14803 |
+
main_params_dtype ............................... torch.float32
|
| 14804 |
+
make_vocab_size_divisible_by .................... 128
|
| 14805 |
+
mamba_head_dim .................................. 64
|
| 14806 |
+
mamba_num_groups ................................ 8
|
| 14807 |
+
mamba_num_heads ................................. None
|
| 14808 |
+
mamba_state_dim ................................. 128
|
| 14809 |
+
manual_gc ....................................... False
|
| 14810 |
+
manual_gc_eval .................................. True
|
| 14811 |
+
manual_gc_interval .............................. 0
|
| 14812 |
+
mask_factor ..................................... 1.0
|
| 14813 |
+
mask_prob ....................................... 0.15
|
| 14814 |
+
mask_type ....................................... random
|
| 14815 |
+
masked_softmax_fusion ........................... True
|
| 14816 |
+
max_position_embeddings ......................... 65536
|
| 14817 |
+
max_tokens_to_oom ............................... 12000
|
| 14818 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 14819 |
+
merge_file ...................................... merges.txt
|
| 14820 |
+
micro_batch_size ................................ 1
|
| 14821 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 14822 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 14823 |
+
min_loss_scale .................................. 1.0
|
| 14824 |
+
min_lr .......................................... 0.0
|
| 14825 |
+
mlp_chunks_for_prefill .......................... 1
|
| 14826 |
+
mmap_bin_files .................................. True
|
| 14827 |
+
mock_data ....................................... True
|
| 14828 |
+
moe_apply_probs_on_input ........................ False
|
| 14829 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 14830 |
+
moe_enable_deepep ............................... False
|
| 14831 |
+
moe_expert_capacity_factor ...................... None
|
| 14832 |
+
moe_extended_tp ................................. False
|
| 14833 |
+
moe_ffn_hidden_size ............................. None
|
| 14834 |
+
moe_grouped_gemm ................................ False
|
| 14835 |
+
moe_input_jitter_eps ............................ None
|
| 14836 |
+
moe_layer_freq .................................. 1
|
| 14837 |
+
moe_layer_recompute ............................. False
|
| 14838 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 14839 |
+
moe_per_layer_logging ........................... False
|
| 14840 |
+
moe_permute_fusion .............................. False
|
| 14841 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 14842 |
+
moe_router_dtype ................................ None
|
| 14843 |
+
moe_router_enable_expert_bias ................... False
|
| 14844 |
+
moe_router_force_load_balancing ................. False
|
| 14845 |
+
moe_router_group_topk ........................... None
|
| 14846 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 14847 |
+
moe_router_num_groups ........................... None
|
| 14848 |
+
moe_router_padding_for_fp8 ...................... False
|
| 14849 |
+
moe_router_pre_softmax .......................... False
|
| 14850 |
+
moe_router_score_function ....................... softmax
|
| 14851 |
+
moe_router_topk ................................. 2
|
| 14852 |
+
moe_router_topk_scaling_factor .................. None
|
| 14853 |
+
moe_shared_expert_intermediate_size ............. None
|
| 14854 |
+
moe_shared_expert_overlap ....................... False
|
| 14855 |
+
moe_token_dispatcher_type ....................... allgather
|
| 14856 |
+
moe_token_drop_policy ........................... probs
|
| 14857 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 14858 |
+
moe_use_upcycling ............................... False
|
| 14859 |
+
moe_z_loss_coeff ................................ None
|
| 14860 |
+
mrope_section ................................... None
|
| 14861 |
+
mscale .......................................... 1.0
|
| 14862 |
+
mscale_all_dim .................................. 1.0
|
| 14863 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 14864 |
+
mtp_num_layers .................................. None
|
| 14865 |
+
multi_latent_attention .......................... False
|
| 14866 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 14867 |
+
nccl_communicator_config_path ................... None
|
| 14868 |
+
nccl_ub ......................................... False
|
| 14869 |
+
no_load_optim ................................... None
|
| 14870 |
+
no_load_rng ..................................... None
|
| 14871 |
+
no_persist_layer_norm ........................... False
|
| 14872 |
+
no_rope_freq .................................... None
|
| 14873 |
+
no_save_optim ................................... None
|
| 14874 |
+
no_save_rng ..................................... None
|
| 14875 |
+
non_persistent_ckpt_type ........................ None
|
| 14876 |
+
non_persistent_global_ckpt_dir .................. None
|
| 14877 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 14878 |
+
non_persistent_local_ckpt_dir ................... None
|
| 14879 |
+
non_persistent_save_interval .................... None
|
| 14880 |
+
norm_epsilon .................................... 1e-05
|
| 14881 |
+
normalization ................................... LayerNorm
|
| 14882 |
+
num_attention_heads ............................. 64
|
| 14883 |
+
num_channels .................................... 3
|
| 14884 |
+
num_classes ..................................... 1000
|
| 14885 |
+
num_dataset_builder_threads ..................... 1
|
| 14886 |
+
num_distributed_optimizer_instances ............. 1
|
| 14887 |
+
num_experts ..................................... None
|
| 14888 |
+
num_layers ...................................... 2
|
| 14889 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 14890 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 14891 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 14892 |
+
num_query_groups ................................ 16
|
| 14893 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 14894 |
+
num_workers ..................................... 2
|
| 14895 |
+
object_storage_cache_path ....................... None
|
| 14896 |
+
one_logger_async ................................ False
|
| 14897 |
+
one_logger_project .............................. megatron-lm
|
| 14898 |
+
one_logger_run_name ............................. None
|
| 14899 |
+
onnx_safe ....................................... None
|
| 14900 |
+
openai_gelu ..................................... False
|
| 14901 |
+
optimizer ....................................... adam
|
| 14902 |
+
optimizer_cpu_offload ........................... False
|
| 14903 |
+
optimizer_offload_fraction ...................... 1.0
|
| 14904 |
+
output_bert_embeddings .......................... False
|
| 14905 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 14906 |
+
overlap_grad_reduce ............................. False
|
| 14907 |
+
overlap_p2p_comm ................................ False
|
| 14908 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 14909 |
+
overlap_param_gather ............................ False
|
| 14910 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 14911 |
+
override_opt_param_scheduler .................... False
|
| 14912 |
+
params_dtype .................................... torch.float16
|
| 14913 |
+
patch_dim ....................................... 16
|
| 14914 |
+
per_split_data_args_path ........................ None
|
| 14915 |
+
perform_initialization .......................... True
|
| 14916 |
+
pin_cpu_grads ................................... True
|
| 14917 |
+
pin_cpu_params .................................. True
|
| 14918 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 14919 |
+
pipeline_model_parallel_size .................... 1
|
| 14920 |
+
pipeline_model_parallel_split_rank .............. None
|
| 14921 |
+
position_embedding_type ......................... learned_absolute
|
| 14922 |
+
pretrained_checkpoint ........................... None
|
| 14923 |
+
profile ......................................... False
|
| 14924 |
+
profile_ranks ................................... [0]
|
| 14925 |
+
profile_step_end ................................ 12
|
| 14926 |
+
profile_step_start .............................. 10
|
| 14927 |
+
q_lora_rank ..................................... None
|
| 14928 |
+
qk_head_dim ..................................... 128
|
| 14929 |
+
qk_l2_norm ...................................... False
|
| 14930 |
+
qk_layernorm .................................... False
|
| 14931 |
+
qk_pos_emb_head_dim ............................. 64
|
| 14932 |
+
query_in_block_prob ............................. 0.1
|
| 14933 |
+
rampup_batch_size ............................... None
|
| 14934 |
+
rank ............................................ 0
|
| 14935 |
+
recompute_granularity ........................... None
|
| 14936 |
+
recompute_method ................................ None
|
| 14937 |
+
recompute_modules ............................... None
|
| 14938 |
+
recompute_num_layers ............................ None
|
| 14939 |
+
record_memory_history ........................... False
|
| 14940 |
+
relative_attention_max_distance ................. 128
|
| 14941 |
+
relative_attention_num_buckets .................. 32
|
| 14942 |
+
replication ..................................... False
|
| 14943 |
+
replication_factor .............................. 2
|
| 14944 |
+
replication_jump ................................ None
|
| 14945 |
+
rerun_mode ...................................... disabled
|
| 14946 |
+
reset_attention_mask ............................ False
|
| 14947 |
+
reset_position_ids .............................. False
|
| 14948 |
+
result_rejected_tracker_filename ................ None
|
| 14949 |
+
retriever_report_topk_accuracies ................ []
|
| 14950 |
+
retriever_score_scaling ......................... False
|
| 14951 |
+
retriever_seq_length ............................ 256
|
| 14952 |
+
retro_add_retriever ............................. False
|
| 14953 |
+
retro_attention_gate ............................ 1
|
| 14954 |
+
retro_cyclic_train_iters ........................ None
|
| 14955 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 14956 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 14957 |
+
retro_encoder_layers ............................ 2
|
| 14958 |
+
retro_num_neighbors ............................. 2
|
| 14959 |
+
retro_num_retrieved_chunks ...................... 2
|
| 14960 |
+
retro_project_dir ............................... None
|
| 14961 |
+
retro_verify_neighbor_count ..................... True
|
| 14962 |
+
rope_scaling_factor ............................. 8.0
|
| 14963 |
+
rotary_base ..................................... 10000
|
| 14964 |
+
rotary_interleaved .............................. False
|
| 14965 |
+
rotary_percent .................................. 1.0
|
| 14966 |
+
rotary_scaling_factor ........................... 1.0
|
| 14967 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 14968 |
+
run_workload_inspector_server ................... False
|
| 14969 |
+
sample_rate ..................................... 1.0
|
| 14970 |
+
save ............................................ gpt-checkpoint
|
| 14971 |
+
save_interval ................................... 16
|
| 14972 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 14973 |
+
seed ............................................ 1234
|
| 14974 |
+
seq_length ...................................... 65536
|
| 14975 |
+
sequence_parallel ............................... False
|
| 14976 |
+
sgd_momentum .................................... 0.9
|
| 14977 |
+
short_seq_prob .................................. 0.1
|
| 14978 |
+
skip_train ...................................... False
|
| 14979 |
+
skipped_train_samples ........................... 0
|
| 14980 |
+
spec ............................................ None
|
| 14981 |
+
split ........................................... None
|
| 14982 |
+
squared_relu .................................... False
|
| 14983 |
+
start_weight_decay .............................. 0.1
|
| 14984 |
+
straggler_ctrlr_port ............................ 65535
|
| 14985 |
+
straggler_minmax_count .......................... 1
|
| 14986 |
+
suggested_communication_unit_size ............... None
|
| 14987 |
+
swiglu .......................................... False
|
| 14988 |
+
swin_backbone_type .............................. tiny
|
| 14989 |
+
symmetric_ar_type ............................... None
|
| 14990 |
+
te_rng_tracker .................................. False
|
| 14991 |
+
tensor_model_parallel_size ...................... 8
|
| 14992 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 14993 |
+
tensorboard_log_interval ........................ 1
|
| 14994 |
+
tensorboard_queue_size .......................... 1000
|
| 14995 |
+
test_data_path .................................. None
|
| 14996 |
+
test_mode ....................................... False
|
| 14997 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 14998 |
+
tiktoken_pattern ................................ None
|
| 14999 |
+
tiktoken_special_tokens ......................... None
|
| 15000 |
+
timing_log_level ................................ 0
|
| 15001 |
+
timing_log_option ............................... minmax
|
| 15002 |
+
titles_data_path ................................ None
|
| 15003 |
+
tokenizer_model ................................. None
|
| 15004 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 15005 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 15006 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 15007 |
+
tp_comm_bulk_dgrad .............................. True
|
| 15008 |
+
tp_comm_bulk_wgrad .............................. True
|
| 15009 |
+
tp_comm_overlap ................................. False
|
| 15010 |
+
tp_comm_overlap_ag .............................. True
|
| 15011 |
+
tp_comm_overlap_cfg ............................. None
|
| 15012 |
+
tp_comm_overlap_rs .............................. True
|
| 15013 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 15014 |
+
tp_comm_split_ag ................................ True
|
| 15015 |
+
tp_comm_split_rs ................................ True
|
| 15016 |
+
train_data_path ................................. None
|
| 15017 |
+
train_iters ..................................... 10
|
| 15018 |
+
train_samples ................................... None
|
| 15019 |
+
train_sync_interval ............................. None
|
| 15020 |
+
transformer_impl ................................ transformer_engine
|
| 15021 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 15022 |
+
untie_embeddings_and_output_weights ............. False
|
| 15023 |
+
use_checkpoint_args ............................. False
|
| 15024 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 15025 |
+
use_cpu_initialization .......................... None
|
| 15026 |
+
use_custom_fsdp ................................. False
|
| 15027 |
+
use_dist_ckpt ................................... True
|
| 15028 |
+
use_dist_ckpt_deprecated ........................ False
|
| 15029 |
+
use_distributed_optimizer ....................... False
|
| 15030 |
+
use_flash_attn .................................. False
|
| 15031 |
+
use_legacy_models ............................... False
|
| 15032 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 15033 |
+
use_one_sent_docs ............................... False
|
| 15034 |
+
use_persistent_ckpt_worker ...................... False
|
| 15035 |
+
use_precision_aware_optimizer ................... False
|
| 15036 |
+
use_pytorch_profiler ............................ False
|
| 15037 |
+
use_ring_exchange_p2p ........................... False
|
| 15038 |
+
use_rope_scaling ................................ False
|
| 15039 |
+
use_rotary_position_embeddings .................. False
|
| 15040 |
+
use_sharp ....................................... False
|
| 15041 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 15042 |
+
use_torch_fsdp2 ................................. False
|
| 15043 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 15044 |
+
use_tp_pp_dp_mapping ............................ False
|
| 15045 |
+
v_head_dim ...................................... 128
|
| 15046 |
+
valid_data_path ................................. None
|
| 15047 |
+
variable_seq_lengths ............................ False
|
| 15048 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 15049 |
+
vision_backbone_type ............................ vit
|
| 15050 |
+
vision_pretraining .............................. False
|
| 15051 |
+
vision_pretraining_type ......................... classify
|
| 15052 |
+
vocab_extra_ids ................................. 0
|
| 15053 |
+
vocab_file ...................................... vocab.json
|
| 15054 |
+
vocab_size ...................................... None
|
| 15055 |
+
wandb_exp_name ..................................
|
| 15056 |
+
wandb_project ...................................
|
| 15057 |
+
wandb_save_dir ..................................
|
| 15058 |
+
weight_decay .................................... 0.1
|
| 15059 |
+
weight_decay_incr_style ......................... constant
|
| 15060 |
+
wgrad_deferral_limit ............................ 0
|
| 15061 |
+
world_size ...................................... 8
|
| 15062 |
+
yaml_cfg ........................................ None
|
| 15063 |
+
-------------------- end of arguments ---------------------
|
| 15064 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 15065 |
+
> building GPT2BPETokenizer tokenizer ...
|
| 15066 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15067 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15068 |
+
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
|
| 15069 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15070 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 15071 |
+
> initializing torch distributed ...
|
| 15072 |
+
> initialized tensor model parallel with size 8
|
| 15073 |
+
> initialized pipeline model parallel with size 1
|
| 15074 |
+
> setting random seeds to 1234 ...
|
| 15075 |
+
> compiling dataset index builder ...
|
| 15076 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15077 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 15078 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 15079 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15080 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 15081 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15082 |
+
make: Nothing to be done for 'default'.
|
| 15083 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 15084 |
+
>>> done with dataset index builder. Compilation time: 0.040 seconds
|
| 15085 |
+
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
|
| 15086 |
+
> compiling and loading fused kernels ...
|
| 15087 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15088 |
+
>>> done with compiling and loading fused kernels. Compilation time: 2.541 seconds
|
| 15089 |
+
time to initialize megatron (seconds): 7.544
|
| 15090 |
+
[after megatron is initialized] datetime: 2025-06-21 21:34:06
|
| 15091 |
+
building GPT model ...
|
| 15092 |
+
>>> embedding
|
| 15093 |
+
>>> decoder
|
| 15094 |
+
>>> output_layer
|
| 15095 |
+
>>> embedding
|
| 15096 |
+
>>> decoder
|
| 15097 |
+
>>> output_layer
|
| 15098 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 338753024
|
| 15099 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 338753024
|
| 15100 |
+
>>> embedding
|
| 15101 |
+
>>> decoder
|
| 15102 |
+
>>> output_layer
|
| 15103 |
+
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 338753024
|
| 15104 |
+
>>> embedding
|
| 15105 |
+
>>> decoder
|
| 15106 |
+
>>> output_layer
|
| 15107 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 338753024
|
| 15108 |
+
>>> embedding
|
| 15109 |
+
>>> decoder
|
| 15110 |
+
>>> output_layer
|
| 15111 |
+
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 338753024
|
| 15112 |
+
>>> embedding
|
| 15113 |
+
>>> decoder
|
| 15114 |
+
>>> output_layer
|
| 15115 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 338753024
|
| 15116 |
+
>>> embedding
|
| 15117 |
+
>>> decoder
|
| 15118 |
+
>>> output_layer
|
| 15119 |
+
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 338753024
|
| 15120 |
+
>>> embedding
|
| 15121 |
+
>>> decoder
|
| 15122 |
+
>>> output_layer
|
| 15123 |
+
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 338753024
|
| 15124 |
+
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
|
| 15125 |
+
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
|
| 15126 |
+
Params for bucket 1 (338753024 elements, 338753024 padded size):
|
| 15127 |
+
module.decoder.layers.1.mlp.linear_fc1.bias
|
| 15128 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
|
| 15129 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
|
| 15130 |
+
module.embedding.word_embeddings.weight
|
| 15131 |
+
module.decoder.final_layernorm.weight
|
| 15132 |
+
module.decoder.layers.1.self_attention.linear_qkv.weight
|
| 15133 |
+
module.decoder.layers.1.self_attention.linear_proj.weight
|
| 15134 |
+
module.decoder.layers.0.mlp.linear_fc2.bias
|
| 15135 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
|
| 15136 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
|
| 15137 |
+
module.decoder.layers.0.self_attention.linear_proj.bias
|
| 15138 |
+
module.decoder.layers.1.mlp.linear_fc2.weight
|
| 15139 |
+
module.decoder.layers.1.self_attention.linear_proj.bias
|
| 15140 |
+
module.decoder.layers.0.mlp.linear_fc1.weight
|
| 15141 |
+
module.embedding.position_embeddings.weight
|
| 15142 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
|
| 15143 |
+
module.decoder.layers.0.self_attention.linear_qkv.weight
|
| 15144 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
|
| 15145 |
+
module.decoder.layers.1.self_attention.linear_qkv.bias
|
| 15146 |
+
module.decoder.layers.1.mlp.linear_fc1.weight
|
| 15147 |
+
module.decoder.layers.0.mlp.linear_fc2.weight
|
| 15148 |
+
module.decoder.layers.0.mlp.linear_fc1.bias
|
| 15149 |
+
module.decoder.layers.1.mlp.linear_fc2.bias
|
| 15150 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
|
| 15151 |
+
module.decoder.layers.0.self_attention.linear_qkv.bias
|
| 15152 |
+
module.decoder.final_layernorm.bias
|
| 15153 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
|
| 15154 |
+
module.decoder.layers.0.self_attention.linear_proj.weight
|
| 15155 |
+
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x1550a632e5a0>, config_logger_dir='')
|
| 15156 |
+
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
|
| 15157 |
+
(TP, PP, encoder TP, encoder PP) mismatch after resume ((8, 1, 0, 0) vs (4, 1, 0, 0) from checkpoint): RNG state will be ignored
|
| 15158 |
+
(TP, PP, encoder TP, encoder PP) mismatch after resume ((8, 1, 0, 0) vs (4, 1, 0, 0) from checkpoint): Rerun state will be ignored
|
| 15159 |
+
loading distributed checkpoint from gpt-checkpoint at iteration 10
|
| 15160 |
+
Running ctx_length=81920, TP_SIZE=8, CP_SIZE=1, BATCH_SIZE=1
|
| 15161 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 15162 |
+
--------------------------------
|
| 15163 |
+
CTX_LENGTH: 81920
|
| 15164 |
+
TP_SIZE: 8
|
| 15165 |
+
CP_SIZE: 1
|
| 15166 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 15167 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 15168 |
+
--------------------------------
|
| 15169 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 15170 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15171 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 15172 |
+
Number of virtual stages per pipeline stage: None
|
| 15173 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 15174 |
+
using torch.float16 for parameters ...
|
| 15175 |
+
------------------------ arguments ------------------------
|
| 15176 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 15177 |
+
account_for_loss_in_pipeline_split .............. False
|
| 15178 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 15179 |
+
adam_beta1 ...................................... 0.9
|
| 15180 |
+
adam_beta2 ...................................... 0.999
|
| 15181 |
+
adam_eps ........................................ 1e-08
|
| 15182 |
+
add_bias_linear ................................. True
|
| 15183 |
+
add_position_embedding .......................... True
|
| 15184 |
+
add_qkv_bias .................................... True
|
| 15185 |
+
adlr_autoresume ................................. False
|
| 15186 |
+
adlr_autoresume_interval ........................ 1000
|
| 15187 |
+
align_grad_reduce ............................... True
|
| 15188 |
+
align_param_gather .............................. False
|
| 15189 |
+
app_tag_run_name ................................ None
|
| 15190 |
+
app_tag_run_version ............................. 0.0.0
|
| 15191 |
+
apply_layernorm_1p .............................. False
|
| 15192 |
+
apply_query_key_layer_scaling ................... False
|
| 15193 |
+
apply_residual_connection_post_layernorm ........ False
|
| 15194 |
+
apply_rope_fusion ............................... False
|
| 15195 |
+
async_save ...................................... None
|
| 15196 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 15197 |
+
attention_backend ............................... AttnBackend.auto
|
| 15198 |
+
attention_dropout ............................... 0.1
|
| 15199 |
+
attention_softmax_in_fp32 ....................... False
|
| 15200 |
+
auto_detect_ckpt_format ......................... False
|
| 15201 |
+
barrier_with_L1_time ............................ True
|
| 15202 |
+
bert_binary_head ................................ True
|
| 15203 |
+
bert_embedder_type .............................. megatron
|
| 15204 |
+
bert_load ....................................... None
|
| 15205 |
+
bf16 ............................................ False
|
| 15206 |
+
bias_dropout_fusion ............................. True
|
| 15207 |
+
bias_gelu_fusion ................................ True
|
| 15208 |
+
bias_swiglu_fusion .............................. True
|
| 15209 |
+
biencoder_projection_dim ........................ 0
|
| 15210 |
+
biencoder_shared_query_context_model ............ False
|
| 15211 |
+
block_data_path ................................. None
|
| 15212 |
+
calc_ft_timeouts ................................ False
|
| 15213 |
+
calculate_per_token_loss ........................ False
|
| 15214 |
+
check_for_large_grads ........................... False
|
| 15215 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 15216 |
+
check_for_spiky_loss ............................ False
|
| 15217 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 15218 |
+
ckpt_assume_constant_structure .................. False
|
| 15219 |
+
ckpt_convert_format ............................. None
|
| 15220 |
+
ckpt_convert_save ............................... None
|
| 15221 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 15222 |
+
ckpt_format ..................................... torch_dist
|
| 15223 |
+
ckpt_fully_parallel_load ........................ False
|
| 15224 |
+
ckpt_fully_parallel_save ........................ True
|
| 15225 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 15226 |
+
ckpt_step ....................................... None
|
| 15227 |
+
classes_fraction ................................ 1.0
|
| 15228 |
+
clip_grad ....................................... 1.0
|
| 15229 |
+
clone_scatter_output_in_embedding ............... True
|
| 15230 |
+
config_logger_dir ...............................
|
| 15231 |
+
consumed_train_samples .......................... 0
|
| 15232 |
+
consumed_valid_samples .......................... 0
|
| 15233 |
+
context_parallel_size ........................... 1
|
| 15234 |
+
cp_comm_type .................................... ['p2p']
|
| 15235 |
+
create_attention_mask_in_dataloader ............. True
|
| 15236 |
+
cross_entropy_fusion_impl ....................... native
|
| 15237 |
+
cross_entropy_loss_fusion ....................... False
|
| 15238 |
+
cuda_graph_scope ................................ full
|
| 15239 |
+
cuda_graph_warmup_steps ......................... 3
|
| 15240 |
+
data_args_path .................................. None
|
| 15241 |
+
data_cache_path ................................. None
|
| 15242 |
+
data_parallel_random_init ....................... False
|
| 15243 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 15244 |
+
data_parallel_size .............................. 1
|
| 15245 |
+
data_path ....................................... None
|
| 15246 |
+
data_per_class_fraction ......................... 1.0
|
| 15247 |
+
data_sharding ................................... True
|
| 15248 |
+
dataloader_type ................................. single
|
| 15249 |
+
ddp_average_in_collective ....................... False
|
| 15250 |
+
ddp_bucket_size ................................. None
|
| 15251 |
+
ddp_num_buckets ................................. None
|
| 15252 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 15253 |
+
decoder_first_pipeline_num_layers ............... None
|
| 15254 |
+
decoder_last_pipeline_num_layers ................ None
|
| 15255 |
+
decoder_num_layers .............................. None
|
| 15256 |
+
decoder_seq_length .............................. None
|
| 15257 |
+
decoupled_lr .................................... None
|
| 15258 |
+
decoupled_min_lr ................................ None
|
| 15259 |
+
decrease_batch_size_if_needed ................... False
|
| 15260 |
+
defer_embedding_wgrad_compute ................... False
|
| 15261 |
+
deprecated_use_mcore_models ..................... False
|
| 15262 |
+
deterministic_mode .............................. False
|
| 15263 |
+
dino_bottleneck_size ............................ 256
|
| 15264 |
+
dino_freeze_last_layer .......................... 1
|
| 15265 |
+
dino_head_hidden_size ........................... 2048
|
| 15266 |
+
dino_local_crops_number ......................... 10
|
| 15267 |
+
dino_local_img_size ............................. 96
|
| 15268 |
+
dino_norm_last_layer ............................ False
|
| 15269 |
+
dino_teacher_temp ............................... 0.07
|
| 15270 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 15271 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 15272 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 15273 |
+
disable_mamba_mem_eff_path ...................... False
|
| 15274 |
+
disable_straggler_on_startup .................... False
|
| 15275 |
+
dist_ckpt_format_deprecated ..................... None
|
| 15276 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 15277 |
+
distribute_saved_activations .................... False
|
| 15278 |
+
distributed_backend ............................. nccl
|
| 15279 |
+
distributed_timeout_minutes ..................... 10
|
| 15280 |
+
embedding_path .................................. None
|
| 15281 |
+
empty_unused_memory_level ....................... 0
|
| 15282 |
+
enable_cuda_graph ............................... False
|
| 15283 |
+
enable_ft_package ............................... False
|
| 15284 |
+
enable_gloo_process_groups ...................... True
|
| 15285 |
+
enable_msc ...................................... True
|
| 15286 |
+
enable_one_logger ............................... True
|
| 15287 |
+
encoder_num_layers .............................. 2
|
| 15288 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 15289 |
+
encoder_seq_length .............................. 81920
|
| 15290 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 15291 |
+
end_weight_decay ................................ 0.1
|
| 15292 |
+
eod_mask_loss ................................... False
|
| 15293 |
+
error_injection_rate ............................ 0
|
| 15294 |
+
error_injection_type ............................ transient_error
|
| 15295 |
+
eval_interval ................................... 16
|
| 15296 |
+
eval_iters ...................................... 1
|
| 15297 |
+
evidence_data_path .............................. None
|
| 15298 |
+
exit_duration_in_mins ........................... None
|
| 15299 |
+
exit_interval ................................... None
|
| 15300 |
+
exit_on_missing_checkpoint ...................... False
|
| 15301 |
+
exit_signal_handler ............................. False
|
| 15302 |
+
exp_avg_dtype ................................... torch.float32
|
| 15303 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 15304 |
+
expert_model_parallel_size ...................... 1
|
| 15305 |
+
expert_tensor_parallel_size ..................... 8
|
| 15306 |
+
external_cuda_graph ............................. False
|
| 15307 |
+
ffn_hidden_size ................................. 16384
|
| 15308 |
+
finetune ........................................ False
|
| 15309 |
+
first_last_layers_bf16 .......................... False
|
| 15310 |
+
flash_decode .................................... False
|
| 15311 |
+
fp16 ............................................ True
|
| 15312 |
+
fp16_lm_cross_entropy ........................... False
|
| 15313 |
+
fp32_residual_connection ........................ False
|
| 15314 |
+
fp8 ............................................. None
|
| 15315 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 15316 |
+
fp8_amax_history_len ............................ 1
|
| 15317 |
+
fp8_interval .................................... 1
|
| 15318 |
+
fp8_margin ...................................... 0
|
| 15319 |
+
fp8_param_gather ................................ False
|
| 15320 |
+
fp8_recipe ...................................... delayed
|
| 15321 |
+
fp8_wgrad ....................................... True
|
| 15322 |
+
fsdp_double_buffer .............................. False
|
| 15323 |
+
global_batch_size ............................... 1
|
| 15324 |
+
grad_reduce_in_bf16 ............................. False
|
| 15325 |
+
gradient_accumulation_fusion .................... True
|
| 15326 |
+
gradient_reduce_div_fusion ...................... True
|
| 15327 |
+
group_query_attention ........................... True
|
| 15328 |
+
head_lr_mult .................................... 1.0
|
| 15329 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 15330 |
+
heterogeneous_layers_config_path ................ None
|
| 15331 |
+
hidden_dropout .................................. 0.1
|
| 15332 |
+
hidden_size ..................................... 4096
|
| 15333 |
+
hierarchical_context_parallel_sizes ............. None
|
| 15334 |
+
high_priority_stream_groups ..................... []
|
| 15335 |
+
hybrid_attention_ratio .......................... 0.0
|
| 15336 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 15337 |
+
hybrid_override_pattern ......................... None
|
| 15338 |
+
hysteresis ...................................... 2
|
| 15339 |
+
ict_head_size ................................... None
|
| 15340 |
+
ict_load ........................................ None
|
| 15341 |
+
img_h ........................................... 224
|
| 15342 |
+
img_w ........................................... 224
|
| 15343 |
+
indexer_batch_size .............................. 128
|
| 15344 |
+
indexer_log_interval ............................ 1000
|
| 15345 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 15346 |
+
inference_dynamic_batching ...................... False
|
| 15347 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 15348 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 15349 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 15350 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 15351 |
+
inference_dynamic_batching_max_requests_override None
|
| 15352 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 15353 |
+
inference_max_batch_size ........................ 8
|
| 15354 |
+
inference_max_seq_length ........................ 2560
|
| 15355 |
+
inference_rng_tracker ........................... False
|
| 15356 |
+
init_method_std ................................. 0.02
|
| 15357 |
+
init_method_xavier_uniform ...................... False
|
| 15358 |
+
init_model_with_meta_device ..................... False
|
| 15359 |
+
initial_loss_scale .............................. 4294967296
|
| 15360 |
+
inprocess_active_world_size ..................... 8
|
| 15361 |
+
inprocess_barrier_timeout ....................... 120
|
| 15362 |
+
inprocess_completion_timeout .................... 120
|
| 15363 |
+
inprocess_empty_cuda_cache ...................... False
|
| 15364 |
+
inprocess_granularity ........................... node
|
| 15365 |
+
inprocess_hard_timeout .......................... 90
|
| 15366 |
+
inprocess_heartbeat_interval .................... 30
|
| 15367 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 15368 |
+
inprocess_last_call_wait ........................ 1
|
| 15369 |
+
inprocess_max_iterations ........................ None
|
| 15370 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 15371 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 15372 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 15373 |
+
inprocess_restart ............................... False
|
| 15374 |
+
inprocess_soft_timeout .......................... 60
|
| 15375 |
+
inprocess_termination_grace_time ................ 1
|
| 15376 |
+
is_hybrid_model ................................. False
|
| 15377 |
+
iter_per_epoch .................................. 1250
|
| 15378 |
+
iterations_to_skip .............................. []
|
| 15379 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 15380 |
+
kv_channels ..................................... 64
|
| 15381 |
+
kv_lora_rank .................................... 32
|
| 15382 |
+
lazy_mpu_init ................................... None
|
| 15383 |
+
load ............................................ gpt-checkpoint
|
| 15384 |
+
load_model_opt_format ........................... False
|
| 15385 |
+
local_rank ...................................... 0
|
| 15386 |
+
log_interval .................................... 1
|
| 15387 |
+
log_loss_scale_to_tensorboard ................... True
|
| 15388 |
+
log_memory_to_tensorboard ....................... False
|
| 15389 |
+
log_num_zeros_in_grad ........................... False
|
| 15390 |
+
log_params_norm ................................. False
|
| 15391 |
+
log_progress .................................... False
|
| 15392 |
+
log_straggler ................................... False
|
| 15393 |
+
log_throughput .................................. False
|
| 15394 |
+
log_timers_to_tensorboard ....................... False
|
| 15395 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 15396 |
+
log_world_size_to_tensorboard ................... False
|
| 15397 |
+
logging_level ................................... 0
|
| 15398 |
+
loss_scale ...................................... None
|
| 15399 |
+
loss_scale_window ............................... 1000
|
| 15400 |
+
lr .............................................. 0.0005
|
| 15401 |
+
lr_decay_iters .................................. 150000
|
| 15402 |
+
lr_decay_samples ................................ None
|
| 15403 |
+
lr_decay_style .................................. cosine
|
| 15404 |
+
lr_warmup_fraction .............................. None
|
| 15405 |
+
lr_warmup_init .................................. 0.0
|
| 15406 |
+
lr_warmup_iters ................................. 2
|
| 15407 |
+
lr_warmup_samples ............................... 0
|
| 15408 |
+
lr_wsd_decay_iters .............................. None
|
| 15409 |
+
lr_wsd_decay_samples ............................ None
|
| 15410 |
+
lr_wsd_decay_style .............................. exponential
|
| 15411 |
+
main_grads_dtype ................................ torch.float32
|
| 15412 |
+
main_params_dtype ............................... torch.float32
|
| 15413 |
+
make_vocab_size_divisible_by .................... 128
|
| 15414 |
+
mamba_head_dim .................................. 64
|
| 15415 |
+
mamba_num_groups ................................ 8
|
| 15416 |
+
mamba_num_heads ................................. None
|
| 15417 |
+
mamba_state_dim ................................. 128
|
| 15418 |
+
manual_gc ....................................... False
|
| 15419 |
+
manual_gc_eval .................................. True
|
| 15420 |
+
manual_gc_interval .............................. 0
|
| 15421 |
+
mask_factor ..................................... 1.0
|
| 15422 |
+
mask_prob ....................................... 0.15
|
| 15423 |
+
mask_type ....................................... random
|
| 15424 |
+
masked_softmax_fusion ........................... True
|
| 15425 |
+
max_position_embeddings ......................... 81920
|
| 15426 |
+
max_tokens_to_oom ............................... 12000
|
| 15427 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 15428 |
+
merge_file ...................................... merges.txt
|
| 15429 |
+
micro_batch_size ................................ 1
|
| 15430 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 15431 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 15432 |
+
min_loss_scale .................................. 1.0
|
| 15433 |
+
min_lr .......................................... 0.0
|
| 15434 |
+
mlp_chunks_for_prefill .......................... 1
|
| 15435 |
+
mmap_bin_files .................................. True
|
| 15436 |
+
mock_data ....................................... True
|
| 15437 |
+
moe_apply_probs_on_input ........................ False
|
| 15438 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 15439 |
+
moe_enable_deepep ............................... False
|
| 15440 |
+
moe_expert_capacity_factor ...................... None
|
| 15441 |
+
moe_extended_tp ................................. False
|
| 15442 |
+
moe_ffn_hidden_size ............................. None
|
| 15443 |
+
moe_grouped_gemm ................................ False
|
| 15444 |
+
moe_input_jitter_eps ............................ None
|
| 15445 |
+
moe_layer_freq .................................. 1
|
| 15446 |
+
moe_layer_recompute ............................. False
|
| 15447 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 15448 |
+
moe_per_layer_logging ........................... False
|
| 15449 |
+
moe_permute_fusion .............................. False
|
| 15450 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 15451 |
+
moe_router_dtype ................................ None
|
| 15452 |
+
moe_router_enable_expert_bias ................... False
|
| 15453 |
+
moe_router_force_load_balancing ................. False
|
| 15454 |
+
moe_router_group_topk ........................... None
|
| 15455 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 15456 |
+
moe_router_num_groups ........................... None
|
| 15457 |
+
moe_router_padding_for_fp8 ...................... False
|
| 15458 |
+
moe_router_pre_softmax .......................... False
|
| 15459 |
+
moe_router_score_function ....................... softmax
|
| 15460 |
+
moe_router_topk ................................. 2
|
| 15461 |
+
moe_router_topk_scaling_factor .................. None
|
| 15462 |
+
moe_shared_expert_intermediate_size ............. None
|
| 15463 |
+
moe_shared_expert_overlap ....................... False
|
| 15464 |
+
moe_token_dispatcher_type ....................... allgather
|
| 15465 |
+
moe_token_drop_policy ........................... probs
|
| 15466 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 15467 |
+
moe_use_upcycling ............................... False
|
| 15468 |
+
moe_z_loss_coeff ................................ None
|
| 15469 |
+
mrope_section ................................... None
|
| 15470 |
+
mscale .......................................... 1.0
|
| 15471 |
+
mscale_all_dim .................................. 1.0
|
| 15472 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 15473 |
+
mtp_num_layers .................................. None
|
| 15474 |
+
multi_latent_attention .......................... False
|
| 15475 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 15476 |
+
nccl_communicator_config_path ................... None
|
| 15477 |
+
nccl_ub ......................................... False
|
| 15478 |
+
no_load_optim ................................... None
|
| 15479 |
+
no_load_rng ..................................... None
|
| 15480 |
+
no_persist_layer_norm ........................... False
|
| 15481 |
+
no_rope_freq .................................... None
|
| 15482 |
+
no_save_optim ................................... None
|
| 15483 |
+
no_save_rng ..................................... None
|
| 15484 |
+
non_persistent_ckpt_type ........................ None
|
| 15485 |
+
non_persistent_global_ckpt_dir .................. None
|
| 15486 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 15487 |
+
non_persistent_local_ckpt_dir ................... None
|
| 15488 |
+
non_persistent_save_interval .................... None
|
| 15489 |
+
norm_epsilon .................................... 1e-05
|
| 15490 |
+
normalization ................................... LayerNorm
|
| 15491 |
+
num_attention_heads ............................. 64
|
| 15492 |
+
num_channels .................................... 3
|
| 15493 |
+
num_classes ..................................... 1000
|
| 15494 |
+
num_dataset_builder_threads ..................... 1
|
| 15495 |
+
num_distributed_optimizer_instances ............. 1
|
| 15496 |
+
num_experts ..................................... None
|
| 15497 |
+
num_layers ...................................... 2
|
| 15498 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 15499 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 15500 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 15501 |
+
num_query_groups ................................ 16
|
| 15502 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 15503 |
+
num_workers ..................................... 2
|
| 15504 |
+
object_storage_cache_path ....................... None
|
| 15505 |
+
one_logger_async ................................ False
|
| 15506 |
+
one_logger_project .............................. megatron-lm
|
| 15507 |
+
one_logger_run_name ............................. None
|
| 15508 |
+
onnx_safe ....................................... None
|
| 15509 |
+
openai_gelu ..................................... False
|
| 15510 |
+
optimizer ....................................... adam
|
| 15511 |
+
optimizer_cpu_offload ........................... False
|
| 15512 |
+
optimizer_offload_fraction ...................... 1.0
|
| 15513 |
+
output_bert_embeddings .......................... False
|
| 15514 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 15515 |
+
overlap_grad_reduce ............................. False
|
| 15516 |
+
overlap_p2p_comm ................................ False
|
| 15517 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 15518 |
+
overlap_param_gather ............................ False
|
| 15519 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 15520 |
+
override_opt_param_scheduler .................... False
|
| 15521 |
+
params_dtype .................................... torch.float16
|
| 15522 |
+
patch_dim ....................................... 16
|
| 15523 |
+
per_split_data_args_path ........................ None
|
| 15524 |
+
perform_initialization .......................... True
|
| 15525 |
+
pin_cpu_grads ................................... True
|
| 15526 |
+
pin_cpu_params .................................. True
|
| 15527 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 15528 |
+
pipeline_model_parallel_size .................... 1
|
| 15529 |
+
pipeline_model_parallel_split_rank .............. None
|
| 15530 |
+
position_embedding_type ......................... learned_absolute
|
| 15531 |
+
pretrained_checkpoint ........................... None
|
| 15532 |
+
profile ......................................... False
|
| 15533 |
+
profile_ranks ................................... [0]
|
| 15534 |
+
profile_step_end ................................ 12
|
| 15535 |
+
profile_step_start .............................. 10
|
| 15536 |
+
q_lora_rank ..................................... None
|
| 15537 |
+
qk_head_dim ..................................... 128
|
| 15538 |
+
qk_l2_norm ...................................... False
|
| 15539 |
+
qk_layernorm .................................... False
|
| 15540 |
+
qk_pos_emb_head_dim ............................. 64
|
| 15541 |
+
query_in_block_prob ............................. 0.1
|
| 15542 |
+
rampup_batch_size ............................... None
|
| 15543 |
+
rank ............................................ 0
|
| 15544 |
+
recompute_granularity ........................... None
|
| 15545 |
+
recompute_method ................................ None
|
| 15546 |
+
recompute_modules ............................... None
|
| 15547 |
+
recompute_num_layers ............................ None
|
| 15548 |
+
record_memory_history ........................... False
|
| 15549 |
+
relative_attention_max_distance ................. 128
|
| 15550 |
+
relative_attention_num_buckets .................. 32
|
| 15551 |
+
replication ..................................... False
|
| 15552 |
+
replication_factor .............................. 2
|
| 15553 |
+
replication_jump ................................ None
|
| 15554 |
+
rerun_mode ...................................... disabled
|
| 15555 |
+
reset_attention_mask ............................ False
|
| 15556 |
+
reset_position_ids .............................. False
|
| 15557 |
+
result_rejected_tracker_filename ................ None
|
| 15558 |
+
retriever_report_topk_accuracies ................ []
|
| 15559 |
+
retriever_score_scaling ......................... False
|
| 15560 |
+
retriever_seq_length ............................ 256
|
| 15561 |
+
retro_add_retriever ............................. False
|
| 15562 |
+
retro_attention_gate ............................ 1
|
| 15563 |
+
retro_cyclic_train_iters ........................ None
|
| 15564 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 15565 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 15566 |
+
retro_encoder_layers ............................ 2
|
| 15567 |
+
retro_num_neighbors ............................. 2
|
| 15568 |
+
retro_num_retrieved_chunks ...................... 2
|
| 15569 |
+
retro_project_dir ............................... None
|
| 15570 |
+
retro_verify_neighbor_count ..................... True
|
| 15571 |
+
rope_scaling_factor ............................. 8.0
|
| 15572 |
+
rotary_base ..................................... 10000
|
| 15573 |
+
rotary_interleaved .............................. False
|
| 15574 |
+
rotary_percent .................................. 1.0
|
| 15575 |
+
rotary_scaling_factor ........................... 1.0
|
| 15576 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 15577 |
+
run_workload_inspector_server ................... False
|
| 15578 |
+
sample_rate ..................................... 1.0
|
| 15579 |
+
save ............................................ gpt-checkpoint
|
| 15580 |
+
save_interval ................................... 16
|
| 15581 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 15582 |
+
seed ............................................ 1234
|
| 15583 |
+
seq_length ...................................... 81920
|
| 15584 |
+
sequence_parallel ............................... False
|
| 15585 |
+
sgd_momentum .................................... 0.9
|
| 15586 |
+
short_seq_prob .................................. 0.1
|
| 15587 |
+
skip_train ...................................... False
|
| 15588 |
+
skipped_train_samples ........................... 0
|
| 15589 |
+
spec ............................................ None
|
| 15590 |
+
split ........................................... None
|
| 15591 |
+
squared_relu .................................... False
|
| 15592 |
+
start_weight_decay .............................. 0.1
|
| 15593 |
+
straggler_ctrlr_port ............................ 65535
|
| 15594 |
+
straggler_minmax_count .......................... 1
|
| 15595 |
+
suggested_communication_unit_size ............... None
|
| 15596 |
+
swiglu .......................................... False
|
| 15597 |
+
swin_backbone_type .............................. tiny
|
| 15598 |
+
symmetric_ar_type ............................... None
|
| 15599 |
+
te_rng_tracker .................................. False
|
| 15600 |
+
tensor_model_parallel_size ...................... 8
|
| 15601 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 15602 |
+
tensorboard_log_interval ........................ 1
|
| 15603 |
+
tensorboard_queue_size .......................... 1000
|
| 15604 |
+
test_data_path .................................. None
|
| 15605 |
+
test_mode ....................................... False
|
| 15606 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 15607 |
+
tiktoken_pattern ................................ None
|
| 15608 |
+
tiktoken_special_tokens ......................... None
|
| 15609 |
+
timing_log_level ................................ 0
|
| 15610 |
+
timing_log_option ............................... minmax
|
| 15611 |
+
titles_data_path ................................ None
|
| 15612 |
+
tokenizer_model ................................. None
|
| 15613 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 15614 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 15615 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 15616 |
+
tp_comm_bulk_dgrad .............................. True
|
| 15617 |
+
tp_comm_bulk_wgrad .............................. True
|
| 15618 |
+
tp_comm_overlap ................................. False
|
| 15619 |
+
tp_comm_overlap_ag .............................. True
|
| 15620 |
+
tp_comm_overlap_cfg ............................. None
|
| 15621 |
+
tp_comm_overlap_rs .............................. True
|
| 15622 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 15623 |
+
tp_comm_split_ag ................................ True
|
| 15624 |
+
tp_comm_split_rs ................................ True
|
| 15625 |
+
train_data_path ................................. None
|
| 15626 |
+
train_iters ..................................... 10
|
| 15627 |
+
train_samples ................................... None
|
| 15628 |
+
train_sync_interval ............................. None
|
| 15629 |
+
transformer_impl ................................ transformer_engine
|
| 15630 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 15631 |
+
untie_embeddings_and_output_weights ............. False
|
| 15632 |
+
use_checkpoint_args ............................. False
|
| 15633 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 15634 |
+
use_cpu_initialization .......................... None
|
| 15635 |
+
use_custom_fsdp ................................. False
|
| 15636 |
+
use_dist_ckpt ................................... True
|
| 15637 |
+
use_dist_ckpt_deprecated ........................ False
|
| 15638 |
+
use_distributed_optimizer ....................... False
|
| 15639 |
+
use_flash_attn .................................. False
|
| 15640 |
+
use_legacy_models ............................... False
|
| 15641 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 15642 |
+
use_one_sent_docs ............................... False
|
| 15643 |
+
use_persistent_ckpt_worker ...................... False
|
| 15644 |
+
use_precision_aware_optimizer ................... False
|
| 15645 |
+
use_pytorch_profiler ............................ False
|
| 15646 |
+
use_ring_exchange_p2p ........................... False
|
| 15647 |
+
use_rope_scaling ................................ False
|
| 15648 |
+
use_rotary_position_embeddings .................. False
|
| 15649 |
+
use_sharp ....................................... False
|
| 15650 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 15651 |
+
use_torch_fsdp2 ................................. False
|
| 15652 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 15653 |
+
use_tp_pp_dp_mapping ............................ False
|
| 15654 |
+
v_head_dim ...................................... 128
|
| 15655 |
+
valid_data_path ................................. None
|
| 15656 |
+
variable_seq_lengths ............................ False
|
| 15657 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 15658 |
+
vision_backbone_type ............................ vit
|
| 15659 |
+
vision_pretraining .............................. False
|
| 15660 |
+
vision_pretraining_type ......................... classify
|
| 15661 |
+
vocab_extra_ids ................................. 0
|
| 15662 |
+
vocab_file ...................................... vocab.json
|
| 15663 |
+
vocab_size ...................................... None
|
| 15664 |
+
wandb_exp_name ..................................
|
| 15665 |
+
wandb_project ...................................
|
| 15666 |
+
wandb_save_dir ..................................
|
| 15667 |
+
weight_decay .................................... 0.1
|
| 15668 |
+
weight_decay_incr_style ......................... constant
|
| 15669 |
+
wgrad_deferral_limit ............................ 0
|
| 15670 |
+
world_size ...................................... 8
|
| 15671 |
+
yaml_cfg ........................................ None
|
| 15672 |
+
-------------------- end of arguments ---------------------
|
| 15673 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 15674 |
+
> building GPT2BPETokenizer tokenizer ...
|
| 15675 |
+
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
|
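(Note, not part of the original log: a minimal sketch of the vocab-padding arithmetic behind the line above, assuming Megatron's usual rule of padding to a multiple of make_vocab_size_divisible_by * tensor_model_parallel_size, using the argument values printed earlier.)

    import math

    orig_vocab_size = 50257   # GPT2BPETokenizer vocab
    divisible_by = 128        # make_vocab_size_divisible_by
    tp_size = 8               # tensor_model_parallel_size

    multiple = divisible_by * tp_size                           # 1024
    padded = math.ceil(orig_vocab_size / multiple) * multiple   # 51200
    print(padded, padded - orig_vocab_size)                     # 51200 943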
| 15676 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15677 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 15678 |
+
> initializing torch distributed ...
|
| 15679 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15680 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15681 |
+
> initialized tensor model parallel with size 8
|
| 15682 |
+
> initialized pipeline model parallel with size 1
|
| 15683 |
+
> setting random seeds to 1234 ...
|
| 15684 |
+
> compiling dataset index builder ...
|
| 15685 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 15686 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15687 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15688 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15689 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 15690 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 15691 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 15692 |
+
make: Nothing to be done for 'default'.
|
| 15693 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 15694 |
+
>>> done with dataset index builder. Compilation time: 0.045 seconds
|
| 15695 |
+
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
|
| 15696 |
+
> compiling and loading fused kernels ...
|
| 15697 |
+
>>> done with compiling and loading fused kernels. Compilation time: 2.505 seconds
|
| 15698 |
+
time to initialize megatron (seconds): 7.777
|
| 15699 |
+
[after megatron is initialized] datetime: 2025-06-21 21:34:49
|
| 15700 |
+
building GPT model ...
|
| 15701 |
+
>>> embedding
|
| 15702 |
+
>>> decoder
|
| 15703 |
+
>>> output_layer
|
| 15704 |
+
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 405861888
|
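(Note, not part of the original log: a back-of-envelope sketch reproducing the per-rank parameter count above from the arguments printed earlier; the sharding breakdown below is an assumption about how Megatron splits each layer across tensor-parallel ranks, not something stated in the log.)

    tp = 8
    hidden, ffn, layers = 4096, 16384, 2
    vocab, seq = 51200, 81920          # padded vocab, max_position_embeddings
    heads, groups, kv_ch = 64, 16, 64  # attention heads, query groups, kv_channels

    word_emb = (vocab // tp) * hidden                 # column-parallel word embedding
    pos_emb = seq * hidden                            # learned absolute positions, replicated
    qkv_out = (heads + 2 * groups) * kv_ch // tp      # fused QKV output width per rank
    per_layer = (
        2 * hidden                                    # input LayerNorm (weight + bias)
        + hidden * qkv_out + qkv_out                  # QKV weight + bias
        + (hidden // tp) * hidden + hidden            # attention proj weight + bias
        + 2 * hidden                                  # pre-MLP LayerNorm
        + hidden * (ffn // tp) + ffn // tp            # fc1 weight + bias
        + (ffn // tp) * hidden + hidden               # fc2 weight + bias
    )
    total = word_emb + pos_emb + layers * per_layer + 2 * hidden  # + final LayerNorm
    print(total)                                      # 405861888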
| 15705 |
+
>>> embedding
|
| 15706 |
+
>>> decoder
|
| 15707 |
+
>>> output_layer
|
| 15708 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 405861888
|
| 15709 |
+
>>> embedding
|
| 15710 |
+
>>> decoder
|
| 15711 |
+
>>> output_layer
|
| 15712 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 405861888
|
| 15713 |
+
>>> embedding
|
| 15714 |
+
>>> decoder
|
| 15715 |
+
>>> output_layer
|
| 15716 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 405861888
|
| 15717 |
+
>>> embedding
|
| 15718 |
+
>>> decoder
|
| 15719 |
+
>>> output_layer
|
| 15720 |
+
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 405861888
|
| 15721 |
+
>>> embedding
|
| 15722 |
+
>>> decoder
|
| 15723 |
+
>>> output_layer
|
| 15724 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 405861888
|
| 15725 |
+
>>> embedding
|
| 15726 |
+
>>> decoder
|
| 15727 |
+
>>> output_layer
|
| 15728 |
+
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 405861888
|
| 15729 |
+
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
|
| 15730 |
+
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
|
| 15731 |
+
Params for bucket 1 (405861888 elements, 405861888 padded size):
|
| 15732 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
|
| 15733 |
+
module.decoder.layers.1.self_attention.linear_qkv.bias
|
| 15734 |
+
module.decoder.layers.0.self_attention.linear_proj.weight
|
| 15735 |
+
module.decoder.layers.1.mlp.linear_fc1.weight
|
| 15736 |
+
module.decoder.layers.0.mlp.linear_fc2.weight
|
| 15737 |
+
module.decoder.layers.0.mlp.linear_fc1.bias
|
| 15738 |
+
module.decoder.final_layernorm.bias
|
| 15739 |
+
module.decoder.layers.1.mlp.linear_fc2.bias
|
| 15740 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
|
| 15741 |
+
module.decoder.layers.0.self_attention.linear_qkv.bias
|
| 15742 |
+
module.decoder.layers.0.self_attention.linear_proj.bias
|
| 15743 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
|
| 15744 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
|
| 15745 |
+
module.embedding.position_embeddings.weight
|
| 15746 |
+
module.decoder.layers.1.mlp.linear_fc1.bias
|
| 15747 |
+
module.decoder.final_layernorm.weight
|
| 15748 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
|
| 15749 |
+
module.embedding.word_embeddings.weight
|
| 15750 |
+
module.decoder.layers.1.self_attention.linear_qkv.weight
|
| 15751 |
+
module.decoder.layers.1.self_attention.linear_proj.weight
|
| 15752 |
+
module.decoder.layers.0.mlp.linear_fc2.bias
|
| 15753 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
|
| 15754 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
|
| 15755 |
+
module.decoder.layers.1.mlp.linear_fc2.weight
|
| 15756 |
+
module.decoder.layers.1.self_attention.linear_proj.bias
|
| 15757 |
+
module.decoder.layers.0.mlp.linear_fc1.weight
|
| 15758 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
|
| 15759 |
+
module.decoder.layers.0.self_attention.linear_qkv.weight
|
| 15760 |
+
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x150e81b8e210>, config_logger_dir='')
|
| 15761 |
+
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
|
| 15762 |
+
>>> embedding
|
| 15763 |
+
>>> decoder
|
| 15764 |
+
>>> output_layer
|
| 15765 |
+
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 405861888
|
| 15766 |
+
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
|
| 15767 |
+
will not load any checkpoints and will start from random
|
| 15768 |
+
(min, max) time across ranks (ms):
|
| 15769 |
+
load-checkpoint ................................: (2.61, 3.66)
|
| 15770 |
+
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:54
|
| 15771 |
+
> building train, validation, and test datasets ...
|
| 15772 |
+
> datasets target sizes (minimum size):
|
| 15773 |
+
train: 10
|
| 15774 |
+
validation: 1
|
| 15775 |
+
test: 1
|
| 15776 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
|
| 15777 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
|
| 15778 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
|
| 15779 |
+
> building train, validation, and test datasets for GPT ...
|
| 15780 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=81920, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x150e82287bf0>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
|
| 15781 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
|
| 15782 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 15783 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 15784 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005018 seconds
|
| 15785 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 832
|
| 15786 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 15787 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
|
| 15788 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 15789 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 15790 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001677 seconds
|
| 15791 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 832
|
| 15792 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 15793 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
|
| 15794 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 15795 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 15796 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001414 seconds
|
| 15797 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 833
|
| 15798 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 15799 |
+
> finished creating GPT datasets ...
|
| 15800 |
+
[after dataloaders are built] datetime: 2025-06-21 21:34:54
|
| 15801 |
+
done with setup ...
|
| 15802 |
+
(min, max) time across ranks (ms):
|
| 15803 |
+
model-and-optimizer-setup ......................: (4753.15, 4771.80)
|
| 15804 |
+
train/valid/test-data-iterators-setup ..........: (22.97, 115.52)
|
| 15805 |
+
training ...
|
| 15806 |
+
Setting rerun_state_machine.current_iteration to 0...
|
| 15807 |
+
[before the start of training step] datetime: 2025-06-21 21:34:54
|
| 15808 |
+
batch tensor: tokens torch.Size([1, 81920])
|
| 15809 |
+
batch tensor: labels torch.Size([1, 81920])
|
| 15810 |
+
batch tensor: loss_mask torch.Size([1, 81920])
|
| 15811 |
+
batch tensor: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15812 |
+
batch tensor: position_ids torch.Size([1, 81920])
|
| 15813 |
+
batch tensor after cp: tokens torch.Size([1, 81920])
|
| 15814 |
+
batch tensor after cp: labels torch.Size([1, 81920])
|
| 15815 |
+
batch tensor after cp: loss_mask torch.Size([1, 81920])
|
| 15816 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15817 |
+
batch tensor after cp: position_ids torch.Size([1, 81920])
|
| 15818 |
+
batch tensor: tokens torch.Size([1, 81920])
|
| 15819 |
+
batch tensor: labels torch.Size([1, 81920])
|
| 15820 |
+
batch tensor: loss_mask torch.Size([1, 81920])
|
| 15821 |
+
batch tensor: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15822 |
+
batch tensor: position_ids torch.Size([1, 81920])
|
| 15823 |
+
batch tensor after cp: tokens torch.Size([1, 81920])
|
| 15824 |
+
batch tensor after cp: labels torch.Size([1, 81920])
|
| 15825 |
+
batch tensor after cp: loss_mask torch.Size([1, 81920])
|
| 15826 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15827 |
+
batch tensor after cp: position_ids torch.Size([1, 81920])
|
| 15828 |
+
batch tensor: tokens torch.Size([1, 81920])
|
| 15829 |
+
batch tensor: labels torch.Size([1, 81920])
|
| 15830 |
+
batch tensor: loss_mask torch.Size([1, 81920])
|
| 15831 |
+
batch tensor: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15832 |
+
batch tensor: position_ids torch.Size([1, 81920])
|
| 15833 |
+
batch tensor after cp: tokens torch.Size([1, 81920])
|
| 15834 |
+
batch tensor after cp: labels torch.Size([1, 81920])
|
| 15835 |
+
batch tensor after cp: loss_mask torch.Size([1, 81920])
|
| 15836 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15837 |
+
batch tensor after cp: position_ids torch.Size([1, 81920])
|
| 15838 |
+
batch tensor: tokens torch.Size([1, 81920])
|
| 15839 |
+
batch tensor: labels torch.Size([1, 81920])
|
| 15840 |
+
batch tensor: loss_mask torch.Size([1, 81920])
|
| 15841 |
+
batch tensor: attention_mask torch.Size([1, 1, 81920, 81920])
|
| 15842 |
+
batch tensor: position_ids torch.Size([1, 81920])
|
| 15843 |
+
batch tensor after cp: tokens torch.Size([1, 81920])
|
| 15844 |
+
batch tensor after cp: labels torch.Size([1, 81920])
|
attnserver.run_attnserver.slurm.sh.343208.err.log
CHANGED
@@ -2510,3 +2510,43 @@ W0621 21:33:53.354000 1714083 site-packages/torch/distributed/run.py:766]
W0621 21:33:53.354000 1714083 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:33:53.354000 1714083 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:33:53.354000 1714083 site-packages/torch/distributed/run.py:766] *****************************************
[rank0]:[W621 21:34:14.834919416 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 21:34:14.844636361 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 21:34:14.847352656 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 21:34:14.847397261 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 21:34:14.849657846 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 21:34:14.852460313 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 21:34:14.854328897 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 21:34:14.858873629 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
|
attnserver.run_attnserver.slurm.sh.343208.out.log
CHANGED
@@ -13370,3 +13370,737 @@ CHECKPOINT_PATH: gpt-checkpoint
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
--------------------------------
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
| 13373 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: Nonetensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 13374 |
+
Number of virtual stages per pipeline stage: None
|
| 13375 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 13376 |
+
using torch.float16 for parameters ...
|
| 13377 |
+
------------------------ arguments ------------------------
|
| 13378 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 13379 |
+
account_for_loss_in_pipeline_split .............. False
|
| 13380 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 13381 |
+
adam_beta1 ...................................... 0.9
|
| 13382 |
+
adam_beta2 ...................................... 0.999
|
| 13383 |
+
adam_eps ........................................ 1e-08
|
| 13384 |
+
add_bias_linear ................................. True
|
| 13385 |
+
add_position_embedding .......................... True
|
| 13386 |
+
add_qkv_bias .................................... True
|
| 13387 |
+
adlr_autoresume ................................. False
|
| 13388 |
+
adlr_autoresume_interval ........................ 1000
|
| 13389 |
+
align_grad_reduce ............................... True
|
| 13390 |
+
align_param_gather .............................. False
|
| 13391 |
+
app_tag_run_name ................................ None
|
| 13392 |
+
app_tag_run_version ............................. 0.0.0
|
| 13393 |
+
apply_layernorm_1p .............................. False
|
| 13394 |
+
apply_query_key_layer_scaling ................... False
|
| 13395 |
+
apply_residual_connection_post_layernorm ........ False
|
| 13396 |
+
apply_rope_fusion ............................... False
|
| 13397 |
+
async_save ...................................... None
|
| 13398 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 13399 |
+
attention_backend ............................... AttnBackend.auto
|
| 13400 |
+
attention_dropout ............................... 0.1
|
| 13401 |
+
attention_softmax_in_fp32 ....................... False
|
| 13402 |
+
auto_detect_ckpt_format ......................... False
|
| 13403 |
+
barrier_with_L1_time ............................ True
|
| 13404 |
+
bert_binary_head ................................ True
|
| 13405 |
+
bert_embedder_type .............................. megatron
|
| 13406 |
+
bert_load ....................................... None
|
| 13407 |
+
bf16 ............................................ False
|
| 13408 |
+
bias_dropout_fusion ............................. True
|
| 13409 |
+
bias_gelu_fusion ................................ True
|
| 13410 |
+
bias_swiglu_fusion .............................. True
|
| 13411 |
+
biencoder_projection_dim ........................ 0
|
| 13412 |
+
biencoder_shared_query_context_model ............ False
|
| 13413 |
+
block_data_path ................................. None
|
| 13414 |
+
calc_ft_timeouts ................................ False
|
| 13415 |
+
calculate_per_token_loss ........................ False
|
| 13416 |
+
check_for_large_grads ........................... False
|
| 13417 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 13418 |
+
check_for_spiky_loss ............................ False
|
| 13419 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 13420 |
+
ckpt_assume_constant_structure .................. False
|
| 13421 |
+
ckpt_convert_format ............................. None
|
| 13422 |
+
ckpt_convert_save ............................... None
|
| 13423 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 13424 |
+
ckpt_format ..................................... torch_dist
|
| 13425 |
+
ckpt_fully_parallel_load ........................ False
|
| 13426 |
+
ckpt_fully_parallel_save ........................ True
|
| 13427 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 13428 |
+
ckpt_step ....................................... None
|
| 13429 |
+
classes_fraction ................................ 1.0
|
| 13430 |
+
clip_grad ....................................... 1.0
|
| 13431 |
+
clone_scatter_output_in_embedding ............... True
|
| 13432 |
+
config_logger_dir ...............................
|
| 13433 |
+
consumed_train_samples .......................... 0
|
| 13434 |
+
consumed_valid_samples .......................... 0
|
| 13435 |
+
context_parallel_size ........................... 1
|
| 13436 |
+
cp_comm_type .................................... ['p2p']
|
| 13437 |
+
create_attention_mask_in_dataloader ............. True
|
| 13438 |
+
cross_entropy_fusion_impl ....................... native
|
| 13439 |
+
cross_entropy_loss_fusion ....................... False
|
| 13440 |
+
cuda_graph_scope ................................ full
|
| 13441 |
+
cuda_graph_warmup_steps ......................... 3
|
| 13442 |
+
data_args_path .................................. None
|
| 13443 |
+
data_cache_path ................................. None
|
| 13444 |
+
data_parallel_random_init ....................... False
|
| 13445 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 13446 |
+
data_parallel_size .............................. 1
|
| 13447 |
+
data_path ....................................... None
|
| 13448 |
+
data_per_class_fraction ......................... 1.0
|
| 13449 |
+
data_sharding ................................... True
|
| 13450 |
+
dataloader_type ................................. single
|
| 13451 |
+
ddp_average_in_collective ....................... False
|
| 13452 |
+
ddp_bucket_size ................................. None
|
| 13453 |
+
ddp_num_buckets ................................. None
|
| 13454 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 13455 |
+
decoder_first_pipeline_num_layers ............... None
|
| 13456 |
+
decoder_last_pipeline_num_layers ................ None
|
| 13457 |
+
decoder_num_layers .............................. None
|
| 13458 |
+
decoder_seq_length .............................. None
|
| 13459 |
+
decoupled_lr .................................... None
|
| 13460 |
+
decoupled_min_lr ................................ None
|
| 13461 |
+
decrease_batch_size_if_needed ................... False
|
| 13462 |
+
defer_embedding_wgrad_compute ................... False
|
| 13463 |
+
deprecated_use_mcore_models ..................... False
|
| 13464 |
+
deterministic_mode .............................. False
|
| 13465 |
+
dino_bottleneck_size ............................ 256
|
| 13466 |
+
dino_freeze_last_layer .......................... 1
|
| 13467 |
+
dino_head_hidden_size ........................... 2048
|
| 13468 |
+
dino_local_crops_number ......................... 10
|
| 13469 |
+
dino_local_img_size ............................. 96
|
| 13470 |
+
dino_norm_last_layer ............................ False
|
| 13471 |
+
dino_teacher_temp ............................... 0.07
|
| 13472 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 13473 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 13474 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 13475 |
+
disable_mamba_mem_eff_path ...................... False
|
| 13476 |
+
disable_straggler_on_startup .................... False
|
| 13477 |
+
dist_ckpt_format_deprecated ..................... None
|
| 13478 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 13479 |
+
distribute_saved_activations .................... False
|
| 13480 |
+
distributed_backend ............................. nccl
|
| 13481 |
+
distributed_timeout_minutes ..................... 10
|
| 13482 |
+
embedding_path .................................. None
|
| 13483 |
+
empty_unused_memory_level ....................... 0
|
| 13484 |
+
enable_cuda_graph ............................... False
|
| 13485 |
+
enable_ft_package ............................... False
|
| 13486 |
+
enable_gloo_process_groups ...................... True
|
| 13487 |
+
enable_msc ...................................... True
|
| 13488 |
+
enable_one_logger ............................... True
|
| 13489 |
+
encoder_num_layers .............................. 2
|
| 13490 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 13491 |
+
encoder_seq_length .............................. 49152
|
| 13492 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 13493 |
+
end_weight_decay ................................ 0.1
|
| 13494 |
+
eod_mask_loss ................................... False
|
| 13495 |
+
error_injection_rate ............................ 0
|
| 13496 |
+
error_injection_type ............................ transient_error
|
| 13497 |
+
eval_interval ................................... 16
|
| 13498 |
+
eval_iters ...................................... 1
|
| 13499 |
+
evidence_data_path .............................. None
|
| 13500 |
+
exit_duration_in_mins ........................... None
|
| 13501 |
+
exit_interval ................................... None
|
| 13502 |
+
exit_on_missing_checkpoint ...................... False
|
| 13503 |
+
exit_signal_handler ............................. False
|
| 13504 |
+
exp_avg_dtype ................................... torch.float32
|
| 13505 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 13506 |
+
expert_model_parallel_size ...................... 1
|
| 13507 |
+
expert_tensor_parallel_size ..................... 8
|
| 13508 |
+
external_cuda_graph ............................. False
|
| 13509 |
+
ffn_hidden_size ................................. 16384
|
| 13510 |
+
finetune ........................................ False
|
| 13511 |
+
first_last_layers_bf16 .......................... False
|
| 13512 |
+
flash_decode .................................... False
|
| 13513 |
+
fp16 ............................................ True
|
| 13514 |
+
fp16_lm_cross_entropy ........................... False
|
| 13515 |
+
fp32_residual_connection ........................ False
|
| 13516 |
+
fp8 ............................................. None
|
| 13517 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 13518 |
+
fp8_amax_history_len ............................ 1
|
| 13519 |
+
fp8_interval .................................... 1
|
| 13520 |
+
fp8_margin ...................................... 0
|
| 13521 |
+
fp8_param_gather ................................ False
|
| 13522 |
+
fp8_recipe ...................................... delayed
|
| 13523 |
+
fp8_wgrad ....................................... True
|
| 13524 |
+
fsdp_double_buffer .............................. False
|
| 13525 |
+
global_batch_size ............................... 1
|
| 13526 |
+
grad_reduce_in_bf16 ............................. False
|
| 13527 |
+
gradient_accumulation_fusion .................... True
|
| 13528 |
+
gradient_reduce_div_fusion ...................... True
|
| 13529 |
+
group_query_attention ........................... True
|
| 13530 |
+
head_lr_mult .................................... 1.0
|
| 13531 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 13532 |
+
heterogeneous_layers_config_path ................ None
|
| 13533 |
+
hidden_dropout .................................. 0.1
|
| 13534 |
+
hidden_size ..................................... 4096
|
| 13535 |
+
hierarchical_context_parallel_sizes ............. None
|
| 13536 |
+
high_priority_stream_groups ..................... []
|
| 13537 |
+
hybrid_attention_ratio .......................... 0.0
|
| 13538 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 13539 |
+
hybrid_override_pattern ......................... None
|
| 13540 |
+
hysteresis ...................................... 2
|
| 13541 |
+
ict_head_size ................................... None
|
| 13542 |
+
ict_load ........................................ None
|
| 13543 |
+
img_h ........................................... 224
|
| 13544 |
+
img_w ........................................... 224
|
| 13545 |
+
indexer_batch_size .............................. 128
|
| 13546 |
+
indexer_log_interval ............................ 1000
|
| 13547 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 13548 |
+
inference_dynamic_batching ...................... False
|
| 13549 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 13550 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 13551 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 13552 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 13553 |
+
inference_dynamic_batching_max_requests_override None
|
| 13554 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 13555 |
+
inference_max_batch_size ........................ 8
|
| 13556 |
+
inference_max_seq_length ........................ 2560
|
| 13557 |
+
inference_rng_tracker ........................... False
|
| 13558 |
+
init_method_std ................................. 0.02
|
| 13559 |
+
init_method_xavier_uniform ...................... False
|
| 13560 |
+
init_model_with_meta_device ..................... False
|
| 13561 |
+
initial_loss_scale .............................. 4294967296
|
| 13562 |
+
inprocess_active_world_size ..................... 8
|
| 13563 |
+
inprocess_barrier_timeout ....................... 120
|
| 13564 |
+
inprocess_completion_timeout .................... 120
|
| 13565 |
+
inprocess_empty_cuda_cache ...................... False
|
| 13566 |
+
inprocess_granularity ........................... node
|
| 13567 |
+
inprocess_hard_timeout .......................... 90
|
| 13568 |
+
inprocess_heartbeat_interval .................... 30
|
| 13569 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 13570 |
+
inprocess_last_call_wait ........................ 1
|
| 13571 |
+
inprocess_max_iterations ........................ None
|
| 13572 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 13573 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 13574 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 13575 |
+
inprocess_restart ............................... False
|
| 13576 |
+
inprocess_soft_timeout .......................... 60
|
| 13577 |
+
inprocess_termination_grace_time ................ 1
|
| 13578 |
+
is_hybrid_model ................................. False
|
| 13579 |
+
iter_per_epoch .................................. 1250
|
| 13580 |
+
iterations_to_skip .............................. []
|
| 13581 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 13582 |
+
kv_channels ..................................... 64
|
| 13583 |
+
kv_lora_rank .................................... 32
|
| 13584 |
+
lazy_mpu_init ................................... None
|
| 13585 |
+
load ............................................ gpt-checkpoint
|
| 13586 |
+
load_model_opt_format ........................... False
|
| 13587 |
+
local_rank ...................................... 0
|
| 13588 |
+
log_interval .................................... 1
|
| 13589 |
+
log_loss_scale_to_tensorboard ................... True
|
| 13590 |
+
log_memory_to_tensorboard ....................... False
|
| 13591 |
+
log_num_zeros_in_grad ........................... False
|
| 13592 |
+
log_params_norm ................................. False
|
| 13593 |
+
log_progress .................................... False
|
| 13594 |
+
log_straggler ................................... False
|
| 13595 |
+
log_throughput .................................. False
|
| 13596 |
+
log_timers_to_tensorboard ....................... False
|
| 13597 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 13598 |
+
log_world_size_to_tensorboard ................... False
|
| 13599 |
+
logging_level ................................... 0
|
| 13600 |
+
loss_scale ...................................... None
|
| 13601 |
+
loss_scale_window ............................... 1000
|
| 13602 |
+
lr .............................................. 0.0005
|
| 13603 |
+
lr_decay_iters .................................. 150000
|
| 13604 |
+
lr_decay_samples ................................ None
|
| 13605 |
+
lr_decay_style .................................. cosine
|
| 13606 |
+
lr_warmup_fraction .............................. None
|
| 13607 |
+
lr_warmup_init .................................. 0.0
|
| 13608 |
+
lr_warmup_iters ................................. 2
|
| 13609 |
+
lr_warmup_samples ............................... 0
|
| 13610 |
+
lr_wsd_decay_iters .............................. None
|
| 13611 |
+
lr_wsd_decay_samples ............................ None
|
| 13612 |
+
lr_wsd_decay_style .............................. exponential
|
| 13613 |
+
main_grads_dtype ................................ torch.float32
|
| 13614 |
+
main_params_dtype ............................... torch.float32
|
| 13615 |
+
make_vocab_size_divisible_by .................... 128
|
| 13616 |
+
mamba_head_dim .................................. 64
|
| 13617 |
+
mamba_num_groups ................................ 8
|
| 13618 |
+
mamba_num_heads ................................. None
|
| 13619 |
+
mamba_state_dim ................................. 128
|
| 13620 |
+
manual_gc ....................................... False
|
| 13621 |
+
manual_gc_eval .................................. True
|
| 13622 |
+
manual_gc_interval .............................. 0
|
| 13623 |
+
mask_factor ..................................... 1.0
|
| 13624 |
+
mask_prob ....................................... 0.15
|
| 13625 |
+
mask_type ....................................... random
|
| 13626 |
+
masked_softmax_fusion ........................... True
|
| 13627 |
+
max_position_embeddings ......................... 49152
|
| 13628 |
+
max_tokens_to_oom ............................... 12000
|
| 13629 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 13630 |
+
merge_file ...................................... merges.txt
|
| 13631 |
+
micro_batch_size ................................ 1
|
| 13632 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 13633 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 13634 |
+
min_loss_scale .................................. 1.0
|
| 13635 |
+
min_lr .......................................... 0.0
|
| 13636 |
+
mlp_chunks_for_prefill .......................... 1
|
| 13637 |
+
mmap_bin_files .................................. True
|
| 13638 |
+
mock_data ....................................... True
|
| 13639 |
+
moe_apply_probs_on_input ........................ False
|
| 13640 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 13641 |
+
moe_enable_deepep ............................... False
|
| 13642 |
+
moe_expert_capacity_factor ...................... None
|
| 13643 |
+
moe_extended_tp ................................. False
|
| 13644 |
+
moe_ffn_hidden_size ............................. None
|
| 13645 |
+
moe_grouped_gemm ................................ False
|
| 13646 |
+
moe_input_jitter_eps ............................ None
|
| 13647 |
+
moe_layer_freq .................................. 1
|
| 13648 |
+
moe_layer_recompute ............................. False
|
| 13649 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 13650 |
+
moe_per_layer_logging ........................... False
|
| 13651 |
+
moe_permute_fusion .............................. False
|
| 13652 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 13653 |
+
moe_router_dtype ................................ None
|
| 13654 |
+
moe_router_enable_expert_bias ................... False
|
| 13655 |
+
moe_router_force_load_balancing ................. False
|
| 13656 |
+
moe_router_group_topk ........................... None
|
| 13657 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 13658 |
+
moe_router_num_groups ........................... None
|
| 13659 |
+
moe_router_padding_for_fp8 ...................... False
|
| 13660 |
+
moe_router_pre_softmax .......................... False
|
| 13661 |
+
moe_router_score_function ....................... softmax
|
| 13662 |
+
moe_router_topk ................................. 2
|
| 13663 |
+
moe_router_topk_scaling_factor .................. None
|
| 13664 |
+
moe_shared_expert_intermediate_size ............. None
|
| 13665 |
+
moe_shared_expert_overlap ....................... False
|
| 13666 |
+
moe_token_dispatcher_type ....................... allgather
|
| 13667 |
+
moe_token_drop_policy ........................... probs
|
| 13668 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 13669 |
+
moe_use_upcycling ............................... False
|
| 13670 |
+
moe_z_loss_coeff ................................ None
|
| 13671 |
+
mrope_section ................................... None
|
| 13672 |
+
mscale .......................................... 1.0
|
| 13673 |
+
mscale_all_dim .................................. 1.0
|
| 13674 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 13675 |
+
mtp_num_layers .................................. None
|
| 13676 |
+
multi_latent_attention .......................... False
|
| 13677 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 13678 |
+
nccl_communicator_config_path ................... None
|
| 13679 |
+
nccl_ub ......................................... False
|
| 13680 |
+
no_load_optim ................................... None
|
| 13681 |
+
no_load_rng ..................................... None
|
| 13682 |
+
no_persist_layer_norm ........................... False
|
| 13683 |
+
no_rope_freq .................................... None
|
| 13684 |
+
no_save_optim ................................... None
|
| 13685 |
+
no_save_rng ..................................... None
|
| 13686 |
+
non_persistent_ckpt_type ........................ None
|
| 13687 |
+
non_persistent_global_ckpt_dir .................. None
|
| 13688 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 13689 |
+
non_persistent_local_ckpt_dir ................... None
|
| 13690 |
+
non_persistent_save_interval .................... None
|
| 13691 |
+
norm_epsilon .................................... 1e-05
|
| 13692 |
+
normalization ................................... LayerNorm
|
| 13693 |
+
num_attention_heads ............................. 64
|
| 13694 |
+
num_channels .................................... 3
|
| 13695 |
+
num_classes ..................................... 1000
|
| 13696 |
+
num_dataset_builder_threads ..................... 1
|
| 13697 |
+
num_distributed_optimizer_instances ............. 1
|
| 13698 |
+
num_experts ..................................... None
|
| 13699 |
+
num_layers ...................................... 2
|
| 13700 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 13701 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 13702 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 13703 |
+
num_query_groups ................................ 16
|
| 13704 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 13705 |
+
num_workers ..................................... 2
|
| 13706 |
+
object_storage_cache_path ....................... None
|
| 13707 |
+
one_logger_async ................................ False
|
| 13708 |
+
one_logger_project .............................. megatron-lm
|
| 13709 |
+
one_logger_run_name ............................. None
|
| 13710 |
+
onnx_safe ....................................... None
|
| 13711 |
+
openai_gelu ..................................... False
|
| 13712 |
+
optimizer ....................................... adam
|
| 13713 |
+
optimizer_cpu_offload ........................... False
|
| 13714 |
+
optimizer_offload_fraction ...................... 1.0
|
| 13715 |
+
output_bert_embeddings .......................... False
|
| 13716 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 13717 |
+
overlap_grad_reduce ............................. False
|
| 13718 |
+
overlap_p2p_comm ................................ False
|
| 13719 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 13720 |
+
overlap_param_gather ............................ False
|
| 13721 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 13722 |
+
override_opt_param_scheduler .................... False
|
| 13723 |
+
params_dtype .................................... torch.float16
|
| 13724 |
+
patch_dim ....................................... 16
|
| 13725 |
+
per_split_data_args_path ........................ None
|
| 13726 |
+
perform_initialization .......................... True
|
| 13727 |
+
pin_cpu_grads ................................... True
|
| 13728 |
+
pin_cpu_params .................................. True
|
| 13729 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 13730 |
+
pipeline_model_parallel_size .................... 1
|
| 13731 |
+
pipeline_model_parallel_split_rank .............. None
|
| 13732 |
+
position_embedding_type ......................... learned_absolute
|
| 13733 |
+
pretrained_checkpoint ........................... None
|
| 13734 |
+
profile ......................................... False
|
| 13735 |
+
profile_ranks ................................... [0]
|
| 13736 |
+
profile_step_end ................................ 12
|
| 13737 |
+
profile_step_start .............................. 10
|
| 13738 |
+
q_lora_rank ..................................... None
|
| 13739 |
+
qk_head_dim ..................................... 128
|
| 13740 |
+
qk_l2_norm ...................................... False
|
| 13741 |
+
qk_layernorm .................................... False
|
| 13742 |
+
qk_pos_emb_head_dim ............................. 64
|
| 13743 |
+
query_in_block_prob ............................. 0.1
|
| 13744 |
+
rampup_batch_size ............................... None
|
| 13745 |
+
rank ............................................ 0
|
| 13746 |
+
recompute_granularity ........................... None
|
| 13747 |
+
recompute_method ................................ None
|
| 13748 |
+
recompute_modules ............................... None
|
| 13749 |
+
recompute_num_layers ............................ None
|
| 13750 |
+
record_memory_history ........................... False
|
| 13751 |
+
relative_attention_max_distance ................. 128
|
| 13752 |
+
relative_attention_num_buckets .................. 32
|
| 13753 |
+
replication ..................................... False
|
| 13754 |
+
replication_factor .............................. 2
|
| 13755 |
+
replication_jump ................................ None
|
| 13756 |
+
rerun_mode ...................................... disabled
|
| 13757 |
+
reset_attention_mask ............................ False
|
| 13758 |
+
reset_position_ids .............................. False
|
| 13759 |
+
result_rejected_tracker_filename ................ None
|
| 13760 |
+
retriever_report_topk_accuracies ................ []
|
| 13761 |
+
retriever_score_scaling ......................... False
|
| 13762 |
+
retriever_seq_length ............................ 256
|
| 13763 |
+
retro_add_retriever ............................. False
|
| 13764 |
+
retro_attention_gate ............................ 1
|
| 13765 |
+
retro_cyclic_train_iters ........................ None
|
| 13766 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 13767 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 13768 |
+
retro_encoder_layers ............................ 2
|
| 13769 |
+
retro_num_neighbors ............................. 2
|
| 13770 |
+
retro_num_retrieved_chunks ...................... 2
|
| 13771 |
+
retro_project_dir ............................... None
|
| 13772 |
+
retro_verify_neighbor_count ..................... True
|
| 13773 |
+
rope_scaling_factor ............................. 8.0
|
| 13774 |
+
rotary_base ..................................... 10000
|
| 13775 |
+
rotary_interleaved .............................. False
|
| 13776 |
+
rotary_percent .................................. 1.0
|
| 13777 |
+
rotary_scaling_factor ........................... 1.0
|
| 13778 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 13779 |
+
run_workload_inspector_server ................... False
|
| 13780 |
+
sample_rate ..................................... 1.0
|
| 13781 |
+
save ............................................ gpt-checkpoint
|
| 13782 |
+
save_interval ................................... 16
|
| 13783 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 13784 |
+
seed ............................................ 1234
|
| 13785 |
+
seq_length ...................................... 49152
|
| 13786 |
+
sequence_parallel ............................... False
|
| 13787 |
+
sgd_momentum .................................... 0.9
|
| 13788 |
+
short_seq_prob .................................. 0.1
|
| 13789 |
+
skip_train ...................................... False
|
| 13790 |
+
skipped_train_samples ........................... 0
|
| 13791 |
+
spec ............................................ None
|
| 13792 |
+
split ........................................... None
|
| 13793 |
+
squared_relu .................................... False
|
| 13794 |
+
start_weight_decay .............................. 0.1
|
| 13795 |
+
straggler_ctrlr_port ............................ 65535
|
| 13796 |
+
straggler_minmax_count .......................... 1
|
| 13797 |
+
suggested_communication_unit_size ............... None
|
| 13798 |
+
swiglu .......................................... False
|
| 13799 |
+
swin_backbone_type .............................. tiny
|
| 13800 |
+
symmetric_ar_type ............................... None
|
| 13801 |
+
te_rng_tracker .................................. False
|
| 13802 |
+
tensor_model_parallel_size ...................... 8
|
| 13803 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 13804 |
+
tensorboard_log_interval ........................ 1
|
| 13805 |
+
tensorboard_queue_size .......................... 1000
|
| 13806 |
+
test_data_path .................................. None
|
| 13807 |
+
test_mode ....................................... False
|
| 13808 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 13809 |
+
tiktoken_pattern ................................ None
|
| 13810 |
+
tiktoken_special_tokens ......................... None
|
| 13811 |
+
timing_log_level ................................ 0
|
| 13812 |
+
timing_log_option ............................... minmax
|
| 13813 |
+
titles_data_path ................................ None
|
| 13814 |
+
tokenizer_model ................................. None
|
| 13815 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 13816 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 13817 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 13818 |
+
tp_comm_bulk_dgrad .............................. True
|
| 13819 |
+
tp_comm_bulk_wgrad .............................. True
|
| 13820 |
+
tp_comm_overlap ................................. False
|
| 13821 |
+
tp_comm_overlap_ag .............................. True
|
| 13822 |
+
tp_comm_overlap_cfg ............................. None
|
| 13823 |
+
tp_comm_overlap_rs .............................. True
|
| 13824 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 13825 |
+
tp_comm_split_ag ................................ True
|
| 13826 |
+
tp_comm_split_rs ................................ True
|
| 13827 |
+
train_data_path ................................. None
|
| 13828 |
+
train_iters ..................................... 10
|
| 13829 |
+
train_samples ................................... None
|
| 13830 |
+
train_sync_interval ............................. None
|
| 13831 |
+
transformer_impl ................................ transformer_engine
|
| 13832 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 13833 |
+
untie_embeddings_and_output_weights ............. False
|
| 13834 |
+
use_checkpoint_args ............................. False
|
| 13835 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 13836 |
+
use_cpu_initialization .......................... None
|
| 13837 |
+
use_custom_fsdp ................................. False
|
| 13838 |
+
use_dist_ckpt ................................... True
|
| 13839 |
+
use_dist_ckpt_deprecated ........................ False
|
| 13840 |
+
use_distributed_optimizer ....................... False
|
| 13841 |
+
use_flash_attn .................................. False
|
| 13842 |
+
use_legacy_models ............................... False
|
| 13843 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 13844 |
+
use_one_sent_docs ............................... False
|
| 13845 |
+
use_persistent_ckpt_worker ...................... False
|
| 13846 |
+
use_precision_aware_optimizer ................... False
|
| 13847 |
+
use_pytorch_profiler ............................ False
|
| 13848 |
+
use_ring_exchange_p2p ........................... False
|
| 13849 |
+
use_rope_scaling ................................ False
|
| 13850 |
+
use_rotary_position_embeddings .................. False
|
| 13851 |
+
use_sharp ....................................... False
|
| 13852 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 13853 |
+
use_torch_fsdp2 ................................. False
|
| 13854 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 13855 |
+
use_tp_pp_dp_mapping ............................ False
|
| 13856 |
+
v_head_dim ...................................... 128
|
| 13857 |
+
valid_data_path ................................. None
|
| 13858 |
+
variable_seq_lengths ............................ False
|
| 13859 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 13860 |
+
vision_backbone_type ............................ vit
|
| 13861 |
+
vision_pretraining .............................. False
|
| 13862 |
+
vision_pretraining_type ......................... classify
|
| 13863 |
+
vocab_extra_ids ................................. 0
|
| 13864 |
+
vocab_file ...................................... vocab.json
|
| 13865 |
+
vocab_size ...................................... None
|
| 13866 |
+
wandb_exp_name ..................................
|
| 13867 |
+
wandb_project ...................................
|
| 13868 |
+
wandb_save_dir ..................................
|
| 13869 |
+
weight_decay .................................... 0.1
|
| 13870 |
+
weight_decay_incr_style ......................... constant
|
| 13871 |
+
wgrad_deferral_limit ............................ 0
|
| 13872 |
+
world_size ...................................... 8
|
| 13873 |
+
yaml_cfg ........................................ None
|
| 13874 |
+
-------------------- end of arguments ---------------------
|
| 13875 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 13876 |
+
> building GPT2BPETokenizer tokenizer ...
|
| 13877 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13878 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 13879 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 13880 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13881 |
+
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
|
| 13882 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13883 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 13884 |
+
> initializing torch distributed ...
|
| 13885 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13886 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13887 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13888 |
+
> initialized tensor model parallel with size 8
|
| 13889 |
+
> initialized pipeline model parallel with size 1
|
| 13890 |
+
> setting random seeds to 1234 ...
|
| 13891 |
+
> compiling dataset index builder ...
|
| 13892 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13893 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 13894 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 13895 |
+
make: Nothing to be done for 'default'.
|
| 13896 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 13897 |
+
>>> done with dataset index builder. Compilation time: 0.042 seconds
|
| 13898 |
+
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
|
| 13899 |
+
> compiling and loading fused kernels ...
|
| 13900 |
+
>>> done with compiling and loading fused kernels. Compilation time: 2.666 seconds
|
| 13901 |
+
time to initialize megatron (seconds): 7.383
|
| 13902 |
+
[after megatron is initialized] datetime: 2025-06-21 21:34:21
|
| 13903 |
+
building GPT model ...
|
| 13904 |
+
>>> embedding
|
| 13905 |
+
>>> decoder
|
| 13906 |
+
>>> output_layer
|
| 13907 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 271644160
|
| 13908 |
+
>>> embedding
|
| 13909 |
+
>>> decoder
|
| 13910 |
+
>>> output_layer
|
| 13911 |
+
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 271644160
|
| 13912 |
+
>>> embedding
|
| 13913 |
+
>>> decoder
|
| 13914 |
+
>>> output_layer
|
| 13915 |
+
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 271644160
|
| 13916 |
+
>>> embedding
|
| 13917 |
+
>>> decoder
|
| 13918 |
+
>>> output_layer
|
| 13919 |
+
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 271644160
|
| 13920 |
+
>>> embedding
|
| 13921 |
+
>>> decoder
|
| 13922 |
+
>>> output_layer
|
| 13923 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 271644160
|
| 13924 |
+
>>> embedding
|
| 13925 |
+
>>> decoder
|
| 13926 |
+
>>> output_layer
|
| 13927 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 271644160
|
| 13928 |
+
>>> embedding
|
| 13929 |
+
>>> decoder
|
| 13930 |
+
>>> output_layer
|
| 13931 |
+
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 271644160
|
| 13932 |
+
>>> embedding
|
| 13933 |
+
>>> decoder
|
| 13934 |
+
>>> output_layer
|
| 13935 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 271644160
|
| 13936 |
+
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
|
| 13937 |
+
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
|
| 13938 |
+
Params for bucket 1 (271644160 elements, 271644160 padded size):
|
| 13939 |
+
module.decoder.final_layernorm.bias
|
| 13940 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
|
| 13941 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
|
| 13942 |
+
module.decoder.layers.1.mlp.linear_fc1.bias
|
| 13943 |
+
module.decoder.layers.0.mlp.linear_fc1.bias
|
| 13944 |
+
module.decoder.final_layernorm.weight
|
| 13945 |
+
module.decoder.layers.1.self_attention.linear_qkv.weight
|
| 13946 |
+
module.decoder.layers.1.self_attention.linear_proj.weight
|
| 13947 |
+
module.decoder.layers.0.self_attention.linear_qkv.weight
|
| 13948 |
+
module.decoder.layers.0.self_attention.linear_proj.weight
|
| 13949 |
+
module.decoder.layers.1.mlp.linear_fc2.weight
|
| 13950 |
+
module.decoder.layers.1.self_attention.linear_proj.bias
|
| 13951 |
+
module.decoder.layers.0.self_attention.linear_proj.bias
|
| 13952 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
|
| 13953 |
+
module.decoder.layers.0.mlp.linear_fc2.weight
|
| 13954 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
|
| 13955 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
|
| 13956 |
+
module.decoder.layers.1.self_attention.linear_qkv.bias
|
| 13957 |
+
module.decoder.layers.0.mlp.linear_fc2.bias
|
| 13958 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
|
| 13959 |
+
module.decoder.layers.0.self_attention.linear_qkv.bias
|
| 13960 |
+
module.embedding.position_embeddings.weight
|
| 13961 |
+
module.decoder.layers.1.mlp.linear_fc1.weight
|
| 13962 |
+
module.decoder.layers.0.mlp.linear_fc1.weight
|
| 13963 |
+
module.decoder.layers.1.mlp.linear_fc2.bias
|
| 13964 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
|
| 13965 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
|
| 13966 |
+
module.embedding.word_embeddings.weight
|
| 13967 |
+
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14f23638e6f0>, config_logger_dir='')
|
| 13968 |
+
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
|
| 13969 |
+
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
|
| 13970 |
+
will not load any checkpoints and will start from random
|
| 13971 |
+
(min, max) time across ranks (ms):
|
| 13972 |
+
load-checkpoint ................................: (14.15, 14.46)
|
| 13973 |
+
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:23
|
| 13974 |
+
> building train, validation, and test datasets ...
|
| 13975 |
+
> datasets target sizes (minimum size):
|
| 13976 |
+
train: 10
|
| 13977 |
+
validation: 1
|
| 13978 |
+
test: 1
|
| 13979 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
|
| 13980 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
|
| 13981 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
|
| 13982 |
+
> building train, validation, and test datasets for GPT ...
|
| 13983 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=49152, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14f23644bd40>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
|
| 13984 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
|
| 13985 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 13986 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 13987 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005095 seconds
|
| 13988 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1387
|
| 13989 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 13990 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
|
| 13991 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 13992 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 13993 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001743 seconds
|
| 13994 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1386
|
| 13995 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 13996 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
|
| 13997 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 13998 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 13999 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001374 seconds
|
| 14000 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1389
|
| 14001 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 14002 |
+
> finished creating GPT datasets ...
|
| 14003 |
+
[after dataloaders are built] datetime: 2025-06-21 21:34:23
|
| 14004 |
+
done with setup ...
|
| 14005 |
+
(min, max) time across ranks (ms):
|
| 14006 |
+
model-and-optimizer-setup ......................: (2174.05, 2191.01)
|
| 14007 |
+
train/valid/test-data-iterators-setup ..........: (29.93, 118.73)
|
| 14008 |
+
training ...
|
| 14009 |
+
Setting rerun_state_machine.current_iteration to 0...
|
| 14010 |
+
[before the start of training step] datetime: 2025-06-21 21:34:23
|
| 14011 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14012 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14013 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14014 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14015 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14016 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14017 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14018 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14019 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14020 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14021 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14022 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14023 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14024 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14025 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14026 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14027 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14028 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14029 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14030 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14031 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14032 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14033 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14034 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14035 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14036 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14037 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14038 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14039 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14040 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14041 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14042 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14043 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14044 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14045 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14046 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14047 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14048 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14049 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14050 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14051 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14052 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14053 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14054 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14055 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14056 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14057 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14058 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14059 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14060 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14061 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14062 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14063 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14064 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14065 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14066 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14067 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14068 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14069 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14070 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14071 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14072 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14073 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14074 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14075 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14076 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14077 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14078 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14079 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14080 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14081 |
+
batch tensor: tokens torch.Size([2, 98304])
|
| 14082 |
+
batch tensor: labels torch.Size([2, 98304])
|
| 14083 |
+
batch tensor: loss_mask torch.Size([2, 98304])
|
| 14084 |
+
batch tensor: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14085 |
+
batch tensor: position_ids torch.Size([2, 98304])
|
| 14086 |
+
batch tensor after cp: tokens torch.Size([2, 98304])
|
| 14087 |
+
batch tensor after cp: labels torch.Size([2, 98304])
|
| 14088 |
+
batch tensor after cp: loss_mask torch.Size([2, 98304])
|
| 14089 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 98304, 98304])
|
| 14090 |
+
batch tensor after cp: position_ids torch.Size([2, 98304])
|
| 14091 |
+
Start exporting trace 0
|
| 14092 |
+
Done exporting trace 0
|
| 14093 |
+
[2025-06-21 21:34:59] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 35801.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 14094 |
+
Number of parameters in transformer block in billions: 0.35
|
| 14095 |
+
Number of parameters in embedding layers in billions: 0.21
|
| 14096 |
+
Total number of parameters in billions: 0.56
|
| 14097 |
+
Number of parameters in most loaded shard in billions: 0.0703
|
| 14098 |
+
Theoretical memory footprints: weight and optimizer=1206.09 MB
|
| 14099 |
+
[Rank 1] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 55928.0 | max reserved: 55928.0
|
| 14100 |
+
[Rank 6] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 55928.0 | max reserved: 55928.0
|
| 14101 |
+
[Rank 7] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 55928.0 | max reserved: 55928.0
|
| 14102 |
+
[Rank 3] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 55928.0 | max reserved: 55928.0
|
| 14103 |
+
[Rank 0] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 54392.0 | max reserved: 54392.0
|
| 14104 |
+
[Rank 4] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 54392.0 | max reserved: 54392.0
[Rank 5] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 54392.0 | max reserved: 54392.0
|
| 14105 |
+
|
| 14106 |
+
[Rank 2] (after 1 iterations) memory (MB) | allocated: 21684.97607421875 | max allocated: 51035.55419921875 | reserved: 55928.0 | max reserved: 55928.0
|
attnserver.run_attnserver.slurm.sh.343209.err.log
CHANGED
@@ -4733,3 +4733,318 @@ W0621 21:33:45.058000 2009259 site-packages/torch/distributed/run.py:766]
W0621 21:33:45.058000 2009259 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:33:45.058000 2009259 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:33:45.058000 2009259 site-packages/torch/distributed/run.py:766] *****************************************
| 4736 |
+
[rank5]:[W621 21:34:05.450271195 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 4737 |
+
[rank0]:[W621 21:34:05.164434007 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 4738 |
+
[rank4]:[W621 21:34:05.178424719 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 4739 |
+
[rank1]:[W621 21:34:05.178497648 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 4740 |
+
[rank2]:[W621 21:34:05.180917207 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
|
| 4741 |
+
[rank3]:[W621 21:34:05.183453769 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
|
| 4742 |
+
[rank6]:[W621 21:34:05.183555161 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
|
| 4743 |
+
[rank7]:[W621 21:34:05.187468500 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
|
| 4744 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4745 |
+
warnings.warn(
|
| 4746 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4747 |
+
warnings.warn(
|
| 4748 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4749 |
+
warnings.warn(
|
| 4750 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4751 |
+
warnings.warn(
|
| 4752 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4753 |
+
warnings.warn(
|
| 4754 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4755 |
+
warnings.warn(
|
| 4756 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4757 |
+
warnings.warn(
|
| 4758 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 4759 |
+
warnings.warn(
|
| 4760 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4761 |
+
warnings.warn(
|
| 4762 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4763 |
+
warnings.warn(
|
| 4764 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4765 |
+
warnings.warn(
|
| 4766 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4767 |
+
warnings.warn(
|
| 4768 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4769 |
+
warnings.warn(
|
| 4770 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4771 |
+
warnings.warn(
|
| 4772 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4773 |
+
warnings.warn(
|
| 4774 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 4775 |
+
warnings.warn(
|
| 4776 |
+
[rank6]: Traceback (most recent call last):
|
| 4777 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4778 |
+
[rank6]: pretrain(
|
| 4779 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4780 |
+
[rank6]: iteration, num_floating_point_operations_so_far = train(
|
| 4781 |
+
[rank6]: ^^^^^^
|
| 4782 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4783 |
+
[rank6]: ) = train_step(
|
| 4784 |
+
[rank6]: ^^^^^^^^^^^
|
| 4785 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4786 |
+
[rank6]: losses_reduced = forward_backward_func(
|
| 4787 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4788 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4789 |
+
[rank6]: output_tensor, num_tokens = forward_step(
|
| 4790 |
+
[rank6]: ^^^^^^^^^^^^^
|
| 4791 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4792 |
+
[rank6]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4793 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4794 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4795 |
+
[rank6]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4796 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4797 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4798 |
+
[rank6]: batch = next(global_batches)
|
| 4799 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^
|
| 4800 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4801 |
+
[rank6]: attention_mask = torch.ones(
|
| 4802 |
+
[rank6]: ^^^^^^^^^^^
|
| 4803 |
+
[rank6]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4804 |
+
[rank3]: Traceback (most recent call last):
|
| 4805 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4806 |
+
[rank3]: pretrain(
|
| 4807 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4808 |
+
[rank3]: iteration, num_floating_point_operations_so_far = train(
|
| 4809 |
+
[rank3]: ^^^^^^
|
| 4810 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4811 |
+
[rank3]: ) = train_step(
|
| 4812 |
+
[rank3]: ^^^^^^^^^^^
|
| 4813 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4814 |
+
[rank3]: losses_reduced = forward_backward_func(
|
| 4815 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4816 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4817 |
+
[rank3]: output_tensor, num_tokens = forward_step(
|
| 4818 |
+
[rank3]: ^^^^^^^^^^^^^
|
| 4819 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4820 |
+
[rank3]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4821 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4822 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4823 |
+
[rank3]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4824 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4825 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4826 |
+
[rank3]: batch = next(global_batches)
|
| 4827 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^
|
| 4828 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4829 |
+
[rank3]: attention_mask = torch.ones(
|
| 4830 |
+
[rank3]: ^^^^^^^^^^^
|
| 4831 |
+
[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4832 |
+
[rank5]: Traceback (most recent call last):
|
| 4833 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4834 |
+
[rank5]: pretrain(
|
| 4835 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4836 |
+
[rank5]: iteration, num_floating_point_operations_so_far = train(
|
| 4837 |
+
[rank5]: ^^^^^^
|
| 4838 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4839 |
+
[rank5]: ) = train_step(
|
| 4840 |
+
[rank5]: ^^^^^^^^^^^
|
| 4841 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4842 |
+
[rank5]: losses_reduced = forward_backward_func(
|
| 4843 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4844 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4845 |
+
[rank5]: output_tensor, num_tokens = forward_step(
|
| 4846 |
+
[rank5]: ^^^^^^^^^^^^^
|
| 4847 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4848 |
+
[rank5]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4849 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4850 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4851 |
+
[rank5]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4852 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4853 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4854 |
+
[rank5]: batch = next(global_batches)
|
| 4855 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^
|
| 4856 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4857 |
+
[rank5]: attention_mask = torch.ones(
|
| 4858 |
+
[rank5]: ^^^^^^^^^^^
|
| 4859 |
+
[rank5]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4860 |
+
[rank0]: Traceback (most recent call last):
|
| 4861 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4862 |
+
[rank0]: pretrain(
|
| 4863 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4864 |
+
[rank0]: iteration, num_floating_point_operations_so_far = train(
|
| 4865 |
+
[rank0]: ^^^^^^
|
| 4866 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4867 |
+
[rank0]: ) = train_step(
|
| 4868 |
+
[rank0]: ^^^^^^^^^^^
|
| 4869 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4870 |
+
[rank0]: losses_reduced = forward_backward_func(
|
| 4871 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4872 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4873 |
+
[rank0]: output_tensor, num_tokens = forward_step(
|
| 4874 |
+
[rank0]: ^^^^^^^^^^^^^
|
| 4875 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4876 |
+
[rank0]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4877 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4878 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4879 |
+
[rank0]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4880 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4881 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4882 |
+
[rank0]: batch = next(global_batches)
|
| 4883 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^
|
| 4884 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4885 |
+
[rank0]: attention_mask = torch.ones(
|
| 4886 |
+
[rank0]: ^^^^^^^^^^^
|
| 4887 |
+
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4888 |
+
[rank4]: Traceback (most recent call last):
|
| 4889 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4890 |
+
[rank4]: pretrain(
|
| 4891 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4892 |
+
[rank4]: iteration, num_floating_point_operations_so_far = train(
|
| 4893 |
+
[rank4]: ^^^^^^
|
| 4894 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4895 |
+
[rank4]: ) = train_step(
|
| 4896 |
+
[rank4]: ^^^^^^^^^^^
|
| 4897 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4898 |
+
[rank4]: losses_reduced = forward_backward_func(
|
| 4899 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4900 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4901 |
+
[rank4]: output_tensor, num_tokens = forward_step(
|
| 4902 |
+
[rank4]: ^^^^^^^^^^^^^
|
| 4903 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4904 |
+
[rank4]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4905 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4906 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4907 |
+
[rank4]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4908 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4909 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4910 |
+
[rank4]: batch = next(global_batches)
|
| 4911 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^
|
| 4912 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4913 |
+
[rank4]: attention_mask = torch.ones(
|
| 4914 |
+
[rank4]: ^^^^^^^^^^^
|
| 4915 |
+
[rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4916 |
+
[rank1]: Traceback (most recent call last):
|
| 4917 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4918 |
+
[rank1]: pretrain(
|
| 4919 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4920 |
+
[rank1]: iteration, num_floating_point_operations_so_far = train(
|
| 4921 |
+
[rank1]: ^^^^^^
|
| 4922 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4923 |
+
[rank1]: ) = train_step(
|
| 4924 |
+
[rank1]: ^^^^^^^^^^^
|
| 4925 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4926 |
+
[rank1]: losses_reduced = forward_backward_func(
|
| 4927 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4928 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4929 |
+
[rank1]: output_tensor, num_tokens = forward_step(
|
| 4930 |
+
[rank1]: ^^^^^^^^^^^^^
|
| 4931 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4932 |
+
[rank1]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4933 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4934 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4935 |
+
[rank1]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4936 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4937 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4938 |
+
[rank1]: batch = next(global_batches)
|
| 4939 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^
|
| 4940 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4941 |
+
[rank1]: attention_mask = torch.ones(
|
| 4942 |
+
[rank1]: ^^^^^^^^^^^
|
| 4943 |
+
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4944 |
+
[rank7]: Traceback (most recent call last):
|
| 4945 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4946 |
+
[rank7]: pretrain(
|
| 4947 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4948 |
+
[rank7]: iteration, num_floating_point_operations_so_far = train(
|
| 4949 |
+
[rank7]: ^^^^^^
|
| 4950 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4951 |
+
[rank7]: ) = train_step(
|
| 4952 |
+
[rank7]: ^^^^^^^^^^^
|
| 4953 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4954 |
+
[rank7]: losses_reduced = forward_backward_func(
|
| 4955 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4956 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4957 |
+
[rank7]: output_tensor, num_tokens = forward_step(
|
| 4958 |
+
[rank7]: ^^^^^^^^^^^^^
|
| 4959 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4960 |
+
[rank7]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4961 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4962 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4963 |
+
[rank7]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4964 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4965 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4966 |
+
[rank7]: batch = next(global_batches)
|
| 4967 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^
|
| 4968 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4969 |
+
[rank7]: attention_mask = torch.ones(
|
| 4970 |
+
[rank7]: ^^^^^^^^^^^
|
| 4971 |
+
[rank7]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 4972 |
+
[rank2]: Traceback (most recent call last):
|
| 4973 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 4974 |
+
[rank2]: pretrain(
|
| 4975 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 4976 |
+
[rank2]: iteration, num_floating_point_operations_so_far = train(
|
| 4977 |
+
[rank2]: ^^^^^^
|
| 4978 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 4979 |
+
[rank2]: ) = train_step(
|
| 4980 |
+
[rank2]: ^^^^^^^^^^^
|
| 4981 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 4982 |
+
[rank2]: losses_reduced = forward_backward_func(
|
| 4983 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 4984 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 4985 |
+
[rank2]: output_tensor, num_tokens = forward_step(
|
| 4986 |
+
[rank2]: ^^^^^^^^^^^^^
|
| 4987 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 4988 |
+
[rank2]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 4989 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4990 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 4991 |
+
[rank2]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 4992 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 4993 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 4994 |
+
[rank2]: batch = next(global_batches)
|
| 4995 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^
|
| 4996 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 4997 |
+
[rank2]: attention_mask = torch.ones(
|
| 4998 |
+
[rank2]: ^^^^^^^^^^^
|
| 4999 |
+
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 5000 |
+
[rank4]:[W621 21:34:20.239208441 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5001 |
+
[rank1]:[W621 21:34:20.266107719 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5002 |
+
[rank7]:[W621 21:34:20.275130946 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5003 |
+
[rank6]:[W621 21:34:20.327006670 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5004 |
+
[rank3]:[W621 21:34:20.340917591 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5005 |
+
[rank2]:[W621 21:34:20.379023672 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5006 |
+
[rank5]:[W621 21:34:20.389656281 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 5007 |
+
W0621 21:34:21.239000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009329 closing signal SIGTERM
|
| 5008 |
+
W0621 21:34:21.242000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009330 closing signal SIGTERM
|
| 5009 |
+
W0621 21:34:21.242000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009331 closing signal SIGTERM
|
| 5010 |
+
W0621 21:34:21.243000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009332 closing signal SIGTERM
|
| 5011 |
+
W0621 21:34:21.243000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009334 closing signal SIGTERM
|
| 5012 |
+
W0621 21:34:21.244000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009335 closing signal SIGTERM
|
| 5013 |
+
W0621 21:34:21.244000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2009336 closing signal SIGTERM
|
| 5014 |
+
E0621 21:34:21.516000 2009259 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 4 (pid: 2009333) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 5015 |
+
Traceback (most recent call last):
|
| 5016 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 5017 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 5018 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 5019 |
+
main()
|
| 5020 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 5021 |
+
return arg(*args, **kwargs)
|
| 5022 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 5023 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 5024 |
+
launch(args)
|
| 5025 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 5026 |
+
run(args)
|
| 5027 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 5028 |
+
elastic_launch(
|
| 5029 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 5030 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 5031 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 5032 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
|
| 5033 |
+
raise ChildFailedError(
|
| 5034 |
+
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
|
| 5035 |
+
============================================================
|
| 5036 |
+
./pretrain_gpt_profile.py FAILED
|
| 5037 |
+
------------------------------------------------------------
|
| 5038 |
+
Failures:
|
| 5039 |
+
<NO_OTHER_FAILURES>
|
| 5040 |
+
------------------------------------------------------------
|
| 5041 |
+
Root Cause (first observed failure):
|
| 5042 |
+
[0]:
|
| 5043 |
+
time : 2025-06-21_21:34:21
|
| 5044 |
+
host : fs-mbz-gpu-702
|
| 5045 |
+
rank : 4 (local_rank: 4)
|
| 5046 |
+
exitcode : 1 (pid: 2009333)
|
| 5047 |
+
error_file: <N/A>
|
| 5048 |
+
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
|
| 5049 |
+
============================================================
|
| 5050 |
+
+ set +x
|
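Aside on the failures above: every rank dies at the same call, attention_mask = torch.ones(...) inside setup_batches, i.e. while materializing a dense attention mask whose element count grows quadratically with sequence length. The short Python sketch below is a minimal illustration only, assuming a batch size of 1, a 131072-token sequence, and 4-byte float32 elements; the failing run's exact batch size and dtype may differ.

# Minimal sketch (illustrative, assumed values): memory footprint of a dense
# [batch, 1, seq, seq] attention mask like the one torch.ones is asked to build
# in the tracebacks above.
def dense_mask_gib(batch: int, seq: int, bytes_per_elem: int = 4) -> float:
    # batch * 1 * seq * seq elements, each bytes_per_elem bytes, converted to GiB
    return batch * seq * seq * bytes_per_elem / 2**30

print(f"{dense_mask_gib(1, 131072):.0f} GiB")  # ~64 GiB for one float32 mask

One such mask is already about 64 GiB; with a larger batch dimension the request quickly exceeds the 139.81 GiB reported per GPU, so the 1024.00 GiB allocation in the OOM messages cannot succeed regardless of the PYTORCH_CUDA_ALLOC_CONF fragmentation hint they suggest.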
attnserver.run_attnserver.slurm.sh.343209.out.log
CHANGED
|
@@ -16471,3 +16471,657 @@ CHECKPOINT_PATH: gpt-checkpoint
| 16471 |
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 16472 |
--------------------------------
|
| 16473 |
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 16474 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 16475 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 16476 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 16477 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 16478 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 16479 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 16480 |
+
Number of virtual stages per pipeline stage: None
|
| 16481 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 16482 |
+
using torch.float16 for parameters ...
|
| 16483 |
+
------------------------ arguments ------------------------
|
| 16484 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 16485 |
+
account_for_loss_in_pipeline_split .............. False
|
| 16486 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 16487 |
+
adam_beta1 ...................................... 0.9
|
| 16488 |
+
adam_beta2 ...................................... 0.999
|
| 16489 |
+
adam_eps ........................................ 1e-08
|
| 16490 |
+
add_bias_linear ................................. True
|
| 16491 |
+
add_position_embedding .......................... True
|
| 16492 |
+
add_qkv_bias .................................... True
|
| 16493 |
+
adlr_autoresume ................................. False
|
| 16494 |
+
adlr_autoresume_interval ........................ 1000
|
| 16495 |
+
align_grad_reduce ............................... True
|
| 16496 |
+
align_param_gather .............................. False
|
| 16497 |
+
app_tag_run_name ................................ None
|
| 16498 |
+
app_tag_run_version ............................. 0.0.0
|
| 16499 |
+
apply_layernorm_1p .............................. False
|
| 16500 |
+
apply_query_key_layer_scaling ................... False
|
| 16501 |
+
apply_residual_connection_post_layernorm ........ False
|
| 16502 |
+
apply_rope_fusion ............................... False
|
| 16503 |
+
async_save ...................................... None
|
| 16504 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 16505 |
+
attention_backend ............................... AttnBackend.auto
|
| 16506 |
+
attention_dropout ............................... 0.1
|
| 16507 |
+
attention_softmax_in_fp32 ....................... False
|
| 16508 |
+
auto_detect_ckpt_format ......................... False
|
| 16509 |
+
barrier_with_L1_time ............................ True
|
| 16510 |
+
bert_binary_head ................................ True
|
| 16511 |
+
bert_embedder_type .............................. megatron
|
| 16512 |
+
bert_load ....................................... None
|
| 16513 |
+
bf16 ............................................ False
|
| 16514 |
+
bias_dropout_fusion ............................. True
|
| 16515 |
+
bias_gelu_fusion ................................ True
|
| 16516 |
+
bias_swiglu_fusion .............................. True
|
| 16517 |
+
biencoder_projection_dim ........................ 0
|
| 16518 |
+
biencoder_shared_query_context_model ............ False
|
| 16519 |
+
block_data_path ................................. None
|
| 16520 |
+
calc_ft_timeouts ................................ False
|
| 16521 |
+
calculate_per_token_loss ........................ False
|
| 16522 |
+
check_for_large_grads ........................... False
|
| 16523 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 16524 |
+
check_for_spiky_loss ............................ False
|
| 16525 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 16526 |
+
ckpt_assume_constant_structure .................. False
|
| 16527 |
+
ckpt_convert_format ............................. None
|
| 16528 |
+
ckpt_convert_save ............................... None
|
| 16529 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 16530 |
+
ckpt_format ..................................... torch_dist
|
| 16531 |
+
ckpt_fully_parallel_load ........................ False
|
| 16532 |
+
ckpt_fully_parallel_save ........................ True
|
| 16533 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 16534 |
+
ckpt_step ....................................... None
|
| 16535 |
+
classes_fraction ................................ 1.0
|
| 16536 |
+
clip_grad ....................................... 1.0
|
| 16537 |
+
clone_scatter_output_in_embedding ............... True
|
| 16538 |
+
config_logger_dir ...............................
|
| 16539 |
+
consumed_train_samples .......................... 0
|
| 16540 |
+
consumed_valid_samples .......................... 0
|
| 16541 |
+
context_parallel_size ........................... 1
|
| 16542 |
+
cp_comm_type .................................... ['p2p']
|
| 16543 |
+
create_attention_mask_in_dataloader ............. True
|
| 16544 |
+
cross_entropy_fusion_impl ....................... native
|
| 16545 |
+
cross_entropy_loss_fusion ....................... False
|
| 16546 |
+
cuda_graph_scope ................................ full
|
| 16547 |
+
cuda_graph_warmup_steps ......................... 3
|
| 16548 |
+
data_args_path .................................. None
|
| 16549 |
+
data_cache_path ................................. None
|
| 16550 |
+
data_parallel_random_init ....................... False
|
| 16551 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 16552 |
+
data_parallel_size .............................. 1
|
| 16553 |
+
data_path ....................................... None
|
| 16554 |
+
data_per_class_fraction ......................... 1.0
|
| 16555 |
+
data_sharding ................................... True
|
| 16556 |
+
dataloader_type ................................. single
|
| 16557 |
+
ddp_average_in_collective ....................... False
|
| 16558 |
+
ddp_bucket_size ................................. None
|
| 16559 |
+
ddp_num_buckets ................................. None
|
| 16560 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 16561 |
+
decoder_first_pipeline_num_layers ............... None
|
| 16562 |
+
decoder_last_pipeline_num_layers ................ None
|
| 16563 |
+
decoder_num_layers .............................. None
|
| 16564 |
+
decoder_seq_length .............................. None
|
| 16565 |
+
decoupled_lr .................................... None
|
| 16566 |
+
decoupled_min_lr ................................ None
|
| 16567 |
+
decrease_batch_size_if_needed ................... False
|
| 16568 |
+
defer_embedding_wgrad_compute ................... False
|
| 16569 |
+
deprecated_use_mcore_models ..................... False
|
| 16570 |
+
deterministic_mode .............................. False
|
| 16571 |
+
dino_bottleneck_size ............................ 256
|
| 16572 |
+
dino_freeze_last_layer .......................... 1
|
| 16573 |
+
dino_head_hidden_size ........................... 2048
|
| 16574 |
+
dino_local_crops_number ......................... 10
|
| 16575 |
+
dino_local_img_size ............................. 96
|
| 16576 |
+
dino_norm_last_layer ............................ False
|
| 16577 |
+
dino_teacher_temp ............................... 0.07
|
| 16578 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 16579 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 16580 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 16581 |
+
disable_mamba_mem_eff_path ...................... False
|
| 16582 |
+
disable_straggler_on_startup .................... False
|
| 16583 |
+
dist_ckpt_format_deprecated ..................... None
|
| 16584 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 16585 |
+
distribute_saved_activations .................... False
|
| 16586 |
+
distributed_backend ............................. nccl
|
| 16587 |
+
distributed_timeout_minutes ..................... 10
|
| 16588 |
+
embedding_path .................................. None
|
| 16589 |
+
empty_unused_memory_level ....................... 0
|
| 16590 |
+
enable_cuda_graph ............................... False
|
| 16591 |
+
enable_ft_package ............................... False
|
| 16592 |
+
enable_gloo_process_groups ...................... True
|
| 16593 |
+
enable_msc ...................................... True
|
| 16594 |
+
enable_one_logger ............................... True
|
| 16595 |
+
encoder_num_layers .............................. 2
|
| 16596 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 16597 |
+
encoder_seq_length .............................. 131072
|
| 16598 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 16599 |
+
end_weight_decay ................................ 0.1
|
| 16600 |
+
eod_mask_loss ................................... False
|
| 16601 |
+
error_injection_rate ............................ 0
|
| 16602 |
+
error_injection_type ............................ transient_error
|
| 16603 |
+
eval_interval ................................... 16
|
| 16604 |
+
eval_iters ...................................... 1
|
| 16605 |
+
evidence_data_path .............................. None
|
| 16606 |
+
exit_duration_in_mins ........................... None
|
| 16607 |
+
exit_interval ................................... None
|
| 16608 |
+
exit_on_missing_checkpoint ...................... False
|
| 16609 |
+
exit_signal_handler ............................. False
|
| 16610 |
+
exp_avg_dtype ................................... torch.float32
|
| 16611 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 16612 |
+
expert_model_parallel_size ...................... 1
|
| 16613 |
+
expert_tensor_parallel_size ..................... 8
|
| 16614 |
+
external_cuda_graph ............................. False
|
| 16615 |
+
ffn_hidden_size ................................. 16384
|
| 16616 |
+
finetune ........................................ False
|
| 16617 |
+
first_last_layers_bf16 .......................... False
|
| 16618 |
+
flash_decode .................................... False
|
| 16619 |
+
fp16 ............................................ True
|
| 16620 |
+
fp16_lm_cross_entropy ........................... False
|
| 16621 |
+
fp32_residual_connection ........................ False
|
| 16622 |
+
fp8 ............................................. None
|
| 16623 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 16624 |
+
fp8_amax_history_len ............................ 1
|
| 16625 |
+
fp8_interval .................................... 1
|
| 16626 |
+
fp8_margin ...................................... 0
|
| 16627 |
+
fp8_param_gather ................................ False
|
| 16628 |
+
fp8_recipe ...................................... delayed
|
| 16629 |
+
fp8_wgrad ....................................... True
|
| 16630 |
+
fsdp_double_buffer .............................. False
|
| 16631 |
+
global_batch_size ............................... 1
|
| 16632 |
+
grad_reduce_in_bf16 ............................. False
|
| 16633 |
+
gradient_accumulation_fusion .................... True
|
| 16634 |
+
gradient_reduce_div_fusion ...................... True
|
| 16635 |
+
group_query_attention ........................... True
|
| 16636 |
+
head_lr_mult .................................... 1.0
|
| 16637 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 16638 |
+
heterogeneous_layers_config_path ................ None
|
| 16639 |
+
hidden_dropout .................................. 0.1
|
| 16640 |
+
hidden_size ..................................... 4096
|
| 16641 |
+
hierarchical_context_parallel_sizes ............. None
|
| 16642 |
+
high_priority_stream_groups ..................... []
|
| 16643 |
+
hybrid_attention_ratio .......................... 0.0
|
| 16644 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 16645 |
+
hybrid_override_pattern ......................... None
|
| 16646 |
+
hysteresis ...................................... 2
|
| 16647 |
+
ict_head_size ................................... None
|
| 16648 |
+
ict_load ........................................ None
|
| 16649 |
+
img_h ........................................... 224
|
| 16650 |
+
img_w ........................................... 224
|
| 16651 |
+
indexer_batch_size .............................. 128
|
| 16652 |
+
indexer_log_interval ............................ 1000
|
| 16653 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 16654 |
+
inference_dynamic_batching ...................... False
|
| 16655 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 16656 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 16657 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 16658 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 16659 |
+
inference_dynamic_batching_max_requests_override None
|
| 16660 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 16661 |
+
inference_max_batch_size ........................ 8
|
| 16662 |
+
inference_max_seq_length ........................ 2560
|
| 16663 |
+
inference_rng_tracker ........................... False
|
| 16664 |
+
init_method_std ................................. 0.02
|
| 16665 |
+
init_method_xavier_uniform ...................... False
|
| 16666 |
+
init_model_with_meta_device ..................... False
|
| 16667 |
+
initial_loss_scale .............................. 4294967296
|
| 16668 |
+
inprocess_active_world_size ..................... 8
|
| 16669 |
+
inprocess_barrier_timeout ....................... 120
|
| 16670 |
+
inprocess_completion_timeout .................... 120
|
| 16671 |
+
inprocess_empty_cuda_cache ...................... False
|
| 16672 |
+
inprocess_granularity ........................... node
|
| 16673 |
+
inprocess_hard_timeout .......................... 90
|
| 16674 |
+
inprocess_heartbeat_interval .................... 30
|
| 16675 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 16676 |
+
inprocess_last_call_wait ........................ 1
|
| 16677 |
+
inprocess_max_iterations ........................ None
|
| 16678 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 16679 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 16680 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 16681 |
+
inprocess_restart ............................... False
|
| 16682 |
+
inprocess_soft_timeout .......................... 60
|
| 16683 |
+
inprocess_termination_grace_time ................ 1
|
| 16684 |
+
is_hybrid_model ................................. False
|
| 16685 |
+
iter_per_epoch .................................. 1250
|
| 16686 |
+
iterations_to_skip .............................. []
|
| 16687 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 16688 |
+
kv_channels ..................................... 64
|
| 16689 |
+
kv_lora_rank .................................... 32
|
| 16690 |
+
lazy_mpu_init ................................... None
|
| 16691 |
+
load ............................................ gpt-checkpoint
|
| 16692 |
+
load_model_opt_format ........................... False
|
| 16693 |
+
local_rank ...................................... 0
|
| 16694 |
+
log_interval .................................... 1
|
| 16695 |
+
log_loss_scale_to_tensorboard ................... True
|
| 16696 |
+
log_memory_to_tensorboard ....................... False
|
| 16697 |
+
log_num_zeros_in_grad ........................... False
|
| 16698 |
+
log_params_norm ................................. False
|
| 16699 |
+
log_progress .................................... False
|
| 16700 |
+
log_straggler ................................... False
|
| 16701 |
+
log_throughput .................................. False
|
| 16702 |
+
log_timers_to_tensorboard ....................... False
|
| 16703 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 16704 |
+
log_world_size_to_tensorboard ................... False
|
| 16705 |
+
logging_level ................................... 0
|
| 16706 |
+
loss_scale ...................................... None
|
| 16707 |
+
loss_scale_window ............................... 1000
|
| 16708 |
+
lr .............................................. 0.0005
|
| 16709 |
+
lr_decay_iters .................................. 150000
|
| 16710 |
+
lr_decay_samples ................................ None
|
| 16711 |
+
lr_decay_style .................................. cosine
|
| 16712 |
+
lr_warmup_fraction .............................. None
|
| 16713 |
+
lr_warmup_init .................................. 0.0
|
| 16714 |
+
lr_warmup_iters ................................. 2
|
| 16715 |
+
lr_warmup_samples ............................... 0
|
| 16716 |
+
lr_wsd_decay_iters .............................. None
|
| 16717 |
+
lr_wsd_decay_samples ............................ None
|
| 16718 |
+
lr_wsd_decay_style .............................. exponential
|
| 16719 |
+
main_grads_dtype ................................ torch.float32
|
| 16720 |
+
main_params_dtype ............................... torch.float32
|
| 16721 |
+
make_vocab_size_divisible_by .................... 128
|
| 16722 |
+
mamba_head_dim .................................. 64
|
| 16723 |
+
mamba_num_groups ................................ 8
|
| 16724 |
+
mamba_num_heads ................................. None
|
| 16725 |
+
mamba_state_dim ................................. 128
|
| 16726 |
+
manual_gc ....................................... False
|
| 16727 |
+
manual_gc_eval .................................. True
|
| 16728 |
+
manual_gc_interval .............................. 0
|
| 16729 |
+
mask_factor ..................................... 1.0
|
| 16730 |
+
mask_prob ....................................... 0.15
|
| 16731 |
+
mask_type ....................................... random
|
| 16732 |
+
masked_softmax_fusion ........................... True
|
| 16733 |
+
max_position_embeddings ......................... 131072
|
| 16734 |
+
max_tokens_to_oom ............................... 12000
|
| 16735 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 16736 |
+
merge_file ...................................... merges.txt
|
| 16737 |
+
micro_batch_size ................................ 1
|
| 16738 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 16739 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 16740 |
+
min_loss_scale .................................. 1.0
|
| 16741 |
+
min_lr .......................................... 0.0
|
| 16742 |
+
mlp_chunks_for_prefill .......................... 1
|
| 16743 |
+
mmap_bin_files .................................. True
|
| 16744 |
+
mock_data ....................................... True
|
| 16745 |
+
moe_apply_probs_on_input ........................ False
|
| 16746 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 16747 |
+
moe_enable_deepep ............................... False
|
| 16748 |
+
moe_expert_capacity_factor ...................... None
|
| 16749 |
+
moe_extended_tp ................................. False
|
| 16750 |
+
moe_ffn_hidden_size ............................. None
|
| 16751 |
+
moe_grouped_gemm ................................ False
|
| 16752 |
+
moe_input_jitter_eps ............................ None
|
| 16753 |
+
moe_layer_freq .................................. 1
|
| 16754 |
+
moe_layer_recompute ............................. False
|
| 16755 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 16756 |
+
moe_per_layer_logging ........................... False
|
| 16757 |
+
moe_permute_fusion .............................. False
|
| 16758 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 16759 |
+
moe_router_dtype ................................ None
|
| 16760 |
+
moe_router_enable_expert_bias ................... False
|
| 16761 |
+
moe_router_force_load_balancing ................. False
|
| 16762 |
+
moe_router_group_topk ........................... None
|
| 16763 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 16764 |
+
moe_router_num_groups ........................... None
|
| 16765 |
+
moe_router_padding_for_fp8 ...................... False
|
| 16766 |
+
moe_router_pre_softmax .......................... False
|
| 16767 |
+
moe_router_score_function ....................... softmax
|
| 16768 |
+
moe_router_topk ................................. 2
|
| 16769 |
+
moe_router_topk_scaling_factor .................. None
|
| 16770 |
+
moe_shared_expert_intermediate_size ............. None
|
| 16771 |
+
moe_shared_expert_overlap ....................... False
|
| 16772 |
+
moe_token_dispatcher_type ....................... allgather
|
| 16773 |
+
moe_token_drop_policy ........................... probs
|
| 16774 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 16775 |
+
moe_use_upcycling ............................... False
|
| 16776 |
+
moe_z_loss_coeff ................................ None
|
| 16777 |
+
mrope_section ................................... None
|
| 16778 |
+
mscale .......................................... 1.0
|
| 16779 |
+
mscale_all_dim .................................. 1.0
|
| 16780 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 16781 |
+
mtp_num_layers .................................. None
|
| 16782 |
+
multi_latent_attention .......................... False
|
| 16783 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 16784 |
+
nccl_communicator_config_path ................... None
|
| 16785 |
+
nccl_ub ......................................... False
|
| 16786 |
+
no_load_optim ................................... None
|
| 16787 |
+
no_load_rng ..................................... None
|
| 16788 |
+
no_persist_layer_norm ........................... False
|
| 16789 |
+
no_rope_freq .................................... None
|
| 16790 |
+
no_save_optim ................................... None
|
| 16791 |
+
no_save_rng ..................................... None
|
| 16792 |
+
non_persistent_ckpt_type ........................ None
|
| 16793 |
+
non_persistent_global_ckpt_dir .................. None
|
| 16794 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 16795 |
+
non_persistent_local_ckpt_dir ................... None
|
| 16796 |
+
non_persistent_save_interval .................... None
|
| 16797 |
+
norm_epsilon .................................... 1e-05
|
| 16798 |
+
normalization ................................... LayerNorm
|
| 16799 |
+
num_attention_heads ............................. 64
|
| 16800 |
+
num_channels .................................... 3
|
| 16801 |
+
num_classes ..................................... 1000
|
| 16802 |
+
num_dataset_builder_threads ..................... 1
|
| 16803 |
+
num_distributed_optimizer_instances ............. 1
|
| 16804 |
+
num_experts ..................................... None
|
| 16805 |
+
num_layers ...................................... 2
|
| 16806 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 16807 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 16808 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 16809 |
+
num_query_groups ................................ 16
|
| 16810 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 16811 |
+
num_workers ..................................... 2
|
| 16812 |
+
object_storage_cache_path ....................... None
|
| 16813 |
+
one_logger_async ................................ False
|
| 16814 |
+
one_logger_project .............................. megatron-lm
|
| 16815 |
+
one_logger_run_name ............................. None
|
| 16816 |
+
onnx_safe ....................................... None
|
| 16817 |
+
openai_gelu ..................................... False
|
| 16818 |
+
optimizer ....................................... adam
|
| 16819 |
+
optimizer_cpu_offload ........................... False
|
| 16820 |
+
optimizer_offload_fraction ...................... 1.0
|
| 16821 |
+
output_bert_embeddings .......................... False
|
| 16822 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 16823 |
+
overlap_grad_reduce ............................. False
|
| 16824 |
+
overlap_p2p_comm ................................ False
|
| 16825 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 16826 |
+
overlap_param_gather ............................ False
|
| 16827 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 16828 |
+
override_opt_param_scheduler .................... False
|
| 16829 |
+
params_dtype .................................... torch.float16
|
| 16830 |
+
patch_dim ....................................... 16
|
| 16831 |
+
per_split_data_args_path ........................ None
|
| 16832 |
+
perform_initialization .......................... True
|
| 16833 |
+
pin_cpu_grads ................................... True
|
| 16834 |
+
pin_cpu_params .................................. True
|
| 16835 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 16836 |
+
pipeline_model_parallel_size .................... 1
|
| 16837 |
+
pipeline_model_parallel_split_rank .............. None
|
| 16838 |
+
position_embedding_type ......................... learned_absolute
|
| 16839 |
+
pretrained_checkpoint ........................... None
|
| 16840 |
+
profile ......................................... False
|
| 16841 |
+
profile_ranks ................................... [0]
|
| 16842 |
+
profile_step_end ................................ 12
|
| 16843 |
+
profile_step_start .............................. 10
|
| 16844 |
+
q_lora_rank ..................................... None
|
| 16845 |
+
qk_head_dim ..................................... 128
|
| 16846 |
+
qk_l2_norm ...................................... False
|
| 16847 |
+
qk_layernorm .................................... False
|
| 16848 |
+
qk_pos_emb_head_dim ............................. 64
|
| 16849 |
+
query_in_block_prob ............................. 0.1
|
| 16850 |
+
rampup_batch_size ............................... None
|
| 16851 |
+
rank ............................................ 0
|
| 16852 |
+
recompute_granularity ........................... None
|
| 16853 |
+
recompute_method ................................ None
|
| 16854 |
+
recompute_modules ............................... None
|
| 16855 |
+
recompute_num_layers ............................ None
|
| 16856 |
+
record_memory_history ........................... False
|
| 16857 |
+
relative_attention_max_distance ................. 128
|
| 16858 |
+
relative_attention_num_buckets .................. 32
|
| 16859 |
+
replication ..................................... False
|
| 16860 |
+
replication_factor .............................. 2
|
| 16861 |
+
replication_jump ................................ None
|
| 16862 |
+
rerun_mode ...................................... disabled
|
| 16863 |
+
reset_attention_mask ............................ False
|
| 16864 |
+
reset_position_ids .............................. False
|
| 16865 |
+
result_rejected_tracker_filename ................ None
|
| 16866 |
+
retriever_report_topk_accuracies ................ []
|
| 16867 |
+
retriever_score_scaling ......................... False
|
| 16868 |
+
retriever_seq_length ............................ 256
|
| 16869 |
+
retro_add_retriever ............................. False
|
| 16870 |
+
retro_attention_gate ............................ 1
|
| 16871 |
+
retro_cyclic_train_iters ........................ None
|
| 16872 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 16873 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 16874 |
+
retro_encoder_layers ............................ 2
|
| 16875 |
+
retro_num_neighbors ............................. 2
|
| 16876 |
+
retro_num_retrieved_chunks ...................... 2
|
| 16877 |
+
retro_project_dir ............................... None
|
| 16878 |
+
retro_verify_neighbor_count ..................... True
|
| 16879 |
+
rope_scaling_factor ............................. 8.0
|
| 16880 |
+
rotary_base ..................................... 10000
|
| 16881 |
+
rotary_interleaved .............................. False
|
| 16882 |
+
rotary_percent .................................. 1.0
|
| 16883 |
+
rotary_scaling_factor ........................... 1.0
|
| 16884 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 16885 |
+
run_workload_inspector_server ................... False
|
| 16886 |
+
sample_rate ..................................... 1.0
|
| 16887 |
+
save ............................................ gpt-checkpoint
|
| 16888 |
+
save_interval ................................... 16
|
| 16889 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 16890 |
+
seed ............................................ 1234
|
| 16891 |
+
seq_length ...................................... 131072
|
| 16892 |
+
sequence_parallel ............................... False
|
| 16893 |
+
sgd_momentum .................................... 0.9
|
| 16894 |
+
short_seq_prob .................................. 0.1
|
| 16895 |
+
skip_train ...................................... False
|
| 16896 |
+
skipped_train_samples ........................... 0
|
| 16897 |
+
spec ............................................ None
|
| 16898 |
+
split ........................................... None
|
| 16899 |
+
squared_relu .................................... False
|
| 16900 |
+
start_weight_decay .............................. 0.1
|
| 16901 |
+
straggler_ctrlr_port ............................ 65535
|
| 16902 |
+
straggler_minmax_count .......................... 1
|
| 16903 |
+
suggested_communication_unit_size ............... None
|
| 16904 |
+
swiglu .......................................... False
|
| 16905 |
+
swin_backbone_type .............................. tiny
|
| 16906 |
+
symmetric_ar_type ............................... None
|
| 16907 |
+
te_rng_tracker .................................. False
|
| 16908 |
+
tensor_model_parallel_size ...................... 8
|
| 16909 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 16910 |
+
tensorboard_log_interval ........................ 1
|
| 16911 |
+
tensorboard_queue_size .......................... 1000
|
| 16912 |
+
test_data_path .................................. None
|
| 16913 |
+
test_mode ....................................... False
|
| 16914 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 16915 |
+
tiktoken_pattern ................................ None
|
| 16916 |
+
tiktoken_special_tokens ......................... None
|
| 16917 |
+
timing_log_level ................................ 0
|
| 16918 |
+
timing_log_option ............................... minmax
|
| 16919 |
+
titles_data_path ................................ None
|
| 16920 |
+
tokenizer_model ................................. None
|
| 16921 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 16922 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 16923 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 16924 |
+
tp_comm_bulk_dgrad .............................. True
|
| 16925 |
+
tp_comm_bulk_wgrad .............................. True
|
| 16926 |
+
tp_comm_overlap ................................. False
|
| 16927 |
+
tp_comm_overlap_ag .............................. True
|
| 16928 |
+
tp_comm_overlap_cfg ............................. None
|
| 16929 |
+
tp_comm_overlap_rs .............................. True
|
| 16930 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 16931 |
+
tp_comm_split_ag ................................ True
|
| 16932 |
+
tp_comm_split_rs ................................ True
|
| 16933 |
+
train_data_path ................................. None
|
| 16934 |
+
train_iters ..................................... 10
|
| 16935 |
+
train_samples ................................... None
|
| 16936 |
+
train_sync_interval ............................. None
|
| 16937 |
+
transformer_impl ................................ transformer_engine
|
| 16938 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 16939 |
+
untie_embeddings_and_output_weights ............. False
|
| 16940 |
+
use_checkpoint_args ............................. False
|
| 16941 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 16942 |
+
use_cpu_initialization .......................... None
|
| 16943 |
+
use_custom_fsdp ................................. False
|
| 16944 |
+
use_dist_ckpt ................................... True
|
| 16945 |
+
use_dist_ckpt_deprecated ........................ False
|
| 16946 |
+
use_distributed_optimizer ....................... False
|
| 16947 |
+
use_flash_attn .................................. False
|
| 16948 |
+
use_legacy_models ............................... False
|
| 16949 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 16950 |
+
use_one_sent_docs ............................... False
|
| 16951 |
+
use_persistent_ckpt_worker ...................... False
|
| 16952 |
+
use_precision_aware_optimizer ................... False
|
| 16953 |
+
use_pytorch_profiler ............................ False
|
| 16954 |
+
use_ring_exchange_p2p ........................... False
|
| 16955 |
+
use_rope_scaling ................................ False
|
| 16956 |
+
use_rotary_position_embeddings .................. False
|
| 16957 |
+
use_sharp ....................................... False
|
| 16958 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 16959 |
+
use_torch_fsdp2 ................................. False
|
| 16960 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 16961 |
+
use_tp_pp_dp_mapping ............................ False
|
| 16962 |
+
v_head_dim ...................................... 128
|
| 16963 |
+
valid_data_path ................................. None
|
| 16964 |
+
variable_seq_lengths ............................ False
|
| 16965 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 16966 |
+
vision_backbone_type ............................ vit
|
| 16967 |
+
vision_pretraining .............................. False
|
| 16968 |
+
vision_pretraining_type ......................... classify
|
| 16969 |
+
vocab_extra_ids ................................. 0
|
| 16970 |
+
vocab_file ...................................... vocab.json
|
| 16971 |
+
vocab_size ...................................... None
|
| 16972 |
+
wandb_exp_name ..................................
|
| 16973 |
+
wandb_project ...................................
|
| 16974 |
+
wandb_save_dir ..................................
|
| 16975 |
+
weight_decay .................................... 0.1
|
| 16976 |
+
weight_decay_incr_style ......................... constant
|
| 16977 |
+
wgrad_deferral_limit ............................ 0
|
| 16978 |
+
world_size ...................................... 8
|
| 16979 |
+
yaml_cfg ........................................ None
|
| 16980 |
+
-------------------- end of arguments ---------------------
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
> building GPT2BPETokenizer tokenizer ...
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
INFO:megatron.training.initialize:Setting logging level to 0
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
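A minimal sketch (not part of the log output) of the padding arithmetic behind the line above, assuming the padded vocab must be divisible by make_vocab_size_divisible_by (128) times tensor_model_parallel_size (8):

# Hypothetical helper illustrating the assumed padding rule; not taken from the Megatron sources.
def padded_vocab_size(orig_vocab_size, divisible_by=128, tp_size=8):
    multiple = divisible_by * tp_size              # 128 * 8 = 1024
    return ((orig_vocab_size + multiple - 1) // multiple) * multiple

print(padded_vocab_size(50257))                    # 51200, i.e. 51200 - 50257 = 943 dummy tokens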
INFO:megatron.training.initialize:Setting logging level to 0
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
> initializing torch distributed ...
> initialized tensor model parallel with size 8
> initialized pipeline model parallel with size 1
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
INFO:megatron.training.initialize:Setting logging level to 0
make: Nothing to be done for 'default'.
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.050 seconds
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
> compiling and loading fused kernels ...
>>> done with compiling and loading fused kernels. Compilation time: 2.633 seconds
time to initialize megatron (seconds): 8.076
[after megatron is initialized] datetime: 2025-06-21 21:34:12
building GPT model ...
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
>>> embedding
>>> decoder
>>> output_layer
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
Params for bucket 1 (607188480 elements, 607188480 padded size):
module.decoder.layers.1.mlp.linear_fc2.weight
module.decoder.layers.1.self_attention.linear_proj.bias
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
module.decoder.layers.0.mlp.linear_fc2.weight
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
module.decoder.final_layernorm.bias
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.1.self_attention.linear_qkv.bias
module.decoder.layers.0.mlp.linear_fc2.bias
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
module.decoder.layers.0.self_attention.linear_qkv.bias
module.decoder.layers.1.mlp.linear_fc1.weight
module.decoder.layers.0.mlp.linear_fc1.weight
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
module.decoder.layers.1.mlp.linear_fc2.bias
module.decoder.final_layernorm.weight
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
module.decoder.layers.0.self_attention.linear_proj.weight
module.embedding.position_embeddings.weight
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
module.decoder.layers.0.self_attention.linear_proj.bias
module.decoder.layers.1.mlp.linear_fc1.bias
module.decoder.layers.0.mlp.linear_fc1.bias
module.decoder.layers.1.self_attention.linear_qkv.weight
module.decoder.layers.1.self_attention.linear_proj.weight
module.decoder.layers.0.self_attention.linear_qkv.weight
module.embedding.word_embeddings.weight
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x15055792a330>, config_logger_dir='')
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
will not load any checkpoints and will start from random
(min, max) time across ranks (ms):
load-checkpoint ................................: (2.72, 3.24)
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:18
> building train, validation, and test datasets ...
> datasets target sizes (minimum size):
train: 10
validation: 1
test: 1
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
> building train, validation, and test datasets for GPT ...
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=131072, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x150557fdc740>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005677 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001687 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001423 seconds
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
> finished creating GPT datasets ...
[after dataloaders are built] datetime: 2025-06-21 21:34:18
done with setup ...
(min, max) time across ranks (ms):
model-and-optimizer-setup ......................: (5741.08, 5741.67)
train/valid/test-data-iterators-setup ..........: (16.69, 107.18)
training ...
Setting rerun_state_machine.current_iteration to 0...
[before the start of training step] datetime: 2025-06-21 21:34:18
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17113 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17114 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17115 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17116 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17117 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17118 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17119 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17120 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17121 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17122 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17123 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17124 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17125 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 17126 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 17127 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 133.22 GiB is free. Including non-PyTorch memory, this process has 6.59 GiB memory in use. Of the allocated memory 4.56 GiB is allocated by PyTorch, and 583.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
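A minimal sketch (not part of the log output) of why the attention-mask allocation in the tracebacks above exhausts memory, assuming the torch.ones mask is a dense float32 tensor of shape [batch, 1, seq, seq]: its size grows quadratically with seq_length, so at the seq_length of 131072 used here a single sample already needs 64 GiB.

# Back-of-envelope only; assumes a dense float32 mask of shape [batch, 1, seq, seq].
def dense_mask_gib(batch, seq_len, bytes_per_elem=4):
    return batch * seq_len * seq_len * bytes_per_elem / 2**30

print(dense_mask_gib(1, 131072))   # 64.0 GiB per sample at seq_length 131072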
attnserver.run_attnserver.slurm.sh.343211.err.log
CHANGED
|
@@ -7918,3 +7918,318 @@ W0621 21:33:43.995000 2320490 site-packages/torch/distributed/run.py:766]
|
|
| 7918 |
W0621 21:33:43.995000 2320490 site-packages/torch/distributed/run.py:766] *****************************************
|
| 7919 |
W0621 21:33:43.995000 2320490 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 7920 |
W0621 21:33:43.995000 2320490 site-packages/torch/distributed/run.py:766] *****************************************
|
| 7921 |
+
[rank5]:[W621 21:34:05.783419290 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7922 |
+
[rank0]:[W621 21:34:05.812412856 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7923 |
+
[rank3]:[W621 21:34:05.820758030 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7924 |
+
[rank4]:[W621 21:34:05.821116799 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7925 |
+
[rank2]:[W621 21:34:05.824468374 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7926 |
+
[rank7]:[W621 21:34:05.824980913 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7927 |
+
[rank6]:[W621 21:34:05.825012686 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7928 |
+
[rank1]:[W621 21:34:05.826235847 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 7929 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7930 |
+
warnings.warn(
|
| 7931 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7932 |
+
warnings.warn(
|
| 7933 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7934 |
+
warnings.warn(
|
| 7935 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7936 |
+
warnings.warn(
|
| 7937 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7938 |
+
warnings.warn(
|
| 7939 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7940 |
+
warnings.warn(
|
| 7941 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7942 |
+
warnings.warn(
|
| 7943 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 7944 |
+
warnings.warn(
|
| 7945 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7946 |
+
warnings.warn(
|
| 7947 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7948 |
+
warnings.warn(
|
| 7949 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7950 |
+
warnings.warn(
|
| 7951 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7952 |
+
warnings.warn(
|
| 7953 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7954 |
+
warnings.warn(
|
| 7955 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7956 |
+
warnings.warn(
|
| 7957 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7958 |
+
warnings.warn(
|
| 7959 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 7960 |
+
warnings.warn(
|
| 7961 |
+
[rank5]: Traceback (most recent call last):
|
| 7962 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 7963 |
+
[rank5]: pretrain(
|
| 7964 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 7965 |
+
[rank5]: iteration, num_floating_point_operations_so_far = train(
|
| 7966 |
+
[rank5]: ^^^^^^
|
| 7967 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 7968 |
+
[rank5]: ) = train_step(
|
| 7969 |
+
[rank5]: ^^^^^^^^^^^
|
| 7970 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 7971 |
+
[rank5]: losses_reduced = forward_backward_func(
|
| 7972 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 7973 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 7974 |
+
[rank5]: output_tensor, num_tokens = forward_step(
|
| 7975 |
+
[rank5]: ^^^^^^^^^^^^^
|
| 7976 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 7977 |
+
[rank5]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 7978 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 7979 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 7980 |
+
[rank5]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 7981 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 7982 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 7983 |
+
[rank5]: batch = next(global_batches)
|
| 7984 |
+
[rank5]: ^^^^^^^^^^^^^^^^^^^^
|
| 7985 |
+
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 7986 |
+
[rank5]: attention_mask = torch.ones(
|
| 7987 |
+
[rank5]: ^^^^^^^^^^^
|
| 7988 |
+
[rank5]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 7989 |
+
[rank1]: Traceback (most recent call last):
|
| 7990 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 7991 |
+
[rank1]: pretrain(
|
| 7992 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 7993 |
+
[rank1]: iteration, num_floating_point_operations_so_far = train(
|
| 7994 |
+
[rank1]: ^^^^^^
|
| 7995 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 7996 |
+
[rank1]: ) = train_step(
|
| 7997 |
+
[rank1]: ^^^^^^^^^^^
|
| 7998 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 7999 |
+
[rank1]: losses_reduced = forward_backward_func(
|
| 8000 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8001 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8002 |
+
[rank1]: output_tensor, num_tokens = forward_step(
|
| 8003 |
+
[rank1]: ^^^^^^^^^^^^^
|
| 8004 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8005 |
+
[rank1]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8006 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8007 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8008 |
+
[rank1]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8009 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8010 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8011 |
+
[rank1]: batch = next(global_batches)
|
| 8012 |
+
[rank1]: ^^^^^^^^^^^^^^^^^^^^
|
| 8013 |
+
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8014 |
+
[rank1]: attention_mask = torch.ones(
|
| 8015 |
+
[rank1]: ^^^^^^^^^^^
|
| 8016 |
+
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8017 |
+
[rank7]: Traceback (most recent call last):
|
| 8018 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8019 |
+
[rank7]: pretrain(
|
| 8020 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8021 |
+
[rank7]: iteration, num_floating_point_operations_so_far = train(
|
| 8022 |
+
[rank7]: ^^^^^^
|
| 8023 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8024 |
+
[rank7]: ) = train_step(
|
| 8025 |
+
[rank7]: ^^^^^^^^^^^
|
| 8026 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8027 |
+
[rank7]: losses_reduced = forward_backward_func(
|
| 8028 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8029 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8030 |
+
[rank7]: output_tensor, num_tokens = forward_step(
|
| 8031 |
+
[rank7]: ^^^^^^^^^^^^^
|
| 8032 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8033 |
+
[rank7]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8034 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8035 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8036 |
+
[rank7]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8037 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8038 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8039 |
+
[rank7]: batch = next(global_batches)
|
| 8040 |
+
[rank7]: ^^^^^^^^^^^^^^^^^^^^
|
| 8041 |
+
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8042 |
+
[rank7]: attention_mask = torch.ones(
|
| 8043 |
+
[rank7]: ^^^^^^^^^^^
|
| 8044 |
+
[rank7]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8045 |
+
[rank2]: Traceback (most recent call last):
|
| 8046 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8047 |
+
[rank2]: pretrain(
|
| 8048 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8049 |
+
[rank2]: iteration, num_floating_point_operations_so_far = train(
|
| 8050 |
+
[rank2]: ^^^^^^
|
| 8051 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8052 |
+
[rank2]: ) = train_step(
|
| 8053 |
+
[rank2]: ^^^^^^^^^^^
|
| 8054 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8055 |
+
[rank2]: losses_reduced = forward_backward_func(
|
| 8056 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8057 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8058 |
+
[rank2]: output_tensor, num_tokens = forward_step(
|
| 8059 |
+
[rank2]: ^^^^^^^^^^^^^
|
| 8060 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8061 |
+
[rank2]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8062 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8063 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8064 |
+
[rank2]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8065 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8066 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8067 |
+
[rank2]: batch = next(global_batches)
|
| 8068 |
+
[rank2]: ^^^^^^^^^^^^^^^^^^^^
|
| 8069 |
+
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8070 |
+
[rank2]: attention_mask = torch.ones(
|
| 8071 |
+
[rank2]: ^^^^^^^^^^^
|
| 8072 |
+
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8073 |
+
[rank0]: Traceback (most recent call last):
|
| 8074 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8075 |
+
[rank0]: pretrain(
|
| 8076 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8077 |
+
[rank0]: iteration, num_floating_point_operations_so_far = train(
|
| 8078 |
+
[rank0]: ^^^^^^
|
| 8079 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8080 |
+
[rank0]: ) = train_step(
|
| 8081 |
+
[rank0]: ^^^^^^^^^^^
|
| 8082 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8083 |
+
[rank0]: losses_reduced = forward_backward_func(
|
| 8084 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8085 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8086 |
+
[rank0]: output_tensor, num_tokens = forward_step(
|
| 8087 |
+
[rank0]: ^^^^^^^^^^^^^
|
| 8088 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8089 |
+
[rank0]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8090 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8091 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8092 |
+
[rank0]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8093 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8094 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8095 |
+
[rank0]: batch = next(global_batches)
|
| 8096 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^
|
| 8097 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8098 |
+
[rank0]: attention_mask = torch.ones(
|
| 8099 |
+
[rank0]: ^^^^^^^^^^^
|
| 8100 |
+
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8101 |
+
[rank4]: Traceback (most recent call last):
|
| 8102 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8103 |
+
[rank4]: pretrain(
|
| 8104 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8105 |
+
[rank4]: iteration, num_floating_point_operations_so_far = train(
|
| 8106 |
+
[rank4]: ^^^^^^
|
| 8107 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8108 |
+
[rank4]: ) = train_step(
|
| 8109 |
+
[rank4]: ^^^^^^^^^^^
|
| 8110 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8111 |
+
[rank4]: losses_reduced = forward_backward_func(
|
| 8112 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8113 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8114 |
+
[rank4]: output_tensor, num_tokens = forward_step(
|
| 8115 |
+
[rank4]: ^^^^^^^^^^^^^
|
| 8116 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8117 |
+
[rank4]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8118 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8119 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8120 |
+
[rank4]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8121 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8122 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8123 |
+
[rank4]: batch = next(global_batches)
|
| 8124 |
+
[rank4]: ^^^^^^^^^^^^^^^^^^^^
|
| 8125 |
+
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8126 |
+
[rank4]: attention_mask = torch.ones(
|
| 8127 |
+
[rank4]: ^^^^^^^^^^^
|
| 8128 |
+
[rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8129 |
+
[rank3]: Traceback (most recent call last):
|
| 8130 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8131 |
+
[rank3]: pretrain(
|
| 8132 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8133 |
+
[rank3]: iteration, num_floating_point_operations_so_far = train(
|
| 8134 |
+
[rank3]: ^^^^^^
|
| 8135 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8136 |
+
[rank3]: ) = train_step(
|
| 8137 |
+
[rank3]: ^^^^^^^^^^^
|
| 8138 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8139 |
+
[rank3]: losses_reduced = forward_backward_func(
|
| 8140 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8141 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8142 |
+
[rank3]: output_tensor, num_tokens = forward_step(
|
| 8143 |
+
[rank3]: ^^^^^^^^^^^^^
|
| 8144 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8145 |
+
[rank3]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8146 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8147 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8148 |
+
[rank3]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8149 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8150 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8151 |
+
[rank3]: batch = next(global_batches)
|
| 8152 |
+
[rank3]: ^^^^^^^^^^^^^^^^^^^^
|
| 8153 |
+
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8154 |
+
[rank3]: attention_mask = torch.ones(
|
| 8155 |
+
[rank3]: ^^^^^^^^^^^
|
| 8156 |
+
[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8157 |
+
[rank6]: Traceback (most recent call last):
|
| 8158 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 8159 |
+
[rank6]: pretrain(
|
| 8160 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
|
| 8161 |
+
[rank6]: iteration, num_floating_point_operations_so_far = train(
|
| 8162 |
+
[rank6]: ^^^^^^
|
| 8163 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
|
| 8164 |
+
[rank6]: ) = train_step(
|
| 8165 |
+
[rank6]: ^^^^^^^^^^^
|
| 8166 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
|
| 8167 |
+
[rank6]: losses_reduced = forward_backward_func(
|
| 8168 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^
|
| 8169 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
|
| 8170 |
+
[rank6]: output_tensor, num_tokens = forward_step(
|
| 8171 |
+
[rank6]: ^^^^^^^^^^^^^
|
| 8172 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
|
| 8173 |
+
[rank6]: output_tensor, loss_func = forward_step_func(data_iterator, model)
|
| 8174 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8175 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
|
| 8176 |
+
[rank6]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
|
| 8177 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8178 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
|
| 8179 |
+
[rank6]: batch = next(global_batches)
|
| 8180 |
+
[rank6]: ^^^^^^^^^^^^^^^^^^^^
|
| 8181 |
+
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
|
| 8182 |
+
[rank6]: attention_mask = torch.ones(
|
| 8183 |
+
[rank6]: ^^^^^^^^^^^
|
| 8184 |
+
[rank6]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 8185 |
+
[rank1]:[W621 21:34:19.550684742 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8186 |
+
[rank5]:[W621 21:34:19.574099654 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8187 |
+
[rank3]:[W621 21:34:19.766926993 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8188 |
+
[rank2]:[W621 21:34:19.818030378 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8189 |
+
[rank7]:[W621 21:34:19.837059068 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8190 |
+
[rank6]:[W621 21:34:19.883488558 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8191 |
+
[rank4]:[W621 21:34:19.890447594 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 8192 |
+
W0621 21:34:20.496000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320578 closing signal SIGTERM
|
| 8193 |
+
W0621 21:34:20.499000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320580 closing signal SIGTERM
|
| 8194 |
+
W0621 21:34:20.499000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320581 closing signal SIGTERM
|
| 8195 |
+
W0621 21:34:20.499000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320582 closing signal SIGTERM
|
| 8196 |
+
W0621 21:34:20.500000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320583 closing signal SIGTERM
|
| 8197 |
+
W0621 21:34:20.500000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320584 closing signal SIGTERM
|
| 8198 |
+
W0621 21:34:20.500000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2320585 closing signal SIGTERM
|
| 8199 |
+
E0621 21:34:20.928000 2320490 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 1 (pid: 2320579) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 8200 |
+
Traceback (most recent call last):
|
| 8201 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 8202 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 8203 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 8204 |
+
main()
|
| 8205 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 8206 |
+
return arg(*args, **kwargs)
|
| 8207 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 8208 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 8209 |
+
launch(args)
|
| 8210 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 8211 |
+
run(args)
|
| 8212 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 8213 |
+
elastic_launch(
|
| 8214 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 8215 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 8216 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 8217 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
|
| 8218 |
+
raise ChildFailedError(
|
| 8219 |
+
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
|
| 8220 |
+
============================================================
|
| 8221 |
+
./pretrain_gpt_profile.py FAILED
|
| 8222 |
+
------------------------------------------------------------
|
| 8223 |
+
Failures:
|
| 8224 |
+
<NO_OTHER_FAILURES>
|
| 8225 |
+
------------------------------------------------------------
|
| 8226 |
+
Root Cause (first observed failure):
|
| 8227 |
+
[0]:
|
| 8228 |
+
time : 2025-06-21_21:34:20
|
| 8229 |
+
host : fs-mbz-gpu-791
|
| 8230 |
+
rank : 1 (local_rank: 1)
|
| 8231 |
+
exitcode : 1 (pid: 2320579)
|
| 8232 |
+
error_file: <N/A>
|
| 8233 |
+
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
|
| 8234 |
+
============================================================
|
| 8235 |
+
+ set +x
|
attnserver.run_attnserver.slurm.sh.343211.out.log
CHANGED
|
@@ -10592,3 +10592,657 @@ CHECKPOINT_PATH: gpt-checkpoint
|
|
| 10592 |
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 10593 |
--------------------------------
|
| 10594 |
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 10595 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 10596 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 10597 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 1, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 10598 |
+
Number of virtual stages per pipeline stage: None
|
| 10599 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 10600 |
+
using torch.float16 for parameters ...
|
| 10601 |
+
------------------------ arguments ------------------------
|
| 10602 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 10603 |
+
account_for_loss_in_pipeline_split .............. False
|
| 10604 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 10605 |
+
adam_beta1 ...................................... 0.9
|
| 10606 |
+
adam_beta2 ...................................... 0.999
|
| 10607 |
+
adam_eps ........................................ 1e-08
|
| 10608 |
+
add_bias_linear ................................. True
|
| 10609 |
+
add_position_embedding .......................... True
|
| 10610 |
+
add_qkv_bias .................................... True
|
| 10611 |
+
adlr_autoresume ................................. False
|
| 10612 |
+
adlr_autoresume_interval ........................ 1000
|
| 10613 |
+
align_grad_reduce ............................... True
|
| 10614 |
+
align_param_gather .............................. False
|
| 10615 |
+
app_tag_run_name ................................ None
|
| 10616 |
+
app_tag_run_version ............................. 0.0.0
|
| 10617 |
+
apply_layernorm_1p .............................. False
|
| 10618 |
+
apply_query_key_layer_scaling ................... False
|
| 10619 |
+
apply_residual_connection_post_layernorm ........ False
|
| 10620 |
+
apply_rope_fusion ............................... False
|
| 10621 |
+
async_save ...................................... None
|
| 10622 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 10623 |
+
attention_backend ............................... AttnBackend.auto
|
| 10624 |
+
attention_dropout ............................... 0.1
|
| 10625 |
+
attention_softmax_in_fp32 ....................... False
|
| 10626 |
+
auto_detect_ckpt_format ......................... False
|
| 10627 |
+
barrier_with_L1_time ............................ True
|
| 10628 |
+
bert_binary_head ................................ True
|
| 10629 |
+
bert_embedder_type .............................. megatron
|
| 10630 |
+
bert_load ....................................... None
|
| 10631 |
+
bf16 ............................................ False
|
| 10632 |
+
bias_dropout_fusion ............................. True
|
| 10633 |
+
bias_gelu_fusion ................................ True
|
| 10634 |
+
bias_swiglu_fusion .............................. True
|
| 10635 |
+
biencoder_projection_dim ........................ 0
|
| 10636 |
+
biencoder_shared_query_context_model ............ False
|
| 10637 |
+
block_data_path ................................. None
|
| 10638 |
+
calc_ft_timeouts ................................ False
|
| 10639 |
+
calculate_per_token_loss ........................ False
|
| 10640 |
+
check_for_large_grads ........................... False
|
| 10641 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 10642 |
+
check_for_spiky_loss ............................ False
|
| 10643 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 10644 |
+
ckpt_assume_constant_structure .................. False
|
| 10645 |
+
ckpt_convert_format ............................. None
|
| 10646 |
+
ckpt_convert_save ............................... None
|
| 10647 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 10648 |
+
ckpt_format ..................................... torch_dist
|
| 10649 |
+
ckpt_fully_parallel_load ........................ False
|
| 10650 |
+
ckpt_fully_parallel_save ........................ True
|
| 10651 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 10652 |
+
ckpt_step ....................................... None
|
| 10653 |
+
classes_fraction ................................ 1.0
|
| 10654 |
+
clip_grad ....................................... 1.0
|
| 10655 |
+
clone_scatter_output_in_embedding ............... True
|
| 10656 |
+
config_logger_dir ...............................
|
| 10657 |
+
consumed_train_samples .......................... 0
|
| 10658 |
+
consumed_valid_samples .......................... 0
|
| 10659 |
+
context_parallel_size ........................... 1
|
| 10660 |
+
cp_comm_type .................................... ['p2p']
|
| 10661 |
+
create_attention_mask_in_dataloader ............. True
|
| 10662 |
+
cross_entropy_fusion_impl ....................... native
|
| 10663 |
+
cross_entropy_loss_fusion ....................... False
|
| 10664 |
+
cuda_graph_scope ................................ full
|
| 10665 |
+
cuda_graph_warmup_steps ......................... 3
|
| 10666 |
+
data_args_path .................................. None
|
| 10667 |
+
data_cache_path ................................. None
|
| 10668 |
+
data_parallel_random_init ....................... False
|
| 10669 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 10670 |
+
data_parallel_size .............................. 1
|
| 10671 |
+
data_path ....................................... None
|
| 10672 |
+
data_per_class_fraction ......................... 1.0
|
| 10673 |
+
data_sharding ................................... True
|
| 10674 |
+
dataloader_type ................................. single
|
| 10675 |
+
ddp_average_in_collective ....................... False
|
| 10676 |
+
ddp_bucket_size ................................. None
|
| 10677 |
+
ddp_num_buckets ................................. None
|
| 10678 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 10679 |
+
decoder_first_pipeline_num_layers ............... None
|
| 10680 |
+
decoder_last_pipeline_num_layers ................ None
|
| 10681 |
+
decoder_num_layers .............................. None
|
| 10682 |
+
decoder_seq_length .............................. None
|
| 10683 |
+
decoupled_lr .................................... None
|
| 10684 |
+
decoupled_min_lr ................................ None
|
| 10685 |
+
decrease_batch_size_if_needed ................... False
|
| 10686 |
+
defer_embedding_wgrad_compute ................... False
|
| 10687 |
+
deprecated_use_mcore_models ..................... False
|
| 10688 |
+
deterministic_mode .............................. False
|
| 10689 |
+
dino_bottleneck_size ............................ 256
|
| 10690 |
+
dino_freeze_last_layer .......................... 1
|
| 10691 |
+
dino_head_hidden_size ........................... 2048
|
| 10692 |
+
dino_local_crops_number ......................... 10
|
| 10693 |
+
dino_local_img_size ............................. 96
|
| 10694 |
+
dino_norm_last_layer ............................ False
|
| 10695 |
+
dino_teacher_temp ............................... 0.07
|
| 10696 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 10697 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 10698 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 10699 |
+
disable_mamba_mem_eff_path ...................... False
|
| 10700 |
+
disable_straggler_on_startup .................... False
|
| 10701 |
+
dist_ckpt_format_deprecated ..................... None
|
| 10702 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 10703 |
+
distribute_saved_activations .................... False
|
| 10704 |
+
distributed_backend ............................. nccl
|
| 10705 |
+
distributed_timeout_minutes ..................... 10
|
| 10706 |
+
embedding_path .................................. None
|
| 10707 |
+
empty_unused_memory_level ....................... 0
|
| 10708 |
+
enable_cuda_graph ............................... False
|
| 10709 |
+
enable_ft_package ............................... False
|
| 10710 |
+
enable_gloo_process_groups ...................... True
|
| 10711 |
+
enable_msc ...................................... True
|
| 10712 |
+
enable_one_logger ............................... True
|
| 10713 |
+
encoder_num_layers .............................. 2
|
| 10714 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 10715 |
+
encoder_seq_length .............................. 131072
|
| 10716 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 10717 |
+
end_weight_decay ................................ 0.1
|
| 10718 |
+
eod_mask_loss ................................... False
|
| 10719 |
+
error_injection_rate ............................ 0
|
| 10720 |
+
error_injection_type ............................ transient_error
|
| 10721 |
+
eval_interval ................................... 16
|
| 10722 |
+
eval_iters ...................................... 1
|
| 10723 |
+
evidence_data_path .............................. None
|
| 10724 |
+
exit_duration_in_mins ........................... None
|
| 10725 |
+
exit_interval ................................... None
|
| 10726 |
+
exit_on_missing_checkpoint ...................... False
|
| 10727 |
+
exit_signal_handler ............................. False
|
| 10728 |
+
exp_avg_dtype ................................... torch.float32
|
| 10729 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 10730 |
+
expert_model_parallel_size ...................... 1
|
| 10731 |
+
expert_tensor_parallel_size ..................... 8
|
| 10732 |
+
external_cuda_graph ............................. False
|
| 10733 |
+
ffn_hidden_size ................................. 16384
|
| 10734 |
+
finetune ........................................ False
|
| 10735 |
+
first_last_layers_bf16 .......................... False
|
| 10736 |
+
flash_decode .................................... False
|
| 10737 |
+
fp16 ............................................ True
|
| 10738 |
+
fp16_lm_cross_entropy ........................... False
|
| 10739 |
+
fp32_residual_connection ........................ False
|
| 10740 |
+
fp8 ............................................. None
|
| 10741 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 10742 |
+
fp8_amax_history_len ............................ 1
|
| 10743 |
+
fp8_interval .................................... 1
|
| 10744 |
+
fp8_margin ...................................... 0
|
| 10745 |
+
fp8_param_gather ................................ False
|
| 10746 |
+
fp8_recipe ...................................... delayed
|
| 10747 |
+
fp8_wgrad ....................................... True
|
| 10748 |
+
fsdp_double_buffer .............................. False
|
| 10749 |
+
global_batch_size ............................... 1
|
| 10750 |
+
grad_reduce_in_bf16 ............................. False
|
| 10751 |
+
gradient_accumulation_fusion .................... True
|
| 10752 |
+
gradient_reduce_div_fusion ...................... True
|
| 10753 |
+
group_query_attention ........................... True
|
| 10754 |
+
head_lr_mult .................................... 1.0
|
| 10755 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 10756 |
+
heterogeneous_layers_config_path ................ None
|
| 10757 |
+
hidden_dropout .................................. 0.1
|
| 10758 |
+
hidden_size ..................................... 4096
|
| 10759 |
+
hierarchical_context_parallel_sizes ............. None
|
| 10760 |
+
high_priority_stream_groups ..................... []
|
| 10761 |
+
hybrid_attention_ratio .......................... 0.0
|
| 10762 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 10763 |
+
hybrid_override_pattern ......................... None
|
| 10764 |
+
hysteresis ...................................... 2
|
| 10765 |
+
ict_head_size ................................... None
|
| 10766 |
+
ict_load ........................................ None
|
| 10767 |
+
img_h ........................................... 224
|
| 10768 |
+
img_w ........................................... 224
|
| 10769 |
+
indexer_batch_size .............................. 128
|
| 10770 |
+
indexer_log_interval ............................ 1000
|
| 10771 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 10772 |
+
inference_dynamic_batching ...................... False
|
| 10773 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 10774 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 10775 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 10776 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 10777 |
+
inference_dynamic_batching_max_requests_override None
|
| 10778 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 10779 |
+
inference_max_batch_size ........................ 8
|
| 10780 |
+
inference_max_seq_length ........................ 2560
|
| 10781 |
+
inference_rng_tracker ........................... False
|
| 10782 |
+
init_method_std ................................. 0.02
|
| 10783 |
+
init_method_xavier_uniform ...................... False
|
| 10784 |
+
init_model_with_meta_device ..................... False
|
| 10785 |
+
initial_loss_scale .............................. 4294967296
|
| 10786 |
+
inprocess_active_world_size ..................... 8
|
| 10787 |
+
inprocess_barrier_timeout ....................... 120
|
| 10788 |
+
inprocess_completion_timeout .................... 120
|
| 10789 |
+
inprocess_empty_cuda_cache ...................... False
|
| 10790 |
+
inprocess_granularity ........................... node
|
| 10791 |
+
inprocess_hard_timeout .......................... 90
|
| 10792 |
+
inprocess_heartbeat_interval .................... 30
|
| 10793 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 10794 |
+
inprocess_last_call_wait ........................ 1
|
| 10795 |
+
inprocess_max_iterations ........................ None
|
| 10796 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 10797 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 10798 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 10799 |
+
inprocess_restart ............................... False
|
| 10800 |
+
inprocess_soft_timeout .......................... 60
|
| 10801 |
+
inprocess_termination_grace_time ................ 1
|
| 10802 |
+
is_hybrid_model ................................. False
|
| 10803 |
+
iter_per_epoch .................................. 1250
|
| 10804 |
+
iterations_to_skip .............................. []
|
| 10805 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 10806 |
+
kv_channels ..................................... 64
|
| 10807 |
+
kv_lora_rank .................................... 32
|
| 10808 |
+
lazy_mpu_init ................................... None
|
| 10809 |
+
load ............................................ gpt-checkpoint
|
| 10810 |
+
load_model_opt_format ........................... False
|
| 10811 |
+
local_rank ...................................... 0
|
| 10812 |
+
log_interval .................................... 1
|
| 10813 |
+
log_loss_scale_to_tensorboard ................... True
|
| 10814 |
+
log_memory_to_tensorboard ....................... False
|
| 10815 |
+
log_num_zeros_in_grad ........................... False
|
| 10816 |
+
log_params_norm ................................. False
|
| 10817 |
+
log_progress .................................... False
|
| 10818 |
+
log_straggler ................................... False
|
| 10819 |
+
log_throughput .................................. False
|
| 10820 |
+
log_timers_to_tensorboard ....................... False
|
| 10821 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 10822 |
+
log_world_size_to_tensorboard ................... False
|
| 10823 |
+
logging_level ................................... 0
|
| 10824 |
+
loss_scale ...................................... None
|
| 10825 |
+
loss_scale_window ............................... 1000
|
| 10826 |
+
lr .............................................. 0.0005
|
| 10827 |
+
lr_decay_iters .................................. 150000
|
| 10828 |
+
lr_decay_samples ................................ None
|
| 10829 |
+
lr_decay_style .................................. cosine
|
| 10830 |
+
lr_warmup_fraction .............................. None
|
| 10831 |
+
lr_warmup_init .................................. 0.0
|
| 10832 |
+
lr_warmup_iters ................................. 2
|
| 10833 |
+
lr_warmup_samples ............................... 0
|
| 10834 |
+
lr_wsd_decay_iters .............................. None
|
| 10835 |
+
lr_wsd_decay_samples ............................ None
|
| 10836 |
+
lr_wsd_decay_style .............................. exponential
|
| 10837 |
+
main_grads_dtype ................................ torch.float32
|
| 10838 |
+
main_params_dtype ............................... torch.float32
|
| 10839 |
+
make_vocab_size_divisible_by .................... 128
|
| 10840 |
+
mamba_head_dim .................................. 64
|
| 10841 |
+
mamba_num_groups ................................ 8
|
| 10842 |
+
mamba_num_heads ................................. None
|
| 10843 |
+
mamba_state_dim ................................. 128
|
| 10844 |
+
manual_gc ....................................... False
|
| 10845 |
+
manual_gc_eval .................................. True
|
| 10846 |
+
manual_gc_interval .............................. 0
|
| 10847 |
+
mask_factor ..................................... 1.0
|
| 10848 |
+
mask_prob ....................................... 0.15
|
| 10849 |
+
mask_type ....................................... random
|
| 10850 |
+
masked_softmax_fusion ........................... True
|
| 10851 |
+
max_position_embeddings ......................... 131072
|
| 10852 |
+
max_tokens_to_oom ............................... 12000
|
| 10853 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 10854 |
+
merge_file ...................................... merges.txt
|
| 10855 |
+
micro_batch_size ................................ 1
|
| 10856 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 10857 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 10858 |
+
min_loss_scale .................................. 1.0
|
| 10859 |
+
min_lr .......................................... 0.0
|
| 10860 |
+
mlp_chunks_for_prefill .......................... 1
|
| 10861 |
+
mmap_bin_files .................................. True
|
| 10862 |
+
mock_data ....................................... True
|
| 10863 |
+
moe_apply_probs_on_input ........................ False
|
| 10864 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 10865 |
+
moe_enable_deepep ............................... False
|
| 10866 |
+
moe_expert_capacity_factor ...................... None
|
| 10867 |
+
moe_extended_tp ................................. False
|
| 10868 |
+
moe_ffn_hidden_size ............................. None
|
| 10869 |
+
moe_grouped_gemm ................................ False
|
| 10870 |
+
moe_input_jitter_eps ............................ None
|
| 10871 |
+
moe_layer_freq .................................. 1
|
| 10872 |
+
moe_layer_recompute ............................. False
|
| 10873 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 10874 |
+
moe_per_layer_logging ........................... False
|
| 10875 |
+
moe_permute_fusion .............................. False
|
| 10876 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 10877 |
+
moe_router_dtype ................................ None
|
| 10878 |
+
moe_router_enable_expert_bias ................... False
|
| 10879 |
+
moe_router_force_load_balancing ................. False
|
| 10880 |
+
moe_router_group_topk ........................... None
|
| 10881 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 10882 |
+
moe_router_num_groups ........................... None
|
| 10883 |
+
moe_router_padding_for_fp8 ...................... False
|
| 10884 |
+
moe_router_pre_softmax .......................... False
|
| 10885 |
+
moe_router_score_function ....................... softmax
|
| 10886 |
+
moe_router_topk ................................. 2
|
| 10887 |
+
moe_router_topk_scaling_factor .................. None
|
| 10888 |
+
moe_shared_expert_intermediate_size ............. None
|
| 10889 |
+
moe_shared_expert_overlap ....................... False
|
| 10890 |
+
moe_token_dispatcher_type ....................... allgather
|
| 10891 |
+
moe_token_drop_policy ........................... probs
|
| 10892 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 10893 |
+
moe_use_upcycling ............................... False
|
| 10894 |
+
moe_z_loss_coeff ................................ None
|
| 10895 |
+
mrope_section ................................... None
|
| 10896 |
+
mscale .......................................... 1.0
|
| 10897 |
+
mscale_all_dim .................................. 1.0
|
| 10898 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 10899 |
+
mtp_num_layers .................................. None
|
| 10900 |
+
multi_latent_attention .......................... False
|
| 10901 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 10902 |
+
nccl_communicator_config_path ................... None
|
| 10903 |
+
nccl_ub ......................................... False
|
| 10904 |
+
no_load_optim ................................... None
|
| 10905 |
+
no_load_rng ..................................... None
|
| 10906 |
+
no_persist_layer_norm ........................... False
|
| 10907 |
+
no_rope_freq .................................... None
|
| 10908 |
+
no_save_optim ................................... None
|
| 10909 |
+
no_save_rng ..................................... None
|
| 10910 |
+
non_persistent_ckpt_type ........................ None
|
| 10911 |
+
non_persistent_global_ckpt_dir .................. None
|
| 10912 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 10913 |
+
non_persistent_local_ckpt_dir ................... None
|
| 10914 |
+
non_persistent_save_interval .................... None
|
| 10915 |
+
norm_epsilon .................................... 1e-05
|
| 10916 |
+
normalization ................................... LayerNorm
|
| 10917 |
+
num_attention_heads ............................. 64
|
| 10918 |
+
num_channels .................................... 3
|
| 10919 |
+
num_classes ..................................... 1000
|
| 10920 |
+
num_dataset_builder_threads ..................... 1
|
| 10921 |
+
num_distributed_optimizer_instances ............. 1
|
| 10922 |
+
num_experts ..................................... None
|
| 10923 |
+
num_layers ...................................... 2
|
| 10924 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 10925 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 10926 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 10927 |
+
num_query_groups ................................ 16
|
| 10928 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 10929 |
+
num_workers ..................................... 2
|
| 10930 |
+
object_storage_cache_path ....................... None
|
| 10931 |
+
one_logger_async ................................ False
|
| 10932 |
+
one_logger_project .............................. megatron-lm
|
| 10933 |
+
one_logger_run_name ............................. None
|
| 10934 |
+
onnx_safe ....................................... None
|
| 10935 |
+
openai_gelu ..................................... False
|
| 10936 |
+
optimizer ....................................... adam
|
| 10937 |
+
optimizer_cpu_offload ........................... False
|
| 10938 |
+
optimizer_offload_fraction ...................... 1.0
|
| 10939 |
+
output_bert_embeddings .......................... False
|
| 10940 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 10941 |
+
overlap_grad_reduce ............................. False
|
| 10942 |
+
overlap_p2p_comm ................................ False
|
| 10943 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 10944 |
+
overlap_param_gather ............................ False
|
| 10945 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 10946 |
+
override_opt_param_scheduler .................... False
|
| 10947 |
+
params_dtype .................................... torch.float16
|
| 10948 |
+
patch_dim ....................................... 16
|
| 10949 |
+
per_split_data_args_path ........................ None
|
| 10950 |
+
perform_initialization .......................... True
|
| 10951 |
+
pin_cpu_grads ................................... True
|
| 10952 |
+
pin_cpu_params .................................. True
|
| 10953 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 10954 |
+
pipeline_model_parallel_size .................... 1
|
| 10955 |
+
pipeline_model_parallel_split_rank .............. None
|
| 10956 |
+
position_embedding_type ......................... learned_absolute
|
| 10957 |
+
pretrained_checkpoint ........................... None
|
| 10958 |
+
profile ......................................... False
|
| 10959 |
+
profile_ranks ................................... [0]
|
| 10960 |
+
profile_step_end ................................ 12
|
| 10961 |
+
profile_step_start .............................. 10
|
| 10962 |
+
q_lora_rank ..................................... None
|
| 10963 |
+
qk_head_dim ..................................... 128
|
| 10964 |
+
qk_l2_norm ...................................... False
|
| 10965 |
+
qk_layernorm .................................... False
|
| 10966 |
+
qk_pos_emb_head_dim ............................. 64
|
| 10967 |
+
query_in_block_prob ............................. 0.1
|
| 10968 |
+
rampup_batch_size ............................... None
|
| 10969 |
+
rank ............................................ 0
|
| 10970 |
+
recompute_granularity ........................... None
|
| 10971 |
+
recompute_method ................................ None
|
| 10972 |
+
recompute_modules ............................... None
|
| 10973 |
+
recompute_num_layers ............................ None
|
| 10974 |
+
record_memory_history ........................... False
|
| 10975 |
+
relative_attention_max_distance ................. 128
|
| 10976 |
+
relative_attention_num_buckets .................. 32
|
| 10977 |
+
replication ..................................... False
|
| 10978 |
+
replication_factor .............................. 2
|
| 10979 |
+
replication_jump ................................ None
|
| 10980 |
+
rerun_mode ...................................... disabled
|
| 10981 |
+
reset_attention_mask ............................ False
|
| 10982 |
+
reset_position_ids .............................. False
|
| 10983 |
+
result_rejected_tracker_filename ................ None
|
| 10984 |
+
retriever_report_topk_accuracies ................ []
|
| 10985 |
+
retriever_score_scaling ......................... False
|
| 10986 |
+
retriever_seq_length ............................ 256
|
| 10987 |
+
retro_add_retriever ............................. False
|
| 10988 |
+
retro_attention_gate ............................ 1
|
| 10989 |
+
retro_cyclic_train_iters ........................ None
|
| 10990 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 10991 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 10992 |
+
retro_encoder_layers ............................ 2
|
| 10993 |
+
retro_num_neighbors ............................. 2
|
| 10994 |
+
retro_num_retrieved_chunks ...................... 2
|
| 10995 |
+
retro_project_dir ............................... None
|
| 10996 |
+
retro_verify_neighbor_count ..................... True
|
| 10997 |
+
rope_scaling_factor ............................. 8.0
|
| 10998 |
+
rotary_base ..................................... 10000
|
| 10999 |
+
rotary_interleaved .............................. False
|
| 11000 |
+
rotary_percent .................................. 1.0
|
| 11001 |
+
rotary_scaling_factor ........................... 1.0
|
| 11002 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 11003 |
+
run_workload_inspector_server ................... False
|
| 11004 |
+
sample_rate ..................................... 1.0
|
| 11005 |
+
save ............................................ gpt-checkpoint
|
| 11006 |
+
save_interval ................................... 16
|
| 11007 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 11008 |
+
seed ............................................ 1234
|
| 11009 |
+
seq_length ...................................... 131072
|
| 11010 |
+
sequence_parallel ............................... False
|
| 11011 |
+
sgd_momentum .................................... 0.9
|
| 11012 |
+
short_seq_prob .................................. 0.1
|
| 11013 |
+
skip_train ...................................... False
|
| 11014 |
+
skipped_train_samples ........................... 0
|
| 11015 |
+
spec ............................................ None
|
| 11016 |
+
split ........................................... None
|
| 11017 |
+
squared_relu .................................... False
|
| 11018 |
+
start_weight_decay .............................. 0.1
|
| 11019 |
+
straggler_ctrlr_port ............................ 65535
|
| 11020 |
+
straggler_minmax_count .......................... 1
|
| 11021 |
+
suggested_communication_unit_size ............... None
|
| 11022 |
+
swiglu .......................................... False
|
| 11023 |
+
swin_backbone_type .............................. tiny
|
| 11024 |
+
symmetric_ar_type ............................... None
|
| 11025 |
+
te_rng_tracker .................................. False
|
| 11026 |
+
tensor_model_parallel_size ...................... 8
|
| 11027 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 11028 |
+
tensorboard_log_interval ........................ 1
|
| 11029 |
+
tensorboard_queue_size .......................... 1000
|
| 11030 |
+
test_data_path .................................. None
|
| 11031 |
+
test_mode ....................................... False
|
| 11032 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 11033 |
+
tiktoken_pattern ................................ None
|
| 11034 |
+
tiktoken_special_tokens ......................... None
|
| 11035 |
+
timing_log_level ................................ 0
|
| 11036 |
+
timing_log_option ............................... minmax
|
| 11037 |
+
titles_data_path ................................ None
|
| 11038 |
+
tokenizer_model ................................. None
|
| 11039 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 11040 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 11041 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 11042 |
+
tp_comm_bulk_dgrad .............................. True
|
| 11043 |
+
tp_comm_bulk_wgrad .............................. True
|
| 11044 |
+
tp_comm_overlap ................................. False
|
| 11045 |
+
tp_comm_overlap_ag .............................. True
|
| 11046 |
+
tp_comm_overlap_cfg ............................. None
|
| 11047 |
+
tp_comm_overlap_rs .............................. True
|
| 11048 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 11049 |
+
tp_comm_split_ag ................................ True
|
| 11050 |
+
tp_comm_split_rs ................................ True
|
| 11051 |
+
train_data_path ................................. None
|
| 11052 |
+
train_iters ..................................... 10
|
| 11053 |
+
train_samples ................................... None
|
| 11054 |
+
train_sync_interval ............................. None
|
| 11055 |
+
transformer_impl ................................ transformer_engine
|
| 11056 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 11057 |
+
untie_embeddings_and_output_weights ............. False
|
| 11058 |
+
use_checkpoint_args ............................. False
|
| 11059 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 11060 |
+
use_cpu_initialization .......................... None
|
| 11061 |
+
use_custom_fsdp ................................. False
|
| 11062 |
+
use_dist_ckpt ................................... True
|
| 11063 |
+
use_dist_ckpt_deprecated ........................ False
|
| 11064 |
+
use_distributed_optimizer ....................... False
|
| 11065 |
+
use_flash_attn .................................. False
|
| 11066 |
+
use_legacy_models ............................... False
|
| 11067 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 11068 |
+
use_one_sent_docs ............................... False
|
| 11069 |
+
use_persistent_ckpt_worker ...................... False
|
| 11070 |
+
use_precision_aware_optimizer ................... False
|
| 11071 |
+
use_pytorch_profiler ............................ False
|
| 11072 |
+
use_ring_exchange_p2p ........................... False
|
| 11073 |
+
use_rope_scaling ................................ False
|
| 11074 |
+
use_rotary_position_embeddings .................. False
|
| 11075 |
+
use_sharp ....................................... False
|
| 11076 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 11077 |
+
use_torch_fsdp2 ................................. False
|
| 11078 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 11079 |
+
use_tp_pp_dp_mapping ............................ False
|
| 11080 |
+
v_head_dim ...................................... 128
|
| 11081 |
+
valid_data_path ................................. None
|
| 11082 |
+
variable_seq_lengths ............................ False
|
| 11083 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 11084 |
+
vision_backbone_type ............................ vit
|
| 11085 |
+
vision_pretraining .............................. False
|
| 11086 |
+
vision_pretraining_type ......................... classify
|
| 11087 |
+
vocab_extra_ids ................................. 0
|
| 11088 |
+
vocab_file ...................................... vocab.json
|
| 11089 |
+
vocab_size ...................................... None
|
| 11090 |
+
wandb_exp_name ..................................
|
| 11091 |
+
wandb_project ...................................
|
| 11092 |
+
wandb_save_dir ..................................
|
| 11093 |
+
weight_decay .................................... 0.1
|
| 11094 |
+
weight_decay_incr_style ......................... constant
|
| 11095 |
+
wgrad_deferral_limit ............................ 0
|
| 11096 |
+
world_size ...................................... 8
|
| 11097 |
+
yaml_cfg ........................................ None
|
| 11098 |
+
-------------------- end of arguments ---------------------
|
| 11099 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 11100 |
+
> building GPT2BPETokenizer tokenizer ...
|
| 11101 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11102 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11103 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11104 |
+
> padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
|
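The 943 dummy tokens are consistent with Megatron's vocab-padding rule: the raw vocab is rounded up to a multiple of make-vocab-size-divisible-by (assumed to be the default 128 here; the flag is not shown in this part of the dump) times the tensor-parallel size of 8 listed in the arguments above. A minimal sketch of that arithmetic:

```python
# Sketch of Megatron's vocab padding, matching the log line above.
# Assumes the default --make-vocab-size-divisible-by of 128;
# tensor_model_parallel_size is 8 per the argument dump.
def padded_vocab_size(orig_vocab: int, divisible_by: int = 128, tp_size: int = 8) -> int:
    multiple = divisible_by * tp_size                      # 128 * 8 = 1024
    return ((orig_vocab + multiple - 1) // multiple) * multiple

print(padded_vocab_size(50257))           # 51200
print(padded_vocab_size(50257) - 50257)   # 943 dummy tokens
```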
| 11105 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11106 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 11107 |
+
> initializing torch distributed ...
|
| 11108 |
+
> initialized tensor model parallel with size 8
|
| 11109 |
+
> initialized pipeline model parallel with size 1
|
| 11110 |
+
> setting random seeds to 1234 ...
|
| 11111 |
+
> compiling dataset index builder ...
|
| 11112 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 11113 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 11114 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 11115 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11116 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 11117 |
+
make: Nothing to be done for 'default'.
|
| 11118 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 11119 |
+
>>> done with dataset index builder. Compilation time: 0.052 seconds
|
| 11120 |
+
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
|
| 11121 |
+
> compiling and loading fused kernels ...
|
| 11122 |
+
>>> done with compiling and loading fused kernels. Compilation time: 2.659 seconds
|
| 11123 |
+
time to initialize megatron (seconds): 7.469
|
| 11124 |
+
[after megatron is initialized] datetime: 2025-06-21 21:34:12
|
| 11125 |
+
building GPT model ...
|
| 11126 |
+
>>> embedding
|
| 11127 |
+
>>> decoder
|
| 11128 |
+
>>> output_layer
|
| 11129 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 607188480
|
| 11130 |
+
>>> embedding
|
| 11131 |
+
>>> decoder
|
| 11132 |
+
>>> output_layer
|
| 11133 |
+
>>> embedding
|
| 11134 |
+
>>> decoder
|
| 11135 |
+
>>> output_layer
|
| 11136 |
+
> number of parameters on (tensor, pipeline) model parallel rank (7, 0): 607188480
|
| 11137 |
+
> number of parameters on (tensor, pipeline) model parallel rank (6, 0): 607188480
|
| 11138 |
+
>>> embedding
|
| 11139 |
+
>>> decoder
|
| 11140 |
+
>>> output_layer
|
| 11141 |
+
> number of parameters on (tensor, pipeline) model parallel rank (4, 0): 607188480
|
| 11142 |
+
>>> embedding
|
| 11143 |
+
>>> decoder
|
| 11144 |
+
>>> output_layer
|
| 11145 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 607188480
|
| 11146 |
+
>>> embedding
|
| 11147 |
+
>>> decoder
|
| 11148 |
+
>>> output_layer
|
| 11149 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 607188480
|
| 11150 |
+
>>> embedding
|
| 11151 |
+
>>> decoder
|
| 11152 |
+
>>> output_layer
|
| 11153 |
+
> number of parameters on (tensor, pipeline) model parallel rank (5, 0): 607188480
|
| 11154 |
+
>>> embedding
|
| 11155 |
+
>>> decoder
|
| 11156 |
+
>>> output_layer
|
| 11157 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 607188480
|
| 11158 |
+
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
|
| 11159 |
+
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
|
| 11160 |
+
Params for bucket 1 (607188480 elements, 607188480 padded size):
|
| 11161 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
|
| 11162 |
+
module.decoder.layers.1.self_attention.linear_qkv.bias
|
| 11163 |
+
module.decoder.layers.0.mlp.linear_fc2.bias
|
| 11164 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
|
| 11165 |
+
module.decoder.layers.0.self_attention.linear_qkv.bias
|
| 11166 |
+
module.decoder.layers.1.mlp.linear_fc1.weight
|
| 11167 |
+
module.decoder.layers.0.mlp.linear_fc1.weight
|
| 11168 |
+
module.embedding.position_embeddings.weight
|
| 11169 |
+
module.embedding.word_embeddings.weight
|
| 11170 |
+
module.decoder.final_layernorm.bias
|
| 11171 |
+
module.decoder.layers.1.mlp.linear_fc2.bias
|
| 11172 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
|
| 11173 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
|
| 11174 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
|
| 11175 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
|
| 11176 |
+
module.decoder.layers.1.mlp.linear_fc1.bias
|
| 11177 |
+
module.decoder.final_layernorm.weight
|
| 11178 |
+
module.decoder.layers.0.mlp.linear_fc1.bias
|
| 11179 |
+
module.decoder.layers.1.self_attention.linear_qkv.weight
|
| 11180 |
+
module.decoder.layers.1.self_attention.linear_proj.weight
|
| 11181 |
+
module.decoder.layers.0.self_attention.linear_qkv.weight
|
| 11182 |
+
module.decoder.layers.0.self_attention.linear_proj.weight
|
| 11183 |
+
module.decoder.layers.1.mlp.linear_fc2.weight
|
| 11184 |
+
module.decoder.layers.1.self_attention.linear_proj.bias
|
| 11185 |
+
module.decoder.layers.0.self_attention.linear_proj.bias
|
| 11186 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
|
| 11187 |
+
module.decoder.layers.0.mlp.linear_fc2.weight
|
| 11188 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
|
| 11189 |
+
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x14875ac0a5a0>, config_logger_dir='')
|
| 11190 |
+
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
|
| 11191 |
+
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
|
| 11192 |
+
will not load any checkpoints and will start from random
|
| 11193 |
+
(min, max) time across ranks (ms):
|
| 11194 |
+
load-checkpoint ................................: (2.80, 3.44)
|
| 11195 |
+
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:17
|
| 11196 |
+
> building train, validation, and test datasets ...
|
| 11197 |
+
> datasets target sizes (minimum size):
|
| 11198 |
+
train: 10
|
| 11199 |
+
validation: 1
|
| 11200 |
+
test: 1
|
| 11201 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
|
| 11202 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
|
| 11203 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
|
| 11204 |
+
> building train, validation, and test datasets for GPT ...
|
| 11205 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=131072, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x14875b604500>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
|
| 11206 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
|
| 11207 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 11208 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 11209 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.006024 seconds
|
| 11210 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
|
| 11211 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 11212 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
|
| 11213 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 11214 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 11215 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001700 seconds
|
| 11216 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
|
| 11217 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 11218 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
|
| 11219 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 11220 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 11221 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001455 seconds
|
| 11222 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 520
|
| 11223 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 11224 |
+
> finished creating GPT datasets ...
|
| 11225 |
+
[after dataloaders are built] datetime: 2025-06-21 21:34:17
|
| 11226 |
+
done with setup ...
|
| 11227 |
+
(min, max) time across ranks (ms):
|
| 11228 |
+
model-and-optimizer-setup ......................: (5690.89, 5707.90)
|
| 11229 |
+
train/valid/test-data-iterators-setup ..........: (17.61, 114.45)
|
| 11230 |
+
training ...
|
| 11231 |
+
Setting rerun_state_machine.current_iteration to 0...
|
| 11232 |
+
[before the start of training step] datetime: 2025-06-21 21:34:17
|
| 11233 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11234 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11235 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11236 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11237 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11238 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11239 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11240 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11241 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11242 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11243 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11244 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11245 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11246 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 11247 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 11248 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 65536.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 132.22 GiB is free. Including non-PyTorch memory, this process has 7.59 GiB memory in use. Of the allocated memory 5.15 GiB is allocated by PyTorch, and 1007.51 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
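Every rank fails at the same allocation: setup_batches materializes a dense attention mask with torch.ones. The sketch below is not code from this job (the dtype and batch dimension are assumptions); it only estimates the size of a [batch, 1, seq, seq] mask at this run's seq_length of 131072:

```python
# Rough size estimate for a dense attention mask materialized with torch.ones,
# as in the setup_batches traceback above. Dtype and batch size are assumptions.
def dense_mask_gib(batch: int, seq_len: int, bytes_per_elem: int = 4) -> float:
    """GiB needed for a [batch, 1, seq_len, seq_len] tensor."""
    return batch * seq_len * seq_len * bytes_per_elem / 2**30

print(dense_mask_gib(1, 131072))     # ~64 GiB for a single fp32 sample
print(dense_mask_gib(1, 131072, 1))  # ~16 GiB for a bool mask
```

Even one fp32 sample costs about 64 GiB for the mask alone, and the 65536 GiB request in the log is 1024 times that, so the suggested PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True setting only addresses fragmentation and cannot make an allocation of this size fit on a 139.81 GiB device.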
attnserver.run_attnserver.slurm.sh.343213.out.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
attnserver.run_attnserver.slurm.sh.343214.err.log
CHANGED
|
@@ -52461,3 +52461,642 @@ W0621 21:32:50.558000 4104401 site-packages/torch/distributed/run.py:766] ******
|
|
| 52461 |
warnings.warn(
|
| 52462 |
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 52463 |
warnings.warn(
|
| 52464 |
+
[rank0]: Traceback (most recent call last):
|
| 52465 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 52466 |
+
[rank0]: pretrain(
|
| 52467 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
|
| 52468 |
+
[rank0]: save_checkpoint(
|
| 52469 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
|
| 52470 |
+
[rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
|
| 52471 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 52472 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 404, in save
|
| 52473 |
+
[rank0]: sharded_strategy.save(sharded_state_dict, checkpoint_dir)
|
| 52474 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/fully_parallel.py", line 95, in save
|
| 52475 |
+
[rank0]: return self.base_strategy.save(sharded_state_dict, checkpoint_dir)
|
| 52476 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 52477 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/base.py", line 228, in save
|
| 52478 |
+
[rank0]: async_calls.maybe_finalize_async_calls(blocking=True)
|
| 52479 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/async_utils.py", line 545, in maybe_finalize_async_calls
|
| 52480 |
+
[rank0]: finalize_fn()
|
| 52481 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/torch.py", line 800, in finalize_fn
|
| 52482 |
+
[rank0]: save_state_dict_async_finalize(*save_state_dict_ret)
|
| 52483 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/state_dict_saver.py", line 243, in save_state_dict_async_finalize
|
| 52484 |
+
[rank0]: storage_writer.finish(global_metadata, all_results)
|
| 52485 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/filesystem_async.py", line 483, in finish
|
| 52486 |
+
[rank0]: super().finish(metadata, results)
|
| 52487 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 697, in finish
|
| 52488 |
+
[rank0]: with self.fs.create_stream(tmp_path, "wb") as metadata_file:
|
| 52489 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 52490 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/contextlib.py", line 137, in __enter__
|
| 52491 |
+
[rank0]: return next(self.gen)
|
| 52492 |
+
[rank0]: ^^^^^^^^^^^^^^
|
| 52493 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/checkpoint/filesystem.py", line 476, in create_stream
|
| 52494 |
+
[rank0]: with path.open(mode) as stream:
|
| 52495 |
+
[rank0]: ^^^^^^^^^^^^^^^
|
| 52496 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/pathlib.py", line 1013, in open
|
| 52497 |
+
[rank0]: return io.open(self, mode, buffering, encoding, errors, newline)
|
| 52498 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 52499 |
+
[rank0]: FileNotFoundError: [Errno 2] No such file or directory: 'gpt-checkpoint/iter_0000010/.metadata.tmp'
|
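The rank-0 failure happens at checkpoint finalize: the distributed-checkpoint writer opens .metadata.tmp inside gpt-checkpoint/iter_0000010/ with mode "wb", so the FileNotFoundError above is consistent with that directory not existing (or having been removed on the shared filesystem) by the time the metadata was written. A hypothetical guard, not code from this repository, would create the iteration directory up front:

```python
# Hypothetical pre-save guard (illustrative only): ensure the iteration
# directory exists before the async checkpoint finalize writes .metadata.tmp.
import os

def ensure_checkpoint_dir(save_dir: str, iteration: int) -> str:
    ckpt_dir = os.path.join(save_dir, f"iter_{iteration:07d}")
    os.makedirs(ckpt_dir, exist_ok=True)  # no-op if the directory already exists
    return ckpt_dir

ensure_checkpoint_dir("gpt-checkpoint", 10)  # -> "gpt-checkpoint/iter_0000010"
```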
| 52500 |
+
[rank0]:[W621 21:34:46.527885836 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 52501 |
+
W0621 21:34:54.005000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985950 closing signal SIGTERM
|
| 52502 |
+
W0621 21:34:54.009000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985951 closing signal SIGTERM
|
| 52503 |
+
W0621 21:34:54.012000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985952 closing signal SIGTERM
|
| 52504 |
+
W0621 21:34:54.015000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985953 closing signal SIGTERM
|
| 52505 |
+
W0621 21:34:54.017000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985954 closing signal SIGTERM
|
| 52506 |
+
W0621 21:34:54.022000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985955 closing signal SIGTERM
|
| 52507 |
+
W0621 21:34:54.024000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1985956 closing signal SIGTERM
|
| 52508 |
+
E0621 21:34:57.069000 1985876 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 1985949) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 52509 |
+
Traceback (most recent call last):
|
| 52510 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 52511 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 52512 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 52513 |
+
main()
|
| 52514 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 52515 |
+
return arg(*args, **kwargs)
|
| 52516 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 52517 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 52518 |
+
launch(args)
|
| 52519 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 52520 |
+
run(args)
|
| 52521 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 52522 |
+
elastic_launch(
|
| 52523 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 52524 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 52525 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 52526 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
|
| 52527 |
+
raise ChildFailedError(
|
| 52528 |
+
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
|
| 52529 |
+
============================================================
|
| 52530 |
+
./pretrain_gpt_profile.py FAILED
|
| 52531 |
+
------------------------------------------------------------
|
| 52532 |
+
Failures:
|
| 52533 |
+
<NO_OTHER_FAILURES>
|
| 52534 |
+
------------------------------------------------------------
|
| 52535 |
+
Root Cause (first observed failure):
|
| 52536 |
+
[0]:
|
| 52537 |
+
time : 2025-06-21_21:34:54
|
| 52538 |
+
host : fs-mbz-gpu-404
|
| 52539 |
+
rank : 0 (local_rank: 0)
|
| 52540 |
+
exitcode : 1 (pid: 1985949)
|
| 52541 |
+
error_file: <N/A>
|
| 52542 |
+
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
|
| 52543 |
+
============================================================
|
| 52544 |
+
[rank16]:[W621 21:34:57.576537020 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=75, addr=[fs-mbz-gpu-854]:38642, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
|
| 52545 |
+
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
|
| 52546 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x148305b785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 52547 |
+
frame #1: <unknown function> + 0x5ba8afe (0x1482eee5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52548 |
+
frame #2: <unknown function> + 0x5baae40 (0x1482eee5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52549 |
+
frame #3: <unknown function> + 0x5bab74a (0x1482eee5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52550 |
+
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1482eee571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52551 |
+
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x1482ac0509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
|
| 52552 |
+
frame #6: <unknown function> + 0xd3b6d (0x14829c019b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
|
| 52553 |
+
frame #7: <unknown function> + 0x94ac3 (0x148306edfac3 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52554 |
+
frame #8: <unknown function> + 0x126850 (0x148306f71850 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52555 |
+
|
| 52556 |
+
[rank16]:[W621 21:34:57.581204488 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 16] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
|
| 52557 |
+
W0621 21:34:57.237000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531465 closing signal SIGTERM
|
| 52558 |
+
W0621 21:34:57.242000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531466 closing signal SIGTERM
|
| 52559 |
+
W0621 21:34:57.244000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531467 closing signal SIGTERM
|
| 52560 |
+
W0621 21:34:57.248000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531468 closing signal SIGTERM
|
| 52561 |
+
W0621 21:34:57.251000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531469 closing signal SIGTERM
|
| 52562 |
+
W0621 21:34:57.253000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531470 closing signal SIGTERM
|
| 52563 |
+
W0621 21:34:57.292000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531471 closing signal SIGTERM
|
| 52564 |
+
W0621 21:34:57.298000 531378 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 531472 closing signal SIGTERM
|
| 52565 |
+
[rank28]:[W621 21:34:57.665013032 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43348, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
|
| 52566 |
+
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
|
| 52567 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14f6eb1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 52568 |
+
frame #1: <unknown function> + 0x5ba8afe (0x14f6d405aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52569 |
+
frame #2: <unknown function> + 0x5baae40 (0x14f6d405ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52570 |
+
frame #3: <unknown function> + 0x5bab74a (0x14f6d405d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52571 |
+
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14f6d40571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52572 |
+
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14f6912509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
|
| 52573 |
+
frame #6: <unknown function> + 0xd3b6d (0x14f6eacf1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
|
| 52574 |
+
frame #7: <unknown function> + 0x94ac3 (0x14f6ec26fac3 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52575 |
+
frame #8: <unknown function> + 0x126850 (0x14f6ec301850 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52576 |
+
|
| 52577 |
+
[rank28]:[W621 21:34:57.669596521 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 28] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
|
| 52578 |
+
[rank12]:[W621 21:34:57.869922135 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53750, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
|
| 52579 |
+
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
|
| 52580 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x148fded785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 52581 |
+
frame #1: <unknown function> + 0x5ba8afe (0x148fc805aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52582 |
+
frame #2: <unknown function> + 0x5baae40 (0x148fc805ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52583 |
+
frame #3: <unknown function> + 0x5bab74a (0x148fc805d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52584 |
+
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x148fc80571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52585 |
+
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x148f852509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
|
| 52586 |
+
frame #6: <unknown function> + 0xd3b6d (0x148f75219b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
|
| 52587 |
+
frame #7: <unknown function> + 0x94ac3 (0x148fe0152ac3 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52588 |
+
frame #8: <unknown function> + 0x126850 (0x148fe01e4850 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52589 |
+
|
| 52590 |
+
[rank12]:[W621 21:34:57.874271528 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 12] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
|
| 52591 |
+
[rank10]:[W621 21:34:57.874073360 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53756, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
|
| 52592 |
+
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
|
| 52593 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x150b62d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 52594 |
+
frame #1: <unknown function> + 0x5ba8afe (0x150b4bc5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52595 |
+
frame #2: <unknown function> + 0x5baae40 (0x150b4bc5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52596 |
+
frame #3: <unknown function> + 0x5bab74a (0x150b4bc5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52597 |
+
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x150b4bc571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52598 |
+
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x150b08e509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x150b628f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x150b63e3bac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x150b63ecd850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank10]:[W621 21:34:57.877925995 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 10] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank14]:[W621 21:34:57.899028546 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53788, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x15301e5785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x15300785aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x15300785ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x15300785d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1530078571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x152fc4a509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x152fb4a19b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x15301f902ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x15301f994850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank14]:[W621 21:34:57.902839291 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 14] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank9]:[W621 21:34:57.899087381 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53772, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14e199d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14e18305aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14e18305ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14e18305d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14e1830571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14e1402509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14e130219b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14e19b0b7ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14e19b149850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank9]:[W621 21:34:57.902984751 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 9] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank11]:[W621 21:34:57.899081793 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53782, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x147d357785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x147d1e65aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x147d1e65ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x147d1e65d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x147d1e6571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x147cdb8509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x147d352f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x147d367a2ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x147d36834850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank11]:[W621 21:34:57.903365440 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 11] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank15]:[W621 21:34:57.899453852 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53800, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1538c23785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1538ab25aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x1538ab25ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x1538ab25d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1538ab2571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x1538684509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1538c1ef1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1538c33edac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1538c347f850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank15]:[W621 21:34:57.903456420 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 15] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank13]:[W621 21:34:57.961938570 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-455]:53718, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1477a9d785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x147792c5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x147792c5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x147792c5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x147792c571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14774fe509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1477a98f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1477aae25ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1477aaeb7850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank13]:[W621 21:34:57.965338811 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 13] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
+ set +x
[rank24]:[W621 21:34:57.912642183 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=75, addr=[fs-mbz-gpu-885]:43302, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x152edc3785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x152ec525aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x152ec525ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x152ec525d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x152ec52571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x152e824509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x152edbef1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x152edd38fac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x152edd421850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank24]:[W621 21:34:57.916656457 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 24] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank30]:[W621 21:34:57.083522265 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43354, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14afc35785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14afac85aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14afac85ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14afac85d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14afac8571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14af69a509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14af59a19b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14afc48c6ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14afc4958850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank31]:[W621 21:34:57.083747837 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43314, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14c2f99785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14c2e2c5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14c2e2c5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14c2e2c5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14c2e2c571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14c29fe509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14c28fe19b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14c2facffac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14c2fad91850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank30]:[W621 21:34:57.087213933 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 30] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank31]:[W621 21:34:57.087224855 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 31] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank29]:[W621 21:34:57.098578029 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43334, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1472ce9785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1472b785aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x1472b785ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x1472b785d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1472b78571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x147274a509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1472ce4f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1472cfa6cac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1472cfafe850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank29]:[W621 21:34:57.103477933 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 29] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
W0621 21:34:57.972000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707377 closing signal SIGTERM
W0621 21:34:57.976000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707378 closing signal SIGTERM
W0621 21:34:57.979000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707379 closing signal SIGTERM
W0621 21:34:57.983000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707380 closing signal SIGTERM
W0621 21:34:57.986000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707381 closing signal SIGTERM
W0621 21:34:58.000000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707382 closing signal SIGTERM
[rank25]:[W621 21:34:57.368804395 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43362, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1496e77785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1496d065aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x1496d065ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x1496d065d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x1496d06571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14968d8509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1496e72f1b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x1496e8869ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x1496e88fb850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank27]:[W621 21:34:57.368764620 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43346, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x145943f785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14592d25aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14592d25ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14592d25d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14592d2571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x1458ea4509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x1458da419b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14594530cac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14594539e850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank26]:[W621 21:34:57.368978570 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=95, addr=[fs-mbz-gpu-885]:43326, remote=[fs-mbz-gpu-404]:44033): failed to recv, got 0 bytes
Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:678 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x14a75a9785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14a743c5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baae40 (0x14a743c5ce40 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5bab74a (0x14a743c5d74a in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::check(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) + 0x2a9 (0x14a743c571a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::ProcessGroupNCCL::heartbeatMonitor() + 0x379 (0x14a700e509a9 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so)
frame #6: <unknown function> + 0xd3b6d (0x14a6f0e19b6d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/../lib/libstdc++.so.6)
frame #7: <unknown function> + 0x94ac3 (0x14a75bc92ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: <unknown function> + 0x126850 (0x14a75bd24850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank25]:[W621 21:34:58.373039383 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 25] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank27]:[W621 21:34:58.373055748 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 27] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
[rank26]:[W621 21:34:58.373077485 ProcessGroupNCCL.cpp:1659] [PG ID 0 PG GUID 0(default_pg) Rank 26] Failed to check the "should dump" flag on TCPStore, (maybe TCPStore server has shut down too early), with error: failed to recv, got 0 bytes
W0621 21:34:58.006000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707383 closing signal SIGTERM
W0621 21:34:58.009000 1707307 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1707384 closing signal SIGTERM
W0621 21:34:58.017000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104471 closing signal SIGTERM
W0621 21:34:58.020000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104472 closing signal SIGTERM
W0621 21:34:58.024000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104473 closing signal SIGTERM
W0621 21:34:58.027000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104474 closing signal SIGTERM
W0621 21:34:58.029000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104475 closing signal SIGTERM
W0621 21:34:58.042000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104476 closing signal SIGTERM
W0621 21:34:58.077000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104477 closing signal SIGTERM
W0621 21:34:58.080000 4104401 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 4104478 closing signal SIGTERM
[W621 21:35:00.096795323 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-854]:33410, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1536d6f785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1536bfe5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1536bfe5c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1536bfe5db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1536bfe57ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1536bfe57ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1536bfe58f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1536cf18b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1536ce8fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1536d7f76d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1536d7f76e40 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 21:35:00.737000 531378 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-854_531378_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
[W621 21:35:00.112410584 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=4, addr=[fs-mbz-gpu-854]:33410, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1536d6f785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1536bfe5aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1536bfe5c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1536bfe5db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1536bfe57ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1536bfe57ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1536bfe58f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1536cf18b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1536ce8fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1536d7f76d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1536d7f76e40 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 21:35:00.750000 531378 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-854_531378_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistNetworkError: failed to recv, got 0 bytes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
main()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
return arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
launch(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
run(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
self._state_holder.sync()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
get_response = self._backend.get_state()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
base64_state: bytes = self._call_store("get", self._key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
+ set +x
[W621 21:35:01.116819207 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-455]:54080, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1551d73785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1551c065aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1551c065c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1551c065db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1551c0657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1551c0657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1551c0658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1551cf98b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1551cf0fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1551d86f6d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1551d86f6e40 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 21:35:01.576000 1707307 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-455_1707307_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
[W621 21:35:01.132761949 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-455]:54080, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1551d73785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x1551c065aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x1551c065c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x1551c065db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x1551c0657ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x1551c0657ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x1551c0658f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x1551cf98b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x1551cf0fb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x29d90 (0x1551d86f6d90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: __libc_start_main + 0x80 (0x1551d86f6e40 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 21:35:01.589000 1707307 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-455_1707307_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistNetworkError: failed to recv, got 0 bytes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
main()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
return arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
launch(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
run(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
self._state_holder.sync()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
get_response = self._backend.get_state()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
base64_state: bytes = self._call_store("get", self._key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
+ set +x
[W621 21:35:01.356306678 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-885]:35212, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x15000f1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14fff845aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x14fff845c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x5babb3e (0x14fff845db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fff8457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fff8457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fff8458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xc0f526 (0x15000778b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x37f17d (0x150006efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #17: <unknown function> + 0x94ac3 (0x15001057dac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: <unknown function> + 0x126850 (0x15001060f850 in /lib/x86_64-linux-gnu/libc.so.6)
W0621 21:35:01.995000 4104401 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-885_4104401_0' has failed to send a keep-alive heartbeat to the rendezvous '343214' due to an error of type RendezvousConnectionError.
[W621 21:35:02.398091749 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-885]:35212, remote=[fs-mbz-gpu-404]:29500): Broken pipe
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x15000f1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5ba8afe (0x14fff845aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x5baa358 (0x14fff845c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52960 |
+
frame #3: <unknown function> + 0x5babb3e (0x14fff845db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52961 |
+
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fff8457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52962 |
+
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fff8457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52963 |
+
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fff8458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52964 |
+
frame #7: <unknown function> + 0xc0f526 (0x15000778b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 52965 |
+
frame #8: <unknown function> + 0x37f17d (0x150006efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 52966 |
+
<omitting python frames>
|
| 52967 |
+
frame #26: <unknown function> + 0x29d90 (0x150010512d90 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52968 |
+
frame #27: __libc_start_main + 0x80 (0x150010512e40 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52969 |
+
|
| 52970 |
+
W0621 21:35:02.043000 4104401 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-885_4104401_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
|
| 52971 |
+
[W621 21:35:02.412874338 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-885]:35212, remote=[fs-mbz-gpu-404]:29500): Broken pipe
|
| 52972 |
+
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
|
| 52973 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x15000f1785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 52974 |
+
frame #1: <unknown function> + 0x5ba8afe (0x14fff845aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52975 |
+
frame #2: <unknown function> + 0x5baa358 (0x14fff845c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52976 |
+
frame #3: <unknown function> + 0x5babb3e (0x14fff845db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52977 |
+
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14fff8457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52978 |
+
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14fff8457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52979 |
+
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14fff8458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 52980 |
+
frame #7: <unknown function> + 0xc0f526 (0x15000778b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 52981 |
+
frame #8: <unknown function> + 0x37f17d (0x150006efb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 52982 |
+
<omitting python frames>
|
| 52983 |
+
frame #26: <unknown function> + 0x29d90 (0x150010512d90 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52984 |
+
frame #27: __libc_start_main + 0x80 (0x150010512e40 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 52985 |
+
|
| 52986 |
+
W0621 21:35:02.055000 4104401 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-885_4104401_0' has failed to shutdown the rendezvous '343214' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.distributed.DistNetworkError: failed to recv, got 0 bytes

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
main()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
return arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
launch(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
run(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
self._state_holder.sync()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
get_response = self._backend.get_state()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
base64_state: bytes = self._call_store("get", self._key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=32768
+ PROF_CTX_LENGTH=32768
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L32768*tp4.cp8.bs2.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L32768*tp4.cp8.bs2.json' ']'
+ echo 'Running ctx_length=32768, TP_SIZE=4, CP_SIZE=8, BATCH_SIZE=2'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 2 --rdzv_id 343214 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-404:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 32768 --max-position-embeddings 32768 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 1 --rdzv_id 343214 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-404:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 32768 --max-position-embeddings 32768 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 0 --rdzv_id 343214 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-404:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 32768 --max-position-embeddings 32768 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 3 --rdzv_id 343214 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-404:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 32768 --max-position-embeddings 32768 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 21:35:05.148000 1710457 site-packages/torch/distributed/run.py:766]
W0621 21:35:05.148000 1710457 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:35:05.148000 1710457 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:35:05.148000 1710457 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 21:35:05.221000 4107424 site-packages/torch/distributed/run.py:766]
W0621 21:35:05.221000 4107424 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:35:05.221000 4107424 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:35:05.221000 4107424 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 21:35:05.226000 534532 site-packages/torch/distributed/run.py:766]
W0621 21:35:05.226000 534532 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:35:05.226000 534532 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:35:05.226000 534532 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 21:35:05.233000 1988993 site-packages/torch/distributed/run.py:766]
W0621 21:35:05.233000 1988993 site-packages/torch/distributed/run.py:766] *****************************************
W0621 21:35:05.233000 1988993 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 21:35:05.233000 1988993 site-packages/torch/distributed/run.py:766] *****************************************
attnserver.run_attnserver.slurm.sh.343214.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343219.err.log
CHANGED
@@ -6473,3 +6473,58 @@ W0621 21:33:10.159000 3529303 site-packages/torch/distributed/run.py:766] ******
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
[rank2]:[W621 21:34:40.686871147 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6477 |
+
[rank3]:[W621 21:34:40.708866684 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6478 |
+
[rank1]:[W621 21:34:40.764404297 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6479 |
+
[rank0]:[W621 21:34:41.016499701 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6480 |
+
[rank15]:[W621 21:34:41.991150391 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6481 |
+
[rank5]:[W621 21:34:41.088710900 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6482 |
+
[rank4]:[W621 21:34:41.111867956 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6483 |
+
[rank13]:[W621 21:34:41.134323163 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6484 |
+
[rank8]:[W621 21:34:41.147553016 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6485 |
+
[rank14]:[W621 21:34:41.154468886 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6486 |
+
[rank6]:[W621 21:34:41.270055586 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6487 |
+
[rank10]:[W621 21:34:41.436563221 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6488 |
+
[rank9]:[W621 21:34:41.446638720 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6489 |
+
[rank11]:[W621 21:34:41.475402518 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6490 |
+
[rank7]:[W621 21:34:41.561073154 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6491 |
+
[rank12]:[W621 21:34:41.631388371 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 6492 |
+
+ set +x
|
| 6493 |
+
+ set +x
|
| 6494 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 6495 |
+
+ export PROF_CTX_LENGTH=40960
|
| 6496 |
+
+ PROF_CTX_LENGTH=40960
|
| 6497 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp4.cp4.bs1.json'
|
| 6498 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp4.cp4.bs1.json' ']'
|
| 6499 |
+
+ echo 'Running ctx_length=40960, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=1'
|
| 6500 |
+
+ srun bash ./attnserver.sh
|
| 6501 |
+
+ which python3
|
| 6502 |
+
+ which python3
|
| 6503 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343219 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-514:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 40960 --max-position-embeddings 40960 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 6504 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343219 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-514:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 40960 --max-position-embeddings 40960 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 6505 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 6506 |
+
and will be removed in future. Use torchrun.
|
| 6507 |
+
Note that --use-env is set by default in torchrun.
|
| 6508 |
+
If your script expects `--local-rank` argument to be set, please
|
| 6509 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 6510 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 6511 |
+
further instructions
|
| 6512 |
+
|
| 6513 |
+
main()
|
| 6514 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 6515 |
+
and will be removed in future. Use torchrun.
|
| 6516 |
+
Note that --use-env is set by default in torchrun.
|
| 6517 |
+
If your script expects `--local-rank` argument to be set, please
|
| 6518 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 6519 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 6520 |
+
further instructions
|
| 6521 |
+
|
| 6522 |
+
main()
|
| 6523 |
+
W0621 21:34:50.442000 3532790 site-packages/torch/distributed/run.py:766]
|
| 6524 |
+
W0621 21:34:50.442000 3532790 site-packages/torch/distributed/run.py:766] *****************************************
|
| 6525 |
+
W0621 21:34:50.442000 3532790 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 6526 |
+
W0621 21:34:50.442000 3532790 site-packages/torch/distributed/run.py:766] *****************************************
|
| 6527 |
+
W0621 21:34:50.443000 3185485 site-packages/torch/distributed/run.py:766]
|
| 6528 |
+
W0621 21:34:50.443000 3185485 site-packages/torch/distributed/run.py:766] *****************************************
|
| 6529 |
+
W0621 21:34:50.443000 3185485 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 6530 |
+
W0621 21:34:50.443000 3185485 site-packages/torch/distributed/run.py:766] *****************************************
|
attnserver.run_attnserver.slurm.sh.343219.out.log
CHANGED
The diff for this file is too large to render. See raw diff
attnserver.run_attnserver.slurm.sh.343220.out.log
CHANGED
@@ -16942,3 +16942,1168 @@ batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 0
|
| 16946 |
+
Done exporting trace 0
|
| 16947 |
+
Number of parameters in transformer block in billions: 0.35
|
| 16948 |
+
Number of parameters in embedding layers in billions: 0.21
|
| 16949 |
+
Total number of parameters in billions: 0.56
|
| 16950 |
+
Number of parameters in most loaded shard in billions: 0.1400
|
| 16951 |
+
Theoretical memory footprints: weight and optimizer=2403.18 MB
|
| 16952 |
+
[Rank 2] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28584.0 | max reserved: 28584.0
|
| 16953 |
+
[2025-06-21 21:34:09] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 32305.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 16954 |
+
[Rank 7] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28624.0 | max reserved: 28624.0
|
| 16955 |
+
[Rank 10] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28684.0 | max reserved: 28684.0[Rank 11] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28684.0 | max reserved: 28684.0
|
| 16956 |
+
|
| 16957 |
+
[Rank 13] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28704.0 | max reserved: 28704.0[Rank 14] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28704.0 | max reserved: 28704.0
|
| 16958 |
+
|
| 16959 |
+
[Rank 3] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28584.0 | max reserved: 28584.0
|
| 16960 |
+
[Rank 15] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28704.0 | max reserved: 28704.0
|
| 16961 |
+
[Rank 1] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28584.0 | max reserved: 28584.0
|
| 16962 |
+
[Rank 8] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28284.0 | max reserved: 28284.0
|
| 16963 |
+
[Rank 6] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28624.0 | max reserved: 28624.0
|
| 16964 |
+
[Rank 12] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28304.0 | max reserved: 28304.0
|
| 16965 |
+
[Rank 4] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28384.0 | max reserved: 28384.0
|
| 16966 |
+
[Rank 9] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28684.0 | max reserved: 28684.0
|
| 16967 |
+
[Rank 5] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28624.0 | max reserved: 28624.0
|
| 16968 |
+
[Rank 0] (after 1 iterations) memory (MB) | allocated: 19677.26025390625 | max allocated: 27077.36181640625 | reserved: 28344.0 | max reserved: 28344.0
|
| 16969 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 16970 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 16971 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 16972 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 16973 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 16974 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 16975 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 16976 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 16977 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 16978 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 16979 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 16980 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 16981 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 16982 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 16983 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 16984 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 16985 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 16986 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 16987 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 16988 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 16989 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 16990 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 16991 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 16992 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 16993 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 16994 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 16995 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 16996 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 16997 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 16998 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 16999 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17000 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17001 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17002 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17003 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17004 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17005 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17006 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17007 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17008 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17009 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17010 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17011 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17012 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17013 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17014 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17015 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17016 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17017 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17018 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17019 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17020 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17021 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17022 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17023 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17024 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17025 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17026 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17027 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17028 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17029 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17030 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17031 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17032 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17033 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17034 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17035 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17036 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17037 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17038 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17039 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17040 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17041 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17042 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17043 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17044 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17045 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17046 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17047 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17048 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17049 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17050 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17051 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17052 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17053 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17054 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17055 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17056 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17057 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17058 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17059 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17060 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17061 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17062 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17063 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17064 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17065 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17066 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17067 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17068 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17069 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17070 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17071 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17072 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17073 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17074 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17075 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17076 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17077 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17078 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17079 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17080 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17081 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17082 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17083 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17084 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17085 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17086 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17087 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17088 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17089 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17090 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17091 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17092 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17093 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17094 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17095 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17096 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17097 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17098 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17099 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17100 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17101 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17102 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17103 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17104 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17105 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17106 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17107 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17108 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17109 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17110 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17111 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17112 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17113 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17114 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17115 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17116 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17117 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17118 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17119 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17120 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17121 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17122 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17123 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17124 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17125 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17126 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17127 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17128 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17129 |
+
Start exporting trace 1
|
| 17130 |
+
Done exporting trace 1
|
| 17131 |
+
[2025-06-21 21:34:17] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 8281.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 17132 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17133 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17134 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17135 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17136 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17137 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17138 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17139 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17140 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17141 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17142 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17143 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17144 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17145 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17146 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17147 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17148 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17149 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17150 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17151 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 17152 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 17153 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 17154 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 17155 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 17156 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 17157 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 17158 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 17159 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 17160 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 17161 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 2
Done exporting trace 2
[2025-06-21 21:34:25] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 7929.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 3
Done exporting trace 3
[2025-06-21 21:34:33] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 7924.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 4
Done exporting trace 4
[2025-06-21 21:34:41] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 7823.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 5
Done exporting trace 5
[2025-06-21 21:34:48] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 7735.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
Start exporting trace 6
Done exporting trace 6
[2025-06-21 21:34:56] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 7652.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
batch tensor after cp: position_ids torch.Size([2, 20480])
batch tensor: tokens torch.Size([2, 81920])
batch tensor: labels torch.Size([2, 81920])
batch tensor: loss_mask torch.Size([2, 81920])
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
batch tensor: position_ids torch.Size([2, 81920])
batch tensor after cp: tokens torch.Size([2, 20480])
batch tensor after cp: labels torch.Size([2, 20480])
batch tensor after cp: loss_mask torch.Size([2, 20480])
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 18096 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 18097 |
+
batch tensor: tokens torch.Size([2, 81920])
|
| 18098 |
+
batch tensor: labels torch.Size([2, 81920])
|
| 18099 |
+
batch tensor: loss_mask torch.Size([2, 81920])
|
| 18100 |
+
batch tensor: attention_mask torch.Size([2, 1, 81920, 81920])
|
| 18101 |
+
batch tensor: position_ids torch.Size([2, 81920])
|
| 18102 |
+
batch tensor after cp: tokens torch.Size([2, 20480])
|
| 18103 |
+
batch tensor after cp: labels torch.Size([2, 20480])
|
| 18104 |
+
batch tensor after cp: loss_mask torch.Size([2, 20480])
|
| 18105 |
+
batch tensor after cp: attention_mask torch.Size([2, 1, 20480, 81920])
|
| 18106 |
+
batch tensor after cp: position_ids torch.Size([2, 20480])
|
| 18107 |
+
Start exporting trace 7
|
| 18108 |
+
Done exporting trace 7
|
| 18109 |
+
[2025-06-21 21:35:04] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 8247.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
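The shapes above show the context-parallel split: full 81920-token sequences become 20480-token shards "after cp", which is consistent with a context-parallel group of size 4 (inferred from 81920 / 20480; the group size is not stated in this excerpt). A minimal sketch of that slicing, with hypothetical helper names; the actual Megatron-LM sharding logic may split differently (e.g. into interleaved chunks for load balancing):

```python
import torch

def shard_for_cp(tokens: torch.Tensor, cp_size: int, cp_rank: int) -> torch.Tensor:
    """Keep one contiguous 1/cp_size slice of the sequence dimension per rank."""
    seq_len = tokens.size(1)          # tokens: [batch, seq_len]
    shard = seq_len // cp_size
    return tokens[:, cp_rank * shard : (cp_rank + 1) * shard]

full = torch.zeros(2, 81920, dtype=torch.long)    # matches "batch tensor: tokens"
part = shard_for_cp(full, cp_size=4, cp_rank=0)
print(part.shape)                                  # torch.Size([2, 20480]), the "after cp" shape
```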
attnserver.run_attnserver.slurm.sh.343221.err.log
CHANGED
@@ -19806,3 +19806,668 @@ W0621 21:33:05.403000 764266 site-packages/torch/distributed/run.py:766] *******
 warnings.warn(
 /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
 warnings.warn(
[rank14]: Traceback (most recent call last):
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank14]: pretrain(
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank14]: iteration, num_floating_point_operations_so_far = train(
[rank14]: ^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank14]: ) = train_step(
[rank14]: ^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank14]: losses_reduced = forward_backward_func(
[rank14]: ^^^^^^^^^^^^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank14]: output_tensor, num_tokens = forward_step(
[rank14]: ^^^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank14]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank14]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank14]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank14]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank14]: batch = next(global_batches)
[rank14]: ^^^^^^^^^^^^^^^^^^^^
[rank14]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank14]: attention_mask = torch.ones(
[rank14]: ^^^^^^^^^^^
[rank14]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank11]: Traceback (most recent call last):
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank11]: pretrain(
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank11]: iteration, num_floating_point_operations_so_far = train(
[rank11]: ^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank11]: ) = train_step(
[rank11]: ^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank11]: losses_reduced = forward_backward_func(
[rank11]: ^^^^^^^^^^^^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank11]: output_tensor, num_tokens = forward_step(
[rank11]: ^^^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank11]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank11]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank11]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank11]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank11]: batch = next(global_batches)
[rank11]: ^^^^^^^^^^^^^^^^^^^^
[rank11]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank11]: attention_mask = torch.ones(
[rank11]: ^^^^^^^^^^^
[rank11]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank5]: Traceback (most recent call last):
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank5]: pretrain(
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank5]: iteration, num_floating_point_operations_so_far = train(
[rank5]: ^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank5]: ) = train_step(
[rank5]: ^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank5]: losses_reduced = forward_backward_func(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank5]: output_tensor, num_tokens = forward_step(
[rank5]: ^^^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank5]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank5]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank5]: batch = next(global_batches)
[rank5]: ^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank5]: attention_mask = torch.ones(
[rank5]: ^^^^^^^^^^^
[rank5]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank8]: Traceback (most recent call last):
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank8]: pretrain(
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank8]: iteration, num_floating_point_operations_so_far = train(
[rank8]: ^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank8]: ) = train_step(
[rank8]: ^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank8]: losses_reduced = forward_backward_func(
[rank8]: ^^^^^^^^^^^^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank8]: output_tensor, num_tokens = forward_step(
[rank8]: ^^^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank8]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank8]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank8]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank8]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank8]: batch = next(global_batches)
[rank8]: ^^^^^^^^^^^^^^^^^^^^
[rank8]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank8]: attention_mask = torch.ones(
[rank8]: ^^^^^^^^^^^
[rank8]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 52.87 GiB is free. Including non-PyTorch memory, this process has 86.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 1.07 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank10]: Traceback (most recent call last):
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank10]: pretrain(
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank10]: iteration, num_floating_point_operations_so_far = train(
[rank10]: ^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank10]: ) = train_step(
[rank10]: ^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank10]: losses_reduced = forward_backward_func(
[rank10]: ^^^^^^^^^^^^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank10]: output_tensor, num_tokens = forward_step(
[rank10]: ^^^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank10]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank10]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank10]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank10]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank10]: batch = next(global_batches)
[rank10]: ^^^^^^^^^^^^^^^^^^^^
[rank10]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank10]: attention_mask = torch.ones(
[rank10]: ^^^^^^^^^^^
[rank10]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank4]: Traceback (most recent call last):
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank4]: pretrain(
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank4]: iteration, num_floating_point_operations_so_far = train(
[rank4]: ^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank4]: ) = train_step(
[rank4]: ^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank4]: losses_reduced = forward_backward_func(
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank4]: output_tensor, num_tokens = forward_step(
[rank4]: ^^^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank4]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank4]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank4]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank4]: batch = next(global_batches)
[rank4]: ^^^^^^^^^^^^^^^^^^^^
[rank4]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank4]: attention_mask = torch.ones(
[rank4]: ^^^^^^^^^^^
[rank4]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank12]: Traceback (most recent call last):
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank12]: pretrain(
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank12]: iteration, num_floating_point_operations_so_far = train(
[rank12]: ^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank12]: ) = train_step(
[rank12]: ^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank12]: losses_reduced = forward_backward_func(
[rank12]: ^^^^^^^^^^^^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank12]: output_tensor, num_tokens = forward_step(
[rank12]: ^^^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank12]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank12]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank12]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank12]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank12]: batch = next(global_batches)
[rank12]: ^^^^^^^^^^^^^^^^^^^^
[rank12]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank12]: attention_mask = torch.ones(
[rank12]: ^^^^^^^^^^^
[rank12]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.87 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank9]: Traceback (most recent call last):
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank9]: pretrain(
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank9]: iteration, num_floating_point_operations_so_far = train(
[rank9]: ^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank9]: ) = train_step(
[rank9]: ^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank9]: losses_reduced = forward_backward_func(
[rank9]: ^^^^^^^^^^^^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank9]: output_tensor, num_tokens = forward_step(
[rank9]: ^^^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank9]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank9]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank9]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank9]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank9]: batch = next(global_batches)
[rank9]: ^^^^^^^^^^^^^^^^^^^^
[rank9]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank9]: attention_mask = torch.ones(
[rank9]: ^^^^^^^^^^^
[rank9]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank3]: Traceback (most recent call last):
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank3]: pretrain(
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank3]: iteration, num_floating_point_operations_so_far = train(
[rank3]: ^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank3]: ) = train_step(
[rank3]: ^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank3]: losses_reduced = forward_backward_func(
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank3]: output_tensor, num_tokens = forward_step(
[rank3]: ^^^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank3]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank3]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank3]: batch = next(global_batches)
[rank3]: ^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank3]: attention_mask = torch.ones(
[rank3]: ^^^^^^^^^^^
[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank15]: Traceback (most recent call last):
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank15]: pretrain(
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank15]: iteration, num_floating_point_operations_so_far = train(
[rank15]: ^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank15]: ) = train_step(
[rank15]: ^^^^^^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank15]: losses_reduced = forward_backward_func(
[rank15]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank0]: pretrain(
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank0]: iteration, num_floating_point_operations_so_far = train(
[rank0]: ^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank0]: ) = train_step(
[rank0]: ^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank0]: losses_reduced = forward_backward_func(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank15]: output_tensor, num_tokens = forward_step(
[rank15]: ^^^^^^^^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank15]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank15]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank15]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank15]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: output_tensor, num_tokens = forward_step(
[rank0]: ^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank0]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank0]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank0]: batch = next(global_batches)
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank15]: batch = next(global_batches)
[rank15]: ^^^^^^^^^^^^^^^^^^^^
[rank15]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank15]: attention_mask = torch.ones(
[rank15]: ^^^^^^^^^^^
[rank15]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank0]: attention_mask = torch.ones(
[rank0]: ^^^^^^^^^^^
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank13]: Traceback (most recent call last):
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank13]: pretrain(
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank13]: iteration, num_floating_point_operations_so_far = train(
[rank13]: ^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank13]: ) = train_step(
[rank13]: ^^^^^^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank13]: losses_reduced = forward_backward_func(
[rank13]: ^^^^^^^^^^^^^^^^^^^^^^
[rank1]: Traceback (most recent call last):
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank1]: pretrain(
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank1]: iteration, num_floating_point_operations_so_far = train(
[rank1]: ^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank1]: ) = train_step(
[rank1]: ^^^^^^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank1]: losses_reduced = forward_backward_func(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank13]: output_tensor, num_tokens = forward_step(
[rank13]: ^^^^^^^^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank13]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank13]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank13]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank13]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: output_tensor, num_tokens = forward_step(
[rank1]: ^^^^^^^^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank1]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank1]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank1]: batch = next(global_batches)
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank13]: batch = next(global_batches)
[rank13]: ^^^^^^^^^^^^^^^^^^^^
[rank13]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank13]: attention_mask = torch.ones(
[rank13]: ^^^^^^^^^^^
[rank13]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank1]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank1]: attention_mask = torch.ones(
[rank1]: ^^^^^^^^^^^
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank6]: Traceback (most recent call last):
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank6]: pretrain(
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank6]: iteration, num_floating_point_operations_so_far = train(
[rank6]: ^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank6]: ) = train_step(
[rank6]: ^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank6]: losses_reduced = forward_backward_func(
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank6]: output_tensor, num_tokens = forward_step(
[rank6]: ^^^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank6]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank6]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank6]: batch = next(global_batches)
[rank6]: ^^^^^^^^^^^^^^^^^^^^
[rank6]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank6]: attention_mask = torch.ones(
[rank6]: ^^^^^^^^^^^
[rank6]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank7]: Traceback (most recent call last):
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank7]: pretrain(
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank7]: iteration, num_floating_point_operations_so_far = train(
[rank7]: ^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank7]: ) = train_step(
[rank7]: ^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank7]: losses_reduced = forward_backward_func(
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank7]: output_tensor, num_tokens = forward_step(
[rank7]: ^^^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank7]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank7]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank7]: batch = next(global_batches)
[rank7]: ^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank7]: attention_mask = torch.ones(
[rank7]: ^^^^^^^^^^^
[rank7]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank2]: Traceback (most recent call last):
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
[rank2]: pretrain(
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 863, in pretrain
[rank2]: iteration, num_floating_point_operations_so_far = train(
[rank2]: ^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 2229, in train
[rank2]: ) = train_step(
[rank2]: ^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 1382, in train_step
[rank2]: losses_reduced = forward_backward_func(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 518, in forward_backward_no_pipelining
[rank2]: output_tensor, num_tokens = forward_step(
[rank2]: ^^^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/pipeline_parallel/schedules.py", line 289, in forward_step
[rank2]: output_tensor, loss_func = forward_step_func(data_iterator, model)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step
[rank2]: (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch
[rank2]: batch = next(global_batches)
[rank2]: ^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches
[rank2]: attention_mask = torch.ones(
[rank2]: ^^^^^^^^^^^
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank3]:[W621 21:34:15.746861009 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank1]:[W621 21:34:15.752428945 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank2]:[W621 21:34:15.819194689 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank13]:[W621 21:34:15.634394649 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank14]:[W621 21:34:15.635955648 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank15]:[W621 21:34:15.640834016 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank6]:[W621 21:34:15.176127394 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank9]:[W621 21:34:15.775350643 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank5]:[W621 21:34:15.184536292 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank11]:[W621 21:34:15.802086113 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank10]:[W621 21:34:15.888555110 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank7]:[W621 21:34:16.513018370 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
W0621 21:34:17.171000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744745 closing signal SIGTERM
W0621 21:34:17.175000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744746 closing signal SIGTERM
W0621 21:34:17.176000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744747 closing signal SIGTERM
W0621 21:34:17.176000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744748 closing signal SIGTERM
W0621 21:34:17.176000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744749 closing signal SIGTERM
W0621 21:34:17.179000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744751 closing signal SIGTERM
W0621 21:34:17.179000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1744752 closing signal SIGTERM
W0621 21:34:17.401000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764338 closing signal SIGTERM
W0621 21:34:17.405000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764339 closing signal SIGTERM
W0621 21:34:17.406000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764340 closing signal SIGTERM
W0621 21:34:17.406000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764341 closing signal SIGTERM
W0621 21:34:17.407000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764342 closing signal SIGTERM
W0621 21:34:17.409000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764343 closing signal SIGTERM
W0621 21:34:17.410000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 764345 closing signal SIGTERM
E0621 21:34:20.400000 1744676 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 5 (pid: 1744750) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
main()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
return arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
launch(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
run(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./pretrain_gpt_profile.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-06-21_21:34:17
host : fs-mbz-gpu-717
rank : 13 (local_rank: 5)
exitcode : 1 (pid: 1744750)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
+ set +x
E0621 21:34:21.033000 764266 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 6 (pid: 764344) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
main()
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
return arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
launch(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
run(args)
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./pretrain_gpt_profile.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-06-21_21:34:17
host : fs-mbz-gpu-600
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 764344)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=40960
+ PROF_CTX_LENGTH=40960
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp4.cp4.bs4.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L40960*tp4.cp4.bs4.json' ']'
+ echo 'Running ctx_length=40960, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=4'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343221 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 40960 --max-position-embeddings 40960 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343221 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-600:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 40960 --max-position-embeddings 40960 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 20369 |
+
and will be removed in future. Use torchrun.
|
| 20370 |
+
Note that --use-env is set by default in torchrun.
|
| 20371 |
+
If your script expects `--local-rank` argument to be set, please
|
| 20372 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 20373 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 20374 |
+
further instructions
|
| 20375 |
+
|
| 20376 |
+
main()
|
| 20377 |
+
W0621 21:34:24.137000 767223 site-packages/torch/distributed/run.py:766]
|
| 20378 |
+
W0621 21:34:24.137000 767223 site-packages/torch/distributed/run.py:766] *****************************************
|
| 20379 |
+
W0621 21:34:24.137000 767223 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 20380 |
+
W0621 21:34:24.137000 767223 site-packages/torch/distributed/run.py:766] *****************************************
|
| 20381 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 20382 |
+
and will be removed in future. Use torchrun.
|
| 20383 |
+
Note that --use-env is set by default in torchrun.
|
| 20384 |
+
If your script expects `--local-rank` argument to be set, please
|
| 20385 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 20386 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 20387 |
+
further instructions
|
| 20388 |
+
|
| 20389 |
+
main()
|
| 20390 |
+
W0621 21:34:24.210000 1747549 site-packages/torch/distributed/run.py:766]
|
| 20391 |
+
W0621 21:34:24.210000 1747549 site-packages/torch/distributed/run.py:766] *****************************************
|
| 20392 |
+
W0621 21:34:24.210000 1747549 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 20393 |
+
W0621 21:34:24.210000 1747549 site-packages/torch/distributed/run.py:766] *****************************************
|
| 20394 |
+
[rank2]:[W621 21:34:47.974858199 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20395 |
+
[rank3]:[W621 21:34:47.975554961 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20396 |
+
[rank1]:[W621 21:34:47.975568324 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20397 |
+
[rank6]:[W621 21:34:47.976134250 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20398 |
+
[rank4]:[W621 21:34:47.976140465 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20399 |
+
[rank5]:[W621 21:34:47.977593798 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20400 |
+
[rank13]:[W621 21:34:47.568137561 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20401 |
+
[rank7]:[W621 21:34:47.983186626 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20402 |
+
[rank14]:[W621 21:34:47.577350389 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20403 |
+
[rank11]:[W621 21:34:47.577379910 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20404 |
+
[rank10]:[W621 21:34:47.577451926 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20405 |
+
[rank15]:[W621 21:34:47.577469704 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20406 |
+
[rank12]:[W621 21:34:47.577811558 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20407 |
+
[rank9]:[W621 21:34:47.578294751 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20408 |
+
[rank8]:[W621 21:34:47.663829529 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20409 |
+
[rank0]:[W621 21:34:47.121298220 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 20410 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20411 |
+
warnings.warn(
|
| 20412 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20413 |
+
warnings.warn(
|
| 20414 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20415 |
+
warnings.warn(
|
| 20416 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20417 |
+
warnings.warn(
|
| 20418 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20419 |
+
warnings.warn(
|
| 20420 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20421 |
+
warnings.warn(
|
| 20422 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20423 |
+
warnings.warn(
|
| 20424 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20425 |
+
warnings.warn(
|
| 20426 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20427 |
+
warnings.warn(
|
| 20428 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20429 |
+
warnings.warn(
|
| 20430 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20431 |
+
warnings.warn(
|
| 20432 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20433 |
+
warnings.warn(
|
| 20434 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20435 |
+
warnings.warn(
|
| 20436 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20437 |
+
warnings.warn(
|
| 20438 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20439 |
+
warnings.warn(
|
| 20440 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 20441 |
+
warnings.warn(
|
| 20442 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20443 |
+
warnings.warn(
|
| 20444 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20445 |
+
warnings.warn(
|
| 20446 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20447 |
+
warnings.warn(
|
| 20448 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20449 |
+
warnings.warn(
|
| 20450 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20451 |
+
warnings.warn(
|
| 20452 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20453 |
+
warnings.warn(
|
| 20454 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20455 |
+
warnings.warn(
|
| 20456 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20457 |
+
warnings.warn(
|
| 20458 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20459 |
+
warnings.warn(
|
| 20460 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20461 |
+
warnings.warn(
|
| 20462 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20463 |
+
warnings.warn(
|
| 20464 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20465 |
+
warnings.warn(
|
| 20466 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20467 |
+
warnings.warn(
|
| 20468 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20469 |
+
warnings.warn(
|
| 20470 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20471 |
+
warnings.warn(
|
| 20472 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 20473 |
+
warnings.warn(
|
attnserver.run_attnserver.slurm.sh.343221.out.log
CHANGED
|
@@ -12093,3 +12093,756 @@ batch tensor after cp: labels torch.Size([4, 32768])
| 12093 |
batch tensor after cp: loss_mask torch.Size([4, 32768])
|
| 12094 |
batch tensor after cp: attention_mask torch.Size([4, 1, 32768, 131072])
|
| 12095 |
batch tensor after cp: position_ids torch.Size([4, 32768])
|
| 12096 |
+
Start exporting trace 0
|
| 12097 |
+
Done exporting trace 0
|
| 12098 |
+
Number of parameters in transformer block in billions: 0.35
|
| 12099 |
+
Number of parameters in embedding layers in billions: 0.21
|
| 12100 |
+
Total number of parameters in billions: 0.56
|
| 12101 |
+
Number of parameters in most loaded shard in billions: 0.1400
|
| 12102 |
+
Theoretical memory footprints: weight and optimizer=2403.18 MB
|
| 12103 |
+
[2025-06-21 21:34:13] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 37232.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 12104 |
+
[Rank 1] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116506.0 | max reserved: 116506.0
|
| 12105 |
+
[Rank 0] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 115482.0 | max reserved: 115482.0
|
| 12106 |
+
[Rank 11] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116782.0 | max reserved: 116782.0
|
| 12107 |
+
[Rank 14] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116910.0 | max reserved: 116910.0
|
| 12108 |
+
[Rank 2] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116506.0 | max reserved: 116506.0
|
| 12109 |
+
[Rank 13] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116910.0 | max reserved: 116910.0
|
| 12110 |
+
[Rank 10] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116782.0 | max reserved: 116782.0
|
| 12111 |
+
[Rank 7] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116654.0 | max reserved: 116654.0
|
| 12112 |
+
[Rank 15] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116910.0 | max reserved: 116910.0
|
| 12113 |
+
[Rank 6] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116654.0 | max reserved: 116654.0
|
| 12114 |
+
[Rank 9] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 116782.0 | max reserved: 116782.0[Rank 12] (after 1 iterations) memory (MB) | allocated: 85226.16064453125 | max allocated: 111802.65283203125 | reserved: 115886.0 | max reserved: 115886.0
|
| 12115 |
+
|
| 12116 |
+
[Rank 3] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116506.0 | max reserved: 116506.0[Rank 4] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 115630.0 | max reserved: 115630.0
|
| 12117 |
+
|
| 12118 |
+
[Rank 8] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 115886.0 | max reserved: 115886.0
|
| 12119 |
+
[Rank 5] (after 1 iterations) memory (MB) | allocated: 85227.16064453125 | max allocated: 111802.65283203125 | reserved: 116654.0 | max reserved: 116654.0
|
| 12120 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12121 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12122 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12123 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12124 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 52.87 GiB is free. Including non-PyTorch memory, this process has 86.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 1.07 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12125 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12126 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 52.87 GiB is free. Including non-PyTorch memory, this process has 86.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 1.07 GiB is r['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB iseserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12127 |
+
reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12128 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12129 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.90 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12130 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12131 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12132 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.87 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12133 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12134 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 4 has a total capacity of 139.81 GiB of which 53.87 GiB is free. Including non-PyTorch memory, this process has 85.91 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is ['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 3 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB isreserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12135 |
+
reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12136 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12137 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12138 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12139 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 139.81 GiB of which 53.76 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12140 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12141 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 1 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12142 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12143 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12144 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12145 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12146 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 5 has a total capacity of 139.81 GiB of which 53.91 GiB is free. Including non-PyTorch memory, this process has 85.89 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 72.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12147 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12148 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12149 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 7 has a total capacity of 139.81 GiB of which 53.77 GiB is free. Including non-PyTorch memory, this process has 86.03 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12150 |
+
WARNING:megatron.core.utils:CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
| 12151 |
+
['Traceback (most recent call last):\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 446, in forward_step\n (tokens, labels, loss_mask, attention_mask, position_ids), token_lens = get_batch(data_iterator)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 284, in get_batch\n batch = next(global_batches)\n ^^^^^^^^^^^^^^^^^^^^\n', ' File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 226, in setup_batches\n attention_mask = torch.ones(\n ^^^^^^^^^^^\n', 'torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 GiB. GPU 2 has a total capacity of 139.81 GiB of which 53.79 GiB is free. Including non-PyTorch memory, this process has 86.02 GiB memory in use. Of the allocated memory 82.21 GiB is allocated by PyTorch, and 200.10 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n']
|
| 12152 |
+
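The out-of-memory failures above all originate from materializing a full dense attention mask with torch.ones in setup_batches. A minimal, standalone sketch of the arithmetic is below; the example shape is hypothetical (chosen only to reproduce a 64 GiB allocation), not read from this run.

    import math
    import torch

    def mask_gib(shape, dtype=torch.float32):
        # Bytes needed to materialize a dense attention mask of this shape, in GiB.
        elem_bytes = torch.tensor([], dtype=dtype).element_size()
        return math.prod(shape) * elem_bytes / 2**30

    # Hypothetical (batch, 1, seq_q, seq_kv) mask: cost grows with seq_q * seq_kv,
    # so the mask is the first allocation to fail as context length increases.
    print(mask_gib((2, 1, 131072, 65536)))  # 64.0

    # The allocator hint from the warning is set in the environment before launch:
    #   PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
    # It reduces fragmentation but does not shrink the mask itself.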
Running ctx_length=40960, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=4
|
| 12153 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 12154 |
+
--------------------------------
|
| 12155 |
+
CTX_LENGTH: 40960
|
| 12156 |
+
TP_SIZE: 4
|
| 12157 |
+
CP_SIZE: 4
|
| 12158 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 12159 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 12160 |
+
--------------------------------
|
| 12161 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 12162 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 12163 |
+
--------------------------------
|
| 12164 |
+
CTX_LENGTH: 40960
|
| 12165 |
+
TP_SIZE: 4
|
| 12166 |
+
CP_SIZE: 4
|
| 12167 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 12168 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 12169 |
+
--------------------------------
|
| 12170 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 12171 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12172 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12173 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12174 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12175 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 12176 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 12177 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12178 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12179 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12180 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12181 |
+
using world size: 16, data-parallel size: 1, context-parallel size: 4, hierarchical context-parallel sizes: None, tensor-model-parallel size: 4, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
|
| 12182 |
+
Number of virtual stages per pipeline stage: None
|
| 12183 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 12184 |
+
using torch.float16 for parameters ...
|
| 12185 |
+
------------------------ arguments ------------------------
|
| 12186 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 12187 |
+
account_for_loss_in_pipeline_split .............. False
|
| 12188 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 12189 |
+
adam_beta1 ...................................... 0.9
|
| 12190 |
+
adam_beta2 ...................................... 0.999
|
| 12191 |
+
adam_eps ........................................ 1e-08
|
| 12192 |
+
add_bias_linear ................................. True
|
| 12193 |
+
add_position_embedding .......................... True
|
| 12194 |
+
add_qkv_bias .................................... True
|
| 12195 |
+
adlr_autoresume ................................. False
|
| 12196 |
+
adlr_autoresume_interval ........................ 1000
|
| 12197 |
+
align_grad_reduce ............................... True
|
| 12198 |
+
align_param_gather .............................. False
|
| 12199 |
+
app_tag_run_name ................................ None
|
| 12200 |
+
app_tag_run_version ............................. 0.0.0
|
| 12201 |
+
apply_layernorm_1p .............................. False
|
| 12202 |
+
apply_query_key_layer_scaling ................... False
|
| 12203 |
+
apply_residual_connection_post_layernorm ........ False
|
| 12204 |
+
apply_rope_fusion ............................... False
|
| 12205 |
+
async_save ...................................... None
|
| 12206 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 12207 |
+
attention_backend ............................... AttnBackend.auto
|
| 12208 |
+
attention_dropout ............................... 0.1
|
| 12209 |
+
attention_softmax_in_fp32 ....................... False
|
| 12210 |
+
auto_detect_ckpt_format ......................... False
|
| 12211 |
+
barrier_with_L1_time ............................ True
|
| 12212 |
+
bert_binary_head ................................ True
|
| 12213 |
+
bert_embedder_type .............................. megatron
|
| 12214 |
+
bert_load ....................................... None
|
| 12215 |
+
bf16 ............................................ False
|
| 12216 |
+
bias_dropout_fusion ............................. True
|
| 12217 |
+
bias_gelu_fusion ................................ True
|
| 12218 |
+
bias_swiglu_fusion .............................. True
|
| 12219 |
+
biencoder_projection_dim ........................ 0
|
| 12220 |
+
biencoder_shared_query_context_model ............ False
|
| 12221 |
+
block_data_path ................................. None
|
| 12222 |
+
calc_ft_timeouts ................................ False
|
| 12223 |
+
calculate_per_token_loss ........................ False
|
| 12224 |
+
check_for_large_grads ........................... False
|
| 12225 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 12226 |
+
check_for_spiky_loss ............................ False
|
| 12227 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 12228 |
+
ckpt_assume_constant_structure .................. False
|
| 12229 |
+
ckpt_convert_format ............................. None
|
| 12230 |
+
ckpt_convert_save ............................... None
|
| 12231 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 12232 |
+
ckpt_format ..................................... torch_dist
|
| 12233 |
+
ckpt_fully_parallel_load ........................ False
|
| 12234 |
+
ckpt_fully_parallel_save ........................ True
|
| 12235 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 12236 |
+
ckpt_step ....................................... None
|
| 12237 |
+
classes_fraction ................................ 1.0
|
| 12238 |
+
clip_grad ....................................... 1.0
|
| 12239 |
+
clone_scatter_output_in_embedding ............... True
|
| 12240 |
+
config_logger_dir ...............................
|
| 12241 |
+
consumed_train_samples .......................... 0
|
| 12242 |
+
consumed_valid_samples .......................... 0
|
| 12243 |
+
context_parallel_size ........................... 4
|
| 12244 |
+
cp_comm_type .................................... ['p2p']
|
| 12245 |
+
create_attention_mask_in_dataloader ............. True
|
| 12246 |
+
cross_entropy_fusion_impl ....................... native
|
| 12247 |
+
cross_entropy_loss_fusion ....................... False
|
| 12248 |
+
cuda_graph_scope ................................ full
|
| 12249 |
+
cuda_graph_warmup_steps ......................... 3
|
| 12250 |
+
data_args_path .................................. None
|
| 12251 |
+
data_cache_path ................................. None
|
| 12252 |
+
data_parallel_random_init ....................... False
|
| 12253 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 12254 |
+
data_parallel_size .............................. 1
|
| 12255 |
+
data_path ....................................... None
|
| 12256 |
+
data_per_class_fraction ......................... 1.0
|
| 12257 |
+
data_sharding ................................... True
|
| 12258 |
+
dataloader_type ................................. single
|
| 12259 |
+
ddp_average_in_collective ....................... False
|
| 12260 |
+
ddp_bucket_size ................................. None
|
| 12261 |
+
ddp_num_buckets ................................. None
|
| 12262 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 12263 |
+
decoder_first_pipeline_num_layers ............... None
|
| 12264 |
+
decoder_last_pipeline_num_layers ................ None
|
| 12265 |
+
decoder_num_layers .............................. None
|
| 12266 |
+
decoder_seq_length .............................. None
|
| 12267 |
+
decoupled_lr .................................... None
|
| 12268 |
+
decoupled_min_lr ................................ None
|
| 12269 |
+
decrease_batch_size_if_needed ................... False
|
| 12270 |
+
defer_embedding_wgrad_compute ................... False
|
| 12271 |
+
deprecated_use_mcore_models ..................... False
|
| 12272 |
+
deterministic_mode .............................. False
|
| 12273 |
+
dino_bottleneck_size ............................ 256
|
| 12274 |
+
dino_freeze_last_layer .......................... 1
|
| 12275 |
+
dino_head_hidden_size ........................... 2048
|
| 12276 |
+
dino_local_crops_number ......................... 10
|
| 12277 |
+
dino_local_img_size ............................. 96
|
| 12278 |
+
dino_norm_last_layer ............................ False
|
| 12279 |
+
dino_teacher_temp ............................... 0.07
|
| 12280 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 12281 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 12282 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 12283 |
+
disable_mamba_mem_eff_path ...................... False
|
| 12284 |
+
disable_straggler_on_startup .................... False
|
| 12285 |
+
dist_ckpt_format_deprecated ..................... None
|
| 12286 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 12287 |
+
distribute_saved_activations .................... False
|
| 12288 |
+
distributed_backend ............................. nccl
|
| 12289 |
+
distributed_timeout_minutes ..................... 10
|
| 12290 |
+
embedding_path .................................. None
|
| 12291 |
+
empty_unused_memory_level ....................... 0
|
| 12292 |
+
enable_cuda_graph ............................... False
|
| 12293 |
+
enable_ft_package ............................... False
|
| 12294 |
+
enable_gloo_process_groups ...................... True
|
| 12295 |
+
enable_msc ...................................... True
|
| 12296 |
+
enable_one_logger ............................... True
|
| 12297 |
+
encoder_num_layers .............................. 2
|
| 12298 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 12299 |
+
encoder_seq_length .............................. 40960
|
| 12300 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 12301 |
+
end_weight_decay ................................ 0.1
|
| 12302 |
+
eod_mask_loss ................................... False
|
| 12303 |
+
error_injection_rate ............................ 0
|
| 12304 |
+
error_injection_type ............................ transient_error
|
| 12305 |
+
eval_interval ................................... 16
|
| 12306 |
+
eval_iters ...................................... 1
|
| 12307 |
+
evidence_data_path .............................. None
|
| 12308 |
+
exit_duration_in_mins ........................... None
|
| 12309 |
+
exit_interval ................................... None
|
| 12310 |
+
exit_on_missing_checkpoint ...................... False
|
| 12311 |
+
exit_signal_handler ............................. False
|
| 12312 |
+
exp_avg_dtype ................................... torch.float32
|
| 12313 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 12314 |
+
expert_model_parallel_size ...................... 1
|
| 12315 |
+
expert_tensor_parallel_size ..................... 4
|
| 12316 |
+
external_cuda_graph ............................. False
|
| 12317 |
+
ffn_hidden_size ................................. 16384
|
| 12318 |
+
finetune ........................................ False
|
| 12319 |
+
first_last_layers_bf16 .......................... False
|
| 12320 |
+
flash_decode .................................... False
|
| 12321 |
+
fp16 ............................................ True
|
| 12322 |
+
fp16_lm_cross_entropy ........................... False
|
| 12323 |
+
fp32_residual_connection ........................ False
|
| 12324 |
+
fp8 ............................................. None
|
| 12325 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 12326 |
+
fp8_amax_history_len ............................ 1
|
| 12327 |
+
fp8_interval .................................... 1
|
| 12328 |
+
fp8_margin ...................................... 0
|
| 12329 |
+
fp8_param_gather ................................ False
|
| 12330 |
+
fp8_recipe ...................................... delayed
|
| 12331 |
+
fp8_wgrad ....................................... True
|
| 12332 |
+
fsdp_double_buffer .............................. False
|
| 12333 |
+
global_batch_size ............................... 1
|
| 12334 |
+
grad_reduce_in_bf16 ............................. False
|
| 12335 |
+
gradient_accumulation_fusion .................... True
|
| 12336 |
+
gradient_reduce_div_fusion ...................... True
|
| 12337 |
+
group_query_attention ........................... True
|
| 12338 |
+
head_lr_mult .................................... 1.0
|
| 12339 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 12340 |
+
heterogeneous_layers_config_path ................ None
|
| 12341 |
+
hidden_dropout .................................. 0.1
|
| 12342 |
+
hidden_size ..................................... 4096
|
| 12343 |
+
hierarchical_context_parallel_sizes ............. None
|
| 12344 |
+
high_priority_stream_groups ..................... []
|
| 12345 |
+
hybrid_attention_ratio .......................... 0.0
|
| 12346 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 12347 |
+
hybrid_override_pattern ......................... None
|
| 12348 |
+
hysteresis ...................................... 2
|
| 12349 |
+
ict_head_size ................................... None
|
| 12350 |
+
ict_load ........................................ None
|
| 12351 |
+
img_h ........................................... 224
|
| 12352 |
+
img_w ........................................... 224
|
| 12353 |
+
indexer_batch_size .............................. 128
|
| 12354 |
+
indexer_log_interval ............................ 1000
|
| 12355 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 12356 |
+
inference_dynamic_batching ...................... False
|
| 12357 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 12358 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 12359 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 12360 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 12361 |
+
inference_dynamic_batching_max_requests_override None
|
| 12362 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 12363 |
+
inference_max_batch_size ........................ 8
|
| 12364 |
+
inference_max_seq_length ........................ 2560
|
| 12365 |
+
inference_rng_tracker ........................... False
|
| 12366 |
+
init_method_std ................................. 0.02
|
| 12367 |
+
init_method_xavier_uniform ...................... False
|
| 12368 |
+
init_model_with_meta_device ..................... False
|
| 12369 |
+
initial_loss_scale .............................. 4294967296
|
| 12370 |
+
inprocess_active_world_size ..................... 16
|
| 12371 |
+
inprocess_barrier_timeout ....................... 120
|
| 12372 |
+
inprocess_completion_timeout .................... 120
|
| 12373 |
+
inprocess_empty_cuda_cache ...................... False
|
| 12374 |
+
inprocess_granularity ........................... node
|
| 12375 |
+
inprocess_hard_timeout .......................... 90
|
| 12376 |
+
inprocess_heartbeat_interval .................... 30
|
| 12377 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 12378 |
+
inprocess_last_call_wait ........................ 1
|
| 12379 |
+
inprocess_max_iterations ........................ None
|
| 12380 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 12381 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 12382 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 12383 |
+
inprocess_restart ............................... False
|
| 12384 |
+
inprocess_soft_timeout .......................... 60
|
| 12385 |
+
inprocess_termination_grace_time ................ 1
|
| 12386 |
+
is_hybrid_model ................................. False
|
| 12387 |
+
iter_per_epoch .................................. 1250
|
| 12388 |
+
iterations_to_skip .............................. []
|
| 12389 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 12390 |
+
kv_channels ..................................... 64
|
| 12391 |
+
kv_lora_rank .................................... 32
|
| 12392 |
+
lazy_mpu_init ................................... None
|
| 12393 |
+
load ............................................ gpt-checkpoint
|
| 12394 |
+
load_model_opt_format ........................... False
|
| 12395 |
+
local_rank ...................................... 0
|
| 12396 |
+
log_interval .................................... 1
|
| 12397 |
+
log_loss_scale_to_tensorboard ................... True
|
| 12398 |
+
log_memory_to_tensorboard ....................... False
|
| 12399 |
+
log_num_zeros_in_grad ........................... False
|
| 12400 |
+
log_params_norm ................................. False
|
| 12401 |
+
log_progress .................................... False
|
| 12402 |
+
log_straggler ................................... False
|
| 12403 |
+
log_throughput .................................. False
|
| 12404 |
+
log_timers_to_tensorboard ....................... False
|
| 12405 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 12406 |
+
log_world_size_to_tensorboard ................... False
|
| 12407 |
+
logging_level ................................... 0
|
| 12408 |
+
loss_scale ...................................... None
|
| 12409 |
+
loss_scale_window ............................... 1000
|
| 12410 |
+
lr .............................................. 0.0005
|
| 12411 |
+
lr_decay_iters .................................. 150000
|
| 12412 |
+
lr_decay_samples ................................ None
|
| 12413 |
+
lr_decay_style .................................. cosine
|
| 12414 |
+
lr_warmup_fraction .............................. None
|
| 12415 |
+
lr_warmup_init .................................. 0.0
|
| 12416 |
+
lr_warmup_iters ................................. 2
|
| 12417 |
+
lr_warmup_samples ............................... 0
|
| 12418 |
+
lr_wsd_decay_iters .............................. None
|
| 12419 |
+
lr_wsd_decay_samples ............................ None
|
| 12420 |
+
lr_wsd_decay_style .............................. exponential
|
| 12421 |
+
main_grads_dtype ................................ torch.float32
|
| 12422 |
+
main_params_dtype ............................... torch.float32
|
| 12423 |
+
make_vocab_size_divisible_by .................... 128
|
| 12424 |
+
mamba_head_dim .................................. 64
|
| 12425 |
+
mamba_num_groups ................................ 8
|
| 12426 |
+
mamba_num_heads ................................. None
|
| 12427 |
+
mamba_state_dim ................................. 128
|
| 12428 |
+
manual_gc ....................................... False
|
| 12429 |
+
manual_gc_eval .................................. True
|
| 12430 |
+
manual_gc_interval .............................. 0
|
| 12431 |
+
mask_factor ..................................... 1.0
|
| 12432 |
+
mask_prob ....................................... 0.15
|
| 12433 |
+
mask_type ....................................... random
|
| 12434 |
+
masked_softmax_fusion ........................... True
|
| 12435 |
+
max_position_embeddings ......................... 40960
|
| 12436 |
+
max_tokens_to_oom ............................... 12000
|
| 12437 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 12438 |
+
merge_file ...................................... merges.txt
|
| 12439 |
+
micro_batch_size ................................ 1
|
| 12440 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 12441 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 12442 |
+
min_loss_scale .................................. 1.0
|
| 12443 |
+
min_lr .......................................... 0.0
|
| 12444 |
+
mlp_chunks_for_prefill .......................... 1
|
| 12445 |
+
mmap_bin_files .................................. True
|
| 12446 |
+
mock_data ....................................... True
|
| 12447 |
+
moe_apply_probs_on_input ........................ False
|
| 12448 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 12449 |
+
moe_enable_deepep ............................... False
|
| 12450 |
+
moe_expert_capacity_factor ...................... None
|
| 12451 |
+
moe_extended_tp ................................. False
|
| 12452 |
+
moe_ffn_hidden_size ............................. None
|
| 12453 |
+
moe_grouped_gemm ................................ False
|
| 12454 |
+
moe_input_jitter_eps ............................ None
|
| 12455 |
+
moe_layer_freq .................................. 1
|
| 12456 |
+
moe_layer_recompute ............................. False
|
| 12457 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 12458 |
+
moe_per_layer_logging ........................... False
|
| 12459 |
+
moe_permute_fusion .............................. False
|
| 12460 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 12461 |
+
moe_router_dtype ................................ None
|
| 12462 |
+
moe_router_enable_expert_bias ................... False
|
| 12463 |
+
moe_router_force_load_balancing ................. False
|
| 12464 |
+
moe_router_group_topk ........................... None
|
| 12465 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 12466 |
+
moe_router_num_groups ........................... None
|
| 12467 |
+
moe_router_padding_for_fp8 ...................... False
|
| 12468 |
+
moe_router_pre_softmax .......................... False
|
| 12469 |
+
moe_router_score_function ....................... softmax
|
| 12470 |
+
moe_router_topk ................................. 2
|
| 12471 |
+
moe_router_topk_scaling_factor .................. None
|
| 12472 |
+
moe_shared_expert_intermediate_size ............. None
|
| 12473 |
+
moe_shared_expert_overlap ....................... False
|
| 12474 |
+
moe_token_dispatcher_type ....................... allgather
|
| 12475 |
+
moe_token_drop_policy ........................... probs
|
| 12476 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 12477 |
+
moe_use_upcycling ............................... False
|
| 12478 |
+
moe_z_loss_coeff ................................ None
|
| 12479 |
+
mrope_section ................................... None
|
| 12480 |
+
mscale .......................................... 1.0
|
| 12481 |
+
mscale_all_dim .................................. 1.0
|
| 12482 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 12483 |
+
mtp_num_layers .................................. None
|
| 12484 |
+
multi_latent_attention .......................... False
|
| 12485 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 12486 |
+
nccl_communicator_config_path ................... None
|
| 12487 |
+
nccl_ub ......................................... False
|
| 12488 |
+
no_load_optim ................................... None
|
| 12489 |
+
no_load_rng ..................................... None
|
| 12490 |
+
no_persist_layer_norm ........................... False
|
| 12491 |
+
no_rope_freq .................................... None
|
| 12492 |
+
no_save_optim ................................... None
|
| 12493 |
+
no_save_rng ..................................... None
|
| 12494 |
+
non_persistent_ckpt_type ........................ None
|
| 12495 |
+
non_persistent_global_ckpt_dir .................. None
|
| 12496 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 12497 |
+
non_persistent_local_ckpt_dir ................... None
|
| 12498 |
+
non_persistent_save_interval .................... None
|
| 12499 |
+
norm_epsilon .................................... 1e-05
|
| 12500 |
+
normalization ................................... LayerNorm
|
| 12501 |
+
num_attention_heads ............................. 64
|
| 12502 |
+
num_channels .................................... 3
|
| 12503 |
+
num_classes ..................................... 1000
|
| 12504 |
+
num_dataset_builder_threads ..................... 1
|
| 12505 |
+
num_distributed_optimizer_instances ............. 1
|
| 12506 |
+
num_experts ..................................... None
|
| 12507 |
+
num_layers ...................................... 2
|
| 12508 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 12509 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 12510 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 12511 |
+
num_query_groups ................................ 16
|
| 12512 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 12513 |
+
num_workers ..................................... 2
|
| 12514 |
+
object_storage_cache_path ....................... None
|
| 12515 |
+
one_logger_async ................................ False
|
| 12516 |
+
one_logger_project .............................. megatron-lm
|
| 12517 |
+
one_logger_run_name ............................. None
|
| 12518 |
+
onnx_safe ....................................... None
|
| 12519 |
+
openai_gelu ..................................... False
|
| 12520 |
+
optimizer ....................................... adam
|
| 12521 |
+
optimizer_cpu_offload ........................... False
|
| 12522 |
+
optimizer_offload_fraction ...................... 1.0
|
| 12523 |
+
output_bert_embeddings .......................... False
|
| 12524 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 12525 |
+
overlap_grad_reduce ............................. False
|
| 12526 |
+
overlap_p2p_comm ................................ False
|
| 12527 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 12528 |
+
overlap_param_gather ............................ False
|
| 12529 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 12530 |
+
override_opt_param_scheduler .................... False
|
| 12531 |
+
params_dtype .................................... torch.float16
|
| 12532 |
+
patch_dim ....................................... 16
|
| 12533 |
+
per_split_data_args_path ........................ None
|
| 12534 |
+
perform_initialization .......................... True
|
| 12535 |
+
pin_cpu_grads ................................... True
|
| 12536 |
+
pin_cpu_params .................................. True
|
| 12537 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 12538 |
+
pipeline_model_parallel_size .................... 1
|
| 12539 |
+
pipeline_model_parallel_split_rank .............. None
|
| 12540 |
+
position_embedding_type ......................... learned_absolute
|
| 12541 |
+
pretrained_checkpoint ........................... None
|
| 12542 |
+
profile ......................................... False
|
| 12543 |
+
profile_ranks ................................... [0]
|
| 12544 |
+
profile_step_end ................................ 12
|
| 12545 |
+
profile_step_start .............................. 10
|
| 12546 |
+
q_lora_rank ..................................... None
|
| 12547 |
+
qk_head_dim ..................................... 128
|
| 12548 |
+
qk_l2_norm ...................................... False
|
| 12549 |
+
qk_layernorm .................................... False
|
| 12550 |
+
qk_pos_emb_head_dim ............................. 64
|
| 12551 |
+
query_in_block_prob ............................. 0.1
|
| 12552 |
+
rampup_batch_size ............................... None
|
| 12553 |
+
rank ............................................ 0
|
| 12554 |
+
recompute_granularity ........................... None
|
| 12555 |
+
recompute_method ................................ None
|
| 12556 |
+
recompute_modules ............................... None
|
| 12557 |
+
recompute_num_layers ............................ None
|
| 12558 |
+
record_memory_history ........................... False
|
| 12559 |
+
relative_attention_max_distance ................. 128
|
| 12560 |
+
relative_attention_num_buckets .................. 32
|
| 12561 |
+
replication ..................................... False
|
| 12562 |
+
replication_factor .............................. 2
|
| 12563 |
+
replication_jump ................................ None
|
| 12564 |
+
rerun_mode ...................................... disabled
|
| 12565 |
+
reset_attention_mask ............................ False
|
| 12566 |
+
reset_position_ids .............................. False
|
| 12567 |
+
result_rejected_tracker_filename ................ None
|
| 12568 |
+
retriever_report_topk_accuracies ................ []
|
| 12569 |
+
retriever_score_scaling ......................... False
|
| 12570 |
+
retriever_seq_length ............................ 256
|
| 12571 |
+
retro_add_retriever ............................. False
|
| 12572 |
+
retro_attention_gate ............................ 1
|
| 12573 |
+
retro_cyclic_train_iters ........................ None
|
| 12574 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 12575 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 12576 |
+
retro_encoder_layers ............................ 2
|
| 12577 |
+
retro_num_neighbors ............................. 2
|
| 12578 |
+
retro_num_retrieved_chunks ...................... 2
|
| 12579 |
+
retro_project_dir ............................... None
|
| 12580 |
+
retro_verify_neighbor_count ..................... True
|
| 12581 |
+
rope_scaling_factor ............................. 8.0
|
| 12582 |
+
rotary_base ..................................... 10000
|
| 12583 |
+
rotary_interleaved .............................. False
|
| 12584 |
+
rotary_percent .................................. 1.0
|
| 12585 |
+
rotary_scaling_factor ........................... 1.0
|
| 12586 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 12587 |
+
run_workload_inspector_server ................... False
|
| 12588 |
+
sample_rate ..................................... 1.0
|
| 12589 |
+
save ............................................ gpt-checkpoint
|
| 12590 |
+
save_interval ................................... 16
|
| 12591 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 12592 |
+
seed ............................................ 1234
|
| 12593 |
+
seq_length ...................................... 40960
|
| 12594 |
+
sequence_parallel ............................... False
|
| 12595 |
+
sgd_momentum .................................... 0.9
|
| 12596 |
+
short_seq_prob .................................. 0.1
|
| 12597 |
+
skip_train ...................................... False
|
| 12598 |
+
skipped_train_samples ........................... 0
|
| 12599 |
+
spec ............................................ None
|
| 12600 |
+
split ........................................... None
|
| 12601 |
+
squared_relu .................................... False
|
| 12602 |
+
start_weight_decay .............................. 0.1
|
| 12603 |
+
straggler_ctrlr_port ............................ 65535
|
| 12604 |
+
straggler_minmax_count .......................... 1
|
| 12605 |
+
suggested_communication_unit_size ............... None
|
| 12606 |
+
swiglu .......................................... False
|
| 12607 |
+
swin_backbone_type .............................. tiny
|
| 12608 |
+
symmetric_ar_type ............................... None
|
| 12609 |
+
te_rng_tracker .................................. False
|
| 12610 |
+
tensor_model_parallel_size ...................... 4
|
| 12611 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 12612 |
+
tensorboard_log_interval ........................ 1
|
| 12613 |
+
tensorboard_queue_size .......................... 1000
|
| 12614 |
+
test_data_path .................................. None
|
| 12615 |
+
test_mode ....................................... False
|
| 12616 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 12617 |
+
tiktoken_pattern ................................ None
|
| 12618 |
+
tiktoken_special_tokens ......................... None
|
| 12619 |
+
timing_log_level ................................ 0
|
| 12620 |
+
timing_log_option ............................... minmax
|
| 12621 |
+
titles_data_path ................................ None
|
| 12622 |
+
tokenizer_model ................................. None
|
| 12623 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 12624 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 12625 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 12626 |
+
tp_comm_bulk_dgrad .............................. True
|
| 12627 |
+
tp_comm_bulk_wgrad .............................. True
|
| 12628 |
+
tp_comm_overlap ................................. False
|
| 12629 |
+
tp_comm_overlap_ag .............................. True
|
| 12630 |
+
tp_comm_overlap_cfg ............................. None
|
| 12631 |
+
tp_comm_overlap_rs .............................. True
|
| 12632 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 12633 |
+
tp_comm_split_ag ................................ True
|
| 12634 |
+
tp_comm_split_rs ................................ True
|
| 12635 |
+
train_data_path ................................. None
|
| 12636 |
+
train_iters ..................................... 10
|
| 12637 |
+
train_samples ................................... None
|
| 12638 |
+
train_sync_interval ............................. None
|
| 12639 |
+
transformer_impl ................................ transformer_engine
|
| 12640 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 12641 |
+
untie_embeddings_and_output_weights ............. False
|
| 12642 |
+
use_checkpoint_args ............................. False
|
| 12643 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 12644 |
+
use_cpu_initialization .......................... None
|
| 12645 |
+
use_custom_fsdp ................................. False
|
| 12646 |
+
use_dist_ckpt ................................... True
|
| 12647 |
+
use_dist_ckpt_deprecated ........................ False
|
| 12648 |
+
use_distributed_optimizer ....................... False
|
| 12649 |
+
use_flash_attn .................................. False
|
| 12650 |
+
use_legacy_models ............................... False
|
| 12651 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 12652 |
+
use_one_sent_docs ............................... False
|
| 12653 |
+
use_persistent_ckpt_worker ...................... False
|
| 12654 |
+
use_precision_aware_optimizer ................... False
|
| 12655 |
+
use_pytorch_profiler ............................ False
|
| 12656 |
+
use_ring_exchange_p2p ........................... False
|
| 12657 |
+
use_rope_scaling ................................ False
|
| 12658 |
+
use_rotary_position_embeddings .................. False
|
| 12659 |
+
use_sharp ....................................... False
|
| 12660 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 12661 |
+
use_torch_fsdp2 ................................. False
|
| 12662 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 12663 |
+
use_tp_pp_dp_mapping ............................ False
|
| 12664 |
+
v_head_dim ...................................... 128
|
| 12665 |
+
valid_data_path ................................. None
|
| 12666 |
+
variable_seq_lengths ............................ False
|
| 12667 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 12668 |
+
vision_backbone_type ............................ vit
|
| 12669 |
+
vision_pretraining .............................. False
|
| 12670 |
+
vision_pretraining_type ......................... classify
|
| 12671 |
+
vocab_extra_ids ................................. 0
|
| 12672 |
+
vocab_file ...................................... vocab.json
|
| 12673 |
+
vocab_size ...................................... None
|
| 12674 |
+
wandb_exp_name ..................................
|
| 12675 |
+
wandb_project ...................................
|
| 12676 |
+
wandb_save_dir ..................................
|
| 12677 |
+
weight_decay .................................... 0.1
|
| 12678 |
+
weight_decay_incr_style ......................... constant
|
| 12679 |
+
wgrad_deferral_limit ............................ 0
|
| 12680 |
+
world_size ...................................... 16
|
| 12681 |
+
yaml_cfg ........................................ None
|
| 12682 |
+
-------------------- end of arguments ---------------------
|
| 12683 |
+
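As a quick sanity check on the parallel layout printed in the arguments above, the model-parallel dimensions multiply into the reported world size; a minimal standalone check (not part of the training code):

    tp, cp, pp, dp = 4, 4, 1, 1      # tensor-, context-, pipeline-, data-parallel sizes from the arguments
    assert tp * cp * pp * dp == 16   # world_size reported above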
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 12684 |
+
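The "constant 1" microbatch count follows from the batch-size arguments; a rough reconstruction, assuming the usual Megatron relation between global and micro batch sizes:

    global_batch_size, micro_batch_size, dp = 1, 1, 1
    num_microbatches = global_batch_size // (micro_batch_size * dp)
    print(num_microbatches)  # 1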
> building GPT2BPETokenizer tokenizer ...
|
| 12685 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12686 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12687 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12688 |
+
> padded vocab (size: 50257) with 431 dummy tokens (new size: 50688)
|
| 12689 |
+
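The padded vocab line follows from make_vocab_size_divisible_by and the tensor-parallel degree; a minimal reproduction of the arithmetic (not the actual Megatron helper):

    import math
    vocab_size, divisible_by, tp = 50257, 128, 4
    multiple = divisible_by * tp                          # 512
    padded = math.ceil(vocab_size / multiple) * multiple  # 50688
    print(padded, padded - vocab_size)                    # 50688 431 dummy tokens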
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12690 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 12691 |
+
> initializing torch distributed ...
|
| 12692 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12693 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12694 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12695 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 12696 |
+
> initialized tensor model parallel with size 4
|
| 12697 |
+
> initialized pipeline model parallel with size 1
|
| 12698 |
+
> setting random seeds to 1234 ...
|
| 12699 |
+
> compiling dataset index builder ...
|
| 12700 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 12701 |
+
make: Nothing to be done for 'default'.
|
| 12702 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 12703 |
+
>>> done with dataset index builder. Compilation time: 0.045 seconds
|
| 12704 |
+
WARNING: constraints for invoking optimized fused softmax kernel are not met. We default back to unfused kernel invocations.
|
| 12705 |
+
> compiling and loading fused kernels ...
|
| 12706 |
+
>>> done with compiling and loading fused kernels. Compilation time: 2.421 seconds
|
| 12707 |
+
time to initialize megatron (seconds): 8.474
|
| 12708 |
+
[after megatron is initialized] datetime: 2025-06-21 21:34:54
|
| 12709 |
+
building GPT model ...
|
| 12710 |
+
>>> embedding
|
| 12711 |
+
>>> decoder
|
| 12712 |
+
>>> output_layer
|
| 12713 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 307825664
|
| 12714 |
+
>>> embedding
|
| 12715 |
+
>>> decoder
|
| 12716 |
+
>>> output_layer
|
| 12717 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 307825664
|
| 12718 |
+
>>> embedding
|
| 12719 |
+
>>> decoder
|
| 12720 |
+
>>> output_layer
|
| 12721 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 307825664
|
| 12722 |
+
>>> embedding
|
| 12723 |
+
>>> decoder
|
| 12724 |
+
>>> output_layer
|
| 12725 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 307825664
|
| 12726 |
+
>>> embedding
|
| 12727 |
+
>>> decoder
|
| 12728 |
+
>>> output_layer
|
| 12729 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 307825664
|
| 12730 |
+
>>> embedding
|
| 12731 |
+
>>> decoder
|
| 12732 |
+
>>> output_layer
|
| 12733 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 307825664
|
| 12734 |
+
>>> embedding
|
| 12735 |
+
>>> decoder
|
| 12736 |
+
>>> output_layer
|
| 12737 |
+
>>> embedding
|
| 12738 |
+
>>> decoder
|
| 12739 |
+
>>> output_layer
|
| 12740 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 307825664
|
| 12741 |
+
> number of parameters on (tensor, pipeline) model parallel rank (2, 0): 307825664
|
| 12742 |
+
>>> embedding
|
| 12743 |
+
>>> decoder
|
| 12744 |
+
>>> output_layer
|
| 12745 |
+
>>> embedding
|
| 12746 |
+
>>> decoder
|
| 12747 |
+
>>> output_layer
|
| 12748 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 307825664
|
| 12749 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 307825664
|
| 12750 |
+
>>> embedding
|
| 12751 |
+
>>> decoder
|
| 12752 |
+
>>> output_layer
|
| 12753 |
+
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 307825664
|
| 12754 |
+
>>> embedding
|
| 12755 |
+
>>> decoder
|
| 12756 |
+
>>> output_layer
|
| 12757 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 307825664
|
| 12758 |
+
>>> embedding
|
| 12759 |
+
>>> decoder
|
| 12760 |
+
>>> output_layer
|
| 12761 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 307825664
|
| 12762 |
+
>>> embedding
|
| 12763 |
+
>>> decoder
|
| 12764 |
+
>>> output_layer
|
| 12765 |
+
> number of parameters on (tensor, pipeline) model parallel rank (3, 0): 307825664
|
| 12766 |
+
>>> embedding
|
| 12767 |
+
>>> decoder
|
| 12768 |
+
>>> output_layer
|
| 12769 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 307825664
|
| 12770 |
+
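The 307825664 parameters reported per tensor-parallel rank can be reproduced from the arguments above. The sketch below is a back-of-the-envelope count under assumed conventions (tied input/output embeddings, layer norms and row-parallel biases replicated on every rank), not the framework's own accounting:

    hidden, ffn, layers = 4096, 16384, 2
    heads, kv_channels, query_groups = 64, 64, 16
    padded_vocab, max_pos, tp = 50688, 40960, 4

    qkv_out = (heads + 2 * query_groups) * kv_channels      # 6144-wide GQA projection
    per_layer = (
        2 * (hidden + hidden)                               # two LayerNorms (weight + bias)
        + hidden * qkv_out // tp + qkv_out // tp            # column-parallel QKV weight + bias shard
        + (hidden // tp) * hidden + hidden                   # row-parallel proj weight + full bias
        + hidden * (ffn // tp) + ffn // tp                   # column-parallel fc1 weight + bias shard
        + (ffn // tp) * hidden + hidden                       # row-parallel fc2 weight + full bias
    )
    total = (
        (padded_vocab // tp) * hidden                        # sharded word embeddings
        + max_pos * hidden                                   # learned absolute position embeddings
        + layers * per_layer
        + 2 * hidden                                         # final LayerNorm
    )
    print(total)  # 307825664

The same element count shows up below as the size of the single gradient all-reduce bucket.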
INFO:megatron.core.distributed.distributed_data_parallel:Setting up DistributedDataParallel with config DistributedDataParallelConfig(grad_reduce_in_fp32=False, overlap_grad_reduce=False, overlap_param_gather=False, align_param_gather=False, use_distributed_optimizer=False, num_distributed_optimizer_instances=1, check_for_nan_in_grad=False, check_for_large_grads=False, bucket_size=None, pad_buckets_for_high_nccl_busbw=False, average_in_collective=False, fp8_param_gather=False, use_custom_fsdp=False, data_parallel_sharding_strategy='no_shard', gradient_reduce_div_fusion=True, suggested_communication_unit_size=None, preserve_fp32_weights=True, keep_fp8_transpose_cache_when_using_custom_fsdp=False, nccl_ub=False, fsdp_double_buffer=False)
|
| 12771 |
+
INFO:megatron.core.distributed.param_and_grad_buffer:Number of buckets for gradient all-reduce / reduce-scatter: 1
|
| 12772 |
+
Params for bucket 1 (307825664 elements, 307825664 padded size):
|
| 12773 |
+
module.decoder.layers.1.mlp.linear_fc1.weight
|
| 12774 |
+
module.decoder.layers.0.mlp.linear_fc1.weight
|
| 12775 |
+
module.decoder.layers.1.mlp.linear_fc2.bias
|
| 12776 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_weight
|
| 12777 |
+
module.decoder.layers.0.self_attention.linear_qkv.weight
|
| 12778 |
+
module.decoder.layers.1.self_attention.linear_qkv.layer_norm_bias
|
| 12779 |
+
module.decoder.layers.0.mlp.linear_fc1.bias
|
| 12780 |
+
module.decoder.layers.1.mlp.linear_fc1.bias
|
| 12781 |
+
module.decoder.layers.0.mlp.linear_fc2.weight
|
| 12782 |
+
module.decoder.layers.0.self_attention.linear_proj.weight
|
| 12783 |
+
module.decoder.layers.1.self_attention.linear_qkv.weight
|
| 12784 |
+
module.decoder.layers.1.self_attention.linear_proj.weight
|
| 12785 |
+
module.decoder.layers.0.self_attention.linear_qkv.bias
|
| 12786 |
+
module.decoder.layers.0.self_attention.linear_proj.bias
|
| 12787 |
+
module.decoder.final_layernorm.bias
|
| 12788 |
+
module.decoder.layers.1.mlp.linear_fc2.weight
|
| 12789 |
+
module.decoder.layers.1.self_attention.linear_proj.bias
|
| 12790 |
+
module.embedding.position_embeddings.weight
|
| 12791 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_bias
|
| 12792 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_bias
|
| 12793 |
+
module.decoder.final_layernorm.weight
|
| 12794 |
+
module.decoder.layers.1.mlp.linear_fc1.layer_norm_weight
|
| 12795 |
+
module.decoder.layers.1.self_attention.linear_qkv.bias
|
| 12796 |
+
module.decoder.layers.0.mlp.linear_fc2.bias
|
| 12797 |
+
module.decoder.layers.0.mlp.linear_fc1.layer_norm_weight
|
| 12798 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_bias
|
| 12799 |
+
module.decoder.layers.0.self_attention.linear_qkv.layer_norm_weight
|
| 12800 |
+
module.embedding.word_embeddings.weight
|
| 12801 |
+
INFO:megatron.core.optimizer:Setting up optimizer with config OptimizerConfig(optimizer='adam', lr=0.0005, min_lr=0.0, decoupled_lr=None, decoupled_min_lr=None, weight_decay=0.1, fp16=True, bf16=False, params_dtype=torch.float16, use_precision_aware_optimizer=False, store_param_remainders=True, main_grads_dtype=torch.float32, main_params_dtype=torch.float32, exp_avg_dtype=torch.float32, exp_avg_sq_dtype=torch.float32, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, hysteresis=2, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, use_distributed_optimizer=False, overlap_param_gather_with_optimizer_step=False, optimizer_cpu_offload=False, optimizer_offload_fraction=1.0, use_torch_optimizer_for_cpu_offload=False, overlap_cpu_optimizer_d2h_h2d=False, pin_cpu_grads=True, pin_cpu_params=True, clip_grad=1.0, log_num_zeros_in_grad=False, barrier_with_L1_time=True, timers=<megatron.core.timers.Timers object at 0x148555f962a0>, config_logger_dir='')
|
| 12802 |
+
INFO:megatron.core.optimizer_param_scheduler:> learning rate decay style: cosine
|
| 12803 |
+
>>> embedding
|
| 12804 |
+
>>> decoder
|
| 12805 |
+
>>> output_layer
|
| 12806 |
+
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 307825664
|
| 12807 |
+
WARNING: could not find the metadata file gpt-checkpoint/latest_checkpointed_iteration.txt
|
| 12808 |
+
will not load any checkpoints and will start from random
|
| 12809 |
+
(min, max) time across ranks (ms):
|
| 12810 |
+
load-checkpoint ................................: (2.68, 3.84)
|
| 12811 |
+
[after model, optimizer, and learning rate scheduler are built] datetime: 2025-06-21 21:34:57
|
| 12812 |
+
> building train, validation, and test datasets ...
|
| 12813 |
+
> datasets target sizes (minimum size):
|
| 12814 |
+
train: 10
|
| 12815 |
+
validation: 1
|
| 12816 |
+
test: 1
|
| 12817 |
+
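These minimum dataset sizes follow from train_iters=10, global_batch_size=1, eval_interval=16 and eval_iters=1; a rough reconstruction, assuming the standard Megatron sizing formula:

    train_iters, global_batch_size = 10, 1
    eval_interval, eval_iters = 16, 1

    train_samples = train_iters * global_batch_size                                       # 10
    valid_samples = (train_iters // eval_interval + 1) * eval_iters * global_batch_size   # 1
    test_samples = eval_iters * global_batch_size                                         # 1
    print(train_samples, valid_samples, test_samples)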
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let mock = True, as both blend and blend_per_split are None
|
| 12818 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split = 1,1,1, an arbitrarily even split, as mock is True
|
| 12819 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_config:Let split_matrix = [(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)]
|
| 12820 |
+
> building train, validation, and test datasets for GPT ...
|
| 12821 |
+
INFO:megatron.core.datasets.blended_megatron_dataset_builder:Building MockGPTDataset splits with sizes=(10, 1, 1) and config=GPTDatasetConfig(random_seed=1234, sequence_length=40960, blend=None, blend_per_split=None, split='1,1,1', split_matrix=[(0, 0.3333333333333333), (0.3333333333333333, 0.6666666666666666), (0.6666666666666666, 1.0)], num_dataset_builder_threads=1, path_to_cache=None, mmap_bin_files=True, mock=True, tokenizer=<megatron.training.tokenizer.tokenizer._GPT2BPETokenizer object at 0x148556418470>, mid_level_dataset_surplus=0.005, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask=True, drop_last_partial_validation_sequence=True, add_extra_token_to_sequence=True, object_storage_cache_path=None)
|
| 12822 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset train indices
|
| 12823 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 12824 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 12825 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.005231 seconds
|
| 12826 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
|
| 12827 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 12828 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset valid indices
|
| 12829 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 12830 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 12831 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001617 seconds
|
| 12832 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1664
|
| 12833 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 12834 |
+
INFO:megatron.core.datasets.gpt_dataset:Build and save the MockGPTDataset test indices
|
| 12835 |
+
DEBUG:megatron.core.datasets.gpt_dataset:> separate_final_epoch: False
|
| 12836 |
+
WARNING:megatron.core.datasets.gpt_dataset:Unable to save MockGPTDataset indexes because path_to_cache is None
|
| 12837 |
+
DEBUG:megatron.core.datasets.gpt_dataset: > time elapsed: 0.001457 seconds
|
| 12838 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of samples: 1667
|
| 12839 |
+
INFO:megatron.core.datasets.gpt_dataset:> total number of epochs: 1
|
| 12840 |
+
> finished creating GPT datasets ...
|
| 12841 |
+
[after dataloaders are built] datetime: 2025-06-21 21:34:57
|
| 12842 |
+
done with setup ...
|
| 12843 |
+
(min, max) time across ranks (ms):
|
| 12844 |
+
model-and-optimizer-setup ......................: (2294.08, 2335.22)
|
| 12845 |
+
train/valid/test-data-iterators-setup ..........: (18.17, 143.69)
|
| 12846 |
+
training ...
|
| 12847 |
+
Setting rerun_state_machine.current_iteration to 0...
|
| 12848 |
+
[before the start of training step] datetime: 2025-06-21 21:34:57
|
attnserver.run_attnserver.slurm.sh.343222.err.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
attnserver.run_attnserver.slurm.sh.343222.out.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
attnserver.run_attnserver.slurm.sh.343223.err.log
CHANGED
|
@@ -610,3 +610,195 @@ W0621 21:33:06.777000 2474606 site-packages/torch/distributed/run.py:766] ******
|
|
| 610 |
warnings.warn(
|
| 611 |
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 612 |
warnings.warn(
|
| 613 |
+
[rank0]: Traceback (most recent call last):
|
| 614 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 615 |
+
[rank0]: pretrain(
|
| 616 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
|
| 617 |
+
[rank0]: save_checkpoint(
|
| 618 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
|
| 619 |
+
[rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
|
| 620 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 621 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
|
| 622 |
+
[rank0]: common_strategy.save_common(state_dict, checkpoint_dir)
|
| 623 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
|
| 624 |
+
[rank0]: torch.save(common_state_dict, path)
|
| 625 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
|
| 626 |
+
[rank0]: with _open_zipfile_writer(f) as opened_zipfile:
|
| 627 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
|
| 628 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
|
| 629 |
+
[rank0]: return container(name_or_buffer)
|
| 630 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 631 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
|
| 632 |
+
[rank0]: torch._C.PyTorchFileWriter(
|
| 633 |
+
[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
|
| 634 |
+
[rank0]:[W621 21:34:36.978710634 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 635 |
+
W0621 21:34:41.705000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518357 closing signal SIGTERM
|
| 636 |
+
W0621 21:34:41.708000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518358 closing signal SIGTERM
|
| 637 |
+
W0621 21:34:41.711000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518359 closing signal SIGTERM
|
| 638 |
+
W0621 21:34:41.714000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518360 closing signal SIGTERM
|
| 639 |
+
W0621 21:34:41.718000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518361 closing signal SIGTERM
|
| 640 |
+
W0621 21:34:41.721000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518362 closing signal SIGTERM
|
| 641 |
+
W0621 21:34:41.740000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2518363 closing signal SIGTERM
|
| 642 |
+
E0621 21:34:43.727000 2518285 site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 0 (pid: 2518356) of binary: /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 643 |
+
Traceback (most recent call last):
|
| 644 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 645 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 646 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 647 |
+
main()
|
| 648 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 649 |
+
return arg(*args, **kwargs)
|
| 650 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 651 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 652 |
+
launch(args)
|
| 653 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 654 |
+
run(args)
|
| 655 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 656 |
+
elastic_launch(
|
| 657 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 658 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 659 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 660 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
|
| 661 |
+
raise ChildFailedError(
|
| 662 |
+
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
|
| 663 |
+
============================================================
|
| 664 |
+
./pretrain_gpt_profile.py FAILED
|
| 665 |
+
------------------------------------------------------------
|
| 666 |
+
Failures:
|
| 667 |
+
<NO_OTHER_FAILURES>
|
| 668 |
+
------------------------------------------------------------
|
| 669 |
+
Root Cause (first observed failure):
|
| 670 |
+
[0]:
|
| 671 |
+
time : 2025-06-21_21:34:41
|
| 672 |
+
host : fs-mbz-gpu-703
|
| 673 |
+
rank : 0 (local_rank: 0)
|
| 674 |
+
exitcode : 1 (pid: 2518356)
|
| 675 |
+
error_file: <N/A>
|
| 676 |
+
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
|
| 677 |
+
============================================================
|
| 678 |
+
+ set +x
|
| 679 |
+
W0621 21:34:44.078000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474676 closing signal SIGTERM
|
| 680 |
+
W0621 21:34:44.082000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474677 closing signal SIGTERM
|
| 681 |
+
W0621 21:34:44.084000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474678 closing signal SIGTERM
|
| 682 |
+
W0621 21:34:44.086000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474679 closing signal SIGTERM
|
| 683 |
+
W0621 21:34:44.089000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474680 closing signal SIGTERM
|
| 684 |
+
W0621 21:34:44.108000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474681 closing signal SIGTERM
|
| 685 |
+
W0621 21:34:44.132000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474682 closing signal SIGTERM
|
| 686 |
+
W0621 21:34:44.137000 2474606 site-packages/torch/distributed/elastic/multiprocessing/api.py:900] Sending process 2474683 closing signal SIGTERM
|
| 687 |
+
[W621 21:34:46.046129759 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-786]:35214, remote=[fs-mbz-gpu-703]:29500): Broken pipe
|
| 688 |
+
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
|
| 689 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1466431785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 690 |
+
frame #1: <unknown function> + 0x5ba8afe (0x14662c45aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 691 |
+
frame #2: <unknown function> + 0x5baa358 (0x14662c45c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 692 |
+
frame #3: <unknown function> + 0x5babb3e (0x14662c45db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 693 |
+
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14662c457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 694 |
+
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14662c457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 695 |
+
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14662c458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 696 |
+
frame #7: <unknown function> + 0xc0f526 (0x14663b78b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 697 |
+
frame #8: <unknown function> + 0x37f17d (0x14663aefb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 698 |
+
<omitting python frames>
|
| 699 |
+
frame #26: <unknown function> + 0x29d90 (0x1466444cbd90 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 700 |
+
frame #27: __libc_start_main + 0x80 (0x1466444cbe40 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 701 |
+
|
| 702 |
+
W0621 21:34:46.427000 2474606 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-786_2474606_0' has failed to shutdown the rendezvous '343223' due to an error of type RendezvousConnectionError.
|
| 703 |
+
[W621 21:34:46.060885070 TCPStore.cpp:106] [c10d] sendBytes failed on SocketImpl(fd=3, addr=[fs-mbz-gpu-786]:35214, remote=[fs-mbz-gpu-703]:29500): Broken pipe
|
| 704 |
+
Exception raised from sendBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:653 (most recent call first):
|
| 705 |
+
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x1466431785e8 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libc10.so)
|
| 706 |
+
frame #1: <unknown function> + 0x5ba8afe (0x14662c45aafe in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 707 |
+
frame #2: <unknown function> + 0x5baa358 (0x14662c45c358 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 708 |
+
frame #3: <unknown function> + 0x5babb3e (0x14662c45db3e in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 709 |
+
frame #4: c10d::TCPStore::doWait(c10::ArrayRef<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x1a6 (0x14662c457ac6 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 710 |
+
frame #5: c10d::TCPStore::doGet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x33 (0x14662c457ea3 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 711 |
+
frame #6: c10d::TCPStore::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xab (0x14662c458f8b in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so)
|
| 712 |
+
frame #7: <unknown function> + 0xc0f526 (0x14663b78b526 in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 713 |
+
frame #8: <unknown function> + 0x37f17d (0x14663aefb17d in /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/lib/libtorch_python.so)
|
| 714 |
+
<omitting python frames>
|
| 715 |
+
frame #26: <unknown function> + 0x29d90 (0x1466444cbd90 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 716 |
+
frame #27: __libc_start_main + 0x80 (0x1466444cbe40 in /lib/x86_64-linux-gnu/libc.so.6)
|
| 717 |
+
|
| 718 |
+
W0621 21:34:46.438000 2474606 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1292] The node 'fs-mbz-gpu-786_2474606_0' has failed to shutdown the rendezvous '343223' due to an error of type RendezvousConnectionError.
|
| 719 |
+
Traceback (most recent call last):
|
| 720 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 117, in _call_store
|
| 721 |
+
return getattr(self._store, store_op)(*args, **kwargs)
|
| 722 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 723 |
+
torch.distributed.DistNetworkError: failed to recv, got 0 bytes
|
| 724 |
+
|
| 725 |
+
The above exception was the direct cause of the following exception:
|
| 726 |
+
|
| 727 |
+
Traceback (most recent call last):
|
| 728 |
+
File "<frozen runpy>", line 198, in _run_module_as_main
|
| 729 |
+
File "<frozen runpy>", line 88, in _run_code
|
| 730 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 207, in <module>
|
| 731 |
+
main()
|
| 732 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/typing_extensions.py", line 3253, in wrapper
|
| 733 |
+
return arg(*args, **kwargs)
|
| 734 |
+
^^^^^^^^^^^^^^^^^^^^
|
| 735 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 203, in main
|
| 736 |
+
launch(args)
|
| 737 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py", line 188, in launch
|
| 738 |
+
run(args)
|
| 739 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
|
| 740 |
+
elastic_launch(
|
| 741 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
|
| 742 |
+
return launch_agent(self._config, self._entrypoint, list(args))
|
| 743 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 744 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
|
| 745 |
+
result = agent.run()
|
| 746 |
+
^^^^^^^^^^^
|
| 747 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/metrics/api.py", line 138, in wrapper
|
| 748 |
+
result = f(*args, **kwargs)
|
| 749 |
+
^^^^^^^^^^^^^^^^^^
|
| 750 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 711, in run
|
| 751 |
+
result = self._invoke_run(role)
|
| 752 |
+
^^^^^^^^^^^^^^^^^^^^^^
|
| 753 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/agent/server/api.py", line 906, in _invoke_run
|
| 754 |
+
num_nodes_waiting = rdzv_handler.num_nodes_waiting()
|
| 755 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 756 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1263, in num_nodes_waiting
|
| 757 |
+
self._state_holder.sync()
|
| 758 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 437, in sync
|
| 759 |
+
get_response = self._backend.get_state()
|
| 760 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 761 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 75, in get_state
|
| 762 |
+
base64_state: bytes = self._call_store("get", self._key)
|
| 763 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 764 |
+
File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 119, in _call_store
|
| 765 |
+
raise RendezvousConnectionError(
|
| 766 |
+
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
|
| 767 |
+
+ set +x
|
| 768 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 769 |
+
+ export PROF_CTX_LENGTH=4096
|
| 770 |
+
+ PROF_CTX_LENGTH=4096
|
| 771 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L4096*tp4.cp4.bs16.json'
|
| 772 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L4096*tp4.cp4.bs16.json' ']'
|
| 773 |
+
+ echo 'Running ctx_length=4096, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=16'
|
| 774 |
+
+ srun bash ./attnserver.sh
|
| 775 |
+
+ which python3
|
| 776 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343223 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-703:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 4096 --max-position-embeddings 4096 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 777 |
+
+ which python3
|
| 778 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343223 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-703:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 4096 --max-position-embeddings 4096 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 779 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 780 |
+
and will be removed in future. Use torchrun.
|
| 781 |
+
Note that --use-env is set by default in torchrun.
|
| 782 |
+
If your script expects `--local-rank` argument to be set, please
|
| 783 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 784 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 785 |
+
further instructions
|
| 786 |
+
|
| 787 |
+
main()
|
| 788 |
+
W0621 21:34:49.559000 2477421 site-packages/torch/distributed/run.py:766]
|
| 789 |
+
W0621 21:34:49.559000 2477421 site-packages/torch/distributed/run.py:766] *****************************************
|
| 790 |
+
W0621 21:34:49.559000 2477421 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 791 |
+
W0621 21:34:49.559000 2477421 site-packages/torch/distributed/run.py:766] *****************************************
|
| 792 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 793 |
+
and will be removed in future. Use torchrun.
|
| 794 |
+
Note that --use-env is set by default in torchrun.
|
| 795 |
+
If your script expects `--local-rank` argument to be set, please
|
| 796 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 797 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 798 |
+
further instructions
|
| 799 |
+
|
| 800 |
+
main()
|
| 801 |
+
W0621 21:34:49.719000 2521169 site-packages/torch/distributed/run.py:766]
|
| 802 |
+
W0621 21:34:49.719000 2521169 site-packages/torch/distributed/run.py:766] *****************************************
|
| 803 |
+
W0621 21:34:49.719000 2521169 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 804 |
+
W0621 21:34:49.719000 2521169 site-packages/torch/distributed/run.py:766] *****************************************
|
attnserver.run_attnserver.slurm.sh.343223.out.log
CHANGED
|
@@ -3248,3 +3248,1556 @@ batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
|
| 3248 |
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3249 |
Start exporting trace 0
|
| 3250 |
Done exporting trace 0
|
| 3251 |
+
Number of parameters in transformer block in billions: 0.35
|
| 3252 |
+
Number of parameters in embedding layers in billions: 0.21
|
| 3253 |
+
Total number of parameters in billions: 0.56
|
| 3254 |
+
Number of parameters in most loaded shard in billions: 0.1400
|
| 3255 |
+
Theoretical memory footprints: weight and optimizer=2403.18 MB
|
| 3256 |
+
[Rank 1] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54076.0 | max reserved: 54076.0
|
| 3257 |
+
[2025-06-21 21:33:54] iteration 1/ 10 | consumed samples: 1 | elapsed time per iteration (ms): 12990.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 4294967296.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 3258 |
+
[Rank 0] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54076.0 | max reserved: 54076.0
|
| 3259 |
+
[Rank 15] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54500.0 | max reserved: 54500.0
|
| 3260 |
+
[Rank 12] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53840.0 | max reserved: 53840.0
|
| 3261 |
+
[Rank 14] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54500.0 | max reserved: 54500.0
[Rank 9] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53348.0 | max reserved: 53348.0
|
| 3262 |
+
[Rank 13] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53968.0 | max reserved: 53968.0
|
| 3263 |
+
|
| 3264 |
+
[Rank 6] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53200.0 | max reserved: 53200.0
|
| 3265 |
+
[Rank 10] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53348.0 | max reserved: 53348.0
|
| 3266 |
+
[Rank 11] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53348.0 | max reserved: 53348.0
|
| 3267 |
+
[Rank 8] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53348.0 | max reserved: 53348.0
|
| 3268 |
+
[Rank 3] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53052.0 | max reserved: 53052.0
|
| 3269 |
+
[Rank 7] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53200.0 | max reserved: 53200.0
[Rank 5] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54224.0 | max reserved: 54224.0
|
| 3270 |
+
|
| 3271 |
+
[Rank 2] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 53308.0 | max reserved: 53308.0
|
| 3272 |
+
[Rank 4] (after 1 iterations) memory (MB) | allocated: 22346.16064453125 | max allocated: 49402.65283203125 | reserved: 54224.0 | max reserved: 54224.0
|
| 3273 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3274 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3275 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3276 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3277 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3278 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3279 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3280 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3281 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3282 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3283 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3284 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3285 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3286 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3287 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3288 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3289 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3290 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3291 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3292 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3293 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3294 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3295 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3296 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3297 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3298 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3299 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3300 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3301 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3302 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3303 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3304 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3305 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3306 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3307 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3308 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3309 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3310 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3311 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3312 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3313 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3314 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3315 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3316 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3317 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3318 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3319 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3320 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3321 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3322 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3323 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3324 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3325 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3326 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3327 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3328 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3329 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3330 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3331 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3332 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3333 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3334 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3335 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3336 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3337 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3338 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3339 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3340 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3341 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3342 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3343 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3344 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3345 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3346 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3347 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3348 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3349 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3350 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3351 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3352 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3353 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3354 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3355 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3356 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3357 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3358 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3359 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3360 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3361 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3362 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3363 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3364 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3365 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3366 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3367 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3368 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3369 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3370 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3371 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3372 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3373 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3374 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3375 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3376 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3377 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3378 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3379 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3380 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3381 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3382 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3383 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3384 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3385 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3386 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3387 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3388 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3389 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3390 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3391 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3392 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3393 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3394 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3395 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3396 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3397 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3398 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3399 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3400 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3401 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3402 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3403 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3404 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3405 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3406 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3407 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3408 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3409 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3410 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3411 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3412 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3413 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3414 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3415 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3416 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3417 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3418 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3419 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3420 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3421 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3422 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3423 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3424 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3425 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3426 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3427 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3428 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3429 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3430 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3431 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3432 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3433 |
+
Start exporting trace 1
|
| 3434 |
+
Done exporting trace 1
|
| 3435 |
+
[2025-06-21 21:33:55] iteration 2/ 10 | consumed samples: 2 | elapsed time per iteration (ms): 961.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 2147483648.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 3436 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3437 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3438 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3439 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3440 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3441 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3442 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3443 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3444 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3445 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3446 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3447 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3448 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3449 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3450 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3451 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3452 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3453 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3454 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3455 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3456 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3457 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3458 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3459 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3460 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3461 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3462 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3463 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3464 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3465 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3466 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3467 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3468 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3469 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3470 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3471 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3472 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3473 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3474 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3475 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3476 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3477 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3478 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3479 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3480 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3481 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3482 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3483 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3484 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3485 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3486 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3487 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3488 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3489 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3490 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3491 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3492 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3493 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3494 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3495 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3496 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3497 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3498 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3499 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3500 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3501 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3502 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3503 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3504 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3505 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3506 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3507 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3508 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3509 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3510 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3511 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3512 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3513 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3514 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3515 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3516 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3517 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3518 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3519 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3520 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3521 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3522 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3523 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3524 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3525 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3526 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3527 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3528 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3529 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3530 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3531 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3532 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3533 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3534 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3535 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3536 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3537 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3538 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3539 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3540 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3541 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3542 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3543 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3544 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3545 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3546 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3547 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3548 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3549 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3550 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3551 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3552 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3553 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3554 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3555 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3556 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3557 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3558 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3559 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3560 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3561 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3562 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3563 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3564 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3565 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3566 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3567 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3568 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3569 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3570 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3571 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3572 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3573 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3574 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3575 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3576 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3577 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3578 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3579 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3580 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3581 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3582 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3583 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3584 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3585 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3586 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3587 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3588 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3589 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3590 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3591 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3592 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3593 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3594 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3595 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3596 |
+
Start exporting trace 2
|
| 3597 |
+
Done exporting trace 2
|
| 3598 |
+
[2025-06-21 21:33:56] iteration 3/ 10 | consumed samples: 3 | elapsed time per iteration (ms): 932.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 1073741824.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
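Note: the paired "batch tensor" / "batch tensor after cp" lines above show each rank keeping only an 8192-token slice of the full 32768-token sequence, i.e. a context-parallel size of 4, while the attention mask keeps its full 32768 key length ([16, 1, 8192, 32768]). A minimal sketch of that slicing, using a hypothetical helper under simplified assumptions (a contiguous per-rank slice) rather than the actual Megatron-LM slicing code, which uses a different chunk layout for load balancing:

import torch

def slice_batch_for_cp_rank(batch, cp_size, cp_rank):
    # tokens/labels/loss_mask/position_ids: [b, s] -> [b, s // cp_size]
    # attention_mask: [b, 1, s, s] -> [b, 1, s // cp_size, s]
    out = {}
    for name, t in batch.items():
        if name == "attention_mask":
            chunk = t.shape[2] // cp_size
            out[name] = t[:, :, cp_rank * chunk:(cp_rank + 1) * chunk, :]
        else:
            chunk = t.shape[1] // cp_size
            out[name] = t[:, cp_rank * chunk:(cp_rank + 1) * chunk]
    return out

With cp_size=4, a [16, 32768] tokens tensor becomes [16, 8192] per rank, matching the shapes printed in this log.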
| 3599 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3600 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3601 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3602 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3603 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3604 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3605 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3606 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3607 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3608 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3609 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3610 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3611 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3612 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3613 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3614 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3615 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3616 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3617 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3618 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3619 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3620 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3621 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3622 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3623 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3624 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3625 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3626 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3627 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3628 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3629 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3630 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3631 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3632 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3633 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3634 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3635 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3636 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3637 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3638 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3639 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3640 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3641 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3642 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3643 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3644 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3645 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3646 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3647 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3648 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3649 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3650 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3651 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3652 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3653 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3654 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3655 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3656 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3657 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3658 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3659 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3660 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3661 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3662 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3663 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3664 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3665 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3666 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3667 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3668 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3669 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3670 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3671 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3672 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3673 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3674 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3675 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3676 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3677 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3678 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3679 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3680 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3681 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3682 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3683 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3684 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3685 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3686 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3687 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3688 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3689 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3690 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3691 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3692 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3693 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3694 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3695 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3696 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3697 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3698 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3699 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3700 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3701 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3702 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3703 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3704 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3705 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3706 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3707 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3708 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3709 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3710 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3711 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3712 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3713 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3714 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3715 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3716 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3717 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3718 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3719 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3720 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3721 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3722 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3723 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3724 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3725 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3726 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3727 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3728 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3729 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3730 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3731 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3732 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3733 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3734 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3735 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3736 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3737 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3738 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3739 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3740 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3741 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3742 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3743 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3744 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3745 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3746 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3747 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3748 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3749 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3750 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3751 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3752 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3753 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3754 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3755 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3756 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3757 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3758 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3759 |
+
Start exporting trace 3
|
| 3760 |
+
Done exporting trace 3
|
| 3761 |
+
[2025-06-21 21:33:57] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 922.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 3762 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3763 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3764 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3765 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3766 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3767 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3768 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3769 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3770 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3771 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3772 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3773 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3774 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3775 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3776 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3777 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3778 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3779 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3780 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3781 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3782 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3783 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3784 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3785 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3786 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3787 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3788 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3789 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3790 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3791 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3792 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3793 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3794 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3795 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3796 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3797 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3798 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3799 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3800 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3801 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3802 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3803 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3804 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3805 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3806 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3807 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3808 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3809 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3810 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3811 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3812 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3813 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3814 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3815 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3816 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3817 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3818 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3819 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3820 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3821 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3822 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3823 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3824 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3825 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3826 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3827 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3828 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3829 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3830 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3831 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3832 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3833 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3834 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3835 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3836 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3837 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3838 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3839 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3840 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3841 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3842 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3843 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3844 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3845 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3846 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3847 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3848 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3849 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3850 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3851 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3852 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3853 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3854 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3855 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3856 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3857 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3858 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3859 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3860 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3861 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3862 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3863 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3864 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3865 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3866 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3867 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3868 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3869 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3870 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3871 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3872 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3873 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3874 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3875 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3876 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3877 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3878 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3879 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3880 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3881 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3882 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3883 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3884 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3885 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3886 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3887 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3888 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3889 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3890 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3891 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3892 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3893 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3894 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3895 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3896 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3897 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3898 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3899 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3900 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3901 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3902 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3903 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3904 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3905 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3906 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3907 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3908 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3909 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3910 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3911 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3912 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3913 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3914 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3915 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3916 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3917 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3918 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3919 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3920 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3921 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3922 |
+
Start exporting trace 4
|
| 3923 |
+
Done exporting trace 4
|
| 3924 |
+
[2025-06-21 21:33:58] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 928.7 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 3925 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3926 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3927 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3928 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3929 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3930 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3931 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3932 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3933 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3934 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3935 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3936 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3937 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3938 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3939 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3940 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3941 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3942 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3943 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3944 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3945 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3946 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3947 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3948 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3949 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3950 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3951 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3952 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3953 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3954 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3955 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3956 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3957 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3958 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3959 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3960 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3961 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3962 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3963 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3964 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3965 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3966 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3967 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3968 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3969 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3970 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3971 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3972 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3973 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3974 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3975 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3976 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3977 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 3978 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3979 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3980 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3981 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3982 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3983 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3984 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3985 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3986 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3987 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3988 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3989 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3990 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 3991 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 3992 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 3993 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 3994 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 3995 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 3996 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 3997 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 3998 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 3999 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4000 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4001 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4002 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4003 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4004 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4005 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4006 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4007 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4008 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4009 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4010 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4011 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4012 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4013 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4014 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4015 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4016 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4017 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4018 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4019 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4020 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4021 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4022 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4023 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4024 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4025 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4026 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4027 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4028 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4029 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4030 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4031 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4032 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4033 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4034 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4035 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4036 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4037 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4038 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4039 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4040 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4041 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4042 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4043 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4044 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4045 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4046 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4047 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4048 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4049 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4050 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4051 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4052 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4053 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4054 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4055 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4056 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4057 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4058 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4059 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4060 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4061 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4062 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4063 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4064 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4065 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4066 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4067 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4068 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4069 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4070 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4071 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4072 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4073 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4074 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4075 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4076 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4077 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4078 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4079 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4080 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4081 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4082 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4083 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4084 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4085 |
+
Start exporting trace 5
|
| 4086 |
+
Done exporting trace 5
|
| 4087 |
+
[2025-06-21 21:33:59] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 928.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
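Note: every iteration line so far reports one skipped iteration while the loss scale halves each step (1073741824 -> 536870912 -> 268435456 -> 134217728), the usual dynamic-loss-scaling backoff when fp16 gradients overflow at the initial scale of 2^30. A minimal sketch of that update rule, assuming a generic scaler rather than Megatron-LM's own optimizer code:

class DynamicLossScaler:
    def __init__(self, init_scale=2.0 ** 30, backoff=0.5, growth_interval=1000):
        self.scale = init_scale          # 1073741824.0, as in the log
        self.backoff = backoff
        self.growth_interval = growth_interval
        self.good_steps = 0

    def update(self, found_overflow):
        if found_overflow:
            # Overflow: skip the optimizer step and halve the scale.
            self.scale *= self.backoff
            self.good_steps = 0
        else:
            # After enough clean steps, grow the scale again.
            self.good_steps += 1
            if self.good_steps % self.growth_interval == 0:
                self.scale *= 2.0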
| 4088 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4089 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4090 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4091 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4092 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4093 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4094 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4095 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4096 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4097 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4098 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4099 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4100 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4101 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4102 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4103 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4104 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4105 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4106 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4107 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4108 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4109 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4110 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4111 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4112 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4113 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4114 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4115 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4116 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4117 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4118 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4119 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4120 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4121 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4122 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4123 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4124 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4125 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4126 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4127 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4128 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4129 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4130 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4131 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4132 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4133 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4134 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4135 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4136 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4137 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4138 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4139 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4140 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4141 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4142 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4143 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4144 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4145 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4146 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4147 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4148 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4149 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4150 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4151 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4152 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4153 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4154 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4155 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4156 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4157 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4158 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4159 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4160 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4161 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4162 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4163 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4164 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4165 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4166 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4167 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4168 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4169 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4170 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4171 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4172 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4173 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4174 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4175 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4176 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4177 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4178 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4179 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4180 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4181 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4182 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4183 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4184 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4185 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4186 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4187 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4188 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4189 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4190 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4191 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4192 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4193 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4194 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4195 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4196 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4197 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4198 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4199 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4200 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4201 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4202 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4203 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4204 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4205 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4206 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4207 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4208 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4209 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4210 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4211 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4212 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4213 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4214 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4215 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4216 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4217 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4218 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4219 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4220 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4221 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4222 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4223 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4224 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4225 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4226 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4227 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4228 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4229 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4230 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4231 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4232 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4233 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4234 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4235 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4236 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4237 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4238 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4239 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4240 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4241 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4242 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4243 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4244 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4245 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4246 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4247 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4248 |
+
Start exporting trace 6
|
| 4249 |
+
Done exporting trace 6
|
| 4250 |
+
[2025-06-21 21:34:00] iteration 7/ 10 | consumed samples: 7 | elapsed time per iteration (ms): 930.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 67108864.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 4251 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4252 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4253 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4254 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4255 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4256 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4257 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4258 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4259 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4260 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4261 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4262 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4263 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4264 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4265 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4266 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4267 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4268 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4269 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4270 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4271 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4272 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4273 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4274 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4275 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4276 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4277 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4278 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4279 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4280 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4281 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4282 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4283 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4284 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4285 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4286 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4287 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4288 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4289 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4290 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4291 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4292 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4293 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4294 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4295 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4296 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4297 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4298 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4299 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4300 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4301 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4302 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4303 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4304 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4305 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4306 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4307 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4308 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4309 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4310 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4311 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4312 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4313 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4314 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4315 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4316 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4317 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4318 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4319 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4320 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4321 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4322 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4323 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4324 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4325 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4326 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4327 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4328 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4329 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4330 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4331 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4332 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4333 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4334 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4335 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4336 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4337 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4338 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4339 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4340 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4341 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4342 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4343 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4344 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4345 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4346 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4347 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4348 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4349 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4350 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4351 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4352 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4353 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4354 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4355 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4356 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4357 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4358 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4359 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4360 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4361 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4362 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4363 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4364 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4365 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4366 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4367 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4368 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4369 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4370 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4371 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4372 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4373 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4374 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4375 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4376 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4377 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4378 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4379 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4380 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4381 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4382 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4383 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4384 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4385 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4386 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4387 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4388 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4389 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4390 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4391 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4392 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4393 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4394 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4395 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4396 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4397 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4398 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4399 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4400 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4401 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4402 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4403 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4404 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4405 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4406 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4407 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4408 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4409 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4410 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4411 |
+
Start exporting trace 7
|
| 4412 |
+
Done exporting trace 7
|
| 4413 |
+
[2025-06-21 21:34:01] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 928.6 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 4414 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4415 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4416 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4417 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4418 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4419 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4420 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4421 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4422 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4423 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4424 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4425 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4426 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4427 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4428 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4429 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4430 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4431 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4432 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4433 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4434 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4435 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4436 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4437 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4438 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4439 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4440 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4441 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4442 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4443 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4444 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4445 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4446 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4447 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4448 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4449 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4450 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4451 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4452 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4453 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4454 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4455 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4456 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4457 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4458 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4459 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4460 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4461 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4462 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4463 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4464 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4465 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4466 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4467 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4468 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4469 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4470 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4471 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4472 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4473 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4474 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4475 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4476 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4477 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4478 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4479 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4480 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4481 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4482 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4483 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4484 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4485 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4486 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4487 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4488 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4489 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4490 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4491 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4492 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4493 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4494 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4495 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4496 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4497 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4498 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4499 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4500 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4501 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4502 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4503 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4504 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4505 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4506 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4507 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4508 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4509 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4510 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4511 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4512 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4513 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4514 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4515 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4516 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4517 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4518 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4519 |
+
batch tensor: position_ids torch.Size([16, 32768])
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4520 |
+
|
| 4521 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4522 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4523 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4524 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4525 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4526 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4527 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4528 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4529 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4530 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4531 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4532 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4533 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4534 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4535 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4536 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4537 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4538 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4539 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4540 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4541 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4542 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4543 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4544 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4545 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4546 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4547 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4548 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4549 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4550 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4551 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4552 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4553 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4554 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4555 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4556 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4557 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4558 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4559 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4560 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4561 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4562 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4563 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4564 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4565 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4566 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4567 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4568 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4569 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4570 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4571 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4572 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4573 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4574 |
+
Start exporting trace 8
|
| 4575 |
+
Done exporting trace 8
|
| 4576 |
+
[2025-06-21 21:34:02] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 926.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 4577 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4578 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4579 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4580 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4581 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4582 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4583 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4584 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4585 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4586 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4587 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4588 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4589 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4590 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4591 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4592 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4593 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4594 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4595 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4596 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4597 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4598 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4599 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4600 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4601 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4602 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4603 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4604 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4605 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4606 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4607 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4608 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4609 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4610 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4611 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4612 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4613 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4614 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4615 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4616 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4617 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4618 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4619 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4620 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4621 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4622 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4623 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4624 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4625 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4626 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4627 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4628 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4629 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4630 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4631 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4632 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4633 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4634 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4635 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4636 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4637 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4638 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4639 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4640 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4641 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4642 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4643 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4644 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4645 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4646 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4647 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4648 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4649 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4650 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4651 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4652 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4653 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4654 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4655 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4656 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4657 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4658 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4659 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4660 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4661 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4662 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4663 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4664 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4665 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4666 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4667 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4668 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4669 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4670 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4671 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4672 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4673 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4674 |
+
batch tensor: tokens torch.Size([16, 32768])
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4675 |
+
|
| 4676 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4677 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4678 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4679 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4680 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4681 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4682 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4683 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4684 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4685 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4686 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4687 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4688 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4689 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4690 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4691 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4692 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4693 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4694 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4695 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4696 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4697 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4698 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4699 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4700 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4701 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4702 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4703 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4704 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4705 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4706 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4707 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4708 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4709 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4710 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4711 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4712 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4713 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4714 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4715 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4716 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4717 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4718 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4719 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4720 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4721 |
+
batch tensor: tokens torch.Size([16, 32768])
|
| 4722 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4723 |
+
batch tensor: labels torch.Size([16, 32768])
|
| 4724 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4725 |
+
batch tensor: loss_mask torch.Size([16, 32768])
|
| 4726 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4727 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4728 |
+
batch tensor: attention_mask torch.Size([16, 1, 32768, 32768])
|
| 4729 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4730 |
+
batch tensor: position_ids torch.Size([16, 32768])
|
| 4731 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4732 |
+
batch tensor after cp: tokens torch.Size([16, 8192])
|
| 4733 |
+
batch tensor after cp: labels torch.Size([16, 8192])
|
| 4734 |
+
batch tensor after cp: loss_mask torch.Size([16, 8192])
|
| 4735 |
+
batch tensor after cp: attention_mask torch.Size([16, 1, 8192, 32768])
|
| 4736 |
+
batch tensor after cp: position_ids torch.Size([16, 8192])
|
| 4737 |
+
Start exporting trace 9
|
| 4738 |
+
Done exporting trace 9
|
| 4739 |
+
[2025-06-21 21:34:03] iteration 10/ 10 | consumed samples: 10 | elapsed time per iteration (ms): 924.0 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 8388608.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
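Across iterations 8, 9 and 10 every step is reported as skipped and the loss scale halves each time (33554432.0 -> 16777216.0 -> 8388608.0), which is the usual fp16 dynamic-loss-scaling backoff after a gradient overflow. A minimal sketch of that backoff logic follows; the class and parameter names are illustrative assumptions, not Megatron's exact loss scaler:

```python
class DynamicLossScaler:
    """Illustrative only: halve the scale after an overflow (the step is skipped),
    grow it again after a window of clean steps."""

    def __init__(self, init_scale=2.0 ** 25, backoff=0.5, growth=2.0, growth_interval=1000):
        self.scale = init_scale
        self.backoff = backoff
        self.growth = growth
        self.growth_interval = growth_interval
        self._clean_steps = 0

    def update(self, found_overflow):
        """Return True when the optimizer step should be skipped."""
        if found_overflow:
            self.scale *= self.backoff
            self._clean_steps = 0
            return True
        self._clean_steps += 1
        if self._clean_steps % self.growth_interval == 0:
            self.scale *= self.growth
        return False

scaler = DynamicLossScaler(init_scale=33554432.0)
for it in (8, 9, 10):
    skipped = scaler.update(found_overflow=True)  # every iteration overflows in this run
    print(f"iteration {it}: skipped={skipped}, next loss scale={scaler.scale:.1f}")
```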
| 4740 |
+
[after training is done] datetime: 2025-06-21 21:34:03
|
| 4741 |
+
saving checkpoint at iteration 10 to gpt-checkpoint in torch_dist format
|
| 4742 |
+
DEBUG:megatron.training.checkpointing:rank: 15, takes 0.026693105697631836 to prepare state dict for ckpt
|
| 4743 |
+
DEBUG:megatron.training.checkpointing:rank: 9, takes 0.0267791748046875 to prepare state dict for ckpt
|
| 4744 |
+
DEBUG:megatron.training.checkpointing:rank: 13, takes 0.027100086212158203 to prepare state dict for ckpt
|
| 4745 |
+
DEBUG:megatron.training.checkpointing:rank: 11, takes 0.027242422103881836 to prepare state dict for ckpt
|
| 4746 |
+
DEBUG:megatron.training.checkpointing:rank: 10, takes 0.02791142463684082 to prepare state dict for ckpt
|
| 4747 |
+
DEBUG:megatron.training.checkpointing:rank: 6, takes 0.030647754669189453 to prepare state dict for ckpt
|
| 4748 |
+
DEBUG:megatron.training.checkpointing:rank: 12, takes 0.028069019317626953 to prepare state dict for ckpt
|
| 4749 |
+
DEBUG:megatron.training.checkpointing:rank: 5, takes 0.030666589736938477 to prepare state dict for ckpt
|
| 4750 |
+
DEBUG:megatron.training.checkpointing:rank: 7, takes 0.030682086944580078 to prepare state dict for ckpt
|
| 4751 |
+
DEBUG:megatron.training.checkpointing:rank: 14, takes 0.028278112411499023 to prepare state dict for ckpt
|
| 4752 |
+
DEBUG:megatron.training.checkpointing:rank: 3, takes 0.030695438385009766 to prepare state dict for ckpt
|
| 4753 |
+
DEBUG:megatron.training.checkpointing:rank: 8, takes 0.028383493423461914 to prepare state dict for ckpt
|
| 4754 |
+
DEBUG:megatron.training.checkpointing:rank: 1, takes 0.030734539031982422 to prepare state dict for ckpt
|
| 4755 |
+
DEBUG:megatron.training.checkpointing:rank: 0, takes 0.0310213565826416 to prepare state dict for ckpt
|
| 4756 |
+
DEBUG:megatron.training.checkpointing:rank: 2, takes 0.03133106231689453 to prepare state dict for ckpt
|
| 4757 |
+
DEBUG:megatron.training.checkpointing:rank: 4, takes 0.033275604248046875 to prepare state dict for ckpt
|
| 4758 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4759 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4760 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4761 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4762 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4763 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4764 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4765 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4766 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4767 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4768 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4769 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4770 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4771 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4772 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4773 |
+
DEBUG:megatron.core.dist_checkpointing.strategies.fully_parallel:Apply save parallelization
|
| 4774 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4775 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4776 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4777 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4778 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4779 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4780 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4781 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4782 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4783 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
| 4784 |
+
DEBUG:megatron.core.dist_checkpointing.exchange_utils:distribute_shards_to_ranks distribution: [(np.int64(207618048), 0), (np.int64(212860928), 1), (np.int64(213909504), 2), (np.int64(205588480), 3)]
|
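The distribute_shards_to_ranks lines above show the fully parallel save strategy assigning roughly 205-214 MB of checkpoint shards to each of four ranks. The balancing can be pictured as a greedy largest-shard-to-lightest-rank assignment; the sketch below is an assumption about the policy and signature, not the actual megatron.core.dist_checkpointing code:

```python
import heapq

def distribute_shards_to_ranks(shard_sizes, num_ranks):
    # Greedy LPT assignment: hand the next-largest shard to whichever rank
    # currently holds the fewest bytes (illustrative policy only).
    heap = [(0, rank) for rank in range(num_ranks)]
    heapq.heapify(heap)
    totals = [0] * num_ranks
    for size in sorted(shard_sizes, reverse=True):
        load, rank = heapq.heappop(heap)
        totals[rank] = load + size
        heapq.heappush(heap, (load + size, rank))
    return sorted((total, rank) for rank, total in enumerate(totals))

# Rough per-rank byte totals, same order of magnitude as the log above.
print(distribute_shards_to_ranks(
    [207_618_048, 212_860_928, 213_909_504, 205_588_480], num_ranks=4))
```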
| 4785 |
+
Running ctx_length=4096, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=16
|
| 4786 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 4787 |
+
--------------------------------
|
| 4788 |
+
CTX_LENGTH: 4096
|
| 4789 |
+
TP_SIZE: 4
|
| 4790 |
+
CP_SIZE: 4
|
| 4791 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 4792 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 4793 |
+
--------------------------------
|
| 4794 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 4795 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 4796 |
+
--------------------------------
|
| 4797 |
+
CTX_LENGTH: 4096
|
| 4798 |
+
TP_SIZE: 4
|
| 4799 |
+
CP_SIZE: 4
|
| 4800 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 4801 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 4802 |
+
--------------------------------
|
| 4803 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
attnserver.run_attnserver.slurm.sh.343224.err.log
ADDED
|
@@ -0,0 +1,156 @@
|
| 1 |
+
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
|
| 2 |
+
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 3 |
+
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
|
| 4 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 5 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 6 |
+
+++ export _CE_M=
|
| 7 |
+
+++ _CE_M=
|
| 8 |
+
+++ export _CE_CONDA=
|
| 9 |
+
+++ _CE_CONDA=
|
| 10 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 11 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 12 |
+
+++ '[' -z x ']'
|
| 13 |
+
++ conda activate
|
| 14 |
+
++ local cmd=activate
|
| 15 |
+
++ case "$cmd" in
|
| 16 |
+
++ __conda_activate activate
|
| 17 |
+
++ '[' -n '' ']'
|
| 18 |
+
++ local ask_conda
|
| 19 |
+
+++ PS1=
|
| 20 |
+
+++ __conda_exe shell.posix activate
|
| 21 |
+
+++ '[' -n '' ']'
|
| 22 |
+
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
|
| 23 |
+
++ ask_conda='unset _CE_M
|
| 24 |
+
unset _CE_CONDA
|
| 25 |
+
PS1='\''(base) '\''
|
| 26 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 27 |
+
export CONDA_SHLVL='\''1'\''
|
| 28 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 29 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 30 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 31 |
+
++ eval 'unset _CE_M
|
| 32 |
+
unset _CE_CONDA
|
| 33 |
+
PS1='\''(base) '\''
|
| 34 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 35 |
+
export CONDA_SHLVL='\''1'\''
|
| 36 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 37 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 38 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 39 |
+
+++ unset _CE_M
|
| 40 |
+
+++ unset _CE_CONDA
|
| 41 |
+
+++ PS1='(base) '
|
| 42 |
+
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 43 |
+
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 44 |
+
+++ export CONDA_SHLVL=1
|
| 45 |
+
+++ CONDA_SHLVL=1
|
| 46 |
+
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
|
| 47 |
+
+++ CONDA_PROMPT_MODIFIER='(base) '
|
| 48 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 49 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 50 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 51 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 52 |
+
++ __conda_hashr
|
| 53 |
+
++ '[' -n '' ']'
|
| 54 |
+
++ '[' -n '' ']'
|
| 55 |
+
++ hash -r
|
| 56 |
+
+ conda activate junda-attnserver
|
| 57 |
+
+ local cmd=activate
|
| 58 |
+
+ case "$cmd" in
|
| 59 |
+
+ __conda_activate activate junda-attnserver
|
| 60 |
+
+ '[' -n '' ']'
|
| 61 |
+
+ local ask_conda
|
| 62 |
+
++ PS1='(base) '
|
| 63 |
+
++ __conda_exe shell.posix activate junda-attnserver
|
| 64 |
+
++ '[' -n '' ']'
|
| 65 |
+
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
|
| 66 |
+
+ ask_conda='unset _CE_M
|
| 67 |
+
unset _CE_CONDA
|
| 68 |
+
PS1='\''(junda-attnserver) '\''
|
| 69 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 70 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 71 |
+
export CONDA_SHLVL='\''2'\''
|
| 72 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 73 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 74 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 75 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 76 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 77 |
+
+ eval 'unset _CE_M
|
| 78 |
+
unset _CE_CONDA
|
| 79 |
+
PS1='\''(junda-attnserver) '\''
|
| 80 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 81 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 82 |
+
export CONDA_SHLVL='\''2'\''
|
| 83 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 84 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 85 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 86 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 87 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 88 |
+
++ unset _CE_M
|
| 89 |
+
++ unset _CE_CONDA
|
| 90 |
+
++ PS1='(junda-attnserver) '
|
| 91 |
+
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 92 |
+
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 93 |
+
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 94 |
+
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 95 |
+
++ export CONDA_SHLVL=2
|
| 96 |
+
++ CONDA_SHLVL=2
|
| 97 |
+
++ export CONDA_DEFAULT_ENV=junda-attnserver
|
| 98 |
+
++ CONDA_DEFAULT_ENV=junda-attnserver
|
| 99 |
+
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
|
| 100 |
+
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
|
| 101 |
+
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 102 |
+
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 103 |
+
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 104 |
+
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 105 |
+
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 106 |
+
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 107 |
+
+ __conda_hashr
|
| 108 |
+
+ '[' -n '' ']'
|
| 109 |
+
+ '[' -n '' ']'
|
| 110 |
+
+ hash -r
|
| 111 |
+
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 112 |
+
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 113 |
+
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 114 |
+
+ export PROF_TP_SIZE=4
|
| 115 |
+
+ PROF_TP_SIZE=4
|
| 116 |
+
+ export PROF_CP_SIZE=4
|
| 117 |
+
+ PROF_CP_SIZE=4
|
| 118 |
+
+ export PROF_BS=32
|
| 119 |
+
+ PROF_BS=32
|
| 120 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 121 |
+
+ export PROF_CTX_LENGTH=1024
|
| 122 |
+
+ PROF_CTX_LENGTH=1024
|
| 123 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs32.json'
|
| 124 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp4.bs32.json' ']'
|
| 125 |
+
+ echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=32'
|
| 126 |
+
+ srun bash ./attnserver.sh
|
| 127 |
+
+ which python3
|
| 128 |
+
+ which python3
|
| 129 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 0 --rdzv_id 343224 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 130 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 2 --node_rank 1 --rdzv_id 343224 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-188:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 131 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 132 |
+
and will be removed in future. Use torchrun.
|
| 133 |
+
Note that --use-env is set by default in torchrun.
|
| 134 |
+
If your script expects `--local-rank` argument to be set, please
|
| 135 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 136 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 137 |
+
further instructions
|
| 138 |
+
|
| 139 |
+
main()
|
| 140 |
+
W0621 21:34:52.697000 2098805 site-packages/torch/distributed/run.py:766]
|
| 141 |
+
W0621 21:34:52.697000 2098805 site-packages/torch/distributed/run.py:766] *****************************************
|
| 142 |
+
W0621 21:34:52.697000 2098805 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 143 |
+
W0621 21:34:52.697000 2098805 site-packages/torch/distributed/run.py:766] *****************************************
|
| 144 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 145 |
+
and will be removed in future. Use torchrun.
|
| 146 |
+
Note that --use-env is set by default in torchrun.
|
| 147 |
+
If your script expects `--local-rank` argument to be set, please
|
| 148 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 149 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 150 |
+
further instructions
|
| 151 |
+
|
| 152 |
+
main()
|
| 153 |
+
W0621 21:34:52.714000 753631 site-packages/torch/distributed/run.py:766]
|
| 154 |
+
W0621 21:34:52.714000 753631 site-packages/torch/distributed/run.py:766] *****************************************
|
| 155 |
+
W0621 21:34:52.714000 753631 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 156 |
+
W0621 21:34:52.714000 753631 site-packages/torch/distributed/run.py:766] *****************************************
|
attnserver.run_attnserver.slurm.sh.343224.out.log
ADDED
|
@@ -0,0 +1,19 @@
|
| 1 |
+
Running ctx_length=1024, TP_SIZE=4, CP_SIZE=4, BATCH_SIZE=32
|
| 2 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 3 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 4 |
+
--------------------------------
|
| 5 |
+
CTX_LENGTH: 1024
|
| 6 |
+
TP_SIZE: 4
|
| 7 |
+
CP_SIZE: 4
|
| 8 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 9 |
+
--------------------------------
|
| 10 |
+
CTX_LENGTH: 1024
|
| 11 |
+
TP_SIZE: 4
|
| 12 |
+
CP_SIZE: 4
|
| 13 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 14 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 15 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 16 |
+
--------------------------------
|
| 17 |
+
--------------------------------
|
| 18 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 19 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
attnserver.run_attnserver.slurm.sh.343225.err.log
CHANGED
|
@@ -409,3 +409,82 @@ W0621 21:33:46.099000 2214638 site-packages/torch/distributed/run.py:766]
|
| 409 |
W0621 21:33:46.099000 2214638 site-packages/torch/distributed/run.py:766] *****************************************
|
| 410 |
W0621 21:33:46.099000 2214638 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 411 |
W0621 21:33:46.099000 2214638 site-packages/torch/distributed/run.py:766] *****************************************
|
| 412 |
+
[rank6]:[W621 21:34:07.487829542 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 413 |
+
[rank7]:[W621 21:34:07.487835309 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 414 |
+
[rank2]:[W621 21:34:07.487855923 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 415 |
+
[rank3]:[W621 21:34:07.487926048 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 416 |
+
[rank1]:[W621 21:34:07.495968121 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 417 |
+
[rank5]:[W621 21:34:07.496212111 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 418 |
+
[rank4]:[W621 21:34:07.496540265 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 419 |
+
[rank0]:[W621 21:34:08.635239263 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 420 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 421 |
+
warnings.warn(
|
| 422 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 423 |
+
warnings.warn(
|
| 424 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 425 |
+
warnings.warn(
|
| 426 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 427 |
+
warnings.warn(
|
| 428 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 429 |
+
warnings.warn(
|
| 430 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 431 |
+
warnings.warn(
|
| 432 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 433 |
+
warnings.warn(
|
| 434 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 435 |
+
warnings.warn(
|
| 436 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 437 |
+
warnings.warn(
|
| 438 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 439 |
+
warnings.warn(
|
| 440 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 441 |
+
warnings.warn(
|
| 442 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 443 |
+
warnings.warn(
|
| 444 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 445 |
+
warnings.warn(
|
| 446 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 447 |
+
warnings.warn(
|
| 448 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 449 |
+
warnings.warn(
|
| 450 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 451 |
+
warnings.warn(
|
| 452 |
+
[rank3]:[W621 21:34:39.700737341 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 453 |
+
[rank2]:[W621 21:34:39.712462994 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 454 |
+
[rank0]:[W621 21:34:39.718188712 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 455 |
+
[rank1]:[W621 21:34:39.940108258 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 456 |
+
[rank5]:[W621 21:34:39.147737758 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 457 |
+
[rank7]:[W621 21:34:39.176462496 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 458 |
+
[rank6]:[W621 21:34:39.181024070 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 459 |
+
[rank4]:[W621 21:34:39.294151165 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 460 |
+
+ set +x
|
| 461 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 462 |
+
+ export PROF_CTX_LENGTH=12288
|
| 463 |
+
+ PROF_CTX_LENGTH=12288
|
| 464 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp4.cp2.bs1.json'
|
| 465 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L12288*tp4.cp2.bs1.json' ']'
|
| 466 |
+
+ echo 'Running ctx_length=12288, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=1'
|
| 467 |
+
+ srun bash ./attnserver.sh
|
| 468 |
+
+ which python3
|
| 469 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343225 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-768:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 12288 --max-position-embeddings 12288 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 470 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 471 |
+
and will be removed in future. Use torchrun.
|
| 472 |
+
Note that --use-env is set by default in torchrun.
|
| 473 |
+
If your script expects `--local-rank` argument to be set, please
|
| 474 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 475 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 476 |
+
further instructions
|
| 477 |
+
|
| 478 |
+
main()
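The launcher warning above recommends moving to torchrun, which exports the local rank through the environment instead of passing --local-rank. A minimal sketch of the suggested change in a training entry point (hypothetical snippet, not code from pretrain_gpt_profile.py):

    import os
    import torch

    # Under torchrun, every worker gets LOCAL_RANK in its environment;
    # read it instead of parsing a --local-rank argument.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

torchrun accepts the same --nproc_per_node, --nnodes, --node_rank, --rdzv_id, --rdzv_backend and --rdzv_endpoint flags used in the command above, so the rest of the invocation can stay unchanged.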
|
| 479 |
+
W0621 21:34:45.781000 2217839 site-packages/torch/distributed/run.py:766]
|
| 480 |
+
W0621 21:34:45.781000 2217839 site-packages/torch/distributed/run.py:766] *****************************************
|
| 481 |
+
W0621 21:34:45.781000 2217839 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 482 |
+
W0621 21:34:45.781000 2217839 site-packages/torch/distributed/run.py:766] *****************************************
|
| 483 |
+
[rank4]:[W621 21:35:08.677022510 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 484 |
+
[rank1]:[W621 21:35:08.680781221 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 485 |
+
[rank5]:[W621 21:35:08.680784263 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 486 |
+
[rank6]:[W621 21:35:08.681997194 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 487 |
+
[rank2]:[W621 21:35:08.685269567 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 488 |
+
[rank7]:[W621 21:35:08.687936052 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 489 |
+
[rank3]:[W621 21:35:08.687956111 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 490 |
+
[rank0]:[W621 21:35:08.817375320 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
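The NCCL warnings above can be silenced by telling init_process_group which CUDA device each rank owns, as the message suggests. A minimal sketch assuming a torchrun-style launch (illustrative only; this is not the initialization code used by this job):

    import os
    import torch
    import torch.distributed as dist

    # Bind the process group to this rank's GPU so the rank-to-device
    # mapping is known at initialization time.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(
        backend="nccl",
        device_id=torch.device(f"cuda:{local_rank}"),
    )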
|
attnserver.run_attnserver.slurm.sh.343225.out.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
attnserver.run_attnserver.slurm.sh.343226.err.log
CHANGED
|
@@ -179,3 +179,95 @@ W0621 21:33:13.100000 1966606 site-packages/torch/distributed/run.py:766] ******
|
| 179 |
warnings.warn(
|
| 180 |
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 181 |
warnings.warn(
|
| 182 |
+
[rank2]:[W621 21:34:11.104867952 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 183 |
+
[rank3]:[W621 21:34:11.137815168 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 184 |
+
[rank0]:[W621 21:34:11.205114054 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 185 |
+
[rank1]:[W621 21:34:11.265262807 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 186 |
+
[rank7]:[W621 21:34:11.609105248 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 187 |
+
[rank5]:[W621 21:34:11.628118305 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 188 |
+
[rank4]:[W621 21:34:11.628580306 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 189 |
+
[rank6]:[W621 21:34:11.810632658 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 190 |
+
+ set +x
|
| 191 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 192 |
+
+ export PROF_CTX_LENGTH=2048
|
| 193 |
+
+ PROF_CTX_LENGTH=2048
|
| 194 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp2.bs2.json'
|
| 195 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp4.cp2.bs2.json' ']'
|
| 196 |
+
+ echo 'Running ctx_length=2048, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=2'
|
| 197 |
+
+ srun bash ./attnserver.sh
|
| 198 |
+
+ which python3
|
| 199 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343226 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-896:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 200 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 201 |
+
and will be removed in future. Use torchrun.
|
| 202 |
+
Note that --use-env is set by default in torchrun.
|
| 203 |
+
If your script expects `--local-rank` argument to be set, please
|
| 204 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 205 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 206 |
+
further instructions
|
| 207 |
+
|
| 208 |
+
main()
|
| 209 |
+
W0621 21:34:17.950000 1970094 site-packages/torch/distributed/run.py:766]
|
| 210 |
+
W0621 21:34:17.950000 1970094 site-packages/torch/distributed/run.py:766] *****************************************
|
| 211 |
+
W0621 21:34:17.950000 1970094 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 212 |
+
W0621 21:34:17.950000 1970094 site-packages/torch/distributed/run.py:766] *****************************************
|
| 213 |
+
[rank6]:[W621 21:34:41.641514602 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 214 |
+
[rank2]:[W621 21:34:41.641516382 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 215 |
+
[rank5]:[W621 21:34:41.657758021 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 216 |
+
[rank1]:[W621 21:34:41.657891541 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 217 |
+
[rank7]:[W621 21:34:41.659904559 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 218 |
+
[rank4]:[W621 21:34:41.660089120 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 219 |
+
[rank3]:[W621 21:34:41.661630568 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 220 |
+
[rank0]:[W621 21:34:41.817048114 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 221 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 222 |
+
warnings.warn(
|
| 223 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 224 |
+
warnings.warn(
|
| 225 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 226 |
+
warnings.warn(
|
| 227 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 228 |
+
warnings.warn(
|
| 229 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 230 |
+
warnings.warn(
|
| 231 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 232 |
+
warnings.warn(
|
| 233 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 234 |
+
warnings.warn(
|
| 235 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 236 |
+
warnings.warn(
|
| 237 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 238 |
+
warnings.warn(
|
| 239 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 240 |
+
warnings.warn(
|
| 241 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 242 |
+
warnings.warn(
|
| 243 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 244 |
+
warnings.warn(
|
| 245 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 246 |
+
warnings.warn(
|
| 247 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 248 |
+
warnings.warn(
|
| 249 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 250 |
+
warnings.warn(
|
| 251 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 252 |
+
warnings.warn(
|
| 253 |
+
[rank0]: Traceback (most recent call last):
|
| 254 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/./pretrain_gpt_profile.py", line 554, in <module>
|
| 255 |
+
[rank0]: pretrain(
|
| 256 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/training.py", line 879, in pretrain
|
| 257 |
+
[rank0]: save_checkpoint(
|
| 258 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/training/checkpointing.py", line 469, in save_checkpoint
|
| 259 |
+
[rank0]: async_save_request = dist_checkpointing.save(state_dict, checkpoint_name, save_strategy,
|
| 260 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 261 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/serialization.py", line 386, in save
|
| 262 |
+
[rank0]: common_strategy.save_common(state_dict, checkpoint_dir)
|
| 263 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/dist_checkpointing/strategies/common.py", line 48, in save_common
|
| 264 |
+
[rank0]: torch.save(common_state_dict, path)
|
| 265 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 964, in save
|
| 266 |
+
[rank0]: with _open_zipfile_writer(f) as opened_zipfile:
|
| 267 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
|
| 268 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 828, in _open_zipfile_writer
|
| 269 |
+
[rank0]: return container(name_or_buffer)
|
| 270 |
+
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 271 |
+
[rank0]: File "/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/serialization.py", line 792, in __init__
|
| 272 |
+
[rank0]: torch._C.PyTorchFileWriter(
|
| 273 |
+
[rank0]: RuntimeError: Parent directory gpt-checkpoint/iter_0000010 does not exist.
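The traceback above ends in torch.save() because the iteration directory under gpt-checkpoint was never created; torch.save() does not create missing parent directories itself. A minimal sketch of the general pattern (the path below is illustrative, not the checkpoint layout used by Megatron):

    import os
    import torch

    checkpoint_path = "gpt-checkpoint/iter_0000010/common.pt"  # illustrative path
    # Create the parent directory first; otherwise torch.save() raises
    # "RuntimeError: Parent directory ... does not exist."
    os.makedirs(os.path.dirname(checkpoint_path), exist_ok=True)
    torch.save({"step": 10}, checkpoint_path)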
|
attnserver.run_attnserver.slurm.sh.343226.out.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
attnserver.run_attnserver.slurm.sh.343227.err.log
ADDED
|
@@ -0,0 +1,141 @@
|
| 1 |
+
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
|
| 2 |
+
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 3 |
+
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
|
| 4 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 5 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 6 |
+
+++ export _CE_M=
|
| 7 |
+
+++ _CE_M=
|
| 8 |
+
+++ export _CE_CONDA=
|
| 9 |
+
+++ _CE_CONDA=
|
| 10 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 11 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 12 |
+
+++ '[' -z x ']'
|
| 13 |
+
++ conda activate
|
| 14 |
+
++ local cmd=activate
|
| 15 |
+
++ case "$cmd" in
|
| 16 |
+
++ __conda_activate activate
|
| 17 |
+
++ '[' -n '' ']'
|
| 18 |
+
++ local ask_conda
|
| 19 |
+
+++ PS1=
|
| 20 |
+
+++ __conda_exe shell.posix activate
|
| 21 |
+
+++ '[' -n '' ']'
|
| 22 |
+
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
|
| 23 |
+
++ ask_conda='unset _CE_M
|
| 24 |
+
unset _CE_CONDA
|
| 25 |
+
PS1='\''(base) '\''
|
| 26 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 27 |
+
export CONDA_SHLVL='\''1'\''
|
| 28 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 29 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 30 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 31 |
+
++ eval 'unset _CE_M
|
| 32 |
+
unset _CE_CONDA
|
| 33 |
+
PS1='\''(base) '\''
|
| 34 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 35 |
+
export CONDA_SHLVL='\''1'\''
|
| 36 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 37 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 38 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 39 |
+
+++ unset _CE_M
|
| 40 |
+
+++ unset _CE_CONDA
|
| 41 |
+
+++ PS1='(base) '
|
| 42 |
+
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 43 |
+
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 44 |
+
+++ export CONDA_SHLVL=1
|
| 45 |
+
+++ CONDA_SHLVL=1
|
| 46 |
+
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
|
| 47 |
+
+++ CONDA_PROMPT_MODIFIER='(base) '
|
| 48 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 49 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 50 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 51 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 52 |
+
++ __conda_hashr
|
| 53 |
+
++ '[' -n '' ']'
|
| 54 |
+
++ '[' -n '' ']'
|
| 55 |
+
++ hash -r
|
| 56 |
+
+ conda activate junda-attnserver
|
| 57 |
+
+ local cmd=activate
|
| 58 |
+
+ case "$cmd" in
|
| 59 |
+
+ __conda_activate activate junda-attnserver
|
| 60 |
+
+ '[' -n '' ']'
|
| 61 |
+
+ local ask_conda
|
| 62 |
+
++ PS1='(base) '
|
| 63 |
+
++ __conda_exe shell.posix activate junda-attnserver
|
| 64 |
+
++ '[' -n '' ']'
|
| 65 |
+
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
|
| 66 |
+
+ ask_conda='unset _CE_M
|
| 67 |
+
unset _CE_CONDA
|
| 68 |
+
PS1='\''(junda-attnserver) '\''
|
| 69 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 70 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 71 |
+
export CONDA_SHLVL='\''2'\''
|
| 72 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 73 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 74 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 75 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 76 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 77 |
+
+ eval 'unset _CE_M
|
| 78 |
+
unset _CE_CONDA
|
| 79 |
+
PS1='\''(junda-attnserver) '\''
|
| 80 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 81 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 82 |
+
export CONDA_SHLVL='\''2'\''
|
| 83 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 84 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 85 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 86 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 87 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 88 |
+
++ unset _CE_M
|
| 89 |
+
++ unset _CE_CONDA
|
| 90 |
+
++ PS1='(junda-attnserver) '
|
| 91 |
+
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 92 |
+
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 93 |
+
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 94 |
+
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 95 |
+
++ export CONDA_SHLVL=2
|
| 96 |
+
++ CONDA_SHLVL=2
|
| 97 |
+
++ export CONDA_DEFAULT_ENV=junda-attnserver
|
| 98 |
+
++ CONDA_DEFAULT_ENV=junda-attnserver
|
| 99 |
+
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
|
| 100 |
+
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
|
| 101 |
+
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 102 |
+
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 103 |
+
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 104 |
+
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 105 |
+
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 106 |
+
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 107 |
+
+ __conda_hashr
|
| 108 |
+
+ '[' -n '' ']'
|
| 109 |
+
+ '[' -n '' ']'
|
| 110 |
+
+ hash -r
|
| 111 |
+
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 112 |
+
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 113 |
+
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 114 |
+
+ export PROF_TP_SIZE=4
|
| 115 |
+
+ PROF_TP_SIZE=4
|
| 116 |
+
+ export PROF_CP_SIZE=2
|
| 117 |
+
+ PROF_CP_SIZE=2
|
| 118 |
+
+ export PROF_BS=4
|
| 119 |
+
+ PROF_BS=4
|
| 120 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 121 |
+
+ export PROF_CTX_LENGTH=1024
|
| 122 |
+
+ PROF_CTX_LENGTH=1024
|
| 123 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs4.json'
|
| 124 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs4.json' ']'
|
| 125 |
+
+ echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=4'
|
| 126 |
+
+ srun bash ./attnserver.sh
|
| 127 |
+
+ which python3
|
| 128 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343227 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-791:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 129 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 130 |
+
and will be removed in future. Use torchrun.
|
| 131 |
+
Note that --use-env is set by default in torchrun.
|
| 132 |
+
If your script expects `--local-rank` argument to be set, please
|
| 133 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 134 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 135 |
+
further instructions
|
| 136 |
+
|
| 137 |
+
main()
|
| 138 |
+
W0621 21:34:48.293000 2322929 site-packages/torch/distributed/run.py:766]
|
| 139 |
+
W0621 21:34:48.293000 2322929 site-packages/torch/distributed/run.py:766] *****************************************
|
| 140 |
+
W0621 21:34:48.293000 2322929 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 141 |
+
W0621 21:34:48.293000 2322929 site-packages/torch/distributed/run.py:766] *****************************************
|
attnserver.run_attnserver.slurm.sh.343227.out.log
ADDED
|
@@ -0,0 +1,10 @@
|
| 1 |
+
Running ctx_length=1024, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=4
|
| 2 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 3 |
+
--------------------------------
|
| 4 |
+
CTX_LENGTH: 1024
|
| 5 |
+
TP_SIZE: 4
|
| 6 |
+
CP_SIZE: 2
|
| 7 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 8 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 9 |
+
--------------------------------
|
| 10 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
attnserver.run_attnserver.slurm.sh.343228.err.log
ADDED
|
@@ -0,0 +1,149 @@
|
| 1 |
+
+ source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
|
| 2 |
+
++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 3 |
+
++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
|
| 4 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 5 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 6 |
+
+++ export _CE_M=
|
| 7 |
+
+++ _CE_M=
|
| 8 |
+
+++ export _CE_CONDA=
|
| 9 |
+
+++ _CE_CONDA=
|
| 10 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 11 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 12 |
+
+++ '[' -z x ']'
|
| 13 |
+
++ conda activate
|
| 14 |
+
++ local cmd=activate
|
| 15 |
+
++ case "$cmd" in
|
| 16 |
+
++ __conda_activate activate
|
| 17 |
+
++ '[' -n '' ']'
|
| 18 |
+
++ local ask_conda
|
| 19 |
+
+++ PS1=
|
| 20 |
+
+++ __conda_exe shell.posix activate
|
| 21 |
+
+++ '[' -n '' ']'
|
| 22 |
+
+++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
|
| 23 |
+
++ ask_conda='unset _CE_M
|
| 24 |
+
unset _CE_CONDA
|
| 25 |
+
PS1='\''(base) '\''
|
| 26 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 27 |
+
export CONDA_SHLVL='\''1'\''
|
| 28 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 29 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 30 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 31 |
+
++ eval 'unset _CE_M
|
| 32 |
+
unset _CE_CONDA
|
| 33 |
+
PS1='\''(base) '\''
|
| 34 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 35 |
+
export CONDA_SHLVL='\''1'\''
|
| 36 |
+
export CONDA_PROMPT_MODIFIER='\''(base) '\''
|
| 37 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 38 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 39 |
+
+++ unset _CE_M
|
| 40 |
+
+++ unset _CE_CONDA
|
| 41 |
+
+++ PS1='(base) '
|
| 42 |
+
+++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 43 |
+
+++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 44 |
+
+++ export CONDA_SHLVL=1
|
| 45 |
+
+++ CONDA_SHLVL=1
|
| 46 |
+
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
|
| 47 |
+
+++ CONDA_PROMPT_MODIFIER='(base) '
|
| 48 |
+
+++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 49 |
+
+++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 50 |
+
+++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 51 |
+
+++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 52 |
+
++ __conda_hashr
|
| 53 |
+
++ '[' -n '' ']'
|
| 54 |
+
++ '[' -n '' ']'
|
| 55 |
+
++ hash -r
|
| 56 |
+
+ conda activate junda-attnserver
|
| 57 |
+
+ local cmd=activate
|
| 58 |
+
+ case "$cmd" in
|
| 59 |
+
+ __conda_activate activate junda-attnserver
|
| 60 |
+
+ '[' -n '' ']'
|
| 61 |
+
+ local ask_conda
|
| 62 |
+
++ PS1='(base) '
|
| 63 |
+
++ __conda_exe shell.posix activate junda-attnserver
|
| 64 |
+
++ '[' -n '' ']'
|
| 65 |
+
++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
|
| 66 |
+
+ ask_conda='unset _CE_M
|
| 67 |
+
unset _CE_CONDA
|
| 68 |
+
PS1='\''(junda-attnserver) '\''
|
| 69 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 70 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 71 |
+
export CONDA_SHLVL='\''2'\''
|
| 72 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 73 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 74 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 75 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 76 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 77 |
+
+ eval 'unset _CE_M
|
| 78 |
+
unset _CE_CONDA
|
| 79 |
+
PS1='\''(junda-attnserver) '\''
|
| 80 |
+
export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
|
| 81 |
+
export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
|
| 82 |
+
export CONDA_SHLVL='\''2'\''
|
| 83 |
+
export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
|
| 84 |
+
export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
|
| 85 |
+
export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
|
| 86 |
+
export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
|
| 87 |
+
export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
|
| 88 |
+
++ unset _CE_M
|
| 89 |
+
++ unset _CE_CONDA
|
| 90 |
+
++ PS1='(junda-attnserver) '
|
| 91 |
+
++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 92 |
+
++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
| 93 |
+
++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 94 |
+
++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
|
| 95 |
+
++ export CONDA_SHLVL=2
|
| 96 |
+
++ CONDA_SHLVL=2
|
| 97 |
+
++ export CONDA_DEFAULT_ENV=junda-attnserver
|
| 98 |
+
++ CONDA_DEFAULT_ENV=junda-attnserver
|
| 99 |
+
++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
|
| 100 |
+
++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
|
| 101 |
+
++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 102 |
+
++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
|
| 103 |
+
++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 104 |
+
++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
|
| 105 |
+
++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 106 |
+
++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
|
| 107 |
+
+ __conda_hashr
|
| 108 |
+
+ '[' -n '' ']'
|
| 109 |
+
+ '[' -n '' ']'
|
| 110 |
+
+ hash -r
|
| 111 |
+
+ export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 112 |
+
+ CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 113 |
+
+ mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
|
| 114 |
+
+ export PROF_TP_SIZE=4
|
| 115 |
+
+ PROF_TP_SIZE=4
|
| 116 |
+
+ export PROF_CP_SIZE=2
|
| 117 |
+
+ PROF_CP_SIZE=2
|
| 118 |
+
+ export PROF_BS=8
|
| 119 |
+
+ PROF_BS=8
|
| 120 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 121 |
+
+ export PROF_CTX_LENGTH=1024
|
| 122 |
+
+ PROF_CTX_LENGTH=1024
|
| 123 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs8.json'
|
| 124 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp4.cp2.bs8.json' ']'
|
| 125 |
+
+ echo 'Running ctx_length=1024, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=8'
|
| 126 |
+
+ srun bash ./attnserver.sh
|
| 127 |
+
+ which python3
|
| 128 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 --rdzv_id 343228 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-702:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 4 --context-parallel-size 2 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 129 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 130 |
+
and will be removed in future. Use torchrun.
|
| 131 |
+
Note that --use-env is set by default in torchrun.
|
| 132 |
+
If your script expects `--local-rank` argument to be set, please
|
| 133 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 134 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 135 |
+
further instructions
|
| 136 |
+
|
| 137 |
+
main()
|
| 138 |
+
W0621 21:34:47.144000 2011677 site-packages/torch/distributed/run.py:766]
|
| 139 |
+
W0621 21:34:47.144000 2011677 site-packages/torch/distributed/run.py:766] *****************************************
|
| 140 |
+
W0621 21:34:47.144000 2011677 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 141 |
+
W0621 21:34:47.144000 2011677 site-packages/torch/distributed/run.py:766] *****************************************
|
| 142 |
+
[rank4]:[W621 21:35:08.948474453 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 143 |
+
[rank7]:[W621 21:35:08.969019811 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 144 |
+
[rank3]:[W621 21:35:08.969037473 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 145 |
+
[rank1]:[W621 21:35:08.972937737 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 146 |
+
[rank6]:[W621 21:35:08.972966899 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 147 |
+
[rank2]:[W621 21:35:08.972995451 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 148 |
+
[rank5]:[W621 21:35:08.973070968 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 149 |
+
[rank0]:[W621 21:35:08.090706737 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
attnserver.run_attnserver.slurm.sh.343228.out.log
ADDED
|
@@ -0,0 +1,536 @@
|
| 1 |
+
Running ctx_length=1024, TP_SIZE=4, CP_SIZE=2, BATCH_SIZE=8
|
| 2 |
+
Cleaning up checkpoint directory: gpt-checkpoint
|
| 3 |
+
--------------------------------
|
| 4 |
+
CTX_LENGTH: 1024
|
| 5 |
+
TP_SIZE: 4
|
| 6 |
+
CP_SIZE: 2
|
| 7 |
+
CHECKPOINT_PATH: gpt-checkpoint
|
| 8 |
+
PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
|
| 9 |
+
--------------------------------
|
| 10 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
|
| 11 |
+
using world size: 8, data-parallel size: 1, context-parallel size: 2, hierarchical context-parallel sizes: None, tensor-model-parallel size: 4, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
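For reference, these sizes are mutually consistent: world size 8 = tensor-model-parallel 4 × context-parallel 2 × data-parallel 1 × pipeline-model-parallel 1.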
|
| 12 |
+
Number of virtual stages per pipeline stage: None
|
| 13 |
+
WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
|
| 14 |
+
using torch.float16 for parameters ...
|
| 15 |
+
------------------------ arguments ------------------------
|
| 16 |
+
account_for_embedding_in_pipeline_split ......... False
|
| 17 |
+
account_for_loss_in_pipeline_split .............. False
|
| 18 |
+
accumulate_allreduce_grads_in_fp32 .............. False
|
| 19 |
+
adam_beta1 ...................................... 0.9
|
| 20 |
+
adam_beta2 ...................................... 0.999
|
| 21 |
+
adam_eps ........................................ 1e-08
|
| 22 |
+
add_bias_linear ................................. True
|
| 23 |
+
add_position_embedding .......................... True
|
| 24 |
+
add_qkv_bias .................................... True
|
| 25 |
+
adlr_autoresume ................................. False
|
| 26 |
+
adlr_autoresume_interval ........................ 1000
|
| 27 |
+
align_grad_reduce ............................... True
|
| 28 |
+
align_param_gather .............................. False
|
| 29 |
+
app_tag_run_name ................................ None
|
| 30 |
+
app_tag_run_version ............................. 0.0.0
|
| 31 |
+
apply_layernorm_1p .............................. False
|
| 32 |
+
apply_query_key_layer_scaling ................... False
|
| 33 |
+
apply_residual_connection_post_layernorm ........ False
|
| 34 |
+
apply_rope_fusion ............................... False
|
| 35 |
+
async_save ...................................... None
|
| 36 |
+
async_tensor_model_parallel_allreduce ........... True
|
| 37 |
+
attention_backend ............................... AttnBackend.auto
|
| 38 |
+
attention_dropout ............................... 0.1
|
| 39 |
+
attention_softmax_in_fp32 ....................... False
|
| 40 |
+
auto_detect_ckpt_format ......................... False
|
| 41 |
+
barrier_with_L1_time ............................ True
|
| 42 |
+
bert_binary_head ................................ True
|
| 43 |
+
bert_embedder_type .............................. megatron
|
| 44 |
+
bert_load ....................................... None
|
| 45 |
+
bf16 ............................................ False
|
| 46 |
+
bias_dropout_fusion ............................. True
|
| 47 |
+
bias_gelu_fusion ................................ True
|
| 48 |
+
bias_swiglu_fusion .............................. True
|
| 49 |
+
biencoder_projection_dim ........................ 0
|
| 50 |
+
biencoder_shared_query_context_model ............ False
|
| 51 |
+
block_data_path ................................. None
|
| 52 |
+
calc_ft_timeouts ................................ False
|
| 53 |
+
calculate_per_token_loss ........................ False
|
| 54 |
+
check_for_large_grads ........................... False
|
| 55 |
+
check_for_nan_in_loss_and_grad .................. False
|
| 56 |
+
check_for_spiky_loss ............................ False
|
| 57 |
+
check_weight_hash_across_dp_replicas_interval ... None
|
| 58 |
+
ckpt_assume_constant_structure .................. False
|
| 59 |
+
ckpt_convert_format ............................. None
|
| 60 |
+
ckpt_convert_save ............................... None
|
| 61 |
+
ckpt_convert_update_legacy_dist_opt_format ...... False
|
| 62 |
+
ckpt_format ..................................... torch_dist
|
| 63 |
+
ckpt_fully_parallel_load ........................ False
|
| 64 |
+
ckpt_fully_parallel_save ........................ True
|
| 65 |
+
ckpt_fully_parallel_save_deprecated ............. False
|
| 66 |
+
ckpt_step ....................................... None
|
| 67 |
+
classes_fraction ................................ 1.0
|
| 68 |
+
clip_grad ....................................... 1.0
|
| 69 |
+
clone_scatter_output_in_embedding ............... True
|
| 70 |
+
config_logger_dir ...............................
|
| 71 |
+
consumed_train_samples .......................... 0
|
| 72 |
+
consumed_valid_samples .......................... 0
|
| 73 |
+
context_parallel_size ........................... 2
|
| 74 |
+
cp_comm_type .................................... ['p2p']
|
| 75 |
+
create_attention_mask_in_dataloader ............. True
|
| 76 |
+
cross_entropy_fusion_impl ....................... native
|
| 77 |
+
cross_entropy_loss_fusion ....................... False
|
| 78 |
+
cuda_graph_scope ................................ full
|
| 79 |
+
cuda_graph_warmup_steps ......................... 3
|
| 80 |
+
data_args_path .................................. None
|
| 81 |
+
data_cache_path ................................. None
|
| 82 |
+
data_parallel_random_init ....................... False
|
| 83 |
+
data_parallel_sharding_strategy ................. no_shard
|
| 84 |
+
data_parallel_size .............................. 1
|
| 85 |
+
data_path ....................................... None
|
| 86 |
+
data_per_class_fraction ......................... 1.0
|
| 87 |
+
data_sharding ................................... True
|
| 88 |
+
dataloader_type ................................. single
|
| 89 |
+
ddp_average_in_collective ....................... False
|
| 90 |
+
ddp_bucket_size ................................. None
|
| 91 |
+
ddp_num_buckets ................................. None
|
| 92 |
+
ddp_pad_buckets_for_high_nccl_busbw ............. False
|
| 93 |
+
decoder_first_pipeline_num_layers ............... None
|
| 94 |
+
decoder_last_pipeline_num_layers ................ None
|
| 95 |
+
decoder_num_layers .............................. None
|
| 96 |
+
decoder_seq_length .............................. None
|
| 97 |
+
decoupled_lr .................................... None
|
| 98 |
+
decoupled_min_lr ................................ None
|
| 99 |
+
decrease_batch_size_if_needed ................... False
|
| 100 |
+
defer_embedding_wgrad_compute ................... False
|
| 101 |
+
deprecated_use_mcore_models ..................... False
|
| 102 |
+
deterministic_mode .............................. False
|
| 103 |
+
dino_bottleneck_size ............................ 256
|
| 104 |
+
dino_freeze_last_layer .......................... 1
|
| 105 |
+
dino_head_hidden_size ........................... 2048
|
| 106 |
+
dino_local_crops_number ......................... 10
|
| 107 |
+
dino_local_img_size ............................. 96
|
| 108 |
+
dino_norm_last_layer ............................ False
|
| 109 |
+
dino_teacher_temp ............................... 0.07
|
| 110 |
+
dino_warmup_teacher_temp ........................ 0.04
|
| 111 |
+
dino_warmup_teacher_temp_epochs ................. 30
|
| 112 |
+
disable_bf16_reduced_precision_matmul ........... False
|
| 113 |
+
disable_mamba_mem_eff_path ...................... False
|
| 114 |
+
disable_straggler_on_startup .................... False
|
| 115 |
+
dist_ckpt_format_deprecated ..................... None
|
| 116 |
+
dist_ckpt_strictness ............................ assume_ok_unexpected
|
| 117 |
+
distribute_saved_activations .................... False
|
| 118 |
+
distributed_backend ............................. nccl
|
| 119 |
+
distributed_timeout_minutes ..................... 10
|
| 120 |
+
embedding_path .................................. None
|
| 121 |
+
empty_unused_memory_level ....................... 0
|
| 122 |
+
enable_cuda_graph ............................... False
|
| 123 |
+
enable_ft_package ............................... False
|
| 124 |
+
enable_gloo_process_groups ...................... True
|
| 125 |
+
enable_msc ...................................... True
|
| 126 |
+
enable_one_logger ............................... True
|
| 127 |
+
encoder_num_layers .............................. 2
|
| 128 |
+
encoder_pipeline_model_parallel_size ............ 0
|
| 129 |
+
encoder_seq_length .............................. 1024
|
| 130 |
+
encoder_tensor_model_parallel_size .............. 0
|
| 131 |
+
end_weight_decay ................................ 0.1
|
| 132 |
+
eod_mask_loss ................................... False
|
| 133 |
+
error_injection_rate ............................ 0
|
| 134 |
+
error_injection_type ............................ transient_error
|
| 135 |
+
eval_interval ................................... 16
|
| 136 |
+
eval_iters ...................................... 1
|
| 137 |
+
evidence_data_path .............................. None
|
| 138 |
+
exit_duration_in_mins ........................... None
|
| 139 |
+
exit_interval ................................... None
|
| 140 |
+
exit_on_missing_checkpoint ...................... False
|
| 141 |
+
exit_signal_handler ............................. False
|
| 142 |
+
exp_avg_dtype ................................... torch.float32
|
| 143 |
+
exp_avg_sq_dtype ................................ torch.float32
|
| 144 |
+
expert_model_parallel_size ...................... 1
|
| 145 |
+
expert_tensor_parallel_size ..................... 4
|
| 146 |
+
external_cuda_graph ............................. False
|
| 147 |
+
ffn_hidden_size ................................. 16384
|
| 148 |
+
finetune ........................................ False
|
| 149 |
+
first_last_layers_bf16 .......................... False
|
| 150 |
+
flash_decode .................................... False
|
| 151 |
+
fp16 ............................................ True
|
| 152 |
+
fp16_lm_cross_entropy ........................... False
|
| 153 |
+
fp32_residual_connection ........................ False
|
| 154 |
+
fp8 ............................................. None
|
| 155 |
+
fp8_amax_compute_algo ........................... most_recent
|
| 156 |
+
fp8_amax_history_len ............................ 1
|
| 157 |
+
fp8_interval .................................... 1
|
| 158 |
+
fp8_margin ...................................... 0
|
| 159 |
+
fp8_param_gather ................................ False
|
| 160 |
+
fp8_recipe ...................................... delayed
|
| 161 |
+
fp8_wgrad ....................................... True
|
| 162 |
+
fsdp_double_buffer .............................. False
|
| 163 |
+
global_batch_size ............................... 1
|
| 164 |
+
grad_reduce_in_bf16 ............................. False
|
| 165 |
+
gradient_accumulation_fusion .................... True
|
| 166 |
+
gradient_reduce_div_fusion ...................... True
|
| 167 |
+
group_query_attention ........................... True
|
| 168 |
+
head_lr_mult .................................... 1.0
|
| 169 |
+
heterogeneous_layers_config_encoded_json ........ None
|
| 170 |
+
heterogeneous_layers_config_path ................ None
|
| 171 |
+
hidden_dropout .................................. 0.1
|
| 172 |
+
hidden_size ..................................... 4096
|
| 173 |
+
hierarchical_context_parallel_sizes ............. None
|
| 174 |
+
high_priority_stream_groups ..................... []
|
| 175 |
+
hybrid_attention_ratio .......................... 0.0
|
| 176 |
+
hybrid_mlp_ratio ................................ 0.0
|
| 177 |
+
hybrid_override_pattern ......................... None
|
| 178 |
+
hysteresis ...................................... 2
|
| 179 |
+
ict_head_size ................................... None
|
| 180 |
+
ict_load ........................................ None
|
| 181 |
+
img_h ........................................... 224
|
| 182 |
+
img_w ........................................... 224
|
| 183 |
+
indexer_batch_size .............................. 128
|
| 184 |
+
indexer_log_interval ............................ 1000
|
| 185 |
+
inference_batch_times_seqlen_threshold .......... -1
|
| 186 |
+
inference_dynamic_batching ...................... False
|
| 187 |
+
inference_dynamic_batching_buffer_guaranteed_fraction 0.2
|
| 188 |
+
inference_dynamic_batching_buffer_overflow_factor None
|
| 189 |
+
inference_dynamic_batching_buffer_size_gb ....... 40.0
|
| 190 |
+
inference_dynamic_batching_chunk_size ........... 256
|
| 191 |
+
inference_dynamic_batching_max_requests_override None
|
| 192 |
+
inference_dynamic_batching_max_tokens_override .. None
|
| 193 |
+
inference_max_batch_size ........................ 8
|
| 194 |
+
inference_max_seq_length ........................ 2560
|
| 195 |
+
inference_rng_tracker ........................... False
|
| 196 |
+
init_method_std ................................. 0.02
|
| 197 |
+
init_method_xavier_uniform ...................... False
|
| 198 |
+
init_model_with_meta_device ..................... False
|
| 199 |
+
initial_loss_scale .............................. 4294967296
|
| 200 |
+
inprocess_active_world_size ..................... 8
|
| 201 |
+
inprocess_barrier_timeout ....................... 120
|
| 202 |
+
inprocess_completion_timeout .................... 120
|
| 203 |
+
inprocess_empty_cuda_cache ...................... False
|
| 204 |
+
inprocess_granularity ........................... node
|
| 205 |
+
inprocess_hard_timeout .......................... 90
|
| 206 |
+
inprocess_heartbeat_interval .................... 30
|
| 207 |
+
inprocess_heartbeat_timeout ..................... 60
|
| 208 |
+
inprocess_last_call_wait ........................ 1
|
| 209 |
+
inprocess_max_iterations ........................ None
|
| 210 |
+
inprocess_monitor_process_interval .............. 1.0
|
| 211 |
+
inprocess_monitor_thread_interval ............... 1.0
|
| 212 |
+
inprocess_progress_watchdog_interval ............ 1.0
|
| 213 |
+
inprocess_restart ............................... False
|
| 214 |
+
inprocess_soft_timeout .......................... 60
|
| 215 |
+
inprocess_termination_grace_time ................ 1
|
| 216 |
+
is_hybrid_model ................................. False
|
| 217 |
+
iter_per_epoch .................................. 1250
|
| 218 |
+
iterations_to_skip .............................. []
|
| 219 |
+
keep_fp8_transpose_cache_when_using_custom_fsdp . False
|
| 220 |
+
kv_channels ..................................... 64
|
| 221 |
+
kv_lora_rank .................................... 32
|
| 222 |
+
lazy_mpu_init ................................... None
|
| 223 |
+
load ............................................ gpt-checkpoint
|
| 224 |
+
load_model_opt_format ........................... False
|
| 225 |
+
local_rank ...................................... 0
|
| 226 |
+
log_interval .................................... 1
|
| 227 |
+
log_loss_scale_to_tensorboard ................... True
|
| 228 |
+
log_memory_to_tensorboard ....................... False
|
| 229 |
+
log_num_zeros_in_grad ........................... False
|
| 230 |
+
log_params_norm ................................. False
|
| 231 |
+
log_progress .................................... False
|
| 232 |
+
log_straggler ................................... False
|
| 233 |
+
log_throughput .................................. False
|
| 234 |
+
log_timers_to_tensorboard ....................... False
|
| 235 |
+
log_validation_ppl_to_tensorboard ............... False
|
| 236 |
+
log_world_size_to_tensorboard ................... False
|
| 237 |
+
logging_level ................................... 0
|
| 238 |
+
loss_scale ...................................... None
|
| 239 |
+
loss_scale_window ............................... 1000
|
| 240 |
+
lr .............................................. 0.0005
|
| 241 |
+
lr_decay_iters .................................. 150000
|
| 242 |
+
lr_decay_samples ................................ None
|
| 243 |
+
lr_decay_style .................................. cosine
|
| 244 |
+
lr_warmup_fraction .............................. None
|
| 245 |
+
lr_warmup_init .................................. 0.0
|
| 246 |
+
lr_warmup_iters ................................. 2
|
| 247 |
+
lr_warmup_samples ............................... 0
|
| 248 |
+
lr_wsd_decay_iters .............................. None
|
| 249 |
+
lr_wsd_decay_samples ............................ None
|
| 250 |
+
lr_wsd_decay_style .............................. exponential
|
| 251 |
+
main_grads_dtype ................................ torch.float32
|
| 252 |
+
main_params_dtype ............................... torch.float32
|
| 253 |
+
make_vocab_size_divisible_by .................... 128
|
| 254 |
+
mamba_head_dim .................................. 64
|
| 255 |
+
mamba_num_groups ................................ 8
|
| 256 |
+
mamba_num_heads ................................. None
|
| 257 |
+
mamba_state_dim ................................. 128
|
| 258 |
+
manual_gc ....................................... False
|
| 259 |
+
manual_gc_eval .................................. True
|
| 260 |
+
manual_gc_interval .............................. 0
|
| 261 |
+
mask_factor ..................................... 1.0
|
| 262 |
+
mask_prob ....................................... 0.15
|
| 263 |
+
mask_type ....................................... random
|
| 264 |
+
masked_softmax_fusion ........................... True
|
| 265 |
+
max_position_embeddings ......................... 1024
|
| 266 |
+
max_tokens_to_oom ............................... 12000
|
| 267 |
+
memory_snapshot_path ............................ snapshot.pickle
|
| 268 |
+
merge_file ...................................... merges.txt
|
| 269 |
+
micro_batch_size ................................ 1
|
| 270 |
+
microbatch_group_size_per_vp_stage .............. None
|
| 271 |
+
mid_level_dataset_surplus ....................... 0.005
|
| 272 |
+
min_loss_scale .................................. 1.0
|
| 273 |
+
min_lr .......................................... 0.0
|
| 274 |
+
mlp_chunks_for_prefill .......................... 1
|
| 275 |
+
mmap_bin_files .................................. True
|
| 276 |
+
mock_data ....................................... True
|
| 277 |
+
moe_apply_probs_on_input ........................ False
|
| 278 |
+
moe_aux_loss_coeff .............................. 0.0
|
| 279 |
+
moe_enable_deepep ............................... False
|
| 280 |
+
moe_expert_capacity_factor ...................... None
|
| 281 |
+
moe_extended_tp ................................. False
|
| 282 |
+
moe_ffn_hidden_size ............................. None
|
| 283 |
+
moe_grouped_gemm ................................ False
|
| 284 |
+
moe_input_jitter_eps ............................ None
|
| 285 |
+
moe_layer_freq .................................. 1
|
| 286 |
+
moe_layer_recompute ............................. False
|
| 287 |
+
moe_pad_expert_input_to_capacity ................ False
|
| 288 |
+
moe_per_layer_logging ........................... False
|
| 289 |
+
moe_permute_fusion .............................. False
|
| 290 |
+
moe_router_bias_update_rate ..................... 0.001
|
| 291 |
+
moe_router_dtype ................................ None
|
| 292 |
+
moe_router_enable_expert_bias ................... False
|
| 293 |
+
moe_router_force_load_balancing ................. False
|
| 294 |
+
moe_router_group_topk ........................... None
|
| 295 |
+
moe_router_load_balancing_type .................. aux_loss
|
| 296 |
+
moe_router_num_groups ........................... None
|
| 297 |
+
moe_router_padding_for_fp8 ...................... False
|
| 298 |
+
moe_router_pre_softmax .......................... False
|
| 299 |
+
moe_router_score_function ....................... softmax
|
| 300 |
+
moe_router_topk ................................. 2
|
| 301 |
+
moe_router_topk_scaling_factor .................. None
|
| 302 |
+
moe_shared_expert_intermediate_size ............. None
|
| 303 |
+
moe_shared_expert_overlap ....................... False
|
| 304 |
+
moe_token_dispatcher_type ....................... allgather
|
| 305 |
+
moe_token_drop_policy ........................... probs
|
| 306 |
+
moe_use_legacy_grouped_gemm ..................... False
|
| 307 |
+
moe_use_upcycling ............................... False
|
| 308 |
+
moe_z_loss_coeff ................................ None
|
| 309 |
+
mrope_section ................................... None
|
| 310 |
+
mscale .......................................... 1.0
|
| 311 |
+
mscale_all_dim .................................. 1.0
|
| 312 |
+
mtp_loss_scaling_factor ......................... 0.1
|
| 313 |
+
mtp_num_layers .................................. None
|
| 314 |
+
multi_latent_attention .......................... False
|
| 315 |
+
nccl_all_reduce_for_prefill ..................... False
|
| 316 |
+
nccl_communicator_config_path ................... None
|
| 317 |
+
nccl_ub ......................................... False
|
| 318 |
+
no_load_optim ................................... None
|
| 319 |
+
no_load_rng ..................................... None
|
| 320 |
+
no_persist_layer_norm ........................... False
|
| 321 |
+
no_rope_freq .................................... None
|
| 322 |
+
no_save_optim ................................... None
|
| 323 |
+
no_save_rng ..................................... None
|
| 324 |
+
non_persistent_ckpt_type ........................ None
|
| 325 |
+
non_persistent_global_ckpt_dir .................. None
|
| 326 |
+
non_persistent_local_ckpt_algo .................. fully_parallel
|
| 327 |
+
non_persistent_local_ckpt_dir ................... None
|
| 328 |
+
non_persistent_save_interval .................... None
|
| 329 |
+
norm_epsilon .................................... 1e-05
|
| 330 |
+
normalization ................................... LayerNorm
|
| 331 |
+
num_attention_heads ............................. 64
|
| 332 |
+
num_channels .................................... 3
|
| 333 |
+
num_classes ..................................... 1000
|
| 334 |
+
num_dataset_builder_threads ..................... 1
|
| 335 |
+
num_distributed_optimizer_instances ............. 1
|
| 336 |
+
num_experts ..................................... None
|
| 337 |
+
num_layers ...................................... 2
|
| 338 |
+
num_layers_at_end_in_bf16 ....................... 1
|
| 339 |
+
num_layers_at_start_in_bf16 ..................... 1
|
| 340 |
+
num_layers_per_virtual_pipeline_stage ........... None
|
| 341 |
+
num_query_groups ................................ 16
|
| 342 |
+
num_virtual_stages_per_pipeline_rank ............ None
|
| 343 |
+
num_workers ..................................... 2
|
| 344 |
+
object_storage_cache_path ....................... None
|
| 345 |
+
one_logger_async ................................ False
|
| 346 |
+
one_logger_project .............................. megatron-lm
|
| 347 |
+
one_logger_run_name ............................. None
|
| 348 |
+
onnx_safe ....................................... None
|
| 349 |
+
openai_gelu ..................................... False
|
| 350 |
+
optimizer ....................................... adam
|
| 351 |
+
optimizer_cpu_offload ........................... False
|
| 352 |
+
optimizer_offload_fraction ...................... 1.0
|
| 353 |
+
output_bert_embeddings .......................... False
|
| 354 |
+
overlap_cpu_optimizer_d2h_h2d ................... False
|
| 355 |
+
overlap_grad_reduce ............................. False
|
| 356 |
+
overlap_p2p_comm ................................ False
|
| 357 |
+
overlap_p2p_comm_warmup_flush ................... False
|
| 358 |
+
overlap_param_gather ............................ False
|
| 359 |
+
overlap_param_gather_with_optimizer_step ........ False
|
| 360 |
+
override_opt_param_scheduler .................... False
|
| 361 |
+
params_dtype .................................... torch.float16
|
| 362 |
+
patch_dim ....................................... 16
|
| 363 |
+
per_split_data_args_path ........................ None
|
| 364 |
+
perform_initialization .......................... True
|
| 365 |
+
pin_cpu_grads ................................... True
|
| 366 |
+
pin_cpu_params .................................. True
|
| 367 |
+
pipeline_model_parallel_comm_backend ............ None
|
| 368 |
+
pipeline_model_parallel_size .................... 1
|
| 369 |
+
pipeline_model_parallel_split_rank .............. None
|
| 370 |
+
position_embedding_type ......................... learned_absolute
|
| 371 |
+
pretrained_checkpoint ........................... None
|
| 372 |
+
profile ......................................... False
|
| 373 |
+
profile_ranks ................................... [0]
|
| 374 |
+
profile_step_end ................................ 12
|
| 375 |
+
profile_step_start .............................. 10
|
| 376 |
+
q_lora_rank ..................................... None
|
| 377 |
+
qk_head_dim ..................................... 128
|
| 378 |
+
qk_l2_norm ...................................... False
|
| 379 |
+
qk_layernorm .................................... False
|
| 380 |
+
qk_pos_emb_head_dim ............................. 64
|
| 381 |
+
query_in_block_prob ............................. 0.1
|
| 382 |
+
rampup_batch_size ............................... None
|
| 383 |
+
rank ............................................ 0
|
| 384 |
+
recompute_granularity ........................... None
|
| 385 |
+
recompute_method ................................ None
|
| 386 |
+
recompute_modules ............................... None
|
| 387 |
+
recompute_num_layers ............................ None
|
| 388 |
+
record_memory_history ........................... False
|
| 389 |
+
relative_attention_max_distance ................. 128
|
| 390 |
+
relative_attention_num_buckets .................. 32
|
| 391 |
+
replication ..................................... False
|
| 392 |
+
replication_factor .............................. 2
|
| 393 |
+
replication_jump ................................ None
|
| 394 |
+
rerun_mode ...................................... disabled
|
| 395 |
+
reset_attention_mask ............................ False
|
| 396 |
+
reset_position_ids .............................. False
|
| 397 |
+
result_rejected_tracker_filename ................ None
|
| 398 |
+
retriever_report_topk_accuracies ................ []
|
| 399 |
+
retriever_score_scaling ......................... False
|
| 400 |
+
retriever_seq_length ............................ 256
|
| 401 |
+
retro_add_retriever ............................. False
|
| 402 |
+
retro_attention_gate ............................ 1
|
| 403 |
+
retro_cyclic_train_iters ........................ None
|
| 404 |
+
retro_encoder_attention_dropout ................. 0.1
|
| 405 |
+
retro_encoder_hidden_dropout .................... 0.1
|
| 406 |
+
retro_encoder_layers ............................ 2
|
| 407 |
+
retro_num_neighbors ............................. 2
|
| 408 |
+
retro_num_retrieved_chunks ...................... 2
|
| 409 |
+
retro_project_dir ............................... None
|
| 410 |
+
retro_verify_neighbor_count ..................... True
|
| 411 |
+
rope_scaling_factor ............................. 8.0
|
| 412 |
+
rotary_base ..................................... 10000
|
| 413 |
+
rotary_interleaved .............................. False
|
| 414 |
+
rotary_percent .................................. 1.0
|
| 415 |
+
rotary_scaling_factor ........................... 1.0
|
| 416 |
+
rotary_seq_len_interpolation_factor ............. None
|
| 417 |
+
run_workload_inspector_server ................... False
|
| 418 |
+
sample_rate ..................................... 1.0
|
| 419 |
+
save ............................................ gpt-checkpoint
|
| 420 |
+
save_interval ................................... 16
|
| 421 |
+
scatter_gather_tensors_in_pipeline .............. True
|
| 422 |
+
seed ............................................ 1234
|
| 423 |
+
seq_length ...................................... 1024
|
| 424 |
+
sequence_parallel ............................... False
|
| 425 |
+
sgd_momentum .................................... 0.9
|
| 426 |
+
short_seq_prob .................................. 0.1
|
| 427 |
+
skip_train ...................................... False
|
| 428 |
+
skipped_train_samples ........................... 0
|
| 429 |
+
spec ............................................ None
|
| 430 |
+
split ........................................... None
|
| 431 |
+
squared_relu .................................... False
|
| 432 |
+
start_weight_decay .............................. 0.1
|
| 433 |
+
straggler_ctrlr_port ............................ 65535
|
| 434 |
+
straggler_minmax_count .......................... 1
|
| 435 |
+
suggested_communication_unit_size ............... None
|
| 436 |
+
swiglu .......................................... False
|
| 437 |
+
swin_backbone_type .............................. tiny
|
| 438 |
+
symmetric_ar_type ............................... None
|
| 439 |
+
te_rng_tracker .................................. False
|
| 440 |
+
tensor_model_parallel_size ...................... 4
|
| 441 |
+
tensorboard_dir ................................. tensorboard-logs/
|
| 442 |
+
tensorboard_log_interval ........................ 1
|
| 443 |
+
tensorboard_queue_size .......................... 1000
|
| 444 |
+
test_data_path .................................. None
|
| 445 |
+
test_mode ....................................... False
|
| 446 |
+
tiktoken_num_special_tokens ..................... 1000
|
| 447 |
+
tiktoken_pattern ................................ None
|
| 448 |
+
tiktoken_special_tokens ......................... None
|
| 449 |
+
timing_log_level ................................ 0
|
| 450 |
+
timing_log_option ............................... minmax
|
| 451 |
+
titles_data_path ................................ None
|
| 452 |
+
tokenizer_model ................................. None
|
| 453 |
+
tokenizer_type .................................. GPT2BPETokenizer
|
| 454 |
+
torch_fsdp2_reshard_after_forward ............... True
|
| 455 |
+
tp_comm_bootstrap_backend ....................... nccl
|
| 456 |
+
tp_comm_bulk_dgrad .............................. True
|
| 457 |
+
tp_comm_bulk_wgrad .............................. True
|
| 458 |
+
tp_comm_overlap ................................. False
|
| 459 |
+
tp_comm_overlap_ag .............................. True
|
| 460 |
+
tp_comm_overlap_cfg ............................. None
|
| 461 |
+
tp_comm_overlap_rs .............................. True
|
| 462 |
+
tp_comm_overlap_rs_dgrad ........................ False
|
| 463 |
+
tp_comm_split_ag ................................ True
|
| 464 |
+
tp_comm_split_rs ................................ True
|
| 465 |
+
train_data_path ................................. None
|
| 466 |
+
train_iters ..................................... 10
|
| 467 |
+
train_samples ................................... None
|
| 468 |
+
train_sync_interval ............................. None
|
| 469 |
+
transformer_impl ................................ transformer_engine
|
| 470 |
+
transformer_pipeline_model_parallel_size ........ 1
|
| 471 |
+
untie_embeddings_and_output_weights ............. False
|
| 472 |
+
use_checkpoint_args ............................. False
|
| 473 |
+
use_checkpoint_opt_param_scheduler .............. False
|
| 474 |
+
use_cpu_initialization .......................... None
|
| 475 |
+
use_custom_fsdp ................................. False
|
| 476 |
+
use_dist_ckpt ................................... True
|
| 477 |
+
use_dist_ckpt_deprecated ........................ False
|
| 478 |
+
use_distributed_optimizer ....................... False
|
| 479 |
+
use_flash_attn .................................. False
|
| 480 |
+
use_legacy_models ............................... False
|
| 481 |
+
use_mp_args_from_checkpoint_args ................ False
|
| 482 |
+
use_one_sent_docs ............................... False
|
| 483 |
+
use_persistent_ckpt_worker ...................... False
|
| 484 |
+
use_precision_aware_optimizer ................... False
|
| 485 |
+
use_pytorch_profiler ............................ False
|
| 486 |
+
use_ring_exchange_p2p ........................... False
|
| 487 |
+
use_rope_scaling ................................ False
|
| 488 |
+
use_rotary_position_embeddings .................. False
|
| 489 |
+
use_sharp ....................................... False
|
| 490 |
+
use_tokenizer_model_from_checkpoint_args ........ True
|
| 491 |
+
use_torch_fsdp2 ................................. False
|
| 492 |
+
use_torch_optimizer_for_cpu_offload ............. False
|
| 493 |
+
use_tp_pp_dp_mapping ............................ False
|
| 494 |
+
v_head_dim ...................................... 128
|
| 495 |
+
valid_data_path ................................. None
|
| 496 |
+
variable_seq_lengths ............................ False
|
| 497 |
+
virtual_pipeline_model_parallel_size ............ None
|
| 498 |
+
vision_backbone_type ............................ vit
|
| 499 |
+
vision_pretraining .............................. False
|
| 500 |
+
vision_pretraining_type ......................... classify
|
| 501 |
+
vocab_extra_ids ................................. 0
|
| 502 |
+
vocab_file ...................................... vocab.json
|
| 503 |
+
vocab_size ...................................... None
|
| 504 |
+
wandb_exp_name ..................................
|
| 505 |
+
wandb_project ...................................
|
| 506 |
+
wandb_save_dir ..................................
|
| 507 |
+
weight_decay .................................... 0.1
|
| 508 |
+
weight_decay_incr_style ......................... constant
|
| 509 |
+
wgrad_deferral_limit ............................ 0
|
| 510 |
+
world_size ...................................... 8
|
| 511 |
+
yaml_cfg ........................................ None
|
| 512 |
+
-------------------- end of arguments ---------------------
|
| 513 |
+
INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
|
| 514 |
+
> building GPT2BPETokenizer tokenizer ...
|
| 515 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 516 |
+
WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
|
| 517 |
+
WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
|
| 518 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 519 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 520 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 521 |
+
> padded vocab (size: 50257) with 431 dummy tokens (new size: 50688)
|
| 522 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 523 |
+
WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
|
| 524 |
+
> initializing torch distributed ...
|
| 525 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 526 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 527 |
+
INFO:megatron.training.initialize:Setting logging level to 0
|
| 528 |
+
> initialized tensor model parallel with size 4
|
| 529 |
+
> initialized pipeline model parallel with size 1
|
| 530 |
+
> setting random seeds to 1234 ...
|
| 531 |
+
> compiling dataset index builder ...
|
| 532 |
+
make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 533 |
+
make: Nothing to be done for 'default'.
|
| 534 |
+
make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
|
| 535 |
+
>>> done with dataset index builder. Compilation time: 0.042 seconds
|
| 536 |
+
> compiling and loading fused kernels ...
|