#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################
This is the configuration used by this training:
Configuration name: 2d
{'data_identifier': 'nnUNetPlans_2d',
 'preprocessor_name': 'DefaultPreprocessor',
 'batch_size': 12,
 'patch_size': [448, 576],
 'median_image_size_in_voxels': [2464.0, 3280.0],
 'spacing': [1.0, 1.0],
 'normalization_schemes': ['ZScoreNormalization'],
 'use_mask_for_norm': [False],
 'UNet_class_name': 'PlainConvUNet',
 'UNet_base_num_features': 32,
 'n_conv_per_stage_encoder': [2, 2, 2, 2, 2, 2, 2],
 'n_conv_per_stage_decoder': [2, 2, 2, 2, 2, 2],
 'num_pool_per_axis': [6, 6],
 'pool_op_kernel_sizes': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]],
 'conv_kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]],
 'unet_max_num_features': 512,
 'resampling_fn_data': 'resample_data_or_seg_to_shape',
 'resampling_fn_seg': 'resample_data_or_seg_to_shape',
 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None},
 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None},
 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape',
 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None},
 'batch_dice': True}
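A quick consistency check on this plan: the first stage does not pool and the remaining six stages each pool by 2 per axis, so the total downsampling is 64 per axis, which divides the 448x576 patch evenly. A minimal sketch with illustrative variable names:

```python
from math import prod

# Values taken from the 2d configuration above
patch_size = [448, 576]
pool_op_kernel_sizes = [[1, 1]] + [[2, 2]] * 6  # first stage does not pool

# Total downsampling per axis is the product of the pooling factors
total_downsampling = [prod(k) for k in zip(*pool_op_kernel_sizes)]  # [64, 64]

# The patch must divide evenly; the bottleneck feature map is 7 x 9
assert all(p % d == 0 for p, d in zip(patch_size, total_downsampling))
bottleneck = [p // d for p, d in zip(patch_size, total_downsampling)]  # [7, 9]
```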
These are the global plan.json settings:
{'dataset_name': 'Dataset789_ChronoRoot2',
 'plans_name': 'nnUNetPlans',
 'original_median_spacing_after_transp': [999.0, 1.0, 1.0],
 'original_median_shape_after_transp': [1, 2464, 3280],
 'image_reader_writer': 'NaturalImage2DIO',
 'transpose_forward': [0, 1, 2],
 'transpose_backward': [0, 1, 2],
 'experiment_planner_used': 'ExperimentPlanner',
 'label_manager': 'LabelManager',
 'foreground_intensity_properties_per_channel': {'0': {'max': 225.0, 'mean': 119.90813446044922, 'median': 122.0, 'min': 0.0, 'percentile_00_5': 0.3148415684700012, 'percentile_99_5': 196.0, 'std': 38.823753356933594}}}
2025-01-20 15:33:33.043577: unpacking dataset...
2025-01-20 15:33:33.127604: unpacking done...
2025-01-20 15:33:33.128010: do_dummy_2d_data_aug: False
2025-01-20 15:33:33.130200: Using splits from existing split file: nnUNet_preprocessed/Dataset789_ChronoRoot2/splits_final.json
2025-01-20 15:33:33.130489: The split file contains 5 splits.
2025-01-20 15:33:33.130514: Desired fold for training: 0
2025-01-20 15:33:33.130532: This split has 756 training and 189 validation cases.
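The split file referenced above is a JSON list with one {'train': [...], 'val': [...]} entry per fold. A toy reconstruction (the case identifiers here are hypothetical; the real file lists this dataset's names) that reproduces the 756/189 fold sizes:

```python
# Hypothetical case identifiers standing in for the real dataset's names
case_ids = [f"case_{i:04d}" for i in range(945)]  # 756 + 189 cases total

n_folds = 5
splits = []
for k in range(n_folds):
    val = case_ids[k::n_folds]                       # every 5th case -> 189 cases
    val_set = set(val)
    train = [c for c in case_ids if c not in val_set]  # remaining 756 cases
    splits.append({"train": train, "val": val})

# fold 0: 756 training / 189 validation, as reported in the log
```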
2025-01-20 15:33:33.661267: Unable to plot network architecture:
2025-01-20 15:33:33.663022: module 'torch.onnx' has no attribute '_optimize_trace'
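This warning is benign: the architecture-plotting helper (likely the hiddenlayer library) calls the private `torch.onnx._optimize_trace`, which recent PyTorch releases removed; as the log shows, the error is caught and training proceeds. A stand-in sketch of that defensive pattern (`onnx_stub` is a dummy object, not the real torch.onnx):

```python
import types

# Stand-in for torch.onnx on recent PyTorch, where the private helper
# `_optimize_trace` no longer exists.
onnx_stub = types.SimpleNamespace()

def try_plot_architecture(onnx_module):
    # Mimics the pattern visible in the log: a plotting failure is
    # caught, reported, and training simply continues.
    try:
        onnx_module._optimize_trace  # raises AttributeError here
        return "plotted"
    except AttributeError as e:
        return f"Unable to plot network architecture:\n{e}"

message = try_plot_architecture(onnx_stub)
```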
2025-01-20 15:33:33.683954:
2025-01-20 15:33:33.684024: Epoch 0
2025-01-20 15:33:33.684111: Current learning rate: 0.01
2025-01-20 15:34:25.702976: train_loss 0.2018
2025-01-20 15:34:25.703284: val_loss 0.0585
2025-01-20 15:34:25.703332: Pseudo dice [0.0, 0.0, 0.0, 0.0, 1e-04, 0.0]
2025-01-20 15:34:25.703377: Epoch time: 52.02 s
2025-01-20 15:34:25.703405: Yayy! New best EMA pseudo Dice: 0.0
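The learning-rate values follow a polynomial decay schedule; assuming nnU-Net's defaults of 1000 epochs and exponent 0.9, the rounded values printed in this log are reproduced exactly, including epochs 5 and 6 both displaying 0.00995 due to rounding:

```python
def poly_lr(epoch, initial_lr=0.01, max_epochs=1000, exponent=0.9):
    # PolyLR: lr decays from initial_lr to 0 over max_epochs
    return initial_lr * (1 - epoch / max_epochs) ** exponent

# Rounded to 5 decimals, matching the "Current learning rate" lines
lrs = [round(poly_lr(e), 5) for e in range(8)]
# [0.01, 0.00999, 0.00998, 0.00997, 0.00996, 0.00995, 0.00995, 0.00994]
```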
2025-01-20 15:34:26.294650:
2025-01-20 15:34:26.294980: Epoch 1
2025-01-20 15:34:26.295055: Current learning rate: 0.00999
2025-01-20 15:35:14.646918: train_loss -0.0013
2025-01-20 15:35:14.647027: val_loss -0.0872
2025-01-20 15:35:14.647067: Pseudo dice [0.0, 0.0, 0.6697, 0.0, 0.0067, 0.0]
2025-01-20 15:35:14.647120: Epoch time: 48.35 s
2025-01-20 15:35:14.647160: Yayy! New best EMA pseudo Dice: 0.011300000362098217
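The negative train/val losses are expected: nnU-Net optimizes a compound Dice + cross-entropy loss in which the soft Dice term enters negated, so the total drops below zero as overlap improves. A simplified binary sketch (not nnU-Net's actual implementation):

```python
import math

def soft_dice_loss(probs, target, eps=1e-5):
    # Negated soft Dice: approaches -1 for perfect overlap
    inter = sum(p * t for p, t in zip(probs, target))
    denom = sum(probs) + sum(target)
    return -(2 * inter + eps) / (denom + eps)

def bce(probs, target, eps=1e-12):
    # Mean binary cross-entropy over pixels
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, target)) / len(probs)

target = [1, 1, 0, 0]
good_pred = [0.95, 0.9, 0.05, 0.1]

# Once predictions are confident, the negated Dice term dominates the
# small cross-entropy term and the combined loss goes negative.
total = soft_dice_loss(good_pred, target) + bce(good_pred, target)
```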
2025-01-20 15:35:15.423238:
2025-01-20 15:35:15.423292: Epoch 2
2025-01-20 15:35:15.423368: Current learning rate: 0.00998
2025-01-20 15:36:03.020656: train_loss -0.1381
2025-01-20 15:36:03.020907: val_loss -0.1816
2025-01-20 15:36:03.020954: Pseudo dice [0.2881, 0.0, 0.7474, 0.0, 0.4787, 0.0]
2025-01-20 15:36:03.021007: Epoch time: 47.6 s
2025-01-20 15:36:03.021029: Yayy! New best EMA pseudo Dice: 0.03539999946951866
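The "EMA pseudo Dice" values can be reproduced from the per-epoch means: an exponential moving average with decay 0.9 of the mean foreground Dice, seeded with the first epoch's value. A minimal sketch reproducing the first three logged values:

```python
def mean_fg_dice(per_class):
    # Mean over the six foreground classes printed in the log
    return sum(per_class) / len(per_class)

pseudo_dice = [
    [0.0, 0.0, 0.0, 0.0, 1e-04, 0.0],         # epoch 0
    [0.0, 0.0, 0.6697, 0.0, 0.0067, 0.0],     # epoch 1
    [0.2881, 0.0, 0.7474, 0.0, 0.4787, 0.0],  # epoch 2
]

ema = mean_fg_dice(pseudo_dice[0])  # first epoch seeds the EMA
ema_history = [ema]
for d in pseudo_dice[1:]:
    ema = 0.9 * ema + 0.1 * mean_fg_dice(d)  # decay factor 0.9
    ema_history.append(ema)

# Rounded: 0.0, 0.0113, 0.0354 -- matching the "New best EMA" lines
```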
2025-01-20 15:36:03.935388:
2025-01-20 15:36:03.935457: Epoch 3
2025-01-20 15:36:03.935537: Current learning rate: 0.00997
2025-01-20 15:36:51.606517: train_loss -0.2352
2025-01-20 15:36:51.606756: val_loss -0.3065
2025-01-20 15:36:51.606802: Pseudo dice [0.5379, 0.3189, 0.774, 0.0, 0.7698, 0.0]
2025-01-20 15:36:51.606851: Epoch time: 47.67 s
2025-01-20 15:36:51.606889: Yayy! New best EMA pseudo Dice: 0.07190000265836716
2025-01-20 15:36:52.389135:
2025-01-20 15:36:52.389323: Epoch 4
2025-01-20 15:36:52.389404: Current learning rate: 0.00996
2025-01-20 15:37:40.103456: train_loss -0.3428
2025-01-20 15:37:40.103700: val_loss -0.4076
2025-01-20 15:37:40.103748: Pseudo dice [0.5844, 0.5596, 0.7762, 0.1596, 0.7759, 0.3504]
2025-01-20 15:37:40.103804: Epoch time: 47.71 s
2025-01-20 15:37:40.103835: Yayy! New best EMA pseudo Dice: 0.11810000240802765
2025-01-20 15:37:40.897617:
2025-01-20 15:37:40.897854: Epoch 5
2025-01-20 15:37:40.897932: Current learning rate: 0.00995
2025-01-20 15:38:28.587023: train_loss -0.4272
2025-01-20 15:38:28.587277: val_loss -0.4336
2025-01-20 15:38:28.587343: Pseudo dice [0.5876, 0.4889, 0.7751, 0.3306, 0.7703, 0.5668]
2025-01-20 15:38:28.587382: Epoch time: 47.69 s
2025-01-20 15:38:28.587403: Yayy! New best EMA pseudo Dice: 0.16500000655651093
2025-01-20 15:38:29.368613:
2025-01-20 15:38:29.369121: Epoch 6
2025-01-20 15:38:29.369184: Current learning rate: 0.00995
2025-01-20 15:39:17.039433: train_loss -0.4698
2025-01-20 15:39:17.039688: val_loss -0.4844
2025-01-20 15:39:17.039737: Pseudo dice [0.5719, 0.6184, 0.7701, 0.3576, 0.8051, 0.5977]
2025-01-20 15:39:17.039773: Epoch time: 47.67 s
2025-01-20 15:39:17.039794: Yayy! New best EMA pseudo Dice: 0.21050000190734863
2025-01-20 15:39:17.820793:
2025-01-20 15:39:17.825356: Epoch 7
2025-01-20 15:39:17.825446: Current learning rate: 0.00994
2025-01-20 15:40:05.489944: train_loss -0.5131
2025-01-20 15:40:05.490206: val_loss -0.5391
2025-01-20 15:40:05.490256: Pseudo dice [0.6362, 0.6527, 0.801, 0.4168, 0.7839, 0.6848]
2025-01-20 15:40:05.490294: Epoch time: 47.67 s
2025-01-20 15:40:05.490315: Yayy! New best EMA pseudo Dice: 0.2556999921798706
2025-01-20 15:40:06.348648:
2025-01-20 15:40:06.383985: Epoch 8
2025-01-20 15:40:06.384077: Current learning rate: 0.00993
2025-01-20 15:40:54.073986: train_loss -0.5274
2025-01-20 15:40:54.074162: val_loss -0.5236
2025-01-20 15:40:54.074249: Pseudo dice [0.6372, 0.6676, 0.8114, 0.5108, 0.8206, 0.6261]
2025-01-20 15:40:54.074297: Epoch time: 47.73 s
2025-01-20 15:40:54.074326: Yayy! New best EMA pseudo Dice: 0.2980000078678131
2025-01-20 15:40:54.935162:
2025-01-20 15:40:54.970317: Epoch 9
2025-01-20 15:40:54.970425: Current learning rate: 0.00992
2025-01-20 15:41:42.661351: train_loss -0.552
2025-01-20 15:41:42.661447: val_loss -0.5817