These docs refer to the PiPPy integration.
( model, split_points = 'auto', no_split_module_classes = None, example_args = (), example_kwargs: Optional = None, num_chunks = None )
Parameters

model ( torch.nn.Module ) —
A model we want to split for pipeline-parallel inference.

split_points ( str , defaults to 'auto') —
How to generate the split points and chunk the model across each GPU. 'auto' will find the best balanced split given any model.

no_split_module_classes ( List[str] ) —
A list of class names for layers we don't want to be split.

example_args ( torch.Tensor ) —
The expected inputs for the model that uses order-based inputs. Recommended to use this method if possible.

example_kwargs ( torch.Tensor ) —
The expected inputs for the model that uses dictionary-based inputs. This is a highly limiting structure that requires the same keys to be present at all inference calls. Not recommended unless the prior condition is true for all cases.

num_chunks ( int ) —
The number of different stages the Pipeline will have. By default it will assign one chunk per GPU, but this can be tuned. In general, one should have num_chunks >= num_gpus.

Wraps model for PipelineParallelism.
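The num_chunks parameter controls microbatching: the input batch is divided into that many chunks so successive pipeline stages can work concurrently. A minimal sketch of such a split (illustrative only, not the library's implementation; chunk_batch is a hypothetical helper, and real pipeline code would split torch.Tensor batches along the batch dimension rather than Python lists):

```python
def chunk_batch(batch, num_chunks):
    """Split a list-like batch into roughly equal microbatches."""
    if num_chunks <= 0:
        raise ValueError("num_chunks must be positive")
    base, extra = divmod(len(batch), num_chunks)
    chunks, start = [], 0
    for i in range(num_chunks):
        # The first `extra` microbatches absorb one leftover item each.
        size = base + (1 if i < extra else 0)
        chunks.append(batch[start:start + size])
        start += size
    return chunks

# With num_chunks >= num_gpus, each pipeline stage can stay busy
# processing one microbatch while the next is still in flight.
microbatches = chunk_batch(list(range(8)), 4)
```

Here a batch of 8 samples on 4 GPUs yields 4 microbatches of 2 samples each, which is why num_chunks >= num_gpus is the recommended lower bound.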