Accelerate supports pipeline parallelism for large-scale inference with the PyTorch `torch.distributed.pipelining` API.
( model: torch.nn.Module, split_points: Union[str, List[str]] = 'auto', no_split_module_classes: Optional[List[str]] = None, example_args: Optional[Tuple[Any]] = (), example_kwargs: Optional[Dict[str, Any]] = None, num_chunks: Optional[int] = None, gather_output: Optional[bool] = False )
Parameters

- **model** (`torch.nn.Module`) — A model we want to split for pipeline-parallel inference.
- **split_points** (`str` or `List[str]`, defaults to `'auto'`) — How to generate the split points and chunk the model across each GPU. `'auto'` will find the best balanced split given any model; otherwise, pass a list of layer names in the model to split by.
- **no_split_module_classes** (`List[str]`) — A list of class names for layers we don't want to be split.
- **example_args** (tuple of model inputs) — Example positional inputs used to trace and split the model; recommended for models that take order-based inputs.
- **example_kwargs** (dict of model inputs) — Example keyword inputs used to trace and split the model; this requires the same keys to be present at every inference call, so only use it when that condition holds.
- **num_chunks** (`int`, defaults to the number of available GPUs) — The number of different stages the pipeline will have. By default one chunk is assigned per GPU, but this can be tuned; in general, `num_chunks` should be at least the number of GPUs.
- **gather_output** (`bool`, defaults to `False`) — If `True`, the output from the last GPU (which holds the true outputs) is sent across to all GPUs.

Wraps `model` for pipeline parallel inference.
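
Below is a minimal usage sketch, not the library's canonical example: it assumes a multi-GPU machine, the `gpt2` checkpoint from `transformers`, and a launch with one process per GPU (e.g. `accelerate launch script.py`); the input shape is illustrative only.

```python
import torch
from transformers import AutoModelForSequenceClassification

from accelerate import prepare_pippy

# Build the model on CPU; prepare_pippy places each stage on its GPU.
model = AutoModelForSequenceClassification.from_pretrained("gpt2")
model.eval()

# Example inputs are required so the model can be traced and split.
example_input = torch.randint(
    low=0,
    high=model.config.vocab_size,
    size=(2, 1024),  # batch_size x sequence_length (illustrative)
    dtype=torch.int64,
)

# 'auto' balances the split across the available GPUs.
model = prepare_pippy(model, split_points="auto", example_args=(example_input,))

# Inputs are fed to the first pipeline stage, so move them to the first GPU.
args = example_input.to("cuda:0")

with torch.no_grad():
    output = model(args)
```

Passing explicit layer names instead of `'auto'` (for example, `split_points=["transformer.h.6"]`, a hypothetical cut point for a GPT-2-style model) pins the split manually, and setting `gather_output=True` is useful when every rank, not just the last one, needs the final output.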