In PEFT, `torch.compile` works for some but not all features. It won't always work because PEFT is highly dynamic in certain places (for instance, loading and switching between multiple adapters), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work but won't be as fast as expected because of graph breaks.
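If you want to check whether your workload hits graph breaks, one option is to pass `fullgraph=True`, which makes `torch.compile` raise an error instead of silently splitting the computation. The toy function below is only an illustration; data-dependent control flow like this is a typical cause of graph breaks:

```python
import torch

# Toy function that branches on a tensor value, a typical cause of graph breaks.
def forward_fn(x):
    if x.sum() > 0:
        return x * 2
    return x - 1

# With fullgraph=True, torch.compile errors out on the first graph break
# instead of silently compiling the function in several pieces.
compiled_fn = torch.compile(forward_fn, fullgraph=True)

try:
    compiled_fn(torch.randn(4))
except Exception as exc:
    print(f"Graph break: {exc}")
```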
If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly: it might give you an output, but the output can be incorrect. This guide describes what works with `torch.compile` and what doesn't. For your own testing, we recommend using the latest PyTorch version, as `torch.compile` is constantly being improved.
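One way to guard against silently incorrect results is to compare the compiled output with an eager reference. A minimal sketch, assuming a small causal LM (the model name is only an example) with a freshly initialized LoRA adapter:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any causal LM works here; opt-125m is just a small example.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))
model.eval()

input_ids = torch.randint(0, model.config.vocab_size, (1, 16))

with torch.no_grad():
    expected = model(input_ids).logits         # eager reference
    compiled_model = torch.compile(model)      # default settings
    actual = compiled_model(input_ids).logits  # compiled result

# Small numerical deviations are expected; large ones indicate a real problem.
print(torch.allclose(expected, actual, atol=1e-4, rtol=1e-4))
```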
Unless indicated otherwise, the default `torch.compile` settings were used.
These features work with `torch.compile`. Everything listed below was tested with a causal LM:

- `Trainer` from 🤗 transformers

Several adapter types, among them LoRA, were tested successfully.
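As an illustration of the `Trainer` case, the `torch_compile` flag of `TrainingArguments` asks `Trainer` to call `torch.compile` on the model. This is a minimal sketch; the model name and the tiny synthetic dataset are stand-ins for your own setup:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))

# Tiny synthetic dataset, only there to make the sketch self-contained.
data = {"input_ids": [[1, 2, 3, 4]] * 8, "labels": [[1, 2, 3, 4]] * 8}
train_dataset = Dataset.from_dict(data)

args = TrainingArguments(
    output_dir="outputs",
    max_steps=5,
    per_device_train_batch_size=4,
    torch_compile=True,  # Trainer compiles the model before training
)
Trainer(model=model, args=args, train_dataset=train_dataset).train()
```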
Below are some of the more advanced PEFT features that work. They were all tested with LoRA.
- `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
- Merging multiple adapters into one (i.e. calling `model.add_weighted_adapter(...)`)
- Disabling adapters (i.e. using `with model.disable_adapter()`)
- Merging adapters into the base model and unloading (i.e. calling `model.merge_and_unload()`)
- Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`)
- Inference with multiple adapters (i.e. using `model.add_adapter` or `model.load_adapter` to load more than one adapter); for this, only call `torch.compile` after loading all adapters (see the sketch after this list)

Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.
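The multi-adapter and mixed-batch items can be combined into one flow: load every adapter first, compile once, then route individual samples to different adapters. This is a minimal sketch assuming two LoRA checkpoints; the model name and adapter paths are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Load all adapters first; both paths are hypothetical LoRA checkpoints.
model = PeftModel.from_pretrained(base_model, "path/to/adapter_one", adapter_name="default")
model.load_adapter("path/to/adapter_two", adapter_name="other")
model.eval()

# Only call torch.compile once all adapters are in place.
model = torch.compile(model)

input_ids = torch.randint(0, base_model.config.vocab_size, (3, 16))
with torch.no_grad():
    # Mixed adapter batch: each sample is routed to its own adapter;
    # "__base__" means the base model without any adapter applied.
    output = model(input_ids, adapter_names=["__base__", "default", "other"])
```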
All the use cases listed above are tested in `peft/tests/test_torch_compile.py`. If you want to check in more detail how we tested a certain feature, go to that file and look at the test that corresponds to your use case.
If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.