The PatchTST model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
The abstract from the paper is the following:
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.
Tips:
The model can also be used for time series classification and time series regression. See the respective PatchTSTForClassification and PatchTSTForRegression classes.
At a high level, the model vectorizes the time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer, which then outputs the prediction-length forecast via an appropriate head (a minimal patching sketch follows below).
This model was contributed by namctin, gsinthong, diepi, vijaye12, wmgifford, and kashif.
The original code can be found here.
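The patching step can be reproduced with plain PyTorch to see how a raw series becomes a sequence of Transformer tokens. The tensor sizes and patching parameters below are illustrative only and do not come from a released checkpoint; inside the library, patching is handled by the model itself.

>>> import torch

>>> # a toy batch: 2 series, 512 time steps, 3 channels
>>> past_values = torch.randn(2, 512, 3)

>>> patch_length, patch_stride = 16, 8
>>> # unfold the time dimension into (possibly overlapping) patches:
>>> # (batch, time, channels) -> (batch, channels, num_patches, patch_length)
>>> patches = past_values.transpose(1, 2).unfold(-1, patch_length, patch_stride)
>>> patches.shape
torch.Size([2, 3, 63, 16])

Each of the 63 patches per channel is then linearly embedded and processed as one token by the Transformer encoder, which is why the attention cost grows with the number of patches rather than with the number of time steps.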
( num_input_channels: int = 1 context_length: int = 32 distribution_output: str = 'student_t' loss: str = 'mse' patch_length: int = 1 patch_stride: int = 1 encoder_layers: int = 3 d_model: int = 64 encoder_attention_heads: int = 4 shared_embedding: bool = True channel_attention: bool = False encoder_ffn_dim: int = 256 norm: str = 'BatchNorm' norm_eps: float = 1e-05 attention_dropout: float = 0.0 dropout: float = 0.0 positional_dropout: float = 0.0 dropout_path: float = 0.0 ff_dropout: float = 0.0 bias: bool = True activation_function: str = 'gelu' pre_norm: bool = True positional_encoding_type: str = 'sincos' learn_pe: bool = False use_cls_token: bool = False init_std: float = 0.02 shared_projection: bool = True seed_number: typing.Optional[int] = None scaling: typing.Union[str, bool, NoneType] = 'mean' mask_input: typing.Optional[bool] = None mask_type: str = 'random' random_mask_ratio: float = 0.5 forecast_mask_patches: typing.List[int] = [2, 3] forecast_mask_ratios: typing.List[int] = [1, 1] channel_consistent_masking: bool = False unmasked_channel_indices: typing.Optional[typing.List[int]] = None mask_value = 0 pooling_type: str = 'mean' head_dropout: float = 0.0 prediction_length: int = 24 num_targets: int = 1 output_range: typing.List = None num_parallel_samples: int = 100 **kwargs )
Parameters
num_input_channels (int, optional, defaults to 1) — The size of the target variable, which is 1 for univariate targets and > 1 for multivariate targets.
context_length (int, optional, defaults to 32) — The context length for the encoder.
distribution_output (str, optional, defaults to "student_t") — The distribution emission head for the model when loss is "nll". Could be either "student_t", "normal" or "negative_binomial".
loss (str, optional, defaults to "mse") — The loss function for the model corresponding to the distribution_output head. For parametric distributions it is the negative log likelihood ("nll") and for point estimates it is the mean squared error ("mse").
patch_length (int, optional, defaults to 1) — The patch length for the patchification process.
patch_stride (int, optional, defaults to 1) — The stride of the patchification process.
encoder_layers (int, optional, defaults to 3) — Number of encoder layers.
d_model (int, optional, defaults to 64) — Dimensionality of the Transformer layers.
encoder_attention_heads (int, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
shared_embedding (bool, optional, defaults to True) — Whether to share the input embedding across all channels.
channel_attention (bool, optional, defaults to False) — Whether to activate the channel attention block in the Transformer to allow channels to attend to each other.
encoder_ffn_dim (int, optional, defaults to 256) — Dimension of the "intermediate" (often named feed-forward) layer in the encoder.
norm (str, optional, defaults to "BatchNorm") — Normalization at each Transformer layer. Can be "BatchNorm" or "LayerNorm".
norm_eps (float, optional, defaults to 1e-05) — A value added to the denominator for numerical stability of the normalization.
attention_dropout (float, optional, defaults to 0.0) — The dropout probability for the attention probabilities.
dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the encoder.
positional_dropout (float, optional, defaults to 0.0) — The dropout probability in the positional embedding layer.
dropout_path (float, optional, defaults to 0.0) — The drop path rate in the residual block.
ff_dropout (float, optional, defaults to 0.0) — The dropout probability used between the two layers of the feed-forward networks.
bias (bool, optional, defaults to True) — Whether to use bias in the feed-forward networks.
activation_function (str, optional, defaults to "gelu") — The non-linear activation function (string) in the encoder. "gelu" and "relu" are supported.
pre_norm (bool, optional, defaults to True) — If pre_norm is set to True, normalization is applied before self-attention. Otherwise, normalization is applied after the residual block.
positional_encoding_type (str, optional, defaults to "sincos") — Positional encodings. "zeros", "normal", "uniform" and "sincos" are supported.
learn_pe (bool, optional, defaults to False) — Whether the positional encoding is updated during training.
use_cls_token (bool, optional, defaults to False) — Whether a cls token is used.
init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated normal weight initialization distribution.
shared_projection (bool, optional, defaults to True) — Whether to share the projection layer across different channels in the forecast head.
seed_number (int, optional) — Seed used for random masking. If unset, no seed is set.
scaling (str or bool, optional, defaults to "mean") — Whether to scale the input targets via a "mean" scaler, a "std" scaler, or no scaler if None. If True, the scaler is set to "mean".
mask_input (bool, optional, defaults to False) — Whether to apply masking during pretraining.
mask_type (str, optional, defaults to "random") — Masking type. Only "random" and "forecast" are currently supported.
random_mask_ratio (float, optional, defaults to 0.5) — Masking ratio applied to mask the input data during random pretraining.
forecast_mask_patches (List[int], optional, defaults to [2, 3]) — List of patch lengths to mask at the end of the data.
forecast_mask_ratios (List[int], optional, defaults to [1, 1]) — List of weights to use for each patch length in forecast_mask_patches. For example, if forecast_mask_patches is [5, 4] and forecast_mask_ratios is [1, 1], both patch lengths receive equal weight.
channel_consistent_masking (bool, optional, defaults to False) — If True, all channels share the same masking.
unmasked_channel_indices (list, optional) — Channels that are not masked during pretraining.
mask_value (int, optional, defaults to 0) — The value assigned to masked entries during pretraining.
pooling_type (str, optional, defaults to "mean") — Pooling of the embedding. "mean", "max" and None are supported.
head_dropout (float, optional, defaults to 0.0) — The dropout probability for the head.
prediction_length (int, optional, defaults to 24) — The prediction length for the encoder, i.e. the prediction horizon of the model.
num_targets (int, optional, defaults to 1) — Number of targets for regression and classification tasks. For classification, it is the number of classes.
output_range (list, optional) — Output range for the regression task. The range of output values can be set to enforce that the model produces values within a range.
num_parallel_samples (int, optional, defaults to 100) — The number of samples generated in parallel for probabilistic prediction.
This is the configuration class to store the configuration of a PatchTSTModel. It is used to instantiate a PatchTST model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PatchTST ibm/patchtst architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import PatchTSTConfig, PatchTSTModel
>>> # Initializing a PatchTST configuration with 12 time steps for prediction
>>> configuration = PatchTSTConfig(prediction_length=12)
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = PatchTSTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
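The loss, distribution_output and num_parallel_samples options described above interact: with loss="mse" the model produces point forecasts, while with loss="nll" the head parameterizes the chosen output distribution and num_parallel_samples controls how many trajectories are sampled at prediction time. A minimal sketch of both configurations (the concrete values are illustrative):

>>> from transformers import PatchTSTConfig

>>> # point forecasts: the head is trained with mean squared error
>>> point_config = PatchTSTConfig(prediction_length=24, loss="mse")

>>> # probabilistic forecasts: negative log-likelihood of a Student-T head,
>>> # sampling 100 trajectories in parallel at prediction time
>>> prob_config = PatchTSTConfig(
...     prediction_length=24,
...     loss="nll",
...     distribution_output="student_t",
...     num_parallel_samples=100,
... )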
( config: PatchTSTConfig )
Parameters
config (PatchTSTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare PatchTST Model outputting raw hidden-states without any specific head. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( past_values: Tensor past_observed_mask: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
past_values (torch.Tensor of shape (batch_size, sequence_length, num_input_channels), required) — Input sequence to the model.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length, num_input_channels), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]: 1 for values that are observed, 0 for values that are missing.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
output_attentions (bool, optional) — Whether or not to return the output attentions of all layers.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
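A minimal usage sketch of the bare model with randomly initialized weights; the input sizes are illustrative, and the exact fields of the returned output (e.g. last_hidden_state) may vary slightly between library versions:

>>> import torch
>>> from transformers import PatchTSTConfig, PatchTSTModel

>>> config = PatchTSTConfig(num_input_channels=3, context_length=64, patch_length=8, patch_stride=8)
>>> model = PatchTSTModel(config)

>>> past_values = torch.randn(2, config.context_length, config.num_input_channels)
>>> with torch.no_grad():
...     outputs = model(past_values=past_values)

>>> # the encoder returns one vector per channel and per patch; with these settings
>>> # the shape should be (batch_size, num_input_channels, num_patches, d_model)
>>> print(outputs.last_hidden_state.shape)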
PatchTSTForPrediction: PatchTST for forecasting. The model consists of the base PatchTST model with a forecasting head on top.
( past_values: Tensor past_observed_mask: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
past_values (torch.Tensor of shape (batch_size, sequence_length, num_input_channels), required) — Input sequence to the model.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length, num_input_channels), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]: 1 for values that are observed, 0 for values that are missing.
future_values (torch.Tensor of shape (batch_size, forecast_len, num_input_channels), optional) — Future target values associated with the past_values.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
output_attentions (bool, optional) — Whether or not to return the output attentions of all layers.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
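A hedged training-style sketch for the forecasting head: when future_values is provided the model also returns a loss (mean squared error with the default loss="mse"). The sizes below are assumptions for illustration, not taken from a released checkpoint:

>>> import torch
>>> from transformers import PatchTSTConfig, PatchTSTForPrediction

>>> config = PatchTSTConfig(
...     num_input_channels=3,
...     context_length=64,
...     patch_length=8,
...     patch_stride=8,
...     prediction_length=24,
...     loss="mse",
... )
>>> model = PatchTSTForPrediction(config)

>>> past_values = torch.randn(2, config.context_length, config.num_input_channels)
>>> future_values = torch.randn(2, config.prediction_length, config.num_input_channels)

>>> # passing future_values makes the forward pass return a training loss
>>> outputs = model(past_values=past_values, future_values=future_values)
>>> outputs.loss.backward()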
PatchTSTForClassification: PatchTST for time series classification. The model consists of the base PatchTST model with a classification head on top.
( past_values: Tensor target_values: Tensor = None past_observed_mask: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
past_values (torch.Tensor of shape (batch_size, sequence_length, num_input_channels), required) — Input sequence to the model.
target_values (torch.Tensor, optional) — Labels associated with the past_values.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length, num_input_channels), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]: 1 for values that are observed, 0 for values that are missing.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
output_attentions (bool, optional) — Whether or not to return the output attentions of all layers.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
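A minimal sketch for the classification head, assuming integer class labels of shape (batch_size,) and num_targets classes; the sizes are illustrative:

>>> import torch
>>> from transformers import PatchTSTConfig, PatchTSTForClassification

>>> # num_targets is the number of classes for classification
>>> config = PatchTSTConfig(
...     num_input_channels=3,
...     context_length=64,
...     patch_length=8,
...     patch_stride=8,
...     num_targets=4,
... )
>>> model = PatchTSTForClassification(config)

>>> past_values = torch.randn(8, config.context_length, config.num_input_channels)
>>> target_values = torch.randint(0, config.num_targets, (8,))  # assumed label format

>>> outputs = model(past_values=past_values, target_values=target_values)
>>> loss = outputs.loss  # classification loss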
PatchTSTForPretraining: the masked pretraining model. The model consists of the base PatchTST model with a masked-patch pretraining head on top.
( past_values: Tensor past_observed_mask: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
torch.Tensor
of shape (bs, sequence_length, num_input_channels)
, required) —
Input sequence to the model torch.BoolTensor
of shape (batch_size, sequence_length, num_input_channels)
, optional) —
Boolean mask to indicate which past_values
were observed and which were missing. Mask values selected
in [0, 1]
:
bool
, optional) —
Whether or not to return the hidden states of all layers bool
, optional) — Whether or not to return a ModelOutput
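A hedged sketch for masked pretraining: mask_input=True together with the masking options described in the configuration enables random patch masking, and the forward pass returns a reconstruction loss over the masked patches. The sizes and ratios below are assumptions for illustration, not a released recipe:

>>> import torch
>>> from transformers import PatchTSTConfig, PatchTSTForPretraining

>>> config = PatchTSTConfig(
...     num_input_channels=3,
...     context_length=64,
...     patch_length=8,
...     patch_stride=8,
...     mask_input=True,
...     mask_type="random",
...     random_mask_ratio=0.4,
... )
>>> model = PatchTSTForPretraining(config)

>>> past_values = torch.randn(2, config.context_length, config.num_input_channels)
>>> outputs = model(past_values=past_values)
>>> loss = outputs.loss  # reconstruction loss over the masked patches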
PatchTSTForRegression: PatchTST for time series regression. The model consists of the base PatchTST model with a regression head on top.
( past_values: Tensor target_values: Tensor past_observed_mask: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
past_values (torch.Tensor of shape (batch_size, sequence_length, num_input_channels), required) — Input sequence to the model.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length, num_input_channels), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]: 1 for values that are observed, 0 for values that are missing.
target_values (torch.Tensor of shape (batch_size, num_input_channels)) — Target values associated with the past_values.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
output_attentions (bool, optional) — Whether or not to return the output attentions of all layers.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
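A minimal sketch for the regression head. To keep the target shape unambiguous this uses a univariate setup (num_input_channels=1, num_targets=1); the exact target layout for multivariate inputs should be checked against the docstring above:

>>> import torch
>>> from transformers import PatchTSTConfig, PatchTSTForRegression

>>> config = PatchTSTConfig(
...     num_input_channels=1,
...     context_length=64,
...     patch_length=8,
...     patch_stride=8,
...     num_targets=1,
... )
>>> model = PatchTSTForRegression(config)

>>> past_values = torch.randn(8, config.context_length, config.num_input_channels)
>>> target_values = torch.randn(8, 1)  # one regression target per series

>>> outputs = model(past_values=past_values, target_values=target_values)
>>> loss = outputs.loss  # mean squared error with the default loss="mse"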