Text Classification & Regression Parameters

class autotrain.trainers.text_classification.params.TextClassificationParams


( data_path: str = None, model: str = 'bert-base-uncased', lr: float = 5e-05, epochs: int = 3, max_seq_length: int = 128, batch_size: int = 8, warmup_ratio: float = 0.1, gradient_accumulation: int = 1, optimizer: str = 'adamw_torch', scheduler: str = 'linear', weight_decay: float = 0.0, max_grad_norm: float = 1.0, seed: int = 42, train_split: str = 'train', valid_split: Optional[str] = None, text_column: str = 'text', target_column: str = 'target', logging_steps: int = -1, project_name: str = 'project-name', auto_find_batch_size: bool = False, mixed_precision: Optional[str] = None, save_total_limit: int = 1, token: Optional[str] = None, push_to_hub: bool = False, eval_strategy: str = 'epoch', username: Optional[str] = None, log: str = 'none', early_stopping_patience: int = 5, early_stopping_threshold: float = 0.01 )

Parameters

  • data_path (str) — Path to the dataset.
  • model (str) — Name of the model to use. Default is “bert-base-uncased”.
  • lr (float) — Learning rate. Default is 5e-5.
  • epochs (int) — Number of training epochs. Default is 3.
  • max_seq_length (int) — Maximum sequence length. Default is 128.
  • batch_size (int) — Training batch size. Default is 8.
  • warmup_ratio (float) — Warmup proportion. Default is 0.1.
  • gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1.
  • optimizer (str) — Optimizer to use. Default is “adamw_torch”.
  • scheduler (str) — Scheduler to use. Default is “linear”.
  • weight_decay (float) — Weight decay. Default is 0.0.
  • max_grad_norm (float) — Maximum gradient norm. Default is 1.0.
  • seed (int) — Random seed. Default is 42.
  • train_split (str) — Name of the training split. Default is “train”.
  • valid_split (Optional[str]) — Name of the validation split. Default is None.
  • text_column (str) — Name of the text column in the dataset. Default is “text”.
  • target_column (str) — Name of the target column in the dataset. Default is “target”.
  • logging_steps (int) — Number of steps between logging. Default is -1.
  • project_name (str) — Name of the project. Default is “project-name”.
  • auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False.
  • mixed_precision (Optional[str]) — Mixed precision setting (fp16, bf16, or None). Default is None.
  • save_total_limit (int) — Total number of checkpoints to save. Default is 1.
  • token (Optional[str]) — Hub token for authentication. Default is None.
  • push_to_hub (bool) — Whether to push the model to the hub. Default is False.
  • eval_strategy (str) — Evaluation strategy. Default is “epoch”.
  • username (Optional[str]) — Hugging Face username. Default is None.
  • log (str) — Logging method for experiment tracking. Default is “none”.
  • early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5.
  • early_stopping_threshold (float) — Threshold for measuring the new optimum to continue training. Default is 0.01.

TextClassificationParams is a configuration class for text classification training parameters.
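A configuration can be built directly in Python. Below is a minimal sketch that uses only the parameters documented above; the data path and project name are placeholders to adjust for your setup:

    from autotrain.trainers.text_classification.params import TextClassificationParams

    # Placeholder data path and project name; adjust to your setup.
    params = TextClassificationParams(
        data_path="data/",            # dataset containing a 'train' split
        model="bert-base-uncased",
        lr=5e-5,
        epochs=3,
        max_seq_length=128,
        batch_size=8,
        text_column="text",           # column holding the input text
        target_column="target",       # column holding the labels
        valid_split=None,             # e.g. "validation" if you have one
        project_name="my-text-clf",
    )

The same parameters are also exposed as flags on the autotrain command line: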

--batch-size BATCH_SIZE
                    Training batch size to use
--seed SEED           Random seed for reproducibility
--epochs EPOCHS       Number of training epochs
--gradient_accumulation GRADIENT_ACCUMULATION
                    Gradient accumulation steps
--disable_gradient_checkpointing
                    Disable gradient checkpointing
--lr LR               Learning rate
--log {none,wandb,tensorboard}
                    Select an experiment tracking backend: 'none', 'wandb', or 'tensorboard'. Default is 'none' (no tracking).
--text-column TEXT_COLUMN
                    Specify the column name in the dataset that contains the text data. Useful for distinguishing between multiple text fields.
                    Default is 'text'.
--target-column TARGET_COLUMN
                    Specify the column name that holds the target or label data for training. Useful when the dataset contains more than one
                    candidate label column. Default is 'target'.
--max-seq-length MAX_SEQ_LENGTH
                    Set the maximum sequence length (number of tokens) that the model should handle in a single input. Longer sequences are
                    truncated. Affects both memory usage and computational requirements. Default is 128 tokens.
--warmup-ratio WARMUP_RATIO
                    Define the proportion of training to be dedicated to a linear warmup where learning rate gradually increases. This can help
                    in stabilizing the training process early on. Default ratio is 0.1.
--optimizer OPTIMIZER
                    Choose the optimizer algorithm for training the model. Different optimizers can affect the training speed and model
                    performance. 'adamw_torch' is used by default.
--scheduler SCHEDULER
                    Select the learning rate scheduler to adjust the learning rate based on the number of epochs. 'linear' decreases the
                    learning rate linearly from the initial lr set. Default is 'linear'. Try 'cosine' for a cosine annealing schedule.
--weight-decay WEIGHT_DECAY
                    Set the weight decay rate to apply for regularization. Helps in preventing the model from overfitting by penalizing large
                    weights. Default is 0.0, meaning no weight decay is applied.
--max-grad-norm MAX_GRAD_NORM
                    Specify the maximum norm of the gradients for gradient clipping. Gradient clipping is used to prevent the exploding gradient
                    problem in deep neural networks. Default is 1.0.
--logging-steps LOGGING_STEPS
                    Determine how often to log training progress. Set this to the number of steps between each log output. -1 determines logging
                    steps automatically. Default is -1.
--eval-strategy {steps,epoch,no}
                    Specify how often to evaluate model performance. Options are 'no', 'steps', and 'epoch'. The default, 'epoch', evaluates at
                    the end of each training epoch.
--save-total-limit SAVE_TOTAL_LIMIT
                    Limit the total number of model checkpoints to save. Helps manage disk space by retaining only the most recent checkpoints.
                    Default is to save only the latest one.
--auto-find-batch-size
                    Enable automatic batch size determination based on your hardware capabilities. When set, it tries to find the largest batch
                    size that fits in memory.
--mixed-precision {fp16,bf16,None}
                    Choose the precision mode for training to optimize performance and memory usage. Options are 'fp16', 'bf16', or None for
                    default precision. Default is None.
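
As a sketch, a full command might look like the following. Note that the 'autotrain text-classification' subcommand and the '--train', '--model', '--data-path', and '--project-name' flags are assumed from the parameter list above; they do not appear in the excerpt of flags shown here.

    autotrain text-classification \
        --train \
        --model bert-base-uncased \
        --data-path data/ \
        --text-column text \
        --target-column target \
        --lr 5e-5 \
        --epochs 3 \
        --batch-size 8 \
        --max-seq-length 128 \
        --project-name my-text-clf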