Image classification is a form of supervised learning where a model is trained to identify and categorize objects within images. AutoTrain simplifies the process, enabling you to train a state-of-the-art image classification model by simply uploading labeled example images.
Image regression/scoring is a form of supervised learning where a model is trained to predict a score or value for an image. AutoTrain simplifies the process, enabling you to train a state-of-the-art image scoring model by simply uploading labeled example images.
To ensure your image classification model trains effectively, follow these guidelines for preparing your data:
Prepare a zip file containing your categorized images. Each category should have its own subfolder named after the class it represents. For example, to differentiate between ‘cats’ and ‘dogs’, your zip file structure should resemble the following:
```
cats_and_dogs.zip
├── cats
│   ├── cat.1.jpg
│   ├── cat.2.jpg
│   ├── cat.3.jpg
│   └── ...
└── dogs
    ├── dog.1.jpg
    ├── dog.2.jpg
    ├── dog.3.jpg
    └── ...
```
You can also use a dataset from the Hugging Face Hub, for example truepositive/hotdog_nothotdog.
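The folder-per-class layout above can also be produced programmatically. Here is a minimal sketch using only the Python standard library; the folder names and dummy image files are placeholders for your own data:

```python
import zipfile
from pathlib import Path

def zip_class_folders(root: Path, out_zip: Path) -> None:
    """Zip every class subfolder under `root`, preserving the
    class_name/image.jpg layout that the zip format expects."""
    with zipfile.ZipFile(out_zip, "w") as zf:
        for img in sorted(root.rglob("*")):
            if img.is_file():
                # arcname keeps paths relative, e.g. "cats/cat.1.jpg"
                zf.write(img, arcname=img.relative_to(root))

# Build a small placeholder dataset, then zip it.
root = Path("cats_and_dogs")
for cls in ("cats", "dogs"):
    (root / cls).mkdir(parents=True, exist_ok=True)
    for i in range(1, 4):
        # Empty stand-ins; use real JPEG/PNG files in practice.
        (root / cls / f"{cls[:-1]}.{i}.jpg").write_bytes(b"")

zip_class_folders(root, Path("cats_and_dogs.zip"))
```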
Prepare a zip file containing your images and metadata.jsonl.
```
Archive.zip
├── 0001.png
├── 0002.png
├── 0003.png
├── .
├── .
├── .
└── metadata.jsonl
```
Example for `metadata.jsonl`:

```
{"file_name": "0001.png", "target": 0.5}
{"file_name": "0002.png", "target": 0.7}
{"file_name": "0003.png", "target": 0.3}
```
Please note that `metadata.jsonl` should contain the `file_name` and the `target` value for each image.
You can also use a dataset from the Hugging Face Hub, for example abhishek/img-quality-full.
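Writing `metadata.jsonl` is straightforward with the standard `json` module. A minimal sketch, where the filenames and scores are placeholders for your own data:

```python
import json

# Placeholder scores keyed by image file name.
scores = {"0001.png": 0.5, "0002.png": 0.7, "0003.png": 0.3}

with open("metadata.jsonl", "w") as f:
    for file_name, target in scores.items():
        # One JSON object per line, as the JSON Lines format requires.
        f.write(json.dumps({"file_name": file_name, "target": target}) + "\n")
```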
- Format: Ensure all images are in JPEG, JPG, or PNG format.
- Quantity: Include at least 5 images per class to provide the model with sufficient examples for learning.
- Exclusivity: The zip file should exclusively contain folders named after the classes, and these folders should only contain relevant images. No additional files or nested folders should be included.
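These requirements can be checked before uploading. The following is a minimal validation sketch using only the standard library; the allowed extensions and the 5-image minimum simply mirror the guidelines above:

```python
from pathlib import Path

ALLOWED = {".jpg", ".jpeg", ".png"}
MIN_IMAGES_PER_CLASS = 5

def validate_dataset(root: Path) -> list:
    """Return a list of problems found in a folder-per-class dataset."""
    problems = []
    for entry in sorted(root.iterdir()):
        if entry.is_file():
            # Top level must contain only class folders.
            problems.append(f"unexpected file at top level: {entry.name}")
            continue
        if any(p.is_dir() for p in entry.iterdir()):
            problems.append(f"{entry.name}: nested folders are not allowed")
        images = [p for p in entry.iterdir() if p.is_file()]
        if any(p.suffix.lower() not in ALLOWED for p in images):
            problems.append(f"{entry.name}: non JPEG/JPG/PNG file present")
        if len(images) < MIN_IMAGES_PER_CLASS:
            problems.append(
                f"{entry.name}: only {len(images)} images "
                f"(need at least {MIN_IMAGES_PER_CLASS})"
            )
    return problems
```

An empty result means the folder structure satisfies the guidelines; each string in the list describes one violation.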
Additional Tips
- Uniformity: While not required, having images of similar sizes and resolutions can help improve model performance.
- Variability: Include a variety of images for each class to encompass the range of appearances and contexts the model might encounter in real-world scenarios.
Some points to keep in mind:
When train.zip is decompressed, it creates two folders: cats and dogs. These are the two categories for classification. The images for each category are in their respective folders. You can have as many categories as you want.
For image classification, if you are using a zip dataset format, the column mapping should be left at the default and should not be changed:

```yaml
data:
  ...
  column_mapping:
    image_column: image
    target_column: label
```
For image regression, the column mapping must be as follows:

```yaml
data:
  ...
  column_mapping:
    image_column: image
    target_column: target
```
For image regression, `metadata.jsonl` should contain the `file_name` and the `target` value for each image.
If you are using a dataset from the Hugging Face Hub, you should set appropriate column mappings based on the dataset.
To train the model locally, create a configuration file (config.yaml) with the following content:
```yaml
task: image_classification
base_model: google/vit-base-patch16-224
project_name: autotrain-cats-vs-dogs-finetuned
log: tensorboard
backend: local

data:
  path: cats_vs_dogs
  train_split: train
  valid_split: null
  column_mapping:
    image_column: image
    target_column: label

params:
  epochs: 2
  batch_size: 4
  lr: 2e-5
  optimizer: adamw_torch
  scheduler: linear
  gradient_accumulation: 1
  mixed_precision: fp16

hub:
  username: ${HF_USERNAME}
  token: ${HF_TOKEN}
  push_to_hub: true
```
Here, we are using the `cats_and_dogs` dataset from the Hugging Face Hub. The model is trained for 2 epochs with a batch size of 4 and a learning rate of `2e-5`. We use the `adamw_torch` optimizer and the `linear` scheduler, along with mixed precision training and a gradient accumulation of 1.
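If you prefer to generate the configuration file from Python, the same structure can be emitted with PyYAML. This is a minimal sketch, assuming the `pyyaml` package is installed; the values simply mirror the configuration above:

```python
import yaml  # pip install pyyaml

config = {
    "task": "image_classification",
    "base_model": "google/vit-base-patch16-224",
    "project_name": "autotrain-cats-vs-dogs-finetuned",
    "log": "tensorboard",
    "backend": "local",
    "data": {
        "path": "cats_vs_dogs",
        "train_split": "train",
        "valid_split": None,
        "column_mapping": {"image_column": "image", "target_column": "label"},
    },
    "params": {
        "epochs": 2,
        "batch_size": 4,
        "lr": 2e-5,
        "optimizer": "adamw_torch",
        "scheduler": "linear",
        "gradient_accumulation": 1,
        "mixed_precision": "fp16",
    },
    "hub": {
        # Placeholders expanded from environment variables at training time.
        "username": "${HF_USERNAME}",
        "token": "${HF_TOKEN}",
        "push_to_hub": True,
    },
}

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```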
In order to use a local dataset, you can change the `data` section to:

```yaml
data:
  path: data/
  train_split: train # this folder inside data/ will be used for training, it contains the images in subfolders.
  valid_split: valid # this folder inside data/ will be used for validation, it contains the images in subfolders. can also be null.
  column_mapping:
    image_column: image
    target_column: label
```
Similarly, for image regression, you can use the following configuration file:
```yaml
task: image_regression
base_model: microsoft/resnet-50
project_name: autotrain-img-quality-resnet50
log: tensorboard
backend: local

data:
  path: abhishek/img-quality-full
  train_split: train
  valid_split: null
  column_mapping:
    image_column: image
    target_column: target

params:
  epochs: 10
  batch_size: 8
  lr: 2e-3
  optimizer: adamw_torch
  scheduler: cosine
  gradient_accumulation: 1
  mixed_precision: fp16

hub:
  username: ${HF_USERNAME}
  token: ${HF_TOKEN}
  push_to_hub: true
```
To train the model, run the following command:
```
$ autotrain --config config.yaml
```
This will start the training process and save the model to the Hugging Face Hub after training is complete. If you don't want to push the model to the Hub, set `push_to_hub` to `false` in the configuration file.
To train the model on Hugging Face Spaces, create a training space as described in the Quickstart section.
An example UI for training an image scoring model on Hugging Face Spaces is shown below:
In this example, we are training an image scoring model using the `microsoft/resnet-50` model on the `abhishek/img-quality-full` dataset. The model is trained for 3 epochs with a batch size of 8 and a learning rate of `5e-5`, using the `adamw_torch` optimizer and the `linear` scheduler, along with mixed precision training and a gradient accumulation of 1. Note how the column mapping has been changed: `target` now points to the `quality_mos` column in the dataset.
To train the model, click on the Start Training button. This will start the training process and save the model to the Hugging Face Hub after training is complete.
ImageClassificationParams is a configuration class for image classification training parameters.

```
ImageClassificationParams(
    data_path: str = None
    model: str = 'google/vit-base-patch16-224'
    username: Optional = None
    lr: float = 5e-05
    epochs: int = 3
    batch_size: int = 8
    warmup_ratio: float = 0.1
    gradient_accumulation: int = 1
    optimizer: str = 'adamw_torch'
    scheduler: str = 'linear'
    weight_decay: float = 0.0
    max_grad_norm: float = 1.0
    seed: int = 42
    train_split: str = 'train'
    valid_split: Optional = None
    logging_steps: int = -1
    project_name: str = 'project-name'
    auto_find_batch_size: bool = False
    mixed_precision: Optional = None
    save_total_limit: int = 1
    token: Optional = None
    push_to_hub: bool = False
    eval_strategy: str = 'epoch'
    image_column: str = 'image'
    target_column: str = 'target'
    log: str = 'none'
    early_stopping_patience: int = 5
    early_stopping_threshold: float = 0.01
)
```
ImageRegressionParams is a configuration class for image regression training parameters.

```
ImageRegressionParams(
    data_path: str = None
    model: str = 'google/vit-base-patch16-224'
    username: Optional = None
    lr: float = 5e-05
    epochs: int = 3
    batch_size: int = 8
    warmup_ratio: float = 0.1
    gradient_accumulation: int = 1
    optimizer: str = 'adamw_torch'
    scheduler: str = 'linear'
    weight_decay: float = 0.0
    max_grad_norm: float = 1.0
    seed: int = 42
    train_split: str = 'train'
    valid_split: Optional = None
    logging_steps: int = -1
    project_name: str = 'project-name'
    auto_find_batch_size: bool = False
    mixed_precision: Optional = None
    save_total_limit: int = 1
    token: Optional = None
    push_to_hub: bool = False
    eval_strategy: str = 'epoch'
    image_column: str = 'image'
    target_column: str = 'target'
    log: str = 'none'
    early_stopping_patience: int = 5
    early_stopping_threshold: float = 0.01
)
```