---
title: Transformers Fine Tuner
emoji: 🔥
colorFrom: indigo
colorTo: blue
sdk: gradio
sdk_version: 5.14.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: A Gradio interface for fine-tuning transformer models
---
# Transformers Fine Tuner
🔥 Transformers Fine Tuner is a user-friendly Gradio interface that enables seamless fine-tuning of pre-trained transformer models on custom datasets. This tool facilitates efficient model adaptation for various NLP tasks, making it accessible for both beginners and experienced practitioners.
## Features
- Easy Dataset Integration: Load datasets via URLs or direct file uploads.
- Model Selection: Choose from a variety of pre-trained transformer models.
- Customizable Training Parameters: Adjust epochs, batch size, and learning rate to suit your needs.
- Real-time Monitoring: Track training progress and performance metrics.
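For context, the sketch below shows one way such an interface could be wired up in Gradio. The `fine_tune` function body, component labels, and default values are illustrative assumptions, not the actual contents of `app.py`:

```python
import gradio as gr

def fine_tune(model_name, dataset_url, dataset_file, epochs, learning_rate, batch_size):
    # Hypothetical training entry point; the real app.py defines its own.
    # It would load the model and dataset, run training, and return metrics.
    return f"Fine-tuning {model_name} for {epochs} epoch(s)..."

demo = gr.Interface(
    fn=fine_tune,
    inputs=[
        gr.Textbox(label="Model Name", value="bert-base-uncased"),
        gr.Textbox(label="Dataset URL"),
        gr.File(label="Upload Dataset"),
        gr.Number(label="Number of Epochs", value=3, precision=0),
        gr.Number(label="Learning Rate", value=2e-5),
        gr.Number(label="Batch Size", value=16, precision=0),
    ],
    outputs=gr.Textbox(label="Status"),
    title="Transformers Fine Tuner",
)

if __name__ == "__main__":
    demo.launch()  # serves on http://localhost:7860 by default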
## Getting Started
Clone the repository:

```bash
git clone https://huggingface.co/spaces/your-username/transformers-fine-tuner
cd transformers-fine-tuner
```
Install dependencies (requires Python 3.10 or higher):

```bash
pip install -r requirements.txt
```
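The authoritative dependency list is the repository's `requirements.txt`; as a rough guide, an app like this typically needs at least the following (illustrative only, not the actual file contents):

```text
gradio>=5.14.0
transformers
datasets
torch
```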
Run the application:

```bash
python app.py
```

Then access the interface at http://localhost:7860/.
## Usage
- Model Name: Enter the name of the pre-trained model you wish to fine-tune (e.g., `bert-base-uncased`).
- Dataset URL: Provide a URL to your dataset.
- Upload Dataset: Alternatively, upload a dataset file directly.
- Number of Epochs: Set the number of training epochs.
- Learning Rate: Specify the learning rate for training.
- Batch Size: Define the batch size for training.
After configuring the parameters, click Submit to start the fine-tuning process. Training progress and performance metrics can be monitored in real time.
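These parameters map naturally onto Hugging Face `TrainingArguments`. The sketch below shows one plausible training routine, assuming a CSV dataset with `text` and `label` columns; the actual implementation in `app.py` may differ:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

def fine_tune(model_name, dataset_url, epochs, learning_rate, batch_size):
    # Assumes a CSV with "text" and "label" columns; the real app may accept other formats.
    dataset = load_dataset("csv", data_files=dataset_url, split="train")

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    dataset = dataset.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="./results",
        num_train_epochs=epochs,                  # Number of Epochs
        learning_rate=learning_rate,              # Learning Rate
        per_device_train_batch_size=batch_size,   # Batch Size
        logging_steps=10,  # log metrics frequently for live monitoring
    )

    trainer = Trainer(model=model, args=args, train_dataset=dataset)
    return trainer.train()  # returns a TrainOutput with training metrics
```

Keeping the Gradio callback thin and delegating the actual training loop to `Trainer` means the UI parameters translate directly into `TrainingArguments` fields, with logging output feeding the real-time progress display.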
## License
This project is licensed under the Apache-2.0 License. See the LICENSE file for more details.