---
title: Transformers Fine Tuner
emoji: 🔥
colorFrom: indigo
colorTo: blue
sdk: gradio
sdk_version: 5.14.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: A Gradio interface
---

# Transformers Fine Tuner

🔥 **Transformers Fine Tuner** is a user-friendly Gradio interface for fine-tuning pre-trained transformer models on custom datasets. It streamlines model adaptation for a range of NLP tasks, making the process accessible to beginners and experienced practitioners alike.

## Features

- **Easy dataset integration:** Load datasets via URLs or direct file uploads.
- **Model selection:** Choose from a variety of pre-trained transformer models.
- **Customizable training parameters:** Adjust epochs, batch size, and learning rate to suit your needs.
- **Real-time monitoring:** Track training progress and performance metrics.
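The actual loading logic lives in `app.py`; as a rough sketch of the first feature, a hypothetical helper might route a dataset URL or uploaded file to the right loader by file extension (the function name and extension map below are illustrative assumptions, not the app's actual code):

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# Hypothetical mapping from file extensions to the loader names
# used by the Hugging Face `datasets` library.
_EXTENSION_TO_FORMAT = {
    ".csv": "csv",
    ".tsv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".txt": "text",
    ".parquet": "parquet",
}

def infer_dataset_format(path_or_url: str) -> str:
    """Guess the dataset format from a local path or URL suffix.

    Raises ValueError for extensions this sketch does not recognize.
    """
    # Works for both URLs and bare filenames: urlparse leaves a plain
    # path untouched, and PurePosixPath extracts its suffix.
    suffix = PurePosixPath(urlparse(path_or_url).path).suffix.lower()
    try:
        return _EXTENSION_TO_FORMAT[suffix]
    except KeyError:
        raise ValueError(f"Unsupported dataset format: {suffix or 'none'}")
```

The returned format string could then be passed to something like `datasets.load_dataset(fmt, data_files=path_or_url)`.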

## Getting Started

1. **Clone the repository:**

   ```bash
   git clone https://huggingface.co/spaces/your-username/transformers-fine-tuner
   cd transformers-fine-tuner
   ```

2. **Install dependencies:** Ensure you have Python 3.10 or higher, then install the required packages:

   ```bash
   pip install -r requirements.txt
   ```

3. **Run the application:**

   ```bash
   python app.py
   ```

   Access the interface at http://localhost:7860/.

## Usage

- **Model Name:** Enter the name of the pre-trained model you wish to fine-tune (e.g., `bert-base-uncased`).
- **Dataset URL:** Provide a URL to your dataset.
- **Upload Dataset:** Alternatively, upload a dataset file directly.
- **Number of Epochs:** Set the number of training epochs.
- **Learning Rate:** Specify the learning rate for training.
- **Batch Size:** Define the batch size for training.

After configuring the parameters, click **Submit** to start fine-tuning, then monitor training progress and performance metrics in real time.
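Before training starts, the submitted parameters need sanity checking. As an illustrative (hypothetical) sketch of the kind of validation the app might perform — the class name, field names, and bounds are assumptions, not the app's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FineTuneConfig:
    """Bundle of the UI parameters listed above (hypothetical names)."""
    model_name: str
    num_epochs: int
    learning_rate: float
    batch_size: int

def validate_config(cfg: FineTuneConfig) -> FineTuneConfig:
    """Reject obviously invalid settings before training begins."""
    if not cfg.model_name.strip():
        raise ValueError("Model name must not be empty.")
    if cfg.num_epochs < 1:
        raise ValueError("Number of epochs must be at least 1.")
    if not 0.0 < cfg.learning_rate < 1.0:
        raise ValueError("Learning rate should be between 0 and 1.")
    if cfg.batch_size < 1:
        raise ValueError("Batch size must be at least 1.")
    return cfg
```

A validated config would typically be mapped onto the training loop's arguments (epochs, per-device batch size, learning rate) before launching the run.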

## License

This project is licensed under the Apache-2.0 License. See the `LICENSE` file for details.

## Acknowledgments