Canstralian committed
Commit 82dcf77 · verified · 1 Parent(s): 464ed15

Add GitHub badges to README.md


This PR adds various GitHub badges to the README.md file to provide quick insights into the project's status.

Files changed (1): README.md (+60 −2)

README.md CHANGED
@@ -8,7 +8,65 @@ sdk_version: 5.14.0
 app_file: app.py
 pinned: false
 license: apache-2.0
-short_description: A user-friendly Gradio interface
+short_description: A Gradio interface
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+![Python Version](https://img.shields.io/badge/Python-3.10%2B-blue)
+![License](https://img.shields.io/badge/License-Apache%202.0-blue)
+![Last Commit](https://img.shields.io/github/last-commit/your-username/transformers-fine-tuner)
+![Issues](https://img.shields.io/github/issues/your-username/transformers-fine-tuner)
+![Pull Requests](https://img.shields.io/github/issues-pr/your-username/transformers-fine-tuner)
+![Contributors](https://img.shields.io/github/contributors/your-username/transformers-fine-tuner)
+
+
+# Transformers Fine Tuner
+
+🔥 **Transformers Fine Tuner** is a user-friendly Gradio interface that enables seamless fine-tuning of pre-trained transformer models on custom datasets. This tool facilitates efficient model adaptation for various NLP tasks, making it accessible for both beginners and experienced practitioners.
+
+## Features
+
+- **Easy Dataset Integration**: Load datasets via URLs or direct file uploads.
+- **Model Selection**: Choose from a variety of pre-trained transformer models.
+- **Customizable Training Parameters**: Adjust epochs, batch size, and learning rate to suit your needs.
+- **Real-time Monitoring**: Track training progress and performance metrics.
+
+## Getting Started
+
+1. **Clone the Repository**:
+   ```bash
+   git clone https://huggingface.co/spaces/your-username/transformers-fine-tuner
+   cd transformers-fine-tuner
+   ```
+
+2. **Install Dependencies**:
+   Ensure you have Python 3.10 or higher. Install the required packages:
+   ```bash
+   pip install -r requirements.txt
+   ```
+
+3. **Run the Application**:
+   ```bash
+   python app.py
+   ```
+   Access the interface at `http://localhost:7860/`.
+
+## Usage
+
+- **Model Name**: Enter the name of the pre-trained model you wish to fine-tune (e.g., `bert-base-uncased`).
+- **Dataset URL**: Provide a URL to your dataset.
+- **Upload Dataset**: Alternatively, upload a dataset file directly.
+- **Number of Epochs**: Set the number of training epochs.
+- **Learning Rate**: Specify the learning rate for training.
+- **Batch Size**: Define the batch size for training.
+
+After configuring the parameters, click **Submit** to start the fine-tuning process. Monitor the training progress and performance metrics in real-time.
+
+## License
+
+This project is licensed under the Apache-2.0 License. See the [LICENSE](LICENSE) file for more details.
+
+## Acknowledgments
+
+- [Hugging Face Transformers](https://huggingface.co/transformers/)
+- [Gradio](https://gradio.app/)
+- [Datasets](https://huggingface.co/docs/datasets/)
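
The Usage section added above maps six form fields (model name, dataset URL or upload, epochs, learning rate, batch size) onto a fine-tuning run. As a rough sketch of how an app like this might validate those inputs and translate them into `transformers.Trainer`-style argument names — the helper names `build_training_config` and `resolve_dataset_source`, and the choice to prefer an uploaded file over a URL, are assumptions for illustration, not code from this repository:

```python
def build_training_config(model_name, epochs, learning_rate, batch_size):
    """Validate form inputs and map them to Trainer-style argument names.

    Hypothetical helper: the real app.py is not shown in this diff.
    """
    if not model_name:
        raise ValueError("model_name is required, e.g. 'bert-base-uncased'")
    if epochs < 1:
        raise ValueError("epochs must be at least 1")
    if not (0.0 < learning_rate < 1.0):
        raise ValueError("learning_rate should be a small positive float, e.g. 5e-5")
    if batch_size < 1:
        raise ValueError("batch_size must be at least 1")
    # Keys mirror transformers.TrainingArguments field names.
    return {
        "model_name": model_name,
        "num_train_epochs": epochs,
        "learning_rate": learning_rate,
        "per_device_train_batch_size": batch_size,
    }


def resolve_dataset_source(dataset_url=None, uploaded_path=None):
    """Pick one dataset source from the two README options (URL or upload).

    Preferring the upload when both are given is an assumption.
    """
    if uploaded_path:
        return ("upload", uploaded_path)
    if dataset_url:
        return ("url", dataset_url)
    raise ValueError("Provide either a dataset URL or an uploaded file")
```

A config built this way could then be unpacked into `TrainingArguments` before constructing a `Trainer`; keeping the validation separate from the Gradio callback makes the error messages easy to surface in the interface.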