Update README.md

tags:
- art
---

### Novaeus-Promptist-7B-Instruct Uploaded Model Files

The **Novaeus-Promptist-7B-Instruct** is a fine-tuned large language model derived from the **Qwen2.5-7B-Instruct** base model. It is optimized for **prompt enhancement, text generation**, and **instruction-following tasks**, providing high-quality outputs tailored to various applications.

| **File Name**                    | **Size** | **Description**                  | **Upload Status** |
|----------------------------------|----------|----------------------------------|-------------------|
| …                                | …        | …                                | …                 |
| `tokenizer_config.json`          | 7.73 kB  | Tokenizer configuration file.    | Uploaded          |
| `vocab.json`                     | 2.78 MB  | Vocabulary for the tokenizer.    | Uploaded          |

---

### **Key Features:**

1. **Prompt Refinement:**
   Enhances input prompts by rephrasing, clarifying, and optimizing them for more precise outcomes (a minimal usage sketch follows this list).

2. **Instruction Following:**
   Accurately follows complex user instructions for a variety of generation tasks, including creative writing, summarization, and question answering.

3. **Customization and Fine-Tuning:**
   Incorporates datasets specifically curated for prompt optimization, enabling seamless adaptation to specific user needs.
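
As a concrete illustration of prompt refinement, the sketch below loads the model with Hugging Face Transformers and asks it to rewrite a rough prompt. The repository id and the system-prompt wording are assumptions rather than values fixed by this card, so treat them as placeholders.

```python
# Prompt-refinement sketch; repo id and system prompt are assumptions, not fixed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Novaeus-Promptist-7B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "Rewrite the user's prompt so it is clearer, more specific, and better structured."},
    {"role": "user", "content": "write a story about a robot"},
]

# Qwen2.5-based checkpoints ship a chat template; apply it, then generate.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```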

---

### **Training Details:**

- **Base Model:** [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- **Datasets Used for Fine-Tuning** (a loading sketch follows this list):
  - **gokaygokay/prompt-enhancer-dataset:** Focuses on prompt engineering, with 17.9k samples.
  - **gokaygokay/prompt-enhancement-75k:** Covers a wider array of prompt styles, with 73.2k samples.
  - **prithivMLmods/Prompt-Enhancement-Mini:** A compact dataset (1.16k samples) for iterative refinement.
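
The datasets listed above are hosted on the Hugging Face Hub, so they can be inspected with the `datasets` library. A minimal sketch, assuming each repository exposes a default `train` split:

```python
# Inspect the fine-tuning datasets; the `train` split name is an assumption.
from datasets import load_dataset

for repo in (
    "gokaygokay/prompt-enhancer-dataset",
    "gokaygokay/prompt-enhancement-75k",
    "prithivMLmods/Prompt-Enhancement-Mini",
):
    ds = load_dataset(repo, split="train")
    print(f"{repo}: {len(ds)} samples, columns={ds.column_names}")
```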

---

### **Capabilities:**

- **Prompt Optimization:**
  Automatically refines and enhances user-input prompts for better generation results.

- **Instruction-Based Text Generation:**
  Supports diverse tasks, including:
  - Creative writing (stories, poems, scripts).
  - Summarization and paraphrasing.
  - Custom Q&A systems.

- **Efficient Fine-Tuning:**
  Adaptable to additional fine-tuning tasks by leveraging the model's existing high-quality instruction-following capabilities.

---

### **Usage Instructions:**

1. **Setup:**
   Ensure all necessary model files, including the weight shards, tokenizer files, and index file, are downloaded and placed in a single local directory (see the download sketch after this list).

2. **Load Model:**
   Use PyTorch or Hugging Face Transformers to load the model and tokenizer. Ensure `pytorch_model.bin.index.json` sits alongside the weight shards so they can be resolved and loaded shard by shard (see the loading sketch after this list).

3. **Customize Generation:**
   Adjust parameters in `generation_config.json` to control aspects such as temperature, top-p sampling, and maximum sequence length (see the generation sketch after this list).
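
For step 1, one way to pull every file (weight shards, index, tokenizer files) into a local folder is `huggingface_hub.snapshot_download`. The repository id below is an assumption, as this card does not state it; substitute the actual repository path.

```python
# Download all model files (weight shards, index, tokenizer files) into one folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="prithivMLmods/Novaeus-Promptist-7B-Instruct",  # assumed repo id; replace as needed
    local_dir="Novaeus-Promptist-7B-Instruct",
)
print("Model files downloaded to:", local_dir)
```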
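
For step 2, a minimal loading sketch with Transformers, assuming the files from step 1 are in a local `Novaeus-Promptist-7B-Instruct/` directory; `from_pretrained` reads `pytorch_model.bin.index.json` and pulls in the weight shards it references.

```python
# Load the tokenizer and sharded weights from the local directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "Novaeus-Promptist-7B-Instruct"  # directory prepared in step 1

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,  # or torch.float16, depending on hardware
    device_map="auto",           # place shards across available devices (requires accelerate)
)
```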
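
For step 3, sampling behaviour can be changed either by editing `generation_config.json` or by passing a `GenerationConfig` at call time. The values below are illustrative, not the configuration shipped with the model.

```python
# Override generation settings at call time; the values here are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_dir = "Novaeus-Promptist-7B-Instruct"  # local directory from steps 1-2
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", device_map="auto")

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,     # lower values give more deterministic output
    top_p=0.9,           # nucleus sampling cutoff
    max_new_tokens=256,  # cap on generated length
)

prompt = "Refine this prompt: write a story about a robot"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```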

---