---
license: apache-2.0
language:
- en
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
tags:
- text-generation-inference
---
# Tiny Llama Project Guide

This repository provides a comprehensive guide for students and researchers experimenting with TinyLlama-1.1B-Chat-v1.0, an open-source language model developed by the TinyLlama organization. The goal is to make AI experimentation accessible without fees or personal-information requirements.
## Model Details

- **Model:** TinyLlama-1.1B-Chat-v1.0
- **Source:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0 on Hugging Face](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
- **Organization:** TinyLlama
- **Description:** A lightweight, efficient 1.1B-parameter model optimized for chat and text-generation tasks, suitable for low-resource environments such as a laptop with 16GB of RAM.
- **License:** Apache 2.0; refer to the model's official Hugging Face page for details.
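As a quick illustration of how the chat model expects its input, the sketch below formats a conversation with the Zephyr-style template that the TinyLlama-1.1B-Chat-v1.0 model card describes. This is a stdlib-only illustration of the prompt shape, not the authoritative path; in practice, prefer `tokenizer.apply_chat_template` from `transformers`, which reads the template shipped with the model.

```python
def build_prompt(messages):
    # Zephyr-style template (per the TinyLlama model card):
    # each turn is "<|role|>\n{content}</s>\n", and the prompt
    # ends with "<|assistant|>\n" to cue the model to reply.
    parts = [f"<|{m['role']}|>\n{m['content']}</s>\n" for m in messages]
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "What is TinyLlama?"},
])
print(prompt)
```

If the template ever drifts from this sketch, the tokenizer's own `apply_chat_template` output is the ground truth.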
## Resources

- **Code:** Scripts for downloading the model, fine-tuning it, and running a Flask-based chat UI.
- **Dataset:** A small JSON dataset for fine-tuning tests.
- **Loss plot:** Training-loss curve from fine-tuning (`loss_plot.png`).
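The sketch below shows the kind of small JSON dataset used for fine-tuning tests. The field names (`instruction`, `response`) and filename are assumptions for illustration; check the dataset file in this repository for the actual schema.

```python
import json

# Hypothetical instruction/response pairs; the repo's real dataset
# may use different field names or content.
examples = [
    {"instruction": "What is TinyLlama?",
     "response": "TinyLlama is a 1.1B-parameter open-source language model."},
    {"instruction": "How much RAM does it need?",
     "response": "It runs comfortably on a laptop with 16GB of RAM."},
]

with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(examples, f, ensure_ascii=False, indent=2)

# Reload and sanity-check the structure before training.
with open("dataset.json", encoding="utf-8") as f:
    loaded = json.load(f)
assert all({"instruction", "response"} <= set(e) for e in loaded)
print(f"{len(loaded)} examples ready")
```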
## Usage

This repository provides:

- A Flask app for local inference with a user-friendly chat interface.
- Fine-tuning scripts using LoRA for efficient training.
- Detailed setup instructions in `document.txt`.

**Note:** Model weights are not included in this repository. Users must download them from the official Hugging Face repository using their access token.
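A minimal sketch of the shape such a Flask chat endpoint can take, assuming Flask is installed. `generate_reply` here is a hypothetical placeholder for the model call; the repository's actual app wires in TinyLlama inference instead.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(message: str) -> str:
    # Placeholder: the real app would run tokenizer/model.generate here.
    return f"(model reply to: {message})"

@app.route("/chat", methods=["POST"])
def chat():
    # Accept {"message": "..."} and return {"reply": "..."}.
    user_message = request.get_json(force=True).get("message", "")
    return jsonify({"reply": generate_reply(user_message)})

# To serve the chat UI locally: app.run(port=5000)
```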
## Attribution

This project uses the TinyLlama-1.1B-Chat-v1.0 model by the TinyLlama organization. All credit for the model goes to the original authors. For more details, visit the [TinyLlama Hugging Face page](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).