---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- Ken4070TiS/qubit_arXiv
---

This model was made with the following steps:
1. Use a web crawler to collect papers through the arXiv API (a crawler sketch is given after this list).
2. The search query is "qubit AND (IBM OR IQM OR Rigetti)", and the time range is 2018 - 2024.
3. The data was collected into JSON with the columns Title, Abstract, Authors, arXiv_id, Date, and Author_company.
4. Feed the JSON files to llama-3-8b-bnb-4bit and fine-tune the model with Unsloth on Google Colab, using an A100 GPU (see the fine-tuning sketch below).
5. That's it!  :)
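
Below is a minimal sketch of steps 1-3, assuming the `arxiv` Python package is used as the crawler. The original script, its paging settings, and how Author_company is assigned are not published, so the field names here simply mirror the columns listed above and the affiliation field is left as a placeholder.

```python
import json
import arxiv  # pip install arxiv

# Query and date range from the list above; the arXiv API has no direct
# date-range parameter here, so results are filtered after retrieval.
search = arxiv.Search(
    query="qubit AND (IBM OR IQM OR Rigetti)",
    max_results=2000,
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

records = []
for result in arxiv.Client().results(search):
    if 2018 <= result.published.year <= 2024:
        records.append({
            "Title": result.title,
            "Abstract": result.summary,
            "Authors": [a.name for a in result.authors],
            "arXiv_id": result.get_short_id(),
            "Date": result.published.strftime("%Y-%m-%d"),
            "Author_company": None,  # filled in separately (e.g. IBM / IQM / Rigetti)
        })

with open("qubit_arxiv.json", "w") as f:
    json.dump(records, f, indent=2)
```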


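And a sketch of step 4, following the standard Unsloth QLoRA Colab recipe with TRL's SFTTrainer. The LoRA rank, batch size, number of epochs, and the name of the text column are assumptions for illustration, not the exact settings used for this model.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit quantized base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r / alpha / target_modules follow the usual
# Unsloth notebook defaults, not necessarily what was used here.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The crawled paper records, published on the Hub as Ken4070TiS/qubit_arXiv.
dataset = load_dataset("Ken4070TiS/qubit_arXiv", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes records are rendered into a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```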

# Uploaded model

- **Developed by:** Ken4070TiS
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)