---
library_name: transformers
language:
- tr
license: apache-2.0
datasets:
- umarigan/turkiye_finance_qa
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-v0.1
---

# Mistral-7B-v0.1 Fine-Tuned with LoRA on Turkish Finance QA

A LoRA fine-tune of mistralai/Mistral-7B-v0.1 for question answering on Turkish financial texts.

### Model Description

This model is a fine-tuned version of Mistral 7B using LoRA (Low-Rank Adaptation). It was trained on the Turkish finance dataset "umarigan/turkiye_finance_qa" to better understand Turkish texts in the financial domain and to perform well on related tasks.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.

- **Developed by:** saribasmetehan (LinkedIn: @saribasmetehan)
- **Shared by:** saribasmetehan (LinkedIn: @saribasmetehan)
- **Model type:** Mistral 7B fine-tuned with LoRA
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** mistralai/Mistral-7B-v0.1
- **Fine-tuning steps and model usage:** GitHub @saribasmetehan

## Bias, Risks, and Limitations

**Bias**

- **Language Bias:** Since the model is trained only on Turkish data, it may not perform well in other languages.
- **Domain Bias:** As the model is trained on Turkish finance data, its performance may be lower in other domains (e.g., healthcare, technology).
- **Data Bias:** The dataset used was collected from specific sources within a certain time frame, so biases in the data may be reflected in the model's outputs.

**Risks**

- **Misinformation:** The model may generate incorrect information. It is important to verify the accuracy of the outputs.
- **Over-reliance:** Users should not overly rely on the model's outputs and should seek human review and approval when making critical decisions.
- **Ethical Concerns:** The model may raise ethical and privacy concerns when working with sensitive financial information.

**Limitations**

- **Limited Knowledge Base:** The model's knowledge base is limited to the training data and may not include the most recent information or events.
- **Performance in Complex Scenarios:** The model may not perform adequately in very complex financial scenarios or those requiring in-depth analysis.
- **Resource Intensive:** Using large models can require significant computational power and resources.


## Fine-Tuning Process

The fine-tuning notebook is available here:

https://github.com/saribasmetehan/Fine-tuning-Mistral-7B-using-LoRA-technique/blob/main/mistral_7b_turkish_finance.ipynb

## How to Get Started with the Model

A notebook demonstrating how to use the model is available here:

https://github.com/saribasmetehan/Fine-tuning-Mistral-7B-using-LoRA-technique/blob/main/practice_of_using_the_model.ipynb
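As a minimal sketch (separate from the notebook above), the model can be loaded by attaching the LoRA adapter to the base checkpoint with the `peft` library. The adapter repo id passed to `generate_answer` and the prompt template in `format_prompt` are assumptions for illustration, not confirmed details of this model:

```python
def format_prompt(question: str) -> str:
    """Wrap a question in a simple instruction template.
    The exact template used during fine-tuning is an assumption."""
    return f"### Soru:\n{question}\n\n### Cevap:\n"


def generate_answer(question: str, adapter_id: str,
                    base_id: str = "mistralai/Mistral-7B-v0.1") -> str:
    """Load the base model, apply the LoRA adapter, and generate an answer.

    `adapter_id` is the Hub id of the fine-tuned adapter (a placeholder here).
    Requires `transformers`, `peft`, and a GPU with enough memory for 7B weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel  # pip install peft

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    # Merge-free inference: the LoRA weights are applied on top of the frozen base.
    model = PeftModel.from_pretrained(model, adapter_id)

    inputs = tokenizer(format_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Calling `generate_answer("Enflasyon nedir?", "<adapter-repo-id>")` downloads the full base model, so it is not a lightweight operation.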

## Training Details
- Learning rate: 2e-5
- Per-device train batch size: 8
- Trainable params: 21,260,288
- All params: 3,773,331,456
- Trainable ratio: ≈ 0.5634%
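The trainable-parameter ratio above is simply the LoRA adapter size divided by the total parameter count; with LoRA only those ~21M adapter weights are updated while the base weights stay frozen:

```python
trainable_params = 21_260_288       # LoRA adapter parameters (from training logs)
all_params = 3_773_331_456          # total parameters reported during training

ratio_percent = 100 * trainable_params / all_params
print(f"{ratio_percent:.4f}%")  # prints 0.5634%
```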

### Training Data

umarigan/turkiye_finance_qa
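A sketch of loading the dataset with the 🤗 `datasets` library and turning a record into a training string. The column names used in `to_text` are assumptions about the dataset schema, not confirmed:

```python
DATASET_ID = "umarigan/turkiye_finance_qa"


def load_finance_qa(split: str = "train"):
    """Download the dataset from the Hugging Face Hub (network required)."""
    from datasets import load_dataset  # pip install datasets
    return load_dataset(DATASET_ID, split=split)


def to_text(example: dict) -> str:
    """Convert one QA record into a single training string.
    The field names 'soru'/'cevap' (question/answer) are assumed."""
    return f"### Soru:\n{example['soru']}\n\n### Cevap:\n{example['cevap']}"
```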




## Model Card Contact

[email protected]