AlberBshara committed on Commit 968f3b3 · verified · Parent: c018cd0

Update README.md

Files changed (1): README.md (+8, −27)
README.md CHANGED

@@ -33,23 +33,16 @@ contextually relevant text in Arabic, thus expanding its multilingual capabiliti
 - Llama3.1_8k
 - context window 128k
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
+- **Developed by:** [Alber Bshara]
 - **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Language(s) (NLP):** [Arabic (Ar), English (En)]
+- **License:** [NeptoneAI]
+- **Finetuned from model:** [Fine-tuned from LLaMA3.1_8k model]
 
 ### Model Sources [optional]
 
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
+- **Core Model:** [https://ai.meta.com/blog/meta-llama-3-1/]
 
 ## Uses
 
@@ -83,29 +76,17 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
 ## How to Get Started with the Model
 
-Use the code below to get started with the model.
+- To use this model, please scroll to the bottom of this page to see instance usage examples.
 
-[More Information Needed]
 
 ## Training Details
 
-### Training Data
-
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
-[More Information Needed]
-
-### Training Procedure
-
-<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
-#### Preprocessing [optional]
+### Training Data
 
-[More Information Needed]
+https://huggingface.co/M-A-D#:~:text=The%20Mixed%20Arabic%20Datasets%20(MAD,language%20datasets%20across%20the%20Internet.
 
 #### Training Hyperparameters
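
The updated card describes a Llama 3.1-derived chat model for Arabic and English and points readers elsewhere for usage examples. As a minimal sketch only (not the author's code, and the `build_llama31_prompt` helper is hypothetical), this is what a single-turn prompt looks like under the standard Llama 3.1 chat-template special tokens; in practice `AutoTokenizer.apply_chat_template` produces this formatting for you:

```python
# Sketch: hand-building a Llama 3.1-style chat prompt.
# Assumption: the model uses the standard Llama 3.1 special tokens
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>); this helper is
# illustrative, not part of the model repository.

def build_llama31_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation using Llama 3.1 chat tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header tells the model to generate the reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    prompt = build_llama31_prompt(
        "You answer in Arabic.",
        "ما هي عاصمة الأردن؟",  # "What is the capital of Jordan?"
    )
    print(prompt)
```

With a `transformers` tokenizer loaded from the model repository, the equivalent would be `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)`.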