mofawzy committed
Commit fa10c2d · 1 Parent(s): 23a37c4

add more information in model card

Files changed (1)
  1. README.md +7 -9
README.md CHANGED
@@ -13,8 +13,6 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  # argpt2-goodreads

@@ -24,16 +22,15 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed
+ Generates positive and negative example sentences based on the Goodreads corpus in Arabic.

  ## Intended uses & limitations

- More information needed
+ The model was fine-tuned on Arabic only to generate sentences such as reviews; to do the same for other languages, you need to fine-tune it on your own data.
+ Any harmful content generated by GPT-2 should not be used anywhere.

  ## Training and evaluation data
-
- More information needed
-
+ Training and validation were done on the Goodreads LABR dataset, with 80% used for training and 20% for testing.
  ## Training procedure

  ### Training hyperparameters
@@ -53,11 +50,12 @@ The following hyperparameters were used during training:

  ### Training results
  - train_loss = 1.474
+
  ### Evaluation results
  - eval_loss = 1.4389


- ***** train metrics *****
+ ### Train metrics
  - epoch = 20.0
  - train_loss = 1.474
  - train_runtime = 2:18:14.51
@@ -67,7 +65,7 @@ The following hyperparameters were used during training:



- ***** eval metrics *****
+ ### Eval metrics
  - epoch = 20.0
  - eval_loss = 1.4389
  - eval_runtime = 0:04:37.01
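
The updated model description says the model generates Arabic review-style sentences from the Goodreads corpus. A minimal usage sketch with the transformers text-generation pipeline follows; the repo ID `mofawzy/argpt2-goodreads` is assumed from the card title, and the prompt and sampling settings are illustrative only.

```python
# Minimal sketch (not part of the commit): generate Arabic review-style text
# with the fine-tuned GPT-2 model via the transformers pipeline.
# The repo ID "mofawzy/argpt2-goodreads" is assumed from the card title.
from transformers import pipeline

generator = pipeline("text-generation", model="mofawzy/argpt2-goodreads")

# Arabic prompt (roughly "this novel") used to seed a review-like sentence.
prompt = "هذه الرواية"
outputs = generator(
    prompt,
    max_length=60,           # cap on total generated length
    num_return_sequences=3,  # return three candidate continuations
    do_sample=True,          # sample instead of greedy decoding
    top_p=0.95,
)

for out in outputs:
    print(out["generated_text"])
```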
 
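
The training-data line describes an 80%/20% train/test split of the LABR Goodreads reviews. The sketch below shows one way such a split could be done; it is an assumption rather than the author's actual preprocessing, and `labr_reviews.txt` is a hypothetical file with one review per line.

```python
# Hypothetical sketch of the 80/20 split described in the card; not the
# author's script. "labr_reviews.txt" is an assumed file of LABR reviews.
from sklearn.model_selection import train_test_split

with open("labr_reviews.txt", encoding="utf-8") as f:
    reviews = [line.strip() for line in f if line.strip()]

# 80% for training, 20% held out for evaluation, as stated in the card.
train_texts, eval_texts = train_test_split(reviews, test_size=0.2, random_state=42)
print(f"train: {len(train_texts)}, eval: {len(eval_texts)}")
```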