ptrdvn committed
Commit c2fc322 · 1 parent: 26a395b

Update README.md
Files changed (1):
  1. README.md (+1, -1)
README.md CHANGED
@@ -15,7 +15,7 @@ We trained on equal samples of the following three datasets:
  * [TyDiQA (Ja)](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
  * [XLSUM (Ja)](https://huggingface.co/datasets/csebuetnlp/xlsum)
 
- which resulted in a dataset of 13167 samples total.
+ which resulted in a dataset of 13,167 samples total.
 
  These three datasets were chosen as they represent three distinct fine-tuning tasks (Text simplification, question answering, and text summarization, respectively) which we hypothesize can help to improve the language models suitability for dealing with Japanese data.
  These three datasets make up the model name: STX.
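
For context, the README describes building the fine-tuning set from equal samples of the three datasets, 13,167 examples in total (4,389 per dataset if split evenly). Below is a minimal sketch of how such an equal-sample mix might be assembled with the `datasets` library; the per-dataset count and the `japanese` config names are assumptions, and the text-simplification dataset (the "S" in STX) is not shown in this hunk, so it is left as a placeholder.

```python
from datasets import load_dataset, concatenate_datasets

# Assumption: 13,167 total samples split evenly across the three datasets.
SAMPLES_PER_DATASET = 4389

def take_equal_sample(dataset, n=SAMPLES_PER_DATASET, seed=42):
    """Shuffle and keep the first n examples so each source contributes equally."""
    return dataset.shuffle(seed=seed).select(range(min(n, len(dataset))))

# Question answering: TyDiQA GoldP (Japanese config name assumed to be "japanese").
tydiqa_ja = load_dataset("khalidalt/tydiqa-goldp", "japanese", split="train")

# Summarization: XLSUM (Japanese config name assumed to be "japanese").
xlsum_ja = load_dataset("csebuetnlp/xlsum", "japanese", split="train")

# Text simplification: the "S" dataset is not named in this diff hunk; placeholder only.
# simplification_ja = load_dataset("<simplification-dataset>", split="train")

mixed = concatenate_datasets([
    take_equal_sample(tydiqa_ja),
    take_equal_sample(xlsum_ja),
    # take_equal_sample(simplification_ja),
])
print(len(mixed))
```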