---
license: mit
---

Fine-tuned Twitter XLM-RoBERTa base model from cardiffnlp ([model](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base)) on the French part of the XNLI dataset, using the XNLI fine-tuning script.

Hyperparameters:

- Learning rate: 5e-5
- Epochs: 2.0
- Batch size: 32
- Max sequence length: 128

Train runtime:

- 6462 s (107.7 min) on one NVIDIA GeForce RTX 3090

Results on French XNLI:

- Eval set: 77.75
- Test set: 77.72
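
As a usage sketch (not part of the original card), the snippet below shows how an NLI checkpoint like this one can be queried with the `transformers` library. The repository ID `your-username/twitter-xlm-roberta-base-xnli-fr` is a placeholder for this model's actual Hub ID, and the example sentences are illustrative; the label mapping is read from the model config rather than assumed.

```python
# Minimal inference sketch for a French NLI (XNLI) checkpoint.
# Assumption: "your-username/twitter-xlm-roberta-base-xnli-fr" is a placeholder
# for this model's actual Hub repository ID.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/twitter-xlm-roberta-base-xnli-fr"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Le film a reçu des critiques élogieuses."
hypothesis = "Le film a été bien accueilli."

# XNLI is a sentence-pair task: encode premise and hypothesis together,
# truncating to the same max length used at fine-tuning time (128).
inputs = tokenizer(premise, hypothesis, return_tensors="pt",
                   truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # e.g. "entailment"
```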