ConTEB evaluation datasets: the evaluation datasets of the ConTEB benchmark. Use the "test" split where available, otherwise "validation", otherwise "train" (see the loading sketch after this list).
- illuin-conteb/covid-qa
- illuin-conteb/geography
- illuin-conteb/esg-reports
- illuin-conteb/insurance
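The split-preference rule above can be applied directly with the Hugging Face `datasets` library. A minimal sketch, assuming the default dataset configuration is the one to load; the `load_conteb_eval` helper is ours for illustration, not part of the benchmark tooling:

```python
from datasets import get_dataset_split_names, load_dataset

def load_conteb_eval(dataset_id: str):
    """Load a ConTEB evaluation dataset, preferring test > validation > train."""
    available = get_dataset_split_names(dataset_id)
    for split in ("test", "validation", "train"):
        if split in available:
            return load_dataset(dataset_id, split=split)
    raise ValueError(f"No test/validation/train split in {dataset_id}: {available}")

# Example: the COVID-QA evaluation set listed above.
covid_qa = load_conteb_eval("illuin-conteb/covid-qa")
print(covid_qa)
```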
ConTEB training datasets: training data for the InSeNT method.
- illuin-conteb/narrative-qa
- illuin-conteb/squad-conteb-train
- illuin-conteb/mldr-conteb-train
ConTEB models: our models trained with the InSeNT approach. These are the checkpoints used to run the evaluations reported in our paper (a hedged loading sketch follows the list).
- illuin-conteb/modern-colbert-insent (feature extraction, 0.1B parameters)
- illuin-conteb/modernbert-large-insent (sentence similarity, 0.4B parameters)
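A minimal usage sketch for the bi-encoder checkpoint, assuming it loads through `sentence-transformers` (not verified here; if it does not, load it with `transformers` and pool manually). The example query and passage strings are illustrative only. The ColBERT-style checkpoint (illuin-conteb/modern-colbert-insent) is a multi-vector model and needs a late-interaction retrieval stack rather than single-vector cosine similarity.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: the checkpoint ships a sentence-transformers compatible config.
model = SentenceTransformer("illuin-conteb/modernbert-large-insent")

query = "What does the policy say about water damage?"
passage = "Section 4.2: water damage is covered, flooding is excluded."

# Encode both texts and score them with cosine similarity.
embeddings = model.encode([query, passage])
print(util.cos_sim(embeddings[0], embeddings[1]))
```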