---
language:
- vi
license: apache-2.0
library_name: transformers
tags:
- cross-encoder
- rerank
datasets:
- unicamp-dl/mmarco
widget:
- text: Trường UIT là gì ?
output:
- label: >-
Trường Đại_học Công_nghệ Thông_tin có tên tiếng Anh là University of
Information_Technology ( viết tắt là UIT ) là thành_viên của Đại_học
Quốc_Gia TP. HCM.
score: 0.9819
- label: >-
Trường Đại_học Kinh_tế – Luật ( tiếng Anh : University of Economics and
Law – UEL ) là trường đại_học đào_tạo và nghiên_cứu khối ngành kinh_tế ,
kinh_doanh và luật hàng_đầu Việt_Nam .
score: 0.2444
- label: >-
Quĩ_uỷ_thác đầu_tư ( tiếng Anh : Unit Investment_Trusts ; viết tắt : UIT )
là một công_ty đầu_tư mua hoặc nắm giữ một danh_mục đầu_tư cố_định
score: 0.9253
pipeline_tag: text-classification
---
#### Table of contents
1. [Installation](#installation)
2. [Pre-processing](#pre-processing)
3. [Usage with `sentence-transformers`](#usage-with-sentence-transformers)
4. [Usage with `transformers`](#usage-with-transformers)
5. [Performance](#performance)
6. [Support me](#support-me)
7. [Citation](#citation)
## Installation
- Install `py_vncorenlp`, the Python wrapper for `VnCoreNLP`, for word segmentation:
    - `pip install py_vncorenlp`
- Install `sentence-transformers` (recommended) - [Usage](#usage-with-sentence-transformers):
    - `pip install sentence-transformers`
- Install `transformers` (optional) - [Usage](#usage-with-transformers):
    - `pip install transformers`
## Pre-processing
```python
import py_vncorenlp
py_vncorenlp.download_model(save_dir='/absolute/path/to/vncorenlp')
rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/absolute/path/to/vncorenlp')
query = "Trường UIT là gì?"
sentences = [
"Trường Đại học Công nghệ Thông tin có tên tiếng Anh là University of Information Technology (viết tắt là UIT) là thành viên của Đại học Quốc Gia TP.HCM.",
"Trường Đại học Kinh tế – Luật (tiếng Anh: University of Economics and Law – UEL) là trường đại học đào tạo và nghiên cứu khối ngành kinh tế, kinh doanh và luật hàng đầu Việt Nam.",
"Quĩ uỷ thác đầu tư (tiếng Anh: Unit Investment Trusts; viết tắt: UIT) là một công ty đầu tư mua hoặc nắm giữ một danh mục đầu tư cố định"
]
# word_segment returns a list of segmented sentence strings; join them into one string
tokenized_query = " ".join(rdrsegmenter.word_segment(query))
tokenized_sentences = [" ".join(rdrsegmenter.word_segment(sent)) for sent in sentences]
tokenized_pairs = [[tokenized_query, sent] for sent in tokenized_sentences]
MODEL_ID = 'itdainb/PhoRanker'
MAX_LENGTH = 256
```
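VnCoreNLP joins the syllables of each multi-syllable Vietnamese word with underscores (e.g. `Đại học` → `Đại_học`), which is the input format PhoBERT-based models such as PhoRanker expect. The convention can be illustrated with a toy stand-in (a hypothetical `toy_word_segment` helper with a hand-made compound list, for illustration only — not VnCoreNLP itself):

```python
def toy_word_segment(text, compounds):
    """Join the syllables of known compounds with underscores,
    mimicking the output format of VnCoreNLP's word segmenter."""
    for compound in sorted(compounds, key=len, reverse=True):
        text = text.replace(compound, compound.replace(" ", "_"))
    return text

compounds = ["Đại học", "Công nghệ", "Thông tin"]  # toy dictionary, for illustration only
print(toy_word_segment("Trường Đại học Công nghệ Thông tin", compounds))
# Trường Đại_học Công_nghệ Thông_tin
```

In practice, always use `rdrsegmenter.word_segment` as shown above; the segmentation quality directly affects the ranking scores.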
## Usage with `sentence-transformers`
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder(MODEL_ID, max_length=MAX_LENGTH)
# Optional: cast to fp16 for faster inference on GPU
model.model.half()
scores = model.predict(tokenized_pairs)
# 0.982, 0.2444, 0.9253
print(scores)
```
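The scores come back in the same order as `tokenized_pairs`, so reranking is just a matter of pairing each passage with its score and sorting. A minimal pure-Python sketch, using the example scores printed above and placeholder passage names:

```python
# Example scores from the snippet above, paired with placeholder passages.
scores = [0.9819, 0.2444, 0.9253]
passages = ["passage A", "passage B", "passage C"]

# Sort passages by descending relevance score.
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.4f}  {passage}")
# 0.9819  passage A
# 0.9253  passage C
# 0.2444  passage B
```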
## Usage with `transformers`
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Optional: cast to fp16 for faster inference on GPU
model.half()
features = tokenizer(tokenized_pairs, padding=True, truncation="longest_first", return_tensors="pt", max_length=MAX_LENGTH)
model.eval()
with torch.no_grad():
    model_predictions = model(**features, return_dict=True)
    logits = model_predictions.logits
    logits = torch.sigmoid(logits)
    scores = [logit[0].item() for logit in logits]
# 0.9819, 0.2444, 0.9253
print(scores)
```
## Performance
The following table compares PhoRanker with other pre-trained cross-encoders on the Vietnamese dev split of the [mMARCO passage-reranking dataset](https://huggingface.co/datasets/unicamp-dl/mmarco).
| Model-Name | NDCG@3 | MRR@3 | NDCG@5 | MRR@5 | NDCG@10 | MRR@10 | Docs / Sec |
| ----------------------------------------------------- |:------ | :---- |:------ | :---- |:------ | :----| :--- |
|itdainb/PhoRanker |**0.6625**|**0.6458**|**0.7147**|**0.6731**|**0.7422**|**0.6830**|15
|[amberoad/bert-multilingual-passage-reranking-msmarco](https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco) |0.4634|0.5233|0.5041|0.5383|0.5416|0.5523|**22**
|[kien-vu-uet/finetuned-phobert-passage-rerank-best-eval](https://huggingface.co/kien-vu-uet/finetuned-phobert-passage-rerank-best-eval) |0.0963|0.0883|0.1396|0.1131|0.1681|0.1246|15
|[BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) |0.6087|0.5841|0.6513|0.6062|0.6872|0.6209|3.51
|[BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma) |0.6088|0.5908|0.6446|0.6108|0.6785|0.6249|1.29
Note: Runtime (Docs / Sec) was measured on an A100 GPU with fp16.
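For reference, NDCG@k and MRR@k can be computed from a ranked list of binary relevance labels along the following lines (a minimal sketch of the standard definitions, not the exact evaluation script used for the table above):

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for binary relevance labels given in ranked order."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(relevances, k):
    """Reciprocal rank of the first relevant result within the top k."""
    for i, rel in enumerate(relevances[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

print(round(ndcg_at_k([1, 0, 1], 3), 4))  # 0.9197
print(mrr_at_k([0, 1, 0], 3))             # 0.5
```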
## Support me
If you find this work useful and would like to support its continued development, here are a few ways you can help:
1. **Star the Repository**: If you appreciate this work, please give it a star. Your support encourages continued development and improvement.
2. **Contribute**: Contributions are always welcome! You can help by reporting issues, submitting pull requests, or suggesting new features.
3. **Share**: Share this project with your colleagues, friends, or community. The more people know about it, the more feedback and contributions it can attract.
4. **Buy me a coffee**: If you’d like to provide financial support, consider making a donation via:
    - Momo: 0948798843
    - BIDV Bank: DAINB
    - PayPal: 0948798843
## Citation
Please cite as:
```bibtex
@misc{PhoRanker,
title={PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking},
    author={Dai Nguyen Ba (ORCID: 0009-0008-8559-3154)},
year={2024},
    publisher={Hugging Face},
    journal={Hugging Face repository},
howpublished={\url{https://huggingface.co/itdainb/PhoRanker}},
}
``` |