---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---

# BERTopic_vafn

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic
topic_model = BERTopic.load("ychu612/BERTopic_vafn")

topic_model.get_topic_info()
```
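
The loaded model can also assign topics to new documents via BERTopic's `transform` method. The snippet below is a minimal sketch; the documents are placeholders, and the exact form of the returned scores depends on how the model was trained and serialized.

```python
from bertopic import BERTopic

# Load the trained model from the Hugging Face Hub
topic_model = BERTopic.load("ychu612/BERTopic_vafn")

# Placeholder documents -- replace these with your own texts
docs = [
    "An example document about one subject.",
    "Another example document about something else.",
]

# transform() returns one topic ID per document plus associated scores
topics, scores = topic_model.transform(docs)
print(topics)
```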

## Topic overview

* Number of topics: 3
* Number of training documents: 103

<details>
<summary>Click here for an overview of all topics.</summary>

| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | the - was - she - and - to | 15 | -1_the_was_she_and |
| 0 | the - she - was - and - her | 55 | 0_the_she_was_and |
| 1 | the - was - he - and - to | 33 | 1_the_was_he_and |

</details>
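
Topic `-1` is BERTopic's outlier topic, i.e. documents that were not assigned to any cluster. To inspect an individual topic's keywords and their c-TF-IDF weights, you can query the loaded model directly; the topic ID below is taken from the table above.

```python
from bertopic import BERTopic

topic_model = BERTopic.load("ychu612/BERTopic_vafn")

# Top keywords and their c-TF-IDF weights for topic 0 (see the table above)
print(topic_model.get_topic(0))
```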

## Training hyperparameters

* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
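
These settings correspond to arguments of the `BERTopic` constructor. As a rough sketch, a comparable model could be configured as follows; the corpus, embedding model, and sub-models (UMAP, HDBSCAN, vectorizer) are not recorded in this card, so the defaults and the 20 Newsgroups data below are assumptions for illustration only.

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Stand-in corpus for illustration; this model was actually trained on 103
# documents that are not included in this card.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))["data"]

# Mirror the hyperparameters listed above; anything not listed falls back to
# BERTopic defaults, which may differ from the original training setup.
topic_model = BERTopic(
    language="english",
    top_n_words=10,
    n_gram_range=(1, 1),
    min_topic_size=10,
    nr_topics=None,
    low_memory=False,
    calculate_probabilities=False,
    seed_topic_list=None,
    zeroshot_topic_list=None,
    zeroshot_min_similarity=0.7,
    verbose=False,
)

topics, probs = topic_model.fit_transform(docs)
```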

## Framework versions

* Numpy: 1.23.0
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.1.4
* Scikit-Learn: 1.1.0
* Sentence-transformers: 2.3.1
* Transformers: 4.38.1
* Numba: 0.56.4
* Plotly: 5.9.0
* Python: 3.10.9
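
If loading the model fails because of version conflicts, it can help to compare your environment against the versions listed above. A minimal check, assuming the packages are installed under their usual pip distribution names:

```python
from importlib.metadata import version

# Pip distribution names corresponding to the libraries listed above
for pkg in ["numpy", "hdbscan", "umap-learn", "pandas", "scikit-learn",
            "sentence-transformers", "transformers", "numba", "plotly"]:
    print(pkg, version(pkg))
```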