---
license: mit
---
# Intro
The Guzheng Performance Technique Recognition Model is trained on the GZ_IsoTech dataset, which consists of 2,824 audio clips showcasing various Guzheng playing techniques. Of these, 2,328 clips come from a virtual sound library and 496 clips were recorded by a professional Guzheng artist, together covering the full tonal range of the instrument. Each clip is categorized into one of eight playing-technique classes drawn from Guzheng performance practice:
- Vibrato (chanyin)
- Slide-up (shanghuayin)
- Slide-down (xiahuayin)
- Return slide (huihuayin)
- Glissando (guazou, huazhi, etc.)
- Thumb plucking (yaozhi)
- Harmonics (fanyin)
- Plucking techniques (gou, da, mo, tuo, etc.)

The model combines feature extraction, time-domain and frequency-domain analysis, and pattern recognition to identify these techniques from audio. It supports the automatic recognition, digital analysis, and educational study of Guzheng performance techniques, promoting the preservation and innovation of Guzheng art.
## Demo
<https://huggingface.co/spaces/ccmusic-database/GZ_IsoTech>
## Usage
```python
from modelscope import snapshot_download

# Download the released model files to a local cache and return their path
model_dir = snapshot_download("ccmusic-database/GZ_IsoTech")
```
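`snapshot_download` returns the local directory containing the released checkpoint files. The sketch below shows how inference might look on top of it. It is only an illustration under stated assumptions: the checkpoint filename (`vit_l_16_mel.pth`), the use of torchvision's `vit_l_16`, and the mel-spectrogram preprocessing are guesses based on the Results table below, not behavior documented by this card; see the evaluation repository linked under Evaluation for the actual pipeline.
```python
# Hypothetical inference sketch -- not the pipeline documented by this card.
# Assumptions: a torchvision vit_l_16 backbone with an 8-class head, a
# checkpoint named "vit_l_16_mel.pth" inside the downloaded directory, and
# 3x224x224 mel-spectrogram inputs. All three are guesses.
import librosa
import numpy as np
import torch
import torch.nn.functional as F
from modelscope import snapshot_download
from torchvision.models import vit_l_16

NUM_CLASSES = 8  # eight Guzheng playing techniques


def load_model(ckpt_path: str) -> torch.nn.Module:
    model = vit_l_16(weights=None)
    # Replace the ImageNet head with an 8-way classifier before loading weights
    model.heads.head = torch.nn.Linear(model.heads.head.in_features, NUM_CLASSES)
    model.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    return model.eval()


def mel_input(wav_path: str) -> torch.Tensor:
    """Convert a clip into a 1x3x224x224 mel-spectrogram tensor (assumed format)."""
    y, sr = librosa.load(wav_path, sr=22050)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr), ref=np.max)
    x = torch.from_numpy(mel).float()
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)     # scale to [0, 1]
    x = F.interpolate(x[None, None], size=(224, 224))  # resize to ViT input size
    return x.repeat(1, 3, 1, 1)                        # tile to 3 channels


model_dir = snapshot_download("ccmusic-database/GZ_IsoTech")
model = load_model(f"{model_dir}/vit_l_16_mel.pth")    # hypothetical filename
with torch.no_grad():
    logits = model(mel_input("example.wav"))
print("Predicted class index:", int(logits.argmax(dim=1)))
```
The predicted index corresponds to one of the eight techniques listed in the Intro; the actual index-to-label mapping depends on how the released checkpoint was trained and is not specified here.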
## Maintenance
```bash
git clone [email protected]:ccmusic-database/GZ_IsoTech
cd GZ_IsoTech
```
## Results
| Backbone | Size (M params) | Mel | CQT | Chroma |
| :----------------: | :-----: | :-------------------------: | :---------: | :---------: |
| vit_l_16 | 304.3 | [**_0.855_**](#best-result) | **_0.824_** | **_0.770_** |
| maxvit_t | 30.9 | 0.763 | 0.776 | 0.642 |
| | | | | |
| resnext101_64x4d | 83.5 | 0.713 | 0.765 | 0.639 |
| resnet101 | 44.5 | 0.731 | 0.798 | **_0.719_** |
| regnet_y_8gf | 39.4 | 0.804 | **_0.807_** | 0.716 |
| shufflenet_v2_x2_0 | 7.4 | 0.702 | 0.799 | 0.665 |
| mobilenet_v3_large | 5.5 | **_0.806_** | 0.798 | 0.657 |
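
Here, Mel, CQT, and Chroma denote three spectral representations the backbones were trained on. The snippet below is a minimal sketch of how such features can be computed with librosa's defaults; the exact extraction parameters behind these results live in the evaluation repository linked under Evaluation, not in this card.
```python
# Minimal feature-extraction sketch using librosa defaults; the exact
# parameters behind the Results table are assumptions, not documented here.
import librosa
import numpy as np

y, sr = librosa.load("example.wav", sr=22050)

mel = librosa.feature.melspectrogram(y=y, sr=sr)   # Mel spectrogram
cqt = np.abs(librosa.cqt(y=y, sr=sr))              # Constant-Q transform magnitude
chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # Chromagram

print(mel.shape, cqt.shape, chroma.shape)
```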
### Best result
<table>
<tr>
<th>Loss curve</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/GZ_IsoTech/resolve/master/vit_l_16_mel_2024-12-06_08-28-13/loss.jpg"></td>
</tr>
<tr>
<th>Training and validation accuracy</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/GZ_IsoTech/resolve/master/vit_l_16_mel_2024-12-06_08-28-13/acc.jpg"></td>
</tr>
<tr>
<th>Confusion matrix</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/GZ_IsoTech/resolve/master/vit_l_16_mel_2024-12-06_08-28-13/mat.jpg"></td>
</tr>
</table>
## Dataset
<https://huggingface.co/datasets/ccmusic-database/GZ_IsoTech>
## Mirror
<https://www.modelscope.cn/models/ccmusic-database/GZ_IsoTech>
## Evaluation
<https://github.com/monetjoe/ccmusic_eval>
## Cite
```bibtex
@article{Zhou-2025,
title = {CCMusic: an Open and Diverse Database for Chinese Music Information Retrieval Research},
  author  = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
journal = {Transactions of the International Society for Music Information Retrieval},
year = {2025}
}
```