Upload folder using huggingface_hub
- README.md +85 -0
- config.json +1 -0
- model.joblib +3 -0
- requirements.txt +6 -0
- scaler.joblib +3 -0
README.md
ADDED
@@ -0,0 +1,85 @@
---
language: multilingual
license: apache-2.0
datasets:
- voxceleb2
libraries:
- speechbrain
tags:
- age-estimation
- speaker-characteristics
- speaker-recognition
- audio-regression
- voice-analysis
---

# Age Estimation Model

This model combines the SpeechBrain ECAPA-TDNN speaker embedding model with an SVR regressor to predict speaker age from audio input. The model was trained on the VoxCeleb2 dataset.

## Model Performance Comparison

We provide multiple pre-trained models with different architectures and feature sets. Here is a comparison of their performance:

| Model | Architecture | Features | Training Data | Test MAE | Best For |
|-------|--------------|----------|---------------|----------|----------|
| VoxCeleb2 SVR (223) | SVR | ECAPA + Librosa (223-dim) | VoxCeleb2 | 7.88 years | Best performance on VoxCeleb2 |
| VoxCeleb2 SVR (192) | SVR | ECAPA only (192-dim) | VoxCeleb2 | 7.89 years | Lightweight deployment |
| TIMIT ANN (192) | ANN | ECAPA only (192-dim) | TIMIT | 4.95 years | Clean studio recordings |
| Combined ANN (223) | ANN | ECAPA + Librosa (223-dim) | VoxCeleb2 + TIMIT | 6.93 years | Best general performance |

You can find the other models [here](https://huggingface.co/griko).

## Model Details
- Input: audio file (automatically converted to 16 kHz, mono, single channel)
- Output: predicted age in years (continuous value)
- Features: SpeechBrain ECAPA-TDNN embedding (192 dimensions)
- Regressor: Support Vector Regression (SVR) with hyperparameters tuned via Optuna
- Performance:
  - VoxCeleb2 test set: 7.89 years Mean Absolute Error (MAE)

## Features
1. SpeechBrain ECAPA-TDNN embeddings (192 dimensions); a feature-extraction sketch follows below

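For illustration only, here is a minimal sketch of how the 192-dimensional ECAPA-TDNN embedding could be extracted with SpeechBrain and passed through the `scaler.joblib` and `model.joblib` files shipped in this repository. The SpeechBrain checkpoint name, the preprocessing, and the file locations are assumptions; for real use, prefer the packaged pipeline shown in the Usage section below.

```python
# Sketch (not the packaged pipeline): ECAPA embedding -> scaler -> SVR.
# Assumes the speechbrain/spkrec-ecapa-voxceleb checkpoint and that
# scaler.joblib / model.joblib from this repo sit in the working directory.
import joblib
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier  # speechbrain.inference in newer releases

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb", savedir="ecapa_tmp"
)
scaler = joblib.load("scaler.joblib")  # feature scaler bundled with this repo
svr = joblib.load("model.joblib")      # SVR regressor bundled with this repo

waveform, sr = torchaudio.load("path/to/audio.wav")
waveform = waveform.mean(dim=0, keepdim=True)                    # downmix to mono
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

with torch.no_grad():
    emb = encoder.encode_batch(waveform).squeeze().numpy()       # (192,) embedding

age = svr.predict(scaler.transform(emb.reshape(1, -1)))[0]
print(f"Predicted age: {age:.1f} years")
```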
## Training Data
The model was trained on the VoxCeleb2 dataset:
- Audio preprocessing:
  - Converted to WAV format, single channel, 16 kHz sampling rate
  - Applied Silero VAD for voice activity detection, keeping the first voiced segment (a rough preprocessing sketch follows below)
- Age labels were collected from Wikidata and other public sources
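As a rough illustration of this preprocessing (an assumption-based sketch, not the exact training code), the conversion and VAD step could look like the following; the Silero VAD torch.hub entry point and its default thresholds are assumed:

```python
# Rough preprocessing sketch: 16 kHz mono audio plus Silero VAD, keeping
# only the first voiced segment, as described above. The silero-vad
# torch.hub entry point and its defaults are assumptions.
import torch
import torchaudio

def preprocess(path: str, out_path: str = "segment.wav") -> str:
    waveform, sr = torchaudio.load(path)
    waveform = waveform.mean(dim=0, keepdim=True)                # downmix to mono
    if sr != 16000:
        waveform = torchaudio.functional.resample(waveform, sr, 16000)

    # Voice activity detection with Silero VAD; keep the first voiced segment.
    vad_model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
    get_speech_timestamps = utils[0]
    segments = get_speech_timestamps(waveform.squeeze(), vad_model, sampling_rate=16000)
    if segments:
        waveform = waveform[:, segments[0]["start"]:segments[0]["end"]]

    torchaudio.save(out_path, waveform, 16000)
    return out_path
```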
## Installation

```bash
pip install "git+https://github.com/griko/voice-age-regression.git[svr-ecapa-voxceleb2]"
```

## Usage

```python
from age_regressor import AgeRegressionPipeline

# Load the pipeline
regressor = AgeRegressionPipeline.from_pretrained(
    "griko/age_reg_svr_ecapa_voxceleb2"
)

# Single file prediction
result = regressor("path/to/audio.wav")
print(f"Predicted age: {result[0]:.1f} years")

# Batch prediction
results = regressor(["audio1.wav", "audio2.wav"])
print(f"Predicted ages: {[f'{age:.1f}' for age in results]} years")
```

## Limitations
- The model was trained on celebrity voices from YouTube interview recordings
- Performance may vary with different audio qualities or recording conditions
- Age predictions are estimates and should not be used for medical or legal purposes
- Treat the estimates as approximate values, not exact measurements

## Citation
If you use this model in your research, please cite:

```bibtex
TBD
```
config.json
ADDED
@@ -0,0 +1 @@
{"feature_names": ["0_speechbrain_embedding", "1_speechbrain_embedding", "2_speechbrain_embedding", "3_speechbrain_embedding", "4_speechbrain_embedding", "5_speechbrain_embedding", "6_speechbrain_embedding", "7_speechbrain_embedding", "8_speechbrain_embedding", "9_speechbrain_embedding", "10_speechbrain_embedding", "11_speechbrain_embedding", "12_speechbrain_embedding", "13_speechbrain_embedding", "14_speechbrain_embedding", "15_speechbrain_embedding", "16_speechbrain_embedding", "17_speechbrain_embedding", "18_speechbrain_embedding", "19_speechbrain_embedding", "20_speechbrain_embedding", "21_speechbrain_embedding", "22_speechbrain_embedding", "23_speechbrain_embedding", "24_speechbrain_embedding", "25_speechbrain_embedding", "26_speechbrain_embedding", "27_speechbrain_embedding", "28_speechbrain_embedding", "29_speechbrain_embedding", "30_speechbrain_embedding", "31_speechbrain_embedding", "32_speechbrain_embedding", "33_speechbrain_embedding", "34_speechbrain_embedding", "35_speechbrain_embedding", "36_speechbrain_embedding", "37_speechbrain_embedding", "38_speechbrain_embedding", "39_speechbrain_embedding", "40_speechbrain_embedding", "41_speechbrain_embedding", "42_speechbrain_embedding", "43_speechbrain_embedding", "44_speechbrain_embedding", "45_speechbrain_embedding", "46_speechbrain_embedding", "47_speechbrain_embedding", "48_speechbrain_embedding", "49_speechbrain_embedding", "50_speechbrain_embedding", "51_speechbrain_embedding", "52_speechbrain_embedding", "53_speechbrain_embedding", "54_speechbrain_embedding", "55_speechbrain_embedding", "56_speechbrain_embedding", "57_speechbrain_embedding", "58_speechbrain_embedding", "59_speechbrain_embedding", "60_speechbrain_embedding", "61_speechbrain_embedding", "62_speechbrain_embedding", "63_speechbrain_embedding", "64_speechbrain_embedding", "65_speechbrain_embedding", "66_speechbrain_embedding", "67_speechbrain_embedding", "68_speechbrain_embedding", "69_speechbrain_embedding", "70_speechbrain_embedding", "71_speechbrain_embedding", "72_speechbrain_embedding", "73_speechbrain_embedding", "74_speechbrain_embedding", "75_speechbrain_embedding", "76_speechbrain_embedding", "77_speechbrain_embedding", "78_speechbrain_embedding", "79_speechbrain_embedding", "80_speechbrain_embedding", "81_speechbrain_embedding", "82_speechbrain_embedding", "83_speechbrain_embedding", "84_speechbrain_embedding", "85_speechbrain_embedding", "86_speechbrain_embedding", "87_speechbrain_embedding", "88_speechbrain_embedding", "89_speechbrain_embedding", "90_speechbrain_embedding", "91_speechbrain_embedding", "92_speechbrain_embedding", "93_speechbrain_embedding", "94_speechbrain_embedding", "95_speechbrain_embedding", "96_speechbrain_embedding", "97_speechbrain_embedding", "98_speechbrain_embedding", "99_speechbrain_embedding", "100_speechbrain_embedding", "101_speechbrain_embedding", "102_speechbrain_embedding", "103_speechbrain_embedding", "104_speechbrain_embedding", "105_speechbrain_embedding", "106_speechbrain_embedding", "107_speechbrain_embedding", "108_speechbrain_embedding", "109_speechbrain_embedding", "110_speechbrain_embedding", "111_speechbrain_embedding", "112_speechbrain_embedding", "113_speechbrain_embedding", "114_speechbrain_embedding", "115_speechbrain_embedding", "116_speechbrain_embedding", "117_speechbrain_embedding", "118_speechbrain_embedding", "119_speechbrain_embedding", "120_speechbrain_embedding", "121_speechbrain_embedding", "122_speechbrain_embedding", "123_speechbrain_embedding", "124_speechbrain_embedding", 
"125_speechbrain_embedding", "126_speechbrain_embedding", "127_speechbrain_embedding", "128_speechbrain_embedding", "129_speechbrain_embedding", "130_speechbrain_embedding", "131_speechbrain_embedding", "132_speechbrain_embedding", "133_speechbrain_embedding", "134_speechbrain_embedding", "135_speechbrain_embedding", "136_speechbrain_embedding", "137_speechbrain_embedding", "138_speechbrain_embedding", "139_speechbrain_embedding", "140_speechbrain_embedding", "141_speechbrain_embedding", "142_speechbrain_embedding", "143_speechbrain_embedding", "144_speechbrain_embedding", "145_speechbrain_embedding", "146_speechbrain_embedding", "147_speechbrain_embedding", "148_speechbrain_embedding", "149_speechbrain_embedding", "150_speechbrain_embedding", "151_speechbrain_embedding", "152_speechbrain_embedding", "153_speechbrain_embedding", "154_speechbrain_embedding", "155_speechbrain_embedding", "156_speechbrain_embedding", "157_speechbrain_embedding", "158_speechbrain_embedding", "159_speechbrain_embedding", "160_speechbrain_embedding", "161_speechbrain_embedding", "162_speechbrain_embedding", "163_speechbrain_embedding", "164_speechbrain_embedding", "165_speechbrain_embedding", "166_speechbrain_embedding", "167_speechbrain_embedding", "168_speechbrain_embedding", "169_speechbrain_embedding", "170_speechbrain_embedding", "171_speechbrain_embedding", "172_speechbrain_embedding", "173_speechbrain_embedding", "174_speechbrain_embedding", "175_speechbrain_embedding", "176_speechbrain_embedding", "177_speechbrain_embedding", "178_speechbrain_embedding", "179_speechbrain_embedding", "180_speechbrain_embedding", "181_speechbrain_embedding", "182_speechbrain_embedding", "183_speechbrain_embedding", "184_speechbrain_embedding", "185_speechbrain_embedding", "186_speechbrain_embedding", "187_speechbrain_embedding", "188_speechbrain_embedding", "189_speechbrain_embedding", "190_speechbrain_embedding", "191_speechbrain_embedding"], "model_type": "svr", "feature_set": "ecapa"}
model.joblib
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91e2be1f70d14cbff46fc26b84e521b6887a101ace32e573af90a2b248d30369
size 4596143
requirements.txt
ADDED
@@ -0,0 +1,6 @@
scikit-learn
pandas
soundfile
speechbrain
torch
torchaudio
scaler.joblib
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41515fd50d331ccbc06750cef97d38c63660d07c3406c7566092916534a54b19
size 11559