androstj committed
Commit 51465d5 · verified · 1 Parent(s): ed0f478

Push model using huggingface_hub.

Files changed (3)
  1. README.md +6 -95
  2. config.json +29 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -1,100 +1,11 @@
 ---
 license: cc-by-4.0
-tags:
-- audio quality
-- audio aesthetics
-library_name: pytorch
 pipeline_tag: audio-classification
+tags:
+- model_hub_mixin
+- pytorch_model_hub_mixin
 ---
 
-# Model Summary
-audiobox-aesthetics is a unified automatic quality assessment model for speech, music, and sound.
-
-# Model Details
-
-Audiobox-Aesthetics is introduced in [Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound](https://arxiv.org/abs/2502.05139).
-
-**Model Developer**: FAIR @ Meta AI
-
-**Model Architecture**:
-
-<img src="assets/aes_model.png" alt="Model" height="400px">
-
-Audiobox-Aesthetics is built on a simple Transformer-based architecture. Specifically, the audio encoder follows a WavLM-style structure consisting of several CNN layers and 12 Transformer (Vaswani et al., 2017) layers with 768 hidden dimensions. To predict the output, we project the audio embedding through multiple multi-layer perceptron (MLP) blocks, where each MLP block consists of 5 non-linear layers, one per axis (PQ, PC, CE, CU). The model is trained with standard regression losses (mean absolute and mean squared error).
-
-# How to install
-We provide 2 ways to run the model:
-
-1. Install via pip
-```
-pip install audiobox_aesthetics
-```
-2. Install directly from source
-
-This repository requires Python 3.9 and PyTorch 2.2 or greater. To install, clone this repo and run:
-```
-pip install -e .
-```
-
-# How to run prediction
-
-1. Create a JSONL file with the following format
-```
-{"path":"/path/to/a.wav"}
-{"path":"/path/to/b.wav"}
-...
-{"path":"/path/to/z.wav"}
-```
-or, if you only want to predict aesthetic scores for a certain timestamp range,
-```
-{"path":"/path/to/a.wav", "start_time":0, "end_time": 5}
-{"path":"/path/to/b.wav", "start_time":3, "end_time": 10}
-```
-and save it as `input.jsonl`.
-
-2. Run the following command
-```
-audio-aes input.jsonl --batch-size 100 > output.jsonl
-```
-If you haven't downloaded the checkpoint, the script will try to download it automatically. Otherwise, you can provide the path via `--ckpt /path/to/checkpoint.pt`.
-
-If you have SLURM, run the following command
-```
-audio-aes input.jsonl --batch-size 100 --remote --array 5 --job-dir $HOME/slurm_logs/ --chunk 1000 > output.jsonl
-```
-Please adjust the CPU & GPU settings using `--slurm-gpu, --slurm-cpu` depending on your nodes.
-
-
-3. The output file will contain the same number of rows as `input.jsonl`. Each row contains the 4 predicted axes as a JSON-formatted dictionary. Check the following table for more info:
-| Axes name | Full name |
-|---|---|
-| CE | Content Enjoyment |
-| CU | Content Usefulness |
-| PC | Production Complexity |
-| PQ | Production Quality |
-
-Output line example:
-```
-{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}
-```
-
-4. (Extra) If you want to extract only one axis (e.g. CE), post-process the output file with the following command using the `jq` utility:
-
-```jq '.CE' output.jsonl > output-aes_ce.txt```
-
-
-
-## Citation
-If you found this repository useful, please cite the following BibTeX entry.
-
-```
-@article{tjandra2025aes,
-  title={Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound},
-  author={Andros Tjandra and Yi-Chiao Wu and Baishan Guo and John Hoffman and Brian Ellis and Apoorv Vyas and Bowen Shi and Sanyuan Chen and Matt Le and Nick Zacharov and Carleigh Wood and Ann Lee and Wei-Ning Hsu},
-  year={2025},
-  url={https://arxiv.org/abs/2502.05139}
-}
-```
-## License
-The majority of audiobox-aesthetics is licensed under CC-BY 4.0, as found in the LICENSE file.
-However, portions of the project are available under separate license terms: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm) is licensed under the MIT license.
+This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+- Library: https://github.com/facebookresearch/audiobox-aesthetics
+- Docs: [More Information Needed]
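
The two files added below, config.json and model.safetensors, are exactly what `PyTorchModelHubMixin` produces: the mixin serializes the model's constructor arguments to config.json, stores the weights as model.safetensors, and can rebuild the model from both via `from_pretrained`. A minimal sketch of that mechanism, using a placeholder class and repo id rather than the real audiobox-aesthetics model class:

```python
# Minimal sketch of the PyTorchModelHubMixin round-trip that produced this commit's files.
# TinyRegressor and the repo id are placeholders, not the actual audiobox-aesthetics code.
# Requires torch, safetensors, and huggingface_hub.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyRegressor(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 768, output_dim: int = 1):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        return self.proj(x)


model = TinyRegressor(hidden_dim=768, output_dim=1)
model.save_pretrained("tiny-regressor")               # writes config.json + model.safetensors
# model.push_to_hub("your-username/tiny-regressor")   # the kind of call behind "Push model using huggingface_hub"
# reloaded = TinyRegressor.from_pretrained("your-username/tiny-regressor")
```

Loading the checkpoint pushed here follows the same `from_pretrained` pattern, using the actual model class from the audiobox-aesthetics library.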
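Going back to the prediction workflow in the removed card: the `audio-aes` CLI consumes a JSONL manifest and emits one JSON dictionary of scores per input row. A small helper sketch for building `input.jsonl` from a folder of wav files and reading `output.jsonl` back in (the `clips/` directory name is an assumption for illustration):

```python
# Helper sketch around the audio-aes CLI described in the removed README:
#   audio-aes input.jsonl --batch-size 100 > output.jsonl
# The "clips" directory name is illustrative only.
import json
from pathlib import Path


def write_manifest(wav_dir: str, manifest_path: str = "input.jsonl") -> None:
    """Write one {"path": ...} line per wav file, the format audio-aes expects."""
    with open(manifest_path, "w") as f:
        for wav in sorted(Path(wav_dir).glob("*.wav")):
            f.write(json.dumps({"path": str(wav)}) + "\n")


def read_scores(output_path: str = "output.jsonl") -> list[dict]:
    """Read the per-row score dictionaries ({"CE": ..., "CU": ..., "PC": ..., "PQ": ...})."""
    with open(output_path) as f:
        return [json.loads(line) for line in f if line.strip()]


write_manifest("clips")
# ... run: audio-aes input.jsonl --batch-size 100 > output.jsonl ...
print(read_scores()[:3])
```

The `jq` one-liner in step 4 does the same extraction when you only need a single axis.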
config.json ADDED
@@ -0,0 +1,29 @@
+{
+  "normalize_embed": true,
+  "nth_layer": 13,
+  "output_dim": 1,
+  "precision": "32",
+  "proj_act_fn": "gelu",
+  "proj_dropout": 0.0,
+  "proj_ln": true,
+  "proj_num_layer": 5,
+  "target_transform": {
+    "CE": {
+      "mean": 5.06865,
+      "std": 1.93029
+    },
+    "CU": {
+      "mean": 5.73633,
+      "std": 1.75669
+    },
+    "PC": {
+      "mean": 3.18591,
+      "std": 1.86637
+    },
+    "PQ": {
+      "mean": 6.57505,
+      "std": 1.51466
+    }
+  },
+  "use_weighted_layer_sum": true
+}
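
These fields line up with the architecture paragraph in the removed card: the encoder's hidden states are presumably combined through a learned weighted layer sum (`use_weighted_layer_sum`, `nth_layer`), and each axis gets its own MLP head of `proj_num_layer` GELU layers with layer norm (`proj_act_fn`, `proj_ln`), ending in a single output (`output_dim`) that is rescaled with the per-axis `target_transform` statistics. The sketch below is one plausible reading of those fields, not the library's actual implementation:

```python
# One plausible reading of config.json (an interpretation, not the real audiobox-aesthetics code):
# a per-axis MLP head and z-score de-normalization using the target_transform statistics.
import torch
import torch.nn as nn

TARGET_TRANSFORM = {
    "CE": {"mean": 5.06865, "std": 1.93029},
    "CU": {"mean": 5.73633, "std": 1.75669},
    "PC": {"mean": 3.18591, "std": 1.86637},
    "PQ": {"mean": 6.57505, "std": 1.51466},
}


def make_head(hidden_dim: int = 768, num_layers: int = 5, dropout: float = 0.0) -> nn.Sequential:
    """(Linear -> GELU -> LayerNorm -> Dropout) x num_layers, then a final Linear to output_dim = 1."""
    layers: list[nn.Module] = []
    for _ in range(num_layers):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.GELU(), nn.LayerNorm(hidden_dim), nn.Dropout(dropout)]
    layers.append(nn.Linear(hidden_dim, 1))
    return nn.Sequential(*layers)


def denormalize(axis: str, raw: torch.Tensor) -> torch.Tensor:
    """Map a normalized regression output back to the annotated score scale: raw * std + mean."""
    stats = TARGET_TRANSFORM[axis]
    return raw * stats["std"] + stats["mean"]


heads = nn.ModuleDict({axis: make_head() for axis in TARGET_TRANSFORM})
embedding = torch.randn(1, 768)          # stand-in for a pooled 768-dim audio embedding
with torch.no_grad():
    scores = {axis: denormalize(axis, heads[axis](embedding)).item() for axis in TARGET_TRANSFORM}
print(scores)                            # {"CE": ..., "CU": ..., "PC": ..., "PQ": ...}
```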
model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5a3c2412649cc2384ec525ffd5180ce6c4778f43bed6108e0a1303de04d014e
+size 415472992
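
The weights themselves are stored in Git LFS, so this file is only a pointer recording the blob's size and sha256. To double-check a downloaded copy against that digest, a sketch like the following works (the repo id is a placeholder for wherever this commit lives on the Hub):

```python
# Verify a downloaded model.safetensors against the sha256 recorded in the LFS pointer above.
# The repo_id is a placeholder; substitute the actual Hub repository.
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="user/audiobox-aesthetics", filename="model.safetensors")

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

# Expected: a5a3c2412649cc2384ec525ffd5180ce6c4778f43bed6108e0a1303de04d014e
print(sha256.hexdigest())
```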