---
title: AudioSR
sdk: gradio
emoji: π
colorFrom: red
colorTo: blue
short_description: Versatile audio super resolution (any -> 48kHz) with AudioSR
sdk_version: 5.11.0
---
# AudioSR: Versatile Audio Super-resolution at Scale
[arXiv](https://arxiv.org/abs/2309.07314) | [Project Page](https://audioldm.github.io/audiosr) | [Replicate Demo](https://replicate.com/nateraw/audio-super-resolution)
Pass your audio in, and AudioSR will make it high fidelity!
Works on all types of audio (e.g., music, speech, dog barking, rain, ...) and all sampling rates.
Share your thoughts, samples, and issues in our Discord channel: https://discord.gg/HWeBsJryaf

## Change Log
- 2024-12-31: The training code of AudioSR can be found [here](https://drive.google.com/file/d/1BaZuHbk1AfURX7SvkaD5_ZWLwun-wdpW/view?usp=drive_link) (for reference only; the code is not carefully organized).
- 2024-12-16: Add [Important things to know to make AudioSR work](example/how_to_make_audiosr_work.md).
- 2023-09-24: Add Replicate demo (@nateraw); fix errors on Windows, librosa warnings, etc. (@ORI-Muchim).
- 2023-09-16: Fix DC shift issue. Fix duration padding bug. Update default DDIM steps to 50.
## Gradio Demo
To run the Gradio demo locally:
1. Install dependencies: `pip install -r requirements.txt`
2. Run the app: `python app.py`
3. Open the URL displayed to view the demo
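Equivalently, as a single shell session (a minimal sketch; it assumes you have cloned this Space and that `requirements.txt` and `app.py` sit at the repository root):
```shell
# Install the demo's dependencies and launch the Gradio app.
# Gradio prints a local URL (typically http://127.0.0.1:7860) to open in a browser.
pip install -r requirements.txt
python app.py
```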
## Command-line Usage
### Installation
```shell
# Optional
conda create -n audiosr python=3.9; conda activate audiosr
# Install AudioSR
pip3 install audiosr==0.0.7
# or
# pip3 install git+https://github.com/haoheliu/versatile_audio_super_resolution.git
```
### Usage
Process a list of files. Results are saved to `./output` by default.
```shell
audiosr -il batch.lst
```
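Based on the `-il` description in the help text below, the list file is expected to contain one input audio path per line, for example (paths here are purely illustrative):
```
example/music.wav
example/speech.wav
/data/recordings/take_01.flac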
Process a single audio file.
```shell
audiosr -i example/music.wav
```
Full usage instructions:
```shell
> audiosr -h

usage: audiosr [-h] -i INPUT_AUDIO_FILE [-il INPUT_FILE_LIST] [-s SAVE_PATH] [--model_name {basic,speech}] [-d DEVICE] [--ddim_steps DDIM_STEPS] [-gs GUIDANCE_SCALE] [--seed SEED]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_AUDIO_FILE, --input_audio_file INPUT_AUDIO_FILE
                        Input audio file for audio super resolution
  -il INPUT_FILE_LIST, --input_file_list INPUT_FILE_LIST
                        A file that contains all audio files that need to perform audio super resolution
  -s SAVE_PATH, --save_path SAVE_PATH
                        The path to save model output
  --model_name {basic,speech}
                        The checkpoint you gonna use
  -d DEVICE, --device DEVICE
                        The device for computation. If not specified, the script will automatically choose the device based on your environment.
  --ddim_steps DDIM_STEPS
                        The sampling step for DDIM
  -gs GUIDANCE_SCALE, --guidance_scale GUIDANCE_SCALE
                        Guidance scale (Large => better quality and relevancy to text; Small => better diversity)
  --seed SEED           Changing this value (any integer) will lead to a different generation result.
  --suffix SUFFIX       Suffix for the output file
```
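Putting a few of these options together (illustrative values; the flags are the ones documented above):
```shell
# Upsample a file with the basic checkpoint, 50 DDIM steps, a fixed seed,
# an explicit output directory, and a custom filename suffix.
audiosr -i example/music.wav --model_name basic --ddim_steps 50 --seed 42 -s ./output --suffix _48k
```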
## TODO
[Buy me a coffee](https://www.buymeacoffee.com/haoheliuP)
- [ ] Add gradio demo.
- [ ] Optimize the inference speed.
## Cite our work
If you find this repo useful, please consider citing:
```bibtex
@article{liu2023audiosr,
  title   = {{AudioSR}: Versatile Audio Super-resolution at Scale},
  author  = {Liu, Haohe and Chen, Ke and Tian, Qiao and Wang, Wenwu and Plumbley, Mark D},
  journal = {arXiv preprint arXiv:2309.07314},
  year    = {2023}
}
```