Spleeter manual

Usage

Once installed, Spleeter can be used directly from any CLI through the spleeter command. It provides three actions, each with its own subcommand:

Command     Description
separate    Separate audio files using a pretrained model
train       Train a source separation model (you need a dataset of separated tracks to use it)
evaluate    Evaluate a pretrained model over the musDB test set

Separate sources

To get help on the different options available with the separate command, type:

spleeter separate -h

Using 2stems model

You can straightforwardly separate audio files with the default 2 stems (vocals / accompaniment) pretrained model as follows1:

spleeter separate -i audio_example.mp3 -o audio_output

1 Be sure to be in the spleeter folder if you are using a cloned repository, or replace audio_example.mp3 with a valid path to an audio file.

The -i option provides the list of audio filenames to process, and the -o option provides the output path where the separated WAV files are written. The command may take quite some time on the first run, since it downloads the pre-trained model. If everything goes well, you should then get a folder audio_output/audio_example that contains two files: accompaniment.wav and vocals.wav.
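
Once the command has finished, you can check the result from the shell (a minimal sketch, assuming the default WAV output and the example file used above):

ls audio_output/audio_example
# accompaniment.wav  vocals.wav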

Using 4stems model

You can also use a pretrained 4 stems (vocals / bass / drums / other) model:

spleeter separate -i audio_example.mp3 -o audio_output -p spleeter:4stems

The -p option provides the model settings. It can be either an embedded Spleeter setting identifier2 or a path to a JSON configuration file such as this one; an example of passing a custom configuration file is sketched after the list in footnote 2 below.

This time, it will generate four files: vocals.wav, drums.wav, bass.wav and other.wav.

2 At this time, the following embedded configurations are available:

  • spleeter:2stems
  • spleeter:4stems
  • spleeter:5stems
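
Alternatively, -p also accepts the path to your own JSON configuration file. A minimal sketch, assuming you have saved a custom configuration as my_config.json (hypothetical file name):

spleeter separate -i audio_example.mp3 -o audio_output -p my_config.json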

Using 5stems model

Finally, a pretrained 5 stems (vocals / bass / drums / piano / other) model is also available out of the box:

spleeter separate -i audio_example.mp3 -o audio_output -p spleeter:5stems

This would generate five files: vocals.wav, drums.wav, bass.wav, piano.wav and other.wav.

Batch processing

The separate command builds the model each time it is called, and downloads it on the first run. This can take long compared to the separation itself when you process a single audio file (especially a short one). If you have several files to separate, it is therefore recommended to perform all separations with a single call to separate:

spleeter separate \
     -i <path/to/audio1.mp3> <path/to/audio2.mp3> <path/to/audio3.mp3> \
     -o audio_output
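
Since the shell expands wildcards into a space-separated list of filenames before the command runs, you can also hand a whole directory of files to -i in one call (a sketch, assuming your audio files live in a songs/ directory, a hypothetical path):

spleeter separate -i songs/*.mp3 -o audio_output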

Train model

For training your own model, you need:

  • A dataset of separated files such as musDB.
  • CSV files describing the dataset: one for training and one for validation, which are used for generating training data (a sketch of such a CSV is shown after this list).
  • A JSON configuration file such as this one that gathers all parameters needed for training and the paths to the CSV files.
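
As an illustration only, such a CSV lists, for each track, the path to the mixture, the path to each isolated source and the track duration; the exact column names must match those referenced by your JSON configuration. A minimal sketch for a 4-stem setup (assumed column names and paths):

mix_path,vocals_path,drums_path,bass_path,other_path,duration
train/track1/mixture.wav,train/track1/vocals.wav,train/track1/drums.wav,train/track1/bass.wav,train/track1/other.wav,215.3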

Once your training configuration is set up, you can run model training as follows:

spleeter train -p configs/musdb_config.json -d <path/to/musdb>

Evaluate model

For evaluating a model, you need the musDB dataset. You can for instance evaluate the provided 4 stems pre-trained model this way:

spleeter evaluate -p spleeter:4stems --mus_dir <path/to/musdb> -o eval_output

To use multi-channel Wiener filtering for performing the separation (and get the results reported in the paper), add the --mwf option:

spleeter evaluate -p spleeter:4stems --mus_dir <path/to/musdb> -o eval_output --mwf

Using Docker image

We provide Docker images for using Spleeter (a CPU image and a GPU image). You first need to install Docker, for instance the Docker Community Edition.

Build image

You can build the images with the docker build command from a cloned repository:

git clone https://github.com/deezer/spleeter
cd spleeter
# Build CPU image.
docker build -f docker/cpu.Dockerfile -t spleeter:cpu .
# Build GPU image.
docker build -f docker/gpu.Dockerfile -t spleeter:gpu .

Run container

The built images' entrypoint is Spleeter's main command spleeter. You can thus run the separate command by running a previously built image with docker run3, mounting a directory for writing the output:

# Run with CPU :
docker run -v $(pwd)/output:/output spleeter:cpu separate -i audio_example.mp3 -o /output
# Or with GPU if available :
nvidia-docker run -v $(pwd)/output:/output spleeter:gpu separate -i audio_example.mp3 -o /output

3 For running a command on GPU, you should use the nvidia-docker command instead of docker. This alternative command allows the container to access the Nvidia driver and the GPU devices of the host.

This will separate the audio file provided as input (here audio_example.mp3, which is embedded in the built image) and put the separated files vocals.wav and accompaniment.wav on your computer, in the mounted output folder output/audio_example.

To use your own audio files you will need to create a container volume when running the image. We also suggest creating a volume for storing downloaded models, so that Spleeter does not have to download the model files each time you run the image.

To do so, let's first create some environment variables:

export AUDIO_IN='/path/to/directory/with/audio/file'
export AUDIO_OUT='/path/to/write/separated/source/into'
export MODEL_DIRECTORY='/path/to/model/storage'

Then we can run the separate command through the container:

docker run \
    -v $AUDIO_IN:/input \
    -v $AUDIO_OUT:/output \
    -v $MODEL_DIRECTORY:/model \
    spleeter:cpu \
    separate -i /input/audio_1.mp3 /input/audio_2.mp3 -o /output

⚠️ As for non-Docker usage, we recommend performing the separation of multiple files with a single call to the Spleeter image.

You can use the train command (which you should mainly run on a GPU, as training is very computationally expensive), as well as the evaluate command, which performs evaluation on the musDB test dataset4 using museval.

# Model training.
nvidia-docker run -v <path/to/musdb>:/musdb spleeter:gpu train -p configs/musdb_config.json -d /musdb
# Model evaluation.
nvidia-docker run -v $(pwd)/eval_output:/eval_output -v <path/to/musdb>:/musdb spleeter:gpu evaluate -p spleeter:4stems --mus_dir /musdb -o /eval_output

4 You need to request access and download it from here.

The separation process should be quite fast on a GPU (less than 90 s on the musDB test set), but running museval takes much more time (a few hours).
