Notes on WaveNet

Original paper:
https://arxiv.org/pdf/1609.03499.pdf

Key sentences from the paper (a sketch illustrating both points follows the list):

  1. At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample.

  2. Temporal Convolution.

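The following is a minimal, hypothetical PyTorch sketch of the two quoted points, not the paper's actual architecture (which also uses gated activations, residual/skip connections, and μ-law companding). The names `CausalConv1d` and `TinyWaveNet` are illustrative only. It shows a stack of left-padded dilated convolutions trained on all timesteps in parallel, then sampled one step at a time with each prediction fed back in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1D convolution that only looks at the current and past timesteps."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        # Pad only on the left, so output[t] depends on input[<= t].
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.left_pad, 0)))


class TinyWaveNet(nn.Module):
    """A stack of dilated causal convolutions over 256-way quantized samples."""

    def __init__(self, n_classes=256, channels=32):
        super().__init__()
        self.embed = nn.Embedding(n_classes, channels)
        self.layers = nn.ModuleList(
            [CausalConv1d(channels, channels, kernel_size=2, dilation=d)
             for d in (1, 2, 4, 8)]                # receptive field of 16 samples
        )
        self.head = nn.Conv1d(channels, n_classes, kernel_size=1)

    def forward(self, x):                          # x: (batch, time), int64
        h = self.embed(x).transpose(1, 2)          # -> (batch, channels, time)
        for layer in self.layers:
            h = torch.relu(layer(h))
        return self.head(h)                        # logits: (batch, 256, time)


model = TinyWaveNet()

# Training: every timestep is predicted in parallel from the known ground truth.
x = torch.randint(0, 256, (4, 1000))               # a batch of quantized waveforms
logits = model(x[:, :-1])                          # predict x[t] from x[< t]
loss = F.cross_entropy(logits, x[:, 1:])
loss.backward()

# Generation: strictly sequential; each sampled value is fed back as input.
seq = torch.zeros(1, 16, dtype=torch.long)         # seed the model with silence
for _ in range(100):
    with torch.no_grad():
        next_logits = model(seq)[:, :, -1]         # distribution over the next sample
    next_sample = torch.multinomial(next_logits.softmax(dim=-1), 1)
    seq = torch.cat([seq, next_sample], dim=1)
```

Note how the asymmetry plays out: the training pass is a single batched forward over the whole waveform, while generation has to loop sample by sample, which is exactly why WaveNet inference is slow.
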
Summary:
The philosophy of causal convolution embodied in WaveNet is most practical (arguably the only place it is truly required) in text-to-speech. For a task like multi-step time-series forecasting, a plain CNN without causal convolution can also get the job done.
