Dilated convolutions.

A recent development (e.g. see the paper by Fisher Yu and Vladlen Koltun) is to introduce one more hyperparameter to the CONV layer called the dilation. So far we've only discussed CONV filters that are contiguous. However, it is possible to have filters that have spaces between each cell, called dilation. As an example, in one dimension a filter w of size 3 would compute over input x the following: w[0]*x[0] + w[1]*x[1] + w[2]*x[2]. This is a dilation of 0. For dilation 1 the filter would instead compute w[0]*x[0] + w[1]*x[2] + w[2]*x[4]; in other words, there is a gap of 1 between the applications. This can be very useful in some settings when used in conjunction with 0-dilated filters, because it allows you to merge spatial information across the inputs much more aggressively with fewer layers. For example, if you stack two 3x3 CONV layers on top of each other, you can convince yourself that the neurons on the 2nd layer are a function of a 5x5 patch of the input (we would say that the effective receptive field of these neurons is 5x5). If we use dilated convolutions instead, this effective receptive field grows much more quickly.
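
As a rough illustration of the 1D example above, here is a minimal sketch in NumPy (the function name dilated_conv1d and the toy inputs are just for illustration, not part of any particular library) showing how the same size-3 filter covers a wider span of the input as the dilation grows:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=0):
    # With dilation d, the filter taps land at offsets 0, (d+1), 2*(d+1), ...
    step = dilation + 1
    span = (len(w) - 1) * step + 1  # number of input positions the filter covers
    out = []
    for i in range(len(x) - span + 1):
        # e.g. for dilation=1: w[0]*x[i] + w[1]*x[i+2] + w[2]*x[i+4]
        out.append(sum(w[k] * x[i + k * step] for k in range(len(w))))
    return np.array(out)

x = np.arange(10, dtype=float)
w = np.array([1.0, 2.0, 3.0])
print(dilated_conv1d(x, w, dilation=0))  # contiguous: w[0]*x[0] + w[1]*x[1] + w[2]*x[2], ...
print(dilated_conv1d(x, w, dilation=1))  # gap of 1:   w[0]*x[0] + w[1]*x[2] + w[2]*x[4], ...
```

For instance, keeping the first 3x3 layer contiguous but giving the second one a dilation of 1 (so its taps span 5 positions) would grow the effective receptive field of the stack from 5x5 to 7x7, using the same number of parameters.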
