Notes on Some Small Details from Reading the nnUNet Code (Part 2)

1. Inference stage: the _get_gaussian function at line 250 of nnUNet/nnunet/network_architecture/neural_network.py.

    def _get_gaussian(patch_size, sigma_scale=1. / 8) -> np.ndarray:
        # Place a single 1 at the patch center, then blur it into a Gaussian.
        tmp = np.zeros(patch_size)
        center_coords = [i // 2 for i in patch_size]
        sigmas = [i * sigma_scale for i in patch_size]
        tmp[tuple(center_coords)] = 1
        gaussian_importance_map = gaussian_filter(tmp, sigmas, 0, mode='constant', cval=0)
        # Normalize so the center weight is exactly 1.
        gaussian_importance_map = gaussian_importance_map / np.max(gaussian_importance_map) * 1
        gaussian_importance_map = gaussian_importance_map.astype(np.float32)

        # gaussian_importance_map cannot be 0, otherwise we may end up with nans!
        gaussian_importance_map[gaussian_importance_map == 0] = np.min(
            gaussian_importance_map[gaussian_importance_map != 0])

        return gaussian_importance_map

At bottom, this is the weight assigned to each voxel within a patch once the data has been split into tiles for sliding-window inference. The 3D case (128, 128, 128) is hard to picture directly, so let's first build a 2D example to see what actually happens when a matrix (1 at the center, 0 everywhere else) is passed through a Gaussian filter. The example:

from scipy.ndimage import gaussian_filter
import numpy as np
import matplotlib.pyplot as plt

# 2D analogue of _get_gaussian: an impulse at the center of a 128x128 patch
tmp = np.zeros([128, 128])
center_coords = [64, 64]
sigmas = [i * 0.125 for i in [128, 128]]  # sigma_scale = 1/8
tmp[tuple(center_coords)] = 1
gaussian_importance_map = gaussian_filter(tmp, sigmas, 0, mode='constant', cval=0)
gaussian_importance_map = gaussian_importance_map / np.max(gaussian_importance_map)

plt.subplot(1, 2, 1)
plt.imshow(tmp)
plt.subplot(1, 2, 2)
plt.imshow(gaussian_importance_map)
plt.show()

[Figure 1: the center impulse (left) and the resulting Gaussian importance map (right)]

Extended to 3D, this becomes a cube-shaped array following a Gaussian distribution: the single largest value at the center and the smallest values at the eight corners.
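A quick sanity check of the 3D case (a minimal sketch that just repeats the recipe above on a (128, 128, 128) patch; the variable names here are mine, not from the file):

from scipy.ndimage import gaussian_filter
import numpy as np

patch_size = [128, 128, 128]
tmp = np.zeros(patch_size)
tmp[tuple(i // 2 for i in patch_size)] = 1  # single 1 at the cube's center
g = gaussian_filter(tmp, [i * 0.125 for i in patch_size], 0, mode='constant', cval=0)
g = g / g.max()

print(g[64, 64, 64])  # 1.0 -> the maximum sits at the center
print(g[0, 0, 0])     # smallest value -> the eight corners are farthest from the center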


2. Inference stage: starting at line 373 of nnUNet/nnunet/network_architecture/neural_network.py.

        # Slide the patch over the volume; steps holds the tile start
        # coordinates along each axis.
        for x in steps[0]:
            lb_x = x
            ub_x = x + patch_size[0]
            for y in steps[1]:
                lb_y = y
                ub_y = y + patch_size[1]
                for z in steps[2]:
                    lb_z = z
                    ub_z = z + patch_size[2]

                    # Predict one patch (with optional test-time mirroring);
                    # the Gaussian importance map passed here is multiplied
                    # onto the prediction inside that call.
                    predicted_patch = self._internal_maybe_mirror_and_pred_3D(
                        data[None, :, lb_x:ub_x, lb_y:ub_y, lb_z:ub_z], mirror_axes, do_mirroring,
                        gaussian_importance_map)[0]

                    if all_in_gpu:
                        predicted_patch = predicted_patch.half()
                    else:
                        predicted_patch = predicted_patch.cpu().numpy()

                    # Accumulate the weighted predictions and, separately, the
                    # per-voxel weights, so overlaps can be averaged out later.
                    aggregated_results[:, lb_x:ub_x, lb_y:ub_y, lb_z:ub_z] += predicted_patch
                    aggregated_nb_of_predictions[:, lb_x:ub_x, lb_y:ub_y, lb_z:ub_z] += add_for_nb_of_preds

After this code has executed, plotting aggregated_nb_of_predictions[0, 64] gives the figure below:

[Figure 2: a slice of aggregated_nb_of_predictions, i.e. the summed Gaussian weights of overlapping tiles, peaked toward the center]

This shows that the weight map cares most about the middle of each patch, with attention falling off toward the edges. That matches my intuition: convolutions suffer larger losses near patch borders, and the target organ usually lies in the fairly central region anyway.
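The reason for keeping aggregated_nb_of_predictions alongside aggregated_results is that dividing one by the other later in the function turns the weighted sums into a proper per-voxel weighted average. Here is a minimal 1D sketch of that mechanism (the toy image, the steps list, and the identity "predictor" are all made up for illustration; in nnUNet the Gaussian multiplication happens inside _internal_maybe_mirror_and_pred_3D):

import numpy as np
from scipy.ndimage import gaussian_filter

patch_size = 8
image = np.random.rand(20)  # toy 1D "volume"

# 1D analogue of _get_gaussian
imp = np.zeros(patch_size)
imp[patch_size // 2] = 1
imp = gaussian_filter(imp, patch_size / 8, 0, mode='constant', cval=0)
imp = imp / imp.max()
imp[imp == 0] = imp[imp != 0].min()

steps = [0, 4, 8, 12]  # stride = patch_size // 2, i.e. 50% overlap between tiles
aggregated = np.zeros_like(image)
counts = np.zeros_like(image)

for x in steps:
    pred = image[x:x + patch_size]  # stand-in for the network's patch prediction
    aggregated[x:x + patch_size] += pred * imp  # Gaussian-weighted accumulation
    counts[x:x + patch_size] += imp             # accumulate the weights themselves

result = aggregated / counts  # per-voxel weighted average over all covering tiles
print(np.allclose(result, image))  # True: the weights cancel for this identity "predictor"

This division is also why the zero-replacement at the end of _get_gaussian matters: a weight of exactly 0 could leave counts at 0 for some voxel and produce NaNs, which is just what the comment in the source warns about.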
