[Code Study] Understanding the voxel/pillar sparse-to-dense tensor conversion code (it took me quite a while to understand)

The conversion needs the per-voxel features sparse_features (shape [N, C]) and their corresponding grid coordinates coords (shape [N, 4]).

Debugging note: the features must be transposed, otherwise the dimensions don't match! Advanced indexing with four [N] coordinate columns on the left-hand side yields a [C, N] view, while sparse_features is [N, C], hence the .t().
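
A minimal sketch of where the mismatch comes from (the toy shapes and the coordinate column order are assumptions, purely for illustration):

    import torch

    C, N = 4, 7                                  # toy sizes
    dense = torch.zeros(C, 2, 3, 5, 5)           # channels-first: [C, batch, seq, X, Y]
    coords = torch.randint(0, 2, (N, 4))         # assumed column order: (batch, seq, x, y)
    sparse_features = torch.randn(N, C)

    lhs = dense[:, coords[:, 0], coords[:, 1], coords[:, 2], coords[:, 3]]
    print(lhs.shape)                             # torch.Size([4, 7]) == [C, N]
    # assigning the [N, C] sparse_features directly raises a shape-mismatch error;
    # sparse_features.t() matches the [C, N] view produced by the indexing
    dense[:, coords[:, 0], coords[:, 1], coords[:, 2], coords[:, 3]] = sparse_features.t()
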
The corresponding code is below; the vectorized version should be faster than the for-loop version (kept commented out for comparison).

    def voxel_indexing(self, sparse_features, coords): # sparse_features: [N, C], coords:[N, 4]
        dim = sparse_features.shape[-1]

        dense_feature = Variable(torch.zeros(dim, cfg.batch_size, cfg.seq_num, cfg.X, cfg.Y)).to(cfg.device) # channels-first: [C, batch_size, seq_num, X, Y]
        # print(f"dense_feature.shape = {dense_feature.shape}")
        dense_feature[:, coords[:, 0], coords[:, 1], coords[:, 2], coords[:, 3]] = sparse_features.t()


        # batch_canvas = []
        # batch_size = cfg.N
        # for batch_id in range(batch_size):
        #     canvas = torch.zeros(dim, cfg.W, cfg.H, cfg.D, dtype=sparse_features.dtype, device=sparse_features.device)

        #     batch_mask = (coords[:, 0] == batch_id)
        #     # print(f"batch_id = {batch_id}")
        #     # print(f"{batch_mask}")  

        #     this_coords = coords[batch_mask, :]
        #     # print(f"====> len(this_coords) = {len(this_coords)}")
        #     voxels = sparse_features[batch_mask, :]
        #     voxels = voxels.t()
        #     # print(f"voxels.t().shape = {voxels.shape}")
            
        #     canvas[:, this_coords[:, 1], this_coords[:, 2], this_coords[:, 3]] = voxels
        #     batch_canvas.append(canvas)
        
        # batch_canvas = torch.stack(batch_canvas, 0)
        # print(f"batch_canvas.shape = {batch_canvas.shape}")
        
        # return dense_feature
        return dense_feature.transpose(0, 1).contiguous()  # -> [batch_size, C, seq_num, X, Y]
        # return batch_canvas
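
For reference, here is a self-contained usage sketch of the same scatter-and-transpose logic with a stand-in cfg; the values of batch_size, seq_num, X, Y and the toy tensor sizes are assumptions, not taken from the original project:

    import torch

    class cfg:                                  # stand-in config, hypothetical values
        batch_size, seq_num, X, Y = 2, 3, 10, 10
        device = "cpu"

    N, C = 5, 8                                 # number of occupied voxels, feature channels
    coords = torch.stack([
        torch.randint(0, cfg.batch_size, (N,)),
        torch.randint(0, cfg.seq_num, (N,)),
        torch.randint(0, cfg.X, (N,)),
        torch.randint(0, cfg.Y, (N,)),
    ], dim=1)                                   # [N, 4], columns: (batch, seq, x, y)
    sparse_features = torch.randn(N, C)         # [N, C]

    dense = torch.zeros(C, cfg.batch_size, cfg.seq_num, cfg.X, cfg.Y, device=cfg.device)
    dense[:, coords[:, 0], coords[:, 1], coords[:, 2], coords[:, 3]] = sparse_features.t()
    dense = dense.transpose(0, 1).contiguous()
    print(dense.shape)                          # torch.Size([2, 8, 3, 10, 10])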
