Sparse matrix (PyTorch sparse tensor) operations

This is a summary, collected from previous PRs, of sparse tensor functions and their autograd support.
Functions

sum() with autograd #12430
max() with autograd
log1p() #8969
S.copy_(S) with autograd #9005
indexing (gather(), index_select())
mul_(S, D) -> S, mul(S, D) -> S with autograd
cuda()
nn.Linear with autograd (SxS, SxD, relies on addmm and matmul)
softmax() with autograd (same as in TF: (1) applies softmax() to a region of the densified tensor submatrix; (2) masks out the zero locations; (3) renormalizes the remaining elements. The SparseTensor result has exactly the same non-zero indices and shape; see the sketch after this list)
to_sparse() #12171
narrow_copy() #11342
sparse_mm(S, D) -> D with autograd
cat() #13577
unsqueeze(), stack() #13760
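
A minimal sketch of the masked-softmax semantics described in the softmax() item above, assuming a 2-D coalesced COO tensor with no explicitly stored zeros; `sparse_softmax_sketch` is a hypothetical helper, not an official API (recent PyTorch releases do provide `torch.sparse.softmax`):

```python
import torch

def sparse_softmax_sketch(s, dim=-1):
    # Hypothetical helper: densify, push the zero locations to -inf so they
    # receive zero probability, softmax, then zero the masked slots again.
    d = s.to_dense()
    mask = d == 0
    out = torch.softmax(d.masked_fill(mask, float('-inf')), dim=dim)
    # A slice with no non-zeros yields NaN (softmax over all -inf); the mask
    # covers every slot in such a slice, so the fill below also cleans that up.
    return out.masked_fill(mask, 0.0).to_sparse()
```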

Wish list

bmm(S, D) (add an extra sparse dim at indices of SparseTensor as batch dim?)
broadcasting mul(S, D) -> S
Dataset, DataLoader
save, load for sparse tensors (see the workaround sketch below)
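
For the save/load item above, a common workaround is to serialize the COO components (indices, values, shape) yourself and rebuild the tensor on load. A sketch under that assumption; `save_sparse` and `load_sparse` are hypothetical names, not PyTorch API:

```python
import torch

def save_sparse(s, path):
    # Store the COO components explicitly; coalesce() first so that
    # indices()/values() are well defined.
    s = s.coalesce()
    torch.save({'indices': s.indices(),
                'values': s.values(),
                'size': s.size()}, path)

def load_sparse(path):
    # Rebuild the sparse tensor from the saved components.
    d = torch.load(path)
    return torch.sparse_coo_tensor(d['indices'], d['values'], d['size'])
```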

Existing

    autograd supported for values() via #13001 (Thanks to @SsnL!), which means all element-wise ops are now supported on sparse tensors
    norm (cannot take dim args)
    pow
    clone
    zero_
    t_ / t
    add_ / add(Sparse, Sparse, Scalar) -> Sparse
    add_ / add(Dense, Sparse, Scalar) -> Dense
    sub_ / sub(Sparse, Sparse, Scalar) -> Sparse
    mul_ / mul(Sparse, Sparse) -> Sparse
    mul_ / mul(Sparse, Scalar) -> Sparse
    div_ / div(Sparse, Scalar) -> Sparse
    addmm(Dense, Sparse, Dense, Scalar, Scalar) -> Dense
    sspaddmm(Sparse, Sparse, Dense, Scalar, Scalar) -> Sparse
    mm(Sparse, Dense) -> Dense
    smm(Sparse, Dense) -> Sparse
    hspmm(Sparse, Dense) -> HybridSparse
    spmm(Sparse, Dense) -> Dense
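
A quick demonstration of a few entries from the list above on a small COO tensor (the shapes and values here are arbitrary):

```python
import torch

# Build a small 2x3 COO sparse tensor.
i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])      # indices: row 0 -> col 2, row 1 -> cols 0 and 2
v = torch.tensor([3.0, 4.0, 5.0])  # the corresponding non-zero values
S = torch.sparse_coo_tensor(i, v, (2, 3))

D = torch.randn(3, 4)
print(torch.mm(S, D))              # mm(Sparse, Dense) -> Dense
print(S.t().to_dense())            # t(): sparse transpose
print((S * 2.0).to_dense())        # mul(Sparse, Scalar) -> Sparse
print((S + S).to_dense())          # add(Sparse, Sparse) -> Sparse
```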

https://pytorch.org/docs/stable/sparse.html#torch.sparse.FloatTensor.spmm
https://github.com/pytorch/pytorch/issues/8853

Matrix multiplication Sparse x Sparse -> Sparse is not yet implemented:
https://discuss.pytorch.org/t/sparse-multiplication-sparse-x-sparse-sparse/14542
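
Until Sparse x Sparse -> Sparse lands, one workaround is to densify one operand and re-sparsify the product. A minimal sketch; `spspmm_workaround` is a hypothetical name, and the intermediate dense tensors can be memory-hungry:

```python
import torch

def spspmm_workaround(s1, s2):
    # Densify the right operand, use the supported mm(Sparse, Dense) -> Dense,
    # then convert the product back to sparse. Memory cost is that of the
    # dense s2 plus the dense product.
    return torch.mm(s1, s2.to_dense()).to_sparse()
```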
