PyTorch Learning Handbook (Part 2)

 

9. Reduction Ops (reduction operations)

torch.argmax(input, dim=None, keepdim=False)

torch.argmin(input, dim=None, keepdim=False)

torch.cumprod(input, dim, dtype=None) → Tensor

torch.cumsum(input, dim, out=None, dtype=None) → Tensor

torch.dist(input, other, p=2) → Tensor

torch.logsumexp(input, dim, keepdim=False, out=None)

torch.mean(input, dim, keepdim=False, out=None) → Tensor

torch.median()

torch.median(input) → Tensor

torch.mode(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)

torch.norm(input, p='fro', dim=None, keepdim=False, out=None)

torch.prod(input, dim, keepdim=False, dtype=None) → Tensor

torch.std()

torch.std(input, unbiased=True) → Tensor

torch.std(input, dim, keepdim=False, unbiased=True, out=None) → Tensor

torch.sum()

torch.sum(input, dtype=None) → Tensor

torch.sum(input, dim, keepdim=False, dtype=None) → Tensor

torch.unique(input, sorted=False, return_inverse=False, dim=None)

torch.var()

torch.var(input, unbiased=True) → Tensor

torch.var(input, dim, keepdim=False, unbiased=True, out=None) → Tensor
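
A minimal sketch of a few of the reduction functions above (tensor values are arbitrary and chosen only for illustration):

import torch

x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

print(torch.sum(x))                        # reduce over the whole tensor: tensor(21.)
print(torch.sum(x, dim=0))                 # column sums: tensor([5., 7., 9.])
print(torch.mean(x, dim=1))                # row means: tensor([2., 5.])
print(torch.argmax(x, dim=1))              # index of the max per row: tensor([2, 2])
print(torch.cumsum(x, dim=1))              # running sums along each row
print(torch.std(x, dim=0, unbiased=True))  # sample standard deviation per column

Passing dim reduces along that dimension and keeps the rest; keepdim=True would preserve the reduced dimension with size 1.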

10. Comparison Ops (comparison operations)

torch.allclose(self, other, rtol=1e-05, atol=1e-08, equal_nan=False) → bool

torch.argsort(input, dim=None, descending=False)

torch.eq(input, other, out=None) → Tensor

torch.equal(tensor1, tensor2) → bool

torch.ge(input, other, out=None) → Tensor

torch.gt(input, other, out=None) → Tensor

torch.isfinite(tensor)

torch.isinf(tensor)

torch.isnan(tensor)

torch.kthvalue(input, k, dim=None, keepdim=False, out=None) -> (Tensor, LongTensor)

torch.le(input, other, out=None) → Tensor

torch.lt(input, other, out=None) → Tensor

torch.max()

torch.max(input) → Tensor

torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

torch.max(input, other, out=None) → Tensor

torch.min()

torch.min(input) → Tensor

torch.min(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

torch.min(input, other, out=None) → Tensor

torch.ne(input, other, out=None) → Tensor

torch.sort(input, dim=None, descending=False, out=None) -> (Tensor, LongTensor)

torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)
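
A minimal sketch of a few of the comparison routines above (example tensors are arbitrary):

import torch

a = torch.tensor([1., 5., 3.])
b = torch.tensor([1., 2., 4.])

print(torch.eq(a, b))                    # elementwise equality mask
print(torch.max(a, b))                   # elementwise maximum: tensor([1., 5., 4.])
values, indices = torch.max(a.view(1, 3), dim=1)
print(values, indices)                   # per-row max value and its index
print(torch.topk(a, k=2))                # the two largest values and their indices
print(torch.sort(a, descending=True))    # sorted values plus the sorting indices
print(torch.allclose(a, a + 1e-9))       # True within the default tolerances

Note that torch.max and torch.min each have three overloads: a full reduction, a reduction along a dimension (returning values and indices), and an elementwise comparison against a second tensor.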

11. Spectral Ops (spectral operations for signal processing)

torch.fft(input, signal_ndim, normalized=False) → Tensor

torch.ifft(input, signal_ndim, normalized=False) → Tensor

torch.rfft(input, signal_ndim, normalized=False, onesided=True) → Tensor

torch.irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor

torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)

torch.bartlett_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.blackman_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.hann_window(window_length, periodic=True, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
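
A small sketch combining a window function with torch.stft. The signatures above follow the older API; on recent PyTorch releases torch.fft/ifft/rfft/irfft have moved into the torch.fft module, and torch.stft expects an explicit return_complex argument, so this example assumes a recent version:

import torch

signal = torch.randn(1, 16000)      # a fake 1-second mono signal at 16 kHz (arbitrary)
window = torch.hann_window(400)     # 25 ms Hann analysis window

# Short-time Fourier transform; return_complex=True is required on recent releases
spec = torch.stft(signal, n_fft=400, hop_length=160, win_length=400,
                  window=window, center=True, return_complex=True)
print(spec.shape)                   # (batch, n_fft // 2 + 1, num_frames)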

12. Other Operations

torch.bincount(self, weights=None, minlength=0) → Tensor

torch.broadcast_tensors(*tensors) → List of Tensors

torch.cross(input, other, dim=-1, out=None) → Tensor

torch.diag(input, diagonal=0, out=None) → Tensor

torch.diagonal() always returns the diagonal of its input.

torch.diagflat() always constructs a tensor with diagonal elements specified by the input.

torch.diag_embed(input, offset=0, dim1=-2, dim2=-1) → Tensor

torch.diagflat(input, diagonal=0) → Tensor

torch.diagonal(input, offset=0, dim1=0, dim2=1) → Tensor

torch.einsum(equation, *operands) → Tensor

torch.flatten(input, start_dim=0, end_dim=-1) → Tensor

torch.flip(input, dims) → Tensor

torch.histc(input, bins=100, min=0, max=0, out=None) → Tensor

torch.meshgrid(*tensors, **kwargs)

torch.renorm(input, p, dim, maxnorm, out=None) → Tensor

torch.roll(input, shifts, dims=None) → Tensor

torch.tensordot(a, b, dims=2)

torch.trace(input) → Tensor

torch.tril(input, diagonal=0, out=None) → Tensor

torch.triu(input, diagonal=0, out=None) → Tensor
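
A minimal sketch of a few of the utilities above (shapes and values chosen arbitrarily):

import torch

m = torch.arange(1., 10.).reshape(3, 3)

print(torch.diag(m))                            # main diagonal as a 1-D tensor: tensor([1., 5., 9.])
print(torch.diagflat(torch.tensor([1., 2.])))   # 2x2 matrix with [1, 2] on its diagonal
print(torch.tril(m))                            # lower-triangular part, zeros above the diagonal
print(torch.flatten(m))                         # all nine elements as a 1-D tensor
print(torch.flip(m, dims=[1]))                  # reverse each row
print(torch.einsum('ij,jk->ik', m, m))          # matrix product expressed as an einsum
print(torch.trace(m))                           # 1 + 5 + 9 = 15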

13. BLAS and LAPACK Operations (linear algebra routines)

BLAS stands for Basic Linear Algebra Subprograms.

LAPACK stands for Linear Algebra PACKage.

 

torch.addbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) → Tensor

torch.addmm(beta=1, mat, alpha=1, mat1, mat2, out=None) → Tensor

torch.addmv(beta=1, tensor, alpha=1, mat, vec, out=None) → Tensor

torch.addr(beta=1, mat, alpha=1, vec1, vec2, out=None) → Tensor

torch.baddbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) → Tensor

torch.bmm(batch1, batch2, out=None) → Tensor

torch.btrifact(A, info=None, pivot=True)

torch.btrifact_with_info(A, pivot=True) -> (Tensor, IntTensor, IntTensor)

torch.btrisolve(b, LU_data, LU_pivots) → Tensor

torch.btriunpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True)

torch.chain_matmul(*matrices)

torch.cholesky(A, upper=False, out=None) → Tensor

torch.dot(tensor1, tensor2) → Tensor

torch.eig(a, eigenvectors=False, out=None) -> (Tensor, Tensor)

torch.gels(B, A, out=None) → Tensor

torch.geqrf(input, out=None) -> (Tensor, Tensor)

torch.ger(vec1, vec2, out=None) → Tensor

torch.gesv(B, A) -> (Tensor, Tensor)

torch.inverse(input, out=None) → Tensor

torch.det(A) → Tensor

torch.logdet(A) → Tensor

torch.slogdet(A) -> (Tensor, Tensor)

torch.matmul(tensor1, tensor2, out=None) → Tensor

torch.matrix_power(input, n) → Tensor

torch.matrix_rank(input, tol=None, symmetric=False) → Tensor

torch.mm(mat1, mat2, out=None) → Tensor

torch.mv(mat, vec, out=None) → Tensor

torch.orgqr(a, tau) → Tensor

torch.pinverse(input, rcond=1e-15) → Tensor

torch.potrf(a, upper=True, out=None)

torch.potrs(b, u, upper=True, out=None) → Tensor

torch.pstrf(a, upper=True, out=None) -> (Tensor, Tensor)

torch.qr(input, out=None) -> (Tensor, Tensor)

torch.svd(input, some=True, compute_uv=True, out=None) -> (Tensor, Tensor, Tensor)

torch.symeig(input, eigenvectors=False, upper=True, out=None) -> (Tensor, Tensor)
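
A minimal sketch of a few of the BLAS-backed products above (random matrices, so exact values differ per run; several of the factorization routines listed here, such as torch.gesv and torch.potrf, have since been superseded by the torch.linalg namespace on newer releases):

import torch

A = torch.randn(3, 3)
b = torch.randn(3)

print(torch.matmul(A, A))      # matrix-matrix product (torch.mm(A, A) is equivalent here)
print(torch.mv(A, b))          # matrix-vector product
print(torch.dot(b, b))         # inner product of two 1-D tensors
print(torch.det(A))            # determinant
print(torch.inverse(A) @ A)    # approximately the 3x3 identity matrix

# Batched matrix multiply: both operands carry a leading batch dimension
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)
print(torch.bmm(batch1, batch2).shape)   # torch.Size([10, 3, 5])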

 
