PyTorch Official Tutorials (Chinese Edition)
http://pytorch123.com/
pytorch-handbook is an open-source book that aims to help readers who want to use PyTorch for deep learning development and research
https://github.com/zergtant/pytorch-handbook
A model trained on GPU cannot be loaded on a local CPU
https://www.jianshu.com/p/0ae1b7522261
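A minimal sketch of the usual fix: remap the stored CUDA tensors when loading on a CPU-only machine (the checkpoint filename here is a placeholder).

```python
import torch

# "model.pth" is hypothetical: a checkpoint saved on GPU stores CUDA tensors,
# so a CPU-only machine must remap them at load time via map_location.
state_dict = torch.load("model.pth", map_location=torch.device("cpu"))
```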
torchvision.models — PyTorch master documentation
https://pytorch.org/docs/stable/torchvision/models.html#classification
np.linalg.norm (computing a norm)
https://blog.csdn.net/hqh131360239/article/details/79061535
What the inplace=True flag means in PyTorch network layers
https://www.jianshu.com/p/8385aa74e2de
https://zhuanlan.zhihu.com/p/145810572
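A quick sketch of what inplace=True does: the activation overwrites its input's storage instead of allocating a new tensor, trading memory for the risk of clobbering values autograd still needs.

```python
import torch
import torch.nn as nn

x = torch.randn(4)
relu = nn.ReLU(inplace=True)  # overwrites x's storage instead of allocating output
y = relu(x)
print(y is x)  # True: the same tensor object is returned
```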
https://github.com/pytorch/vision
https://github.com/pytorch/audio
https://github.com/pytorch/text
Meditations on Deep Learning Debugging!
https://mp.weixin.qq.com/s/Dk5reVIfKgt5U_oBpV8jbw
[pytorch] How to freeze selected layer parameters while also using optimizer parameter groups?
https://blog.csdn.net/lingzhou33/article/details/88977700
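A sketch of the pattern the post above discusses, assuming a torchvision resnet18: freeze everything except the classifier head, then hand only the still-trainable parameters to an optimizer parameter group.

```python
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)

# Freeze every layer except the final classifier "fc".
for name, p in model.named_parameters():
    if not name.startswith("fc"):
        p.requires_grad = False

# With parameter groups, pass only the parameters that still require gradients.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD([{"params": trainable, "lr": 1e-3}], momentum=0.9)
```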
torch.backends.cudnn.benchmark?! Automatic selection of the fastest cuDNN operators
https://zhuanlan.zhihu.com/p/73711222
PyTorch: cudnn.benchmark and cudnn.deterministic
https://blog.csdn.net/zxyhhjs2017/article/details/91348108
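The two flags in a nutshell (a sketch; pick one mode or the other):

```python
import torch

# Fixed input shapes: let cuDNN benchmark candidate kernels and cache the fastest.
torch.backends.cudnn.benchmark = True

# For reproducible runs instead, disable benchmarking and force deterministic kernels:
# torch.backends.cudnn.benchmark = False
# torch.backends.cudnn.deterministic = True
```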
[Deep Learning Tricks] Hyperparameter search: finding the best learning rate
https://blog.csdn.net/shankezh/article/details/88025669
Hyperband: a hyperparameter optimization algorithm for machine learning
https://www.imooc.com/article/269113
Deep learning primer notes: training tricks and model tuning
https://blog.csdn.net/ForgetThatNight/article/details/91856052
Getting the per-layer parameter count and computation cost (FLOPs) of a network model in PyTorch
https://blog.csdn.net/comway_Li/article/details/105079731
Flops counter for convolutional networks in pytorch framework (the ptflops tool)
https://github.com/sovrasov/flops-counter.pytorch
Count the MACs / FLOPs of your PyTorch model (the thop tool)
https://github.com/Lyken17/pytorch-OpCounter
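A minimal thop sketch, assuming the profile API shown in the repo's README (it runs a dummy forward pass and returns MACs and parameter counts):

```python
import torch
from thop import profile  # pip install thop
from torchvision import models

model = models.resnet18()
dummy = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e9:.2f}G, params: {params / 1e6:.2f}M")
```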
torchstat: tool usage, showing per-layer parameters
https://blog.csdn.net/u013685264/article/details/108274289
Getting intermediate-layer outputs in PyTorch for model tuning
[PyTorch] Part 16: the hook technique - 叠加态的猫 - cnblogs
https://www.cnblogs.com/hellcat/p/8512090.html
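A sketch of the forward-hook technique for grabbing intermediate results, using a torchvision resnet18 as the example network:

```python
import torch
from torchvision import models

model = models.resnet18()
features = {}

def save_output(module, inputs, output):
    # Detach so the stored activation does not keep the autograd graph alive.
    features["layer4"] = output.detach()

handle = model.layer4.register_forward_hook(save_output)
model(torch.randn(1, 3, 224, 224))
print(features["layer4"].shape)  # torch.Size([1, 512, 7, 7])
handle.remove()  # remove the hook once it is no longer needed
```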
PyTorch Miscellany | (1) pack_padded_sequence and pad_packed_sequence
https://blog.csdn.net/sdu_hao/article/details/105408552
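A round-trip sketch: pack a padded batch (lengths sorted in descending order) so the RNN skips the padding, then pad the output back:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(2, 3, 5)   # batch of 2 sequences, padded to length 3, 5 features
lengths = torch.tensor([3, 2])  # true lengths, sorted descending

rnn = nn.LSTM(input_size=5, hidden_size=8, batch_first=True)
packed = pack_padded_sequence(padded, lengths, batch_first=True)
out_packed, _ = rnn(packed)
out, out_lengths = pad_packed_sequence(out_packed, batch_first=True)
print(out.shape, out_lengths)   # torch.Size([2, 3, 8]) tensor([3, 2])
```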
The difference between named_children() and named_modules() in PyTorch
https://blog.csdn.net/watermelon1123/article/details/98036360
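The difference in one sketch: named_children() yields only direct sub-modules, while named_modules() recurses through the whole tree, including the root:

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3),
    nn.Sequential(nn.ReLU(), nn.BatchNorm2d(16)),
)

print([n for n, _ in model.named_children()])  # ['0', '1']
print([n for n, _ in model.named_modules()])   # ['', '0', '1', '1.0', '1.1']
```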
ReflectionPad2d and InstanceNorm2d explained and implemented (image generation)
https://zhuanlan.zhihu.com/p/66989411
pytorch -- topk()
https://blog.csdn.net/u014264373/article/details/86525621
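topk in one sketch: it returns the k largest values and their indices along a dimension:

```python
import torch

logits = torch.tensor([[0.1, 2.5, 0.7, 1.9]])
values, indices = logits.topk(2, dim=1)  # top-2 entries per row
print(values)   # tensor([[2.5000, 1.9000]])
print(indices)  # tensor([[1, 3]])
```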
PyTorch matrix multiplication: mm and bmm
https://blog.csdn.net/bufanwangzi/article/details/101541098
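The shape contracts in one sketch: mm is strictly 2-D, bmm multiplies matching batches of matrices:

```python
import torch

a, b = torch.randn(2, 3), torch.randn(3, 4)
print(torch.mm(a, b).shape)     # torch.Size([2, 4]); 2-D only, no broadcasting

ba, bb = torch.randn(10, 2, 3), torch.randn(10, 3, 4)
print(torch.bmm(ba, bb).shape)  # torch.Size([10, 2, 4]); batched 3-D inputs
```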
A short usage guide for the Python version of torchnet: AverageValueMeter, ConfusionMeter
https://blog.csdn.net/u010510549/article/details/90263627
torch.mean parameters explained
https://blog.csdn.net/u013049912/article/details/105628097
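torch.mean's dim and keepdim arguments in one sketch:

```python
import torch

x = torch.arange(6.0).reshape(2, 3)  # [[0., 1., 2.], [3., 4., 5.]]
print(x.mean())                      # tensor(2.5000): mean over all elements
print(x.mean(dim=0))                 # tensor([1.5000, 2.5000, 3.5000]): collapse rows
print(x.mean(dim=1, keepdim=True))   # shape (2, 1): reduced dim kept as size 1
```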
Python: parsing bool values with argparse
https://www.jianshu.com/p/375b72d8acc7
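The pitfall the post covers, with the usual workaround: type=bool converts any non-empty string (even "False") to True, so a small parser function is needed:

```python
import argparse

def str2bool(v):
    # type=bool is a trap: bool("False") is True, since the string is non-empty.
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError("boolean value expected")

parser = argparse.ArgumentParser()
parser.add_argument("--use-gpu", type=str2bool, default=True)
print(parser.parse_args(["--use-gpu", "false"]).use_gpu)  # False
```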
Advanced Python: static methods (@staticmethod)
https://www.cnblogs.com/Meanwey/p/9788713.html
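@staticmethod in one sketch: no implicit self or cls, callable from the class or an instance:

```python
class MathUtils:
    @staticmethod
    def add(a, b):
        # No self/cls argument: a plain function namespaced under the class.
        return a + b

print(MathUtils.add(1, 2))    # 3
print(MathUtils().add(1, 2))  # 3, also works on an instance
```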
How to load part of a pretrained model in PyTorch
https://blog.csdn.net/amds123/article/details/63684716
[PyTorch] state_dict explained in detail
https://blog.csdn.net/bigFatCat_Tom/article/details/90722261
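A sketch combining the two posts above: filter a pretrained state_dict down to the entries whose names and shapes match the target model, then load the merged result (here a resnet18 with a 10-class head, so only the fc.* tensors are skipped):

```python
import torch
from torchvision import models

model = models.resnet18(num_classes=10)  # head differs from ImageNet's 1000 classes
pretrained = models.resnet18(pretrained=True).state_dict()
model_dict = model.state_dict()

# Keep only tensors whose names and shapes match the target model.
filtered = {k: v for k, v in pretrained.items()
            if k in model_dict and v.shape == model_dict[k].shape}
model_dict.update(filtered)
model.load_state_dict(model_dict)
print(f"loaded {len(filtered)}/{len(model_dict)} tensors")
```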
Understanding deep network initialization in one pass (Xavier and Kaiming initialization)
https://www.jianshu.com/p/f2d800388d1c
How to use torch.utils.data.DataLoader
https://www.cnblogs.com/demo-deng/p/10623334.html
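DataLoader in one sketch, using an in-memory TensorDataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)

for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([16, 3]) torch.Size([16])
    break                    # the final batch holds the 4 leftover samples
```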
PyTorch: the twenty-two transforms methods (very good)
https://blog.csdn.net/weixin_38533896/article/details/86028509
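A typical training pipeline chaining several of those transforms (the mean/std values are the usual ImageNet statistics):

```python
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),  # PIL image -> float CHW tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```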
Using torch.cat and torch.chunk
https://zhuanlan.zhihu.com/p/59141209
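cat and chunk as inverses of each other, in one sketch:

```python
import torch

a, b = torch.ones(2, 3), torch.zeros(2, 3)
c = torch.cat([a, b], dim=0)       # stack along rows -> shape (4, 3)
halves = torch.chunk(c, 2, dim=0)  # split back into two (2, 3) tensors
print(c.shape, halves[0].shape)
```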
PyTorch nn.init parameter initialization methods
https://blog.csdn.net/weixin_42018112/article/details/90725819
PyTorch series 9: initialization functions in nn.init (uniform, normal, const, Xavier, He initialization)
https://blog.csdn.net/dss_dssssd/article/details/83959474
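A common init recipe sketched from the two posts above: Kaiming (He) init for conv layers feeding ReLU, Xavier for linear layers, applied recursively with Module.apply:

```python
import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Conv2d):
        # He init suits ReLU-activated conv layers.
        nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
        if m.bias is not None:
            nn.init.constant_(m.bias, 0)
    elif isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.constant_(m.bias, 0)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
model.apply(init_weights)  # walks every sub-module recursively
```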
An in-depth look at nn.Module
https://www.jianshu.com/p/fa59e40698b5
PyTorch study notes 4: common network layers under torch.nn explained
https://blog.csdn.net/qq_39507748/article/details/105371172
nn.Conv2d parameters and input/output shapes explained
https://www.cnblogs.com/siyuan1998/p/10809646.html
PyTorch: using conv2d parameters
https://blog.csdn.net/lzc842650834/article/details/90265621
How to use torch.nn.Conv2d()
https://blog.csdn.net/qq_38863413/article/details/104108808
What the dilation parameter does in PyTorch functions
https://blog.csdn.net/qq_36167072/article/details/104720202
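The Conv2d shape arithmetic in one sketch; note how dilation=2 stretches a 3x3 kernel over a 5x5 receptive field, so the padding must grow to keep the spatial size:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
same = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, dilation=1)
print(same(x).shape)     # torch.Size([1, 16, 32, 32])

dilated = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=2, dilation=2)
print(dilated(x).shape)  # torch.Size([1, 16, 32, 32]): padding=2 compensates
```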
Deep learning notes (3): the BatchNorm (BN) layer
https://blog.csdn.net/wjinjie/article/details/105028870
A summary of BatchNormalization, LayerNormalization, InstanceNorm, GroupNorm, and SwitchableNorm: different domains call for different normalization layers
https://blog.csdn.net/liuxiao214/article/details/81037416
Dropout and BN (Batch Normalization) explained in detail
https://blog.csdn.net/qq_40176087/article/details/105904379
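The four main normalization layers side by side, sketched on an (N, C, H, W) batch; they differ only in which axes the statistics are computed over:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)              # (N, C, H, W)

print(nn.BatchNorm2d(16)(x).shape)          # stats over (N, H, W), per channel
print(nn.LayerNorm([16, 32, 32])(x).shape)  # stats over (C, H, W), per sample
print(nn.InstanceNorm2d(16)(x).shape)       # stats over (H, W), per sample & channel
print(nn.GroupNorm(4, 16)(x).shape)         # stats per group of 16/4 = 4 channels
```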
Activation functions: ReLU, Leaky ReLU, PReLU, and RReLU
https://blog.csdn.net/qq_23304241/article/details/80300149
Deep learning: activation functions in detail (Sigmoid, tanh, ReLU, ReLU6 and the P-R-Leaky variants, ELU, SELU, Swish, Mish, Maxout, hard-sigmoid, hard-swish)
https://blog.csdn.net/jsk_learner/article/details/102822001
PyTorch loss functions: nn.BCELoss() (why cross-entropy works as a loss function)
https://blog.csdn.net/geter_CS/article/details/84747670
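nn.BCELoss in one sketch; it expects probabilities, which is why BCEWithLogitsLoss (sigmoid fused into the loss, numerically safer) is usually preferred in practice:

```python
import torch
import torch.nn as nn

target = torch.tensor([1.0, 0.0, 1.0, 0.0])

probs = torch.sigmoid(torch.randn(4))  # BCELoss needs inputs in [0, 1]
print(nn.BCELoss()(probs, target))

logits = torch.randn(4)                # BCEWithLogitsLoss takes raw logits
print(nn.BCEWithLogitsLoss()(logits, target))
```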
Regression loss functions 1: comparing L1 loss, L2 loss, and Smooth L1 loss - Brook_icv - cnblogs
https://www.cnblogs.com/wangguchangqing/p/12021638.html
The Focal Loss paper explained
https://zhuanlan.zhihu.com/p/49981234
A quick introduction to the Adam optimizer
https://www.jianshu.com/p/aebcaf8af76e
Understanding torch.optim optimization algorithms: optim.Adam()
https://www.jianshu.com/p/f2d800388d1c
Deep learning: optimizer algorithms explained (BGD, SGD, MBGD, Momentum, NAG, Adagrad, Adadelta, RMSprop, Adam) - 郭耀华 - cnblogs
https://www.cnblogs.com/guoyaohua/p/8542554.html
PyTorch study notes (7): ten optimizers in PyTorch
https://blog.csdn.net/u011995719/article/details/88988420
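The standard Adam training step, sketched on a toy regression:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 1)
optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8)

x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()  # clear gradients left over from the previous step
loss.backward()        # backpropagate to fill .grad on every parameter
optimizer.step()       # apply the Adam update
```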
PyTorch layers: AdaptiveAvgPool2d
https://blog.csdn.net/u010472607/article/details/89555206
torch: AvgPool2d
https://blog.csdn.net/yangwangnndd/article/details/95510597
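The difference in one sketch: AvgPool2d fixes the kernel size, AdaptiveAvgPool2d fixes the output size and derives the kernel from the input:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 512, 7, 7)
print(nn.AvgPool2d(kernel_size=7)(x).shape)   # torch.Size([1, 512, 1, 1])
print(nn.AdaptiveAvgPool2d((1, 1))(x).shape)  # same result for any input size
```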
PyTorch: Dropout
https://www.jianshu.com/p/636be9f8f046
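Dropout's two modes in one sketch; note the inverted-dropout scaling in training and the identity behavior in eval:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))  # roughly half zeroed, survivors scaled by 1/(1-p) = 2.0

drop.eval()
print(drop(x))  # identity at inference time
```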
Huawei breaks through the blockade: targeting Google's Dropout patent, it open-sources its self-developed Disout algorithm
https://blog.csdn.net/QbitAI/article/details/106232701
Understanding the dim parameter of torch.nn.functional.softmax(x, dim=-1) in PyTorch
https://blog.csdn.net/Will_Ye/article/details/104994504
What softmax in torch.nn.functional does, with parameter notes - 慢行厚积 - cnblogs
https://www.cnblogs.com/wanghui-garcia/p/10675588.html
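dim in one sketch: it names the axis along which the probabilities are normalized to sum to 1:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3)
print(F.softmax(x, dim=-1).sum(dim=-1))  # tensor([1., 1.]): each row sums to 1
print(F.softmax(x, dim=0).sum(dim=0))    # tensor([1., 1., 1.]): each column sums to 1
```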
CIFAR10 recognition with PyTorch: training ResNet-34 (80% accuracy)
https://www.cnblogs.com/zhengbiqing/p/10432169.html
A roundup of PyTorch network structure visualization methods (three implementations explained)
https://blog.csdn.net/qq_27825451/article/details/96856217
Visualizing PyTorch with tensorboardX. Super detailed!!!
https://www.jianshu.com/p/46eb3004beca
A detailed guide to visualizing PyTorch training with TensorboardX
https://blog.csdn.net/bigbennyguo/article/details/87956434
[Python] A brief look at duck typing
https://blog.csdn.net/qq_39478403/article/details/107371850
Duck typing, protocols, and interfaces in Python
https://blog.csdn.net/u012193416/article/details/89398627
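Duck typing in one sketch: the function never checks types, only that the object responds to the expected method:

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):
        return "I can quack too"

def make_it_quack(thing):
    # No isinstance check: anything with a .quack() method is accepted.
    return thing.quack()

print(make_it_quack(Duck()), "/", make_it_quack(Person()))
```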
[Scratch notes] PyTorch error messages and how they were resolved
https://blog.csdn.net/LoseInVain/article/details/86140412