class torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the incoming data: y = xA^T + b
in_features - size of each input sample
out_features - size of each output sample
bias - if set to False, the layer will not learn an additive bias. Default: True
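A quick sketch of what the bias flag does (the shapes shown are the documented layout; this is an illustrative check, not part of the original example): with bias=False the layer registers no bias parameter at all.

```python
import torch
import torch.nn as nn

# With bias=False the layer has no bias parameter; its only
# learnable parameter is the weight matrix, stored with shape
# (out_features, in_features)
m = nn.Linear(20, 30, bias=False)
print(m.bias)           # None
print(m.weight.shape)   # torch.Size([30, 20])
```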
import torch
import torch.nn as nn

# input sample size 20, output sample size 30
m = nn.Linear(20, 30)
# generate a random 128x20 matrix
input = torch.randn(128, 20)
# apply the linear transformation
output = m(input)
print(output.size())
torch.Size([128, 30])
output.size(): a (128, 20) matrix multiplied by a (20, 30) matrix gives a (128, 30) matrix.
To explore what the linear transformation does internally, let's simplify the data.
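The shape rule above also explains when the layer rejects an input: the last dimension of the input must equal in_features, otherwise the matrix product is undefined. A small illustrative check (not part of the original example):

```python
import torch
import torch.nn as nn

# The input's last dimension must match in_features (20 here);
# otherwise the underlying matrix multiply fails with a RuntimeError
m = nn.Linear(20, 30)
try:
    m(torch.randn(128, 21))  # inner dimensions 21 and 20 do not match
except RuntimeError as err:
    print("shape mismatch:", err)
```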
import torch
import torch.nn as nn

# the input is a random 5x3 matrix
input = torch.randn(5, 3)
# input sample size 3, output sample size 4
m = nn.Linear(3, 4)
output = m(input)
print(input)
print(m)
print(output)
tensor([[-1.4788, 0.9317, 0.2875],
[ 0.0588, -0.0252, -0.6945],
[ 1.3408, 0.0937, -1.3727],
[ 1.3503, 0.5256, 0.2303],
[-0.2420, -0.2153, -0.1697]])
Linear(in_features=3, out_features=4, bias=True)
tensor([[-0.7177, -0.5477, -0.6553, 0.3569],
[ 0.4992, 0.3262, 0.6837, -0.7086],
[ 1.2571, 1.2009, 1.3016, -1.1527],
[ 0.9723, 0.7095, 0.2631, -0.2271],
[ 0.3237, -0.0485, 0.4222, -0.5008]], grad_fn=<AddmmBackward>)
The input matrix is 5x3 and the layer maps 3 features to 4,
so the matrix after the linear transformation is 5x4.
The intermediate computation is the affine map y = xW^T + b: the layer holds a weight matrix W of shape (out_features, in_features) (4x3 here) and a bias vector b of length out_features (4 here).
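PyTorch's nn.Linear stores its weight with shape (out_features, in_features), so the forward pass is equivalent to input @ weight.T + bias. A minimal check of this equivalence:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
input = torch.randn(5, 3)
m = nn.Linear(3, 4)

# weight has shape (out_features, in_features) = (4, 3) and bias
# has length 4; the forward pass computes input @ weight.T + bias
manual = input @ m.weight.T + m.bias
print(torch.allclose(m(input), manual))
```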