First, define a model:
import torch as t
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 2, 3)
        self.conv2 = nn.Conv2d(2, 2, 3)
        self.conv3 = nn.Conv2d(2, 2, 3)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        return x
Then print the model's parameters:
a = A()
print(a.parameters())  # <generator object Module.parameters at 0x...>
The output above shows that parameters() returns a generator (an iterator).
Next, iterate over it and print the contents:
print(list(a.parameters()))  # convert the generator into a list
Parameter containing:
tensor([[[[-0.0299, 0.0891, 0.0303],
[ 0.0869, -0.0230, -0.1760],
[ 0.1408, 0.0348, 0.1795]],
[[ 0.2001, 0.0023, -0.1775],
[ 0.0947, -0.0231, -0.1756],
[ 0.1201, -0.0997, -0.0303]]],
[[[-0.0425, 0.0748, -0.1754],
[-0.1191, -0.1203, -0.1219],
[-0.0794, 0.0895, -0.1719]],
[[ 0.1968, -0.0463, 0.0550],
[-0.0386, 0.1594, 0.1282],
[-0.0009, 0.2167, -0.1783]]]], requires_grad=True)
Parameter containing:
tensor([ 0.0147, -0.0406], requires_grad=True)
Parameter containing:
tensor([[[[-0.0578, -0.1114, -0.1194],
[-0.1469, -0.1175, -0.1616],
[-0.2289, -0.0975, -0.1700]],
[[-0.0894, 0.0074, 0.1222],
[-0.0176, -0.0509, 0.1622],
[-0.0405, -0.1349, 0.1782]]],
[[[-0.0739, 0.2167, 0.1864],
[ 0.0956, -0.1761, 0.0464],
[ 0.0062, -0.0685, 0.0748]],
[[ 0.1085, 0.1481, 0.1334],
[ 0.2236, -0.0706, -0.0224],
[ 0.0079, -0.1835, -0.0407]]]], requires_grad=True)
Parameter containing:
tensor([-8.0720e-05, 1.6026e-01], requires_grad=True)
Parameter containing:
tensor([[[[-0.0702, 0.1846, 0.0419],
[-0.1891, -0.0893, -0.0024],
[-0.0349, -0.0213, 0.0936]],
[[-0.1062, 0.1242, 0.0391],
[-0.1924, 0.0535, -0.1480],
[ 0.0400, -0.0487, -0.2317]]],
[[[ 0.1202, 0.0961, 0.2336],
[ 0.2225, -0.2294, -0.2283],
[-0.0963, -0.0311, -0.2354]],
[[ 0.0676, -0.0439, -0.0962],
[-0.2316, -0.0639, -0.0671],
[ 0.1737, -0.1169, -0.1751]]]], requires_grad=True)
Parameter containing:
tensor([-0.1939, -0.0959], requires_grad=True)
From the output above, the list contains 6 elements. Because nn.Conv2d() holds two parameter tensors, self.weight and self.bias, each 2D convolution layer contributes two entries. Note that self.bias is added per output channel, so the length of self.bias equals out_channels.
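The breakdown above can be verified with named_parameters(), a companion method that pairs each parameter tensor with its name. A minimal sketch, reusing the class A defined above:

```python
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 2, 3)
        self.conv2 = nn.Conv2d(2, 2, 3)
        self.conv3 = nn.Conv2d(2, 2, 3)

    def forward(self, x):
        return self.conv3(self.conv2(self.conv1(x)))

a = A()
# Each Conv2d contributes a weight of shape (out_channels, in_channels, kH, kW)
# and a bias of shape (out_channels,), giving 6 parameter tensors in total.
for name, p in a.named_parameters():
    print(name, tuple(p.shape))
# conv1.weight (2, 2, 3, 3)
# conv1.bias (2,)
# conv2.weight (2, 2, 3, 3)
# conv2.bias (2,)
# conv3.weight (2, 2, 3, 3)
# conv3.bias (2,)
```

Printing names alongside shapes makes it easy to see which of the 6 list elements belongs to which layer.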
In summary, parameters() returns a generator (iterator), and each element it yields is an nn.Parameter, a subclass of Tensor.
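Because parameters() yields every learnable tensor of the model, it is what you typically hand to an optimizer. A minimal sketch, reusing the class A above with torch.optim.SGD (the input size 12x12 is an arbitrary choice for illustration):

```python
import torch as t
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 2, 3)
        self.conv2 = nn.Conv2d(2, 2, 3)
        self.conv3 = nn.Conv2d(2, 2, 3)

    def forward(self, x):
        return self.conv3(self.conv2(self.conv1(x)))

a = A()
opt = t.optim.SGD(a.parameters(), lr=0.01)  # the generator is consumed here

x = t.randn(1, 2, 12, 12)  # three unpadded 3x3 convs shrink 12 -> 6 spatially
loss = a(x).sum()
loss.backward()  # every tensor yielded by parameters() receives a .grad
opt.step()       # the optimizer updates exactly those 6 parameter tensors
```

This is why requires_grad=True appears in every printed Parameter above: autograd tracks all of them by default so the optimizer can update them.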