Deep Learning Notes (14): GAN-3

Lab Requirements and Basic Procedure

Lab Requirements

  1. Complete the previous lab and understand the principles and training procedure of GAN (Generative Adversarial Networks).
  2. Building on the lecture material, learn the basic structure and main purpose of models such as CGAN and pix2pix.
  3. Read the experiment content in this lab manual, run and complete the experiment code as prompted, or briefly answer the questions. Keep the experiment results when submitting your work.

Lab Procedure

  • CGAN
  • pix2pix

CGAN (Conditional GAN)

As the previous lab showed, a GAN can generate near-realistic images, but a plain GAN is too unconstrained to control. CGAN (Conditional GAN) is a GAN with a conditional constraint: a condition variable is introduced into both the generator (G) and the discriminator (D). The condition can be based on many kinds of information, such as class labels, or partial data for image inpainting. In the CGAN that follows, we use class labels as the condition variable for both G and D.

In the CGAN architecture below (similar to the DCGAN model shown in the previous lab), the biggest difference from the earlier model is that the class labels are added to the inputs of both G and D. In G, the labels (represented as one-hot vectors; e.g., with 3 classes (0/1/2), the one-hot vector for class 2 is [0, 0, 1]) are concatenated with the noise z and fed into the first fully connected layer. In D, the labels are concatenated with the input image and fed into the convolutional layers; here each label is represented as a tensor of size (class_num, image_size, image_size) whose channel for the correct class is filled with 1s while all other channels are filled with 0s.
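To make this representation concrete, here is a minimal sketch (an illustration, not part of the lab code) that builds both label encodings for a single example, assuming 3 classes and 32*32 images:

import torch

class_num, image_size = 3, 32
label = 2

# one-hot vector fed to G: [0, 0, 1]
y_vec = torch.eye(class_num)[label]

# filled label tensor fed to D: channel 2 is all 1s, the other channels all 0s
y_fill = y_vec.view(class_num, 1, 1).expand(class_num, image_size, image_size)

print(y_vec)            # tensor([0., 0., 1.])
print(y_fill.shape)     # torch.Size([3, 32, 32])
print(y_fill[2].all())  # tensor(True)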

import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
from utils import initialize_weights
class DCGenerator(nn.Module):
    def __init__(self, image_size=32, latent_dim=64, output_channel=1, class_num=3):
        super(DCGenerator, self).__init__()
        self.image_size = image_size
        self.latent_dim = latent_dim
        self.output_channel = output_channel
        self.class_num = class_num
        
        self.init_size = image_size // 8
        
        # fc: Linear -> BN -> ReLU
        self.fc = nn.Sequential(
            nn.Linear(latent_dim + class_num, 512 * self.init_size ** 2),
            nn.BatchNorm1d(512 * self.init_size ** 2),
            nn.ReLU(inplace=True)
        )
            
        # deconv: ConvTranspose2d(4, 2, 1) -> BN -> ReLU -> 
        #         ConvTranspose2d(4, 2, 1) -> BN -> ReLU -> 
        #         ConvTranspose2d(4, 2, 1) -> Tanh
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, output_channel, 4, stride=2, padding=1),
            nn.Tanh(),
        )
        initialize_weights(self)

    def forward(self, z, labels):
        """
        z : noise vector, shape (batch_size, latent_dim)
        labels : one-hot label vectors, shape (batch_size, class_num)
        """
        input_ = torch.cat((z, labels), dim=1)
        out = self.fc(input_)
        out = out.view(out.shape[0], 512, self.init_size, self.init_size)
        img = self.deconv(out)
        return img


class DCDiscriminator(nn.Module):
    def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
        super(DCDiscriminator, self).__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.class_num = class_num
        
        self.fc_size = image_size // 8
        
        # conv: Conv2d(3,2,1) -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 
        
        
        self.conv = nn.Sequential(
            nn.Conv2d(input_channel + class_num, 128, 3, 2, 1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 3, 2, 1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 3, 2, 1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2),
        )
        
        # fc: Linear -> Sigmoid
        self.fc = nn.Sequential(
            nn.Linear(512 * self.fc_size * self.fc_size, 1),
        )
        if sigmoid:
            self.fc.add_module('sigmoid', nn.Sigmoid())
        initialize_weights(self)

    def forward(self, img, labels):
        """
        img : input image
        labels : (batch_size, class_num, image_size, image_size)
                the channel of the true class is filled with 1s, all other channels with 0s.
        """
        
        input_ = torch.cat((img, labels), dim=1)
        out = self.conv(input_)
        out = out.view(out.shape[0], -1)
        out = self.fc(out)

        return out
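As a quick sanity check of the two models above, here is a minimal sketch (an illustration, assuming the utils helper imported above is available) that verifies the tensor shapes using the default arguments (latent_dim=64, class_num=3):

G_test = DCGenerator(image_size=32, latent_dim=64, output_channel=1, class_num=3)
D_test = DCDiscriminator(image_size=32, input_channel=1, class_num=3)

z = torch.rand(4, 64)                                 # a batch of 4 noise vectors
y_vec = torch.eye(3)[torch.tensor([0, 1, 2, 0])]      # (4, 3) one-hot labels for G
y_fill = y_vec.view(4, 3, 1, 1).expand(4, 3, 32, 32)  # (4, 3, 32, 32) filled labels for D

fake = G_test(z, y_vec)
print(fake.shape)                  # torch.Size([4, 1, 32, 32])
print(D_test(fake, y_fill).shape)  # torch.Size([4, 1])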

Dataset

We use the familiar MNIST handwritten-digit dataset to train our CGAN. As before, we provide a reduced version of the dataset to speed up training. Unlike last time, this dataset contains all 10 digit classes (0 through 9), with 200 images per class, 2000 images in total. The images are again 28*28 single-channel grayscale images (we resize them to 32*32). The code below loads the MNIST dataset.

def load_mnist_data():
    """
    load the reduced MNIST (digits 0-9) dataset
    """
    
    transform = torchvision.transforms.Compose([
        # convert to a 1-channel grayscale image, since images are read in RGB mode
        transforms.Grayscale(1),
        # resize image from 28 * 28 to 32 * 32
        transforms.Resize(32),
        transforms.ToTensor(),
        # normalize with mean=0.5 std=0.5
        transforms.Normalize(mean=(0.5, ), 
                             std=(0.5, ))
        ])
    
    train_dataset = torchvision.datasets.ImageFolder(root='./data/mnist', transform=transform)
    
    return train_dataset
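Note that torchvision.datasets.ImageFolder expects one subdirectory per class, so the code above assumes a layout like ./data/mnist/0/, ./data/mnist/1/, ..., ./data/mnist/9/ (an assumption based on how ImageFolder works; check your own data directory). A minimal sketch to sanity-check the loaded dataset:

train_dataset = load_mnist_data()
print(len(train_dataset))      # expected: 2000 (10 classes * 200 images each)
print(train_dataset.classes)   # expected: ['0', '1', ..., '9']

img, label = train_dataset[0]
print(img.shape)                           # torch.Size([1, 32, 32])
print(img.min().item(), img.max().item())  # within [-1, 1] after Normalize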

Next, let's look at the real handwritten-digit data for each class. (Run the following 2 cells; there is no need to understand the code.)

def denorm(x):
    # map images normalized to [-1, 1] back to [0, 1] for display
    out = (x + 1) / 2
    return out.clamp(0, 1)
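denorm inverts the Normalize(mean=0.5, std=0.5) transform applied when loading: pixels were mapped to (x - 0.5) / 0.5, so (x + 1) / 2 maps them back to [0, 1]. A tiny worked example:

x = torch.tensor([-1.0, 0.0, 1.0])
print(denorm(x))  # tensor([0.0000, 0.5000, 1.0000])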
from utils import show
"""
you can skip over the code in this cell
"""
# show mnist real data
train_dataset = load_mnist_data()
images = []
for j in range(5):
    for i in range(10):
        images.append(train_dataset[i * 200 + j][0])
show(torchvision.utils.make_grid(denorm(torch.stack(images)), nrow=10))
image

The training code is similar to before. The difference is that, from each class label, we build y_vec (a one-hot vector; e.g., class 1 corresponds to [0,1,0,0,0,0,0,0,0,0]) and y_fill (y_vec expanded to size (class_num, image_size, image_size), with the correct class's channel filled with 1s and all other channels filled with 0s), which are fed to G and D respectively as the condition variables. The rest of the training procedure is the same as for an ordinary GAN. We can first generate vecs and fills for every class label.

# class number
class_num = 10

# image size and channel
image_size=32
image_channel=1

# vecs: one-hot vectors of size(class_num, class_num)
# fills: vecs expand to size(class_num, class_num, image_size, image_size)
vecs = torch.eye(class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)
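To see what the lookup produces, here is a minimal sketch (an illustration, not part of the lab code) mapping a toy batch of integer labels to y_vec and y_fill, exactly as done inside train() below:

y = torch.tensor([4, 0, 7])     # a toy batch of 3 labels
y_vec_demo = vecs[y]            # (3, 10): one-hot rows
y_fill_demo = fills[y]          # (3, 10, 32, 32): filled label maps

print(y_vec_demo[0])            # a 1 at index 4, 0s elsewhere
print(y_fill_demo.shape)        # torch.Size([3, 10, 32, 32])
print(y_fill_demo[0, 4].all())  # tensor(True): channel 4 is all 1s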

def train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, z_dim, class_num):
    """
    train a GAN with model G and D in one epoch
    Args:
        trainloader: data loader to train
        G: model Generator
        D: model Discriminator
        G_optimizer: optimizer of G (e.g. Adam, SGD)
        D_optimizer: optimizer of D (e.g. Adam, SGD)
        loss_func: Binary Cross Entropy (BCE) or MSE loss function
        device: cpu or cuda device
        z_dim: the dimension of random noise z
        class_num: the number of classes
        
    """
    # set train mode
    D.train()
    G.train()
    
    D_total_loss = 0
    G_total_loss = 0
    
    
    for i, (x, y) in enumerate(trainloader):
        x = x.to(device)
        batch_size_ = x.size(0)
        image_size = x.size(2)
        
        # real label and fake label
        real_label = torch.ones(batch_size_, 1).to(device)
        fake_label = torch.zeros(batch_size_, 1).to(device)
        
        # y_vec: (batch_size, class_num) one-hot vector, for example, [0,0,0,0,1,0,0,0,0,0] (label: 4)
        y_vec = vecs[y.long()].to(device)
        # y_fill: (batch_size, class_num, image_size, image_size)
        # y_fill: the channel of the true class is filled with 1s, all others with 0s.
        y_fill = fills[y.long()].to(device)
        
        z = torch.rand(batch_size_, z_dim).to(device)

        # update D network
        # D optimizer zero grads
        D_optimizer.zero_grad()
        
        # D real loss from real images
        d_real = D(x, y_fill)
        d_real_loss = loss_func(d_real, real_label)
        
        # D fake loss from fake images generated by G
        g_z = G(z, y_vec)
        d_fake = D(g_z, y_fill)
        d_fake_loss = loss_func(d_fake, fake_label)
        
        # D backward and step
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        D_optimizer.step()

        # update G network
        # G optimizer zero grads
        G_optimizer.zero_grad()
        
        # G loss
        g_z = G(z, y_vec)
        d_fake = D(g_z, y_fill)
        g_loss = loss_func(d_fake, real_label)
        
        # G backward and step
        g_loss.backward()
        G_optimizer.step()
        
        D_total_loss += d_loss.item()
        G_total_loss += g_loss.item()
        
    
    return D_total_loss / len(trainloader), G_total_loss / len(trainloader)

The code for visualize_results and run_gan is not explained in detail here.

def visualize_results(G, device, z_dim, class_num, class_result_size=5):
    G.eval()
    
    z = torch.rand(class_num * class_result_size, z_dim).to(device)
    
    y = torch.LongTensor([i for i in range(class_num)] * class_result_size)
    y_vec = vecs[y.long()].to(device)
    g_z = G(z, y_vec)
    
    show(torchvision.utils.make_grid(denorm(g_z.detach().cpu()), nrow=class_num))
def run_gan(trainloader, G, D, G_optimizer, D_optimizer, loss_func, n_epochs, device, latent_dim, class_num):
    d_loss_hist = []
    g_loss_hist = []

    for epoch in range(n_epochs):
        d_loss, g_loss = train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, 
                               latent_dim, class_num)
        print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))

        d_loss_hist.append(d_loss)
        g_loss_hist.append(g_loss)

        if epoch == 0 or (epoch + 1) % 10 == 0:
            visualize_results(G, device, latent_dim, class_num) 
    
    return d_loss_hist, g_loss_hist

Now let's try training our CGAN.

# hyper params
# z dim
latent_dim = 100

# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)

# epochs and batch size
n_epochs = 120
batch_size = 32

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)

# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)

print(D)
print(G)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
DCDiscriminator(
  (conv): Sequential(
    (0): Conv2d(11, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.2)
    (2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2)
    (5): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2)
  )
  (fc): Sequential(
    (0): Linear(in_features=8192, out_features=1, bias=True)
    (sigmoid): Sigmoid()
  )
)
DCGenerator(
  (fc): Sequential(
    (0): Linear(in_features=110, out_features=8192, bias=True)
    (1): BatchNorm1d(8192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace)
  )
  (deconv): Sequential(
    (0): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace)
    (3): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace)
    (6): ConvTranspose2d(128, 1, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (7): Tanh()
  )
)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim, class_num)
Epoch 0: Train D loss: 0.2962, G loss: 3.8550
image
Epoch 1: Train D loss: 0.6089, G loss: 3.6378
Epoch 2: Train D loss: 0.8812, G loss: 2.2457
Epoch 3: Train D loss: 0.8877, G loss: 1.9269
Epoch 4: Train D loss: 0.9665, G loss: 1.8893
Epoch 5: Train D loss: 0.9414, G loss: 1.7735
Epoch 6: Train D loss: 0.8708, G loss: 1.8289
Epoch 7: Train D loss: 0.8942, G loss: 1.7005
Epoch 8: Train D loss: 0.9111, G loss: 1.7255
Epoch 9: Train D loss: 0.8998, G loss: 1.7084
image
Epoch 10: Train D loss: 0.9060, G loss: 1.6594
Epoch 11: Train D loss: 0.9331, G loss: 1.6657
Epoch 12: Train D loss: 0.9313, G loss: 1.6259
Epoch 13: Train D loss: 0.9475, G loss: 1.6301
Epoch 14: Train D loss: 0.9856, G loss: 1.6319
Epoch 15: Train D loss: 0.9712, G loss: 1.5905
Epoch 16: Train D loss: 0.9892, G loss: 1.5713
Epoch 17: Train D loss: 1.0118, G loss: 1.5743
Epoch 18: Train D loss: 1.0041, G loss: 1.5457
Epoch 19: Train D loss: 1.0028, G loss: 1.6262
image
Epoch 20: Train D loss: 1.0085, G loss: 1.5393
Epoch 21: Train D loss: 1.0020, G loss: 1.6078
Epoch 22: Train D loss: 0.9486, G loss: 1.6651
Epoch 23: Train D loss: 0.9706, G loss: 1.6328
Epoch 24: Train D loss: 0.9127, G loss: 1.6835
Epoch 25: Train D loss: 0.9416, G loss: 1.6948
Epoch 26: Train D loss: 0.8698, G loss: 1.7693
Epoch 27: Train D loss: 0.8571, G loss: 1.8435
Epoch 28: Train D loss: 0.8520, G loss: 1.8850
Epoch 29: Train D loss: 0.7613, G loss: 2.0046
image
Epoch 30: Train D loss: 0.8708, G loss: 1.9706
Epoch 31: Train D loss: 0.6392, G loss: 2.0542
Epoch 32: Train D loss: 0.7748, G loss: 2.0904
Epoch 33: Train D loss: 0.7603, G loss: 2.1889
Epoch 34: Train D loss: 0.6701, G loss: 2.2419
Epoch 35: Train D loss: 0.4888, G loss: 2.4315
Epoch 36: Train D loss: 0.6143, G loss: 2.4058
Epoch 37: Train D loss: 0.5030, G loss: 2.5943
Epoch 38: Train D loss: 0.6665, G loss: 2.5604
Epoch 39: Train D loss: 0.2921, G loss: 2.8537
image
Epoch 40: Train D loss: 0.7130, G loss: 2.7242
Epoch 41: Train D loss: 0.3132, G loss: 2.9228
Epoch 42: Train D loss: 0.4735, G loss: 2.9304
Epoch 43: Train D loss: 0.1570, G loss: 3.3429
Epoch 44: Train D loss: 0.6236, G loss: 3.0557
Epoch 45: Train D loss: 0.2389, G loss: 3.2241
Epoch 46: Train D loss: 0.1189, G loss: 3.6270
Epoch 47: Train D loss: 0.1112, G loss: 3.8986
Epoch 48: Train D loss: 0.5740, G loss: 3.6167
Epoch 49: Train D loss: 0.2161, G loss: 3.4319
image
Epoch 50: Train D loss: 0.1162, G loss: 3.9703
Epoch 51: Train D loss: 0.0875, G loss: 4.1047
Epoch 52: Train D loss: 1.1022, G loss: 2.5413
Epoch 53: Train D loss: 0.1822, G loss: 3.4868
Epoch 54: Train D loss: 0.0919, G loss: 3.9516
Epoch 55: Train D loss: 0.0657, G loss: 4.2033
Epoch 56: Train D loss: 0.0595, G loss: 4.3836
Epoch 57: Train D loss: 0.0533, G loss: 4.5497
Epoch 58: Train D loss: 0.7047, G loss: 3.6997
Epoch 59: Train D loss: 0.2122, G loss: 3.7186
image
Epoch 60: Train D loss: 0.0671, G loss: 4.2783
Epoch 61: Train D loss: 0.0534, G loss: 4.5652
Epoch 62: Train D loss: 0.0490, G loss: 4.6673
Epoch 63: Train D loss: 0.0387, G loss: 4.8734
Epoch 64: Train D loss: 0.0347, G loss: 4.9742
Epoch 65: Train D loss: 0.2409, G loss: 5.1782
Epoch 66: Train D loss: 1.0484, G loss: 2.4625
Epoch 67: Train D loss: 0.4583, G loss: 3.2699
Epoch 68: Train D loss: 0.4521, G loss: 3.4144
Epoch 69: Train D loss: 0.1248, G loss: 4.0661
image
Epoch 70: Train D loss: 0.0579, G loss: 4.4066
Epoch 71: Train D loss: 0.0474, G loss: 4.7067
Epoch 72: Train D loss: 0.0375, G loss: 4.8429
Epoch 73: Train D loss: 0.0304, G loss: 5.0606
Epoch 74: Train D loss: 0.0243, G loss: 5.2481
Epoch 75: Train D loss: 0.0260, G loss: 5.3255
Epoch 76: Train D loss: 0.0225, G loss: 5.4283
Epoch 77: Train D loss: 1.2070, G loss: 2.4013
Epoch 78: Train D loss: 0.6930, G loss: 2.4867
Epoch 79: Train D loss: 0.5972, G loss: 3.1937
image
Epoch 80: Train D loss: 0.2452, G loss: 3.6573
Epoch 81: Train D loss: 0.0592, G loss: 4.4053
Epoch 82: Train D loss: 0.0456, G loss: 4.7146
Epoch 83: Train D loss: 0.0366, G loss: 4.8923
Epoch 84: Train D loss: 0.0303, G loss: 5.0758
Epoch 85: Train D loss: 0.0233, G loss: 5.2704
Epoch 86: Train D loss: 0.0254, G loss: 5.4018
Epoch 87: Train D loss: 0.8972, G loss: 3.2275
Epoch 88: Train D loss: 0.6262, G loss: 2.5182
Epoch 89: Train D loss: 0.5316, G loss: 3.4189
image
Epoch 90: Train D loss: 0.5059, G loss: 3.2730
Epoch 91: Train D loss: 0.0708, G loss: 4.4447
Epoch 92: Train D loss: 0.0399, G loss: 4.8059
Epoch 93: Train D loss: 0.0292, G loss: 5.0672
Epoch 94: Train D loss: 0.0242, G loss: 5.1704
Epoch 95: Train D loss: 0.0206, G loss: 5.3694
Epoch 96: Train D loss: 0.0209, G loss: 5.4811
Epoch 97: Train D loss: 0.0174, G loss: 5.5394
Epoch 98: Train D loss: 0.0174, G loss: 5.5801
Epoch 99: Train D loss: 0.0167, G loss: 5.7518
image
Epoch 100: Train D loss: 0.0147, G loss: 5.8225
Epoch 101: Train D loss: 0.0153, G loss: 5.9176
Epoch 102: Train D loss: 0.0133, G loss: 6.0194
Epoch 103: Train D loss: 0.0114, G loss: 6.0404
Epoch 104: Train D loss: 0.0125, G loss: 6.0783
Epoch 105: Train D loss: 0.0102, G loss: 6.2466
Epoch 106: Train D loss: 0.0109, G loss: 6.2441
Epoch 107: Train D loss: 0.6059, G loss: 5.0261
Epoch 108: Train D loss: 0.5775, G loss: 2.7050
Epoch 109: Train D loss: 0.5215, G loss: 2.7918
image
Epoch 110: Train D loss: 0.5460, G loss: 2.7928
Epoch 111: Train D loss: 0.5656, G loss: 3.0143
Epoch 112: Train D loss: 0.5745, G loss: 3.1358
Epoch 113: Train D loss: 0.3454, G loss: 3.3785
Epoch 114: Train D loss: 0.6632, G loss: 3.3066
Epoch 115: Train D loss: 0.1403, G loss: 3.9030
Epoch 116: Train D loss: 0.0821, G loss: 4.3970
Epoch 117: Train D loss: 0.4486, G loss: 3.9750
Epoch 118: Train D loss: 0.4547, G loss: 3.1868
Epoch 119: Train D loss: 0.7379, G loss: 3.3208
image
from utils import loss_plot
loss_plot(d_loss_hist, g_loss_hist)
image

Assignment:

  1. In D, the input image and the labels can each be passed through their own convolutional layer and then concatenated along dimension 1 (the channel dimension) before being fed into the rest of the network. Part of the network structure is already written in DCDiscriminator1 below; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what differences do you observe?

Answer: Compared with the previous results, the overall loss trend is similar, but here the loss fluctuates more frequently and the generator's lowest loss is higher than before. In the early stage of training the discriminator and generator contend with each other, so the generator's loss falls while the discriminator's loss rises; in the later stage the discriminator dominates, its loss oscillating stably around a low value while the generator's loss keeps climbing. The generated images are also not as good here; for example, the digits 3, 4, and 6 are hard to recognize.

class DCDiscriminator1(nn.Module):
    def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
        super().__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.class_num = class_num
        
        self.fc_size = image_size // 8

        #       model : img -> conv1_1
        #               labels -> conv1_2    
        #       (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 

        self.conv1_1 = nn.Sequential(nn.Conv2d(input_channel, 64, 3, 2, 1),
                                     nn.BatchNorm2d(64))
        self.conv1_2 = nn.Sequential(nn.Conv2d(class_num, 64, 3, 2, 1),
                                     nn.BatchNorm2d(64))
        
        self.conv = nn.Sequential(
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 3, 2, 1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 3, 2, 1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2),
        )
        
        # fc: Linear -> Sigmoid
        self.fc = nn.Sequential(
            nn.Linear(512 * self.fc_size * self.fc_size, 1),
        )
        if sigmoid:
            self.fc.add_module('sigmoid', nn.Sigmoid())
        initialize_weights(self)

    def forward(self, img, labels):
        """
        img : input image
        labels : (batch_size, class_num, image_size, image_size)
                the channel of the true class is filled with 1s, all other channels with 0s.
        """
        img_out = self.conv1_1(img)
        labels_out = self.conv1_2(labels)
        out = torch.cat((img_out, labels_out), dim=1)
        out = self.conv(out)
        out = out.view(out.shape[0], -1)
        out = self.fc(out)

        return out

from utils import loss_plot
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator1(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: 0.5783, G loss: 4.0314
image
Epoch 1: Train D loss: 0.3084, G loss: 4.6929
Epoch 2: Train D loss: 0.3652, G loss: 4.4327
Epoch 3: Train D loss: 0.6010, G loss: 3.2752
Epoch 4: Train D loss: 0.5115, G loss: 2.9438
Epoch 5: Train D loss: 0.5711, G loss: 2.8227
Epoch 6: Train D loss: 0.5215, G loss: 2.5894
Epoch 7: Train D loss: 0.5886, G loss: 2.6360
Epoch 8: Train D loss: 0.5324, G loss: 2.7322
Epoch 9: Train D loss: 0.4396, G loss: 2.6589
image
Epoch 10: Train D loss: 0.5049, G loss: 2.6881
Epoch 11: Train D loss: 0.6259, G loss: 2.5495
Epoch 12: Train D loss: 0.5501, G loss: 2.6990
Epoch 13: Train D loss: 0.6089, G loss: 2.5458
Epoch 14: Train D loss: 0.6249, G loss: 2.4948
Epoch 15: Train D loss: 0.6507, G loss: 2.2958
Epoch 16: Train D loss: 0.5265, G loss: 2.5526
Epoch 17: Train D loss: 0.6500, G loss: 2.4998
Epoch 18: Train D loss: 0.5119, G loss: 2.6624
Epoch 19: Train D loss: 0.7852, G loss: 2.2363
image
Epoch 20: Train D loss: 0.5294, G loss: 2.3413
Epoch 21: Train D loss: 0.6635, G loss: 2.7594
Epoch 22: Train D loss: 0.5128, G loss: 2.6446
Epoch 23: Train D loss: 0.6374, G loss: 2.4458
Epoch 24: Train D loss: 0.5262, G loss: 2.8333
Epoch 25: Train D loss: 0.4865, G loss: 2.6566
Epoch 26: Train D loss: 0.6546, G loss: 2.6343
Epoch 27: Train D loss: 0.6002, G loss: 2.8760
Epoch 28: Train D loss: 0.2794, G loss: 3.1967
Epoch 29: Train D loss: 0.3933, G loss: 3.2833
image
Epoch 30: Train D loss: 0.3230, G loss: 3.3384
Epoch 31: Train D loss: 0.4659, G loss: 3.4798
Epoch 32: Train D loss: 0.4419, G loss: 3.2220
Epoch 33: Train D loss: 0.7314, G loss: 2.6443
Epoch 34: Train D loss: 0.2897, G loss: 3.0850
Epoch 35: Train D loss: 0.2233, G loss: 3.6760
Epoch 36: Train D loss: 0.2126, G loss: 4.0898
Epoch 37: Train D loss: 0.8669, G loss: 2.8141
Epoch 38: Train D loss: 0.3106, G loss: 3.0525
Epoch 39: Train D loss: 0.1445, G loss: 3.7176
image
Epoch 40: Train D loss: 0.0959, G loss: 4.1259
Epoch 41: Train D loss: 0.9976, G loss: 3.0617
Epoch 42: Train D loss: 0.4574, G loss: 2.6349
Epoch 43: Train D loss: 0.6087, G loss: 2.9328
Epoch 44: Train D loss: 0.1489, G loss: 3.6072
Epoch 45: Train D loss: 0.0740, G loss: 4.2103
Epoch 46: Train D loss: 0.0629, G loss: 4.3956
Epoch 47: Train D loss: 0.5998, G loss: 3.2734
Epoch 48: Train D loss: 0.1357, G loss: 4.0024
Epoch 49: Train D loss: 0.0552, G loss: 4.5454
image
Epoch 50: Train D loss: 0.0484, G loss: 4.8041
Epoch 51: Train D loss: 0.6982, G loss: 3.7349
Epoch 52: Train D loss: 0.5501, G loss: 2.6109
Epoch 53: Train D loss: 0.2982, G loss: 3.6175
Epoch 54: Train D loss: 0.1213, G loss: 4.3322
Epoch 55: Train D loss: 0.5588, G loss: 3.3895
Epoch 56: Train D loss: 0.0901, G loss: 4.3445
Epoch 57: Train D loss: 0.0476, G loss: 4.8930
Epoch 58: Train D loss: 0.0491, G loss: 5.0212
Epoch 59: Train D loss: 0.0369, G loss: 5.0965
image
Epoch 60: Train D loss: 0.0337, G loss: 5.2226
Epoch 61: Train D loss: 1.2331, G loss: 2.7346
Epoch 62: Train D loss: 0.3062, G loss: 3.3701
Epoch 63: Train D loss: 0.4700, G loss: 3.3737
Epoch 64: Train D loss: 0.1190, G loss: 4.2390
Epoch 65: Train D loss: 0.0480, G loss: 4.6856
Epoch 66: Train D loss: 0.1715, G loss: 4.9363
Epoch 67: Train D loss: 0.9050, G loss: 2.3453
Epoch 68: Train D loss: 0.3282, G loss: 3.5270
Epoch 69: Train D loss: 0.3222, G loss: 3.8528
image
Epoch 70: Train D loss: 0.0641, G loss: 4.4396
Epoch 71: Train D loss: 0.0386, G loss: 4.9179
Epoch 72: Train D loss: 0.3381, G loss: 4.9477
Epoch 73: Train D loss: 0.7052, G loss: 2.2826
Epoch 74: Train D loss: 0.3881, G loss: 3.3018
Epoch 75: Train D loss: 0.3167, G loss: 3.6229
Epoch 76: Train D loss: 0.0612, G loss: 4.5307
Epoch 77: Train D loss: 0.0349, G loss: 5.0571
Epoch 78: Train D loss: 0.0259, G loss: 5.2854
Epoch 79: Train D loss: 0.6566, G loss: 4.2716
image
Epoch 80: Train D loss: 0.5006, G loss: 2.6297
Epoch 81: Train D loss: 0.4957, G loss: 3.3940
Epoch 82: Train D loss: 0.1931, G loss: 3.7004
Epoch 83: Train D loss: 0.0516, G loss: 4.6862
Epoch 84: Train D loss: 0.0361, G loss: 5.0259
Epoch 85: Train D loss: 0.0347, G loss: 5.2946
Epoch 86: Train D loss: 0.5135, G loss: 3.3887
Epoch 87: Train D loss: 0.0544, G loss: 4.6875
Epoch 88: Train D loss: 0.0276, G loss: 5.2771
Epoch 89: Train D loss: 0.0223, G loss: 5.4915
image
Epoch 90: Train D loss: 0.0286, G loss: 5.5067
Epoch 91: Train D loss: 0.0169, G loss: 5.7693
Epoch 92: Train D loss: 0.1816, G loss: 5.8505
Epoch 93: Train D loss: 0.6672, G loss: 2.9792
Epoch 94: Train D loss: 0.3550, G loss: 3.5908
Epoch 95: Train D loss: 0.1152, G loss: 4.2985
Epoch 96: Train D loss: 0.3241, G loss: 4.6347
Epoch 97: Train D loss: 0.1568, G loss: 4.1708
Epoch 98: Train D loss: 0.0337, G loss: 5.1337
Epoch 99: Train D loss: 0.2742, G loss: 5.3655
image
Epoch 100: Train D loss: 0.4777, G loss: 2.9118
Epoch 101: Train D loss: 0.3332, G loss: 3.7523
Epoch 102: Train D loss: 0.0812, G loss: 4.8293
Epoch 103: Train D loss: 0.0243, G loss: 5.4633
Epoch 104: Train D loss: 0.0205, G loss: 5.6605
Epoch 105: Train D loss: 0.0175, G loss: 5.7829
Epoch 106: Train D loss: 0.0174, G loss: 5.8838
Epoch 107: Train D loss: 0.0143, G loss: 6.0738
Epoch 108: Train D loss: 0.3694, G loss: 5.1934
Epoch 109: Train D loss: 0.2177, G loss: 4.0592
image
Epoch 110: Train D loss: 0.0368, G loss: 5.0899
Epoch 111: Train D loss: 0.3410, G loss: 5.0052
Epoch 112: Train D loss: 0.7964, G loss: 2.3244
Epoch 113: Train D loss: 0.6525, G loss: 2.7550
Epoch 114: Train D loss: 0.5676, G loss: 2.8861
Epoch 115: Train D loss: 0.4401, G loss: 3.0373
Epoch 116: Train D loss: 0.3829, G loss: 3.2658
Epoch 117: Train D loss: 0.2251, G loss: 3.6322
Epoch 118: Train D loss: 0.2588, G loss: 3.9562
Epoch 119: Train D loss: 0.2042, G loss: 3.9001
image
image
  2. In D, the input image can be passed through a single convolutional layer and then concatenated along dimension 1 (the channel dimension) with the labels (which have the same spatial size as the input image and are therefore max-pooled down to match the feature map), before being fed into the rest of the network. Part of the network structure is already written in DCDiscriminator2 below; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what differences do you observe?

Answer: Compared with the previous results, the overall loss trend is again similar; the loss here also fluctuates fairly frequently, but over a smaller range. In the early stage of training the discriminator and generator contend with each other, so the generator's loss falls while the discriminator's loss rises; in the later stage the discriminator dominates, its loss oscillating stably around a low value while the generator's loss keeps climbing. The generated images here are worse than in both previous runs; for example, the digits 2, 4, 6, 7, 8, and 9 are not generated well.

class DCDiscriminator2(nn.Module):
    def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
        super().__init__()
        self.image_size = image_size
        self.input_channel = input_channel
        self.class_num = class_num
        
        self.fc_size = image_size // 8

        #       model : img -> conv1
        #               labels -> maxpool   
        #       (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU 
        #       Conv2d(3,2,1) -> BN -> LeakyReLU 

        self.conv1 = nn.Sequential(nn.Conv2d(input_channel, 128, 3, 2, 1),
                                     nn.BatchNorm2d(128))
        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
        
        self.conv = nn.Sequential(
            nn.LeakyReLU(0.2),
            nn.Conv2d(128 + class_num, 256, 3, 2, 1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 3, 2, 1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2),
        )
        
        # fc: Linear -> Sigmoid
        self.fc = nn.Sequential(
            nn.Linear(512 * self.fc_size * self.fc_size, 1),
        )
        if sigmoid:
            self.fc.add_module('sigmoid', nn.Sigmoid())
        initialize_weights(self)

    def forward(self, img, labels):
        """
        img : input image
        labels : (batch_size, class_num, image_size, image_size)
                the channel of the true class is filled with 1s, all other channels with 0s.
        """
        img_out = self.conv1(img)
        labels_out = self.maxpool(labels)
        out = torch.cat((img_out, labels_out), dim=1)
        out = self.conv(out)
        out = out.view(out.shape[0], -1)
        out = self.fc(out)

        return out
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator2(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: 0.3652, G loss: 4.9241
image
Epoch 1: Train D loss: 0.4429, G loss: 5.9180
Epoch 2: Train D loss: 0.1754, G loss: 4.7812
Epoch 3: Train D loss: 0.1420, G loss: 4.3085
Epoch 4: Train D loss: 0.3841, G loss: 4.3876
Epoch 5: Train D loss: 0.2930, G loss: 4.0079
Epoch 6: Train D loss: 0.4288, G loss: 3.7424
Epoch 7: Train D loss: 0.2865, G loss: 3.3463
Epoch 8: Train D loss: 0.4004, G loss: 3.2854
Epoch 9: Train D loss: 0.5848, G loss: 3.1671
image
Epoch 10: Train D loss: 0.4002, G loss: 2.8178
Epoch 11: Train D loss: 0.5253, G loss: 2.8021
Epoch 12: Train D loss: 0.4601, G loss: 2.9233
Epoch 13: Train D loss: 0.4344, G loss: 2.8415
Epoch 14: Train D loss: 0.5028, G loss: 2.8215
Epoch 15: Train D loss: 0.5168, G loss: 2.7453
Epoch 16: Train D loss: 0.4130, G loss: 2.7670
Epoch 17: Train D loss: 0.4163, G loss: 3.0764
Epoch 18: Train D loss: 0.5104, G loss: 2.8070
Epoch 19: Train D loss: 0.3978, G loss: 2.8812
image
Epoch 20: Train D loss: 0.4791, G loss: 2.8178
Epoch 21: Train D loss: 0.4476, G loss: 2.8565
Epoch 22: Train D loss: 0.4595, G loss: 3.1004
Epoch 23: Train D loss: 0.4785, G loss: 2.8015
Epoch 24: Train D loss: 0.3153, G loss: 3.0215
Epoch 25: Train D loss: 0.5425, G loss: 2.8971
Epoch 26: Train D loss: 0.4823, G loss: 2.9873
Epoch 27: Train D loss: 0.3167, G loss: 3.1143
Epoch 28: Train D loss: 0.3648, G loss: 3.3072
Epoch 29: Train D loss: 0.2634, G loss: 3.4745
image
Epoch 30: Train D loss: 0.2306, G loss: 3.4688
Epoch 31: Train D loss: 0.4227, G loss: 3.5273
Epoch 32: Train D loss: 0.1514, G loss: 3.5782
Epoch 33: Train D loss: 0.7272, G loss: 3.0295
Epoch 34: Train D loss: 0.2905, G loss: 3.1706
Epoch 35: Train D loss: 0.3160, G loss: 3.8710
Epoch 36: Train D loss: 0.2567, G loss: 3.7115
Epoch 37: Train D loss: 0.2725, G loss: 3.4961
Epoch 38: Train D loss: 0.1762, G loss: 3.9646
Epoch 39: Train D loss: 0.1075, G loss: 4.2067
image
Epoch 40: Train D loss: 0.0735, G loss: 4.4320
Epoch 41: Train D loss: 0.9511, G loss: 2.4049
Epoch 42: Train D loss: 0.6208, G loss: 2.4724
Epoch 43: Train D loss: 0.4307, G loss: 2.8940
Epoch 44: Train D loss: 0.4893, G loss: 3.1014
Epoch 45: Train D loss: 0.3474, G loss: 2.8716
Epoch 46: Train D loss: 0.1160, G loss: 3.8310
Epoch 47: Train D loss: 0.0750, G loss: 4.2792
Epoch 48: Train D loss: 0.0646, G loss: 4.5145
Epoch 49: Train D loss: 0.6285, G loss: 3.3446
image
Epoch 50: Train D loss: 0.3821, G loss: 3.2104
Epoch 51: Train D loss: 0.1024, G loss: 4.2762
Epoch 52: Train D loss: 0.0519, G loss: 4.5700
Epoch 53: Train D loss: 0.0400, G loss: 4.8230
Epoch 54: Train D loss: 0.9730, G loss: 3.0698
Epoch 55: Train D loss: 0.4508, G loss: 2.9594
Epoch 56: Train D loss: 0.2250, G loss: 3.8201
Epoch 57: Train D loss: 0.3813, G loss: 3.9528
Epoch 58: Train D loss: 0.1910, G loss: 3.5154
Epoch 59: Train D loss: 0.0522, G loss: 4.5624
image
Epoch 60: Train D loss: 0.0466, G loss: 4.8247
Epoch 61: Train D loss: 0.4749, G loss: 3.5991
Epoch 62: Train D loss: 0.3555, G loss: 3.8786
Epoch 63: Train D loss: 0.1598, G loss: 3.7794
Epoch 64: Train D loss: 0.0524, G loss: 4.6509
Epoch 65: Train D loss: 1.0464, G loss: 2.4283
Epoch 66: Train D loss: 0.4957, G loss: 2.8442
Epoch 67: Train D loss: 0.3295, G loss: 3.3887
Epoch 68: Train D loss: 0.1129, G loss: 3.9749
Epoch 69: Train D loss: 0.1507, G loss: 4.4947
image
Epoch 70: Train D loss: 0.7100, G loss: 2.4590
Epoch 71: Train D loss: 0.2155, G loss: 3.7421
Epoch 72: Train D loss: 0.0485, G loss: 4.4712
Epoch 73: Train D loss: 0.0384, G loss: 4.7572
Epoch 74: Train D loss: 0.0426, G loss: 5.0863
Epoch 75: Train D loss: 0.0276, G loss: 5.1197
Epoch 76: Train D loss: 0.3221, G loss: 4.6973
Epoch 77: Train D loss: 0.1505, G loss: 3.9563
Epoch 78: Train D loss: 0.0345, G loss: 4.8970
Epoch 79: Train D loss: 0.0291, G loss: 5.1258
image
Epoch 80: Train D loss: 0.0274, G loss: 5.2251
Epoch 81: Train D loss: 0.5761, G loss: 3.7912
Epoch 82: Train D loss: 0.1272, G loss: 4.0800
Epoch 83: Train D loss: 0.0365, G loss: 5.0618
Epoch 84: Train D loss: 0.0256, G loss: 5.2438
Epoch 85: Train D loss: 0.0247, G loss: 5.5058
Epoch 86: Train D loss: 0.0233, G loss: 5.5718
Epoch 87: Train D loss: 0.5834, G loss: 5.2774
Epoch 88: Train D loss: 0.3599, G loss: 3.4697
Epoch 89: Train D loss: 0.5033, G loss: 3.5868
image
Epoch 90: Train D loss: 0.4733, G loss: 3.7724
Epoch 91: Train D loss: 0.4360, G loss: 3.1645
Epoch 92: Train D loss: 0.1544, G loss: 3.8841
Epoch 93: Train D loss: 0.0724, G loss: 4.4879
Epoch 94: Train D loss: 0.0429, G loss: 4.7762
Epoch 95: Train D loss: 1.1455, G loss: 2.3196
Epoch 96: Train D loss: 0.5529, G loss: 2.6795
Epoch 97: Train D loss: 0.5533, G loss: 3.1325
Epoch 98: Train D loss: 0.1586, G loss: 3.7296
Epoch 99: Train D loss: 0.2869, G loss: 4.0718
image
Epoch 100: Train D loss: 0.0795, G loss: 4.3183
Epoch 101: Train D loss: 0.0399, G loss: 4.7554
Epoch 102: Train D loss: 0.0357, G loss: 4.9615
Epoch 103: Train D loss: 0.0278, G loss: 5.0769
Epoch 104: Train D loss: 0.4849, G loss: 3.4960
Epoch 105: Train D loss: 0.0654, G loss: 4.4074
Epoch 106: Train D loss: 0.0316, G loss: 5.0200
Epoch 107: Train D loss: 0.6949, G loss: 3.0835
Epoch 108: Train D loss: 0.0958, G loss: 4.1572
Epoch 109: Train D loss: 0.0344, G loss: 4.9555
image
Epoch 110: Train D loss: 0.8575, G loss: 2.8936
Epoch 111: Train D loss: 0.4397, G loss: 3.1297
Epoch 112: Train D loss: 0.3955, G loss: 3.4052
Epoch 113: Train D loss: 0.0928, G loss: 4.1257
Epoch 114: Train D loss: 0.0494, G loss: 4.5448
Epoch 115: Train D loss: 0.0411, G loss: 4.7861
Epoch 116: Train D loss: 0.0321, G loss: 5.1476
Epoch 117: Train D loss: 0.0227, G loss: 5.2891
Epoch 118: Train D loss: 0.0228, G loss: 5.3384
Epoch 119: Train D loss: 0.5538, G loss: 3.7886
image


  3. What if the class labels are not represented as one-hot vectors? Suppose we first generate a random vector for each class and then use that vector as the class label; would this change the results? Try running the code below, compare with the previous results, and describe the differences.

Answer: Compared with the previous results, the overall loss trend is roughly the same, though the discriminator's lowest loss is the smallest across the runs. Judging from how the generated images evolve, training converges faster and looks good from the start: the digit shapes are already roughly recognizable in the first image grid, which was not true in the earlier experiments, and the final generated images are very sharp.

# use a fixed random vector per class instead of a one-hot vector
vecs = torch.randn(class_num, class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)

# hyper params

# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:0')

# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)

# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)

d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss, 
                                   n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
Epoch 0: Train D loss: 0.7260, G loss: 2.1572
image
Epoch 1: Train D loss: 0.9390, G loss: 1.7824
Epoch 2: Train D loss: 1.0395, G loss: 1.7254
Epoch 3: Train D loss: 1.1483, G loss: 1.4995
Epoch 4: Train D loss: 1.1209, G loss: 1.3907
Epoch 5: Train D loss: 1.1032, G loss: 1.4722
Epoch 6: Train D loss: 1.0763, G loss: 1.4546
Epoch 7: Train D loss: 1.0108, G loss: 1.5468
Epoch 8: Train D loss: 1.0263, G loss: 1.5239
Epoch 9: Train D loss: 1.0970, G loss: 1.4444
image
Epoch 10: Train D loss: 1.1275, G loss: 1.3871
Epoch 11: Train D loss: 1.0939, G loss: 1.3838
Epoch 12: Train D loss: 1.0969, G loss: 1.3490
Epoch 13: Train D loss: 1.0826, G loss: 1.3967
Epoch 14: Train D loss: 1.1435, G loss: 1.3215
Epoch 15: Train D loss: 1.1192, G loss: 1.3385
Epoch 16: Train D loss: 1.1218, G loss: 1.3073
Epoch 17: Train D loss: 1.1673, G loss: 1.2399
Epoch 18: Train D loss: 1.1879, G loss: 1.2262
Epoch 19: Train D loss: 1.1955, G loss: 1.2235
image
Epoch 20: Train D loss: 1.2040, G loss: 1.1830
Epoch 21: Train D loss: 1.2068, G loss: 1.1786
Epoch 22: Train D loss: 1.2297, G loss: 1.1382
Epoch 23: Train D loss: 1.2207, G loss: 1.1666
Epoch 24: Train D loss: 1.2436, G loss: 1.1467
Epoch 25: Train D loss: 1.2206, G loss: 1.1358
Epoch 26: Train D loss: 1.2438, G loss: 1.1182
Epoch 27: Train D loss: 1.2187, G loss: 1.1273
Epoch 28: Train D loss: 1.2356, G loss: 1.1087
Epoch 29: Train D loss: 1.2386, G loss: 1.1199
image
Epoch 30: Train D loss: 1.2112, G loss: 1.1312
Epoch 31: Train D loss: 1.2411, G loss: 1.1433
Epoch 32: Train D loss: 1.2233, G loss: 1.1320
Epoch 33: Train D loss: 1.2163, G loss: 1.1253
Epoch 34: Train D loss: 1.2149, G loss: 1.1394
Epoch 35: Train D loss: 1.2262, G loss: 1.1641
Epoch 36: Train D loss: 1.2044, G loss: 1.1631
Epoch 37: Train D loss: 1.2155, G loss: 1.1475
Epoch 38: Train D loss: 1.1877, G loss: 1.1697
Epoch 39: Train D loss: 1.1980, G loss: 1.1835
image
Epoch 40: Train D loss: 1.2010, G loss: 1.1685
Epoch 41: Train D loss: 1.1966, G loss: 1.1575
Epoch 42: Train D loss: 1.1946, G loss: 1.2079
Epoch 43: Train D loss: 1.1545, G loss: 1.2180
Epoch 44: Train D loss: 1.1543, G loss: 1.2154
Epoch 45: Train D loss: 1.1398, G loss: 1.2518
Epoch 46: Train D loss: 1.1400, G loss: 1.2798
Epoch 47: Train D loss: 1.1451, G loss: 1.2790
Epoch 48: Train D loss: 1.1083, G loss: 1.2797
Epoch 49: Train D loss: 1.0828, G loss: 1.3435
image
Epoch 50: Train D loss: 1.0941, G loss: 1.3757
Epoch 51: Train D loss: 1.0729, G loss: 1.3664
Epoch 52: Train D loss: 1.0801, G loss: 1.4018
Epoch 53: Train D loss: 1.0361, G loss: 1.4298
Epoch 54: Train D loss: 0.9954, G loss: 1.4514
Epoch 55: Train D loss: 1.0083, G loss: 1.4741
Epoch 56: Train D loss: 0.9435, G loss: 1.5283
Epoch 57: Train D loss: 0.9614, G loss: 1.6080
Epoch 58: Train D loss: 1.0008, G loss: 1.6005
Epoch 59: Train D loss: 0.8756, G loss: 1.6289
image
Epoch 60: Train D loss: 0.9253, G loss: 1.6770
Epoch 61: Train D loss: 0.8097, G loss: 1.7874
Epoch 62: Train D loss: 0.8168, G loss: 1.8690
Epoch 63: Train D loss: 0.8487, G loss: 1.8578
Epoch 64: Train D loss: 0.8003, G loss: 1.9216
Epoch 65: Train D loss: 0.8222, G loss: 1.8820
Epoch 66: Train D loss: 0.7529, G loss: 1.9991
Epoch 67: Train D loss: 0.7870, G loss: 1.9942
Epoch 68: Train D loss: 0.6070, G loss: 2.1499
Epoch 69: Train D loss: 0.6849, G loss: 2.1850
image
Epoch 70: Train D loss: 0.5645, G loss: 2.3048
Epoch 71: Train D loss: 0.6525, G loss: 2.3541
Epoch 72: Train D loss: 0.5725, G loss: 2.3339
Epoch 73: Train D loss: 0.4690, G loss: 2.5266
Epoch 74: Train D loss: 0.4421, G loss: 2.6831
Epoch 75: Train D loss: 0.4909, G loss: 2.7405
Epoch 76: Train D loss: 0.4755, G loss: 2.6397
Epoch 77: Train D loss: 0.2924, G loss: 2.9268
Epoch 78: Train D loss: 0.3036, G loss: 3.0787
Epoch 79: Train D loss: 0.1643, G loss: 3.3467
image
Epoch 80: Train D loss: 0.4594, G loss: 3.1674
Epoch 81: Train D loss: 0.2945, G loss: 3.1968
Epoch 82: Train D loss: 0.1721, G loss: 3.4859
Epoch 83: Train D loss: 0.6296, G loss: 3.0034
Epoch 84: Train D loss: 0.1400, G loss: 3.5678
Epoch 85: Train D loss: 0.2866, G loss: 3.6853
Epoch 86: Train D loss: 0.3567, G loss: 3.3547
Epoch 87: Train D loss: 0.1112, G loss: 3.7273
Epoch 88: Train D loss: 0.0861, G loss: 4.0452
Epoch 89: Train D loss: 0.0946, G loss: 4.2372
image
Epoch 90: Train D loss: 0.8998, G loss: 2.6816
Epoch 91: Train D loss: 0.1990, G loss: 3.7121
Epoch 92: Train D loss: 0.4284, G loss: 3.2418
Epoch 93: Train D loss: 0.0850, G loss: 3.9878
Epoch 94: Train D loss: 0.0668, G loss: 4.1930
Epoch 95: Train D loss: 0.0567, G loss: 4.3359
Epoch 96: Train D loss: 0.0570, G loss: 4.4259
Epoch 97: Train D loss: 0.0474, G loss: 4.6437
Epoch 98: Train D loss: 0.9481, G loss: 2.7850
Epoch 99: Train D loss: 0.4302, G loss: 3.4014
image
Epoch 100: Train D loss: 0.0989, G loss: 3.9864
Epoch 101: Train D loss: 0.0609, G loss: 4.3214
Epoch 102: Train D loss: 0.0498, G loss: 4.5487
Epoch 103: Train D loss: 0.0451, G loss: 4.6218
Epoch 104: Train D loss: 0.0464, G loss: 4.7520
Epoch 105: Train D loss: 0.0371, G loss: 4.8745
Epoch 106: Train D loss: 0.1983, G loss: 4.9354
Epoch 107: Train D loss: 1.0577, G loss: 1.8109
Epoch 108: Train D loss: 0.7774, G loss: 3.1034
Epoch 109: Train D loss: 0.5359, G loss: 3.3039
image
Epoch 110: Train D loss: 0.2023, G loss: 3.8176
Epoch 111: Train D loss: 0.0711, G loss: 4.3033
Epoch 112: Train D loss: 0.0551, G loss: 4.4357
Epoch 113: Train D loss: 0.0455, G loss: 4.6138
Epoch 114: Train D loss: 0.7499, G loss: 3.3320
Epoch 115: Train D loss: 0.1707, G loss: 3.8742
Epoch 116: Train D loss: 0.0536, G loss: 4.4872
Epoch 117: Train D loss: 0.0465, G loss: 4.7260
Epoch 118: Train D loss: 0.0329, G loss: 4.9563
Epoch 119: Train D loss: 0.0338, G loss: 4.9434
image
