CycleGAN (Part 2): Rebuilding the Dataset, Training, and Testing

Goal: run CycleGAN on our own dataset.

Reference: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md

Contents

1. Dataset preparation
   1.1 Dataset format
   1.2 Our dataset
2. Training
   2.1 Command line
   2.2 Disabling result display
3. Training commands
   3.1 norText_2_cotton
   3.2 norText_2_falText
   3.3 norText_2_tear
4. Testing
   4.1 Test commands and their meaning
   4.2 Running the tests
5. Where results are stored
   5.1 Test results
   5.2 Training checkpoints
6. Typical loss values
   6.1 norText_2_falText_cyclegan
7. Command summary


1. Dataset preparation

1.1 Dataset format

To train a model on your own datasets, you need to create a data folder with two subdirectories trainA and trainB that contain images from domain A and B. You can test your model on your training set by setting --phase train in test.py. You can also create subdirectories testA and testB if you have test data.

  • Create a data folder with subfolders trainA and trainB, containing the images from domain A and domain B respectively.
  • To test the model on the training set, pass --phase train to test.py.
  • If you have test data, also create testA and testB subfolders.

Note that trainA and trainB should contain reasonably similar images. For example, cats <-> keyboards will probably not work because domain A and domain B are too far apart, whereas landscape paintings <-> landscape photographs is likely to work because the two domains are close.
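
A minimal sketch of building this folder layout in Python (the source folder names raw_norText and raw_cotton are hypothetical placeholders for wherever the raw images actually live; the repo itself does not ship such a script):

import random
import shutil
from pathlib import Path

def build_dataset(src_a, src_b, dst, n_test_a=20, seed=0):
    """Copy domain-A/B images into the trainA/trainB/testA/testB layout CycleGAN expects."""
    dst = Path(dst)
    for sub in ("trainA", "trainB", "testA", "testB"):
        (dst / sub).mkdir(parents=True, exist_ok=True)
    a_imgs = sorted(Path(src_a).glob("*.png"))
    b_imgs = sorted(Path(src_b).glob("*.png"))
    random.seed(seed)
    held_out = set(random.sample(a_imgs, n_test_a))      # hold out a few domain-A images for testing
    for img in a_imgs:
        shutil.copy(img, dst / ("testA" if img in held_out else "trainA") / img.name)
    for img in b_imgs:
        shutil.copy(img, dst / "trainB" / img.name)

build_dataset("raw_norText", "raw_cotton", "datasets/norText_2_cotton")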

1.2 Our dataset

Create the norText_2_cotton dataset.

It contains four subfolders: trainA holds 100 norText images, trainB holds 24 cotton images, testA holds 20 normal images drawn from norText, and testB is empty for now; the generated images are expected to correspond to that domain. (Note: as discussed in the testing section below, testB must not be left empty when running test.py with --model cycle_gan.)

Compress the dataset locally, then upload it to the server and unzip it under datasets:

rz
# Received /Users/baidu/Desktop/实习文档/纺织品数据集/norText_2_cotton.zip
ls
unzip norText_2_cotton.zip
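
After unzipping, a quick sanity check of the per-split image counts (a convenience sketch of ours, run from the repo root; adjust the path to wherever you unzipped):

from pathlib import Path

root = Path("datasets/norText_2_cotton")
for sub in ("trainA", "trainB", "testA", "testB"):
    n = len(list((root / sub).glob("*")))
    print(f"{sub}: {n} files")    # expect 100 / 24 / 20 / 0 for this dataset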

2. Training

2.1 Command line

Reference command:

python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
  • Path to our training set: --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton
  • Name of the trained model: --name norText_2_cotton_cyclegan
  • Model type: --model cycle_gan

So the command we start with is:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan

To skip saving the intermediate HTML results, append --no_html; this is the command we actually use:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan --no_html

It runs successfully:

[xingxiangrui@8888888888888888 ~]$ env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan
----------------- Options ---------------
               batch_size: 1
                    beta1: 0.5
          checkpoints_dir: ./checkpoints
           continue_train: False
                crop_size: 256
                 dataroot: /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton	[default: None]
             dataset_mode: unaligned
                direction: AtoB
              display_env: main
             display_freq: 400
               display_id: 1
            display_ncols: 4
             display_port: 8097
           display_server: http://localhost
          display_winsize: 256
                    epoch: latest
              epoch_count: 1
                 gan_mode: lsgan
                  gpu_ids: 0
                init_gain: 0.02
                init_type: normal
                 input_nc: 3
                  isTrain: True                          	[default: None]
                 lambda_A: 10.0
                 lambda_B: 10.0
          lambda_identity: 0.5
                load_iter: 0                             	[default: 0]
                load_size: 286
                       lr: 0.0002
           lr_decay_iters: 50
                lr_policy: linear
         max_dataset_size: inf
                    model: cycle_gan
               n_layers_D: 3
                     name: norText_2_cotton_cyclegan     	[default: experiment_name]
                      ndf: 64
                     netD: basic
                     netG: resnet_9blocks
                      ngf: 64
                    niter: 100
              niter_decay: 100
               no_dropout: True
                  no_flip: False
                  no_html: False
                     norm: instance
              num_threads: 4
                output_nc: 3
                    phase: train
                pool_size: 50
               preprocess: resize_and_crop
               print_freq: 100
             save_by_iter: False
          save_epoch_freq: 5
         save_latest_freq: 5000
           serial_batches: False
                   suffix:
         update_html_freq: 1000
                  verbose: False
----------------- End -------------------
dataset [UnalignedDataset] was created
The number of training images = 100
initialize network with normal
initialize network with normal
initialize network with normal
initialize network with normal
model [CycleGANModel] was created
---------- Networks initialized -------------
[Network G_A] Total number of parameters : 11.378 M
[Network G_B] Total number of parameters : 11.378 M
[Network D_A] Total number of parameters : 2.765 M
[Network D_B] Total number of parameters : 2.765 M
-----------------------------------------------
...
create web directory ./checkpoints/norText_2_cotton_cyclegan/web...
(epoch: 1, iters: 100, time: 0.896, data: 1.052) D_A: 0.384 G_A: 0.232 cycle_A: 1.791 idt_A: 0.739 D_B: 0.572 G_B: 0.620 cycle_B: 2.002 idt_B: 0.851
End of epoch 1 / 200 	 Time Taken: 92 sec
learning rate = 0.0002000
(epoch: 2, iters: 100, time: 0.865, data: 0.214) D_A: 0.219 G_A: 0.304 cycle_A: 1.499 idt_A: 0.597 D_B: 0.305 G_B: 0.699 cycle_B: 1.118 idt_B: 0.711
End of epoch 2 / 200 	 Time Taken: 87 sec
learning rate = 0.0002000
...

It looks like the model will train for 200 epochs. The final loss values:

(epoch: 195, iters: 100, time: 0.865, data: 0.209) D_A: 0.016 G_A: 0.836 cycle_A: 0.584 idt_A: 0.168 D_B: 0.124 G_B: 0.346 cycle_B: 0.530 idt_B: 0.181
saving the model at the end of epoch 195, iters 19500
End of epoch 195 / 200 	 Time Taken: 88 sec
learning rate = 0.0000119
(epoch: 196, iters: 100, time: 1.197, data: 0.237) D_A: 0.036 G_A: 0.737 cycle_A: 0.590 idt_A: 0.133 D_B: 0.014 G_B: 0.271 cycle_B: 0.425 idt_B: 0.186
End of epoch 196 / 200 	 Time Taken: 87 sec
learning rate = 0.0000099
(epoch: 197, iters: 100, time: 0.871, data: 0.218) D_A: 0.031 G_A: 0.756 cycle_A: 0.533 idt_A: 0.113 D_B: 0.037 G_B: 0.511 cycle_B: 0.370 idt_B: 0.160
End of epoch 197 / 200 	 Time Taken: 86 sec
learning rate = 0.0000079
(epoch: 198, iters: 100, time: 0.856, data: 0.217) D_A: 0.063 G_A: 0.509 cycle_A: 0.634 idt_A: 0.123 D_B: 0.308 G_B: 0.492 cycle_B: 0.478 idt_B: 0.222
End of epoch 198 / 200 	 Time Taken: 86 sec
learning rate = 0.0000059
(epoch: 199, iters: 100, time: 0.903, data: 0.203) D_A: 0.033 G_A: 0.981 cycle_A: 0.515 idt_A: 0.110 D_B: 0.111 G_B: 0.531 cycle_B: 0.381 idt_B: 0.167
End of epoch 199 / 200 	 Time Taken: 86 sec
learning rate = 0.0000040
(epoch: 200, iters: 100, time: 1.200, data: 0.219) D_A: 0.017 G_A: 1.035 cycle_A: 0.613 idt_A: 0.106 D_B: 0.030 G_B: 0.726 cycle_B: 0.384 idt_B: 0.200
saving the latest model (epoch 200, total_iters 20000)
saving the model at the end of epoch 200, iters 20000
End of epoch 200 / 200 	 Time Taken: 89 sec
learning rate = 0.0000020
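
The printed learning rate follows the linear decay policy (lr_policy: linear with niter: 100 and niter_decay: 100 in the options above): it stays at the initial 0.0002 for the first 100 epochs and then decays linearly towards zero. A small sketch that reproduces the values printed above; the exact off-by-one indexing inside the repo's scheduler may differ slightly:

def cyclegan_lr(epoch, lr0=2e-4, niter=100, niter_decay=100):
    """Approximate learning rate reported at the end of a (1-indexed) epoch."""
    if epoch <= niter:
        return lr0
    return lr0 * (niter + niter_decay + 1 - epoch) / (niter_decay + 1)

print(cyclegan_lr(2))      # 0.000200
print(cyclegan_lr(195))    # ~0.0000119
print(cyclegan_lr(200))    # ~0.0000020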

2.2 Disabling result display

The training script can visualize intermediate results while it runs.

During training, the current results can be viewed using two methods. First, if you set --display_id > 0, the results and loss plot will appear on a local graphics web server launched by visdom. To do this, you should have visdom installed and a server running by the command python -m visdom.server. The default server URL is http://localhost:8097. display_id corresponds to the window ID that is displayed on the visdom server. The visdom display functionality is turned on by default. To avoid the extra overhead of communicating with visdom set --display_id -1. Second, the intermediate results are saved to [opt.checkpoints_dir]/[opt.name]/web/ as an HTML file. To avoid this, set --no_html.

In our run the display was misconfigured and raised errors, so we disable this feature for now and will look into training-time visualization in detail later.

The error output:

WARNING:root:Setting up a new session...
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/home/xingxiangrui/env/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/xingxiangrui/env/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/xingxiangrui/env/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/xingxiangrui/env/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/home/xingxiangrui/env/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connection.py", line 181, in connect
    conn = self._new_conn()
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/visdom/__init__.py", line 548, in _send
    data=json.dumps(msg),
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/requests/sessions.py", line 581, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/home/xingxiangrui/env/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
ERROR:visdom:[Errno 97] Address family not supported by protocol
ERROR:visdom:[Errno 97] Address family not supported by protocol
ERROR:visdom:[Errno 97] Address family not supported by protocol
WARNING:visdom:Visdom python client failed to establish socket to get messages from the server. This feature is optional and can be disabled by initializing Visdom with `use_incoming_socket=False`, which will prevent waiting for this request to timeout.


Could not connect to Visdom server.
 Trying to start a server....
Command: /home/xingxiangrui/env/bin/python -m visdom.server -p 8097 &>/dev/null &
create web directory ./checkpoints/norText_2_cotton_cyclegan/web...

3. Training commands

3.1 norText_2_cotton

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan --no_html

Note: checkpoints are saved relative to the directory the command is run from (./checkpoints by default, adjustable with --checkpoints_dir), which is why ours ended up under /home/xingxiangrui.

3.2 norText_2_falText

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_falText --name norText_2_falText_cyclegan --model cycle_gan --no_html

Append --continue_train to resume training from the latest saved checkpoint.

3.3 norText_2_tear

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_tear --name nor2tear_cyclegan --model cycle_gan --no_html

4. Testing

4.1 Test commands and their meaning

python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout

The option --model test is used for generating results of CycleGAN only for one side. This option will automatically set --dataset_mode single, which only loads the images from one set. On the contrary, using --model cycle_gan requires loading and generating results in both directions, which is sometimes unnecessary. The results will be saved at ./results/. Use --results_dir {directory_path_to_save_result} to specify the results directory.

Adapted to our dataset (both directions):

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan

Two caveats

Do not append the testA folder to the norText_2_cotton dataroot here, otherwise the run fails. (The other form of the command, which uses --model test, is fine with a testA path.)

  File "/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/data/unaligned_dataset.py", line 29, in __init__
    self.A_paths = sorted(make_dataset(self.dir_A, opt.max_dataset_size))   # load images from '/path/to/data/trainA'
  File "/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/data/image_folder.py", line 25, in make_dataset
    assert os.path.isdir(dir), '%s is not a valid directory' % dir
AssertionError: /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA/testA is not a valid directory

Also, do not leave the testB folder empty:

  File "/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/data/unaligned_dataset.py", line 53, in __getitem__
    index_B = index % self.B_size
ZeroDivisionError: integer division or modulo by zero
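
The traceback itself shows why: with --model cycle_gan the UnalignedDataset pairs every A image with a B image via index % B_size, so an empty testB means a modulo by zero. A small guard of our own (not part of the repo) that can be run before launching a test:

from pathlib import Path

def check_unaligned_split(root):
    """Refuse to test if either test folder is empty (avoids the ZeroDivisionError above)."""
    for sub in ("testA", "testB"):
        n = len(list((Path(root) / sub).glob("*")))
        assert n > 0, f"{sub} is empty; UnalignedDataset indexes it with index % size"

check_unaligned_split("datasets/norText_2_cotton")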

4.2 Running the tests

Normal texture to cotton (both sides):

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan

However, this runs both sides, not one side. The one-side command is:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA --name norText_2_cotton_cyclegan --model test --no_dropout

However, the one-side command expects a single generator checkpoint named latest_net_G.pth, while the checkpoint directory only contains latest_net_G_A.pth and latest_net_G_B.pth. We are not yet sure why; this needs a closer look.

FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/nor2tear_cyclegan/latest_net_G.pth'
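
A workaround that is commonly used, assuming that --model test with direction AtoB loads a single generator checkpoint named latest_net_G.pth: copy the A-to-B generator weights to that name before testing. This is a hedged sketch, not verified against every version of the repo:

import shutil
from pathlib import Path

ckpt_dir = Path("./checkpoints/nor2tear_cyclegan")
# reuse the A->B generator as the single test-time generator
shutil.copy(ckpt_dir / "latest_net_G_A.pth", ckpt_dir / "latest_net_G.pth")

(Newer versions of the repo also appear to expose a --model_suffix option for the test model, which would let it load latest_net_G_A.pth directly; we have not tried that here.)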

Normal texture to tear (both sides):

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_tear --name nor2tear_cyclegan --model cycle_gan

Normal to tear (one side):

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_tear/testA --name nor2tear_cyclegan --model test --no_dropout

This command translates the testA images under dataroot into the B domain.

[xingxiangrui@yq01-gpu-yq-face-21-5 xingxiangrui]$ env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan
----------------- Options ---------------
             aspect_ratio: 1.0
               batch_size: 1
          checkpoints_dir: ./checkpoints
                crop_size: 256
                 dataroot: /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton	[default: None]
             dataset_mode: unaligned
                direction: AtoB
          display_winsize: 256
                    epoch: latest
                     eval: False
                  gpu_ids: 0
                init_gain: 0.02
                init_type: normal
                 input_nc: 3
                  isTrain: False                         	[default: None]
                load_iter: 0                             	[default: 0]
                load_size: 256
         max_dataset_size: inf
                    model: cycle_gan                     	[default: test]
               n_layers_D: 3
                     name: norText_2_cotton_cyclegan     	[default: experiment_name]
                      ndf: 64
                     netD: basic
                     netG: resnet_9blocks
                      ngf: 64
               no_dropout: True
                  no_flip: False
                     norm: instance
                    ntest: inf
                 num_test: 50
              num_threads: 4
                output_nc: 3
                    phase: test
               preprocess: resize_and_crop
              results_dir: ./results/
           serial_batches: False
                   suffix:
                  verbose: False
----------------- End -------------------
dataset [UnalignedDataset] was created
initialize network with normal
initialize network with normal
model [CycleGANModel] was created
loading the model from ./checkpoints/norText_2_cotton_cyclegan/latest_net_G_A.pth
loading the model from ./checkpoints/norText_2_cotton_cyclegan/latest_net_G_B.pth
---------- Networks initialized -------------
[Network G_A] Total number of parameters : 11.378 M
[Network G_B] Total number of parameters : 11.378 M
-----------------------------------------------
processing (0000)-th image... ['/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA/001.png']
processing (0005)-th image... ['/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA/006.png']
processing (0010)-th image... ['/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA/011.png']
processing (0015)-th image... ['/home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton/testA/016.png']

5. Where results are stored

Results are stored relative to the directory the command is run from.

5.1 Test results

A results folder is created there, containing the outputs of the run.

Note that if testA contains x images, the results also contain x groups, each with six images: fake_A, fake_B, real_A, real_B, rec_A, rec_B.

HTML

In results, alongside the images folder, there is an index.html file that summarizes every group of results. (Because of an issue in our run, this page is currently not displaying.)
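
If only the translated images are needed (for example the fake_B outputs when going from normal texture to cotton), a small helper of ours can copy just those files out of the results folder; this assumes the output files carry a fake_B suffix in their names, matching the grouping described above:

import shutil
from pathlib import Path

src = Path("/home/xingxiangrui/results/norText_2_cotton_cyclegan/test_latest/images")
dst = Path("/home/xingxiangrui/results/fake_B_only")
dst.mkdir(parents=True, exist_ok=True)
for img in src.glob("*fake_B*"):      # keep only the A->B translations
    shutil.copy(img, dst / img.name)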

5.2 Training checkpoints

Under checkpoints/ in the directory the training command was run from.

6. Typical loss values
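
For reference, the loss names printed below map onto the standard CycleGAN objective (LSGAN variant, matching gan_mode: lsgan in the training options), to the best of our reading of the repo; writing x_A and x_B for real images from the two domains:

G_A loss:   (D_A(G_A(x_A)) - 1)^2                                  (adversarial loss of the A->B generator)
D_A loss:   0.5 * [ (D_A(x_B) - 1)^2 + D_A(G_A(x_A))^2 ]           (discriminator on domain B)
cycle_A:    lambda_A * || G_B(G_A(x_A)) - x_A ||_1                 (lambda_A = 10 by default)
idt_A:      lambda_B * lambda_identity * || G_A(x_B) - x_B ||_1    (lambda_identity = 0.5 by default)

The G_B, D_B, cycle_B and idt_B terms are defined symmetrically. The cycle and identity terms dropping from roughly 1-2 at epoch 1 to roughly 0.4-0.6 by epoch 35 is the pattern visible in the logs below.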

6.1 norText_2_falText_cyclegan

At the start:

(epoch: 1, iters: 100, time: 0.893, data: 0.164) D_A: 0.774 G_A: 0.517 cycle_A: 1.244 idt_A: 0.813 D_B: 0.513 G_B: 0.355 cycle_B: 1.781 idt_B: 0.588
(epoch: 1, iters: 200, time: 0.861, data: 0.002) D_A: 0.583 G_A: 0.436 cycle_A: 1.244 idt_A: 0.876 D_B: 0.418 G_B: 0.458 cycle_B: 1.429 idt_B: 0.694
(epoch: 1, iters: 300, time: 0.905, data: 0.002) D_A: 0.286 G_A: 0.321 cycle_A: 1.381 idt_A: 0.670 D_B: 0.484 G_B: 0.760 cycle_B: 2.111 idt_B: 0.610
(epoch: 1, iters: 400, time: 1.155, data: 0.002) D_A: 0.261 G_A: 0.473 cycle_A: 1.048 idt_A: 0.942 D_B: 0.288 G_B: 0.223 cycle_B: 2.142 idt_B: 0.459
End of epoch 1 / 200 	 Time Taken: 419 sec
learning rate = 0.0002000
(epoch: 2, iters: 21, time: 0.867, data: 0.002) D_A: 0.278 G_A: 0.338 cycle_A: 0.865 idt_A: 0.598 D_B: 0.355 G_B: 0.113 cycle_B: 1.312 idt_B: 0.405
(epoch: 2, iters: 121, time: 0.909, data: 0.002) D_A: 0.281 G_A: 0.363 cycle_A: 1.717 idt_A: 0.407 D_B: 0.286 G_B: 0.288 cycle_B: 0.917 idt_B: 0.968
(epoch: 2, iters: 221, time: 0.864, data: 0.002) D_A: 0.249 G_A: 0.388 cycle_A: 0.993 idt_A: 0.722 D_B: 0.304 G_B: 0.582 cycle_B: 1.431 idt_B: 0.412
(epoch: 2, iters: 321, time: 1.205, data: 0.002) D_A: 0.136 G_A: 0.259 cycle_A: 1.251 idt_A: 0.797 D_B: 0.171 G_B: 0.510 cycle_B: 2.232 idt_B: 0.632
(epoch: 2, iters: 421, time: 0.857, data: 0.002) D_A: 0.283 G_A: 0.738 cycle_A: 0.740 idt_A: 0.875 D_B: 0.233 G_B: 0.340 cycle_B: 1.831 idt_B: 0.365
End of epoch 2 / 200 	 Time Taken: 417 sec
learning rate = 0.0002000
(epoch: 3, iters: 42, time: 0.864, data: 0.002) D_A: 0.212 G_A: 0.329 cycle_A: 0.900 idt_A: 1.102 D_B: 0.226 G_B: 0.656 cycle_B: 2.429 idt_B: 0.436
(epoch: 3, iters: 142, time: 0.886, data: 0.002) D_A: 0.214 G_A: 0.551 cycle_A: 1.099 idt_A: 0.471 D_B: 0.299 G_B: 1.530 cycle_B: 1.425 idt_B: 0.510
(epoch: 3, iters: 242, time: 1.224, data: 0.002) D_A: 0.367 G_A: 0.347 cycle_A: 1.350 idt_A: 0.432 D_B: 0.360 G_B: 0.109 cycle_B: 1.078 idt_B: 0.655
(epoch: 3, iters: 342, time: 0.869, data: 0.002) D_A: 0.328 G_A: 0.099 cycle_A: 1.522 idt_A: 0.450 D_B: 0.156 G_B: 0.546 cycle_B: 0.984 idt_B: 0.747
(epoch: 3, iters: 442, time: 0.860, data: 0.002) D_A: 0.395 G_A: 0.295 cycle_A: 2.179 idt_A: 0.766 D_B: 0.377 G_B: 0.247 cycle_B: 1.586 idt_B: 1.249
End of epoch 3 / 200 	 Time Taken: 417 sec
learning rate = 0.0002000

Around epoch 35:

learning rate = 0.0002000
(epoch: 33, iters: 72, time: 0.865, data: 0.002) D_A: 0.090 G_A: 0.417 cycle_A: 0.420 idt_A: 0.237 D_B: 0.113 G_B: 0.361 cycle_B: 0.559 idt_B: 0.178
(epoch: 33, iters: 172, time: 0.896, data: 0.002) D_A: 0.094 G_A: 0.302 cycle_A: 0.734 idt_A: 0.629 D_B: 0.354 G_B: 1.186 cycle_B: 1.013 idt_B: 0.311
(epoch: 33, iters: 272, time: 1.217, data: 0.002) D_A: 0.312 G_A: 0.501 cycle_A: 0.481 idt_A: 0.275 D_B: 0.163 G_B: 0.178 cycle_B: 0.595 idt_B: 0.202
(epoch: 33, iters: 372, time: 0.915, data: 0.002) D_A: 0.302 G_A: 0.279 cycle_A: 0.475 idt_A: 0.204 D_B: 0.091 G_B: 0.481 cycle_B: 0.465 idt_B: 0.187
(epoch: 33, iters: 472, time: 0.863, data: 0.002) D_A: 0.214 G_A: 0.156 cycle_A: 0.393 idt_A: 0.239 D_B: 0.086 G_B: 1.344 cycle_B: 0.620 idt_B: 0.164
End of epoch 33 / 200 	 Time Taken: 416 sec
learning rate = 0.0002000
(epoch: 34, iters: 93, time: 0.911, data: 0.002) D_A: 0.075 G_A: 0.455 cycle_A: 0.537 idt_A: 0.263 D_B: 0.180 G_B: 0.647 cycle_B: 0.701 idt_B: 0.248
(epoch: 34, iters: 193, time: 1.169, data: 0.002) D_A: 0.239 G_A: 0.330 cycle_A: 0.757 idt_A: 0.299 D_B: 0.075 G_B: 0.334 cycle_B: 0.792 idt_B: 0.358
(epoch: 34, iters: 293, time: 0.866, data: 0.002) D_A: 0.011 G_A: 0.313 cycle_A: 0.688 idt_A: 0.271 D_B: 0.121 G_B: 0.867 cycle_B: 0.591 idt_B: 0.305
(epoch: 34, iters: 393, time: 0.863, data: 0.002) D_A: 0.163 G_A: 0.341 cycle_A: 0.372 idt_A: 0.224 D_B: 0.047 G_B: 0.663 cycle_B: 0.571 idt_B: 0.153
End of epoch 34 / 200 	 Time Taken: 416 sec
learning rate = 0.0002000
(epoch: 35, iters: 14, time: 0.859, data: 0.002) D_A: 0.182 G_A: 0.433 cycle_A: 0.432 idt_A: 0.198 D_B: 0.397 G_B: 0.024 cycle_B: 0.455 idt_B: 0.206
(epoch: 35, iters: 114, time: 1.158, data: 0.002) D_A: 0.187 G_A: 0.350 cycle_A: 0.603 idt_A: 0.291 D_B: 0.136 G_B: 0.152 cycle_B: 0.891 idt_B: 0.229
(epoch: 35, iters: 214, time: 0.858, data: 0.002) D_A: 0.122 G_A: 0.542 cycle_A: 0.401 idt_A: 0.246 D_B: 0.180 G_B: 0.656 cycle_B: 0.552 idt_B: 0.174
(epoch: 35, iters: 314, time: 0.892, data: 0.002) D_A: 0.256 G_A: 0.285 cycle_A: 0.423 idt_A: 0.205 D_B: 0.161 G_B: 0.227 cycle_B: 0.450 idt_B: 0.197
(epoch: 35, iters: 414, time: 0.862, data: 0.002) D_A: 0.092 G_A: 0.535 cycle_A: 0.616 idt_A: 0.208 D_B: 0.082 G_B: 0.625 cycle_B: 0.504 idt_B: 0.316
saving the model at the end of epoch 35, iters 16765
End of epoch 35 / 200 	 Time Taken: 417 sec
learning rate = 0.0002000

7. Command summary

lambda 40

Training:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name nor2cott_lambda40 --model cycle_gan --no_html --lambda_A 40 --lambda_B 40

Testing:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name nor2cott_lambda40 --model cycle_gan --num_test 479

lambda 20, idt 0

Training:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/train.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name nor2cott_lambda40_idt_0 --model cycle_gan --no_html --lambda_A 20 --lambda_B 20 --lambda_identity 0 --continue_train

Testing:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name nor2cott_lambda40_idt_0 --model cycle_gan --num_test 479

On the four-GPU machine

load 496, crop 256, GPU 0

Training:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan --model cycle_gan --no_html --max_dataset_size 1000 --preprocess scale_width_and_crop --load_size 496 --crop_size 256 --gpu_ids 0

Testing:

python test.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan --model cycle_gan --num_test 200 --preprocess scale_width --load_size 496 --gpu_ids 1

lambda 5, idt 0, crop 440, GPU 0

Training:

First try --preprocess none, i.e. feed the 496-pixel images into the network directly; this overflows GPU memory:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda5idt0 --model cycle_gan --no_html --max_dataset_size 200 --preprocess none --gpu_ids 0 --lambda_A 5 --lambda_B 5 --lambda_identity 0

Without resizing, cropping directly to 256 runs fine. Gradually increasing the crop size: 400 works, 440 still works (we use this size), 480 overflows GPU memory. One training run takes roughly 12 hours:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda5idt0 --model cycle_gan --no_html --max_dataset_size 200 --preprocess crop --load_size 496 --crop_size 440 --gpu_ids 0 --lambda_A 5 --lambda_B 5 --lambda_identity 0
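
The --preprocess, --load_size and --crop_size options together decide what the network actually sees, which is why a 440 crop fits in memory while the full 496 image does not. A rough Python sketch of our understanding of these modes (the authoritative logic lives in data/base_dataset.py and may differ in detail):

from torchvision import transforms

def make_transform(preprocess, load_size=496, crop_size=440):
    """Rough equivalents of the --preprocess modes used above (sketch, not the repo's exact code)."""
    ops = []
    if preprocess == "resize_and_crop":
        ops.append(transforms.Resize((load_size, load_size)))   # resize both sides to load_size
        ops.append(transforms.RandomCrop(crop_size))
    elif preprocess == "scale_width_and_crop":
        ops.append(transforms.Resize(load_size))                 # note: the repo scales the width specifically
        ops.append(transforms.RandomCrop(crop_size))
    elif preprocess == "crop":
        ops.append(transforms.RandomCrop(crop_size))              # no resizing, random crop only
    # "none": feed the image at (roughly) its original size
    ops.append(transforms.ToTensor())
    return transforms.Compose(ops)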

Testing:

python test.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda5idt0 --model cycle_gan --num_test 100 --preprocess none --gpu_ids 0

lambda 2, idt 0

Training:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda2idt0 --model cycle_gan --no_html --max_dataset_size 200 --preprocess crop --load_size 496 --crop_size 440 --gpu_ids 1 --lambda_A 2 --lambda_B 2 --lambda_identity 0

Testing:

python test.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda2idt0 --model cycle_gan --num_test 100 --preprocess none --gpu_ids 1

lambda 0.5, idt 0

Training:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda0.5idt0 --model cycle_gan --no_html --max_dataset_size 200 --preprocess crop --load_size 496 --crop_size 440 --gpu_ids 2 --lambda_A 0.5 --lambda_B 0.5 --lambda_identity 0

Testing:

python test.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda0.5idt0 --model cycle_gan --num_test 100 --preprocess none --gpu_ids 2

lambda 0, idt 0

Training:

python train.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda0 --model cycle_gan --no_html --max_dataset_size 200 --preprocess crop --load_size 496 --crop_size 440 --gpu_ids 3 --lambda_A 0 --lambda_B 0 --lambda_identity 0

Testing:

python test.py --dataroot datasets/single2poly-OK-dataset --name single2poly_OK_cyclegan_lambda0 --model cycle_gan --num_test 100 --preprocess none --gpu_ids 3

Testing workflow

Upload the files to the server:

cd ....

rz

Before testing, clear the data from the previous run:

rm -rf /home/xingxiangrui/results/norText_2_cotton_cyclegan/test_latest/

Run the test in another terminal:

env/bin/python /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /home/xingxiangrui/pytorch-CycleGAN-and-pix2pix/datasets/norText_2_cotton --name norText_2_cotton_cyclegan --model cycle_gan

In the first terminal, pack the results (cd into the folder before zipping, otherwise the archive is only created under the home directory with the full path baked in, which is a hassle):

zip -r nor_2_cott_**_**.zip /home/xingxiangrui/results/norText_2_cotton_cyclegan/test_latest/images

Download to the local machine:

sz /home/xingxiangrui/results/norText_2_cotton_cyclegan/test_latest/**.zip
