CamVid Dataset Introduction: Datasets for Semantic Segmentation


Background

Semantic segmentation means assigning every pixel of an image to one of a set of categories.

On the algorithm side there are GrabCut from the classical era, TextonForest from the machine-learning era, and FCN, SegNet, Dilated Convolutions, DeepLab (v1 & v2), RefineNet, PSPNet, Large Kernel Matters, DeepLab v3, and others from the deep-learning era. Naturally, deep learning dominates today.

This article, compiled by the gemfield team, describes datasets for semantic segmentation in the deep-learning era.

Datasets

Stanford Background Dataset

This dataset contains 715 images selected from existing public datasets, roughly 320×240 pixels each, with the label classes sky, tree, road, building, mountain, and foreground object. The official site does not state per-class image counts; figure a few hundred images per class.

Sift Flow Dataset

Contains 2,688 images and 33 labels:

awning, balcony, bird, boat, bridge, building, bus, car, cow, crosswalk, desert, door, fence, field, grass, moon, mountain, person, plant, pole, river, road, rock, sand, sea, sidewalk, sign, sky, staircase, streetlight, sun, tree, window

Each class appears to have on the order of a hundred images; the official site gives no exact counts.

Barcelona Dataset

building road sidewalk tree sky car wall person motorbike grass ground sea stand stair plant boat window bus door central rese bridge van fence trash crosswalk field sign umbrella bicycle truck sculpture poster balcony pole awning curb streetlight traffic light water column path head box blind bench bird handrail windshield wheel mountain parkingmete table text floor chair flag firehydrant pot lamp brand name roof dog headlight license plate bag tail light tower manhole paper air condition pipe chimney light face clock picture glass mirror leaf phone knob airplane animal apple basket bed book bookshelf bottle bowl branch brushes cabinet candle carpet cat ceiling cheetah closet cloud coffeemach cone counter top cpu crocodile cup curtain cushion deer dishwasher drawer duck elephant eye faucet fish flower foliage fork fridge frog furniture goat hand hippo jar keyboard knife land landscape laptop leopard lion lizard magazine mouse mousepad mug napkin object orange outlet painting pen pillow plate pumpkin river rock sand screen shelf sink snake snow socket sofa speaker spoon stove sun switch teapot television tiger towel vase wire worktop zebra

The official site does not describe per-class object counts.

COCO Dataset

COCO is a large-scale object detection, segmentation, and captioning dataset.

330K images (more than 200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories.

MSRC Dataset (Microsoft Research in Cambridge)

MSRC Dataset V1: 240 images, 9 recognizable object classes:

building, grass, tree, cow, horse, sheep, sky, mountain, aeroplane, water, face, car, bicycle

Note that this dataset does not contain enough training regions to learn reasonable models for horse, water, mountain, and sheep.

MSRC Dataset V2: 591 images, 23 recognizable object classes:

building, grass, tree, cow, horse, sheep, sky, mountain, aeroplane, water, face, car, bicycle, flower, sign, bird, book, chair, road, cat, dog, body, boat. The horse and mountain classes have too few samples; using them is not recommended.

LITS Liver Tumor Segmentation Dataset

A medical dataset for liver tumor segmentation.

KITTI

A benchmark suite for evaluating computer-vision algorithms in autonomous-driving scenarios.

The raw data are categorized as 'Road', 'City', 'Residential', 'Campus', and 'Person'.

PASCAL-Context

The training and validation sets contain 10,103 images; the test set contains 9,637 images. The images are the same as in PASCAL VOC.

There are 400+ labels in total; per-category instance counts are listed at https://cs.stanford.edu/~roozbeh/pascal-context/

Data from Games Database

The dataset consists of 24,966 densely labeled frames, split into 10 parts for convenience. Its class labels are compatible with the CamVid and CityScapes datasets.

HumanParsing-Dataset (human parsing)

background, hat, hair, sunglass, upper-clothes, skirt, pants, ...

Multi-HumanParsing-Dataset V2 contains 25,403 images, each with at least two people.

Besides background, there are 58 classes in total.

The official site enumerates all 58 classes: https://lv-mhp.github.io/dataset

LIP(Look Into Person)

The human images in the LIP dataset are cropped from the Microsoft COCO training and validation sets. It defines 19 human-part or clothing labels: hat, hair, sunglasses, upper-clothes, dress, coat, socks, pants, gloves, scarf, skirt, jumpsuit, face, right arm, left arm, right leg, left leg, right shoe, and left shoe, plus a background label. The dataset contains 50,462 images in total, including 19,081 full-body images, 13,672 upper-body images, 403 lower-body images, 3,386 images with the head missing, 2,778 back-view images, and 21,028 images with occlusions.

Mapillary Vistas Dataset

25,000 high-resolution images (18,000 for training, 2,000 for validation, 5,000 for testing).

152 object categories, 100 of them with instance-specific annotations. A diverse street-level imagery dataset with pixel-accurate and instance-specific human annotations, for understanding street scenes around the world.

Microsoft AirSim

A simulation platform for autonomous driving.

MIT Scene Parsing Benchmark

The MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for scene-parsing algorithms. The data for this benchmark comes from the ADE20K dataset.

COCO 2017 Stuff Segmentation Challenge

The COCO 2017 stuff segmentation challenge.

ADE20K

Training set: 20,210 images; validation set: 2,000 images.

Includes classes such as sky, water, and grass. The official site lists every object with more than 250 instances: http://groups.csail.mit.edu/vision/datasets/ADE20K/

INRIA Annotations for Graz-02

For segmenting person, car, and bike; see the official site for per-class image counts.

Daimler dataset

Daimler Pedestrian Benchmark Data Sets

Datasets for analyzing pedestrian behavior.

ISBI Challenge: Segmentation of neuronal structures in EM stacks

Segmentation of neuronal structures in EM (electron microscopy) image stacks.

INRIA Annotations for Graz-02 (IG02)

For segmenting person, car, and bike; identical to the INRIA Annotations for Graz-02 entry above.

Pratheepan Dataset

A human skin detection dataset.

FacePhoto: Total Images = 32

FamilyPhoto: Total Images = 46

Clothing Co-Parsing (CCP) Dataset

For clothing segmentation.

2,098 high-resolution street-fashion photos with 59 labels in total.

Inria Aerial Image

Segmentation of aerial imagery.

The images cover dissimilar urban settlements, from densely populated areas (e.g., San Francisco's financial district) to alpine towns (e.g., Lienz in the Austrian Tyrol); the task is segmenting buildings in aerial images.

ApolloScape

A scene-parsing dataset provided by Baidu.

The open dataset provides a cumulative 146,997 frames of image data with pixel-level annotations and pose information, plus corresponding static-background depth images for download.

Includes sky; 34 object classes in total.

The official site describes every object class in detail: http://apolloscape.auto/scene.html

UrbanMapper3D

Uses satellite imagery and recently available 3D height-data products to advance the state of the art in automated building detection.

Building segmentation in satellite imagery, used for a competition.

RoadDetector

Road-network segmentation in satellite imagery, used for a competition.

Reading the Datasets

Semantic-segmentation annotations mainly come in the following formats:

1. COCO's RLE or polygon format (strictly speaking this is instance segmentation, but COCO is important enough to list here).

See: the COCO dataset annotation format.
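For a quick illustration, here is a minimal sketch of decoding such an annotation into a binary mask with pycocotools (the annotation path and the choice of image are placeholders):

from pycocotools.coco import COCO

#placeholder path; point it at a real COCO instances file
coco = COCO('annotations/instances_val2017.json')
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
mask = coco.annToMask(anns[0])   #polygon/RLE -> 0/1 numpy array
print(mask.shape, mask.sum())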

2. PNG format

The PNG format is the more involved one; there are three mainstream variants: P mode, grayscale mode, and RGB mode.

2.1 P mode

Pascal VOC uses PIL's P mode. Reading a PNG in P mode:

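A minimal sketch of the inspection (gemfield.png stands in for any VOC-style label image; numpy is used to look at the stored values):

from PIL import Image
import numpy as np

im = Image.open('gemfield.png')   #placeholder name for a VOC label PNG
print(im.mode)                    #'P', i.e. palette/index mode
print(np.unique(np.array(im)))    #the stored values are class indices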

Its pixel values are exactly the 21 class indices 0-20; in addition, objects are surrounded by a white border whose pixel value is 255.

P mode is palette mode, i.e. indexed mode. In this mode each pixel stores an index value, which in a segmentation dataset usually corresponds to the class index. But an index ultimately has to be looked up in a palette, so where does the palette live? It can be viewed as an array sitting right after the PNG header (near the front of the file):

#gemfield.png is a P-mode image
#the palette in this file was not set manually; it is PIL's default initialization
gemfield@ai:/bigdata/gemfield$ hexdump gemfield.png
0000000 5089 474e 0a0d 0a1a 0000 0d00 4849 5244
0000010 0000 0001 0000 0001 0308 0000 6b00 58ac
#starting from the 00 inside 0045 on this line, we enter the palette array; every 3 bytes is one color ->
0000020 0054 0300 5000 544c 0045 0000 0101 0201
0000030 0202 0303 0403 0404 0505 0605 0606 0707
0000040 0807 0808 0909 0a09 0a0a 0b0b 0c0b 0c0c
0000050 0d0d 0e0d 0e0e 0f0f 100f 1010 1111 1211
0000060 1212 1313 1413 1414 1515 1615 1616 1717
0000070 1817 1818 1919 1a19 1a1a 1b1b 1c1b 1c1c
0000080 1d1d 1e1d 1e1e 1f1f 201f 2020 2121 2221
0000090 2222 2323 2423 2424 2525 2625 2626 2727
00000a0 2827 2828 2929 2a29 2a2a 2b2b 2c2b 2c2c
00000b0 2d2d 2e2d 2e2e 2f2f 302f 3030 3131 3231
00000c0 3232 3333 3433 3434 3535 3635 3636 3737
00000d0 3837 3838 3939 3a39 3a3a 3b3b 3c3b 3c3c
00000e0 3d3d 3e3d 3e3e 3f3f 403f 4040 4141 4241
00000f0 4242 4343 4443 4444 4545 4645 4646 4747
0000100 4847 4848 4949 4a49 4a4a 4b4b 4c4b 4c4c
0000110 4d4d 4e4d 4e4e 4f4f 504f 5050 5151 5251
0000120 5252 5353 5453 5454 5555 5655 5656 5757
0000130 5857 5858 5959 5a59 5a5a 5b5b 5c5b 5c5c
0000140 5d5d 5e5d 5e5e 5f5f 605f 6060 6161 6261
0000150 6262 6363 6463 6464 6565 6665 6666 6767
0000160 6867 6868 6969 6a69 6a6a 6b6b 6c6b 6c6c
0000170 6d6d 6e6d 6e6e 6f6f 706f 7070 7171 7271
0000180 7272 7373 7473 7474 7575 7675 7676 7777
0000190 7877 7878 7979 7a79 7a7a 7b7b 7c7b 7c7c
00001a0 7d7d 7e7d 7e7e 7f7f 807f 8080 8181 8281
00001b0 8282 8383 8483 8484 8585 8685 8686 8787
00001c0 8887 8888 8989 8a89 8a8a 8b8b 8c8b 8c8c
00001d0 8d8d 8e8d 8e8e 8f8f 908f 9090 9191 9291
00001e0 9292 9393 9493 9494 9595 9695 9696 9797
00001f0 9897 9898 9999 9a99 9a9a 9b9b 9c9b 9c9c
0000200 9d9d 9e9d 9e9e 9f9f a09f a0a0 a1a1 a2a1
0000210 a2a2 a3a3 a4a3 a4a4 a5a5 a6a5 a6a6 a7a7
0000220 a8a7 a8a8 a9a9 aaa9 aaaa abab acab acac
0000230 adad aead aeae afaf b0af b0b0 b1b1 b2b1
0000240 b2b2 b3b3 b4b3 b4b4 b5b5 b6b5 b6b6 b7b7
0000250 b8b7 b8b8 b9b9 bab9 baba bbbb bcbb bcbc
0000260 bdbd bebd bebe bfbf c0bf c0c0 c1c1 c2c1
0000270 c2c2 c3c3 c4c3 c4c4 c5c5 c6c5 c6c6 c7c7
0000280 c8c7 c8c8 c9c9 cac9 caca cbcb cccb cccc
0000290 cdcd cecd cece cfcf d0cf d0d0 d1d1 d2d1
00002a0 d2d2 d3d3 d4d3 d4d4 d5d5 d6d5 d6d6 d7d7
00002b0 d8d7 d8d8 d9d9 dad9 dada dbdb dcdb dcdc
00002c0 dddd dedd dede dfdf e0df e0e0 e1e1 e2e1
00002d0 e2e2 e3e3 e4e3 e4e4 e5e5 e6e5 e6e6 e7e7
00002e0 e8e7 e8e8 e9e9 eae9 eaea ebeb eceb ecec
00002f0 eded eeed eeee efef f0ef f0f0 f1f1 f2f1
0000300 f2f2 f3f3 f4f3 f4f4 f5f5 f6f5 f6f6 f7f7
0000310 f8f7 f8f8 f9f9 faf9 fafa fbfb fcfb fcfc
0000320 fdfd fefd fefe ffff e2ff 5db0 007d 0200
#the palette array ends at the ff on the line above...
......
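You do not have to read the hexdump by hand; PIL exposes the same palette directly, as this small sketch shows:

from PIL import Image

im = Image.open('gemfield.png')
p = im.getpalette()   #flattened [R0, G0, B0, R1, G1, B1, ...], or None if not P mode
print(p[:12])         #the first four palette entries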

If you write P-mode images yourself with PIL, you need to inject the palette; otherwise the file format will be correct, but the image will not display with the colors you expect. Inject the palette like this:

>>> from PIL import Image
>>> im = Image.open('gemfield.png')
>>> p = [0, 0, 0, 128, 0, 0, 0, 128, 0, 128, 128, 0, 0, 0, 128, 128, 0, 128, 0, 
128, 128, 128, 128, 128, 64, 0, 0, 192, 0, 0, 64, 128, 0, 192, 128, 0, 64, 0, 128,
 192, 0, 128, 64, 128, 128, 192, 128, 128, 0, 64, 0, 128, 64, 0, 0, 192, 0, 128, 
192, 0, 0, 64, 128, 128, 64, 128, 0, 192, 128, 128, 192, 128, 64, 64, 0, 192, 64, 
0, 64, 192, 0, 192, 192, 0, 64, 64, 128, 192, 64, 128, 64, 192, 128, 192, 192, 128, 
0, 0, 64, 128, 0, 64, 0, 128, 64, 128, 128, 64, 0, 0, 192, 128, 0, 192, 0, 128, 192, 
128, 128, 192, 64, 0, 64, 192, 0, 64, 64, 128, 64, 192, 128, 64, 64, 0, 192, 192, 0, 
192, 64, 128, 192, 192, 128, 192, 0, 64, 64, 128, 64, 64, 0, 192, 64, 128, 192, 64, 0, 
64, 192, 128, 64, 192, 0, 192, 192, 128, 192, 192, 64, 64, 64, 192, 64, 64, 64, 192,
 64, 192, 192, 64, 64, 64, 192, 192, 64, 192, 64, 192, 192, 192, 192, 192, 32, 0, 0, 
160, 0, 0, 32, 128, 0, 160, 128, 0, 32, 0, 128, 160, 0, 128, 32, 128, 128, 160, 128, 
128, 96, 0, 0, 224, 0, 0, 96, 128, 0, 224, 128, 0, 96, 0, 128, 224, 0, 128, 96, 128, 
128, 224, 128, 128, 32, 64, 0, 160, 64, 0, 32, 192, 0, 160, 192, 0, 32, 64, 128, 160, 
64, 128, 32, 192, 128, 160, 192, 128, 96, 64, 0, 224, 64, 0, 96, 192, 0, 224, 192, 0, 
96, 64, 128, 224, 64, 128, 96, 192, 128, 224, 192, 128, 32, 0, 64, 160, 0, 64, 32, 128, 
64, 160, 128, 64, 32, 0, 192, 160, 0, 192, 32, 128, 192, 160, 128, 192, 96, 0, 64, 224,
 0, 64, 96, 128, 64, 224, 128, 64, 96, 0, 192, 224, 0, 192, 96, 128, 192, 224, 128, 192, 
32, 64, 64, 160, 64, 64, 32, 192, 64, 160, 192, 64, 32, 64, 192, 160, 64, 192, 32, 192, 
192, 160, 192, 192, 96, 64, 64, 224, 64, 64, 96, 192, 64, 224, 192, 64, 96, 64, 192, 224, 
64, 192, 96, 192, 192, 224, 192, 192, 0, 32, 0, 128, 32, 0, 0, 160, 0, 128, 160, 0, 0, 32, 
128, 128, 32, 128, 0, 160, 128, 128, 160, 128, 64, 32, 0, 192, 32, 0, 64, 160, 0, 192, 160, 
0, 64, 32, 128, 192, 32, 128, 64, 160, 128, 192, 160, 128, 0, 96, 0, 128, 96, 0, 0, 224, 0, 
128, 224, 0, 0, 96, 128, 128, 96, 128, 0, 224, 128, 128, 224, 128, 64, 96, 0, 192, 96, 0, 64, 
224, 0, 192, 224, 0, 64, 96, 128, 192, 96, 128, 64, 224, 128, 192, 224, 128, 0, 32, 64, 128, 
32, 64, 0, 160, 64, 128, 160, 64, 0, 32, 192, 128, 32, 192, 0, 160, 192, 128, 160, 192, 64, 
32, 64, 192, 32, 64, 64, 160, 64, 192, 160, 64, 64, 32, 192, 192, 32, 192, 64, 160, 192, 192, 
160, 192, 0, 96, 64, 128, 96, 64, 0, 224, 64, 128, 224, 64, 0, 96, 192, 128, 96, 192, 0, 224, 
192, 128, 224, 192, 64, 96, 64, 192, 96, 64, 64, 224, 64, 192, 224, 64, 64, 96, 192, 192, 96, 
192, 64, 224, 192, 192, 224, 192, 32, 32, 0, 160, 32, 0, 32, 160, 0, 160, 160, 0, 32, 32, 128, 
160, 32, 128, 32, 160, 128, 160, 160, 128, 96, 32, 0, 224, 32, 0, 96, 160, 0, 224, 160, 0, 96, 
32, 128, 224, 32, 128, 96, 160, 128, 224, 160, 128, 32, 96, 0, 160, 96, 0, 32, 224, 0, 160, 224, 
0, 32, 96, 128, 160, 96, 128, 32, 224, 128, 160, 224, 128, 96, 96, 0, 224, 96, 0, 96, 224, 0, 
224, 224, 0, 96, 96, 128, 224, 96, 128, 96, 224, 128, 224, 224, 128, 32, 32, 64, 160, 32, 64, 
32, 160, 64, 160, 160, 64, 32, 32, 192, 160, 32, 192, 32, 160, 192, 160, 160, 192, 96, 32, 64, 
224, 32, 64, 96, 160, 64, 224, 160, 64, 96, 32, 192, 224, 32, 192, 96, 160, 192, 224, 160, 192, 
32, 96, 64, 160, 96, 64, 32, 224, 64, 160, 224, 64, 32, 96, 192, 160, 96, 192, 32, 224, 192, 
160, 224, 192, 96, 96, 64, 224, 96, 64, 96, 224, 64, 224, 224, 64, 96, 96, 192, 224, 96, 192, 
96, 224, 192, 224, 224, 192]
>>> 
>>> im.putpalette(p)
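The long list above is the standard Pascal VOC colormap. It does not have to be hardcoded; here is a sketch of the usual bit-interleaving construction, which generates the same 768 values:

def voc_colormap(n=256):
    #spread each class id's bits across the high bits of R, G, B
    cmap = []
    for i in range(n):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        cmap += [r, g, b]
    return cmap

p = voc_colormap()   #identical to the hand-written list above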

This is why a P-mode image opened in ordinary image-viewing software looks right; it is friendly to humans.

2.2 RGB mode

The ADE20K dataset's labels are also PNGs, but in RGB mode. In RGB mode each pixel directly stores RGB values (typically two of the channels encode the semantic class and one channel distinguishes instances). To plug into mainstream projects you either convert these labels to the Pascal VOC format, or change the project's dataloader to handle the dataset natively.

Looking at the output of the file command also makes the difference apparent:

gemfield@ai:/bigdata2/dataset/ADE20K/ADE20K_2016_07_26/images/training/s/sky$ file ADE_train_00016353_seg.png
ADE_train_00016353_seg.png: PNG image data, 450 x 450, 8-bit/color RGB, non-interlaced

gemfield@ai:/bigdata/gemfield/dataset/VOCdevkit/VOC2012/SegmentationClass$ file 2009_000421.png
2009_000421.png: PNG image data, 500 x 375, 8-bit colormap, non-interlaced
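To recover class indices from such an RGB label, here is a sketch assuming ADE20K's published encoding, where the class index is (R/10)*256 + G and the B channel carries instance information; verify against the official loader before relying on it:

from PIL import Image
import numpy as np

seg = np.array(Image.open('ADE_train_00016353_seg.png'))
r = seg[..., 0].astype(np.int32)
g = seg[..., 1].astype(np.int32)
class_mask = (r // 10) * 256 + g   #assumed ADE20K class encoding
print(np.unique(class_mask))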

2.3 grayscale mode

The ADEChallengeData2016 dataset uses this mode. It is really just the txt format (or mat format) below wearing PNG clothing: each pixel's gray value is directly the class index.
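Reading it is correspondingly simple; a sketch, with a placeholder filename:

from PIL import Image
import numpy as np

label = np.array(Image.open('ADE_train_00000001.png'))  #placeholder filename
print(label.shape, np.unique(label))  #the gray values are the class indices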

3. txt format

This format is straightforward: for a 256×256 image, the txt file holds 256 rows and 256 columns, and every element of this 256×256 matrix is a value from 0 to N (N being the number of classes), indicating which class the corresponding image pixel belongs to.
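A two-line sketch for loading such a file (the filename is a placeholder):

import numpy as np

label = np.loadtxt('gemfield_label.txt', dtype=np.int64)  #placeholder filename
print(label.shape)  #(256, 256); values in 0..N are class indices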

4. mat format

This is MATLAB's data storage format; it can be read with scipy.io:

import scipy.io

#mat is python dict type
mat = scipy.io.loadmat('tallbuilding_urban1210.mat')


>>> for k in mat:
...   print(k)
... 
S
__version__
names
__header__
__globals__
>>> 
>>> 
>>> mat['S'].shape
(256, 256)
>>> 
>>> mat['names'].shape
(1, 33)
>>> 
>>> for i in mat['names'][0]:
...   print(i)
... 
[u'awning']
[u'balcony']
[u'bird']
[u'boat']
[u'bridge']
[u'building']
[u'bus']
[u'car']
[u'cow']
[u'crosswalk']
[u'desert']
[u'door']
[u'fence']
[u'field']
[u'grass']
[u'moon']
[u'mountain']
[u'person']
[u'plant']
[u'pole']
[u'river']
[u'road']
[u'rock']
[u'sand']
[u'sea']
[u'sidewalk']
[u'sign']
[u'sky']
[u'staircase']
[u'streetlight']
[u'sun']
[u'tree']
[u'window']
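To bridge this into the PNG world, here is a sketch that converts the label matrix above into a P-mode PNG; a simple gray ramp keeps the palette self-contained, but the VOC colormap from section 2.1 works the same way:

import numpy as np
import scipy.io
from PIL import Image

mat = scipy.io.loadmat('tallbuilding_urban1210.mat')
label = Image.fromarray(mat['S'].astype(np.uint8), mode='P')
#a palette is just a flattened [R, G, B, ...] list; gray ramp for illustration
p = [v for i in range(256) for v in (i, i, i)]
label.putpalette(p)
label.save('tallbuilding_urban1210.png')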

Some Algorithm Implementations

semantic-segmentation-pytorch

#clone project
gemfield@skyweb:/home/gemfield/github/# git clone https://github.com/CSAILVision/semantic-segmentation-pytorch

#download the ADE20K dataset
bash download_ADE20K.sh

#train; note that this project must run on multiple GPUs, otherwise you will hit an error
gemfield@skyweb:/home/gemfield/github/semantic-segmentation-pytorch# python3 train.py --num_gpus 4 
Input arguments:
weights_decoder  
padding_constant 8
list_val         ./data/validation.odgt
beta1            0.9
arch_decoder     ppm_deepsup
lr_decoder       0.02
ckpt             ./ckpt
weight_decay     0.0001
lr_pow           0.9
num_epoch        20
fc_dim           2048
start_epoch      1
fix_bn           0
num_gpus         4
imgMaxSize       1000
lr_encoder       0.02
segm_downsampling_rate 8
disp_iter        20
arch_encoder     resnet50dilated
id               baseline
epoch_iters      5000
deep_sup_scale   0.4
list_train       ./data/train.odgt
num_class        150
root_dataset     ./data/
random_flip      True
workers          16
optim            SGD
batch_size_per_gpu 2
seed             304
weights_encoder  
imgSize          [300, 375, 450, 525, 600]
Model ID: baseline-resnet50dilated-ppm_deepsup-ngpus4-batchSize8-imgMaxSize1000-paddingConst8-segmDownsampleRate8-LR_encoder0.02-LR_decoder0.02-epoch20
# samples: 20210
1 Epoch = 5000 iters
Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/EPUCJ5YS56B2EIAKRB2OI23DJG:/var/lib/docker/overlay2/l/TK4HNZFB37R2PI3DSOZM27QO5P:/var/lib/docker/overlay2/l/UECUDJLPQV56GEPBM27IKIB7U7:/var/lib/docker/overlay2/l/BYP3KOMKF7QGPGQYTKRULQMXAH:/var/lib/docker/overlay2/l/FBDDOYNOYKM5KZXKPUNHQHRYVP:/var/lib/docker/overlay2/l/O74KLIBSE27UBEH3X2H2R3PBAF:/var/lib/docker/overlay2/l/DWJLNVVRB6HLSYXIQMBRTAAQME:/var/lib/docker/overlay2/l/K3KZPWVDCX22Q5VFEDSOK4AC2Y:/var/lib/docker/overlay2/l/7ZNOVGJUVFTG4'
Epoch: [1][0/5000], Time: 13.54, Data: 0.22, lr_encoder: 0.020000, lr_decoder: 0.020000, Accuracy: 0.29, Loss: 7.914350

Epoch: [1][20/5000], Time: 3.69, Data: 0.12, lr_encoder: 0.019997, lr_decoder: 0.019997, Accuracy: 31.99, Loss: 5.207924
Epoch: [1][40/5000], Time: 3.20, Data: 0.11, lr_encoder: 0.019993, lr_decoder: 0.019993, Accuracy: 36.15, Loss: 4.550761
Epoch: [1][60/5000], Time: 3.04, Data: 0.11, lr_encoder: 0.019989, lr_decoder: 0.019989, Accuracy: 38.60, Loss: 4.246425
Epoch: [1][80/5000], Time: 2.99, Data: 0.11, lr_encoder: 0.019986, lr_decoder: 0.019986, Accuracy: 39.35, Loss: 4.115996
Epoch: [1][100/5000], Time: 2.99, Data: 0.11, lr_encoder: 0.019982, lr_decoder: 0.019982, Accuracy: 40.71, Loss: 4.001512
Epoch: [1][120/5000], Time: 2.98, Data: 0.11, lr_encoder: 0.019979, lr_decoder: 0.019979, Accuracy: 41.31, Loss: 3.917756
Epoch: [1][140/5000], Time: 2.98, Data: 0.11, lr_encoder: 0.019975, lr_decoder: 0.019975, Accuracy: 42.46, Loss: 3.801038
Epoch: [1][160/5000], Time: 2.99, Data: 0.11, lr_encoder: 0.019971, lr_decoder: 0.019971, Accuracy: 43.28, Loss: 3.724424
......
Epoch: [1][420/5000], Time: 2.96, Data: 0.11, lr_encoder: 0.019925, lr_decoder: 0.019925, Accuracy: 48.41, Loss: 3.239377
Epoch: [1][440/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.019921, lr_decoder: 0.019921, Accuracy: 48.91, Loss: 3.205016
......
Epoch: [1][1000/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.019820, lr_decoder: 0.019820, Accuracy: 54.68, Loss: 2.774183
Epoch: [1][1020/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.019816, lr_decoder: 0.019816, Accuracy: 54.75, Loss: 2.769125
Epoch: [1][1040/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.019813, lr_decoder: 0.019813, Accuracy: 54.94, Loss: 2.756475
......
Epoch: [1][3380/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.019391, lr_decoder: 0.019391, Accuracy: 61.36, Loss: 2.276570
Epoch: [1][3400/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.019387, lr_decoder: 0.019387, Accuracy: 61.39, Loss: 2.274549
Epoch: [1][3420/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.019384, lr_decoder: 0.019384, Accuracy: 61.43, Loss: 2.272028
Epoch: [1][3440/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.019380, lr_decoder: 0.019380, Accuracy: 61.48, Loss: 2.269013
......
Epoch: [1][4900/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.019116, lr_decoder: 0.019116, Accuracy: 63.73, Loss: 2.119519
Epoch: [1][4920/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.019112, lr_decoder: 0.019112, Accuracy: 63.75, Loss: 2.118082
Epoch: [1][4940/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.019109, lr_decoder: 0.019109, Accuracy: 63.78, Loss: 2.116459
Epoch: [1][4960/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.019105, lr_decoder: 0.019105, Accuracy: 63.78, Loss: 2.116231
Epoch: [1][4980/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.019102, lr_decoder: 0.019102, Accuracy: 63.80, Loss: 2.114727
Saving checkpoints...
Epoch: [2][0/5000], Time: 1.43, Data: 0.00, lr_encoder: 0.019098, lr_decoder: 0.019098, Accuracy: 55.89, Loss: 2.537975
Epoch: [2][20/5000], Time: 2.84, Data: 0.10, lr_encoder: 0.019094, lr_decoder: 0.019094, Accuracy: 69.77, Loss: 1.727590
Epoch: [2][40/5000], Time: 2.88, Data: 0.10, lr_encoder: 0.019091, lr_decoder: 0.019091, Accuracy: 68.81, Loss: 1.786401
Epoch: [2][60/5000], Time: 2.95, Data: 0.10, lr_encoder: 0.019087, lr_decoder: 0.019087, Accuracy: 68.45, Loss: 1.799721
......
Epoch: [2][1360/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.018852, lr_decoder: 0.018852, Accuracy: 70.75, Loss: 1.662803
Epoch: [2][1380/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.018848, lr_decoder: 0.018848, Accuracy: 70.74, Loss: 1.662384
Epoch: [2][1400/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.018844, lr_decoder: 0.018844, Accuracy: 70.74, Loss: 1.661786
Epoch: [2][1420/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.018841, lr_decoder: 0.018841, Accuracy: 70.77, Loss: 1.659776
Epoch: [2][1440/5000], Time: 2.97, Data: 0.11, lr_encoder: 0.018837, lr_decoder: 0.018837, Accuracy: 70.78, Loss: 1.659779
......
Epoch: [2][4920/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.018205, lr_decoder: 0.018205, Accuracy: 72.08, Loss: 1.578305
Epoch: [2][4940/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.018202, lr_decoder: 0.018202, Accuracy: 72.09, Loss: 1.577838
Epoch: [2][4960/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.018198, lr_decoder: 0.018198, Accuracy: 72.10, Loss: 1.577406
Epoch: [2][4980/5000], Time: 2.95, Data: 0.11, lr_encoder: 0.018194, lr_decoder: 0.018194, Accuracy: 72.11, Loss: 1.577146
Saving checkpoints...
Epoch: [3][0/5000], Time: 3.37, Data: 0.00, lr_encoder: 0.018191, lr_decoder: 0.018191, Accuracy: 85.19, Loss: 1.070056
Epoch: [3][20/5000], Time: 3.14, Data: 0.11, lr_encoder: 0.018187, lr_decoder: 0.018187, Accuracy: 72.73, Loss: 1.527696
Epoch: [3][40/5000], Time: 3.08, Data: 0.11, lr_encoder: 0.018184, lr_decoder: 0.018184, Accuracy: 73.96, Loss: 1.460679
Epoch: [3][60/5000], Time: 3.01, Data: 0.11, lr_encoder: 0.018180, lr_decoder: 0.018180, Accuracy: 73.45, Loss: 1.478827
......
Epoch: [3][4940/5000], Time: 2.93, Data: 0.11, lr_encoder: 0.017290, lr_decoder: 0.017290, Accuracy: 75.06, Loss: 1.404855
Epoch: [3][4960/5000], Time: 2.93, Data: 0.11, lr_encoder: 0.017286, lr_decoder: 0.017286, Accuracy: 75.06, Loss: 1.405186
Epoch: [3][4980/5000], Time: 2.93, Data: 0.11, lr_encoder: 0.017282, lr_decoder: 0.017282, Accuracy: 75.06, Loss: 1.404594
Saving checkpoints...
Epoch: [4][0/5000], Time: 1.78, Data: 0.00, lr_encoder: 0.017279, lr_decoder: 0.017279, Accuracy: 84.19, Loss: 0.967333
Epoch: [4][20/5000], Time: 2.86, Data: 0.10, lr_encoder: 0.017275, lr_decoder: 0.017275, Accuracy: 76.91, Loss: 1.278598
Epoch: [4][40/5000], Time: 2.95, Data: 0.10, lr_encoder: 0.017271, lr_decoder: 0.017271, Accuracy: 77.28, Loss: 1.278876
Epoch: [4][60/5000], Time: 2.99, Data: 0.11, lr_encoder: 0.017268, lr_decoder: 0.017268, Accuracy: 76.14, Loss: 1.314496
......
Epoch: [4][1940/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.016923, lr_decoder: 0.016923, Accuracy: 76.73, Loss: 1.314272
Epoch: [4][1960/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.016920, lr_decoder: 0.016920, Accuracy: 76.72, Loss: 1.314943
Epoch: [4][1980/5000], Time: 2.94, Data: 0.11, lr_encoder: 0.016916, lr_decoder: 0.016916, Accuracy: 76.73, Loss: 1.314677
......

An implementation of DeepLab v3+: pytorch-deeplab-xception

#clone project
gemfield@sky:/home/gemfield/github/# git clone https://github.com/jfzhang95/pytorch-deeplab-xception

#prepare the Pascal VOC dataset
gemfield@sky:/home/gemfield/github/pytorch-deeplab-xception# cat mypath.py 
class Path(object):
    @staticmethod
    def db_root_dir(dataset):
        if dataset == 'pascal':
            return '/home/gemfield/datasets/VOCdevkit/VOC2012/'  # folder that contains VOCdevkit/.
......


#train
gemfield@sky:/home/gemfield/github/pytorch-deeplab-xception# python3 train.py --backbone resnet --lr 0.007 --workers 4  --epochs 10 --batch-size 32 --gpu-ids 0,1,2,3 --checkname deeplab-resnet --eval-interval 1 --dataset pascal
Namespace(backbone='resnet', base_size=513, batch_size=32, checkname='deeplab-resnet', crop_size=513, cuda=True, dataset='pascal', epochs=10, eval_interval=1, freeze_bn=False, ft=False, gpu_ids=[0, 1, 2, 3], loss_type='ce', lr=0.007, lr_scheduler='poly', momentum=0.9, nesterov=False, no_cuda=False, no_val=False, out_stride=16, resume=None, seed=1, start_epoch=0, sync_bn=True, test_batch_size=32, use_balanced_weights=False, use_sbd=False, weight_decay=0.0005, workers=4)
Number of images in train: 12000
Number of images in val: 1078
Using poly LR Scheduler!
Starting Epoch: 0
Total Epoches: 10
  0%|                                                                                                                                                                | 0/375 [00:00<?, ?it/s]
Epoches 0, learning rate = 0.0070,                 previous best = 0.0000
Train loss: 0.006: 100%|###################################################################################################################################| 375/375 [28:18<00:00,  4.50s/it]
[Epoch: 0, numImages: 12000]
Loss: 2.351
Test loss: 0.006: 100%|######################################################################################################################################| 34/34 [01:54<00:00,  3.01s/it]
Validation:
[Epoch: 0, numImages:  1078]
Acc:0.9597080196165629, Acc_class:0.9577062235138765, mIoU:0.8992881267840838, fwIoU: 0.924346695146962
Loss: 0.220
......
