TensorFlow Deep Learning in Practice, Notes (1): Training Your Own Data with the Models Bundled with TensorFlow-Slim

Contents

0. Preparation

1. Data processing: converting images to TFRecord format

2. Training the model

3. Evaluating the trained model


Note: this walkthrough follows the Inception V3 classification example that ships with the slim source code.

The pretrained slim models include Inception V1, Inception V2, Inception V3, Inception V4, MobileNet V1, MobileNet V2, NASNet, PNASNet, and others; pick whichever fits your needs.

0. Preparation

0.1 Prepare your own dataset. I have already prepared one; suppose it is a fruit dataset (fruit). My directory structure is:

slim
...my_data
   ...fruits
      ...fruit_photos
         ...red apple
         ...pear
         ...peach
         ...green apple
Other files are omitted; only the dataset directory structure is shown.
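
Before converting anything, it is worth confirming the class folders and image counts. The snippet below is only a sanity check (not part of the slim code) and assumes the my_data/fruits/fruit_photos layout shown above:

import os

photo_dir = 'my_data/fruits/fruit_photos'  # adjust if your layout differs
total = 0
for class_name in sorted(os.listdir(photo_dir)):
    class_path = os.path.join(photo_dir, class_name)
    if not os.path.isdir(class_path):
        continue
    count = len(os.listdir(class_path))
    total += count
    print('%-12s %d images' % (class_name, count))
print('total: %d images' % total)

The per-class counts also come in handy later when choosing _NUM_VALIDATION and filling in SPLITS_TO_SIZES.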

0.2 Download the slim source code: https://github.com/tensorflow/models/tree/master/research/slim. That page cannot be downloaded on its own; go back up to the master branch of the tensorflow/models repository and download the whole repository from there.

All of the following steps are carried out inside the downloaded slim folder.

1. Data processing: converting images to TFRecord format

In the datasets folder there is a file named download_and_convert_flowers.py. Copy it, rename the copy convert_fruit.py, and open it to make the changes (four in total, marked with comments in the code):

#coding=utf-8

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math
import os
import random
import sys

import tensorflow as tf

from datasets import dataset_utils

# The URL where the Flowers data can be downloaded.
_DATA_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz'

# The number of images in the validation set.
_NUM_VALIDATION = 350

# Seed for repeatability.
_RANDOM_SEED = 0

# The number of shards per dataset split.
_NUM_SHARDS = 5


class ImageReader(object):
  """Helper class that provides TensorFlow image coding utilities."""

  def __init__(self):
    # Initializes function that decodes RGB JPEG data.
    self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
    self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)

  def read_image_dims(self, sess, image_data):
    image = self.decode_jpeg(sess, image_data)
    return image.shape[0], image.shape[1]

  def decode_jpeg(self, sess, image_data):
    image = sess.run(self._decode_jpeg,
                     feed_dict={self._decode_jpeg_data: image_data})
    assert len(image.shape) == 3
    assert image.shape[2] == 3
    return image


def _get_filenames_and_classes(dataset_dir):
  """Returns a list of filenames and inferred class names.

  Args:
    dataset_dir: A directory containing a set of subdirectories representing
      class names. Each subdirectory should contain PNG or JPG encoded images.

  Returns:
    A list of image file paths, relative to `dataset_dir` and the list of
    subdirectories, representing class names.
  """

  # Modification 1: point this at your own image directory (fruit_photos)
  flower_root = os.path.join(dataset_dir, 'fruit_photos')
  directories = []
  class_names = []
  for filename in os.listdir(flower_root):
    path = os.path.join(flower_root, filename)
    if os.path.isdir(path):
      directories.append(path)
      class_names.append(filename)

  photo_filenames = []
  for directory in directories:
    for filename in os.listdir(directory):
      path = os.path.join(directory, filename)
      photo_filenames.append(path)

  return photo_filenames, sorted(class_names)


def _get_dataset_filename(dataset_dir, split_name, shard_id):
  # Modification 2: output file prefix changed to 'fruit'
  output_filename = 'fruit_%s_%05d-of-%05d.tfrecord' % (
      split_name, shard_id, _NUM_SHARDS)
  return os.path.join(dataset_dir, output_filename)


def _convert_dataset(split_name, filenames, class_names_to_ids, dataset_dir):
  """Converts the given filenames to a TFRecord dataset.

  Args:
    split_name: The name of the dataset, either 'train' or 'validation'.
    filenames: A list of absolute paths to png or jpg images.
    class_names_to_ids: A dictionary from class names (strings) to ids
      (integers).
    dataset_dir: The directory where the converted datasets are stored.
  """
  assert split_name in ['train', 'validation']

  num_per_shard = int(math.ceil(len(filenames) / float(_NUM_SHARDS)))

  with tf.Graph().as_default():
    image_reader = ImageReader()

    with tf.Session('') as sess:

      for shard_id in range(_NUM_SHARDS):
        output_filename = _get_dataset_filename(
            dataset_dir, split_name, shard_id)

        with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
          start_ndx = shard_id * num_per_shard
          end_ndx = min((shard_id+1) * num_per_shard, len(filenames))
          for i in range(start_ndx, end_ndx):
            sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
                i+1, len(filenames), shard_id))
            sys.stdout.flush()

            # Read the filename:
            image_data = tf.gfile.FastGFile(filenames[i], 'rb').read()
            height, width = image_reader.read_image_dims(sess, image_data)

            class_name = os.path.basename(os.path.dirname(filenames[i]))
            class_id = class_names_to_ids[class_name]

            example = dataset_utils.image_to_tfexample(
                image_data, b'jpg', height, width, class_id)
            tfrecord_writer.write(example.SerializeToString())

  sys.stdout.write('\n')
  sys.stdout.flush()


def _clean_up_temporary_files(dataset_dir):
  """Removes temporary files used to create the dataset.

  Args:
    dataset_dir: The directory where the temporary files are stored.
  """
  filename = _DATA_URL.split('/')[-1]
  filepath = os.path.join(dataset_dir, filename)
  tf.gfile.Remove(filepath)

  tmp_dir = os.path.join(dataset_dir, 'flower_photos')
  tf.gfile.DeleteRecursively(tmp_dir)


def _dataset_exists(dataset_dir):
  for split_name in ['train', 'validation']:
    for shard_id in range(_NUM_SHARDS):
      output_filename = _get_dataset_filename(
          dataset_dir, split_name, shard_id)
      if not tf.gfile.Exists(output_filename):
        return False
  return True


def run(dataset_dir):
  """Runs the download and conversion operation.

  Args:
    dataset_dir: The dataset directory where the dataset is stored.
  """
  if not tf.gfile.Exists(dataset_dir):
    tf.gfile.MakeDirs(dataset_dir)

  if _dataset_exists(dataset_dir):
    print('Dataset files already exist. Exiting without re-creating them.')
    return

  # Modification 3: no download needed, so this call is commented out
  #dataset_utils.download_and_uncompress_tarball(_DATA_URL, dataset_dir)
  photo_filenames, class_names = _get_filenames_and_classes(dataset_dir)
  class_names_to_ids = dict(zip(class_names, range(len(class_names))))

  # Divide into train and test:
  random.seed(_RANDOM_SEED)
  random.shuffle(photo_filenames)
  training_filenames = photo_filenames[_NUM_VALIDATION:]
  validation_filenames = photo_filenames[:_NUM_VALIDATION]

  # First, convert the training and validation sets.
  _convert_dataset('train', training_filenames, class_names_to_ids,
                   dataset_dir)
  _convert_dataset('validation', validation_filenames, class_names_to_ids,
                   dataset_dir)

  # Finally, write the labels file:
  labels_to_class_names = dict(zip(range(len(class_names)), class_names))
  dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
  
  # Modification 4: commented out so the original images are not deleted
  #_clean_up_temporary_files(dataset_dir)
  print('\nFinished converting the Fruits dataset!')
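
Once a few shards have been written, you can open one and confirm that the fields produced by dataset_utils.image_to_tfexample are really there. This is just a sketch; the shard name follows the 'fruit_%s_%05d-of-%05d.tfrecord' pattern used in _get_dataset_filename above, and the path is assumed to be my_data/fruits/:

import tensorflow as tf

# First record of the first training shard (name assumed from the pattern above).
path = 'my_data/fruits/fruit_train_00000-of-00005.tfrecord'
record = next(tf.python_io.tf_record_iterator(path))

example = tf.train.Example()
example.ParseFromString(record)
feature = example.features.feature
print('format:', feature['image/format'].bytes_list.value[0])       # b'jpg'
print('label :', feature['image/class/label'].int64_list.value[0])  # integer class id
print('bytes :', len(feature['image/encoded'].bytes_list.value[0])) # encoded JPEG size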

In the slim folder, open download_and_convert_data.py and add the import:

from datasets import convert_fruit

Then, inside main(), add another branch:

elif FLAGS.dataset_name == 'fruit':
  convert_fruit.run(FLAGS.dataset_dir)

The modified download_and_convert_data.py then looks like this:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

from datasets import download_and_convert_cifar10
from datasets import download_and_convert_flowers
from datasets import download_and_convert_mnist
from datasets import convert_fruit

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string(
    'dataset_name',
    None,
    'The name of the dataset to convert, one of "cifar10", "flowers", "mnist", "fruit".')

tf.app.flags.DEFINE_string(
    'dataset_dir',
    None,
    'The directory where the output TFRecords and temporary files are saved.')


def main(_):
  if not FLAGS.dataset_name:
    raise ValueError('You must supply the dataset name with --dataset_name')
  if not FLAGS.dataset_dir:
    raise ValueError('You must supply the dataset directory with --dataset_dir')

  if FLAGS.dataset_name == 'cifar10':
    download_and_convert_cifar10.run(FLAGS.dataset_dir)
  elif FLAGS.dataset_name == 'flowers':
    download_and_convert_flowers.run(FLAGS.dataset_dir)
  elif FLAGS.dataset_name == 'mnist':
    download_and_convert_mnist.run(FLAGS.dataset_dir)
  elif FLAGS.dataset_name == 'fruit':
    convert_fruit.run(FLAGS.dataset_dir)
  else:
    raise ValueError(
        'dataset_name [%s] was not recognized.' % FLAGS.dataset_name)

if __name__ == '__main__':
  tf.app.run()

Finally, run the following command in a terminal:

python download_and_convert_data.py \
--dataset_name=fruit \
--dataset_dir=my_data/fruits/

Once it finishes, the generated TFRecord shards and the labels file appear under my_data/fruits/.
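
The exact record counts per split are needed in step 2 (SPLITS_TO_SIZES), so it is convenient to count them straight from the generated shards. A minimal sketch, assuming the files live under my_data/fruits/:

import glob
import tensorflow as tf

for split in ['train', 'validation']:
    shards = glob.glob('my_data/fruits/fruit_%s_*.tfrecord' % split)
    num_records = sum(1 for shard in shards
                        for _ in tf.python_io.tf_record_iterator(shard))
    print('%s: %d records in %d shards' % (split, num_records, len(shards)))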

2. Training the model

Copy slim/datasets/flowers.py and rename the copy fruit.py. Change  _FILE_PATTERN = 'flowers_%s_*.tfrecord'  to  _FILE_PATTERN = 'fruit_%s_*.tfrecord'.

Change  SPLITS_TO_SIZES = {'train': 3320, 'validation': 350}  to  SPLITS_TO_SIZES = {'train': 4655, 'validation': 350}.

Change  _NUM_CLASSES = 5  to  _NUM_CLASSES = 4.

Here train is the number of training images and validation the number of validation images; I have 5005 images in total (5005 - 350 = 4655 for training) and four classes.

The modified fruit.py is:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import tensorflow as tf

from datasets import dataset_utils

slim = tf.contrib.slim

_FILE_PATTERN = 'fruit_%s_*.tfrecord'

SPLITS_TO_SIZES = {'train': 4655, 'validation': 350}

_NUM_CLASSES = 4

_ITEMS_TO_DESCRIPTIONS = {
    'image': 'A color image of varying size.',
    'label': 'A single integer between 0 and 3',
}


def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
  """Gets a dataset tuple with instructions for reading flowers.

  Args:
    split_name: A train/validation split name.
    dataset_dir: The base directory of the dataset sources.
    file_pattern: The file pattern to use when matching the dataset sources.
      It is assumed that the pattern contains a '%s' string so that the split
      name can be inserted.
    reader: The TensorFlow reader type.

  Returns:
    A `Dataset` namedtuple.

  Raises:
    ValueError: if `split_name` is not a valid train/validation split.
  """
  if split_name not in SPLITS_TO_SIZES:
    raise ValueError('split name %s was not recognized.' % split_name)

  if not file_pattern:
    file_pattern = _FILE_PATTERN
  file_pattern = os.path.join(dataset_dir, file_pattern % split_name)

  # Allowing None in the signature so that dataset_factory can use the default.
  if reader is None:
    reader = tf.TFRecordReader

  keys_to_features = {
      'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
      'image/format': tf.FixedLenFeature((), tf.string, default_value='png'),
      'image/class/label': tf.FixedLenFeature(
          [], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),
  }

  items_to_handlers = {
      'image': slim.tfexample_decoder.Image(),
      'label': slim.tfexample_decoder.Tensor('image/class/label'),
  }

  decoder = slim.tfexample_decoder.TFExampleDecoder(
      keys_to_features, items_to_handlers)

  labels_to_names = None
  if dataset_utils.has_labels(dataset_dir):
    labels_to_names = dataset_utils.read_label_file(dataset_dir)

  return slim.dataset.Dataset(
      data_sources=file_pattern,
      reader=reader,
      decoder=decoder,
      num_samples=SPLITS_TO_SIZES[split_name],
      items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
      num_classes=_NUM_CLASSES,
      labels_to_names=labels_to_names)
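
To confirm the new module works before launching a long training run, you can read one example back through slim's DatasetDataProvider, which is also how train_image_classifier.py consumes the dataset. A minimal sketch, assuming the TFRecords from step 1 are in my_data/fruits/:

import tensorflow as tf
from datasets import fruit

slim = tf.contrib.slim

with tf.Graph().as_default():
    dataset = fruit.get_split('train', 'my_data/fruits/')
    provider = slim.dataset_data_provider.DatasetDataProvider(
        dataset, common_queue_capacity=32, common_queue_min=1)
    image, label = provider.get(['image', 'label'])

    with tf.Session() as sess:
        with slim.queues.QueueRunners(sess):
            np_image, np_label = sess.run([image, label])
            print('image shape:', np_image.shape, 'label id:', np_label)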

Open slim/datasets/dataset_factory.py and make the following changes:

Add: from datasets import fruit

and replace this block:

datasets_map = {
    'cifar10': cifar10,
    'flowers': flowers,
    'imagenet': imagenet,
    'mnist': mnist,
}

with:

datasets_map = {
    'cifar10': cifar10,
    'flowers': flowers,
    'imagenet': imagenet,
    'mnist': mnist,
    'fruit': fruit,
}
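
As a quick check that the new entry is picked up (dataset_factory.get_dataset is what the training script calls to load the dataset), a short snippet like this should report the sizes configured above; the path is assumed as before:

from datasets import dataset_factory

dataset = dataset_factory.get_dataset('fruit', 'train', 'my_data/fruits/')
print(dataset.num_samples, dataset.num_classes)  # expect 4655 and 4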

Open a terminal and run the following command to start training:

python3 train_image_classifier.py \
  --train_dir=./my_save_model/fruit-models/inception_v3 \
  --dataset_name=fruit \
  --dataset_split_name=train \
  --dataset_dir=my_data/fruits/ \
  --model_name=inception_v3 \
  --max_number_of_steps=10000 \
  --batch_size=32 \
  --learning_rate=0.0001 \
  --learning_rate_decay_type=fixed \
  --save_interval_secs=60 \
  --save_summaries_secs=60 \
  --log_every_n_steps=10 \
  --optimizer=rmsprop \
  --weight_decay=0.00004

Alternatively, save the command above in a new train.sh file, which makes it easier to tweak later.
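
While training runs, slim writes a checkpoint to --train_dir every --save_interval_secs seconds. A quick way to see whether anything has been saved yet (train_dir assumed as in the command above):

import tensorflow as tf

ckpt = tf.train.latest_checkpoint('./my_save_model/fruit-models/inception_v3')
print('latest checkpoint:', ckpt)  # e.g. .../model.ckpt-10000 after the final step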

 

3. Evaluating the trained model

Open a terminal in the slim folder and run:

python3 eval_image_classifier.py \
  --checkpoint_path=./my_save_model/fruit-models/inception_v3 \
  --eval_dir=./my_save_model/fruit-models/inception_v3 \
  --dataset_name=fruit \
  --dataset_split_name=validation \
  --dataset_dir=my_data/fruits/ \
  --model_name=inception_v3

Alternatively, save the command above in a new eval.sh file, which makes it easier to tweak later.
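
eval_image_classifier.py prints overall accuracy. If you later need to map predicted label ids back to class names (for example when running single-image inference), the labels file written in step 1 by dataset_utils.write_label_file has the mapping. A small sketch, path assumed as before:

from datasets import dataset_utils

labels_to_names = dataset_utils.read_label_file('my_data/fruits/')
for label_id, name in sorted(labels_to_names.items()):
    print(label_id, name)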

 

Training here only produces ckpt checkpoint files, not a final frozen model (a pb file). For how to freeze the ckpt files into a pb file, see my other post: TensorFlow Deep Learning in Practice, Notes (2): Freezing a trained model.

 

References:

1. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim

2. https://blog.csdn.net/rookie_wei/article/details/80796009

3. https://blog.csdn.net/wlzard/article/details/77689311

4. Chinese translation of reference 1: https://blog.csdn.net/chaipp0607/article/details/74139895

 
