Problems Encountered While Debugging TensorFlow in Practice, and How I Solved Them (2)

Program source code:

# -*- encoding=utf-8 -*-
from numpy import genfromtxt
import numpy as np
import random
import sys
import csv
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn import preprocessing


reload(sys)
sys.setdefaultencoding("utf-8")


dataPath = r"./BB.csv"
num_epochs1 = 10
rows = 100
turn = 2000


def read_data(file_queue):
    # Define the Reader
    reader = tf.TextLineReader(skip_header_lines=1)
    key, value = reader.read(file_queue)


    # Define the Decoder
    # record_defaults specifies the number of columns and their types/default values.
    # For an M*N matrix in the CSV this is a 1*N list; if a column holds floats,
    # use [1.0] (or [0.]) rather than [1].
    defaults = [[''], ['null'], [''], [0.], [0.], [0.], [0.], [0], [""], [0], ['null'], [""]]
    # One entry is needed for every column in the CSV.
    city, origin, destination, origin_lat, origin_lng, destination_lat, destination_lng, \
    distance, weature, duration, week_time, create_time = tf.decode_csv(records=value, record_defaults=defaults)


    return distance, duration


def batch_input(filename, num_epochs):
    # Create a first-in-first-out queue and a QueueRunner.
    # (Note: the num_epochs argument is shadowed here by the hard-coded 10,
    # which is why the traceback below shows limit=10.)
    file_queue = tf.train.string_input_producer(string_tensor=[filename], num_epochs=10)
    example, label = read_data(file_queue)
    min_after_dequeue = 100
    batch_size = 10
    capacity = min_after_dequeue+3*batch_size
    example_batch, label_batch = tf.train.shuffle_batch([example, label], batch_size=batch_size, capacity=capacity,
                                                     min_after_dequeue=min_after_dequeue)
    return example_batch, label_batch

#exampleBatch1, labelBatch1 = batch_input(dataPath, num_epochs=100)  # correct position: build the input pipeline here, before the session

with tf.Session() as sess:
    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
    sess.run(init_op)

    exampleBatch1, labelBatch1 = batch_input(dataPath, num_epochs=100)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    try:
        while not coord.should_stop():
            example_batch, label_batch = sess.run([exampleBatch1, labelBatch1])
            print("example_batch is:")
            print(example_batch)
    except tf.errors.OutOfRangeError:  
        print('Done training -- epoch limit reached') 
    finally:
        coord.request_stop()
        coord.join(threads)

The program above batch-reads data from a CSV file, but running it produces the following error:

File "better_nonlinear_one_input_batch1.py", line 64, in
    coord.join(threads)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/coordinator.py", line 389, in join
    six.reraise(*self._exc_info_to_raise)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
    enqueue_callable()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1231, in _single_operation_run
    target_list_as_strings, status, None)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value input_producer/limit_epochs/epochs
[[Node: input_producer/limit_epochs/CountUpTo = CountUpTo[T=DT_INT64, _class=["loc:@input_producer/limit_epochs/epochs"], limit=10, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_producer/limit_epochs/epochs)]]

Searching online for similar errors, every answer said to add initialization, i.e. the following two lines:

init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())

sess.run(init_op)


But the code already contained these two lines, and the problem persisted. I was completely puzzled at first and even wondered whether the example programs themselves were broken. I decided to track down the cause step by step, and here I have to thank

http://www.it1352.com/586287.html

Without the program provided on that page I might still be groping in the dark. Its author had run into the same problem, but his program ran correctly after the two lines above were added. Comparing the two programs line by line, I finally pinpointed the problem. The source program above contains this line:

exampleBatch1, labelBatch1 = batch_input(dataPath, num_epochs=100)

Its position is critical. Moving the line

exampleBatch1, labelBatch1 = batch_input(dataPath, num_epochs=100)

above

with tf.Session() as sess:

makes the error disappear (compare the two positions of this call in the source code above). I had originally placed it inside the with block, which produced this puzzling problem. The reason is the order of graph construction: batch_input builds the string_input_producer, and with num_epochs set that op creates the local variable input_producer/limit_epochs/epochs. In the buggy version this variable comes into existence only after init_op has already been built and run, so it is never initialized, and the FailedPreconditionError follows.

