TensorFlow error: Key Variable_4 not found in checkpoint

I ran into a problem: in a real project I needed to load two different trained models one after another, and the second load raised an error. The fix is described below.


    index = getModel1(q1, q2)    # load and run the first model
    ...
    func()
    ...
    index2 = getModel2(q3, q4)   # loading the second model fails here
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key Variable_4 not found in checkpoint
	 [[node save_1/RestoreV2 (defined at Model.py:80)  = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]

While loading two trained TensorFlow models in sequence, this error appears, claiming a key is missing from the checkpoint even though the checkpoint path is correct. The cause is that the TensorFlow computation graph was never reset between the two loads: it has been verified experimentally that the variables created for the second model get renamed, so the Saver looks for names that do not exist in the second checkpoint.
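A minimal sketch of the renaming (TF 1.x; the variable values are placeholders): when variables with the same default names are created twice in one default graph, TensorFlow de-duplicates the names with numeric suffixes, which is exactly where a key like Variable_4 comes from:

    import tensorflow as tf  # TF 1.x API

    # First model: variables get the default names Variable, Variable_1.
    v1 = tf.Variable(1.0)
    v2 = tf.Variable(2.0)
    print(v1.name, v2.name)   # Variable:0 Variable_1:0

    # Second model, built without resetting the graph: the names are
    # de-duplicated to Variable_2, Variable_3 -- but the checkpoint was
    # saved under Variable and Variable_1, so the restore fails.
    v3 = tf.Variable(3.0)
    v4 = tf.Variable(4.0)
    print(v3.name, v4.name)   # Variable_2:0 Variable_3:0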


The fix is simply to call

tf.reset_default_graph()

before loading the second model.
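Putting it together, a sketch of the two-load pattern with the fix applied (the load_model helper and the file names model1.meta, ./model1, etc. are placeholders, not from the original post):

    import tensorflow as tf  # TF 1.x API

    def load_model(meta_path, ckpt_dir):
        # Hypothetical helper: rebuild the graph from the .meta file and
        # restore the weights from the latest checkpoint in ckpt_dir.
        saver = tf.train.import_meta_graph(meta_path)
        sess = tf.Session()
        saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))
        return sess

    sess1 = load_model('model1.meta', './model1')
    # ... run inference with the first model ...
    sess1.close()

    # Clear the default graph before the second load; otherwise the new
    # variables are renamed and "Key Variable_N not found" is raised.
    tf.reset_default_graph()

    sess2 = load_model('model2.meta', './model2')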
