TensorFlow can be roughly divided into two subsystems: a front end and a back end. The front end provides the programming model and is responsible for building the computation graph; the back end provides the runtime environment and is responsible for executing the graph.
① How to use sessions:
1. The standard approach:
Create a session with the Session() constructor of the Session class.
Call the session's close() method to release the resources held by this session.
This approach is simple, but if the program raises an error before close() is reached, close() never runs, which easily leaks resources.
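A minimal sketch of the standard approach; the try/finally is an addition of mine (not in the original notes) that guards close() against the error case described above:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b

sess = tf.compat.v1.Session()   # create the session explicitly
try:
    print(sess.run(result))     # [4. 6.]
finally:
    sess.close()                # release resources even if run() raised
```

Wrapping the run in try/finally removes the leak risk, though the with/as form below is the more idiomatic fix.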
2. with/as:
A context manager; the format is:
with tf.compat.v1.Session() as sess:
    sess.run(…)
② When defining computations, TensorFlow automatically creates a default computation graph. Analogously, a session can be registered as the default session with:
with sess.as_default():
Inside this block, tensors in the corresponding graph can be evaluated via eval() without passing the session explicitly.
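A short sketch of as_default(): the registered session is the one eval() picks up, and it is not closed when the with block exits:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b

sess = tf.compat.v1.Session()
with sess.as_default():      # register sess as the default session
    print(result.eval())     # no session argument needed: [4. 6.]
sess.close()                 # as_default() does not close sess on exit
```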
③ Creating a session that registers itself as the default directly:
sess = tf.compat.v1.InteractiveSession()
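A sketch of the InteractiveSession shortcut: it installs itself as the default session at construction time, which is convenient in interactive shells:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

sess = tf.compat.v1.InteractiveSession()  # becomes the default session
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
print((a + b).eval())                     # [4. 6.]
sess.close()
```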
④ Evaluating a tensor's value inside a session:
Evaluating the tensor from the program in Note (2):
......
......
result = a + b
.......
print(result.eval())
Results:
......
......
......
2020-06-10 16:25:08.171836: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-06-10 16:25:08.334874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 16:25:08.335558: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2020-06-10 16:25:08.336069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2020-06-10 16:25:08.341047: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x269d1617000 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-10 16:25:08.341649: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
[4. 6.]
[4. 6.]
......
......
......
with tf.compat.v1.Session(graph=g2) as sess:
    tf.compat.v1.global_variables_initializer().run()
    with tf.compat.v1.variable_scope("", reuse=True):
        print(sess.run(a))
        print(sess.run(b))
        print(a.eval(session=sess))
Results:
......
......
......
2020-06-10 16:27:02.996298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 16:27:02.996904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]
[0. 0.]
[1. 1.]
[0. 0.]
Compared with Session.run():
eval() is equivalent to tf.compat.v1.get_default_session().run(t);
eval() evaluates only one tensor per call;
run() can be passed a list of tensors and evaluate them all in one pass.
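A sketch contrasting the two (the difference tensor d is my own example): eval() returns one tensor per call, while run() can fetch a list in a single pass:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
s = a + b
d = a - b

with tf.compat.v1.Session() as sess:
    print(s.eval())                  # one tensor per eval() call: [4. 6.]
    s_val, d_val = sess.run([s, d])  # both tensors in one pass
    print(s_val, d_val)              # [4. 6.] [-2. -2.]
```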
⑤ Session configuration parameters:
1) A session manages CPU and GPU devices and network connections.
2) Session has three main parameters:
target: the execution engine to connect to. Empty means the in-process local engine; a grpc:// URL connects to all devices managed by that server.
config: a ConfigProto holding configuration options.
graph: the graph this session runs; if empty, the current default graph is used.
3) ConfigProto supports many options; two are common:
log_device_placement=True: logs which device each operation is placed on.
allow_soft_placement=True: when an operation cannot run on the GPU, it is automatically moved to the CPU (automatic device selection).
4) Using config:
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
# grow the GPU memory allocation on demand
session = tf.compat.v1.Session(config=config)

config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
# cap this process at 50% of GPU memory
session = tf.compat.v1.Session(config=config)
Note that one of these memory options should be set: by default TensorFlow grabs nearly all GPU memory up front, so the GPU appears 100% occupied and warnings can appear.
⑥ tf.placeholder:
A placeholder supplies input data dynamically while the session runs. Without it, feeding large amounts of data into the graph would require defining a constant node for every input (like a and b defined earlier), flooding the graph with input nodes and wasting graph capacity.
Format:
tf.compat.v1.placeholder(dtype, shape, name)
For example:
a = tf.compat.v1.placeholder(tf.float32, shape=(2,), name="input")
a = tf.compat.v1.placeholder(tf.float32, shape=(4, 2), name="input")  # a 4×2 matrix (4 rows, 2 elements each)
# dtype must be specified; shape and name may be omitted (or shape=None)
The values of a and b can then be supplied at run time via feed_dict in run(result, feed_dict={…}).
For example:
sess.run(result, feed_dict={a: [1.0, 2.0]})
This approach solves the many-inputs problem.
For example:
program1:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)  # assign the ConfigProto to config before use
config.gpu_options.per_process_gpu_memory_fraction = 0.9
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b
sess = tf.compat.v1.Session(config=config)
print(sess.run(result))
b = tf.constant([5.0, 6.0], name="b")
result = a + b
print(sess.run(result))
b = tf.constant([7.0, 8.0], name="b")
result = a + b
print(sess.run(result))
b = tf.constant([9.0, 10.0], name="b")
result = a + b
print(sess.run(result))
Results:
......
......
......
Skipping registering GPU devices...
2020-06-10 21:23:15.423641: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 21:23:15.424093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2020-06-10 21:23:15.424393: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2020-06-10 21:23:15.429428: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x228ad4038f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-10 21:23:15.430001: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
[4. 6.]
[6. 8.]
[ 8. 10.]
[10. 12.]
program2:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# eager execution must be disabled, otherwise tf.placeholder() raises:
# RuntimeError: tf.placeholder() is not compatible with eager execution.
a = tf.compat.v1.placeholder(tf.float32, name="input")
b = tf.compat.v1.placeholder(tf.float32, name="input")
result = a + b
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
config.gpu_options.per_process_gpu_memory_fraction = 0.9
sess = tf.compat.v1.Session(config=config)
print(sess.run(result, feed_dict={a: [1.0, 2.0], b: [[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]]}))
Results:
......
......
......
Skipping registering GPU devices...
2020-06-10 21:30:49.502263: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-10 21:30:49.502737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2020-06-10 21:30:49.503013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2020-06-10 21:30:49.508552: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1dd6fbf0c70 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-10 21:30:49.509124: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
[[ 4. 6.]
[ 6. 8.]
[ 8. 10.]
[10. 12.]]
The second program can process a whole batch in a single run() call when the input shapes are compatible, and no tensors need to be redefined for each new input; when shape is left unspecified, the placeholder takes the shape of whatever is fed. The batched result above comes from broadcasting: the shape-(2,) value fed to a is added to each row of the shape-(4, 2) value fed to b.
While testing program1, with:
......
......
......
result = a + b
print(sess.run(result,feed_dict={a:[1.0,2.0],b:[3.0,4.0]}))
Results:
[4. 6.]
This shows that feed_dict also works on ordinary tensors such as constants: the fed value overrides the tensor's own value. However, after changing the dimensions of b:
......
......
......
result = a + b
print(sess.run(result,feed_dict={a:[1.0,2.0],b:[[3.0,4.0],[5.0,6.0],[7.0,8.0],[9.0,10.0]]}))
the run fails with:
ValueError: Cannot feed value of shape (4, 2) for Tensor 'b_3:0', which has shape '(2,)'
Note: when b is defined with an explicit shape, feeding a value whose shape does not match raises the same error.
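A sketch of that failure mode (the placeholder name input_a is hypothetical): feeding a value whose shape conflicts with an explicitly declared shape raises the same ValueError:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.float32, shape=(2,), name="input_a")

with tf.compat.v1.Session() as sess:
    print(sess.run(a, feed_dict={a: [1.0, 2.0]}))  # shape matches: [1. 2.]
    try:
        sess.run(a, feed_dict={a: [[1.0, 2.0], [3.0, 4.0]]})  # shape (2, 2) vs declared (2,)
    except ValueError as err:
        print(err)  # Cannot feed value of shape (2, 2) for Tensor ...
```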