Normalization shows up all over TensorFlow code, and the framework offers several ways to implement it. This post uses L2 normalization as the running example.
Function prototype: tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)
x: the input tensor;
dim: the dimension along which to L2-normalize; for a 2-D input, 0 normalizes each column and 1 normalizes each row;
epsilon: the lower bound on the squared sum; the divisor is sqrt(max(sum(x**2), epsilon)), which guards against division by zero.
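Concretely, along the chosen dimension the result is x / sqrt(max(sum(x**2), epsilon)). A minimal sketch of that computation done by hand (TF 1.x graph mode, matching the example later in this post), compared against the built-in op:
import tensorflow as tf

x = tf.constant([[3.0, 4.0]])
# Divide by sqrt(max(sum(x^2), epsilon)), which is what l2_normalize computes
squared_sum = tf.reduce_sum(tf.square(x), axis=1, keep_dims=True)
manual = x / tf.sqrt(tf.maximum(squared_sum, 1e-12))
builtin = tf.nn.l2_normalize(x, dim=1)
with tf.Session() as sess:
    print(sess.run(manual))   # [[0.6 0.8]]
    print(sess.run(builtin))  # [[0.6 0.8]]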
Function prototype: tf.norm(tensor, ord='euclidean', axis=None, keep_dims=False, name=None)
Computes the norm of vectors, matrices, and tensors; by default it computes the Euclidean (L2) norm.
tensor: the input whose norm is computed
ord: which norm to compute (e.g. 'euclidean', 1, 2, np.inf)
axis: the dimension along which to compute the norm; for a 2-D input, 0 works column-wise and 1 row-wise; with axis=None the whole tensor is flattened and a single norm is returned
keep_dims: whether to keep the reduced dimension (as length 1) in the result; defaults to False
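A quick sketch of how axis and ord change what gets reduced (TF 1.x; the values in the comments are approximate):
import tensorflow as tf

m = tf.constant([[3.0, 4.0], [6.0, 8.0]])
with tf.Session() as sess:
    print(sess.run(tf.norm(m, axis=1)))         # [ 5. 10.]        row-wise L2 norms
    print(sess.run(tf.norm(m, axis=0)))         # [~6.708 ~8.944]  column-wise L2 norms
    print(sess.run(tf.norm(m)))                 # ~11.18           single L2 norm over all elements
    print(sess.run(tf.norm(m, ord=1, axis=1)))  # [ 7. 14.]        row-wise L1 norms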
Function prototype: tf.div(x, y, name=None)
Performs element-wise division x / y; standard broadcasting applies when the shapes differ.
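Broadcasting is what makes the manual normalization below work: a (3, 3) matrix divided by a (3, 1) column of norms divides each row by its own norm. A tiny illustration:
import tensorflow as tf

a = tf.constant([[2.0, 4.0], [3.0, 9.0]])
b = tf.constant([[2.0], [3.0]])  # one divisor per row, shape (2, 1)
with tf.Session() as sess:
    print(sess.run(tf.div(a, b)))  # [[1. 2.]
                                   #  [1. 3.]]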
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time       : 2019/4/22 16:17
@Author     : Li Shanlu
@File       : tensorflow_norm.py
@Software   : PyCharm
@Description: Compare tf.nn.l2_normalize with the tf.norm + tf.div combination
"""
import tensorflow as tf

input_data = tf.constant([[1.0, 2, 3], [2.0, 3, 4], [3.0, 4, 5]])

# Built-in row-wise L2 normalization
output_1 = tf.nn.l2_normalize(input_data, dim=1, epsilon=1e-10, name='nn_l2_norm')

# Manual version: compute each row's L2 norm (Euclidean length), then divide by it
normal = tf.norm(input_data, axis=1, keep_dims=True, name='normal')
output_2 = tf.div(input_data, normal, name='div_normal')

# Normalizing an already-normalized tensor: its row norms should be ~1
normal_1 = tf.norm(output_1, axis=1, keep_dims=True, name='normal_1')
output_3 = tf.div(output_1, normal_1, name='div_normal_1')

with tf.Session() as sess:
    print("input_data:\n", sess.run(input_data))
    print("output_1:\n", sess.run(output_1))
    print("normal:\n", sess.run(normal))
    print("output_2:\n", sess.run(output_2))
    print("normal_1:\n", sess.run(normal_1))
    print("output_3:\n", sess.run(output_3))
The output is as follows:
input_data:
[[1. 2. 3.]
[2. 3. 4.]
[3. 4. 5.]]
output_1:
[[0.26726124 0.5345225 0.8017837 ]
[0.37139067 0.557086 0.74278134]
[0.42426407 0.56568545 0.7071068 ]]
normal:
[[3.7416575]
[5.3851647]
[7.071068 ]]
output_2:
[[0.26726124 0.5345225 0.8017837 ]
[0.37139067 0.55708605 0.74278134]
[0.42426407 0.56568545 0.70710677]]
normal_1:
[[0.99999994]
[1. ]
[1. ]]
output_3:
[[0.26726127 0.53452253 0.80178374]
[0.37139067 0.557086 0.74278134]
[0.42426407 0.56568545 0.7071068 ]]
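The three results agree up to floating-point rounding: output_2 reproduces output_1 by hand, and since the rows of output_1 already have (almost exactly) unit length, normal_1 is ~1 and output_3 barely changes anything. One caveat: the dim and keep_dims arguments and tf.div used above are deprecated in newer TensorFlow releases; a rough TF 2.x equivalent (eager mode) would look like this sketch:
import tensorflow as tf  # TF 2.x, eager execution

input_data = tf.constant([[1.0, 2, 3], [2.0, 3, 4], [3.0, 4, 5]])
output_1 = tf.math.l2_normalize(input_data, axis=1)  # axis replaces dim
normal = tf.norm(input_data, axis=1, keepdims=True)  # keepdims replaces keep_dims
output_2 = tf.math.divide(input_data, normal)        # tf.math.divide replaces tf.div
print(output_1.numpy())
print(output_2.numpy())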