Deep Learning - TensorFlow 1.x: mean (reduce_mean) and sum (reduce_sum), a beginner's walkthrough, implemented with TensorFlow 1.x and NumPy

Implementing the mean (reduce_mean) with TensorFlow 1.x and NumPy

In TensorFlow 1.x, the mean is computed with the following method:
tf.reduce_mean(input_tensor, axis=None, keepdims=False, name=None, reduction_indices=None, keep_dims=None)
(reduction_indices and keep_dims are deprecated aliases of axis and keepdims.)
In NumPy, the equivalent is np.mean(x).

Computing the mean with TensorFlow and with NumPy

import tensorflow as tf
import numpy as np

# needed so tf.compat.v1.Session works when running under TF 2.x
tf.compat.v1.disable_eager_execution()

x = [[1, 2, 3],
     [1, 2, 3]]

xx = tf.cast(x, tf.float32)

mean_all = tf.reduce_mean(xx, keepdims=False)
mean_0 = tf.reduce_mean(xx, axis=0, keepdims=False)
mean_1 = tf.reduce_mean(xx, axis=1, keepdims=False)

with tf.compat.v1.Session() as sess:
    m_a, m_0, m_1 = sess.run([mean_all, mean_0, mean_1])
    print(m_a)  # output: 2.0
    print(m_0)  # output: [1. 2. 3.]
    print(m_1)  # output: [2. 2.]


print('Implementation with Np')
print(np.mean(x))
print(np.mean(x, axis = 0))
print(np.mean(x, axis = 1))

Output

2.0
[1. 2. 3.]
[2. 2.]
Implementation with Np
2.0
[1. 2. 3.]
[2. 2.]
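The keepdims argument mentioned above is worth a closer look: it controls whether the reduced axis is dropped or kept with size 1. The following is a small NumPy sketch (the behavior is the same for tf.reduce_mean) showing why keepdims=True is handy for broadcasting the mean back against the input, e.g. for per-row centering.

```python
import numpy as np

x = np.array([[1, 2, 3],
              [1, 2, 3]], dtype=np.float32)

# keepdims=False (the default): the reduced axis disappears
m = np.mean(x, axis=1)                       # shape (2,)

# keepdims=True: the reduced axis is kept with size 1,
# so the result broadcasts back against x
m_keep = np.mean(x, axis=1, keepdims=True)   # shape (2, 1)

print(m.shape)        # (2,)
print(m_keep.shape)   # (2, 1)
print(x - m_keep)     # per-row centering via broadcasting
```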

In TensorFlow 1.x, the sum is computed with the method tf.reduce_sum.
In NumPy, the equivalent is np.sum(x).

Computing the sum with TensorFlow and with NumPy

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

t = [[2.4, 2.0, 3.4], [1.3, 2.5, 3.1], [1.0, 2.7, 3.4]]

# output of a neural network
logits = tf.constant(t)
y = tf.math.reduce_sum(logits, axis=1)

with tf.compat.v1.Session() as sess:
    row_sums = sess.run(y)
    print("tensorflow reduce_sum result=", row_sums)


print('Implementation with Np')
# NumPy implementation

print(np.sum(t, axis=1))

Output

tensorflow reduce_sum result= [7.8       6.8999996 7.1000004]
Implementation with Np
[7.8 6.9 7.1]
The small discrepancy (6.8999996 vs 6.9) is a precision issue, not a bug: tf.constant defaults to float32, while np.sum over a Python list computes in float64. The two results agree up to single precision.
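Since the variable name logits hints at a softmax context, here is a sketch of where this row-sum pattern typically shows up: the normalization step of softmax divides each exponentiated row by its sum, with keepdims=True (or the TF equivalent) keeping the shapes broadcastable. This is written in NumPy for brevity; it is an illustrative example, not part of the original post's code.

```python
import numpy as np

t = np.array([[2.4, 2.0, 3.4], [1.3, 2.5, 3.1], [1.0, 2.7, 3.4]])

# subtract the per-row max first: a standard numerical-stability trick
e = np.exp(t - np.max(t, axis=1, keepdims=True))

# divide each row by its sum -- the same reduction as reduce_sum(axis=1)
softmax = e / np.sum(e, axis=1, keepdims=True)

print(softmax)
print(np.sum(softmax, axis=1))  # each row sums to 1.0
```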
