Quick reference: tf.argmax(), tf.maximum(a, b), np.amax(), numpy.maximum(), np.argmax() and related max/min functions

numpy.ndarray.max() is the same as numpy.amax().

numpy.amax()

Return the maximum of an array or maximum along an axis.

For an explanation of the axis argument, see https://blog.csdn.net/Arctic_Beacon/article/details/83307785
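
As a quick illustration, here is a minimal sketch of np.amax with and without the axis argument (the array a below is only an example):

import numpy as np
a = np.array([[1, 5, 3],
              [4, 2, 6]])
print(np.amax(a))          # 6: maximum over the flattened array
print(np.amax(a, axis=0))  # [4 5 6]: column-wise maxima
print(np.amax(a, axis=1))  # [5 6]: row-wise maxima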


numpy.maximum()

Compare two arrays and returns a new array containing the element-wise maxima.

In other words, it compares the corresponding elements of two arrays and returns the element-wise maxima.

import numpy as np
c = np.array([[1,2,3],[4,5,6]])
d = np.array([[-1,-2,-3],[7,8,9]])
max2 = np.maximum(c, d)   # element-wise maximum of c and d

>>> max2
array([[1, 2, 3],
       [7, 8, 9]])

import numpy as np
c = np.array([[1,2,3],[4,5,6]])
d = np.array([[-1],[7]])
max2 = np.maximum(c, d)   # d has shape (2, 1) and is broadcast across c's columns

>>> max2

array([[1, 2, 3],
       [7, 7, 7]])

This behavior is called broadcasting.
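
Broadcasting also covers the common case of comparing against a scalar; a minimal sketch using the same array c as above:

import numpy as np
c = np.array([[1,2,3],[4,5,6]])
# the scalar 2 is broadcast against every element of c
print(np.maximum(c, 2))
# [[2 2 3]
#  [4 5 6]]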


numpy.argmax()

Returns the indices of the maximum values along an axis.

It returns the indices (positions) of the maximum values, not the values themselves.


>>> a = np.arange(6).reshape(2,3)
>>> a
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.argmax(a)
5
>>> np.argmax(a, axis=0)
array([1, 1, 1])
>>> np.argmax(a, axis=1)
array([2, 2])
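
Since the default call operates on the flattened array, the flat index it returns can be converted back into row/column coordinates with np.unravel_index, as in this small sketch:

>>> np.unravel_index(np.argmax(a), a.shape)
(1, 2)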

The TensorFlow counterparts use essentially the same syntax.

import tensorflow as tf
a = [1,5,3] 
b = [0,2,7]
f1 = tf.maximum(a, b) 
f2 = tf.minimum(a, b) 
with tf.Session() as sess: 
    print (sess.run(f1))
    print (sess.run(f2)) 

[1 5 7]
[0 2 3]

tf.maximum / tf.minimum accept Python lists as well as numpy.ndarray inputs.

import numpy as np
import tensorflow as tf
c = np.array([[1,2,3],[4,5,6]])
d = np.array([[-1,2,0],[7,8,9]])
f1 = tf.maximum(c, d) 
f2 = tf.minimum(c, d) 
with tf.Session() as sess: 
    print (sess.run(f1))
    print (sess.run(f2)) 

[[1 2 3]
 [7 8 9]]
[[-1  2  0]
 [ 4  5  6]]
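
For completeness, the TensorFlow counterpart of np.amax (a reduction along an axis, rather than an element-wise comparison) is tf.reduce_max; a minimal sketch in the same TF 1.x style:

import numpy as np
import tensorflow as tf
c = np.array([[1,2,3],[4,5,6]])
f5 = tf.reduce_max(c)           # 6: maximum over all elements
f6 = tf.reduce_max(c, axis=0)   # [4 5 6]: column-wise maxima
with tf.Session() as sess:
    print(sess.run(f5))
    print(sess.run(f6))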

 


Finding the indices of the maximum in TensorFlow

tf.argmax()

Returns the index with the largest value across axes of a tensor.

The default is axis=0 (unlike np.argmax, which operates on the flattened array when axis is not given).

import numpy as np
import tensorflow as tf
c = np.array([[1,8,3],[4,5,6]])
d = np.array([[-1,2,0],[7,8,-9]])
f3 = tf.argmax(c)   # default axis=0: index of the max in each column
f4 = tf.argmin(d)
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                      log_device_placement=True)) as sess:
    print(sess.run(f3))
    print(sess.run(f4))

[1 0 1]
[0 0 1]

Interestingly, these ops are placed on the CPU rather than the GPU:

Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1
ArgMax: (ArgMax): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin: (ArgMin): /job:localhost/replica:0/task:0/device:CPU:0
ArgMax_1: (ArgMax): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin_1: (ArgMin): /job:localhost/replica:0/task:0/device:CPU:0
ArgMax/input: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMax/dimension: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin/input: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin/dimension: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMax_1/input: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMax_1/dimension: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin_1/input: (Const): /job:localhost/replica:0/task:0/device:CPU:0
ArgMin_1/dimension: (Const): /job:localhost/replica:0/task:0/device:CPU:0

Feeding a tf.constant tensor gives the same result as a list or numpy.ndarray.

c = tf.constant([[1,8,3],[4,5,6]])
d = tf.constant([[-1,2,0],[7,8,-9]])
f3 = tf.argmax(c, 1)   # axis=1: index of the max in each row
f4 = tf.argmin(d, 1)   # axis=1: index of the min in each row
sess = tf.Session()
print (sess.run(f3))
print (sess.run(f4))
sess.close()

[1 2]
[0 2]
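
All of the examples above use the TF 1.x Session API. Under TensorFlow 2.x eager execution the same ops return concrete values directly; a minimal sketch assuming TF 2.x is installed:

import tensorflow as tf   # TF 2.x, eager execution by default
c = tf.constant([[1,8,3],[4,5,6]])
print(tf.maximum(c, 5).numpy())      # element-wise max against the scalar 5
print(tf.argmax(c, axis=1).numpy())  # [1 2]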
