Summary of Deep Learning Activation Functions

Run the following two lines of code:

 

import tensorflow as tf
help(tf.nn)  # help() prints the page itself and returns None

This prints the full help page for tf.nn, which lists every activation function (alongside the conv, pooling, and loss ops). The most common activations are relu, tanh, and sigmoid.

Help on module tensorflow.python.ops.nn in tensorflow.python.ops:

NAME
    tensorflow.python.ops.nn - Neural network support.

DESCRIPTION
    See the @{$python/nn} guide.
    
    @@relu
    @@relu6
    @@crelu
    @@swish
    @@elu
    @@leaky_relu
    @@selu
    @@softplus
    @@softsign
    @@dropout
    @@bias_add
    @@sigmoid
    @@log_sigmoid
    @@tanh
    @@convolution
    @@conv2d
    @@depthwise_conv2d
    @@depthwise_conv2d_native
    @@separable_conv2d
    @@atrous_conv2d
    @@atrous_conv2d_transpose
    @@conv2d_transpose
    @@conv1d
    @@conv3d
    @@conv3d_transpose
    @@conv2d_backprop_filter
    @@conv2d_backprop_input
    @@conv3d_backprop_filter_v2
    @@depthwise_conv2d_native_backprop_filter
    @@depthwise_conv2d_native_backprop_input
    @@avg_pool
    @@max_pool
    @@max_pool_with_argmax
    @@avg_pool3d
    @@max_pool3d
    @@fractional_avg_pool
    @@fractional_max_pool
    @@pool
    @@dilation2d
    @@erosion2d
    @@with_space_to_batch
    @@l2_normalize
    @@local_response_normalization
    @@sufficient_statistics
    @@normalize_moments
    @@moments
    @@weighted_moments
    @@fused_batch_norm
    @@batch_normalization
    @@batch_norm_with_global_normalization
    @@l2_loss
    @@log_poisson_loss
    @@sigmoid_cross_entropy_with_logits
    @@softmax
    @@log_softmax
    @@softmax_cross_entropy_with_logits
    @@softmax_cross_entropy_with_logits_v2
    @@sparse_softmax_cross_entropy_with_logits
    @@weighted_cross_entropy_with_logits
    @@embedding_lookup
    @@embedding_lookup_sparse
    @@dynamic_rnn
    @@bidirectional_dynamic_rnn
    @@raw_rnn
    @@static_rnn
    @@static_state_saving_rnn
    @@static_bidirectional_rnn
    @@ctc_loss
    @@ctc_greedy_decoder
    @@ctc_beam_search_decoder
    @@top_k
    @@in_top_k
    @@nce_loss
    @@sampled_softmax_loss
    @@uniform_candidate_sampler
    @@log_uniform_candidate_sampler
    @@learned_unigram_candidate_sampler
    @@fixed_unigram_candidate_sampler
    @@compute_accidental_hits
    @@quantized_conv2d
    @@quantized_relu_x
    @@quantized_max_pool
    @@quantized_avg_pool

DATA
    swish = 
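
If you only want the callable names rather than the whole help page, a dir() loop works too. A minimal sketch (the public-name filter is my own convenience, not something from the help output):

import tensorflow as tf

# Print every public callable in tf.nn; the activations (relu, sigmoid,
# tanh, elu, selu, ...) appear alongside the conv/pool/loss ops.
for name in dir(tf.nn):
    if not name.startswith('_') and callable(getattr(tf.nn, name)):
        print(name)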

 

Let's look at the relu activation function first:

 

Run this code:

import tensorflow as tf
help(tf.nn.relu)

This gives the function definition: the input is a tensor of features; wherever a value is greater than zero the output is the feature value itself, and wherever it is less than zero the output is 0.

Help on function relu in module tensorflow.python.ops.gen_nn_ops:

relu(features, name=None)
    Computes rectified linear: `max(features, 0)`.
    
    Args:
      features: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
      name: A name for the operation (optional).
    
    Returns:
      A `Tensor`. Has the same type as `features`.

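
A quick numeric check of that definition, as a minimal sketch (it assumes the TF 1.x session API, matching the TensorFlow version used in this post):

import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
y = tf.nn.relu(x)  # element-wise max(x, 0)

with tf.Session() as sess:
    print(sess.run(y))  # expected: [0. 0. 0. 1. 2.]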

The relu curve looks like this:

[Figure: plot of the relu function]

 

Now let's look at the sigmoid function.

 

Run:

import tensorflow as tf
help(tf.nn.sigmoid)

This gives:

Help on function sigmoid in module tensorflow.python.ops.math_ops:

sigmoid(x, name=None)
    Computes sigmoid of `x` element-wise.
    
    Specifically, `y = 1 / (1 + exp(-x))`.
    
    Args:
      x: A Tensor with type `float16`, `float32`, `float64`, `complex64`,
        or `complex128`.
      name: A name for the operation (optional).
    
    Returns:
      A Tensor with the same type as `x`.
    
    @compatibility(numpy)
    Equivalent to np.scipy.special.expit
    @end_compatibility

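
The same kind of minimal check (same TF 1.x session assumption as above):

import tensorflow as tf

x = tf.constant([-10.0, 0.0, 10.0])
y = tf.nn.sigmoid(x)  # 1 / (1 + exp(-x)), squashes values into (0, 1)

with tf.Session() as sess:
    print(sess.run(y))  # approx: [4.54e-05, 0.5, 0.99995]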

 

The formula: y = 1 / (1 + exp(-x))

The sigmoid curve looks like this:

[Figure: plot of the sigmoid function]

 

Now the tanh function.

 

Run:

import tensorflow as tf
help(tf.nn.tanh)

This gives:

Help on function tanh in module tensorflow.python.ops.math_ops:

tanh(x, name=None)
    Computes hyperbolic tangent of `x` element-wise.
    
    Args:
      x: A Tensor or SparseTensor with type `float16`, `float32`, `double`,
        `complex64`, or `complex128`.
      name: A name for the operation (optional).
    
    Returns:
      A Tensor or SparseTensor respectively with the same type as `x`.

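
And one more minimal check (same assumptions as above):

import tensorflow as tf

x = tf.constant([-10.0, 0.0, 10.0])
y = tf.nn.tanh(x)  # (e^x - e^-x) / (e^x + e^-x), squashes values into (-1, 1)

with tf.Session() as sess:
    print(sess.run(y))  # approx: [-1., 0., 1.]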

The formula: tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))

The tanh curve looks like this:

[Figure: plot of the tanh function]

 

 

Comparing the three:

[Figure: relu, sigmoid, and tanh plotted on the same axes]
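
A comparison plot like the one above is easy to reproduce with NumPy and matplotlib. A minimal sketch (the x-axis range is my own choice):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 200)
plt.plot(x, np.maximum(x, 0), label='relu')         # unbounded above, 0 below
plt.plot(x, 1 / (1 + np.exp(-x)), label='sigmoid')  # range (0, 1)
plt.plot(x, np.tanh(x), label='tanh')               # range (-1, 1)
plt.legend()
plt.grid(True)
plt.show()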

 


 

 
