Neural Networks and Deep Learning: A Summary of Basic Functions (1)

Neural Network Basics

1、"Hello World"

test = "Hello world"
print("test: " + test)

output:

test: Hello world

2. basic_sigmoid

(sigmoid implemented with the math module)

import math

def basic_sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x)); math.exp only handles scalars
    s = 1 / (1 + math.exp(-x))
    return s

basic_sigmoid(3)

output:

0.9525741268224334
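
A caveat worth noting before moving to numpy: math.exp only accepts real-number scalars, so basic_sigmoid cannot handle lists or arrays. A minimal illustration (my own addition, not part of the original listing):

basic_sigmoid(3)            # 0.9525741268224334 -- fine
# basic_sigmoid([1, 2, 3])  # raises TypeError: must be real number, not list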

3. sigmoid()

(sigmoid implemented with numpy)

import numpy as np

def sigmoid(x):
    # np.exp is applied elementwise, so x can be a scalar or an array
    s = 1 / (1 + np.exp(-x))
    return s

x = np.array([1, 2, 3])
sigmoid(x)

output:

array([0.73105858, 0.88079708, 0.95257413])
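
Because np.exp works elementwise, the same sigmoid applies unchanged to matrices; a small check on an arbitrary 2x2 input (my own example, not from the original):

X = np.array([[0, 1], [-1, 2]])
sigmoid(X)
# array([[0.5       , 0.73105858],
#        [0.26894142, 0.88079708]])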

4. sigmoid_derivative

(derivative of the sigmoid function)

def sigmoid_derivative(x):
    # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
    s = 1 / (1 + np.exp(-x))
    ds = s * (1 - s)
    return ds

x = np.array([1, 2, 3])
print("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))

output:

sigmoid_derivative(x) = [0.19661193 0.10499359 0.04517666]
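
A centered finite difference is a quick way to sanity-check this derivative; the snippet below is a sketch I added, not part of the original listing:

# (sigmoid(x + h) - sigmoid(x - h)) / (2h) should approximate sigmoid'(x)
h = 1e-5
x = np.array([1, 2, 3])
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(np.allclose(numeric, sigmoid_derivative(x)))  # True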

5. image2vector

(reshape arrays of shape (a, b, c) into a vector of shape (a*b*c, 1))

def image2vector(image):
    # Flatten a (length, height, depth) array into a (length*height*depth, 1) column vector
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    return v

Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)

image = np.array([[[ 0.67826139,  0.29380381],
        [ 0.90714982,  0.52835647],
        [ 0.4215251 ,  0.45017551]],
       [[ 0.92814219,  0.96677647],
        [ 0.85304703,  0.52351845],
        [ 0.19981397,  0.27417313]],
       [[ 0.60659855,  0.00533165],
        [ 0.10820313,  0.49978937],
        [ 0.34144279,  0.94630077]]])

print ("image2vector(image) = " + str(image2vector(image)))

output:

image2vector(image) = [[0.67826139]
 [0.29380381]
 [0.90714982]
 [0.52835647]
 [0.4215251 ]
 [0.45017551]
 [0.92814219]
 [0.96677647]
 [0.85304703]
 [0.52351845]
 [0.19981397]
 [0.27417313]
 [0.60659855]
 [0.00533165]
 [0.10820313]
 [0.49978937]
 [0.34144279]
 [0.94630077]]
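
An equivalent form lets numpy infer the flattened length with -1, which also generalizes to arrays with more dimensions; image2vector_v2 is a hypothetical name of mine, not from the original:

def image2vector_v2(image):
    # -1 tells numpy to infer length*height*depth from the array itself
    return image.reshape(-1, 1)

print(np.array_equal(image2vector(image), image2vector_v2(image)))  # True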

6. normalizeRows()

changing x to x/||x||, i.e. normalizing each row vector to unit length

def normalizeRows(x):
    # Row-wise L2 norms, kept as a column so broadcasting divides each row
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
    x = x / x_norm
    return x

Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.

x = np.array([
    [0, 3, 4],
    [1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))

output:
normalizeRows(x) = [[0.         0.6        0.8       ]
 [0.13736056 0.82416338 0.54944226]]
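
Each row of the result should now have unit L2 norm, which is easy to verify; note that an all-zero row would cause a divide-by-zero in normalizeRows (a quick check I added):

print(np.linalg.norm(normalizeRows(x), axis=1))  # [1. 1.]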

7. Broadcasting and the softmax function

def softmax(x):
    # Exponentiate, then divide each row by its sum (broadcast over columns)
    x_exp = np.exp(x)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    s = x_exp / x_sum
    return s

Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)

x = np.array([[9, 2, 5, 0, 0], [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))

output:

softmax(x) = [[9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
  1.21052389e-04]
 [8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
  8.01252314e-04]]
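
For large entries np.exp can overflow; the standard remedy is to subtract each row's maximum before exponentiating, which leaves the result unchanged because exp(x - c) / sum(exp(x - c)) = exp(x) / sum(exp(x)). A sketch of that variant (stable_softmax is my name, not from the original):

def stable_softmax(x):
    # Shift each row so its maximum is 0 before exponentiating
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

print(np.allclose(stable_softmax(x), softmax(x)))  # True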

8. Loss function L1

def L1(yhat, y):
    # Sum of absolute differences between predictions and labels
    loss = np.sum(np.abs(yhat - y))
    return loss

Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))

output:

L1 = 1.1
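
As a quick check by hand: |0.9-1| + |0.2-0| + |0.1-0| + |0.4-1| + |0.9-1| = 0.1 + 0.2 + 0.1 + 0.6 + 0.1 = 1.1.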

9. Loss function L2

def L2(yhat, y):
    # Sum of squared differences between predictions and labels
    loss = np.sum(np.dot(y - yhat, y - yhat))
    return loss

Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat, y)))

output:

L2 = 0.43
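
Checking by hand: the errors y - yhat are (0.1, -0.2, -0.1, 0.6, 0.1), so the squared sum is 0.01 + 0.04 + 0.01 + 0.36 + 0.01 = 0.43. Since np.dot(v, v) on a 1-D vector already returns a scalar, the outer np.sum is redundant; np.sum((y - yhat)**2) would be an equivalent form.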
