Keras Custom Loss Functions: quantile_loss

Keras source code git repository

When writing your own function, it pays to study the official implementations first: imitate their structure, take what is best in them, and the problem practically solves itself.

The core of the Keras losses source is the Loss class, so in practice you can subclass Loss and write your own loss class; that is exactly how built-in losses such as MSE and MAE are implemented. A sketch of the subclass approach is shown below.
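
As a minimal sketch (my own code, not Keras source), a subclass version of the quantile loss below could look like this; the class name QuantileLoss and its constructor argument are my own choices:

import tensorflow as tf

class QuantileLoss(tf.keras.losses.Loss):
    """Pinball (quantile) loss as a tf.keras.losses.Loss subclass."""

    def __init__(self, quantile=0.9, name="quantile_loss"):
        super().__init__(name=name)
        self.quantile = quantile

    def call(self, y_true, y_pred):
        y_pred = tf.convert_to_tensor(y_pred)
        y_true = tf.cast(y_true, y_pred.dtype)
        error = y_true - y_pred
        # max(q * e, (q - 1) * e) penalizes under-prediction by q
        # and over-prediction by (1 - q).
        return tf.reduce_mean(
            tf.maximum(self.quantile * error, (self.quantile - 1.0) * error),
            axis=-1)

# model.compile(optimizer="adam", loss=QuantileLoss(quantile=0.9))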

Of course, for convenience you can also implement it as a plain function. Here is a quantile loss example:

import tensorflow as tf
from tensorflow.keras import backend as K

def quantile_loss(y_true, y_pred, quantile=0.9):
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    error = y_pred - y_true
    # Pinball loss: under-prediction (error < 0) is weighted by quantile,
    # over-prediction (error > 0) by (1 - quantile).
    under = tf.abs(error) * quantile
    over = tf.abs(error) * (1 - quantile)
    is_over = error > 0
    loss = tf.where(is_over, over, under)
    return K.mean(loss, axis=-1)
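
Note that model.compile expects a loss callable taking only (y_true, y_pred), so the extra quantile argument must be bound first. A small closure does the job (my own sketch; the one-layer model is hypothetical, just to show the wiring):

def make_quantile_loss(quantile=0.9):
    # Bind quantile so Keras sees a plain (y_true, y_pred) -> loss callable.
    def loss_fn(y_true, y_pred):
        return quantile_loss(y_true, y_pred, quantile=quantile)
    return loss_fn

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=make_quantile_loss(0.9))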

For comparison, here is the official Keras implementation of log_cosh (it relies on the losses module's own imports of tensorflow as tf and the Keras backend):

def log_cosh(y_true, y_pred):
  """Logarithm of the hyperbolic cosine of the prediction error.
  `log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and
  to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly
  like the mean squared error, but will not be so strongly affected by the
  occasional wildly incorrect prediction.
  Standalone usage:
  >>> y_true = np.random.random(size=(2, 3))
  >>> y_pred = np.random.random(size=(2, 3))
  >>> loss = tf.keras.losses.logcosh(y_true, y_pred)
  >>> assert loss.shape == (2,)
  >>> x = y_pred - y_true
  >>> assert np.allclose(
  ...     loss.numpy(),
  ...     np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),
  ...     atol=1e-5)
  Args:
    y_true: Ground truth values. shape = `[batch_size, d0, .. dN]`.
    y_pred: The predicted values. shape = `[batch_size, d0, .. dN]`.
  Returns:
    Logcosh error values. shape = `[batch_size, d0, .. dN-1]`.
  """
  y_pred = tf.convert_to_tensor(y_pred)
  y_true = tf.cast(y_true, y_pred.dtype)

  def _logcosh(x):
    return x + tf.math.softplus(-2. * x) - tf.cast(
        tf.math.log(2.), x.dtype)

  return backend.mean(_logcosh(y_pred - y_true), axis=-1)
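
One detail worth noting in the official code: it never computes log(cosh(x)) directly, because cosh(x) grows like e^|x| / 2 and overflows for moderately large inputs. Instead it uses the identity log(cosh(x)) = x + softplus(-2x) - log(2), which follows from cosh(x) = e^x * (1 + e^(-2x)) / 2. A quick numerical check of the two forms (my own snippet, not part of the Keras source):

import tensorflow as tf

x = tf.constant([0.5, 5.0, 100.0], dtype=tf.float32)

# Naive form: cosh(100.) already overflows float32, so its log becomes inf.
naive = tf.math.log(tf.math.cosh(x))

# Stable form used by Keras: x + softplus(-2x) - log(2).
stable = x + tf.math.softplus(-2.0 * x) - tf.cast(tf.math.log(2.0), x.dtype)

print(naive.numpy())   # last entry is inf
print(stable.numpy())  # all entries finite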
