The difference between loss and metrics in Keras's compile method

Reference: https://codeantenna.com/a/7p6uOqnNhx

1. Background

Both loss and metrics appear in the compile interface, and both default to None — so what does each of them actually do?

def compile(self,
     optimizer='rmsprop',
     loss=None,
     metrics=None,
     loss_weights=None,
     weighted_metrics=None,
     run_eagerly=None,
     **kwargs):
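To make the signature concrete, here is a minimal, hedged sketch of a compile call (the model architecture is purely illustrative): the function passed as loss is what gradient descent minimizes, while everything in metrics is only computed and reported.

```python
import tensorflow as tf

# Illustrative sketch: 'mse' drives the weight updates during training,
# while 'mae' is only evaluated and logged each epoch.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(
    optimizer='rmsprop',
    loss='mse',        # minimized during training
    metrics=['mae'],   # monitored only, never differentiated
)
```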

2. What the official docs say about loss

The purpose of loss functions is to compute the quantity that a model should seek to minimize during training.

The actual optimization target is the mean of the per-data-point loss values over the batch.
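That "mean over all data points" can be sketched in plain NumPy (this mimics the behavior described above, not the Keras internals):

```python
import numpy as np

# Minimal sketch: the loss value a model optimizes is the mean of the
# per-sample loss values across the batch.
def mse_per_sample(y_true, y_pred):
    # squared error averaged over each sample's output dimensions
    return np.mean((y_true - y_pred) ** 2, axis=-1)

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.0, 2.0], [3.0, 2.0]])

per_sample = mse_per_sample(y_true, y_pred)  # array([0.0, 2.0])
batch_loss = per_sample.mean()               # 1.0 — the optimized quantity
```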

3. What the official docs say about metrics

A metric is a function that is used to judge the performance of your model.
Metric functions are similar to loss functions, except that the results from evaluating a metric are not used when training the model. Note that you may use any loss function as a metric.

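The "any loss function as a metric" point can be shown directly: below is a hedged sketch (the model is illustrative) that passes the mean-absolute-error loss function into metrics, where it is merely reported each epoch rather than optimized.

```python
import tensorflow as tf

# Sketch: mean_absolute_error is a loss function, but placed in
# `metrics` it is only evaluated for reporting — gradients come
# solely from the 'mse' loss.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(
    optimizer='adam',
    loss='mse',
    metrics=[tf.keras.losses.mean_absolute_error],
)
```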

4. Understanding loss vs. metrics

[Image from the original post: a side-by-side comparison of loss and metrics]

Additional points of comparison:

  1. Measure the performance of your network using non-differentiable functions: e.g. accuracy is not differentiable (not even continuous), so you cannot directly optimize your network w.r.t. it. However, you can use it to choose the model with the best accuracy.

  2. Obtain the values of different loss functions when your final loss is a combination of several of them: suppose your loss has a regularization term, which measures how far your weights are from 0, and a term which measures the fitness of your model. In this case, you can use metrics to keep a separate track of how the fitness of your model changes across epochs.

  3. Track a measure with respect to which you don't want to directly optimize your model: suppose you are solving a multidimensional regression problem where you mostly care about MSE, but at the same time you are interested in how the cosine distance of your solution changes over time. Then it is best to use metrics.
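Point 2 above can be sketched in plain NumPy (function names here are illustrative, not Keras APIs): the optimizer sees the combined loss, while a metric tracks the data-fit term alone.

```python
import numpy as np

# Sketch: the optimized loss combines a data-fit term with an L2
# penalty; a separate metric tracks only the fit term.
def fit_term(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def total_loss(y_true, y_pred, weights, l2=0.01):
    return fit_term(y_true, y_pred) + l2 * np.sum(weights ** 2)

y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 2.0])
w = np.array([3.0, -4.0])

loss_value = total_loss(y_true, y_pred, w)  # 0.125 + 0.01 * 25 = 0.375
fit_value = fit_term(y_true, y_pred)        # 0.125, tracked as a metric
```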
