Many things go unfinished not because they are hard, but because they were never started. Come on, you've got this!
Adapted from https://github.com/torch/nn/blob/master/doc/criterion.md
Criterions
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
[output] forward(input, target)
Computes the loss for the given criterion; the returned output is a scalar.
The state variable self.output should be updated after a call to forward().
[gradInput] backward(input, target)
Computes the gradient of the loss with respect to input.
The state variable self.gradInput should be updated after a call to backward().
BCECriterion
Binary cross-entropy, typically used after a Sigmoid layer (the two-class case of ClassNLLCriterion).
The loss is:
loss(o, t) = - 1/n sum_i (t[i] * log(o[i]) + (1 - t[i]) * log(1 - o[i]))
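A minimal sketch of how this is typically wired up (layer sizes here are arbitrary): a Sigmoid squashes the network output into (0, 1) before the criterion sees it.
mlp = nn.Sequential()
mlp:add(nn.Linear(10, 1))
mlp:add(nn.Sigmoid())               -- outputs o[i] in (0, 1)
crit = nn.BCECriterion()
input = torch.rand(10)
target = torch.Tensor{1}            -- targets are 0 or 1
loss = crit:forward(mlp:forward(input), target)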
ClassNLLCriterion
criterion = nn.ClassNLLCriterion([weights])
To use this criterion, the last layer of the network must be a LogSoftMax; if you would rather not add that extra layer, use CrossEntropyCriterion instead. In the formula on the reference page, class is simply the target class index (the y of the training pair); LogSoftMax followed by ClassNLLCriterion is exactly the cross-entropy loss.
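A minimal sketch of that pairing (sizes arbitrary): the LogSoftMax layer turns raw scores into log-probabilities, which is what ClassNLLCriterion expects.
mlp = nn.Sequential()
mlp:add(nn.Linear(10, 3))
mlp:add(nn.LogSoftMax())            -- log-probabilities over 3 classes
crit = nn.ClassNLLCriterion()
input = torch.rand(10)
target = 2                          -- target is a 1-based class index
loss = crit:forward(mlp:forward(input), target)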
CrossEntropyCriterion
criterion = nn.CrossEntropyCriterion([weights])
Multi-class classification; it combines LogSoftMax and ClassNLLCriterion in one module, so the input is raw (unnormalized) scores.
The loss can be described as:
loss(x, class) = -log(exp(x[class]) / (\sum_j exp(x[j]))) = -x[class] + log(\sum_j exp(x[j]))
By default the loss is averaged over the minibatch; to sum instead, set sizeAverage to false on the wrapped NLL criterion:
crit = nn.CrossEntropyCriterion(weights)
crit.nll.sizeAverage = false
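A minimal sketch of a direct call (sizes arbitrary): note that the input is raw, unnormalized scores, since the criterion applies LogSoftMax internally.
crit = nn.CrossEntropyCriterion()
input = torch.randn(3)              -- raw scores for 3 classes
target = 1                          -- 1-based class index
loss = crit:forward(input, target)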
ClassSimplexCriterion
criterion = nn.ClassSimplexCriterion(nClasses)
This criterion learns an embedding per class: each class is mapped to a point on an (nClasses-1)-dimensional simplex, turning the very sparse one-hot target encoding into a dense low-dimensional one.
It must be preceded by a NormalizedLinearNoBias layer followed by a Normalize layer, as in the example below. The reference page links the paper behind this criterion; according to it, the approach is more robust than multinomial logistic regression.
nInput = 10
nClasses = 30
nHidden = 100
mlp = nn.Sequential()
mlp:add(nn.Linear(nInput, nHidden)):add(nn.ReLU())
mlp:add(nn.NormalizedLinearNoBias(nHidden, nClasses))
mlp:add(nn.Normalize(2))
criterion = nn.ClassSimplexCriterion(nClasses)
function gradUpdate(mlp, x, y, learningRate)
   local pred = mlp:forward(x)
   local err = criterion:forward(pred, y)
   mlp:zeroGradParameters()
   local t = criterion:backward(pred, y)
   mlp:backward(x, t)
   mlp:updateParameters(learningRate)
end
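A hypothetical training call for the model above, assuming the target is a class index in [1, nClasses]:
for i = 1, 100 do
   gradUpdate(mlp, torch.rand(nInput), 3, 0.01)   -- 3 is an arbitrary class index
end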
MarginCriterion
criterion = nn.MarginCriterion([margin])
Two-class classification hinge loss; targets must be +1 or -1.
Example code:
function gradUpdate(mlp, x, y, criterion, learningRate)
   local pred = mlp:forward(x)
   local err = criterion:forward(pred, y)
   local gradCriterion = criterion:backward(pred, y)
   mlp:zeroGradParameters()
   mlp:backward(x, gradCriterion)
   mlp:updateParameters(learningRate)
end
mlp = nn.Sequential()
mlp:add(nn.Linear(5, 1))
x1 = torch.rand(5)
x1_target = torch.Tensor{1}
x2 = torch.rand(5)
x2_target = torch.Tensor{-1}
criterion = nn.MarginCriterion(1)
for i = 1, 1000 do
   gradUpdate(mlp, x1, x1_target, criterion, 0.01)
   gradUpdate(mlp, x2, x2_target, criterion, 0.01)
end
print(mlp:forward(x1))
print(mlp:forward(x2))
print(criterion:forward(mlp:forward(x1), x1_target))
print(criterion:forward(mlp:forward(x2), x2_target))
Output:
1.0043
[torch.Tensor of dimension 1]
-1.0061
[torch.Tensor of dimension 1]
0
0
By default, the losses are averaged over observations for each minibatch. However, if the field sizeAverage is set to false, the losses are instead summed.
SoftMarginCriterion
criterion = nn.SoftMarginCriterion()
Two-class classification, using a smooth (logistic) version of the hinge loss.
Example code:
function gradUpdate(mlp, x, y, criterion, learningRate)
   local pred = mlp:forward(x)
   local err = criterion:forward(pred, y)
   local gradCriterion = criterion:backward(pred, y)
   mlp:zeroGradParameters()
   mlp:backward(x, gradCriterion)
   mlp:updateParameters(learningRate)
end
mlp = nn.Sequential()
mlp:add(nn.Linear(5, 1))
x1 = torch.rand(5)
x1_target = torch.Tensor{1}
x2 = torch.rand(5)
x2_target = torch.Tensor{-1}
criterion = nn.SoftMarginCriterion()
for i = 1, 1000 do
   gradUpdate(mlp, x1, x1_target, criterion, 0.01)
   gradUpdate(mlp, x2, x2_target, criterion, 0.01)
end
print(mlp:forward(x1))
print(mlp:forward(x2))
print(criterion:forward(mlp:forward(x1), x1_target))
print(criterion:forward(mlp:forward(x2), x2_target))
Output:
0.7471
[torch.DoubleTensor of size 1]
-0.9607
[torch.DoubleTensor of size 1]
0.38781049558836
0.32399356957564
MultiMarginCriterion
criterion = nn.MultiMarginCriterion(p, [weights], [margin])
Multi-class classification hinge loss:
loss(x, y) = sum_i(max(0, (margin - x[y] + x[i]))^p) / x:size(1), for i ~= y
It is particularly useful in combination with these two layers in front of it:
mlp = nn.Sequential()
mlp:add(nn.Euclidean(n, m)) -- outputs a vector of distances
mlp:add(nn.MulConstant(-1)) -- distance to similarity
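A minimal sketch of a raw forward call (sizes arbitrary):
crit = nn.MultiMarginCriterion(1)   -- p = 1, i.e. a plain hinge
input = torch.randn(4)              -- one score per class
target = 2                          -- 1-based index of the correct class
loss = crit:forward(input, target)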
MultiLabelMarginCriterion
criterion = nn.MultiLabelMarginCriterion()
Multi-label classification: a single sample can belong to several classes at once. Targets are class indices, zero-padded to the input width.
Example code:
criterion = nn.MultiLabelMarginCriterion()
input = torch.randn(2, 4)
target = torch.Tensor{{1, 3, 0, 0}, {4, 0, 0, 0}} -- zero-values are ignored
criterion:forward(input, target)
MultiLabelSoftMarginCriterion
criterion = nn.MultiLabelSoftMarginCriterion()
A multi-label one-versus-all loss based on max-entropy, where the target y is a 0/1 indicator vector.
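A minimal sketch (sizes arbitrary), with 0/1 indicator targets:
crit = nn.MultiLabelSoftMarginCriterion()
input = torch.randn(2, 4)                          -- batch of 2, 4 classes
target = torch.Tensor{{1, 0, 1, 0}, {0, 1, 0, 1}}  -- multi-hot labels
loss = crit:forward(input, target)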
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Regression criterions
AbsCriterion
criterion = nn.AbsCriterion()
公式如下:
loss(x, y) = 1/n \sum |x_i - y_i|
If x and y are d-dimensional tensors with n elements in total, the sum still runs over all elements and is divided by n. If that per-element average is not what you want, disable it as follows:
criterion = nn.AbsCriterion()
criterion.sizeAverage = false
(Note: without the division the loss is simply scaled by n; this only rescales the gradients, which can be compensated by a smaller learning rate, so no extra normalization layer is needed.)
SmoothL1Criterion
criterion = nn.SmoothL1Criterion()
A smooth version of AbsCriterion (the Huber loss): the per-element term is 0.5 (x_i - y_i)^2 if |x_i - y_i| < 1, and |x_i - y_i| - 0.5 otherwise. As with AbsCriterion, averaging can be disabled:
criterion = nn.SmoothL1Criterion()
criterion.sizeAverage = false
MSECriterion
criterion = nn.MSECriterion()
Mean squared error. Usage:
criterion = nn.MSECriterion()
criterion.sizeAverage = false
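A minimal sketch of a raw forward/backward call (sizes arbitrary):
crit = nn.MSECriterion()
input = torch.rand(4)
target = torch.rand(4)
loss = crit:forward(input, target)      -- mean of squared differences
grad = crit:backward(input, target)     -- d(loss)/d(input)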
SpatialAutoCropMSECriterion
criterion = nn.SpatialAutoCropMSECriterion()
Use this when the target's spatial dimensions are larger than the input's: the target is center-cropped to the input's size and the MSE is then computed against the crop.
criterion = nn.SpatialAutoCropMSECriterion()
criterion.sizeAverage = false
DistKLDivCriterion
criterion = nn.DistKLDivCriterion()
The Kullback-Leibler divergence. As with ClassNLLCriterion, the input is expected to contain log-probabilities (e.g. the output of a LogSoftMax layer); the target should be a probability distribution.
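A minimal sketch (sizes arbitrary): the input goes through LogSoftMax first, and the target is a proper distribution.
crit = nn.DistKLDivCriterion()
input = nn.LogSoftMax():forward(torch.randn(4))   -- log-probabilities
target = torch.Tensor{0.1, 0.2, 0.3, 0.4}         -- sums to 1
loss = crit:forward(input, target)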
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Embedding criterions (measure whether two inputs are similar or dissimilar)
HingeEmbeddingCriterion
criterion = nn.HingeEmbeddingCriterion([margin])
The input is a single value x that is already a distance, typically produced by a layer such as nn.PairwiseDistance; the criterion itself does not compute any distance. With y = 1 the loss is x itself, so training pulls the pair together; with y = -1 the loss pushes the pair apart until the margin is met:
loss(x, y) = x, if y == 1
           = max(0, margin - x), if y == -1
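A minimal sketch (sizes arbitrary) showing the usual pairing with nn.PairwiseDistance:
mlp = nn.PairwiseDistance(1)            -- L1 distance between the two inputs
crit = nn.HingeEmbeddingCriterion(1)    -- margin = 1
x1 = torch.rand(5)
x2 = torch.rand(5)
dist = mlp:forward{x1, x2}              -- a 1-element distance tensor
lossSimilar = crit:forward(dist, 1)     -- y =  1: loss is the distance itself
lossDissimilar = crit:forward(dist, -1) -- y = -1: loss is max(0, margin - distance)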
L1HingeEmbeddingCriterion
criterion = nn.L1HingeEmbeddingCriterion([margin])
Takes a pair {x1, x2} and computes the L1 distance between them itself, then applies the same hinge rule:
loss(x, y) = ||x1 - x2||_1, if y == 1
           = max(0, margin - ||x1 - x2||_1), if y == -1
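A minimal sketch (sizes arbitrary): here the raw pair goes straight into the criterion, no distance layer needed.
crit = nn.L1HingeEmbeddingCriterion(0.5)   -- margin = 0.5
x1 = torch.rand(5)
x2 = torch.rand(5)
print(crit:forward({x1, x2}, 1))           -- y =  1: the L1 distance itself
print(crit:forward({x1, x2}, -1))          -- y = -1: max(0, margin - distance)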
CosineEmbeddingCriterion
criterion = nn.CosineEmbeddingCriterion([margin])
Same idea, but uses the cosine similarity between the two input vectors (useful when the scale of the embeddings should not matter).
DistanceRatioCriterion
criterion = nn.DistanceRatioCriterion(sizeAverage)
A triplet loss over three input vectors: the first is the anchor, the second is similar to the anchor (positive), the third is dissimilar (negative). With Ds the anchor-positive distance and Dd the anchor-negative distance:
loss = -log( exp(-Ds) / ( exp(-Ds) + exp(-Dd) ) )
Example code (explained below):
torch.setdefaulttensortype("torch.FloatTensor")
require 'nn'
-- triplet : with batchSize of 32 and dimensionality 512
sample = {torch.rand(32, 512), torch.rand(32, 512), torch.rand(32, 512)}
embeddingModel = nn.Sequential()
embeddingModel:add(nn.Linear(512, 96)):add(nn.ReLU())
tripleModel = nn.ParallelTable()
tripleModel:add(embeddingModel)
tripleModel:add(embeddingModel:clone('weight', 'bias', 'gradWeight', 'gradBias'))
tripleModel:add(embeddingModel:clone('weight', 'bias', 'gradWeight', 'gradBias'))
-- Similar sample distance w.r.t anchor sample
posDistModel = nn.Sequential()
posDistModel:add(nn.NarrowTable(1,2)):add(nn.PairwiseDistance())
-- Different sample distance w.r.t anchor sample
negDistModel = nn.Sequential()
negDistModel:add(nn.NarrowTable(2,2)):add(nn.PairwiseDistance())
distanceModel = nn.ConcatTable():add(posDistModel):add(negDistModel)
-- Complete Model
model = nn.Sequential():add(tripleModel):add(distanceModel)
-- DistanceRatioCriterion
criterion = nn.DistanceRatioCriterion(true)
-- Forward & Backward
output = model:forward(sample)
loss = criterion:forward(output)
dLoss = criterion:backward(output)
model:backward(sample, dLoss)
How it fits together: each of the three tensors in sample is a batch of 32 examples with 512 features, and the shared embeddingModel maps 512 -> 96, so 32 is the batch size and 96 the embedding dimensionality. tripleModel applies the weight-shared embedding to each element of the triple; NarrowTable(offset, length) then selects consecutive table entries, so posDistModel computes the pairwise distance for entries {1, 2} and negDistModel for entries {2, 3}; ConcatTable feeds both distances to the criterion.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Miscellaneous criterions
MultiCriterion
criterion = nn.MultiCriterion()
Combines several criterions into one; each gets a weight (default 1, as for nll2 below) and the total loss is the weighted sum. Example code:
input = torch.rand(2,10)
target = torch.IntTensor{1,8}
nll = nn.ClassNLLCriterion()
nll2 = nn.CrossEntropyCriterion()
mc = nn.MultiCriterion():add(nll, 0.5):add(nll2)
output = mc:forward(input, target)
ParallelCriterion
criterion = nn.ParallelCriterion([repeatTarget])
Like MultiCriterion, this is a weighted sum of criterions, but each criterion is applied to its own entry of an input table and a target table: input = {input1, input2}, target = {target1, target2}. This is useful for multi-task models with several outputs; with repeatTarget = true, the same target is presented to every criterion instead.
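A sketch adapted from the reference page: a classification loss on the first output and a regression loss on the second, weighted 0.5 and 1.
input = {torch.rand(2, 10), torch.randn(2, 10)}
target = {torch.IntTensor{1, 8}, torch.randn(2, 10)}
nll = nn.ClassNLLCriterion()
mse = nn.MSECriterion()
pc = nn.ParallelCriterion():add(nll, 0.5):add(mse)
loss = pc:forward(input, target)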
MarginRankingCriterion
criterion = nn.MarginRankingCriterion(margin)
Takes a table of two scores {x1, x2} plus a label y (1 or -1). With y = 1 the first input should be ranked higher (by at least the margin) than the second; with y = -1 the other way around:
loss(x, y) = max(0, -y * (x[1] - x[2]) + margin)
In the example below, two weight-sharing networks each score a pair of vectors with a dot product, and the criterion ranks the two scores against each other:
p1_mlp = nn.Linear(5, 2)
p2_mlp = p1_mlp:clone('weight', 'bias')
prl = nn.ParallelTable()
prl:add(p1_mlp)
prl:add(p2_mlp)
mlp1 = nn.Sequential()
mlp1:add(prl)
mlp1:add(nn.DotProduct())
mlp2 = mlp1:clone('weight', 'bias')
mlpa = nn.Sequential()
prla = nn.ParallelTable()
prla:add(mlp1)
prla:add(mlp2)
mlpa:add(prla)
crit = nn.MarginRankingCriterion(0.1)
x = torch.randn(5)
y = torch.randn(5)
z = torch.randn(5)
-- Use a typical generic gradient update function
function gradUpdate(mlp, x, y, criterion, learningRate)
   local pred = mlp:forward(x)
   local err = criterion:forward(pred, y)
   local gradCriterion = criterion:backward(pred, y)
   mlp:zeroGradParameters()
   mlp:backward(x, gradCriterion)
   mlp:updateParameters(learningRate)
end
for i = 1, 100 do
   gradUpdate(mlpa, {{x, y}, {x, z}}, 1, crit, 0.01)
   if true then
      o1 = mlp1:forward{x, y}[1]
      o2 = mlp2:forward{x, z}[1]
      o = crit:forward(mlpa:forward{{x, y}, {x, z}}, 1)
      print(o1, o2, o)
   end
end
print "--"
for i = 1, 100 do
   gradUpdate(mlpa, {{x, y}, {x, z}}, -1, crit, 0.01)
   if true then
      o1 = mlp1:forward{x, y}[1]
      o2 = mlp2:forward{x, z}[1]
      o = crit:forward(mlpa:forward{{x, y}, {x, z}}, -1)
      print(o1, o2, o)
   end
end
In the first loop (label 1), training pushes o1 above o2 until the margin is met and the loss reaches 0; in the second loop (label -1), the ranking is reversed.