segnet

http://mi.eng.cam.ac.uk/projects/segnet/#demo

lr_mult: a multiplier on the learning rate. The effective learning rate for a parameter is this value times the base_lr set in the solver.prototxt configuration file. When a layer has two lr_mult entries, the first applies to the weights w and the second to the bias term; by convention the bias learning rate is set to twice the weight learning rate.

layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1       # learning rate multiplier for the weights
    decay_mult: 1    # weight decay multiplier for the weights
  }
  param {
    lr_mult: 2       # learning rate multiplier for the biases
    decay_mult: 0    # no weight decay on the biases
  }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
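To make the multiplication concrete, here is a minimal sketch of the matching solver.prototxt fragment; the base_lr value is a hypothetical choice for illustration, not taken from the original post:

```
# solver.prototxt (sketch; base_lr chosen for illustration)
net: "train_val.prototxt"
base_lr: 0.01
```

With base_lr: 0.01 and the lr_mult values above, the effective learning rate is 0.01 × 1 = 0.01 for the fc8 weights and 0.01 × 2 = 0.02 for the fc8 biases.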

---------------------

Author: One__Coder

Source: CSDN

Original post: https://blog.csdn.net/github_37973614/article/details/81810327

Copyright notice: this is the blogger's original article; please include a link to the post when reposting.
