Plotting test accuracy, training loss, etc. from Caffe logs (visualizing Caffe training results)

This post describes how to turn the accuracy and loss values produced during Caffe training into plots.

General procedure for visualizing Caffe training results:

1. During training, save the training output to a log file.

  train.sh:

#!/usr/bin/env sh

TOOLS=/home/zhuangni/code/Multi-Task/caffe-master/build/tools
GLOG_log_dir='/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log/' \
$TOOLS/caffe train \
  --solver=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/solver.prototxt \
  --weights=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/face_snapshot_iter_450000.caffemodel \
  --gpu=0

   GLOG_log_dir is the directory where the log will be saved.

  The log file name is generated automatically, e.g.: caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679

  Two files appear automatically in the log directory: caffe.INFO and caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679; the latter is the actual log file.
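
  As an aside (not part of the original workflow): since Caffe writes its glog output to stderr, another common way to capture a log under a name of your choosing is to redirect the output with tee, roughly like this (paths abbreviated):

$TOOLS/caffe train --solver=... --weights=... --gpu=0 2>&1 | tee my.log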

 

2. Create a new directory named acc:

  1) Copy the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 into it and rename it to my.log.

   2) Copy the three files extract_seconds.py, plot_training_log.py.example, and parse_log.sh from caffe-master/tools/extra into it (see the sketch below).
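
A minimal shell sketch of step 2, using the same paths as in train.sh above (adjust them to your own checkout):

CAFFE=/home/zhuangni/code/Multi-Task/caffe-master
LOG_DIR=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log
mkdir acc && cd acc
# copy the log under a shorter name
cp $LOG_DIR/caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 my.log
# copy the three helper scripts shipped with Caffe
cp $CAFFE/tools/extra/extract_seconds.py .
cp $CAFFE/tools/extra/plot_training_log.py.example .
cp $CAFFE/tools/extra/parse_log.sh .
chmod +x plot_training_log.py.example parse_log.sh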

3. Run

./plot_training_log.py.example 0 demo.png my.log
 Run parameters:
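
The first argument selects the chart type. As far as I recall, the stock plot_training_log.py.example supports types 0-7:

0: Test accuracy vs. Iters
1: Test accuracy vs. Seconds
2: Test loss vs. Iters
3: Test loss vs. Seconds
4: Train learning rate vs. Iters
5: Train learning rate vs. Seconds
6: Train loss vs. Iters
7: Train loss vs. Seconds

So type 0 above plots test accuracy against iterations, and demo.png is the output image.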



==============================================================================================================================

Below is a record of my own Caffe training-result visualization process:

My network is trained in a multi-task fashion on five attributes; here I visualize the test accuracy of one of them.

Goal: visualize test accuracy.

1. Save the training output to a log during training:

   Add GLOG_log_dir in train.sh:

#!/usr/bin/env sh

TOOLS=/home/zhuangni/code/Multi-Task/caffe-master/build/tools
GLOG_log_dir='/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log/' \
$TOOLS/caffe train \
  --solver=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/solver.prototxt \
  --weights=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/face_snapshot_iter_450000.caffemodel \
  --gpu=0
   After training finishes, the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 is generated in GLOG_log_dir.
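
One precaution (my own note): as far as I know, glog does not create the log directory for you, so make sure it exists before launching training:

mkdir -p /home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log/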

2. Create a new directory named acc:

  1) Copy the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 into it and rename it to single_attr1.log.

   2) Copy the three files extract_seconds.py, plot_training_log.py.example, and parse_log.sh from caffe-master/tools/extra into it.

      Modify parse_log.sh:

#!/bin/bash
# Usage parse_log.sh caffe.log
# It creates the following two text files, each containing a table:
#     caffe.log.test  (columns: '#Iters Seconds Test_accuracy_attr1-5 Test_loss_attr1-5')
#     caffe.log.train (columns: '#Iters Seconds Train_loss_attr1-5 LearningRate')


# get the dirname of the script
DIR="$( cd "$(dirname "$0")" ; pwd -P )"

if [ "$#" -lt 1 ]
then
echo "Usage parse_log.sh /path/to/your.log"
exit
fi
LOG=`basename $1`
sed -n '/Iteration .* Testing net/,/Iteration *. loss/p' $1 > aux.txt
sed -i '/Waiting for data/d' aux.txt
sed -i '/prefetch queue empty/d' aux.txt
sed -i '/Iteration .* loss/d' aux.txt
sed -i '/Iteration .* lr/d' aux.txt
sed -i '/Train net/d' aux.txt
grep 'Iteration ' aux.txt | sed  's/.*Iteration \([[:digit:]]*\).*/\1/g' > aux0.txt
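# One column per test-net output: outputs #0-#4 are the five per-attribute accuracies,
# outputs #5-#9 the five per-attribute test losses (matching the header echoed below).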
grep 'Test net output #0' aux.txt | awk '{print $11}' > aux1.txt
grep 'Test net output #1' aux.txt | awk '{print $11}' > aux2.txt
grep 'Test net output #2' aux.txt | awk '{print $11}' > aux3.txt
grep 'Test net output #3' aux.txt | awk '{print $11}' > aux4.txt
grep 'Test net output #4' aux.txt | awk '{print $11}' > aux5.txt
grep 'Test net output #5' aux.txt | awk '{print $11}' > aux6.txt
grep 'Test net output #6' aux.txt | awk '{print $11}' > aux7.txt
grep 'Test net output #7' aux.txt | awk '{print $11}' > aux8.txt
grep 'Test net output #8' aux.txt | awk '{print $11}' > aux9.txt
grep 'Test net output #9' aux.txt | awk '{print $11}' > aux10.txt


# Extracting elapsed seconds
# For extraction of time since this line contains the start time
grep '] Solving ' $1 > aux11.txt
grep 'Testing net' $1 >> aux11.txt
$DIR/extract_seconds.py aux11.txt aux12.txt

# Generating
echo '#Iters Seconds Test_accuracy_attr1 Test_accuracy_attr2 Test_accuracy_attr3 Test_accuracy_attr4 Test_accuracy_attr5 Test_loss_attr1 Test_loss_attr2 Test_loss_attr3 Test_loss_attr4 Test_loss_attr5'> $LOG.test
paste aux0.txt aux12.txt aux1.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux8.txt aux9.txt aux10.txt | column -t >> $LOG.test
rm aux.txt aux0.txt aux12.txt aux1.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux8.txt aux9.txt aux10.txt aux11.txt

# For extraction of time since this line contains the start time
grep '] Solving ' $1 > aux.txt
grep ', loss = ' $1 >> aux.txt
grep 'Train net' $1 >> aux.txt
grep 'Iteration ' aux.txt | sed  's/.*Iteration \([[:digit:]]*\).*/\1/g' > aux0.txt
grep ', lr = ' $1 | awk '{print $9}' > aux1.txt
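# Train-net outputs #0-#4 are the five per-attribute training losses (matching the header echoed below).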
grep 'Train net output #0' $1 | awk '{print $11}' > aux3.txt
grep 'Train net output #1' $1 | awk '{print $11}' > aux4.txt
grep 'Train net output #2' $1 | awk '{print $11}' > aux5.txt
grep 'Train net output #3' $1 | awk '{print $11}' > aux6.txt
grep 'Train net output #4' $1 | awk '{print $11}' > aux7.txt


# Extracting elapsed seconds
$DIR/extract_seconds.py aux.txt aux2.txt

# Generating
echo '#Iters Seconds Train_loss_attr1 Train_loss_attr2 Train_loss_attr3 Train_loss_attr4 Train_loss_attr5 LearningRate'> $LOG.train
paste aux0.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux1.txt | column -t >> $LOG.train
rm aux.txt aux0.txt aux1.txt aux2.txt  aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt
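
The modified parse_log.sh can also be run on its own first, as a sanity check that the tables come out as expected (the output file names follow the script's basename convention):

./parse_log.sh single_attr1.log
head single_attr1.log.test
head single_attr1.log.train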

3. Run

./plot_training_log.py.example 0 demo.png single_attr1.log
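
Note (my assumption, worth checking against your copy of the script): plot_training_log.py.example invokes parse_log.sh internally and was written for the stock column layout ('#Iters Seconds TestAccuracy TestLoss' / '#Iters Seconds TrainingLoss LearningRate'), so with the extra attr1-attr5 columns it may not pick the column you want for a given chart type. If the plotted curve looks wrong, adjust the column order in parse_log.sh or the field selection in the plotting script accordingly.
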
4. Results


Additionally, since I have logs for several attributes, I run:

./plot_training_log.py.example 0 demo.png single_attr1.log single_attr2.log single_attr3.log single_attr4.log


5. Supplement



Log file contents:
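
For reference, the kinds of log lines the parsing script greps for look roughly like the following. This is an illustrative sketch, not an excerpt from my actual log: timestamps and source line numbers are made up, and accuracy_attr1 / loss_attr1 stand in for whatever output names your prototxt defines.

I1020 21:53:04.000000  3679 solver.cpp:330] Iteration 450000, Testing net (#0)
I1020 21:53:10.000000  3679 solver.cpp:397]     Test net output #0: accuracy_attr1 = 0.85
I1020 21:53:10.000000  3679 solver.cpp:397]     Test net output #5: loss_attr1 = 0.42 (* 1 = 0.42 loss)
I1020 21:53:11.000000  3679 solver.cpp:218] Iteration 450000, loss = 0.39
I1020 21:53:11.000000  3679 solver.cpp:234]     Train net output #0: loss_attr1 = 0.39 (* 1 = 0.39 loss)
I1020 21:53:11.000000  3679 sgd_solver.cpp:106] Iteration 450000, lr = 0.0001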




