Download the code (you can also pull it from Gitee/码云 if GitHub is slow):
git clone https://github.com/shamangary/FSA-Net.git    # [CVPR19] FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image
conda create -n tensorflow python=3.6
conda install cudatoolkit=8.0 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/linux-64/
conda install cudnn==7.1.3
pip install tensorflow==1.10.0 -i https://mirrors.aliyun.com/pypi/simple
pip install tensorflow-gpu==1.10.0 -i https://mirrors.aliyun.com/pypi/simple
pip install keras==2.2.0 -i https://mirrors.aliyun.com/pypi/simple
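A quick sanity check of the base environment (a minimal sketch, nothing FSA-Net specific; it just prints the versions and asks the TF 1.x API whether a GPU is visible):

# check_env.py -- verify TensorFlow 1.10 / Keras 2.2 and GPU visibility
import tensorflow as tf
import keras  # prints "Using TensorFlow backend."

print("TensorFlow:", tf.__version__)                 # expect 1.10.0
print("Keras:", keras.__version__)                   # expect 2.2.0
print("GPU available:", tf.test.is_gpu_available())  # TF 1.x check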
Other dependencies:
pip install pandas -i https://mirrors.aliyun.com/pypi/simple
pip install Pillow -i https://mirrors.aliyun.com/pypi/simple
pip install scikit-learn -i https://mirrors.aliyun.com/pypi/simple
pip install matplotlib==3.1.1 -i https://mirrors.aliyun.com/pypi/simple
pip install opencv-python==4.1.1.26 -i https://mirrors.aliyun.com/pypi/simple
pip install tqdm
pip install moviepy==1.0.0 -i https://mirrors.aliyun.com/pypi/simple
pip install pydot
pip install mtcnn
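To confirm the extra dependencies all import cleanly, a small check script like this works (note the import names differ from the pip names for Pillow, scikit-learn, and opencv-python):

# check_deps.py -- confirm the extra dependencies are importable
import pandas
import PIL          # installed as Pillow
import sklearn      # installed as scikit-learn
import matplotlib
import cv2          # installed as opencv-python
import tqdm
import moviepy
import pydot
import mtcnn

print("opencv:", cv2.__version__)
print("all dependencies imported OK")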
You may also run into this error:
OSError: `pydot` failed to call GraphViz.Please install GraphViz (https://www.graphviz.org/) and ensure that its executables are in the $PATH.
Fix:
sudo apt install graphviz
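The error comes from Keras drawing the network architecture through pydot, which shells out to the GraphViz binaries. A minimal reproduction to confirm the fix (the toy model here is just an example, not FSA-Net):

# check_graphviz.py -- pydot/GraphViz are only needed for plotting the model graph
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import plot_model

model = Sequential([Dense(4, input_shape=(8,))])
plot_model(model, to_file='toy_model.png')  # raises the OSError above if GraphViz is missing
print("GraphViz OK, wrote toy_model.png")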
Another error that showed up when launching training:
DEBUG:root:Running training...
Epoch 1/90
2021-09-07 17:27:13.491941: E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Segmentation fault (core dumped)
The GPU memory was too small; after switching to a server with more memory, training ran fine.
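Besides moving to a bigger GPU, a common mitigation for CUDNN_STATUS_INTERNAL_ERROR on small cards is to stop TensorFlow from pre-allocating all GPU memory. This is a generic TF 1.x / Keras 2.2 trick, not something from the FSA-Net repo; add it near the top of FSANET_train.py if you want to try it:

# limit_gpu.py -- let TensorFlow allocate GPU memory on demand (TF 1.x / Keras 2.x)
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow memory as needed instead of grabbing it all
# config.gpu_options.per_process_gpu_memory_fraction = 0.7  # or cap the fraction explicitly
set_session(tf.Session(config=config))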
First download the 300W-LP dataset, then put 300W-LP and AFLW2000 under data/type1/ and build the databases:
cd data/type1
sh run_created_db_type1.sh
For the BIWI dataset, rename the hpdb folder from the Biwi Kinect Head Pose Database to BIWI, copy it into FSA-Net/data, and process it with TYY_create_db_biwi.py.
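After preprocessing you can spot-check the generated .npz databases with a few lines of numpy. I am assuming the arrays are stored under 'image' and 'pose' keys and that the file below exists; print db.files first and adjust the names to whatever the scripts actually wrote:

# check_db.py -- spot-check a preprocessed .npz database (key names assumed)
import numpy as np

db = np.load('data/type1/AFLW2000.npz')  # adjust path/filename to the file the script produced
print("keys:", db.files)
images, poses = db['image'], db['pose']  # assumed key names; use the printed keys if different
print("images:", images.shape, "poses:", poses.shape)  # poses should be (N, 3): yaw, pitch, roll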
Training
sh run_fsanet_train.sh
Edit run_fsanet_train.sh (it contains training commands for many model variants) and keep whichever model you want to train, e.g.:
KERAS_BACKEND=tensorflow python FSANET_train.py --batch_size 16 --nb_epochs 90 --model_type 5 --db_name '300W_LP'
Testing
sh run_fsanet_test.sh
The Keras model does not seem easy to convert to ONNX; you can instead use a PyTorch reimplementation of FSA-Net, for example:
https://github.com/omasaht/headpose-fsanet-pytorch
yydyz/headpose-fsanet-pytorch
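For reference, exporting a PyTorch FSA-Net to ONNX follows the standard torch.onnx.export pattern. The sketch below is only an outline: the FSANet import and constructor are hypothetical placeholders for whatever the chosen repo provides, and the 64x64 face-crop input size should be verified against that repo:

# export_onnx.py -- sketch of exporting a PyTorch FSA-Net to ONNX
import torch
from fsanet.model import FSANet  # hypothetical import; use the actual module/class from the repo

model = FSANet()                 # replace with the repo's constructor + load_state_dict(...)
model.eval()
dummy = torch.randn(1, 3, 64, 64)  # FSA-Net usually takes 64x64 face crops (verify for your repo)
torch.onnx.export(model, dummy, "fsanet.onnx",
                  input_names=["input"], output_names=["yaw_pitch_roll"],
                  opset_version=11)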