ST-GCN

Introduction

This repository holds the codebase, dataset and models for the paper:
Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. Sijie Yan, Yuanjun Xiong and Dahua Lin, AAAI 2018. [Arxiv Preprint]

Visualization of ST-GCN in Action

Our demo for skeleton-based action recognition:

ST-GCN is able to exploit local patterns and correlations in human skeletons.
The figures below show the neural response magnitude of each node in the last layer of our ST-GCN.
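
As a rough sketch of how a per-node response magnitude of this kind can be computed (the tensor shape and variable names here are illustrative assumptions, not the repository's actual code):

import torch

# Hypothetical last-layer ST-GCN activations with layout
# (N, C, T, V) = (batch, channels, frames, skeleton joints).
feature = torch.randn(1, 256, 75, 18)

# Response magnitude of each node: L2 norm over the channel dimension.
magnitude = feature.norm(dim=1)  # shape (N, T, V)
print(magnitude.shape)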

Prerequisites

  • Python3 (>3.5)
  • PyTorch
  • Openpose with Python API. (Optional: for demo only)
  • Other Python libraries can be installed by pip install -r requirements.txt

Installation

git clone https://github.com/yysijie/st-gcn.git; cd st-gcn
cd torchlight; python setup.py install; cd ..
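
To check that the installation succeeded, you can try importing the packages; the torchlight import name below is an assumption based on the setup step above:

import torch
import torchlight  # installed by the torchlight setup.py step

print(torch.__version__)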

Get pretrained models

We provide the pretrained model weights of our ST-GCN. The model weights can be downloaded by running the script

bash tools/get_models.sh

You can also obtain models from GoogleDrive or BaiduYun, and manually put them into ./models.

Demo

You can use the following commands to run the demo.

# with offline pose estimation
python main.py demo_offline [--video ${PATH_TO_VIDEO}] [--openpose ${PATH_TO_OPENPOSE}]

# with realtime pose estimation
python main.py demo [--video ${PATH_TO_VIDEO}] [--openpose ${PATH_TO_OPENPOSE}]

Optional arguments:

  • PATH_TO_OPENPOSE: It is required if the Openpose Python API is not in PYTHONPATH.
  • PATH_TO_VIDEO: Filename of the input video.

Data Preparation

数据准备

We experimented on two skeleton-based action recognition datasets: Kinetics-skeleton and NTU RGB+D.
Before training and testing, for convenience of fast data loading,
the datasets should be converted to the proper file structure.
You can download the pre-processed data from GoogleDrive and extract the files with

cd st-gcn
unzip <path to the downloaded data>

Otherwise, to process the raw data yourself,
please refer to the guidance below.

Kinetics-skeleton

Kinetics is a video-based dataset for action recognition that provides only raw video clips without skeleton data. To obtain the joint locations, we first resized all videos to a resolution of 340x256 and converted the frame rate to 30 fps. Then, we extracted skeletons from each frame in Kinetics with Openpose. The extracted skeleton data, which we call Kinetics-skeleton (7.5GB), can be directly downloaded from GoogleDrive or BaiduYun.

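As a minimal sketch of that preprocessing step (it assumes ffmpeg is installed; the file names are placeholders, not part of the repository):

import subprocess

# Resize a clip to 340x256 and convert its frame rate to 30 fps with ffmpeg,
# matching the preprocessing described above.
subprocess.run([
    'ffmpeg', '-i', 'input_clip.mp4',
    '-vf', 'scale=340:256',  # target resolution
    '-r', '30',              # target frame rate
    'resized_clip.mp4',
], check=True)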

After uncompressing, rebuild the database with this command:

python tools/kinetics_gendata.py --data_path <path to kinetics-skeleton data>
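
To sanity-check the generated files, a hedged sketch follows; the output path and the (N, C, T, V, M) layout are assumptions about the generated data, so verify them against your own output:

import numpy as np

# Inspect the generated skeleton array; the layout is assumed to be
# (N, C, T, V, M) = (samples, channels, frames, joints, persons).
data = np.load('data/Kinetics/kinetics-skeleton/train_data.npy', mmap_mode='r')
print(data.shape)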

NTU RGB+D

NTU RGB+D can be downloaded from their website.
Only the 3D skeletons (5.8GB) modality is required in our experiments. After that, this command should be used to build the database for training or evaluation:

python tools/ntu_gendata.py --data_path <path to 3D skeletons data>

where <path to 3D skeletons data> points to the 3D skeletons modality of the NTU RGB+D dataset you downloaded.

Testing Pretrained Models

To evaluate the ST-GCN model pretrained on Kinetics-skeleton, run

python main.py recognition -c config/st_gcn/kinetics-skeleton/test.yaml

For cross-view evaluation in NTU RGB+D, run

python main.py recognition -c config/st_gcn/ntu-xview/test.yaml

For cross-subject evaluation in NTU RGB+D, run

python main.py recognition -c config/st_gcn/ntu-xsub/test.yaml

To speed up evaluation with multi-GPU inference, or to change the batch size to reduce memory cost, set --test_batch_size and --device like this:

python main.py recognition -c <config file> --test_batch_size <batch size> --device <gpu0> <gpu1> ...
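
Multi-GPU inference of this kind generally works by replicating the model and splitting each batch across the listed devices. A minimal, self-contained sketch with torch.nn.DataParallel illustrating the mechanism (not the repository's exact processor code):

import torch
import torch.nn as nn

# A stand-in model; the real recognizer is built from the config and weights.
model = nn.Linear(10, 5).cuda()

# Replicate across GPUs 0 and 1; each input batch is split between them.
parallel_model = nn.DataParallel(model, device_ids=[0, 1])

x = torch.randn(8, 10).cuda()  # a batch of 8 is split 4/4 across the GPUs
with torch.no_grad():
    y = parallel_model(x)
print(y.shape)  # torch.Size([8, 5])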

Results

The expected Top-1 accuracy of the provided models is shown here:

Model           Kinetics-skeleton (%)   NTU RGB+D Cross View (%)   NTU RGB+D Cross Subject (%)
Baseline [1]    20.3                    83.1                       74.3
ST-GCN (Ours)   31.6                    88.8                       81.6

[1] Kim, T. S., and Reiter, A. 2017. Interpretable 3D human action analysis with temporal convolutional networks. In BNMW CVPRW.

Training

To train a new ST-GCN model, run

python main.py recognition -c config/st_gcn/<dataset>/train.yaml [--work_dir <work folder>]

where <dataset> must be ntu-xsub, ntu-xview or kinetics-skeleton, depending on the dataset you want to use.
The training results, including model weights, configurations and logging files, will be saved under ./work_dir by default, or under <work folder> if you specify one.

You can modify training parameters such as work_dir, batch_size, step, base_lr and device on the command line or in the configuration files. The order of priority is: command line > config file > default parameter. For more information, use main.py -h.
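
Because the priority rule matters in practice, here is a small, hypothetical sketch of how a command line > config file > default merge can be implemented; it illustrates the rule only and is not the actual logic in main.py:

import argparse

import yaml  # PyYAML

# Default parameters (lowest priority).
settings = {'base_lr': 0.1, 'batch_size': 64, 'device': [0]}

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--config', default=None)
parser.add_argument('--base_lr', type=float, default=None)
parser.add_argument('--batch_size', type=int, default=None)
args = parser.parse_args()

# Values from the config file override the defaults (middle priority).
if args.config is not None:
    with open(args.config) as f:
        settings.update(yaml.safe_load(f))

# Explicit command-line flags override everything (highest priority).
for key in ('base_lr', 'batch_size'):
    value = getattr(args, key)
    if value is not None:
        settings[key] = value

print(settings)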

Finally, custom model evaluation can be performed with this command, as mentioned above:

python main.py recognition -c config/st_gcn/<dataset>/test.yaml --weights <path to model weights>

Citation

Please cite the following paper if you use this repository in your research.

@inproceedings{stgcn2018aaai,
  title     = {Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition},
  author    = {Sijie Yan and Yuanjun Xiong and Dahua Lin},
  booktitle = {AAAI},
  year      = {2018},
}
