Making a still image move sounds like pure fantasy, but GANs can actually pull it off.
The steps are as follows:
(1) Get the code
git clone https://github.com/AliaksandrSiarohin/first-order-model
If downloading from GitHub is inconvenient, a mirror is available at https://pan.baidu.com/s/1eI3_2KN5ctoHHCXqsSDMsg (extraction code: zua5)
(2) Set up the environment
cd first-order-model
pip install -r requirements.txt
conda install ffmpeg -c conda-forge
If pip install -r requirements.txt fails with version conflicts, install the affected packages manually one by one.
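After installing, a quick sanity check can confirm the key dependencies are importable. This is a minimal sketch; the module list below is illustrative, not the full contents of requirements.txt:

```python
from importlib import import_module

# Try importing a few of the packages requirements.txt pulls in.
# Anything reported as missing should be installed manually.
report = {}
for name in ["torch", "imageio", "yaml", "skimage"]:
    try:
        import_module(name)
        report[name] = "OK"
    except ImportError:
        report[name] = "missing - install manually"

for name, status in report.items():
    print(f"{name}: {status}")
```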
(3) Prepare a driving video and a source image
You can think of it as the image imitating the motion in the video, producing a new video.
Any video and image will do; sample files are available here:
https://drive.google.com/drive/folders/1kZ1gCnpfU0BnpdU47pLM_TQ6RypDDqgw
(4) Download the pretrained model files
Download link: https://drive.google.com/drive/folders/1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH
Backup link: https://pan.baidu.com/s/1Oitfjrekf86PJ55hIVlrxA (extraction code: t9z2)
(5) Edit demo.py
Hard-code the paths you want to use, as shown below (only the lines that need changing are listed):
parser = ArgumentParser()
parser.add_argument("--config", required=True, help="path to config")
parser.add_argument("--checkpoint", default='vox-adv-cpk.pth.tar', help="path to checkpoint to restore")
parser.add_argument("--source_image", default='sup-mat/source.png', help="path to source image")
parser.add_argument("--driving_video", default='sup-mat/04.mp4', help="path to driving video")
parser.add_argument("--result_video", default='result.mp4', help="path to output")
parser.add_argument("--relative", dest="relative", action="store_true", help="use relative or absolute keypoint coordinates")
parser.add_argument("--adapt_scale", dest="adapt_scale", action="store_true", help="adapt movement scale based on convex hull of keypoints")
parser.add_argument("--find_best_frame", dest="find_best_frame", action="store_true",
                    help="Generate from the frame that is the most aligned with source. (Only for faces, requires face_alignment lib)")
parser.add_argument("--best_frame", dest="best_frame", type=int, default=None,
help="Set frame to start from.")
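The point of these defaults is that only --config remains mandatory; any flag you omit falls back to the hard-coded path. A minimal sketch of how argparse fills them in (a trimmed-down copy of the parser above, not the full demo.py):

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument("--config", required=True, help="path to config")
parser.add_argument("--checkpoint", default='vox-adv-cpk.pth.tar')
parser.add_argument("--source_image", default='sup-mat/source.png')
parser.add_argument("--relative", dest="relative", action="store_true")

# Parse a command line that only supplies --config and --relative.
opt = parser.parse_args(["--config", "config/vox-256.yaml", "--relative"])
print(opt.checkpoint)    # vox-adv-cpk.pth.tar (the default)
print(opt.source_image)  # sup-mat/source.png (the default)
print(opt.relative)      # True
```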
(6) Run it
python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox-cpk.pth.tar --relative --adapt_scale
This produces a result.mp4 file in which the source image follows the driving video's motion.
For example, here is mine:
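If you want to script step (6), say to animate several source images with the same driving video, a small wrapper can assemble the command. This is a hypothetical helper (build_demo_command is not part of the repo), shown only to make the flag layout explicit:

```python
def build_demo_command(config, checkpoint, source_image, driving_video,
                       result_video="result.mp4"):
    """Assemble the demo.py invocation from step (6) as an argument list."""
    return [
        "python", "demo.py",
        "--config", config,
        "--checkpoint", checkpoint,
        "--source_image", source_image,
        "--driving_video", driving_video,
        "--result_video", result_video,
        "--relative", "--adapt_scale",
    ]

cmd = build_demo_command("config/vox-256.yaml", "checkpoints/vox-cpk.pth.tar",
                         "sup-mat/source.png", "sup-mat/04.mp4")
print(" ".join(cmd))
```

The list form can be passed straight to subprocess.run(cmd) without shell quoting concerns.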