Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models
This is a CVPR 2018 paper on cross-modal retrieval. Paper link: https://arxiv.org/abs/1711.06420. The first author is a PhD student at Nanyang Technological University (author's homepage: http://jxgu.cc/), and the code has been released at https://github.com/ujiuxiang/NLP_Practice.PyTorch/tree/master/cross_modal_retri