Survey article
Cross-media analysis and reasoning: advances and directions
Yu-xin PENG et al.
Front Inform Technol Electron Eng 浙江大学学报(英文版)2017 18(1):44-57
This paper mainly discusses seven problems:
(1) theory and model for cross-media uniform representation;
(2) cross-media correlation understanding and deep mining;
(3) cross-media knowledge graph construction and learning methodologies;
(4) cross-media knowledge evolution and reasoning;
(5) cross-media description and generation;
(6) cross-media intelligent engines;
(7) cross-media intelligent applications.
Personally I find the first part the most important: it broadly covers the key method models in the development of cross-modal research, though only at a high level. Another overview paper covers the concrete methods, datasets, accuracies, and so on (I plan to read that one next week). Below I summarize the key points of the first five parts based on my reading (the last two parts are essentially research directions and significance):
- theory and model for cross-media uniform representation
The authors argue that, for cross-modal information residing in heterogeneous spaces, two questions need attention:
- how to build the shared space.
- how to project data into it.
The paper summarizes several models and methods:
CCA (Rasiwasia et al., 2010): learns a commonly shared space by maximizing the correlation between pairwise co-occurring heterogeneous data, and performs the projection with linear functions.
Deep CCA (Andrew et al., 2013): extends CCA with deep learning to capture correlations more comprehensively than CCA and kernel CCA.
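To make the CCA idea concrete — learning linear projections of two feature spaces that maximize the correlation of paired samples — here is a minimal NumPy sketch using whitening plus an SVD. All data is synthetic and the variable names (e.g., `X_img`, `X_txt`) are invented for illustration:

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """Linear CCA: project two feature spaces into a shared k-dim space
    where paired samples are maximally correlated (whitening + SVD)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Lx = np.linalg.cholesky(X.T @ X / n + reg * np.eye(X.shape[1]))
    Ly = np.linalg.cholesky(Y.T @ Y / n + reg * np.eye(Y.shape[1]))
    Cxy = X.T @ Y / n
    # SVD of the whitened cross-covariance yields the canonical directions
    M = np.linalg.solve(Ly, np.linalg.solve(Lx, Cxy).T).T
    U, s, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U[:, :k])
    Wy = np.linalg.solve(Ly.T, Vt.T[:, :k])
    return X @ Wx, Y @ Wy, s[:k]

# synthetic paired "image" and "text" features sharing one latent signal
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 4))
X_img = latent @ rng.normal(size=(4, 20)) + 0.05 * rng.normal(size=(500, 20))
X_txt = latent @ rng.normal(size=(4, 30)) + 0.05 * rng.normal(size=(500, 30))
Z_img, Z_txt, corr = cca(X_img, X_txt, k=4)
```

Since both views are generated from the same latent factors, the top canonical correlations in `corr` come out close to 1, and matched rows of `Z_img` and `Z_txt` are strongly correlated — this shared space is exactly what cross-media retrieval then searches in.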
MMD (Yang et al., 2008): the multimedia document (MMD) model. Each MMD is a set of media objects of different modalities that carry the same semantics, and distances between MMDs are computed with respect to each modality.
RBF network (Daras et al., 2012): a radial basis function (RBF) network that addresses the problem of missing modalities.
Topic models:
LDA (Roller and Schulte im Walde, 2013): integrated visual features into latent Dirichlet allocation (LDA) and proposed a multimodal LDA model to learn representations for textual and visual data.
M3R (Wang Y et al., 2014): the multimodal mutual topic reinforce model, which seeks to discover mutually consistent semantic topics via appropriate interactions between model factors. These schemes represent data as topic distributions, and similarity is measured by the likelihood of the observed data in terms of latent topics.
PFAR (Mao et al., 2013): parallel field alignment retrieval, a manifold-based model that treats cross-media retrieval as a manifold alignment problem using parallel fields.
Deep learning:
Autoencoder model (Ngiam et al., 2011): learns uniform representations for speech audio coupled with videos of the corresponding lip movements.
Deep restricted Boltzmann machine (Srivastava and Salakhutdinov, 2012): learns joint representations for multimodal data.
Deep CCA (Andrew et al., 2013): a deep extension of the traditional CCA method.
DT-RNNs (Socher et al., 2014): dependency tree recursive neural networks, which use dependency trees to embed sentences into a vector space in order to retrieve images described by those sentences.
Autoencoders (Feng et al., 2014; Wang W et al., 2014): applied autoencoders to perform cross-modal retrieval.
Multimodal deep learning scheme (Wang et al., 2015): learns accurate and compact multimodal representations, facilitating efficient similarity search and other related applications on multimodal data.
ICMAE (Zhang et al., 2014a): an attribute discovery approach, the independent component multimodal autoencoder, which learns a shared high-level representation to identify attributes from a set of image-text pairs. Zhang et al. (2016) further proposed learning an image-text uniform representation under weak supervision from web social multimedia content, which is noisy, sparse, and diverse.
Deep-SM (Wei et al., 2017): a deep semantic matching method that uses a convolutional neural network and a fully connected network to map images and texts into their label vectors, achieving state-of-the-art accuracy.
CMDN (Peng et al., 2016a): the cross-media multiple deep network, a hierarchical structure with multiple deep networks that simultaneously preserves intra-media and inter-media information to further improve retrieval accuracy.
The Deep-SM method (Wei et al., 2017) mentioned in this part comes, as far as I can tell, from the paper "Cross-Modal Retrieval With CNN Visual Features: A New Baseline"; I plan to find time to read it next.
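Whatever network produces the uniform representation (Deep-SM's label vectors, an autoencoder's codes, CCA projections), the retrieval step itself reduces to nearest-neighbor search by a similarity measure such as cosine. A minimal sketch with made-up label-space vectors:

```python
import numpy as np

def cosine_sim(A, B):
    """Pairwise cosine similarity between rows of A and rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

# hypothetical label-space vectors (e.g., class-probability outputs) for
# 3 images and 3 texts; row i of each matrix is a matching image-text pair
img_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.8, 0.1],
                     [0.0, 0.2, 0.8]])
txt_vecs = np.array([[0.8, 0.2, 0.0],
                     [0.2, 0.7, 0.1],
                     [0.1, 0.1, 0.8]])

sims = cosine_sim(img_vecs, txt_vecs)
# image-to-text retrieval: rank texts by similarity for each image query
ranking = np.argsort(-sims, axis=1)
```

Here each image's top-ranked text is its true pair, because matched pairs were constructed to point in similar directions in the shared label space.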
- cross-media correlation understanding and deep mining;
Basically, existing studies build correlation learning on cross-media data with representation learning, metric learning, and matrix factorization, which are usually performed in a batch-learning fashion and capture only the first-order correlations among data objects. Developing more effective learning mechanisms that capture high-order correlations and adapt to the evolution naturally present among heterogeneous entities and heterogeneous relations is the key research issue for future studies in cross-media correlation understanding.
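Of the batch schemes named above, matrix factorization is easy to illustrate: two modality feature matrices are jointly factorized with one shared latent factor, here via a toy alternating-least-squares loop (function name and synthetic data are invented for the example):

```python
import numpy as np

def shared_factorization(X, Y, k, iters=100, seed=0):
    """Jointly factorize X ~ U @ Vx and Y ~ U @ Vy with a shared latent
    factor U (rows = samples), by alternating least squares."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(X.shape[0], k))
    for _ in range(iters):
        Vx = np.linalg.lstsq(U, X, rcond=None)[0]     # k x dx loadings
        Vy = np.linalg.lstsq(U, Y, rcond=None)[0]     # k x dy loadings
        W = np.hstack([Vx, Vy])                       # stack both views
        Z = np.hstack([X, Y])
        U = np.linalg.lstsq(W.T, Z.T, rcond=None)[0].T  # refit shared factor
    # refit loadings against the final U before returning
    Vx = np.linalg.lstsq(U, X, rcond=None)[0]
    Vy = np.linalg.lstsq(U, Y, rcond=None)[0]
    return U, Vx, Vy

# synthetic data: both modalities generated from one shared latent factor
rng = np.random.default_rng(1)
L = rng.normal(size=(100, 4))
X = L @ rng.normal(size=(4, 12))
Y = L @ rng.normal(size=(4, 15))
U, Vx, Vy = shared_factorization(X, Y, k=4)
err = np.linalg.norm(X - U @ Vx) / np.linalg.norm(X)
```

Because the shared factor `U` only models pairwise (sample-to-sample) co-occurrence, this is exactly the kind of first-order, batch correlation learning the paragraph above says future work should move beyond.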
- cross-media knowledge graph construction and learning methodologies;
Example application of knowledge graphs: the Knowledge Graph released by Google in 2012 (Singhal, 2012) provided a next-generation information retrieval service with ontology-based intelligent search over free-style user queries. Similar techniques, e.g., Safari, were developed based on achievements in entity-centric search (Lin et al., 2012).
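At its simplest, the graph behind such entity-centric search is a set of (subject, predicate, object) triples that can be pattern-matched; a toy sketch (entities and relations are invented for illustration, not taken from any real knowledge base):

```python
# toy triple store: each fact is a (subject, predicate, object) tuple
triples = [
    ("AlphaGo", "developed_by", "DeepMind"),
    ("DeepMind", "owned_by", "Google"),
    ("AlphaGo", "plays", "Go"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# e.g. "everything known about AlphaGo"
alphago_facts = match(s="AlphaGo")
```

A cross-media knowledge graph extends this by letting the subjects and objects be entities grounded in images, video, and text rather than strings alone.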
- cross-media knowledge evolution and reasoning;
Reinforcement learning and transfer learning can be helpful for constructing more complex intelligent reasoning systems (Lazaric, 2012). Furthermore, lifelong learning (Lazer et al., 2014) is a key capability of advanced intelligence systems.
Example: Google DeepMind has constructed a machine intelligence system based on a reinforcement learning algorithm (Gibney, 2015). AlphaGo, developed by Google DeepMind, was the first computer Go program to beat a top professional human Go player; it even beat the world champion Lee Sedol in a five-game match.
Visual question answering (VQA) can be regarded as a good example of cross-media reasoning (Antol et al., 2015). VQA aims to provide natural-language answers to questions posed as a combination of an image and natural language.
- cross-media description and generation;
Existing studies on visual content description can be divided into three groups.
1. The first group is based on language generation: it first understands images in terms of objects, attributes, scene types, and their correlations, and then feeds these semantic understanding outputs into natural language generation techniques to produce a sentence description.
2. The second group covers retrieval-based methods, which retrieve content similar to a query and transfer the descriptions of the similar set to the query.
3. The third group is based on deep neural networks, employing the CNN-RNN encoder-decoder framework: a convolutional neural network (CNN) extracts features from images, and a recursive neural network (RNN) (Socher et al., 2011) or its variant, the long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997), is used to encode and decode the language model.
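The third group's decoding loop can be sketched end to end: a feature vector standing in for the CNN output seeds a plain RNN, which greedily emits one token per step. The weights below are random and untrained, so the emitted "caption" is meaningless — the sketch only shows the structure of the loop, not a working captioner:

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["<bos>", "<eos>", "a", "dog", "runs"]
V, H, D = len(vocab), 16, 8          # vocab size, hidden size, image-feature size

# randomly initialized weights (a real system learns these end to end)
W_img = rng.normal(0, 0.1, (D, H))   # CNN feature -> initial hidden state
W_emb = rng.normal(0, 0.1, (V, H))   # token embeddings
W_xh  = rng.normal(0, 0.1, (H, H))   # input-to-hidden weights
W_hh  = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden (recurrent) weights
W_out = rng.normal(0, 0.1, (H, V))   # hidden state -> vocabulary logits

def describe(img_feat, max_len=10):
    """Greedy decoding: the image feature seeds the RNN state, then
    one token is emitted per step until <eos> or max_len."""
    h = np.tanh(img_feat @ W_img)
    tok, out = vocab.index("<bos>"), []
    for _ in range(max_len):
        h = np.tanh(W_emb[tok] @ W_xh + h @ W_hh)  # vanilla RNN cell
        tok = int(np.argmax(h @ W_out))            # greedy token choice
        out.append(vocab[tok])
        if vocab[tok] == "<eos>":
            break
    return out

caption = describe(rng.normal(size=D))
```

In practice the greedy `argmax` is usually replaced by beam search, and the vanilla cell by an LSTM, but the CNN-feature-seeds-decoder pattern is the same.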