ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

Table of Contents

  • ViLBERT: Extending BERT to Jointly Represent Images and Text
  • Experimental Settings
  • References

ViLBERT: Vision-and-Language BERT

ViLBERT: Extending BERT to Jointly Represent Images and Text

  • Two-stream Architecture: ViLBERT uses a two-stream architecture in which two parallel BERT-style models separately process the image region features $v_1, \dots, v_{\mathcal{T}}$ and the text input $w_0, \dots, w_T$ (the text stream can be initialized from a pretrained BERT). Each stream is a stack of transformer blocks (TRM) and co-attentional transformer layers (Co-TRM), where the Co-TRM layers enable information exchange between the two modalities. The model finally outputs $(h_{v_0}, \dots, h_{v_{\mathcal{T}}})$ and $(h_{w_0}, \dots, h_{w_T})$.
    [Figure: ViLBERT's two-stream architecture.] Note that information exchange between the two streams is restricted to specific layers, and the text stream receives substantially more processing before interacting with the visual features, since the input image region features are already high-level features produced by a CNN (this structure allows for variable depths for each modality and enables sparse interaction through co-attention).
  • Co-Attentional Transformer Layers (Co-TRM). Given intermediate visual and linguistic representations, a Co-TRM layer computes query, key, and value matrices as in a standard transformer block, but the keys and values from each modality are passed into the other modality's multi-headed attention block, so each stream is conditioned on the other (a minimal sketch is given after this list).
    [Figure: co-attentional transformer layer (Co-TRM).]
  • Image Representations. The image region features are the visual features of bounding boxes extracted by a pretrained Faster R-CNN: only boxes whose detection confidence exceeds a threshold are kept, with 10 to 36 high-scoring boxes per image. Since image regions lack a natural ordering, their spatial locations are instead encoded as a 5-d vector consisting of the region position (normalized top-left and bottom-right coordinates) and the fraction of image area covered. This vector is projected to the same dimension as the visual features and summed with them to obtain the final image representations. Finally, a special [IMG] token is prepended to the image input to represent the whole image (i.e. mean-pooled visual features with a spatial encoding corresponding to the entire image). See the spatial-encoding sketch after this list.
  • Training Tasks and Objectives. (Pretraining is performed on the Conceptual Captions dataset.) A sketch of both objectives is given after this list.
    • (1) masked multi-modal modelling: analogous to BERT's MLM, 15% of the words and image regions are randomly masked (a masked image region has its features zeroed out 90% of the time; words are handled exactly as in BERT), and the model must reconstruct the masked words or predict a distribution over semantic classes for each masked image region (trained by minimizing the KL divergence against the class distribution output by the Faster R-CNN detector).
    • (2) multi-modal alignment prediction: the model must predict whether an image and a piece of text are aligned. $h_{\text{IMG}}$ and $h_{\text{CLS}}$ are taken as holistic representations of the visual and linguistic inputs; their element-wise product is fed into a linear layer to produce the final prediction (negative pairs are created by randomly replacing either the image or the caption of an aligned pair).
      [Figure: ViLBERT pretraining tasks.]
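
A minimal sketch (assuming PyTorch) of the core idea behind a Co-TRM layer: each stream's queries attend over the other stream's keys and values. The hidden sizes, head count, and the name `CoAttentionLayer` are illustrative assumptions rather than the paper's exact configuration, and the per-modality feed-forward sub-layers are omitted.

```python
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    """Co-attentional transformer layer: each modality attends over the other one."""

    def __init__(self, hidden_v=1024, hidden_w=768, num_heads=8):
        super().__init__()
        # Visual queries attend over linguistic keys/values.
        self.attn_v = nn.MultiheadAttention(hidden_v, num_heads,
                                            kdim=hidden_w, vdim=hidden_w,
                                            batch_first=True)
        # Linguistic queries attend over visual keys/values.
        self.attn_w = nn.MultiheadAttention(hidden_w, num_heads,
                                            kdim=hidden_v, vdim=hidden_v,
                                            batch_first=True)
        self.norm_v = nn.LayerNorm(hidden_v)
        self.norm_w = nn.LayerNorm(hidden_w)

    def forward(self, h_v, h_w):
        # h_v: (B, T_v, hidden_v) image-region states; h_w: (B, T_w, hidden_w) token states.
        v_out, _ = self.attn_v(query=h_v, key=h_w, value=h_w)
        w_out, _ = self.attn_w(query=h_w, key=h_v, value=h_v)
        # Residual connection + LayerNorm, as in a standard transformer block.
        return self.norm_v(h_v + v_out), self.norm_w(h_w + w_out)

# Usage: interleave such Co-TRM layers with ordinary per-modality TRM blocks.
layer = CoAttentionLayer()
h_v, h_w = layer(torch.randn(2, 36, 1024), torch.randn(2, 20, 768))
```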
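
A minimal sketch (PyTorch assumed) of the 5-d spatial encoding for image regions described above; the projection size, box values, and function name are hypothetical.

```python
import torch
import torch.nn as nn

def region_spatial_encoding(boxes, image_w, image_h):
    """boxes: (N, 4) tensor of [x1, y1, x2, y2] in pixels -> (N, 5) spatial features."""
    x1, y1, x2, y2 = boxes.unbind(-1)
    return torch.stack([
        x1 / image_w, y1 / image_h,                    # normalized top-left corner
        x2 / image_w, y2 / image_h,                    # normalized bottom-right corner
        (x2 - x1) * (y2 - y1) / (image_w * image_h),   # fraction of image area covered
    ], dim=-1)

# Project the 5-d encoding to the visual-feature dimension and sum with the region features.
visual_dim = 2048                                 # e.g. the detector's pooled feature size
spatial_proj = nn.Linear(5, visual_dim)

boxes = torch.tensor([[10., 20., 110., 220.]])    # one hypothetical bounding box
region_feats = torch.randn(1, visual_dim)         # Faster R-CNN features for that box
region_input = region_feats + spatial_proj(region_spatial_encoding(boxes, 640.0, 480.0))
```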
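
A sketch of the two pretraining objectives (PyTorch assumed). The masking probabilities follow the description above; the tensor shapes, the shared dimension of the two holistic representations, and the helper names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# (1) Masked multi-modal modelling, image side.
def mask_regions(region_feats, p_mask=0.15, p_zero=0.9):
    """Mask ~15% of the regions; a masked region's features are zeroed 90% of the time."""
    mask = torch.rand(region_feats.shape[:2]) < p_mask
    zeroed = mask & (torch.rand(region_feats.shape[:2]) < p_zero)
    out = region_feats.clone()
    out[zeroed] = 0.0
    return out, mask

def masked_region_loss(pred_logits, detector_probs, mask):
    """KL divergence between the predicted class distribution and the Faster R-CNN
    class distribution, averaged over the masked regions only."""
    # pred_logits, detector_probs: (B, T_v, C); mask: (B, T_v) bool.
    kl = F.kl_div(F.log_softmax(pred_logits, dim=-1), detector_probs,
                  reduction='none').sum(-1)
    return (kl * mask).sum() / mask.sum().clamp(min=1)

# (2) Multi-modal alignment prediction: element-wise product of the holistic [IMG]
# and [CLS] representations, fed into a linear classifier (both are assumed to have
# been projected to a common dimension here).
class AlignmentHead(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.fc = nn.Linear(dim, 2)   # aligned vs. not aligned

    def forward(self, h_img, h_cls):
        return self.fc(h_img * h_cls)
```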

Experimental Settings

  • We apply our pretrained model as a base for four established vision-and-language tasks – Visual Question Answering (VQA), Visual Commonsense Reasoning (VCR) (Q$\rightarrow$A, QA$\rightarrow$R), Grounding Referring Expressions (localize an image region given a natural language reference), and Caption-Based Image Retrieval – setting state-of-the-art on all four tasks.
    [Figures: results of ViLBERT on the four downstream tasks.]

References

  • Lu, Jiasen, Dhruv Batra, Devi Parikh, and Stefan Lee. "ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks." NeurIPS 2019.
