ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
Table of Contents
- ViLBERT: Extending BERT to Jointly Represent Images and Text
- Experimental Settings
- References
ViLBERT: Vision-and-Language BERT
ViLBERT: Extending BERT to Jointly Represent Images and Text
- Two-stream Architecture: ViLBERT adopts a two-stream architecture in which two parallel BERT-style models separately process the image region features $v_1, \ldots, v_{\mathcal{T}}$ and the text input $w_0, \ldots, w_T$ (the text stream's parameters can be initialized from a pretrained BERT). Each stream consists of a series of transformer blocks (TRM) and co-attentional transformer layers (Co-TRM), where the Co-TRM layers enable information exchange between the two modalities. The model finally outputs $(h_{v_0}, \ldots, h_{v_{\mathcal{T}}})$ and $(h_{w_0}, \ldots, h_{w_T})$.
Note that information exchange between the two streams is restricted to specific layers. Moreover, since the input image region features are already high-level features produced by a CNN, the text stream performs additional processing before interacting with the visual features (this structure allows for variable depths for each modality and enables sparse interaction through co-attention).
- Co-Attentional Transformer Layers (Co-TRM). A Co-TRM layer computes standard multi-head attention, except that the keys and values of each stream come from the other stream: the visual stream attends over the linguistic features and the linguistic stream attends over the visual features, producing attention-pooled features for each modality conditioned on the other.
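The key-value swap at the heart of a Co-TRM layer can be sketched in a few lines of NumPy. This is a minimal single-head illustration, not the full multi-head layer; the random matrices stand in for learned projection weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(h_query, h_context, d_k, seed=0):
    """Single-head co-attention: queries come from one stream,
    keys and values from the OTHER stream (the Co-TRM core idea)."""
    rng = np.random.default_rng(seed)
    # Hypothetical random projections standing in for learned weights.
    W_q = rng.standard_normal((h_query.shape[-1], d_k))
    W_k = rng.standard_normal((h_context.shape[-1], d_k))
    W_v = rng.standard_normal((h_context.shape[-1], d_k))
    Q, K, V = h_query @ W_q, h_context @ W_k, h_context @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V

# Visual stream attends over text; text stream attends over image regions.
h_v = np.random.default_rng(1).standard_normal((10, 32))  # 10 image regions
h_w = np.random.default_rng(2).standard_normal((16, 32))  # 16 text tokens
v_attended = co_attention(h_v, h_w, d_k=32)  # visual queries, text keys/values
w_attended = co_attention(h_w, h_v, d_k=32)  # text queries, visual keys/values
print(v_attended.shape, w_attended.shape)    # (10, 32) (16, 32)
```

Each output keeps the sequence length of the query stream while pooling information from the other modality, which is why the two streams can have different depths and dimensions.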
- Image Representations. The image region features are the visual features of the bounding boxes extracted by a pretrained Faster R-CNN. Selected boxes must exceed a confidence threshold, and only 10 to 36 high-scoring boxes are kept per image. Since image regions lack a natural ordering, the spatial location of each region is instead encoded as a 5-d vector consisting of the region position (normalized top-left and bottom-right coordinates) and the fraction of image area covered, i.e. $(x_{\min}/W,\ y_{\min}/H,\ x_{\max}/W,\ y_{\max}/H,\ wh/WH)$. This vector is projected to the same dimension as the visual features and added to them, giving the final image representations. Finally, a special token `[IMG]` is prepended to the image feature sequence to represent the entire image (i.e. mean-pooled visual features with a spatial encoding corresponding to the entire image).
- Training Tasks and Objectives. (The pretraining dataset is Conceptual Captions.)
- (1) masked multi-modal modelling: analogous to BERT's MLM, 15% of the words and image regions are randomly masked (the features of a selected image region are zeroed out with 90% probability; words are handled exactly as in BERT). The model must then reconstruct the masked words and, for each masked image region, predict a distribution over semantic classes (minimizing the KL divergence to the detector's output distribution).
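The region-masking rule above (select ~15%, zero out a selected region's features with 90% probability) can be sketched as follows; the shapes and the `mask_regions` helper are illustrative, not from the paper:

```python
import numpy as np

def mask_regions(features, p_mask=0.15, p_zero=0.9, seed=0):
    """Randomly select ~p_mask of the image regions; zero out a selected
    region's features with probability p_zero (otherwise leave it intact).
    The model is then trained to predict each masked region's semantic
    class distribution (KL divergence to the detector's output)."""
    rng = np.random.default_rng(seed)
    feats = features.copy()
    masked = rng.random(len(feats)) < p_mask            # regions to predict
    zeroed = masked & (rng.random(len(feats)) < p_zero) # regions also zeroed
    feats[zeroed] = 0.0
    return feats, masked

regions = np.ones((36, 2048))  # 36 regions, 2048-d Faster R-CNN features
feats, masked = mask_regions(regions)
print(masked.sum(), "regions selected;",
      int((feats.sum(axis=1) == 0).sum()), "zeroed")
```

Note that, as with BERT's 80/10/10 word-masking scheme, some selected regions keep their original features, so the model cannot rely on zeroed inputs alone to locate the prediction targets.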
- (2) multi-modal alignment prediction: the model must predict whether an image and a text are matched. $h_{\text{IMG}}$ and $h_{\text{CLS}}$ serve as holistic representations of the visual and linguistic inputs; their element-wise product is fed into a linear layer to produce the final prediction (negative pairs are created by randomly replacing either the image or the caption of a matched pair).
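The alignment head is simple enough to sketch directly; the weight matrix here is a hypothetical stand-in for the learned linear layer:

```python
import numpy as np

def alignment_logit(h_img, h_cls, W, b=0.0):
    """Element-wise product of the two holistic representations,
    followed by a linear layer producing a single alignment logit."""
    fused = h_img * h_cls  # element-wise product fuses the two modalities
    return fused @ W + b   # scalar logit: aligned vs. not aligned

rng = np.random.default_rng(0)
d = 1024
h_img, h_cls = rng.standard_normal(d), rng.standard_normal(d)
W = rng.standard_normal((d, 1))  # hypothetical learned parameters
logit = alignment_logit(h_img, h_cls, W)
print(logit.shape)  # (1,)
```

The element-wise product forces the prediction to depend on per-dimension agreement between the two modalities rather than on either representation alone.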
Experimental Settings
- We apply our pretrained model as a base for four established vision-and-language tasks – Visual Question Answering (VQA), Visual Commonsense Reasoning (VCR) (Q → A, QA → R), Grounding Referring Expressions (localize an image region given a natural language reference), and Caption-Based Image Retrieval – setting state-of-the-art on all four tasks.
References
- ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks