ACL 2020 Paper Roundup: Table of Contents

  • ACL 2020 Paper Roundup (Main Conference)
    • ACL 2020 Accepted Papers List
      • Best Paper
      • Honorable Mention Papers – Main Conference
      • Best Theme Paper
      • Honorable Mention Paper – Theme
      • Best Demonstration Paper
      • Honorable Mention Papers – Demonstrations
    • Papers by Category (directions I'm interested in, grouped by title; work in progress)
      • Pre-training / Language Models
      • Information Extraction
      • Relation Extraction
      • Event Extraction
      • Text Generation (Non-dialogue)
      • Fundamental Tasks (mostly NER)
      • Knowledge Graphs
      • Graph Convolutional Networks
      • MAY BE USEFUL
      • SOUNDS INTERESTING
      • BERTology

ACL 2020 Paper Roundup (Main Conference)

Lately I have felt that my own paper-writing skills fall short, so I plan to learn from the experts. As a first step, here is a rough roundup of ACL 2020 for later study; the content will be expanded over time, either here or in follow-up posts.

ACL 2020 Accepted Papers List

Full list of ACL 2020 accepted papers: https://acl2020.org/program/accepted/
ACL 2020 Best Papers: https://acl2020.org/blog/ACL-2020-best-papers/
ACL Anthology, the permanent one-stop archive of ACL papers: https://www.aclweb.org/anthology/

Best Paper

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin and Sameer Singh

Honorable Mention Papers – Main Conference

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith
Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
Nitika Mathur, Timothy Baldwin and Trevor Cohn

Best Theme Paper

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
Emily M. Bender and Alexander Koller

Honorable Mention Paper – Theme

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Tal Linzen

Best Demonstration Paper

GAIA: A Fine-grained Multimedia Knowledge Extraction System
Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski and Marjorie Freedman

Honorable Mention Papers – Demonstrations

Torch-Struct: Deep Structured Prediction Library
Alexander Rush
Prta: A System to Support the Analysis of Propaganda Techniques in the News
Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño and Preslav Nakov

Papers by Category (directions I'm interested in, grouped by title; work in progress)

Pre-training / Language Models

Adaptive Compression of Word Embeddings
Yeachan Kim, Kang-Min Kim and SangKeun Lee
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov and Luke Zettlemoyer
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
Timo Schick and Hinrich Schütze
CluBERT: A Cluster-Based Approach for Learning Sense Distributions in Multiple Languages
Tommaso Pasini, Federico Scozzafava and Bianca Scarlini
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith
Emerging Cross-lingual Structure in Pretrained Language Models
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer and Veselin Stoyanov
Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
Wenyu Du, Zhouhan Lin, Yikang Shen, Timothy J. O’Donnell, Yoshua Bengio and Yue Zhang
Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning
Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon and Kyomin Jung
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han and Chenliang Li
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
Dan Iter, Kelvin Guu, Larry Lansing and Dan Jurafsky
Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment
Forrest Davis and Marten van Schijndel
Roles and Utilization of Attention Heads in Transformer-based Neural Language Models
Jae-young Jo and Sung-Hyon Myaeng
Unsupervised Domain Clusters in Pretrained Language Models
Roee Aharoni and Yoav Goldberg
A Two-Stage Masked LM Method for Term Set Expansion
Guy Kushilevitz, Shaul Markovitch and Yoav Goldberg
Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods
Ning Miao, Yuxuan Song, Hao Zhou and Lei Li
Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention
Yanzeng Li, Bowen Yu, Xue Mengge and Tingwen Liu
Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs
Hong-You Chen, Sz-Han Yu and Shou-de Lin
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Nora Kassner and Hinrich Schütze
Overestimation of Syntactic Representation in Neural Language Models
Jordan Kodner and Nitish Gupta
Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan and Dawn Song
Stolen Probability: A Structural Weakness of Neural Language Models
David Demeter, Gregory Kimmel and Doug Downey
To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks
Sinong Wang, Madian Khabsa and Hao Ma

Information Extraction

A Joint Neural Model for Information Extraction with Global Features
Ying Lin, Heng Ji, Fei Huang and Lingfei Wu
Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation
Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling and Yan Song
Discourse-Aware Neural Extractive Text Summarization
Jiacheng Xu, Zhe Gan, Yu Cheng and Jingjing Liu
Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction
Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova and Katja Markert
Effective Inter-Clause Modeling for End-to-End Emotion-Cause Pair Extraction
Penghui Wei, Jiahao Zhao and Wenji Mao
Extractive Summarization as Text Matching
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu and Xuanjing Huang
Heterogeneous Graph Neural Networks for Extractive Document Summarization
Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu and Xuanjing Huang
IMoJIE: Iterative Memory-Based Joint Open Information Extraction
Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam and Soumen Chakrabarti
Representation Learning for Information Extraction from Form-like Documents
Bodhisattwa Prasad Majumder, Navneet Potti, Sandeep Tata, James Bradley Wendt, Qi Zhao and Marc Najork
SciREX: A Challenge Dataset for Document-Level Information Extraction
Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi and Iz Beltagy
Transition-based Directed Graph Construction for Emotion-Cause Pair Extraction
Chuang Fan, Chaofa Yuan, Jiachen Du, Lin Gui, Min Yang and Ruifeng Xu

Relation Extraction

A Novel Cascade Binary Tagging Framework for Relational Triple Extraction
Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian and Yi Chang
Dialogue-Based Relation Extraction
Dian Yu, Kai Sun, Claire Cardie and Dong Yu
Exploiting the Syntax-Model Consistency for Neural Relation Extraction
Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou and Thien Huu Nguyen
In Layman’s Terms: Semi-Open Relation Extraction from Scientific Texts
Ruben Kruiper, Julian Vincent, Jessica Chen-Burger, Marc Desmulliez and Ioannis Konstas
Probing Linguistic Features of Sentence-Level Representations in Relation Extraction
Christoph Alt, Aleksandra Gabryszak and Leonhard Hennig
Reasoning with Latent Structure Refinement for Document-Level Relation Extraction
Guoshun Nan, Zhijiang Guo, Ivan Sekulic and Wei Lu
Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents
Daoyuan Chen, Yaliang Li, Kai Lei and Ying Shen
ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages
Colin Lockard, Prashant Shiralkar, Xin Luna Dong and Hannaneh Hajishirzi
Relation Extraction with Explanation
Hamed Shahbazi, Xiaoli Fern, Reza Ghaeini and Prasad Tadepalli
Revisiting Unsupervised Relation Extraction
Thy Thy Tran, Phong Le and Sophia Ananiadou

Event Extraction

Cross-media Structured Common Space for Multimedia Event Extraction
Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji and Shih-Fu Chang
Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding
Xinya Du and Claire Cardie
Improving Event Detection via Open-domain Trigger Knowledge
Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li and Jun Xie
A Two-Step Approach for Implicit Event Argument Detection
Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma and Eduard Hovy
Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation
Aakanksha Naik and Carolyn Rose

Text Generation (Non-dialogue)

A Generative Model for Joint Natural Language Understanding and Generation
Bo-Hsiang Tseng, Jianpeng Cheng, Yimai Fang and David Vandyke
A Study of Non-autoregressive Model for Sequence Generation
Yi Ren, Jinglin Liu, Xu Tan, Zhou Zhao, Sheng Zhao and Tie-Yan Liu
Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study
Xinyu Xing, Xiaosheng Fan and Xiaojun Wan
Automatic Poetry Generation from Prosaic Text
Tim Van de Cruys
BLEURT: Learning Robust Metrics for Text Generation
Thibault Sellam, Dipanjan Das and Ankur Parikh
Bridging the Structural Gap Between Encoding and Decoding for Data-To-Text Generation
Chao Zhao, Marilyn Walker and Snigdha Chaturvedi
Cross-modal Coherence Modeling for Caption Generation
Malihe Alikhani, Piyush Sharma, Shengjie Li, Radu Soricut and Matthew Stone
Cross-modal Language Generation using Pivot Stabilization for Web-scale Language Coverage
Ashish V. Thapliyal and Radu Soricut
Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event
Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang and Lu Wang
Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction
Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova and Katja Markert
Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder
Daya Guo, Duyu Tang, Nan Duan, Jian Yin, Daxin Jiang and Ming Zhou
Improved Natural Language Generation via Loss Truncation
Daniel Kang and Tatsunori Hashimoto
Improving Adversarial Text Generation by Modeling the Distant Future
Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Dinghan Shen, Guoyin Wang, Zheng Wen and Lawrence Carin
Logical Natural Language Generation from Open-Domain Tables
Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen and William Yang Wang
Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence
Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu and Dietrich Klakow
Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints
Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu and Changyou Chen
Few-Shot NLG with Pre-Trained Language Model
Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu and William Yang Wang
GPT-too: A language-model-first approach for AMR-to-text generation
Manuel Mager, Ramón Fernandez Astudillo, Tahira Naseem, Md Arafat Sultan, Young-Suk Lee, Radu Florian and Salim Roukos
Simple and Effective Retrieve-Edit-Rerank Text Generation
Nabil Hossain, Marjan Ghazvininejad and Luke Zettlemoyer
Two Birds, One Stone: A Simple, Unified Model for Text Generation from Structured and Unstructured Data
Hamidreza Shahidi, Ming Li and Jimmy Lin

Fundamental Tasks (mostly NER)

A Joint Model for Document Segmentation and Segment Labeling
Joe Barrow, Rajiv Jain, Vlad Morariu, Varun Manjunatha, Douglas Oard and Philip Resnik
A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages
Pedro Javier Ortiz Suárez, Laurent Romary and Benoît Sagot
A Unified MRC Framework for Named Entity Recognition
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu and Jiwei Li
An Effective Transition-based Model for Discontinuous NER
Xiang Dai, Sarvnaz Karimi, Ben Hachey and Cecile Paris
Bipartite Flat-Graph Network for Nested Named Entity Recognition
Ying Luo and Hai Zhao
Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information
Michele Bevilacqua and Roberto Navigli
Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation
Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang and Haitao Zheng
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
Hao Fei, Meishan Zhang and Donghong Ji
Improving Chinese Word Segmentation with Wordhood Memory Networks
Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang and Yonggang Wang
Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge
Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang and Yonggang Wang
Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling
Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu and Xiang Ren
Multi-Cell Compositional LSTM for NER Domain Adaptation
Chen Jia and Yue Zhang
Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference
Jing Wang, Mayank Kulkarni and Daniel Preotiuc-Pietro
Named Entity Recognition without Labelled Data: A Weak Supervision Approach
Pierre Lison, Jeremy Barnes, Aliaksandr Hubin and Samia Touileb
NAT: Noise-Aware Training for Robust Neural Sequence Labeling
Marcin Namysl, Sven Behnke and Joachim Köhler
Pyramid: A Layered Model for Nested Named Entity Recognition
Jue Wang, Lidan Shou, Ke Chen and Gang Chen
SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling
Luoxin Chen, Weitong Ruan, Xinyue Liu and Jianhua Lu
Simplify the Usage of Lexicon in Chinese NER
Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei and Xuanjing Huang
Single-/Multi-Source Cross-Lingual NER via Teacher-Student Learning on Unlabeled Data in Target Language
Qianhui Wu, Zijia Lin, Börje Karlsson, Jian-Guang Lou and Biqing Huang
Sources of Transfer in Multilingual Named Entity Recognition
David Mueller, Nicholas Andrews and Mark Dredze
Structured Tuning for Semantic Role Labeling
Tao Li, Parth Anand Jawale, Martha Palmer and Vivek Srikumar
Structure-Level Knowledge Distillation For Multilingual Sequence Labeling
Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Fei Huang and Kewei Tu
Temporally-Informed Analysis of Named Entity Recognition
Shruti Rijhwani and Daniel Preotiuc-Pietro
Bayesian Hierarchical Words Representation Learning
Oren Barkan, Idan Rejwan, Avi Caciularu and Noam Koenigstein
FLAT: Chinese NER Using Flat-Lattice Transformer
Xiaonan Li, Hang Yan, Xipeng Qiu and Xuanjing Huang
Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling
Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied and Lidong Bing
Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno and Kentaro Inui
Low Resource Sequence Tagging using Sentence Reconstruction
Tal Perl, Sriram Chaudhury and Raja Giryes
Named Entity Recognition as Dependency Parsing
Juntao Yu, Bernd Bohnet and Massimo Poesio
Soft Gazetteers for Low-Resource Named Entity Recognition
Shruti Rijhwani, Shuyan Zhou, Graham Neubig and Jaime Carbonell
TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition
Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar and Xiang Ren

Knowledge Graphs

Connecting Embeddings for Knowledge Graph Entity Typing
Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu and Xiaojie Wang
Knowledge Graph Embedding Compression
Mrinmaya Sachan
Low-Dimensional Hyperbolic Knowledge Graph Embeddings
Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi and Christopher Ré
Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding
Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He and Bowen Zhou
ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding
SEEK: Segmented Embedding of Knowledge Graphs
Wentao Xu, Shun Zheng, Liang He, Bin Shao, Jian Yin and Tie-Yan Liu
A Re-evaluation of Knowledge Graph Completion Methods
Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar and Yiming Yang

Graph Convolutional Networks

Aligned Dual Channel Graph Convolutional Network for Visual Question Answering
Qingbao Huang, Jielong Wei, Yi Cai, Changmeng Zheng, Junying Chen, Ho-fung Leung and Qing Li
Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics
Guy Emerson
Integrating Semantic and Structural Information with Graph Convolutional Network for Controversy Detection
Lei Zhong, Juan Cao, Qiang Sheng, Junbo Guo and Ziang Wang
Syntax-Aware Opinion Role Labeling with Dependency Graph Convolutional Networks
Bo Zhang, Yue Zhang, Rui Wang, Zhenghua Li and Min Zhang

MAY BE USEFUL

Aspect Sentiment Classification with Document-level Sentiment Preference Modeling
Xiao Chen, Changlong Sun, Jingjing Wang, Shoushan Li, Luo Si, Min Zhang and Guodong Zhou
Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences
Xiangyu Duan, Baijun Ji, Hao Jia, Min Tan, Min Zhang, Boxing Chen, Weihua Luo and Yue Zhang
BPE-Dropout: Simple and Effective Subword Regularization
Ivan Provilkov, Dmitrii Emelianenko and Elena Voita
Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting
Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu and Tiejun Zhao
Dice Loss for Data-imbalanced NLP Tasks
Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu and Jiwei Li
Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks
Fynn Schröder and Chris Biemann
Good-Enough Compositional Data Augmentation
Jacob Andreas
Handling Rare Entities for Neural Sequence Labeling
Yangming Li, Han Li, Kaisheng Yao and Xiaolong Li
Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders
Terra Blevins and Luke Zettlemoyer
PuzzLing Machines: A Challenge on Learning From Small Data
Gözde Gül Şahin, Yova Kementchedjhieva, Phillip Rust and Iryna Gurevych
Similarity Analysis of Contextual Word Representation Models
John Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi and James Glass
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao and Tuo Zhao
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka and Allyson Ettinger

SOUNDS INTERESTING

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin and Sameer Singh
ChartDialogs: Plotting from Natural Language Instructions
Yutong Shao and Ndapa Nakashole
Code and Named Entity Recognition in StackOverflow
Jeniya Tabassum, Mounica Maddela, Wei Xu and Alan Ritter
DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering
Qingqing Cao, Harsh Trivedi, Aruna Balasubramanian and Niranjan Balasubramanian
Do Neural Language Models Show Preferences for Syntactic Formalisms?
Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou and Joakim Nivre
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase and Mohit Bansal
Examining Citations of Natural Language Processing Literature
Saif M. Mohammad
Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen
Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu and Tat-Seng Chua
Highway Transformer: Self-Gating Enhanced Self-Attentive Networks
Yekun Chai, Shuo Jin and Xinwen Hou
How Does Selective Mechanism Improve Self-Attention Networks?
Xinwei Geng, Longyue Wang, Xing Wang, Bing Qin, Ting Liu and Zhaopeng Tu
Improving Disfluency Detection by Self-Training a Self-Attentive Model
Paria Jamshid Lou and Mark Johnson
Improving Transformer Models by Reordering their Sublayers
Ofir Press, Noah A. Smith and Omer Levy
Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann and Samuel R. Bowman
Language (technology) is power: The need to be explicit about NLP harms
Su Lin Blodgett, Solon Barocas, Hal Daumé III and Hanna Wallach
(Re)construing Meaning in NLP
Sean Trott, Tiago Timponi Torrent, Nancy Chang and Nathan Schneider
Semantic Scaffolds for Pseudocode-to-Code Generation
Ruiqi Zhong, Mitchell Stern and Dan Klein
Towards Transparent and Explainable Attention Models
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan and Balaraman Ravindran
When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?
Kenneth Joseph and Jonathan Morgan
A Tale of a Probe and a Parser
Rowan Hall Maudslay, Josef Valvoda, Tiago Pimentel, Adina Williams and Ryan Cotterell
Contextual Embeddings: When Are They Worth It?
Simran Arora, Avner May, Jian Zhang and Christopher Ré
Do Transformers Need Deep Long-Range Memory?
Jack Rae and Ali Razavi
Hypernymy Detection for Low-Resource Languages via Meta Learning
Changlong Yu, Jialong Han, Haisong Zhang and Wilfred Ng
MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs
Jifan Yu, Gan Luo, Tong Xiao, Qingyang Zhong, Yuquan Wang, Wenzheng Feng, Junyi Luo, Chenyu Wang, Lei Hou, Juanzi Li, Zhiyuan Liu and Jie Tang
On Forgetting to Cite Older Papers: An Analysis of the ACL Anthology
Marcel Bollmann and Desmond Elliott
Quantifying Attention Flow in Transformers
Samira Abnar and Willem Zuidema
Showing Your Work Doesn’t Always Work
Raphael Tang, Jaejun Lee, Ji Xin, Xinyu Liu, Yaoliang Yu and Jimmy Lin
You Don’t Have Time to Read This: An Exploration of Document Reading Time Prediction
Orion Weller, Jordan Hildebrandt, Ilya Reznik, Christopher Challis, E. Shannon Tass, Quinn Snell and Kevin Seppi

BERTology

FastBERT: a Self-distilling BERT with Adaptive Inference Time
Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng and Qi Ju
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang and Denny Zhou
schuBERT: Optimizing Elements of BERT
Ashish Khetan and Zohar Karnin
SenseBERT: Driving Some Sense into BERT
Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua and Yoav Shoham
Spelling Error Correction with Soft-Masked BERT
Shaohua Zhang, Haoran Huang, Jicong Liu and Hang Li
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu and Jimmy Lin
tBERT: Topic Models and BERT Joining Forces for Semantic Similarity Detection
Nicole Peinelt, Dong Nguyen and Maria Liakata
Understanding Advertisements with BERT
Kanika Kalra, Bhargav Kurma, Silpa Vadakkeeveetil Sreelatha, Manasi Patwardhan and Shirish Karande
Unsupervised FAQ Retrieval with Question Generation and BERT
Yosi Mass, Boaz Carmeli, Haggai Roitman and David Konopnicki
What Does BERT with Vision Look At?
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh and Kai-Wei Chang
