Awesome Question Answering 
A curated list of resources on __Question Answering (QA)__, a computer science discipline within the fields of information retrieval and natural language processing (NLP) that increasingly relies on machine learning and deep learning.
Contents
- Recent Trends
- About QA
- Events
- Systems
- Competitions in QA
- Publications
- Codes
- Lectures
- Slides
- Dataset Collections
- Datasets
- Books
- Links
Recent Trends
Recent QA Models
- DilBert: Delaying Interaction Layers in Transformer-based Encoders for Efficient Open Domain Question Answering (2020)
- paper: https://arxiv.org/pdf/2010.08422.pdf
- github: https://github.com/wissam-sib/dilbert
- UnifiedQA: Crossing Format Boundaries With a Single QA System (2020)
- Demo: https://unifiedqa.apps.allenai.org/
- ProQA: Resource-efficient method for pretraining a dense corpus index for open-domain QA and IR (2020)
- paper: https://arxiv.org/pdf/2005.00038.pdf
- github: https://github.com/xwhan/ProQA
- TYDI QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages (2020)
- paper: https://arxiv.org/ftp/arxiv/papers/2003/2003.05002.pdf
- Retrospective Reader for Machine Reading Comprehension (2020)
- paper: https://arxiv.org/pdf/2001.09694v2.pdf
- TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection (AAAI 2020)
- paper: https://arxiv.org/pdf/1911.04118.pdf
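
As a quick way to try one of the models above, the sketch below queries a UnifiedQA checkpoint through the Hugging Face `transformers` library. The checkpoint id `allenai/unifiedqa-t5-small` and the "question \n context" input format are assumptions based on the project's demo, not details taken from this list.

```python
# Minimal sketch: querying a UnifiedQA (T5-based) checkpoint for QA.
# Assumptions: the `transformers` library is installed and the checkpoint
# "allenai/unifiedqa-t5-small" is available on the Hugging Face hub.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "allenai/unifiedqa-t5-small"  # assumed checkpoint id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def answer(question: str, context: str) -> str:
    # UnifiedQA's demo joins question and context with a literal "\n",
    # lowercased (assumed input convention).
    text = question.lower() + " \\n " + context.lower()
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(answer("Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare."))
```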
Recent Language Models
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, Kevin Clark, et al., ICLR, 2020.
- TinyBERT: Distilling BERT for Natural Language Understanding, Xiaoqi Jiao, et al., ICLR, 2020.
- MINILM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers, Wenhui Wang, et al., arXiv, 2020.
- T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Colin Raffel, et al., arXiv preprint, 2019.
- ERNIE: Enhanced Language Representation with Informative Entities, Zhengyan Zhang, et al., ACL, 2019.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding, Zhilin Yang, et al., arXiv preprint, 2019.
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Zhenzhong Lan, et al., arXiv preprint, 2019.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach, Yinhan Liu, et al., arXiv preprint, 2019.
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, Victor Sanh, et al., arXiv, 2019.
- SpanBERT: Improving Pre-training by Representing and Predicting Spans, Mandar Joshi, et al., TACL, 2019.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin, et al., NAACL 2019, 2018.
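
Most of the encoders above are applied to QA by fine-tuning them to predict answer spans. The sketch below shows that extractive-QA usage via the Hugging Face `transformers` pipeline; the checkpoint name `distilbert-base-cased-distilled-squad` (a DistilBERT reader fine-tuned on SQuAD) is an assumption, and any QA-tuned encoder from the list could be substituted.

```python
# Minimal sketch: extractive QA with a pretrained transformer encoder.
# Assumption: the SQuAD-tuned checkpoint name below is available on the hub.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What are the text encoders pre-trained as?",
    context="ELECTRA pre-trains text encoders as discriminators rather than generators.",
)
print(result["answer"], result["score"])
```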
AAAI 2020
- TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection, Siddhant Garg, et al., AAAI 2020, Nov 2019.
ACL 2019
- Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering, Asma Ben Abacha, et al., ACL-W 2019, Aug 2019.
- Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications, Wei Zhao, et al., ACL 2019, Jun 2019.
- Cognitive Graph for Multi-Hop Reading Comprehension at Scale, Ming Ding, et al., ACL 2019, Jun 2019.
- Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index, Minjoon Seo, et al., ACL 2019, Jun 2019.
- Unsupervised Question Answering by Cloze Translation, Patrick Lewis, et al., ACL 2019, Jun 2019.
- SemEval-2019 Task 10: Math Question Answering, Mark Hopkins, et al., ACL-W 2019, Jun 2019.
- Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader, Wenhan Xiong, et al., ACL 2019, May 2019.
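
Several of the ACL 2019 papers above target open-domain QA, where a retriever first narrows the corpus and a reader then extracts the answer span. The sketch below illustrates that retriever-reader pattern with a TF-IDF retriever over a toy corpus and a SQuAD-tuned reader; the corpus, the scikit-learn retriever, and the checkpoint name are illustrative assumptions, not the methods of the papers themselves.

```python
# Minimal retriever-reader sketch: rank passages with TF-IDF, then read an
# answer span from the top-ranked passage with a SQuAD-tuned reader.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

corpus = [
    "SQuAD is a reading comprehension dataset built from Wikipedia articles.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "BERT is a bidirectional transformer pre-trained with masked language modeling.",
]

# Sparse retriever: TF-IDF cosine similarity between the question and passages.
vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)

# Reader: extracts an answer span from the retrieved passage (assumed checkpoint).
reader = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def open_domain_answer(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    best_passage = corpus[scores.argmax()]
    return reader(question=question, context=best_passage)["answer"]

print(open_domain_answer("Where is the Eiffel Tower located?"))
```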