An index of my paper summaries

Neural Ranking Model (NRM) 

  • Ruiyang Ren et al., PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval, ACL 2021 (summarized 2021.12)
    • A Baidu paper.
    • Uses a query-centric loss and a passage-centric loss together (see the first sketch after this list).
    • The largest contribution comes from adding pseudo-labeled data generated with a cross-encoder.
  • Ruiyang Ren et al., RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking, EMNLP 2021 (summarized 2021.12)
    • Dynamic listwise distillation (see the second sketch after this list).
    • Adding in-batch negatives gives no further quality gain (no experimental results are reported for this).
  • Yingqi Qu et al., RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering, NAACL 2021 (summarized 2021.12)
    • Uses cross-batch negatives, denoised hard negatives, and data augmentation (cross-batch negatives are sketched after this list).
    • Denoising the hard negatives contributes the most.
  • Sheng-Chieh Lin et al., Distilling Dense Representations for Ranking using Tightly-Coupled Teachers, arXiv:2010.11386 (summarized 2021.10)
    • The TCT-ColBERT paper.
  • Jingtao Zhan et al., Optimizing Dense Retrieval Model Training with Hard Negatives, SIGIR 2021 (summarized 2021.10)
    • The STAR + ADORE paper.
    • Solves the cost of mining dynamic hard negatives at every step: the document encoder is frozen and only the query encoder is updated (see the last sketch after this list).
  • Vladimir Karpukhin et al., Dense Passage Retrieval for Open-Domain Question Answering, EMNLP 2020 (summarized 2021.09)
  • Luyu Gao and Jamie Callan, Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval, arXiv:2108.05540 (summarized 2021.09)
  • Yiding Liu et al., Pre-trained Language Model for Web-scale Retrieval in Baidu Search, KDD 2021 (summarized 2021.08)
  • Lee Xiong et al., Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval, arXiv:2007.00808 (summarized 2021.08)
  • Prafull Prakash et al., Learning Robust Dense Retrieval Models from Incomplete Relevance Labels, SIGIR 2021 (summarized 2021.08)
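PAIR's two losses above combine naturally in one training objective. A minimal PyTorch sketch, assuming batched query, positive-passage, and hard-negative embeddings; the function name, `alpha` mixing weight, and other identifiers are illustrative, not from the paper's code:

```python
import torch
import torch.nn.functional as F

def pair_loss(q, p_pos, p_neg, alpha=0.5):
    """Combined query-centric + passage-centric loss (PAIR-style sketch).

    q, p_pos, p_neg: (B, d) embeddings of queries, positive passages,
    and hard-negative passages. `alpha` is an illustrative mixing weight.
    """
    s_q_pos = (q * p_pos).sum(-1)      # sim(query, positive passage)
    s_q_neg = (q * p_neg).sum(-1)      # sim(query, negative passage)
    s_p_neg = (p_pos * p_neg).sum(-1)  # sim(positive, negative passage)
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)

    # Query-centric: from the query's view, the positive passage
    # should outscore the hard negative.
    loss_q = F.cross_entropy(torch.stack([s_q_pos, s_q_neg], dim=1), targets)
    # Passage-centric: from the positive passage's view, the query
    # should be closer than the hard-negative passage is.
    loss_p = F.cross_entropy(torch.stack([s_q_pos, s_p_neg], dim=1), targets)
    return alpha * loss_q + (1 - alpha) * loss_p
```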
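RocketQAv2's dynamic listwise distillation boils down to matching the retriever's distribution over a shared candidate list to the re-ranker's, with both models trained jointly so the teacher signal keeps adapting. A sketch of the core loss (names are illustrative):

```python
import torch
import torch.nn.functional as F

def listwise_distillation_loss(retriever_scores, reranker_scores):
    """KL divergence between the re-ranker's and the retriever's
    listwise distributions over the same candidates.

    retriever_scores: (B, L) dual-encoder scores for L candidates per query
    reranker_scores:  (B, L) cross-encoder scores for the same candidates
    """
    # Normalize both score lists into distributions over the candidates.
    log_p_retriever = F.log_softmax(retriever_scores, dim=-1)
    p_reranker = F.softmax(reranker_scores, dim=-1)

    # Pushes the retriever's distribution toward the re-ranker's; since
    # both models are optimized jointly in RocketQAv2, the teacher side
    # also keeps moving ("dynamic").
    return F.kl_div(log_p_retriever, p_reranker, reduction="batchmean")
```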
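RocketQA's cross-batch negatives extend in-batch negatives across GPUs: every query scores against the passages from all workers, not just its own batch. A sketch assuming an already-initialized torch.distributed process group:

```python
import torch
import torch.distributed as dist

def cross_batch_scores(q, p):
    """In-batch negatives extended across GPUs (cross-batch negatives).

    q: (B, d) query embeddings on this GPU
    p: (B, d) passage embeddings on this GPU
    Returns a (B, B * world_size) score matrix; only the matching
    column on the local rank holds a positive passage.
    """
    world_size = dist.get_world_size()
    # Collect passage embeddings from every GPU. all_gather does not
    # carry gradients, so re-insert the local tensor to keep them.
    gathered = [torch.zeros_like(p) for _ in range(world_size)]
    dist.all_gather(gathered, p)
    gathered[dist.get_rank()] = p
    all_p = torch.cat(gathered, dim=0)  # (B * world_size, d)
    return q @ all_p.t()                # (B, B * world_size)
```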
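The STAR + ADORE trick is cheap because the corpus is embedded once with the frozen document encoder; only queries are re-encoded during training, so fresh hard negatives can be mined at every step. A rough sketch in which brute-force scoring stands in for the paper's ANN index, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def adore_step(query_encoder, queries, pos_ids, doc_embs, k=8):
    """One ADORE-style training step (sketch, not the paper's code).

    doc_embs: (N, d) passage embeddings precomputed once with the frozen
    document encoder. Only the query encoder receives gradients, so
    dynamic hard negatives never require re-encoding the corpus.
    """
    q = query_encoder(queries)             # (B, d), the trainable side
    scores = q @ doc_embs.t()              # (B, N); an ANN index in practice
    with torch.no_grad():
        # Mask the gold passage, then take the current top-k scorers
        # as this step's hard negatives.
        mined = scores.clone()
        mined.scatter_(1, pos_ids.unsqueeze(1), float("-inf"))
        neg_ids = mined.topk(k, dim=-1).indices          # (B, k)
    pos_scores = scores.gather(1, pos_ids.unsqueeze(1))  # (B, 1)
    neg_scores = scores.gather(1, neg_ids)               # (B, k)
    logits = torch.cat([pos_scores, neg_scores], dim=1)  # (B, 1 + k)
    targets = torch.zeros(logits.size(0), dtype=torch.long,
                          device=logits.device)
    return F.cross_entropy(logits, targets)  # positive sits at index 0
```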

Pre-trained Language Model (PLM) 

  • Kevin Clark et al., ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, ICLR 2020 (summarized 2021.10; see the sketch below)
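For reference, ELECTRA's discriminator objective (replaced token detection) in sketch form: a small generator fills the masked positions, and the encoder is trained to classify every token as original or replaced. The names and the greedy sampling are illustrative simplifications:

```python
import torch
import torch.nn.functional as F

def electra_rtd_loss(generator, discriminator, tokens, mask, mask_id):
    """Replaced-token-detection sketch (names are illustrative).

    tokens: (B, T) token ids; mask: (B, T) bool, True where to corrupt.
    """
    # A small generator fills the masked positions with plausible tokens.
    with torch.no_grad():
        masked_input = tokens.masked_fill(mask, mask_id)
        gen_logits = generator(masked_input)        # (B, T, V)
        sampled = gen_logits.argmax(-1)             # greedy stand-in for sampling
    corrupted = torch.where(mask, sampled, tokens)  # (B, T)

    # The discriminator classifies every token: original or replaced?
    disc_logits = discriminator(corrupted).squeeze(-1)  # (B, T)
    labels = (corrupted != tokens).float()              # 1 = replaced
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```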

Summarization

  • Zi-Yi Dou et al., GSum: A General Framework for Guided Neural Abstractive Summarization, NAACL 2021 (summarized 2022.08)
  • Yixin Liu and Pengfei Liu, SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization, ACL 2021 (summarized 2022.08)

Datasets

Miscellaneous (etc.)
