Classic Introductory NLP Papers

1. Fundamentals

Word2Vec

Efficient Estimation of Word Representations in Vector Space

https://arxiv.org/abs/1301.3781v3
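
As a quick, hedged taste of what the paper's skip-gram and CBOW models do, here is a minimal training sketch using the gensim library (the library choice and the toy corpus are assumptions for illustration, not the paper's original C implementation):

```python
# Minimal Word2Vec sketch with gensim; toy corpus is made up for the example.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=1 selects the skip-gram model; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["cat"].shape)          # (50,) -- one dense vector per word
print(model.wv.most_similar("cat"))   # nearest words by cosine similarity
```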

Transformer

Attention Is All You Need

https://arxiv.org/abs/1706.03762
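
The paper's central operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of just that step; the shapes and variable names are illustrative assumptions, not the reference implementation:

```python
# Scaled dot-product attention, the core of the Transformer, in NumPy.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) matrices of queries, keys, and values.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # attention distribution over positions
    return weights @ V                  # weighted sum of value vectors

# Toy usage: 4 tokens, 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```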

BERT

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

https://arxiv.org/abs/1810.04805
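
As a hedged illustration, here is a minimal sketch of extracting contextual token embeddings from a pretrained BERT through the Hugging Face transformers library (a common stand-in; the paper's own code release is in TensorFlow):

```python
# Contextual embeddings from a pretrained BERT via Hugging Face transformers.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("NLP papers are worth reading.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual vector per (sub)word token.
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```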

ERNIE

ERNIE: Enhanced Representation through Knowledge Integration

https://arxiv.org/pdf/1904.09223

GPT

GPT-1: Improving Language Understanding by Generative Pre-Training

GPT-2: Language Models are Unsupervised Multitask Learners

GPT-3: Language Models are Few-Shot Learners

https://arxiv.org/abs/2005.14165
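
GPT-3's weights were never publicly released, so as a hedged stand-in, here is a minimal greedy text-generation sketch using the public GPT-2 checkpoint via Hugging Face transformers:

```python
# Greedy text generation with the public GPT-2 weights (stand-in for GPT-3).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Language models are", return_tensors="pt")
with torch.no_grad():
    # do_sample=False gives deterministic greedy decoding.
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```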

2. Advanced

RoBERTa

RoBERTa: A Robustly Optimized BERT Pretraining Approach

https://arxiv.org/abs/1907.11692
