As discussed in more depth in the previous chapter, CLIP consists of two models. The version of CLIP we use here pairs a text transformer for encoding text embeddings with a vision transformer (ViT) for encoding image embeddings.
Both CLIP models are optimized during pretraining to align similar text and images in vector space. They do this by taking image-text pairs and pulling their output vectors closer together in vector space while pushing apart the vectors of non-matching pairs.
CLIP distinguishes itself from typical classification models for several reasons. First, OpenAI trained it on a huge dataset of 400 million text-image pairs scraped from across the internet.
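To make this shared embedding space concrete, here is a minimal sketch using the openai/clip-vit-base-patch32 checkpoint from the Hugging Face transformers library; the image path "puppy.png" and the captions are placeholders, so substitute your own image and text.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load the pretrained CLIP checkpoint: a text transformer plus a ViT image encoder.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# "puppy.png" is a placeholder path; use any local image.
image = Image.open("puppy.png")
captions = ["a photo of a puppy", "a photo of a car"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    # Encode both modalities into the same embedding space.
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])

# Normalize and compare with cosine similarity.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
print(image_emb @ text_emb.T)
```

The matching caption should receive the highest similarity score, mirroring the contrastive objective described above.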
T5 Model: A Large-Scale Exploration of Text-to-Text Pretraining for NLP - article by Andy Yang on Zhihu: https://zhuanlan.zhihu.com/p/88438851
The T5 model and how it is trained (see the sketch after this list):
- a Transformer encoder-decoder model;
- a BERT-style corruption objective;
- a Replace Span corruption strategy;
- a 15% corruption rate;
- a corrupted span length of 3.
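To illustrate what this objective looks like in practice, below is a simplified, hypothetical sketch of replace-span corruption: the function t5_span_corrupt, its fixed span length, and the example sentence are illustrative choices rather than the exact T5 preprocessing code.

```python
import random


def t5_span_corrupt(tokens, corruption_rate=0.15, mean_span_length=3, seed=0):
    """Simplified replace-span corruption: drop roughly corruption_rate of the
    tokens in contiguous spans, put a unique sentinel in their place, and emit
    the dropped spans (each prefixed by its sentinel) as the decoder target."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * corruption_rate))
    n_spans = max(1, round(n_mask / mean_span_length))

    # Pick span start positions; overlaps are simply skipped in this sketch.
    starts = sorted(rng.sample(range(len(tokens) - mean_span_length), n_spans))

    inputs, targets, pos, sentinel = [], [], 0, 0
    for start in starts:
        if start < pos:
            continue  # overlapping span, skip it in this simplified version
        inputs.extend(tokens[pos:start])            # keep the uncorrupted prefix
        inputs.append(f"<extra_id_{sentinel}>")     # sentinel replaces the span
        targets.append(f"<extra_id_{sentinel}>")
        targets.extend(tokens[start:start + mean_span_length])  # dropped tokens
        pos = start + mean_span_length
        sentinel += 1
    inputs.extend(tokens[pos:])
    targets.append(f"<extra_id_{sentinel}>")        # final sentinel closes the target
    return " ".join(inputs), " ".join(targets)


sentence = "Thank you for inviting me to your party last week".split()
corrupted_input, target = t5_span_corrupt(sentence)
print(corrupted_input)  # the sentence with a span replaced by a sentinel token
print(target)           # the sentinel(s) followed by the tokens they replaced
```

The encoder sees the corrupted input while the decoder learns to reproduce the dropped spans, which is what the BERT-style, replace-span objective in the list above refers to.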