Byte Latent Transformer: Patches Scale Better Than Tokens

This post is part of an LLM paper series and is a translation of "Byte Latent Transformer: Patches Scale Better Than Tokens".

Abstract

We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale, with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where data complexity increases. We present the first FLOP-controlled scaling study of byte-level models, up to 8B parameters and 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve, owing to the dynamic selection of long patches where the data is predictable, along with qualitative improvements in reasoning and long-tail generalization. Overall, for a fixed inference cost, BLT shows significantly better scaling than tokenization-based models by simultaneously growing both patch and model size.
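To make the patching rule concrete, below is a minimal sketch of global-threshold, entropy-based segmentation: a new patch begins wherever the estimated entropy of the next byte exceeds a threshold. BLT estimates these entropies with a small byte-level language model; the bigram-count estimator and the threshold value used here are illustrative stand-ins, not the paper's setup.

```python
# Minimal sketch of entropy-based patch segmentation (illustrative only).
# Assumption: a bigram-count model stands in for BLT's small byte-level LM,
# and `threshold` is an arbitrary example value, not the paper's.
import math
from collections import Counter, defaultdict


def next_byte_entropies(data: bytes) -> list[float]:
    """Estimate H(next byte | current byte) at each position from bigram counts."""
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(data, data[1:]):
        bigrams[prev][nxt] += 1

    entropies = [0.0]  # position 0 always opens a patch
    for i in range(1, len(data)):
        counts = bigrams[data[i - 1]]
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies


def segment_into_patches(data: bytes, threshold: float = 1.5) -> list[bytes]:
    """Start a new patch wherever the estimated next-byte entropy exceeds the threshold."""
    entropies = next_byte_entropies(data)
    patches, start = [], 0
    for i in range(1, len(data)):
        if entropies[i] > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches


if __name__ == "__main__":
    text = b"the quick brown fox jumps over the lazy dog. the quick brown fox."
    for patch in segment_into_patches(text):
        print(patch)
```

Predictable stretches (low entropy) merge into long patches, while surprising bytes open new, short patches, which is how compute gets concentrated where the data is complex.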

1 Introduction

2 Patching: From Individual Bytes to Groups of Bytes

3 BLT Architecture

4 Experimental Setup

5 Scaling Trends

6 Byte Modeling Improves Robustness

### Vision Transformer Code Implementation

Vision Transformer (ViT) is a Transformer-based model for image data. Below is a simplified ViT implementation:

```python
import torch
from torch import nn, einsum
from einops import rearrange, repeat
from einops.layers.torch import Rearrange


class PreNorm(nn.Module):
    """Applies LayerNorm before the wrapped module."""
    def __init__(self, dim, fn):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fn = fn

    def forward(self, x, **kwargs):
        return self.fn(self.norm(x), **kwargs)


class FeedForward(nn.Module):
    """Two-layer MLP with GELU activation."""
    def __init__(self, dim, hidden_dim, dropout=0.):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return self.net(x)


class Attention(nn.Module):
    """Multi-head self-attention."""
    def __init__(self, dim, heads=8, dim_head=64, dropout=0.):
        super().__init__()
        inner_dim = dim_head * heads
        project_out = not (heads == 1 and dim_head == dim)

        self.heads = heads
        self.scale = dim_head ** -0.5
        self.attend = nn.Softmax(dim=-1)
        self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False)
        self.to_out = nn.Sequential(
            nn.Linear(inner_dim, dim),
            nn.Dropout(dropout),
        ) if project_out else nn.Identity()

    def forward(self, x):
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=self.heads), qkv)

        dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale
        attn = self.attend(dots)

        out = einsum('b h i j, b h j d -> b h i d', attn, v)
        out = rearrange(out, 'b h n d -> b n (h d)')
        return self.to_out(out)


class Transformer(nn.Module):
    """Stack of pre-norm attention and feed-forward blocks with residual connections."""
    def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout=0.):
        super().__init__()
        self.layers = nn.ModuleList([])
        for _ in range(depth):
            self.layers.append(nn.ModuleList([
                PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout)),
                PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)),
            ]))

    def forward(self, x):
        for attn, ff in self.layers:
            x = attn(x) + x   # residual connection around attention
            x = ff(x) + x     # residual connection around the MLP
        return x


class ViT(nn.Module):
    def __init__(self, *, image_size, patch_size, num_classes, dim, depth, heads,
                 mlp_dim, pool='cls', channels=3, dim_head=64, dropout=0., emb_dropout=0.):
        super().__init__()
        assert image_size % patch_size == 0, "Image dimensions must be divisible by the patch size."
        assert pool in ('cls', 'mean'), "pool must be either 'cls' or 'mean'."

        num_patches = (image_size // patch_size) ** 2
        patch_dim = channels * patch_size ** 2

        # Split the image into non-overlapping patches and project each to `dim`.
        self.patch_embedding = nn.Sequential(
            Rearrange('b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=patch_size, p2=patch_size),
            nn.Linear(patch_dim, dim),
        )

        self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim))
        self.cls_token = nn.Parameter(torch.randn(1, 1, dim))
        self.dropout = nn.Dropout(emb_dropout)

        self.transformer = Transformer(dim, depth, heads, dim_head, mlp_dim, dropout)

        self.pool = pool
        self.to_latent = nn.Identity()
        self.mlp_head = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, num_classes),
        )

    def forward(self, img):
        patches = self.patch_embedding(img)

        # Prepend a learnable [CLS] token and add positional embeddings.
        cls_tokens = repeat(self.cls_token, '() n d -> b n d', b=img.shape[0])
        tokens = torch.cat((cls_tokens, patches), dim=1)
        embeddings = tokens + self.pos_embedding[:, :tokens.size(1)]
        dropped_embeddings = self.dropout(embeddings)

        transformed_features = self.transformer(dropped_embeddings)

        # Pool either the mean of all tokens or the [CLS] token.
        if self.pool == "mean":
            output = transformed_features.mean(dim=1)
        else:
            output = transformed_features[:, 0]

        final_output = self.to_latent(output)
        class_logits = self.mlp_head(final_output)
        return class_logits
```
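For completeness, here is a hypothetical usage of the sketch above: classifying a batch of two 256x256 RGB images into 1000 classes. All hyperparameters are illustrative, not tuned values.

```python
# Hypothetical usage of the ViT sketch above (all hyperparameters illustrative).
model = ViT(
    image_size=256, patch_size=32, num_classes=1000,
    dim=512, depth=6, heads=8, mlp_dim=1024,
    dropout=0.1, emb_dropout=0.1,
)
images = torch.randn(2, 3, 256, 256)  # (batch, channels, height, width)
logits = model(images)                # shape: (2, 1000)
print(logits.shape)
```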