LLaMA 2 paper
Study / Research
Mentioned to show how pervasive tokens are in model descriptions, and to motivate tokenizer training and coverage decisions (e.g., a tokenizer trained on a large corpus).
Mentioned in 2 videos