Collections including paper arxiv:2312.06550 (LLM360: Towards Fully Transparent Open-Source LLMs)
Attention Is All You Need
Paper • 1706.03762 • Published • 55

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Paper • 1810.04805 • Published • 17

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Paper • 1907.11692 • Published • 7

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Paper • 1910.01108 • Published • 14
-
A technical note on bilinear layers for interpretability
Paper • 2305.03452 • Published • 1

Interpreting Transformer's Attention Dynamic Memory and Visualizing the Semantic Information Flow of GPT
Paper • 2305.13417 • Published • 1

Explainable AI for Pre-Trained Code Models: What Do They Learn? When They Do Not Work?
Paper • 2211.12821 • Published • 2

The Linear Representation Hypothesis and the Geometry of Large Language Models
Paper • 2311.03658 • Published • 1
-
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series
Paper • 2405.19327 • Published • 49

LLM360/K2
Text Generation • Updated • 333 • 87

OLMo: Accelerating the Science of Language Models
Paper • 2402.00838 • Published • 83

LLM360: Towards Fully Transparent Open-Source LLMs
Paper • 2312.06550 • Published • 57
-
Self-Rewarding Language Models
Paper • 2401.10020 • Published • 147

ReFT: Reasoning with Reinforced Fine-Tuning
Paper • 2401.08967 • Published • 30

Tuning Language Models by Proxy
Paper • 2401.08565 • Published • 23

TrustLLM: Trustworthiness in Large Language Models
Paper • 2401.05561 • Published • 69
-
aMUSEd: An Open MUSE Reproduction
Paper • 2401.01808 • Published • 29

From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
Paper • 2401.01885 • Published • 28

SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity
Paper • 2401.00604 • Published • 6

LARP: Language-Agent Role Play for Open-World Games
Paper • 2312.17653 • Published • 32
-
togethercomputer/StripedHyena-Hessian-7B
Text Generation • Updated • 147 • 65

Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention
Paper • 2312.08618 • Published • 15

SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention
Paper • 2312.07987 • Published • 41

LLM360: Towards Fully Transparent Open-Source LLMs
Paper • 2312.06550 • Published • 57