- A technical note on bilinear layers for interpretability (Paper • 2305.03452 • Published • 1)
- Interpreting Transformer's Attention Dynamic Memory and Visualizing the Semantic Information Flow of GPT (Paper • 2305.13417 • Published • 1)
- Explainable AI for Pre-Trained Code Models: What Do They Learn? When They Do Not Work? (Paper • 2211.12821 • Published • 2)
- The Linear Representation Hypothesis and the Geometry of Large Language Models (Paper • 2311.03658 • Published • 1)
Collections including paper arxiv:2401.06102
- AtP*: An efficient and scalable method for localizing LLM behaviour to components (Paper • 2403.00745 • Published • 14)
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (Paper • 2402.17764 • Published • 610)
- MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT (Paper • 2402.16840 • Published • 26)
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens (Paper • 2402.13753 • Published • 115)
- A Close Look at Decomposition-based XAI-Methods for Transformer Language Models (Paper • 2502.15886 • Published • 1)
- We Can't Understand AI Using our Existing Vocabulary (Paper • 2502.07586 • Published • 10)
- Position-aware Automatic Circuit Discovery (Paper • 2502.04577 • Published • 1)
- Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution (Paper • 2501.18887 • Published • 1)
- Understanding LLMs: A Comprehensive Overview from Training to Inference (Paper • 2401.02038 • Published • 64)
- DocLLM: A layout-aware generative language model for multimodal document understanding (Paper • 2401.00908 • Published • 180)
- LLaMA Beyond English: An Empirical Study on Language Capability Transfer (Paper • 2401.01055 • Published • 54)
- LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (Paper • 2401.01325 • Published • 27)
- The LLM Surgeon (Paper • 2312.17244 • Published • 9)
- TrustLLM: Trustworthiness in Large Language Models (Paper • 2401.05561 • Published • 69)
- Patchscope: A Unifying Framework for Inspecting Hidden Representations of Language Models (Paper • 2401.06102 • Published • 22)
- Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing (Paper • 2407.08770 • Published • 21)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (Paper • 2311.03285 • Published • 32)
- Tailoring Self-Rationalizers with Multi-Reward Distillation (Paper • 2311.02805 • Published • 7)
- Ultra-Long Sequence Distributed Transformer (Paper • 2311.02382 • Published • 6)
- OpenChat: Advancing Open-source Language Models with Mixed-Quality Data (Paper • 2309.11235 • Published • 15)