Collections including paper arxiv:2005.14165

- Long-form factuality in large language models
  Paper • 2403.18802 • Published • 25
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12
- A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4
  Paper • 2310.12321 • Published • 1

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 2

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 38
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 245

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- ImageNet Large Scale Visual Recognition Challenge
  Paper • 1409.0575 • Published • 8
- Sequence to Sequence Learning with Neural Networks
  Paper • 1409.3215 • Published • 3
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12

- Understanding LLMs: A Comprehensive Overview from Training to Inference
  Paper • 2401.02038 • Published • 63
- The Impact of Reasoning Step Length on Large Language Models
  Paper • 2401.04925 • Published • 17
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 38
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50

- Mistral 7B
  Paper • 2310.06825 • Published • 47
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 21
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 38
- Efficient Estimation of Word Representations in Vector Space
  Paper • 1301.3781 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 9
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 73