- McGill-NLP/LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-supervised
  Sentence Similarity • Updated • 137 • 4
- McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised
  Sentence Similarity • Updated • 11.6k • 48
- McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised
  Sentence Similarity • Updated • 975 • 13
- McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-supervised
  Sentence Similarity • Updated • 332 • 3
Collections
Collections including paper arxiv:2404.05961
- Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
  Paper • 2310.04406 • Published • 9
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 105
- ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
  Paper • 2402.09320 • Published • 6
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 115

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 90
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 65
- Compression Represents Intelligence Linearly
  Paper • 2404.09937 • Published • 27
- Multi-Head Mixture-of-Experts
  Paper • 2404.15045 • Published • 60

- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
  Paper • 2401.01967 • Published
- Secrets of RLHF in Large Language Models Part I: PPO
  Paper • 2307.04964 • Published • 29
- Zephyr: Direct Distillation of LM Alignment
  Paper • 2310.16944 • Published • 123
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 65

- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
  Paper • 2404.05961 • Published • 65
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
  Paper • 2404.07143 • Published • 107
- Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies
  Paper • 2404.08197 • Published • 29
- Pre-training Small Base LMs with Fewer Tokens
  Paper • 2404.08634 • Published • 35