- Magnitude Invariant Parametrizations Improve Hypernetwork Learning
  Paper • 2304.07645 • Published • 1
- HyperShot: Few-Shot Learning by Kernel HyperNetworks
  Paper • 2203.11378 • Published • 1
- Hypernetworks for Zero-shot Transfer in Reinforcement Learning
  Paper • 2211.15457 • Published • 1
- Continual Learning with Dependency Preserving Hypernetworks
  Paper • 2209.07712 • Published • 1
Collections including paper arxiv:2205.12148
- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
  Paper • 2310.08659 • Published • 27
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 44
- ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
  Paper • 2309.16119 • Published • 1
- LoRA ensembles for large language model fine-tuning
  Paper • 2310.00035 • Published • 2
- Dissecting In-Context Learning of Translations in GPTs
  Paper • 2310.15987 • Published • 6
- Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
  Paper • 2309.08958 • Published • 2
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages
  Paper • 2305.04160 • Published • 2
- Ziya-VL: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning
  Paper • 2310.08166 • Published • 1