
Quantization made by Richard Erkhov.

GitHub • Discord • Request more models

agentlm-13b - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| agentlm-13b.Q2_K.gguf | Q2_K | 4.52GB |
| agentlm-13b.Q3_K_S.gguf | Q3_K_S | 1.64GB |
| agentlm-13b.Q3_K.gguf | Q3_K | 5.9GB |
| agentlm-13b.Q3_K_M.gguf | Q3_K_M | 5.9GB |
| agentlm-13b.Q3_K_L.gguf | Q3_K_L | 6.46GB |
| agentlm-13b.IQ4_XS.gguf | IQ4_XS | 3.88GB |
| agentlm-13b.Q4_0.gguf | Q4_0 | 6.86GB |
| agentlm-13b.IQ4_NL.gguf | IQ4_NL | 3.53GB |
| agentlm-13b.Q4_K_S.gguf | Q4_K_S | 0.49GB |
| agentlm-13b.Q4_K.gguf | Q4_K | 5.26GB |
| agentlm-13b.Q4_K_M.gguf | Q4_K_M | 5.18GB |
| agentlm-13b.Q4_1.gguf | Q4_1 | 7.61GB |
| agentlm-13b.Q5_0.gguf | Q5_0 | 8.36GB |
| agentlm-13b.Q5_K_S.gguf | Q5_K_S | 8.36GB |
| agentlm-13b.Q5_K.gguf | Q5_K | 8.6GB |
| agentlm-13b.Q5_K_M.gguf | Q5_K_M | 8.6GB |
| agentlm-13b.Q5_1.gguf | Q5_1 | 9.11GB |
| agentlm-13b.Q6_K.gguf | Q6_K | 9.95GB |
| agentlm-13b.Q8_0.gguf | Q8_0 | 12.88GB |
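
Each file can be downloaded individually and run with llama.cpp or any GGUF-compatible runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the `repo_id` is an assumption (replace it with this repository's actual id), and the chosen quant is just one example from the table above.

```python
# Minimal sketch: fetch one quant and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/THUDM_-_agentlm-13b-gguf",  # assumed repo id -- verify before use
    filename="agentlm-13b.Q4_K_M.gguf",                # any file name from the table works
)

# n_ctx is a reasonable default, not a value prescribed by this card.
llm = Llama(model_path=gguf_path, n_ctx=4096)
```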

Original model description:

Datasets: THUDM/AgentInstruct

AgentLM-13B

🤗 [Dataset] • 💻 [Github Repo] • 📌 [Project Page] • 📃 [Paper]

AgentTuning is the first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results show that AgentTuning strengthens the agent capabilities of LLMs, generalizing robustly to unseen agent tasks while preserving general language abilities. We have open-sourced the AgentInstruct dataset and the AgentLM models.

Models

AgentLM models are produced by mixed training on the AgentInstruct and ShareGPT datasets, starting from the Llama-2-chat models.

The models follow the conversation format of Llama-2-chat, with the system prompt fixed as:

You are a helpful, respectful and honest assistant.
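
For reference, a single-turn prompt in the standard Llama-2-chat template with that system prompt can be assembled as sketched below; the helper function is illustrative only, and the layout assumes the usual `[INST]`/`<<SYS>>` convention rather than anything specific to this card.

```python
# Sketch: building a Llama-2-chat style prompt with the fixed AgentLM system prompt.
SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant."

def build_prompt(user_message: str) -> str:
    # Standard Llama-2-chat layout: system block wrapped in <<SYS>> inside the first [INST].
    return (
        "<s>[INST] <<SYS>>\n"
        f"{SYSTEM_PROMPT}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

print(build_prompt("Plan the steps needed to find the largest file in a directory."))
```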

The 7B, 13B, and 70B models are available on the Hugging Face model hub.

| Model | Hugging Face Repo |
|-------|-------------------|
| AgentLM-7B | 🤗 Huggingface Repo |
| AgentLM-13B | 🤗 Huggingface Repo |
| AgentLM-70B | 🤗 Huggingface Repo |

Citation

If you find our work useful, please consider citing AgentTuning:

@misc{zeng2023agenttuning,
      title={AgentTuning: Enabling Generalized Agent Abilities for LLMs}, 
      author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
      year={2023},
      eprint={2310.12823},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}