
This is a Chinese instruction-tuning LoRA checkpoint based on LLaMA-7B, from this repo's work.

We use 50k Chinese samples, a combination of the alpaca_chinese_instruction_dataset and the Chinese conversation data from the sharegpt-90k dataset. We fine-tune the model for 3 epochs on a single RTX 4090 with ctxlen=2048.
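
For reference, a LoRA fine-tuning setup of this kind typically looks like the minimal sketch below, using peft. The LoRA hyperparameters here (r, lora_alpha, lora_dropout, target_modules) are illustrative assumptions, not the values used for this checkpoint; see the train-args linked below for the actual configuration.

```python
# Minimal sketch of a LoRA fine-tuning setup with peft.
# Hyperparameters are illustrative assumptions, not this repo's values.
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)
model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```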

You can use it like this:

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base LLaMA-7B model in 8-bit
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the Chinese-Vicuna LoRA weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    "Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```
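
Once loaded, you can generate a reply as sketched below. Note that the exact chat prompt template this checkpoint was trained with is defined in the repo; the plain prompt here is only an illustrative assumption.

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# A plain prompt for illustration; check the repo for the real chat template.
prompt = "你好，请介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```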

We provide the training arguments and training logs here.
