Model Card for llama3-pre1-ds-lora2

Model Details

Model Overview

Model Name: llama3-pre1-ds-lora2

Model Type: Transformer-based Language Model

Model Size: 8 billion parameters

Developed by: 4yo1

Languages: English and Korean

Model Description

llama3-pre1-ds-lora2 is a language model pre-trained on a diverse corpus of English and Korean texts and fine-tuned with LoRA (Low-Rank Adaptation). This approach adapts the model to specific tasks or datasets by training only a small number of additional parameters, making it efficient and effective for specialized applications.
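
For context on why LoRA adds so few trainable parameters, here is a minimal sketch of attaching a LoRA adapter with Hugging Face's PEFT library. The target modules and hyperparameters below are illustrative assumptions, not the configuration actually used to train this model.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model from this card's checkpoint.
base = AutoModelForCausalLM.from_pretrained("4yo1/llama3-pre1-ds-lora2")

# Illustrative LoRA settings (assumed values, not this model's training config):
# r is the adapter rank; only the small low-rank A/B matrices are trained,
# while the base model's weights stay frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # reports the small fraction of trainable weights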

How to Use: Sample Code

from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Load the configuration, model weights, and tokenizer from the Hugging Face Hub.
# AutoModelForCausalLM attaches the language-modeling head needed for generation.
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-ds-lora2")
model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-pre1-ds-lora2")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora2")
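
Once loaded, the model can be used for text generation. The snippet below is a minimal example; the prompt and decoding settings are illustrative, not recommendations from the model authors.

# Tokenize a prompt and generate a continuation.
inputs = tokenizer("Give me a simple recipe for kimchi fried rice.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))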

Datasets:

  • recipes

License: MIT
