---
license: llama2
language:
- ko
library_name: transformers
base_model: beomi/llama-2-ko-7b
pipeline_tag: text-generation
---
# msy127/ft_240201_01
## Our Team

| Research & Engineering | Product Management |
|---|---|
| David Sohn | David Sohn |
## Model Details

### Base Model
[beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

### Trained On
- OS: Ubuntu 22.04
- GPU: 1× A100 40GB
- transformers: v4.37
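If you want to match the training environment, a quick sketch for checking the installed transformers version (the check itself is illustrative, not from this card):

```python
import transformers

# The model card lists transformers v4.37 as the training version.
print(transformers.__version__)  # expect a 4.37.x release
```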
### Instruction format

It follows a custom format, e.g.:
```python
text = """\
<|user|>
건강한 식습관을 만들기 위해서는 어떻게 하는것이 좋을까요?
<|assistant|>
"""
```

(The example prompt asks: "What should I do to build healthy eating habits?")
## Implementation Code

This model's tokenizer includes a chat_template for the instruction format above. You can use the code below.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="msy127/ft_240201_01")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("msy127/ft_240201_01")
model = AutoModelForCausalLM.from_pretrained("msy127/ft_240201_01")
```
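Because the tokenizer ships with a chat_template, the tags can also be applied automatically via `apply_chat_template` rather than written by hand (`max_new_tokens` here is an illustrative value):

```python
messages = [
    {"role": "user", "content": "건강한 식습관을 만들기 위해서는 어떻게 하는것이 좋을까요?"}
]
# The bundled chat_template inserts the <|user|>/<|assistant|> tags for us.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=256)  # illustrative length
print(tokenizer.decode(output[0], skip_special_tokens=True))
```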
## Introduction to our service platform

- An AI companion service platform that talks with you while looking at your face.
- You can preview the future of character.ai, the world's best character AI service.
- https://livetalkingai.com