Model Card for huyremy/aichat

This is a simple text-generation model. It was fine-tuned as a test run on my own dataset, and it works well; give it a try.

Model Details


Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hugging Face Hub.

  • Developed by: HuyRemy
  • Funded by: HuyRemy
  • Shared by: HuyRemy
  • Model type: Megatron Mistral
  • License: not specified; contact [email protected]


Uses

Run on a T4 GPU (for example, the free tier in Google Colab), then install the dependencies:

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes
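
Before loading the model, it is worth confirming that a CUDA GPU is actually visible to the runtime; a minimal check (not part of the original card):

import torch

# Verify a CUDA device (e.g. the Colab T4) is available before loading in 4-bit.
assert torch.cuda.is_available(), "No CUDA GPU found; select a GPU runtime."
print(torch.cuda.get_device_name(0))  # should print something like "Tesla T4"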

Direct Use

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # context length used for tokenization and generation
dtype = None           # None lets Unsloth auto-detect (float16 on T4, bfloat16 on Ampere+)
load_in_4bit = True    # 4-bit quantization so the model fits in T4 memory
alpaca_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""


def formatting_prompts_func(examples):
    # Format each (instruction, input, output) triple into the Alpaca prompt.
    # EOS_TOKEN (defined after the tokenizer is loaded below) must terminate each
    # example, otherwise generation never learns to stop during fine-tuning.
    instructions = examples["instruction"]
    inputs       = examples["input"]
    outputs      = examples["output"]
    texts = []
    for instruction, input_text, output in zip(instructions, inputs, outputs):
        text = alpaca_prompt.format(instruction, input_text, output) + EOS_TOKEN
        texts.append(text)
    return { "text" : texts }

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "huyremy/aichat",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

EOS_TOKEN = tokenizer.eos_token  # appended to each training example above

FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
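
The formatting function above is only needed if you want to prepare your own fine-tuning data; the card itself never calls it. A hypothetical sketch, assuming an Alpaca-style JSON file with instruction/input/output columns (the file name is made up):

from datasets import load_dataset

# Hypothetical: load an Alpaca-style dataset and render each row into the prompt template.
dataset = load_dataset("json", data_files = "my_data.json", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)
print(dataset[0]["text"])  # one fully formatted training example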

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "who is Nguyễn Phú Trọng?",  # instruction
            "",                          # input (empty)
            "",                          # response left blank for the model to fill in
        ),
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
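
For interactive use you can stream tokens as they are generated instead of decoding at the end; a minimal sketch using transformers' TextStreamer (not part of the original card):

from transformers import TextStreamer

# Print tokens to stdout as they are produced; skip_prompt hides the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = streamer, max_new_tokens = 64, use_cache = True)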

Model Card Contact

[email protected]
