This is a Mistral-7B model fine-tuned with 4-bit QLoRA on Czech Wikipedia data. It is intended primarily as a base for further fine-tuning on Czech-specific NLP tasks such as summarization and question answering; the adaptation improves performance on tasks that require an understanding of the Czech language and context.

For the exact QLoRA parameters, see the Axolotl YAML config file in this repository; an illustrative sketch of such a config follows below.

Built with Axolotl
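
The repository's YAML file holds the authoritative settings. As a rough orientation only, an Axolotl QLoRA config typically looks like the sketch below; every value here is an illustrative placeholder (including the assumed base model and the dataset path), not the settings actually used for this model.

# Illustrative Axolotl QLoRA config -- NOT the actual file from this repo
base_model: mistralai/Mistral-7B-v0.1  # assumed base; check the repo YAML
load_in_4bit: true
adapter: qlora

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj

datasets:
  - path: czech-wikipedia-dataset  # placeholder; see the model page for the real dataset
    type: completion

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: paged_adamw_8bit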

Example of usage:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "simecek/cswikimistral_0.1"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# 4-bit loading requires the bitsandbytes package and a CUDA GPU;
# drop load_in_4bit=True to load the model in full precision instead.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)

def generate_text(prompt, max_new_tokens=50):
    # Tokenize the prompt and move the tensors to the model's device
    inputs = tokenizer(prompt, return_tensors="pt").to(device)

    output = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=max_new_tokens,
        num_return_sequences=1,
        pad_token_id=tokenizer.eos_token_id,  # Mistral has no pad token, so reuse EOS
    )

    return tokenizer.decode(output[0], skip_special_tokens=True)

# Czech for "The capital of the Czech Republic is"; the model should continue with "Praha"
prompt = "Hlavní město České republiky je"
generated_text = generate_text(prompt, max_new_tokens=5)
print(generated_text)
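
Since the model is meant above all as a starting point for further fine-tuning, here is a minimal sketch of how fresh QLoRA adapters could be attached with the peft and bitsandbytes libraries before training on a Czech downstream task. The LoRA hyperparameters and target modules below are illustrative assumptions, not values recommended by the model author.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "simecek/cswikimistral_0.1"

# NF4 4-bit quantization (requires the bitsandbytes package and a CUDA GPU)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# Prepare the quantized model for k-bit training (casts norms, enables input grads)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters -- tune these for your own task
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

# From here, train with transformers.Trainer or trl's SFTTrainer on your task data.

This setup keeps the frozen base weights in 4-bit while training only the low-rank adapter matrices, so a single consumer GPU is usually sufficient.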