EminescuAI

Overview

EminescuAI is a specialized language model based on meta-llama/Llama-3.3-70B-Instruct, fine-tuned using LoRA (Low-Rank Adaptation). The model was trained on the public literary works of Mihai Eminescu, one of Romania's most influential poets and writers. Building on its smaller 8B predecessor, this 70B-parameter model demonstrates significantly enhanced capability in understanding and generating Romanian literary content.

Technical Details

  • Base Model: meta-llama/Llama-3.3-70B-Instruct
  • Fine-tuning Method: LoRA (Low-Rank Adaptation); an illustrative adapter configuration is sketched after this list
  • Training Data: Public works by Mihai Eminescu
  • Primary Language: Romanian
  • Weights: Safetensors, FP16, 70.6B parameters
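
The exact fine-tuning hyperparameters have not been published. The sketch below shows, for illustration only, how a LoRA adapter for this base model is typically set up with the peft library; the rank, alpha, dropout, and target modules are assumed values, not the configuration actually used to train EminescuAI.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative only: a typical PEFT LoRA setup for a Llama-style base model.
base_id = "meta-llama/Llama-3.3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,   # keep the frozen base weights in half precision
    device_map="auto",            # requires accelerate; spreads layers across devices
)

lora_config = LoraConfig(
    r=16,                                                    # assumed adapter rank
    lora_alpha=32,                                            # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model with trainable low-rank adapter weights.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()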

Capabilities

The model excels at generating Romanian text in Eminescu's literary style, producing more coherent and contextually appropriate poetry than its 8B predecessor while maintaining a consistent narrative structure.

It performs best on descriptive writing and on understanding and responding to Romanian-language prompts, particularly:

  • Nature-themed poetry
  • Seasonal descriptions
  • Romantic and contemplative prose

Ideal for:

  • Creative writing in Romanian
  • Generating descriptive text inspired by Romanian romantic literature

Limitations

For optimal results:

  • Use Romanian-language prompts (see the example after this list)
  • Focus on descriptive and creative writing tasks
  • Keep in mind that the model works best with themes common in Romanian romantic literature
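
As a rough illustration of the kind of prompt the model handles best, the Romanian prompts below are arbitrary examples (not taken from the training data) and can be passed as text in the Usage example that follows; English translations are given in the comments.

# Illustrative Romanian prompts on descriptive, romantic-literature themes.
prompts = [
    "Scrie o poezie despre o seară de toamnă pe malul unui lac.",  # "Write a poem about an autumn evening on the shore of a lake."
    "Descrie un codru bătrân sub lumina lunii.",                   # "Describe an old forest under the moonlight."
]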

Technical Note

This model represents an application of modern AI technology to classical Romanian literature, demonstrating how historical literary styles can be preserved and studied using machine learning techniques.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("adrianpintilie/EminescuAI-70B")
tokenizer = AutoTokenizer.from_pretrained("adrianpintilie/EminescuAI-70B")

# Generate text (the prompt means "Write a poem about:")
text = "Scrie o poezie despre:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
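
Loading a 70B-parameter FP16 checkpoint takes on the order of 140 GB of memory, so in practice the weights are usually loaded with an explicit dtype and automatic device placement. The snippet below is a minimal sketch of that pattern using standard transformers arguments, assuming accelerate is installed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Keep the FP16 weights in half precision and let accelerate place the layers
# across the available GPUs (and CPU, if necessary).
model = AutoModelForCausalLM.from_pretrained(
    "adrianpintilie/EminescuAI-70B",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("adrianpintilie/EminescuAI-70B")

text = "Scrie o poezie despre:"  # "Write a poem about:"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))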