Model Card for danilo

Model Details

  • Model Name: danilo
  • Model Type: GPT-Neo (Text Generation)
  • Base Model: EleutherAI/gpt-neo-125M
  • Parameters: 125M (F32, Safetensors)
  • Fine-Tuned: Yes (custom dataset)
  • License: MIT

Intended Use

This model is designed for text generation tasks, such as answering questions, generating conversational responses, or completing text prompts.

Training Data

The model was fine-tuned on a custom dataset of question-answer pairs to mimic a specific style of responses.
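The actual dataset format is not published; as an illustration only, question-answer pairs are commonly flattened into single training strings for a causal LM. The field names and separator below are assumptions, not the real dataset schema:

```python
# Hypothetical sketch of turning QA pairs into causal-LM training text.
# The "Question:"/"Answer:" labels and the EOS token are assumptions,
# not the actual format used to fine-tune this model.
def format_example(question: str, answer: str, eos: str = "<|endoftext|>") -> str:
    """Join one QA pair into a single training string, ending with EOS."""
    return f"Question: {question}\nAnswer: {answer}{eos}"

pairs = [
    {"question": "How do you prioritize tasks?",
     "answer": "List everything, then rank by urgency and impact."},
]
texts = [format_example(p["question"], p["answer"]) for p in pairs]
print(texts[0])
```

Each string is then tokenized and used for standard next-token-prediction fine-tuning.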

Limitations

  • The model may generate incorrect or nonsensical answers if the input is ambiguous or outside its training scope.
  • It may exhibit biases present in the training data.

Usage

You can use this model with the Hugging Face transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Replace "your-username/your-model-name" with the model's Hub ID.
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForCausalLM.from_pretrained("your-username/your-model-name")

inputs = tokenizer("How do you prioritize tasks when you're overwhelmed?", return_tensors="pt")
# max_new_tokens caps only the generated text (max_length would also count the prompt).
# GPT-Neo has no dedicated pad token, so reuse EOS to silence the padding warning.
outputs = model.generate(**inputs, max_new_tokens=80, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
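The call above uses greedy decoding, which can produce repetitive output. Sampling parameters often give more varied conversational responses; the values below are illustrative defaults, not settings tuned for this model:

```python
# Illustrative sampling settings (assumed values, not tuned for this model).
# Pass to generate() as: model.generate(**inputs, **gen_kwargs)
gen_kwargs = {
    "max_new_tokens": 80,        # cap generated tokens, excluding the prompt
    "do_sample": True,           # sample instead of greedy decoding
    "temperature": 0.8,          # <1.0 sharpens the token distribution
    "top_p": 0.9,                # nucleus sampling: keep top 90% probability mass
    "repetition_penalty": 1.2,   # discourage verbatim loops
    # "pad_token_id": tokenizer.eos_token_id,  # set this too; GPT-Neo has no pad token
}
print(gen_kwargs["max_new_tokens"])
```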