This is a glorious and graceful gift to the open-source community from the PyThess meetups, with love. It is designed to provide sarcastic non-answers. Use it with caution, and don't trust it. Do not use it seriously, or at all, and do not expect it to qualify as a "helpful assistant."

Built on top of Llama-3.2-1B-Instruct

Fine-tuned on a dataset of short, sarcastic "answers" to questions.
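The dataset itself is not published in this card. As a purely hypothetical sketch, a single training example in the common messages-style chat format (an assumed layout, not a confirmed schema) might look like:

```python
# Hypothetical training example in messages-style chat format.
# The real dataset is not shown in this card; the field names below
# follow the usual chat-template convention, not a confirmed schema.
example = {
    "messages": [
        {"role": "user", "content": "Is JavaScript a good first language?"},
        {"role": "assistant", "content": "Sure, if you enjoy surprises."},
    ]
}
```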

To test:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="AlexandrosChariton/SarcasMLL-1B",
    torch_dtype=torch.float32,
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Why do I even bother with Python? Is it any good?"},
]
outputs = pipe(
    messages,
    max_new_tokens=128,
)
print(outputs[0]["generated_text"][-1])
```

Example input: "Should I move to Scandinavia?"

Response:

```python
{'role': 'assistant', 'content': "Oh yes, because nothing says 'good life' like freezing your butt off. And the cost of living? A whole other story. You might even need a warm coat. Worth a shot? Probably not. Scandinavia is all about embracing the cold. You'll love it. You'll hate it. Either way, you'll be fine. Or not. Who knows. It's all part of the adventure. Right?"}
```
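Since the pipeline returns the full chat history, the assistant's reply is the last message in `generated_text`. A minimal sketch of pulling out just the reply string (the list below mimics the output structure shown above; with the real pipeline you would use the `outputs` returned by `pipe(...)`):

```python
# Stand-in for the pipeline's return value, mirroring the structure in
# the example above (with the real model, use `outputs = pipe(messages)`).
outputs = [{"generated_text": [
    {"role": "user", "content": "Should I move to Scandinavia?"},
    {"role": "assistant", "content": "Oh yes, because nothing says 'good life' like freezing your butt off."},
]}]

# The last message in the generated chat history is the assistant's turn;
# its "content" field holds the reply text.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)
```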
