
Quantization made by Richard Erkhov.

Github

Discord

Request more models

gemma-2-2b-rebus-solver-fp16 - AWQ

Original model description:

language:
- it
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- unsloth
- gemma
- gemma2
- trl
- word-game
- rebus
- italian
- word-puzzle
- crossword
datasets:
- gsarti/eureka-rebus
base_model: unsloth/gemma-2-2b-bnb-4bit
model-index:
- name: gsarti/gemma-2-2b-rebus-solver-fp16
  results:
  - task:
      type: verbalized-rebus-solving
      name: Verbalized Rebus Solving
    dataset:
      type: gsarti/eureka-rebus
      name: EurekaRebus
      config: llm_sft
      split: test
      revision: 0f24ebc3b66cd2f8968077a5eb058be1d5af2f05
    metrics:
    - type: exact_match
      value: 0.43
      name: First Pass Exact Match
    - type: exact_match
      value: 0.36
      name: Solution Exact Match

Gemma-2 2B Verbalized Rebus Solver 🇮🇹

This model is a parameter-efficient fine-tuned version of Gemma-2 2B trained for verbalized rebus solving in Italian, as part of the release for our paper Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses. The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.

The model was trained in 4-bit precision for 5070 steps on the verbalized subset of EurekaRebus using QLoRA via Unsloth and TRL. This version has the adapter weights merged in half precision, enabling out-of-the-box usage with the transformers library.

We also provide adapter checkpoints throughout training and 8-bit GGUF versions of this model for analysis and local execution.

Using the Model

The following example shows how to perform inference using Unsloth:

# With Unsloth (efficient, requires GPU)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "gsarti/gemma-2-2b-rebus-solver-fp16",
    max_seq_length = 1248,
    load_in_4bit = True,
)

# Inference
verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
template = """<bos><start_of_turn>user
Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.

Rebus: {rebus}
Chiave risolutiva: {key}<end_of_turn>
<start_of_turn>model"""
input = template.format(rebus=verbalized_rebus, key=solution_key)
inputs = tokenizer(input, return_tensors="pt")["input_ids"]
outputs = model.generate(input_ids = inputs, max_new_tokens = 500, use_cache = True)
model_generations = tokenizer.batch_decode(outputs)
print(model_generations[0])

# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
# 
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
# 
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
# 
# Soluzione: L'avaro lesina anche ciò che è indispensabile
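
Since the adapter weights are merged in half precision, the same prompt can also be run with plain transformers. The following is a minimal sketch rather than code from the official release: the model id and prompt template are the ones shown above, while the generation settings and the solution-extraction step are illustrative assumptions.

# With transformers (minimal sketch; loads the merged fp16 weights directly)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gsarti/gemma-2-2b-rebus-solver-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
template = """<bos><start_of_turn>user
Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.

Rebus: {rebus}
Chiave risolutiva: {key}<end_of_turn>
<start_of_turn>model"""

# The template already contains <bos>, so skip the tokenizer's automatic special tokens
prompt = template.format(rebus=verbalized_rebus, key=solution_key)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
generation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generation)

# Illustrative post-processing: keep the final "Soluzione:" line of the generation
solution_lines = [line for line in generation.split("\n") if line.startswith("Soluzione:")]
print(solution_lines[-1] if solution_lines else "No solution line found")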

See the official code release for more examples.

Local usage with Ollama

A ready-to-use local version of this model is hosted on the Ollama Hub and can be used as follows:

ollama run gsarti/gemma2-2b-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"
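
Ollama also exposes a local REST API, so the same model can be queried programmatically. The snippet below is a minimal sketch assuming an Ollama server running on the default port (11434) with the model pulled as above.

import json
import urllib.request

# Build the same prompt used in the CLI example above
prompt = (
    "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] "
    "[Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N "
    "[Contiene scorte di cibi] B [Isola in francese]\n"
    "Chiave risolutiva: 1 ' 5 6 5 3 3 1 14"
)
payload = json.dumps({
    "model": "gsarti/gemma2-2b-rebus-solver",
    "prompt": prompt,
    "stream": False,
}).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# Send the request to the local Ollama server and print the model's reply
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])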

Limitations

Lexical overfitting: As remarked in the related publication, the model overfits the set of definitions and answers for first-pass words seen during training. As a result, performance degrades significantly when words that were explicitly withheld from the training set are used as answers to the definitions of a verbalized rebus. You can compare model performance on in-domain and out-of-domain test examples to verify this limitation.
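
One simple way to quantify this is to score solution-level exact match separately on the two groups. The sketch below is illustrative only: it assumes you have already collected model generations and gold solutions for in-domain and out-of-domain test examples, and the example pairs are placeholders rather than dataset content.

def solution_exact_match(predictions, references):
    """Fraction of examples whose predicted solution matches the gold solution exactly."""
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

# Hypothetical usage: lists of (prediction, gold) pairs for the two test subsets
in_domain = [("L'avaro lesina anche ciò che è indispensabile", "L'avaro lesina anche ciò che è indispensabile")]
out_of_domain = [("Una soluzione sbagliata", "La soluzione corretta")]

for name, pairs in [("in-domain", in_domain), ("out-of-domain", out_of_domain)]:
    preds, golds = zip(*pairs)
    print(f"{name}: exact match = {solution_exact_match(preds, golds):.2f}")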

Model curators

For problems or updates on this model, please contact [email protected].

Citation Information

If you use this model in your work, please cite our paper as follows:

@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = {abs/2408.00584},
    url = {https://arxiv.org/abs/2408.00584},
}

Acknowledgements

We are grateful to the Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini" for making its rebus collection freely accessible on the Eureka5 platform.
