---
library_name: transformers
tags: []
---

*Header image of a very philosophical llama*

# Finetuned Mistral 7B model on philosophical texts

This model is a fine-tune of mistralai/Mistral-7B-Instruct-v0.3, tuned on the Stanford Encyclopedia of Philosophy (SEP) dataset: AiresPucrs/stanford-encyclopedia-philosophy.

The model was trained on an A10G GPU with 24GB of VRAM for about one hour.
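The exact training script is not published here, but the description above matches a standard QLoRA run. Below is a minimal sketch, assuming trl's `SFTTrainer` (whose keyword arguments vary across versions) and illustrative hyperparameters; the LoRA rank, target modules, learning rate, and dataset column name are assumptions, not the values actually used:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base_model_id = "mistralai/Mistral-7B-Instruct-v0.3"

# SEP articles used as plain training text
dataset = load_dataset("AiresPucrs/stanford-encyclopedia-philosophy", split="train")

# Load the base model in 4-bit NF4 so it fits into 24GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

# Illustrative LoRA config -- rank, alpha, and target modules are assumptions
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",  # assumed column name for the article text
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="mistral-7b-philosophy",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=25,
    ),
)
trainer.train()
trainer.save_model("mistral-7b-philosophy")  # saves only the adapter weights
```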

## Example

Q: "Please explain the allegory of the cave to me."

Answer before fine-tuning:

The Allegory of the Cave is a philosophical thought experiment by Plato, presented in his work The Republic. In this allegory, Plato describes a group of people who have lived chained in a dark cave since birth, with their backs facing a wall on which shadows are projected from objects passing in front of a fire behind them. These prisoners believe that the shadows they see are reality itself. One prisoner is freed and leaves the cave, eventually coming to understand that the shadows are mere representations of real things outside the cave. He returns to the cave to try to enlighten the other prisoners about the true nature of reality, but they reject his claims because they cannot conceive of anything beyond what they can perceive through their senses.

Plato uses this allegory to illustrate the idea that our understanding of the world is limited by our sensory experiences and that we must strive for knowledge and wisdom in order to truly comprehend reality. It also highlights the difficulty in communicating abstract ideas to those who have not experienced them firsthand.

Answer after fine-tuning:

In Plato’s Allegory of the Cave, a philosopher compares human beings who live in a cave and are chained so that they can only see shadows on the wall to people who have been educated by philosophy. The philosopher says that the prisoners in the cave represent those who have not been educated by philosophy, while the free person represents someone who has been educated by philosophy. According to the philosopher, the prisoners in the cave are like people who do not understand what is real because all they know is the world of appearances (the shadows). The philosopher argues that the prisoners need to be freed from their chains and led out into the sunlight where they will see the sun itself instead of just its reflection on the walls of the cave. This is similar to how philosophers argue that we should try to understand reality as it really is rather than just accepting our everyday perceptions of things. The philosopher also suggests that the process of being freed from the chains and seeing the sun for the first time is painful, but necessary if one wants to truly understand reality.

## Using the model

To use the model, load the quantized base model and apply the LoRA adapter from this repository on top of it (replace the placeholder adapter id with this repository's Hugging Face id):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-Instruct-v0.3"

# 4-bit NF4 quantization so the 7B model fits into 24GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True)

# Apply the LoRA adapter from this repository on top of the quantized base model
ft_model = PeftModel.from_pretrained(base_model, "<this-adapter-repo-id>")  # replace with this repo's id

prompt = "Please explain the allegory of the cave to me."
model_input = tokenizer(prompt, return_tensors="pt").to("cuda")

ft_model.eval()
with torch.no_grad():
    output = ft_model.generate(**model_input, max_new_tokens=256, repetition_penalty=1.15)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
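The prompt above is passed in raw. Mistral's instruct models normally expect the `[INST] ... [/INST]` chat format, which the tokenizer's built-in chat template applies for you; if generations look off, a variant using the standard `apply_chat_template` API:

```python
# Wrap the question in Mistral's [INST] ... [/INST] chat format
messages = [{"role": "user", "content": "Please explain the allegory of the cave to me."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

with torch.no_grad():
    output = ft_model.generate(input_ids, max_new_tokens=256, repetition_penalty=1.15)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```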

## Sources

- Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
- Dataset for fine-tuning: https://huggingface.co/datasets/AiresPucrs/stanford-encyclopedia-philosophy

I hold no rights to the base model or the dataset used.