---
license: mit
datasets:
  - casey-martin/qald_9_plus
language:
  - en
base_model:
  - meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# Llama-KGQA

Llama-KGQA is a fine-tuned model designed for question answering (QA) over knowledge graphs (KGs). This model translates natural language (NL) questions into SPARQL queries, enabling efficient querying of structured knowledge bases like DBpedia and Wikidata.

## Model Overview

- **Base Model:** Fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct for 6 epochs.
- **Dataset:** Fine-tuned on the QALD benchmark datasets; this version is trained on QALD-9-plus (DBpedia).
- **Objective:** Enable natural-language interfaces for querying knowledge graphs.
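
Since the card declares `library_name: transformers`, the model can also be loaded without the repo's helper script. The sketch below is illustrative only: the repo id `Mecharnia/Llama-KGQA` and the plain-text instruction prompt are assumptions, not documented interfaces — check the GitHub repository for the exact prompt format used during fine-tuning.

```python
def build_prompt(question):
    """Wrap a natural-language question in a translation instruction.

    The exact template is an assumption; the repo's translate.py defines
    the prompt the model was actually trained with.
    """
    return f"Translate the following question into a SPARQL query:\n{question}"


def translate(question, model_id="Mecharnia/Llama-KGQA"):
    """Generate a SPARQL query for `question` (downloads ~8B weights on first use)."""
    from transformers import pipeline  # imported lazily because of the model size

    generator = pipeline("text-generation", model=model_id)
    out = generator(build_prompt(question), max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"]
```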

## Usage

You can use the `translate.py` script provided in the GitHub repository:

```shell
python translate.py "[NATURAL_LANGUAGE_QUESTION]"
```

For example:

```shell
python translate.py "What is the capital of France?"
```

## Example Output

**Input:**

```
What is the capital of France?
```

**Output:**

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX res: <http://dbpedia.org/resource/>
SELECT DISTINCT ?uri WHERE {
  res:France dbo:capital ?uri .
}
```
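
A generated query like the one above can be executed against the public DBpedia endpoint. Below is a minimal sketch using only the Python standard library; the endpoint URL is the well-known public one, and `extract_bindings` assumes the standard SPARQL JSON results format. For the example query, the extracted list should contain the DBpedia resource for Paris.

```python
import json
import urllib.parse
import urllib.request

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public DBpedia SPARQL endpoint


def run_sparql(query, endpoint=DBPEDIA_ENDPOINT):
    """POST a SPARQL query and return the parsed JSON results."""
    data = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    ).encode()
    with urllib.request.urlopen(urllib.request.Request(endpoint, data=data)) as resp:
        return json.load(resp)


def extract_bindings(results, var="uri"):
    """Collect the values bound to `var` from SPARQL JSON results."""
    return [b[var]["value"] for b in results["results"]["bindings"] if var in b]
```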

## Fine-Tuning

If you would like to fine-tune the model on your own dataset, you can use the `main_llama_kgqa.py` script provided in the GitHub repository.

## Evaluation

The model has been evaluated on the QALD-9-plus (DBpedia) and QALD-10 (Wikidata) datasets. Detailed results can be found in the GitHub repository.

## License

This model is licensed under the MIT License. See the GitHub repository for more details.