---
language:
  - fr
tags:
  - france
  - constitution
  - council
  - conseil-constitutionnel
  - decisions
  - justice
  - embeddings
  - open-data
  - government
pretty_name: French Constitutional Council Decisions Dataset
size_categories:
  - 10K<n<100K
license: etalab-2.0
---

# 🇫🇷 French Constitutional Council Decisions Dataset (Conseil constitutionnel)

This dataset is a processed and embedded version of all decisions issued by the Conseil constitutionnel (French Constitutional Council) since its creation in 1958. It includes the full legal text of each decision, covering constitutional case law, electoral disputes, and other related matters. The original data is downloaded from the dedicated DILA open data repository and is also published on data.gouv.fr.

The dataset provides semantic-ready, structured, and chunked content of constitutional decisions, suitable for semantic search, AI legal assistants, and RAG pipelines. Each text chunk has been vectorized using the BAAI/bge-m3 embedding model.


πŸ—‚οΈ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique generated identifier for each text chunk. |
| `cid` | str | Unique identifier of the decision (e.g., `CONSTEXT...`). |
| `chunk_number` | int | Index of the chunk within the same decision. |
| `nature` | str | Nature of the decision (e.g., Non lieu à statuer, Conformité, etc.). |
| `solution` | str | Legal outcome or conclusion of the decision. |
| `title` | str | Title summarizing the subject matter of the decision. |
| `number` | str | Official number of the decision (e.g., 2019-790). |
| `decision_date` | str | Date of the decision (format: `YYYY-MM-DD`). |
| `text` | str | Raw full-text content of the chunk. |
| `chunk_text` | str | Formatted full chunk, including the title and the text. |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` using BAAI/bge-m3, stored as a JSON array string. |
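
For quick exploration, the Parquet shards can also be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the files have been downloaded into a local `constit-latest/` folder (the same hypothetical layout as the pandas example further below):

```python
from datasets import load_dataset

# Load every Parquet shard from the (hypothetical) local folder as one split.
ds = load_dataset("parquet", data_files="constit-latest/*.parquet", split="train")

# Quick sanity checks against the column table above.
print(ds.column_names)
print(ds[0]["cid"], ds[0]["chunk_number"], ds[0]["decision_date"])
```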

πŸ› οΈ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original source:

- **Basic fields:**
  - `cid`, `title`, `nature`, `solution`, `number`, and `decision_date` are extracted directly from the metadata of each decision record.
- **Generated fields** (see the sketch after this list):
  - `chunk_id`: a generated unique identifier combining the `cid` and `chunk_number`.
  - `chunk_number`: index of the chunk within the original decision.
- **Textual fields:**
  - `text`: a chunk of the main text content.
  - `chunk_text`: generated by concatenating `title` and `text`.
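
As an illustration of the generated fields, here is a minimal sketch of how `chunk_id` could be built from `cid` and `chunk_number`; the exact concatenation scheme is an assumption, not documented by the source:

```python
def make_chunk_id(cid: str, chunk_number: int) -> str:
    # Hypothetical scheme: join the decision identifier and the chunk index.
    # The actual separator/format used to build chunk_id is not documented.
    return f"{cid}-{chunk_number}"

# Illustrative placeholder identifier in the CONSTEXT... style.
print(make_chunk_id("CONSTEXT_EXAMPLE", 0))  # -> "CONSTEXT_EXAMPLE-0"
```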

βœ‚οΈ 2. Generation of chunk_text

LangChain's `RecursiveCharacterTextSplitter` was used to produce these chunks, which become the `text` values. The parameters used are:

- `chunk_size = 1500` (to fit comfortably within the context windows of most LLMs)
- `chunk_overlap = 200`
- `length_function = len`

The value of `chunk_text` combines the decision `title` with the text chunk. This strategy is designed to improve document retrieval, since every chunk carries its document-level context.
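
A minimal sketch of this chunking step, assuming the current `langchain-text-splitters` import path (the original pipeline may have used an older LangChain version):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Parameters from the list above.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,
    chunk_overlap=200,
    length_function=len,
)

title = "Illustrative decision title"
full_text = "Full decision text goes here..."

chunks = splitter.split_text(full_text)          # values for the `text` column
chunk_texts = [f"{title}\n{c}" for c in chunks]  # `chunk_text`; the exact join format is an assumption
```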

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the BAAI/bge-m3 model. The resulting embedding is stored in the `embeddings_bge-m3` column as a JSON-stringified array of 1024 floating-point numbers.
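
A minimal sketch of this step, assuming the `FlagEmbedding` library published by the model authors; the tooling actually used to produce the column is not documented:

```python
import json

from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3")

# Encode a batch of chunk_text values; bge-m3 dense vectors have 1024 dimensions.
output = model.encode(["Chunk text goes here..."])
embedding = output["dense_vecs"][0]

# Store as a JSON array string, matching the embeddings_bge-m3 column format.
embedding_str = json.dumps(embedding.tolist())
```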

## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a stringified list of floats (e.g., `"[-0.03062629,-0.017049594,...]"`). To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame:

```python
import json

import pandas as pd  # requires pyarrow: pip install pyarrow

# Assuming all Parquet files are located in this folder.
df = pd.read_parquet(path="constit-latest/")

# Parse the stringified embeddings into Python lists of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

## 📚 Source & License

### 🔗 Source

- DILA open data repository
- data.gouv.fr

### 📄 License

Open License (Etalab) 2.0: this dataset is publicly available and can be reused under the conditions of the Etalab open license.