---
language:
  - fr
tags:
  - france
  - legislation
  - law
  - embeddings
  - open-data
  - government
  - parlement
pretty_name: French Legislative Dossiers Dataset (DOLE)
size_categories:
  - 1K<n<10K
license: etalab-2.0
configs:
  - config_name: latest
    data_files: data/dole-latest/*.parquet
    default: true
---

# 🇫🇷 French Legislative Dossiers Dataset (DOLE)

This dataset provides a semantic-ready, chunked and embedded version of the Dossiers Législatifs ("DOLE") published by the French government. It includes all laws promulgated since the XIIᵉ legislature (June 2002), ordinances, and legislative proposals under preparation. The original data is downloaded from the dedicated DILA open data repository and is also published on data.gouv.fr.

Each article is chunked and vectorized using the BAAI/bge-m3 embedding model, enabling use cases such as semantic search, retrieval-augmented generation (RAG), and legal research systems.


## 🗂️ Dataset Contents

The dataset is available in Parquet format and contains the following columns:

| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique identifier for each chunk. |
| `cid` | str | Legislative file identifier. |
| `chunk_number` | int | Index of the chunk within its parent document. |
| `category` | str | Type of dossier (e.g., `LOI_PUBLIEE`, `PROJET_LOI`, etc.). |
| `content_type` | str | Nature of the content: `article`, `dossier_content`, or `explanatory_memorandum`. |
| `title` | str | Title summarizing the subject matter. |
| `number` | str | Internal document number. |
| `wording` | str | Libellé: legislature reference (e.g., "XIVème législature"). |
| `creation_date` | str | Creation or publication date (YYYY-MM-DD). |
| `article_number` | int or null | Article number, if applicable. |
| `article_title` | str or null | Optional title of the article. |
| `article_synthesis` | str or null | Optional synthesis of the article. |
| `text` | str or null | Text content of the `explanatory_memorandum`, `article`, or dossier content (contenu du dossier) chunk. |
| `chunk_text` | str | Concatenated text (title + article text or related content). |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` using BAAI/bge-m3, stored as a JSON string. |

## 🛠️ Data Processing Methodology

### 🧩 1. Content Extraction

Each dossier législatif was parsed, processed, and standardized from its official XML structure. Metadata, article blocks, and explanatory sections were normalized into a unified schema. Specific rules were applied per content type:

- `explanatory_memorandum`: Includes only the introduction of the explanatory memorandum. The per-article syntheses it contains are split by their `article_number` and stored in `article_synthesis`. Article fields are null. An explanatory memorandum (exposé des motifs) is an official text that accompanies a draft or proposed law; it explains the reasons why the law is being proposed, the context in which it is set, and the objectives pursued by the legislator.

- `dossier_content`: Includes the dossier's textual content when the split by article was not possible, either because no article numbers are mentioned in the dossier content or because the parsing code did not handle that particular layout. Article metadata fields are null.

- `article`: Structured content, where `article_number` and `text` are always present. `article_title` and `article_synthesis` may be missing.

- Basic fields: `cid`, `category`, `title`, `number`, `wording`, and `creation_date` were taken directly from the source XML file.

- Generated fields:

  - `chunk_id`: A unique hash for each text chunk (see the sketch after this list).
  - `chunk_number`: Indicates the order of a chunk within its parent document.
  - `content_type`: Nature of the content.
  - `article_number`: Number of the article. Available only if `content_type` is `article`.
  - `article_title`: Title of the article. Available only if `content_type` is `article`.
  - `article_synthesis`: Synthesis of the article extracted from the explanatory memorandum. Available only if `content_type` is `article`.

- Textual fields:

  - `text`: Chunk of the main text content. It can be an article's text extracted from the dossier content, a chunk of the explanatory memorandum's introduction, or a chunk of the dossier content.
  - `chunk_text`: Combines the title and the main text body to maximize embedding relevance. If `content_type` is `article`, the article number is also added.
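
The exact hashing scheme behind `chunk_id` is not documented here; the snippet below is only a minimal sketch of one plausible approach (the choice of SHA-256 and of the hash inputs is an assumption, not the authors' confirmed method):

```python
import hashlib

def make_chunk_id(cid: str, chunk_number: int, chunk_text: str) -> str:
    """Hypothetical chunk_id: a SHA-256 digest of the dossier id, chunk index and chunk text."""
    payload = f"{cid}|{chunk_number}|{chunk_text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Illustrative values only
print(make_chunk_id("JORFDOLE000000000000", 0, "Article 1er ..."))
```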

### ✂️ 2. Chunk Generation

A `chunk_text` was built by combining the title, the `article_number` (if applicable), and the corresponding text content. Chunking ensures semantic granularity for embedding purposes.
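
As a rough illustration only (the exact separators and labels used for the concatenation are not specified in the source, so the format below is an assumption):

```python
from typing import Optional

def build_chunk_text(title: str, text: str, article_number: Optional[int] = None) -> str:
    """Hypothetical reconstruction of chunk_text: title, optional article number, then the text body."""
    if article_number is not None:
        return f"{title}\nArticle {article_number}\n{text}"
    return f"{title}\n{text}"
```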

For `content_type = article`, no recursive split was necessary, as legal articles and memoranda are inherently structured and relatively short. Where longer content did need splitting, LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks (the `text` value), with the following parameters (a configuration sketch follows the list):

- `chunk_size` = 8000
- `chunk_overlap` = 400
- `length_function` = len
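
For reference, a minimal sketch of that configuration (assuming the `langchain-text-splitters` package; the exact invocation used by the authors is not shown in the source):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=8000,      # maximum number of characters per chunk
    chunk_overlap=400,    # characters shared between consecutive chunks
    length_function=len,  # chunk length measured in characters
)

# long_text stands for a long explanatory memorandum introduction or unsplit dossier content
long_text = "..."
chunks = splitter.split_text(long_text)
```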

### 🧠 3. Embeddings Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
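
The embedding pipeline itself is not included here; the snippet below is a minimal sketch of how a compatible vector can be produced with the `sentence-transformers` library (using this particular client is an assumption, not necessarily what the authors used):

```python
import json
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

chunk_text = "Projet de loi relatif à ... Article 1er ..."  # any chunk_text value
vector = model.encode(chunk_text, normalize_embeddings=True)

# The dataset stores each vector as a JSON string, e.g. "[-0.03062629,-0.017049594,...]"
embedding_str = json.dumps(vector.tolist())
```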

### 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a stringified list of floats (e.g., `"[-0.03062629,-0.017049594,...]"`). To use it as a vector, parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the datasets library:

```python
import pandas as pd
import json
from datasets import load_dataset

# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow

dataset = load_dataset("AgentPublic/dole")
df = pd.DataFrame(dataset["train"])
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Alternatively, if you have already downloaded all Parquet files from the data/dole-latest/ folder:

```python
import pandas as pd
import json

# The pyarrow library must be installed in your Python environment for this example: pip install pyarrow

df = pd.read_parquet(path="dole-latest/")  # assuming all Parquet files are located in this folder
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice.
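
As an illustration, here is a minimal sketch of an in-memory similarity search over the parsed embeddings using NumPy, with the query embedded by the same BAAI/bge-m3 model (the query string and variable names are purely illustrative, and `df` is the dataframe built above):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

# Matrix of chunk embeddings, one row per chunk (already parsed with json.loads above)
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)

query = "dispositions relatives à la transition énergétique"
query_vec = model.encode(query, normalize_embeddings=True)

# Cosine similarity between the query and every chunk, then take the top 5
scores = matrix @ query_vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec))
top_idx = np.argsort(-scores)[:5]
print(df.iloc[top_idx][["title", "chunk_text"]])
```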


## 📚 Source & License

### 🔗 Source:

- Dossiers Législatifs (DOLE) in the DILA open data repository
- data.gouv.fr

### 📄 License:

Open License (Etalab) — This dataset is publicly available and reusable under the Etalab open license.