---
language:
- fr
tags:
- france
- constitution
- council
- conseil-constitutionnel
- decisions
- justice
- embeddings
- open-data
- government
pretty_name: French Constitutional Council Decisions Dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
---

# 🇫🇷 French Constitutional Council Decisions Dataset (Conseil constitutionnel)

This dataset is a processed and embedded version of all decisions issued by the **Conseil constitutionnel** (French Constitutional Council) since its creation in 1958.
It includes the full legal text of each decision, covering constitutional case law, electoral disputes, and related matters.
The original data is downloaded from [the dedicated **DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CONSTIT) and is also published on [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/les-decisions-du-conseil-constitutionnel/).

The dataset provides structured, chunked, semantic-ready content of constitutional decisions, suitable for use cases such as semantic search, AI legal assistants, or RAG pipelines.
Each chunk of text has been vectorized with the [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) embedding model.

---

## 🗂️ Dataset Contents

The dataset is provided in **Parquet format** and includes the following columns:

| Column Name | Type | Description |
|--------------------|------|------------------------------------------------------------------------------|
| `chunk_id` | `str` | Unique generated identifier for each text chunk. |
| `cid` | `str` | Unique identifier of the decision (e.g., `CONSTEXT...`). |
| `chunk_number` | `int` | Index of the chunk within the same decision. |
| `nature` | `str` | Nature of the decision (e.g., "Non lieu à statuer", "Conformité"). |
| `solution` | `str` | Legal outcome or conclusion of the decision. |
| `title` | `str` | Title summarizing the subject matter of the decision. |
| `number` | `str` | Official number of the decision (e.g., 2019-790). |
| `decision_date` | `str` | Date of the decision (format: `YYYY-MM-DD`). |
| `text` | `str` | Raw full-text content of the chunk. |
| `chunk_text` | `str` | Formatted full chunk combining `title` and `text`. |
| `embeddings_bge-m3`| `str` | Embedding vector of `chunk_text` from `BAAI/bge-m3`, stored as a JSON array string. |

---

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original source:

- **Basic fields**:
  - `cid`, `title`, `nature`, `solution`, `number`, and `decision_date` are extracted directly from the metadata of each decision record.

- **Generated fields**:
  - `chunk_id`: a generated unique identifier combining `cid` and `chunk_number`.
  - `chunk_number`: index of the chunk within the original decision.

- **Textual fields**:
  - `text`: a chunk of the main text content.
  - `chunk_text`: generated by concatenating `title` and `text`.

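The card states only that `chunk_id` combines `cid` and `chunk_number`; the exact format is not documented. A minimal sketch of one plausible construction (the underscore separator is an assumption):

```python
# Hypothetical sketch: the dataset card only says `chunk_id` combines
# `cid` and `chunk_number`; the "{cid}_{chunk_number}" format is assumed here.
def make_chunk_id(cid: str, chunk_number: int) -> str:
    """Build a unique chunk identifier from a decision id and a chunk index."""
    return f"{cid}_{chunk_number}"

print(make_chunk_id("CONSTEXT000017667884", 3))  # CONSTEXT000017667884_3
```
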
### ✂️ 2. Generation of `chunk_text`

LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks stored in `text`. The parameters used are:

- `chunk_size` = 1500 (to maximize compatibility with most LLM context windows)
- `chunk_overlap` = 200
- `length_function` = `len`

The value of `chunk_text` combines the `title` with the textual content chunk `text`. This strategy is designed to improve document search.

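The size/overlap behaviour of these parameters can be illustrated with a simplified, standard-library-only splitter. This is not the LangChain implementation (which recursively splits on separators such as paragraphs and sentences before falling back to fixed windows); it only sketches the fixed-window case:

```python
# Simplified stand-in for RecursiveCharacterTextSplitter: fixed-size windows
# with the same chunk_size/chunk_overlap semantics, no separator-aware splitting.
def split_text(text: str, chunk_size: int = 1500, chunk_overlap: int = 200) -> list[str]:
    step = chunk_size - chunk_overlap  # each window starts 1300 chars after the last
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

def build_chunk_text(title: str, chunk: str) -> str:
    # `chunk_text` concatenates the decision title with the text chunk;
    # the newline separator is an assumption, the card does not specify it.
    return f"{title}\n{chunk}"

chunks = split_text("x" * 4000)
print([len(c) for c in chunks])  # [1500, 1500, 1400]
```

Consecutive chunks share their last/first 200 characters, so sentences cut at a window boundary remain intact in at least one chunk.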
### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model.
The resulting embedding is stored in the `embeddings_bge-m3` column as a JSON-stringified array of 1024 floating-point numbers.

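The storage format can be reproduced with the standard library alone. In this sketch a dummy vector stands in for the actual `bge-m3` model output (the model call itself is omitted):

```python
import json

# Placeholder for the 1024-dim vector a bge-m3 model call would return;
# only the JSON-string round-trip used by the `embeddings_bge-m3` column is shown.
vector = [0.0] * 1024

serialized = json.dumps(vector, separators=(",", ":"))  # compact JSON array string
restored = json.loads(serialized)                        # back to a list of floats

print(len(restored))  # 1024
```
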
## 📌 Embedding Use Notice

⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
To use it as a vector, parse it into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame:

```python
import json

import pandas as pd

df = pd.read_parquet("constit-latest.parquet")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

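Once parsed, the vectors can drive a simple semantic search by cosine similarity. A self-contained sketch with toy 3-dimensional vectors standing in for the real 1024-dimensional `bge-m3` embeddings (in practice the query would be embedded with the same model):

```python
import json
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy rows: (chunk_id, parsed embedding) pairs as they would come out of the
# DataFrame above; the JSON strings mimic the `embeddings_bge-m3` column format.
rows = [
    ("chunk-1", json.loads("[1.0, 0.0, 0.0]")),
    ("chunk-2", json.loads("[0.0, 1.0, 0.0]")),
]
query = [0.9, 0.1, 0.0]  # placeholder for an embedded user query

best = max(rows, key=lambda row: cosine(query, row[1]))
print(best[0])  # chunk-1
```
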
## 📚 Source & License

### 🔗 Source
- [**DILA** open data repository](https://echanges.dila.gouv.fr/OPENDATA/CONSTIT)
- [data.gouv.fr: CONSTIT, les décisions du Conseil constitutionnel](https://www.data.gouv.fr/datasets/constit-les-decisions-du-conseil-constitutionnel/)

### 📄 License
**Open License (Etalab)**: this dataset is publicly available and may be reused under the terms of the Etalab open license.