---
language:
  - fr
tags:
  - france
  - constitution
  - council
  - conseil-constitutionnel
  - decisions
  - justice
  - embeddings
  - open-data
  - government
pretty_name: French Constitutional Council Decisions Dataset
size_categories:
  - 10K<n<100K
license: etalab-2.0
configs:
  - config_name: latest
    data_files: data/constit-latest/*.parquet
    default: true
---

## 📢 2026 Survey: Use of MediaTech's public datasets

Do you use this dataset or other datasets from our MediaTech collection? Your feedback matters! Help us improve our public datasets by answering this quick survey (5 min): 👉 https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11 Thank you for your contribution! 🙌


# 🇫🇷 French Constitutional Council Decisions Dataset (Conseil constitutionnel)

This dataset is a processed and embedded version of all decisions issued by the Conseil constitutionnel (French Constitutional Council) since its creation in 1958. It includes full legal texts of decisions, covering constitutional case law, electoral disputes, and other related matters. The original data is downloaded from the dedicated DILA open data repository and is also published on data.gouv.fr.

The dataset provides semantic-ready, structured, and chunked content of constitutional decisions, suitable for semantic search, AI legal assistants, and RAG pipelines. Each text chunk has been vectorized with the BAAI/bge-m3 embedding model.


## 🗂️ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|---|---|---|
| chunk_id | str | Unique generated identifier for each text chunk. |
| doc_id | str | Document identifier from the source site. |
| chunk_index | int | Index of the chunk within the same document, starting from 1. |
| chunk_xxh64 | str | XXH64 hash of the chunk_text value. |
| nature | str | Nature of the decision (e.g., Non lieu à statuer, Conformité, etc.). |
| solution | str | Legal outcome or conclusion of the decision. |
| title | str | Title summarizing the subject matter of the decision. |
| number | str | Official number of the decision (e.g., 2019-790). |
| decision_date | str | Date of the decision (format: YYYY-MM-DD). |
| text | str | Raw full-text content of the chunk. |
| chunk_text | str | Formatted full chunk including title and text. |
| embeddings_bge-m3 | str | Embedding vector of chunk_text using BAAI/bge-m3, stored as a JSON array string. |
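A quick way to peek at these columns, assuming the default latest configuration, is to stream a single record (a minimal sketch, not part of the original documentation):

```python
from datasets import load_dataset

# Stream the dataset so a single record can be inspected without a full download
dataset = load_dataset("AgentPublic/constit", split="train", streaming=True)
row = next(iter(dataset))

# Print every column except the (long) stringified embedding vector
for column, value in row.items():
    if column != "embeddings_bge-m3":
        print(f"{column}: {value}")
```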

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original source:

- Basic fields:
  - doc_id (cid), title, nature, solution, number, and decision_date are extracted directly from the metadata of each decision record.
- Generated fields (see the sketch after this list):
  - chunk_id: a unique identifier generated by combining doc_id and chunk_index.
  - chunk_index: the index of the chunk within its document, starting from 1. Each document has a unique doc_id.
  - chunk_xxh64: the XXH64 hash of the chunk_text value, useful for detecting whether chunk_text has changed from one dataset version to the next.
- Textual fields:
  - text: a chunk of the main text content.
  - chunk_text: generated by concatenating title and text.
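As an illustration only, the generated fields could be reproduced along these lines; the exact chunk_id format is an assumption:

```python
import xxhash

def make_generated_fields(doc_id: str, chunk_index: int, chunk_text: str) -> dict:
    return {
        # The dataset combines doc_id and chunk_index; the separator used here
        # is a hypothetical choice for illustration
        "chunk_id": f"{doc_id}_{chunk_index}",
        "chunk_index": chunk_index,
        # XXH64 hash of chunk_text, used to detect changes between dataset versions
        "chunk_xxh64": xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest(),
    }
```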

### ✂️ 2. Generation of chunk_text

LangChain's RecursiveCharacterTextSplitter was used to produce these chunks, which correspond to the text value. The parameters used are:

- chunk_size = 1500
- chunk_overlap = 0
- length_function = len

The chunk_text value combines the title with the chunked textual content. This strategy is designed to improve document search.
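A minimal sketch of this chunking step, assuming the langchain-text-splitters package and a simplified record structure (the exact concatenation format of title and text is an assumption):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Same parameters as listed above
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,
    chunk_overlap=0,
    length_function=len,
)

def chunk_decision(title: str, full_text: str) -> list[dict]:
    """Split a decision into chunks and build chunk_text from title + text."""
    chunks = splitter.split_text(full_text)
    return [
        {
            "chunk_index": i,  # chunk indices start from 1
            "text": chunk,
            "chunk_text": f"{title}\n{chunk}",  # assumed title/text concatenation
        }
        for i, chunk in enumerate(chunks, start=1)
    ]
```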

### 🧠 3. Embeddings Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding is stored as a JSON stringified array of 1024 floating point numbers in the embeddings_bge-m3 column.
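A minimal sketch of how such embeddings can be produced with the FlagEmbedding package, one common way to run BAAI/bge-m3 (the exact tooling used to build the dataset may differ):

```python
import json

from FlagEmbedding import BGEM3FlagModel

# Load BAAI/bge-m3 (use_fp16 only speeds up GPU inference and is optional)
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["<chunk_text of decision 1>", "<chunk_text of decision 2>"]

# Dense embeddings: one 1024-dimensional vector per chunk_text
dense_vecs = model.encode(chunk_texts)["dense_vecs"]

# Store each vector as a JSON array string, as in the embeddings_bge-m3 column
embeddings_as_str = [json.dumps(vec.tolist()) for vec in dense_vecs]
```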

## 🎓 Tutorials

### 🔄 1. The chunking doesn't fit your use case?

If you need to reconstitute the original, un-chunked dataset, you can follow this tutorial notebook available on our GitHub repository.

⚠️ The tutorial is only relevant for datasets that were chunked without overlap.
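The idea, in a minimal sketch (not the tutorial notebook itself): since the chunks do not overlap, each decision can be rebuilt by concatenating its chunks in order.

```python
import pandas as pd

def rebuild_documents(df: pd.DataFrame) -> pd.DataFrame:
    """Reconstitute one row per decision by concatenating its chunks in order."""
    return (
        df.sort_values(["doc_id", "chunk_index"])
          .groupby("doc_id", as_index=False)
          .agg(
              title=("title", "first"),
              number=("number", "first"),
              decision_date=("decision_date", "first"),
              # Plain concatenation; depending on how the splitter trimmed
              # whitespace, joining with a separator may be preferable
              full_text=("text", lambda chunks: "".join(chunks)),
          )
    )
```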

### 🤖 2. How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?

To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our step-by-step RAG tutorial available on our GitHub repository!

### 📌 3. Embedding Use Notice

⚠️ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]"). To use it as a vector, you need to parse it into a list of floats or a NumPy array.

Using the datasets library:

```python
import json

import pandas as pd
from datasets import load_dataset
# PyArrow must be installed in your environment for this example: pip install pyarrow

dataset = load_dataset("AgentPublic/constit")
df = pd.DataFrame(dataset["train"])

# Parse the stringified embeddings into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Using downloaded local Parquet files:

```python
import json

import pandas as pd
# PyArrow must be installed in your environment for this example: pip install pyarrow

# Assuming all Parquet files are located in this folder
df = pd.read_parquet(path="constit-latest/")

# Parse the stringified embeddings into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, for example by inserting its contents into the vector database of your choice.
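As a minimal example of what can then be done, here is a sketch of an in-memory semantic search over the parsed embeddings, using the same BAAI/bge-m3 model to embed the query (the query string is only an illustration):

```python
import numpy as np
from FlagEmbedding import BGEM3FlagModel

# Matrix of chunk embeddings (n_chunks x 1024), built from the parsed column
embeddings = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Embed a query with the same model and rank chunks by cosine similarity
model = BGEM3FlagModel("BAAI/bge-m3")
query_vec = model.encode(["conformité d'une loi à la Constitution"])["dense_vecs"][0]
query_vec = query_vec / np.linalg.norm(query_vec)

scores = embeddings @ query_vec
top_5 = np.argsort(scores)[::-1][:5]
print(df.iloc[top_5][["number", "title", "decision_date"]])
```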

๐Ÿฑ GitHub repository :

The project MediaTech is open source ! You are free to contribute or see the complete code used to build the dataset by checking the GitHub repository

## 📚 Source & License

### 🔗 Source:

### 📄 License:

Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab open license.