🇫🇷 Travail Emploi Website Dataset (French Ministry of Labor and Employment)

This dataset is a processed and embedded version of the public practical information sheets published on the official website of the Ministère du Travail et de l'Emploi (French Ministry of Labor and Employment): travail-emploi.gouv.fr. The data is downloaded from the government's Social Gouv GitHub repository.

The dataset provides structured, chunked, semantic-search-ready data from official content related to employment, labor law and administrative procedures. The chunks have been vectorized with the BAAI/bge-m3 embedding model to enable semantic search and retrieval tasks.


πŸ—‚οΈ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|---|---|---|
| chunk_id | str | Unique generated and encoded hash of each chunk. |
| doc_id | str | Document identifier from the source site. |
| chunk_index | int | Index of the chunk within its original document, starting from 1. |
| chunk_xxh64 | str | XXH64 hash of the chunk_text value. |
| title | str | Title of the article. |
| surtitre | str | Broader theme (always "Travail-Emploi" in this dataset). |
| source | str | Dataset source label (always "travail-emploi" in this dataset). |
| introduction | str | Introductory paragraph of the article. |
| date | str | Publication or last update date (format: DD/MM/YYYY). |
| url | str | URL of the original article. |
| context | list[str] | Section names related to the chunk. |
| text | str | Textual content extracted and chunked from a section of the article. |
| chunk_text | str | Formatted text combining the title, context, introduction and text values, used for embedding. |
| embeddings_bge-m3 | str | Embedding vector of chunk_text computed with BAAI/bge-m3, stored as a JSON array string. |
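
For a quick look at these columns without downloading everything, here is a minimal sketch using the datasets library in streaming mode (the train split, as used in the loading example further below):

from datasets import load_dataset

# Stream the dataset to inspect its schema and a first record.
ds = load_dataset("AgentPublic/travail-emploi", split="train", streaming=True)
first = next(iter(ds))
print(list(first.keys()))      # column names listed above
print(first["title"], first["url"])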

πŸ› οΈ Data Processing Methodology

📥 1. Field Extraction

The following fields were extracted and/or transformed from the original JSON:

  • Basic fields: sid (i.e. 'pubID'), title, introduction (i.e. 'intro'), date and url are extracted directly from the JSON attributes.
  • Generated fields:
    • chunk_id: a unique generated and encoded hash for each chunk.
    • chunk_index: the index of the chunk within its document; each document has a unique doc_id.
    • chunk_xxh64: the XXH64 hash of the chunk_text value, useful to detect whether chunk_text has changed from one version to another (see the sketch after this list).
    • source: always "travail-emploi" here.
    • surtitre: always "Travail-Emploi" here.
  • Textual fields:
    • context: optional contextual hierarchy (e.g., nested sections).
    • text: textual content of the article chunk, i.e. a semantically coherent fragment extracted from the XML document structure for a given sid.
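
As a rough illustration of how such generated fields can be derived (the exact hashing scheme behind chunk_id is not documented here, so the chunk_id helper below is an assumption; the XXH64 hashing of chunk_text follows the column description):

import hashlib

import xxhash  # pip install xxhash

def compute_chunk_xxh64(chunk_text: str) -> str:
    # XXH64 hash of the chunk_text value, as stored in the chunk_xxh64 column.
    return xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest()

def compute_chunk_id(doc_id: str, chunk_index: int) -> str:
    # Hypothetical helper: one possible way to build a stable, encoded per-chunk identifier.
    return hashlib.sha256(f"{doc_id}-{chunk_index}".encode("utf-8")).hexdigest()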

The source and surtitre columns are constant here because this dataset was built at the same time as the Service Public dataset. Both datasets were intended to be grouped in a single vector collection, so they carry different source and surtitre values to keep them distinguishable.

βœ‚οΈ 2. Generation of 'chunk_text'

The chunk_text value combines the title and introduction of the article, the context values of the chunk, and the chunked textual content (text). This strategy is designed to improve semantic search for document retrieval use cases on administrative procedures.

LangChain's RecursiveCharacterTextSplitter was used to produce these chunks (the text value), with the following parameters (a sketch follows the list):

  • chunk_size = 1500 (to stay compatible with the context windows of most LLMs)
  • chunk_overlap = 20
  • length_function = len
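
A minimal sketch of this chunking step, assuming the langchain-text-splitters package and an already-loaded article body:

from langchain_text_splitters import RecursiveCharacterTextSplitter  # pip install langchain-text-splitters

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,      # maximum characters per chunk
    chunk_overlap=20,     # small overlap between consecutive chunks
    length_function=len,  # chunk length measured in characters
)

article_body = "..."  # placeholder: full textual content of one article section
chunks = splitter.split_text(article_body)
for chunk_index, text in enumerate(chunks, start=1):  # chunk_index starts at 1
    print(chunk_index, text[:80])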

🧠 3. Embeddings Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
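
For reference, here is a minimal sketch of how comparable embeddings can be produced, using sentence-transformers as one possible runner for BAAI/bge-m3 (the exact inference code used for this dataset lives in the GitHub repository linked below):

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("BAAI/bge-m3")

# Encode a few chunk_text values into dense vectors (1024 dimensions for bge-m3).
vectors = model.encode(
    ["some chunk_text value", "another chunk_text value"],
    normalize_embeddings=True,
)
print(vectors.shape)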

📌 Embedding Use Notice

⚠️ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]"). To use it as a vector, parse it back into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame using the datasets library:

import pandas as pd
import json
from datasets import load_dataset
# Requires the pyarrow library in your Python environment: pip install pyarrow

dataset = load_dataset("AgentPublic/travail-emploi")
df = pd.DataFrame(dataset["train"])
# Parse the stringified embedding vectors back into lists of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

Otherwise, if you have already downloaded all Parquet files from the data/travail-emploi-latest/ folder:

import pandas as pd
import json
# Requires the pyarrow library in your Python environment: pip install pyarrow

# Assuming all Parquet files are located in this folder.
df = pd.read_parquet(path="travail-emploi-latest/")
# Parse the stringified embedding vectors back into lists of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

You can then use the DataFrame as you wish, for example by inserting its rows into the vector database of your choice.
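
Purely as an illustration (not part of the dataset tooling), a small sketch of an in-memory semantic search over the parsed embeddings, assuming the DataFrame df from above and sentence-transformers to embed the query:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

# Matrix of chunk embeddings built from the parsed column.
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)

# Embed the query with the same model, then rank chunks by cosine similarity.
query = "Quelles sont les règles du congé parental ?"
q = model.encode(query, normalize_embeddings=True)
scores = matrix @ q / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(q) + 1e-12)

top = scores.argsort()[::-1][:5]
print(df.iloc[top][["title", "url"]])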

🐱 GitHub repository

The MediaTech project is open source! You are free to contribute or to review the complete code used to build this dataset in the GitHub repository.

📚 Source & License

🔗 Source:

📄 License:

Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab Open License.
