Assemblage vcpkg DLL Dataset

last updated: June 16th

This repository holds the public dataset for Assemblage. This copy of the dataset covers vcpkg DLL data along with PDB files. Please note that the Assemblage code is published under the MIT license, while the dataset records each binary's source code repository license; please adhere to the original repository's license.

You can find the paper on arXiv.

Please use cat (the command, not my cat) to concatenate the part files into the original .tar.xz file, then extract it:

# Concatenate the SQLite database parts, then extract the archive
cat vcpkg.sqlite.tar.xz.* > vcpkg.sqlite.tar.xz
tar -xvf vcpkg.sqlite.tar.xz
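
Optionally, you can sanity-check the reassembled archive before extracting. This is a minimal sketch using standard tools (xz and tar), not a required step:

# Verify the xz stream is intact (no output means success)
xz -t vcpkg.sqlite.tar.xz

# List the archive contents before extracting
tar -tf vcpkg.sqlite.tar.xz | head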

Dataset Details

This public copy of the Assemblage data consists of 130k vcpkg DLL binaries, whose metadata is stored in the SQLite database. Because binary files can't be represented as text, a separate vcpkg_final.tar.xz archive is included; after extraction, the folder contains the binary files. Each file can be indexed either by its SHA256 hash (the hash column) or by the binary_path column. You can also read our docs on the dataset. A minimal query sketch follows below.
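
As a starting point, here is a minimal sketch of looking up a binary with the sqlite3 command-line tool. It assumes the extracted database file is named vcpkg.sqlite; the hash and binary_path columns are described above, but the table name binaries is an assumption, so inspect the actual schema first:

# Extract the binaries archive (folder layout after extraction may vary)
tar -xvf vcpkg_final.tar.xz

# Inspect the real schema before querying; the table name used below is hypothetical
sqlite3 vcpkg.sqlite ".tables"

# Look up a binary's path by its SHA256 hash (replace <sha256> with a real hash;
# "binaries" is an assumed table name, hash/binary_path are the documented columns)
sqlite3 vcpkg.sqlite "SELECT binary_path FROM binaries WHERE hash = '<sha256>';"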
