[bot] Conversion to Parquet

#1
by parquet-converter - opened

The parquet-converter bot has created a version of this dataset in the Parquet format in the refs/convert/parquet branch.

What is Parquet?

Apache Parquet is a popular columnar storage format known for:

  • reduced memory requirements,
  • fast data retrieval and filtering,
  • efficient storage.

Parquet is what powers the dataset viewer on each dataset page, and it means every dataset on the Hub can be accessed with the same code (you can use HF Datasets, ClickHouse, DuckDB, Pandas, or Polars, whichever you prefer).

You can learn more about the advantages associated with Parquet in the documentation.

How to access the Parquet version of the dataset?

You can access the Parquet version of the dataset by following this link: refs/convert/parquet

What if my dataset was already in Parquet?

When the dataset is already in Parquet format, the data is not converted, and the files in refs/convert/parquet are links to the original files. There is one exception, made to keep the dataset viewer API fast: if the row group size of the original Parquet files is too large, new Parquet files are generated.

What should I do?

You don't need to do anything. The Parquet version of the dataset is available for you to use. Refer to the documentation for examples and code snippets on how to query the Parquet files with ClickHouse, DuckDB, Pandas or Polars.

If you have any questions or concerns, feel free to ask in the discussion below. You can also close the discussion if you don't have any questions.

So... the dataset was already in Parquet format... so what did you do?

Hello,

I encountered a CastError when trying to load this dataset using the datasets library. The specific error occurs during the generation of the 'train' split. The error message indicates a mismatch between the expected column names and those found in the Parquet files (it seems to expect a schema similar to the one in gaps.parquet, but it finds a schema similar to the one in ticks.parquet).

According to the description, the schema for ticks.parquet is:
{
'ts': pl.Datetime,
'open': pl.Float64,
'high': pl.Float64,
'low': pl.Float64,
'close': pl.Float64,
'volume': pl.UInt64,
}

and the schema for gaps.parquet is:
{
'length': pl.UInt64,
'start': pl.Datetime,
'end': pl.Datetime,
}

I would like to ask if there is a specific way to load this dataset using the datasets library or if any special configuration is required.

Thank you!
