Update README.md

---
# Condensed Lichess Database
This dataset is a condensed version of the Lichess database.
It only includes games for which Stockfish evaluations were available, and games are stored in a format that is faster to parse than the original PGN data.
Currently, the dataset contains the entire year 2023, which consists of >100M games and >1B positions.

# Requirements
The dataset is compressed with `zstandard` and requires the `python-chess` library.
```
pip install zstandard python-chess
```

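As an optional sanity check, you can confirm both packages import cleanly after installation:

```py
# Optional sanity check that the required packages are installed.
import zstandard
import chess  # provided by the python-chess package

print(zstandard.__version__, chess.__version__)
```
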
# Quick guide
Using this dataset should be straightforward, but let me give you a quick tour.

### 1. Loading the dataset
I recommend streaming the data, because the dataset is rather large (~100 GB) and I will expand it in the future.
Note that `trust_remote_code=True` is needed to execute my [custom data loading script](https://huggingface.co/datasets/mauricett/lichess_sf/blob/main/lichess_sf.py), which decompresses the files.
See [HuggingFace's documentation](https://huggingface.co/docs/datasets/main/en/load_hub#remote-code) if you're unsure.
```py
from datasets import load_dataset

# Load the dataset, streaming it from the Hub.
# The repo id matches the loading script linked above.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)
```

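To check that streaming works, you can pull a single sample and inspect it; the exact fields depend on the loading script, so this simply prints whatever keys come back:

```py
# Peek at the first sample of the stream and list its fields.
sample = next(iter(dataset))
print(sample.keys())
```
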
### Usage
To use the dataset, apply `datasets.shuffle()` and your own transformations (e.g. a tokenizer) using `datasets.map()`. The latter will process individual samples in parallel if you're using multiprocessing (e.g. with a PyTorch dataloader), as sketched below.
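
For example, here is a minimal sketch of that pipeline. The `encode` function is a placeholder for your own transformation, and depending on what it returns you may also need a custom `collate_fn` for batching:

```py
from torch.utils.data import DataLoader

# Placeholder transformation -- swap in your own tokenizer / encoding here.
def encode(sample):
    return sample

dataset = dataset.shuffle(seed=42)  # uses a shuffle buffer while streaming
dataset = dataset.map(encode)       # applied lazily, one sample at a time

# With num_workers > 0, samples are fetched and transformed in parallel.
dataloader = DataLoader(dataset, batch_size=64, num_workers=4)
```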