Commit 0d4de14 · verified · 1 Parent(s): b3a1ad8
TeleAI-AI-Flow committed: Upload README.md

Files changed (1):
  1. README.md +10 -9

README.md CHANGED
@@ -21,16 +21,17 @@ This reduction not only lowers computational costs and inference delay but also
  Tokenizer efficiency exhibits growing significance in light of the exploding input length and the widespread usage of test-time scaling, but is often **neglected** in LLM evaluations.
  We assess the information capacity of 49 models across 5 heterogeneous datasets and find consistent evidence regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.
 
- ## Method
+ ## Data
 
- The model intelligence is measured by the data size savings achieved from the LLM's probability prediction.
- The original size of a text sample in the given dataset is denoted as $C$, which is transformed into a sequence of $L$ tokens by the tokenizer of an LLM $M$.
- The symbol length of the $i$-th token derived from entropy coding is approximately $-\log p(x_i | x_{<i} ; M)$, and the compression gain is the difference between the original data size and the summed symbol length of all tokens.
- The computational complexity is measured by the inference floating-point operations (FLOPs) $N_M$ on a logarithmic scale according to the scaling law.
- We introduce a negative bias $b$ in the numerator so that different-sized models in a series have nearly identical information capacities, thus enabling convenient comparison across different model sizes and architectures.
+ Previous studies have established that the correlation between compression and intelligence weakens when the evaluation corpus deviates significantly from the domain of downstream tasks.
+ Thus, we construct five heterogeneous datasets to provide a holistic assessment of LLM capabilities: Mixed text, FinePDFs-en, Ch-FineWeb-Edu, FineWeb-Edu, and NextCoder.
+ The Mixed text dataset is collected by us, while the other datasets are sampled from publicly available open-source datasets.
 
- In summary, the computation formula of information capacity is expressed as:
- $$ \text{IC} = \frac{\frac{1}{L-1} \left( C - \sum_{i=2}^{L} -\log p(x_i | x_{<i} ; M) \right) + b}{\log \left( N_M / (L-1) \right)} . $$
+ * **Mixed text**: We compile a multilingual text corpus from diverse sources, including books, webpages, code, and published papers, to facilitate a comprehensive evaluation of LLMs' compression efficiency.
+ * **FinePDFs-en**: The FinePDFs dataset consists of about 3T tokens sourced exclusively from publicly available PDF files. We select only from the English subset to better examine the influence of the corpus distribution. <a href="https://huggingface.co/datasets/HuggingFaceFW/finepdfs"> [Huggingface] </a>
+ * **Ch-FineWeb-Edu**: The Chinese Fineweb Edu dataset is a high-quality Chinese pretraining corpus of 90 million samples in the education domain, selected by a strategy similar to that of FineWeb-Edu. <a href="https://huggingface.co/datasets/opencsg/chinese-fineweb-edu"> [Huggingface] </a>
+ * **FineWeb-Edu**: The FineWeb-Edu dataset contains 1.3T tokens of educational English webpages filtered from the FineWeb dataset, based on annotations generated by Llama-3-70B-Instruct. <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu"> [Huggingface] </a>
+ * **NextCoder**: The NextCoder dataset consists of 127K unique code samples generated by GPT-4o and Llama-3.3-70B-Instruct across 8 programming languages: Python, Java, C++, C, Rust, JavaScript, Go, and Kotlin. <a href="https://huggingface.co/datasets/microsoft/NextCoderDataset"> [Huggingface] </a>
 
  ## Usage
 
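The information-capacity formula in the removed Method text above reduces to a few lines of Python. The sketch below is only an illustration of the equation, not the repository's evaluation code: the per-token log-probabilities, the sample size `C`, the FLOPs estimate `N_M`, and the bias `b` are all assumed to be supplied by the caller, and the log base must be kept consistent between `C` and the log-probabilities.

```python
import math

def information_capacity(logprobs, C, N_M, b=0.0):
    """Sketch of IC = [ (1/(L-1)) * (C - sum_i -log p(x_i|x_<i;M)) + b ] / log(N_M / (L-1)).

    logprobs: log p(x_i | x_{<i}; M) for tokens i = 2..L (L-1 values; the first token is excluded).
    C:        original data size of the text sample (same log base/units as logprobs).
    N_M:      inference FLOPs of model M.
    b:        bias that aligns different-sized models of the same series.
    """
    L_minus_1 = len(logprobs)
    # Summed symbol length under entropy coding: sum of -log p(x_i | x_<i; M).
    symbol_length = -sum(logprobs)
    # Per-token compression gain, shifted by the bias b.
    numerator = (C - symbol_length) / L_minus_1 + b
    # Compute term: per-token FLOPs on a logarithmic scale.
    denominator = math.log(N_M / L_minus_1)
    return numerator / denominator
```

In words, a higher value means more compression gain per token relative to the (log-scale) compute spent to obtain it.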
@@ -41,7 +42,7 @@ pip install numpy torch transformers tqdm flash_attn huggingface_hub
 
  Step 2. Clone this repo.
  ```sh
- git clone https://github.com/TeleAI-AI-Flow/InformationCapacity.git
+ GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity
  cd InformationCapacity
  ```
 
 
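The `GIT_LFS_SKIP_SMUDGE=1` prefix clones the repository metadata without downloading the large LFS-tracked data files up front. For a quick look at the public source corpora listed in the new Data section, documents can also be streamed directly from the Hub with the `datasets` library. The sketch below uses FineWeb-Edu as an example; the config and field names are taken from its dataset card, not from this repository's scripts, and may change.

```python
from datasets import load_dataset

# Stream a few FineWeb-Edu documents without downloading the full corpus.
# "sample-10BT" is one of the sampled configs on the dataset card; check the
# card if the available configs change.
stream = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)

for i, doc in enumerate(stream):
    print(doc["text"][:200])  # FineWeb-Edu rows expose the document in a "text" field
    if i >= 4:                # preview only a handful of documents
        break
```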