Tokenizer efficiency is increasingly significant given exploding input lengths and the widespread use of test-time scaling, but it is often **neglected** in LLM evaluations.
We assess the information capacity of 49 models across 5 heterogeneous datasets and find consistent evidence regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.

## Information Capacity

The computational complexity is measured by the inference floating-point operations (FLOPs) $N_M$ on a logarithmic scale according to the scaling law.
We introduce a negative bias $b$ in the numerator so that different-sized models in a series have nearly identical information capacities, thus enabling convenient comparison across different model sizes and architectures.
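Schematically (our notation: $C_M$ is only a placeholder for the compression term defined in the paper), the definition takes the form

$$
\mathrm{IC}(M) = \frac{C_M + b}{\log N_M}, \qquad b < 0,
$$

where $C_M$ measures how well model $M$ compresses the evaluation corpus, $N_M$ is its inference FLOPs, and $b$ is the shared negative bias that aligns the scores of different-sized models within a series.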
## Data
Previous studies have established that the correlation between compression and intelligence weakens when the evaluation corpus significantly deviates from the domain of downstream tasks.
Thus, we construct five heterogeneous datasets to provide a holistic assessment of LLM capabilities: Mixed text, FinePDFs-en, Ch-FineWeb-Edu, FineWeb-Edu, and NextCoder.
The Mixed text dataset is collected by us, while the other four are sampled from publicly available open-source datasets.
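Each source is described in the list below; as a rough illustration (a sketch of ours, not part of this repo's pipeline; it requires `pip install datasets`, and the config name is an assumption), a few documents can be streamed directly from the Hub:

```python
from itertools import islice

from datasets import load_dataset

# Stream a few FineWeb-Edu documents without downloading the full corpus.
# "sample-10BT" is an assumed config name; swap in the subset you actually need.
stream = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)
for doc in islice(stream, 3):
    print(doc["text"][:200])
```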
* **Mixed text**: We compile a multilingual text corpus from diverse sources, including books, webpages, code, and published papers, to facilitate a comprehensive evaluation of LLMs' compression efficiency.
* **FinePDFs-en**: The FinePDFs dataset consists of about 3T tokens sourced exclusively from publicly available PDF files. We select only its English subset to better examine the influence of the corpus distribution. <a href="https://huggingface.co/datasets/HuggingFaceFW/finepdfs"> [Huggingface] </a>
* **Ch-FineWeb-Edu**: The Chinese Fineweb Edu dataset is a high-quality Chinese pretraining corpus of 90 million samples in the education domain, selected by a strategy similar to that of FineWeb-Edu. <a href="https://huggingface.co/datasets/opencsg/chinese-fineweb-edu"> [Huggingface] </a>
* **FineWeb-Edu**: The FineWeb-Edu dataset contains 1.3T tokens of educational English webpages filtered from the FineWeb dataset, based on the annotations generated by Llama-3-70B-Instruct. <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu"> [Huggingface] </a>
* **NextCoder**: The NextCoder dataset consists of 127K unique code samples generated by GPT-4o and Llama-3.3-70B-Instruct across 8 programming languages: Python, Java, C++, C, Rust, JavaScript, Go, and Kotlin. <a href="https://huggingface.co/datasets/microsoft/NextCoderDataset"> [Huggingface] </a>
## Usage
Step 1. Install the dependencies.
```sh
pip install numpy torch transformers tqdm flash_attn huggingface_hub
```

Step 2. Clone this repo.
```sh
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity
cd InformationCapacity
```
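Because `GIT_LFS_SKIP_SMUDGE=1` skips the large LFS-tracked files during the clone, the data files themselves may still need to be fetched; one way (a minimal sketch using the `huggingface_hub` package from the dependency list; the `local_dir` path is an assumption) is:

```python
from huggingface_hub import snapshot_download

# Fetch the dataset files that GIT_LFS_SKIP_SMUDGE=1 left out of the clone.
# local_dir is an assumption; point it at the cloned InformationCapacity directory.
snapshot_download(
    repo_id="TeleAI-AI-Flow/InformationCapacity",
    repo_type="dataset",
    local_dir="InformationCapacity",
)
```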