andylizf committed
Commit edf8aa8 · verified · 1 Parent(s): ac3a65f

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .ruff_cache/0.9.3/11933818255426932669 +0 -0
  2. .ruff_cache/0.9.3/11968666858155083397 +0 -0
  3. .ruff_cache/0.9.3/13436062826761165196 +0 -0
  4. .ruff_cache/0.9.3/13464923521211872640 +0 -0
  5. .ruff_cache/0.9.3/13470315390911081382 +0 -0
  6. .ruff_cache/0.9.3/13548313108019963712 +0 -0
  7. .ruff_cache/0.9.3/14245749132087675641 +0 -0
  8. .ruff_cache/0.9.3/14376865095874407072 +0 -0
  9. .ruff_cache/0.9.3/1481979688375074229 +0 -0
  10. .ruff_cache/0.9.3/18085458142504125944 +0 -0
  11. .ruff_cache/0.9.3/317943474952122141 +0 -0
  12. .ruff_cache/0.9.3/462964361075760675 +0 -0
  13. .ruff_cache/0.9.3/5234517607253957601 +0 -0
  14. .ruff_cache/0.9.3/6853246009102804303 +0 -0
  15. .ruff_cache/0.9.3/7457038290277297526 +0 -0
  16. .ruff_cache/0.9.3/8298439258647854158 +0 -0
  17. .ruff_cache/0.9.3/9242853808455100265 +0 -0
  18. .ruff_cache/0.9.3/9993021473936986026 +0 -0
  19. latency_analysis/absolute_times.png +3 -0
  20. sglang_repo/.gitignore +229 -0
  21. sglang_repo/.gitmodules +12 -0
  22. sglang_repo/sgl-kernel/3rdparty/flashinfer/.gitignore +185 -0
  23. sglang_repo/sgl-kernel/3rdparty/flashinfer/.gitmodules +18 -0
  24. sglang_repo/sgl-kernel/3rdparty/flashinfer/.pre-commit-config.yaml +61 -0
  25. sglang_repo/sgl-kernel/3rdparty/flashinfer/CHANGELOG.md +374 -0
  26. sglang_repo/sgl-kernel/3rdparty/flashinfer/LICENSE +223 -0
  27. sglang_repo/sgl-kernel/3rdparty/flashinfer/README.md +169 -0
  28. sglang_repo/sgl-kernel/3rdparty/flashinfer/custom_backend.py +41 -0
  29. sglang_repo/sgl-kernel/3rdparty/flashinfer/pyproject.toml +116 -0
  30. sglang_repo/sgl-kernel/3rdparty/flashinfer/setup.py +279 -0
  31. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_batch_decode_mla.cu +122 -0
  32. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_cascade.cu +386 -0
  33. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_norm.cu +53 -0
  34. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_sampling.cu +180 -0
  35. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_single_decode.cu +141 -0
  36. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_single_prefill.cu +217 -0
  37. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/cpu_reference.h +192 -0
  38. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/flashinfer_ops.cuh +647 -0
  39. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_batch_decode.cu +182 -0
  40. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_batch_prefill.cu +811 -0
  41. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_cascade.cu +657 -0
  42. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_fastdiv.cu +73 -0
  43. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_norm.cu +76 -0
  44. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_page.cu +208 -0
  45. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_sampling.cu +0 -0
  46. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_single_prefill.cu +276 -0
  47. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/tvm_wrapper.cu +830 -0
  48. sglang_repo/sgl-kernel/3rdparty/flashinfer/src/utils.h +209 -0
  49. sglang_repo/sgl-kernel/LICENSE +201 -0
  50. sglang_repo/sgl-kernel/Makefile +28 -0
.ruff_cache/0.9.3/11933818255426932669 ADDED
Binary file (122 Bytes).
 
.ruff_cache/0.9.3/11968666858155083397 ADDED
Binary file (227 Bytes).
 
.ruff_cache/0.9.3/13436062826761165196 ADDED
Binary file (118 Bytes).
 
.ruff_cache/0.9.3/13464923521211872640 ADDED
Binary file (993 Bytes).
 
.ruff_cache/0.9.3/13470315390911081382 ADDED
Binary file (154 Bytes).
 
.ruff_cache/0.9.3/13548313108019963712 ADDED
Binary file (265 Bytes).
 
.ruff_cache/0.9.3/14245749132087675641 ADDED
Binary file (162 Bytes).
 
.ruff_cache/0.9.3/14376865095874407072 ADDED
Binary file (111 Bytes).
 
.ruff_cache/0.9.3/1481979688375074229 ADDED
Binary file (228 Bytes).
 
.ruff_cache/0.9.3/18085458142504125944 ADDED
Binary file (111 Bytes).
 
.ruff_cache/0.9.3/317943474952122141 ADDED
Binary file (172 Bytes).
 
.ruff_cache/0.9.3/462964361075760675 ADDED
Binary file (120 Bytes).
 
.ruff_cache/0.9.3/5234517607253957601 ADDED
Binary file (3.51 kB).
 
.ruff_cache/0.9.3/6853246009102804303 ADDED
Binary file (204 Bytes).
 
.ruff_cache/0.9.3/7457038290277297526 ADDED
Binary file (171 Bytes).
 
.ruff_cache/0.9.3/8298439258647854158 ADDED
Binary file (298 Bytes).
 
.ruff_cache/0.9.3/9242853808455100265 ADDED
Binary file (167 Bytes).
 
.ruff_cache/0.9.3/9993021473936986026 ADDED
Binary file (103 Bytes).
 
latency_analysis/absolute_times.png ADDED

Git LFS Details

  • SHA256: 7f98dedd3c85899708231f8beb92cb5387d28b6b86779800890fc4a2590f44a6
  • Pointer size: 131 Bytes
  • Size of remote file: 190 kB
sglang_repo/.gitignore ADDED
@@ -0,0 +1,229 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ .idea/
+
+ # MacOS
+ .DS_Store
+
+ # Vim
+ *.swp
+
+ # Documentation
+ docs/_build
+
+ # SGL
+ benchmark/mmlu/data
+ benchmark/mmlu/data.tar
+ benchmark/llava_bench/images
+ benchmark/llava_bench/mme_pack
+ *.jsonl
+ tmp*.txt
+
+ # Plots
+ *.png
+ *.pdf
+
+ # personnal
+ work_dirs/
+ *.csv
+
+ !logo.png
+
+ # Prerequisites
+ *.d
+
+ # Compiled Object files
+ *.slo
+ *.lo
+ *.o
+ *.obj
+
+ # Precompiled Headers
+ *.gch
+ *.pch
+
+ # Compiled Dynamic libraries
+ *.so
+ *.dylib
+ *.dll
+
+ # Fortran module files
+ *.mod
+ *.smod
+
+ # Compiled Static libraries
+ *.lai
+ *.la
+ *.a
+ *.lib
+
+ # Executables
+ *.exe
+ *.out
+ *.app
+
+ compile_commands.json
+
+ *.iml
+
+ # VSCode
+ .vscode
+
+ 1
sglang_repo/.gitmodules ADDED
@@ -0,0 +1,12 @@
+ [submodule "sgl-kernel/3rdparty/cutlass"]
+     path = sgl-kernel/3rdparty/cutlass
+     url = https://github.com/NVIDIA/cutlass.git
+ [submodule "sgl-kernel/3rdparty/cccl"]
+     path = sgl-kernel/3rdparty/cccl
+     url = https://github.com/NVIDIA/cccl.git
+ [submodule "sgl-kernel/3rdparty/flashinfer"]
+     path = sgl-kernel/3rdparty/flashinfer
+     url = https://github.com/flashinfer-ai/flashinfer.git
+ [submodule "sgl-kernel/3rdparty/turbomind"]
+     path = sgl-kernel/3rdparty/turbomind
+     url = https://github.com/InternLM/turbomind
sglang_repo/sgl-kernel/3rdparty/flashinfer/.gitignore ADDED
@@ -0,0 +1,185 @@
+ # ci
+ flashinfer-whl/
+ dist/
+
+ # Compile commands json file
+ compile_commands.json
+
+ # Generated files
+ csrc/generated/
+ docs/generated/
+ flashinfer/_build_meta.py
+ flashinfer/data/
+ flashinfer/jit/aot_config.py
+ src/generated/
+ csrc/aot_default_additional_params.h
+
+ # DS_Store files
+ .DS_store
+
+ # Microbenchmark files
+ microbenchmark/
+
+ # vscode
+ .vscode/
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
sglang_repo/sgl-kernel/3rdparty/flashinfer/.gitmodules ADDED
@@ -0,0 +1,18 @@
+ [submodule "3rdparty/nvbench"]
+     path = 3rdparty/nvbench
+     url = https://github.com/NVIDIA/nvbench.git
+ [submodule "3rdparty/googletest"]
+     path = 3rdparty/googletest
+     url = https://github.com/google/googletest.git
+ [submodule "3rdparty/mscclpp"]
+     path = 3rdparty/mscclpp
+     url = https://github.com/microsoft/mscclpp.git
+ [submodule "3rdparty/cutlass"]
+     path = 3rdparty/cutlass
+     url = https://github.com/NVIDIA/cutlass.git
+ [submodule "3rdparty/composable_kernels"]
+     path = 3rdparty/composable_kernels
+     url = https://github.com/ROCm/composable_kernel.git
+ [submodule "3rdparty/spdlog"]
+     path = 3rdparty/spdlog
+     url = https://github.com/gabime/spdlog.git
sglang_repo/sgl-kernel/3rdparty/flashinfer/.pre-commit-config.yaml ADDED
@@ -0,0 +1,61 @@
+ # To use:
+ #
+ #   pre-commit run -a
+ #
+ # Or:
+ #
+ #   pre-commit install # (runs every time you commit in git)
+ #
+ # To update this file:
+ #
+ #   pre-commit autoupdate
+ #
+ # See https://github.com/pre-commit/pre-commit
+ # Note the pre-commit hooks shoule only be used for formatting, but not for linting.
+ # For linting consider using CI.
+ repos:
+ # Standard hooks
+ - repo: https://github.com/pre-commit/pre-commit-hooks
+   rev: v5.0.0
+   hooks:
+   - id: check-added-large-files
+   - id: check-case-conflict
+   - id: check-merge-conflict
+   - id: check-symlinks
+   - id: end-of-file-fixer
+   - id: mixed-line-ending
+   - id: requirements-txt-fixer
+   - id: trailing-whitespace
+
+ # Changes tabs to spaces
+ - repo: https://github.com/Lucas-C/pre-commit-hooks
+   rev: v1.5.5
+   hooks:
+   - id: remove-tabs
+   - id: remove-crlf
+
+ # Formatters
+ - repo: https://github.com/psf/black-pre-commit-mirror
+   rev: 24.8.0
+   hooks:
+   - id: black
+
+ - repo: https://github.com/pycqa/isort
+   rev: 5.13.2
+   hooks:
+   - id: isort
+     args: ["--profile=black"] # <-- this one
+
+ - repo: https://github.com/pre-commit/mirrors-clang-format
+   rev: v19.1.1
+   hooks:
+   - id: clang-format
+     types_or: [c++, c, cuda]
+     exclude: |
+       (?x)^(3rdparty/.* src/generated/.* flashinfer/jit/aot_config.py)$
+
+ - repo: https://github.com/cheshirekow/cmake-format-precommit
+   rev: v0.6.13
+   hooks:
+   - id: cmake-format
+     additional_dependencies: [pyyaml>=5.1]
sglang_repo/sgl-kernel/3rdparty/flashinfer/CHANGELOG.md ADDED
@@ -0,0 +1,374 @@
1
+ # Changelog
2
+
3
+ ## [0.2.0.post1](https://github.com/flashinfer-ai/flashinfer/compare/v0.2.0...v0.2.0.post1) (2024-12-22)
4
+
5
+ ### Bug Fixes
6
+
7
+ * bug fix on determine_attention_backend condition ([#688](https://github.com/flashinfer-ai/flashinfer/pull/688)) ([bcf7a3e](https://github.com/flashinfer-ai/flashinfer/commit/bcf7a3ee0d919eca45d2f07241479b5776975bc3))
8
+ * accelerate plan speed of fa3 template ([#690](https://github.com/flashinfer-ai/flashinfer/pull/690)) ([db8f04d](https://github.com/flashinfer-ai/flashinfer/commit/db8f04d30989f57acef3fbde41cbd3ce373727f1))
9
+
10
+ ## [0.2.0](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.6...v0.2.0) (2024-12-17)
11
+
12
+ ### Release Blog
13
+
14
+ [FlashInfer 0.2 - Efficient and Customizable Kernels for LLM Inference Serving](https://flashinfer.ai/2024/12/16/flashinfer-v02-release.html)
15
+
16
+ ### Features
17
+
18
+ * add `rotary_dim` argument to rope APIs for partial apply rope ([#599](https://github.com/flashinfer-ai/flashinfer/issues/599)) ([eb9bc71](https://github.com/flashinfer-ai/flashinfer/commit/eb9bc710ce875dd276109b6b62745fc1282f1541))
19
+ * add a `use_softmax` field in variant class ([#533](https://github.com/flashinfer-ai/flashinfer/issues/533)) ([d81af97](https://github.com/flashinfer-ai/flashinfer/commit/d81af9775e56bb30152b17770e804823cddfc279))
20
+ * add an option `non_blocking` to plan function ([#622](https://github.com/flashinfer-ai/flashinfer/issues/622)) ([560af6f](https://github.com/flashinfer-ai/flashinfer/commit/560af6f687524a2415eb94ad333b65b9461a47b1))
21
+ * add gemma_rmsnorm and gemma_fused_add_rmsnorm ([#477](https://github.com/flashinfer-ai/flashinfer/issues/477)) ([1a6b17e](https://github.com/flashinfer-ai/flashinfer/commit/1a6b17e2b78fc811d50030b9326a4d01f1ff956f))
22
+ * add group size 3 to GQA decode dispatch ([#558](https://github.com/flashinfer-ai/flashinfer/issues/558)) ([6227562](https://github.com/flashinfer-ai/flashinfer/commit/62275625f9332e40a69789467835cbb376f2940d))
23
+ * add JIT compilation support for FA3 templates ([#672](https://github.com/flashinfer-ai/flashinfer/issues/672)) ([d4e8d79](https://github.com/flashinfer-ai/flashinfer/commit/d4e8d79b340589633943bebd827da17b3f4c29ad))
24
+ * allow the cascade kernels to be executed using varying sequence lengths ([#627](https://github.com/flashinfer-ai/flashinfer/issues/627)) ([92ac440](https://github.com/flashinfer-ai/flashinfer/commit/92ac4401d434e988ec8aeb769ecf3ff575c32983))
25
+ * CUDAGraph compatibility of multi-level cascade inference APIs ([#586](https://github.com/flashinfer-ai/flashinfer/issues/586)) ([2332e8a](https://github.com/flashinfer-ai/flashinfer/commit/2332e8ae477656b2be060465b30c30b5dee389b9))
26
+ * fix the maximal grid dimension in prefill planning with CUDA graphs ([#639](https://github.com/flashinfer-ai/flashinfer/issues/639)) ([86ca89a](https://github.com/flashinfer-ai/flashinfer/commit/86ca89a60f1bf1eb566cb9e45d21e4c8f174c251))
27
+ * improve the precision of the FusedAddRMSNormKernel function ([#587](https://github.com/flashinfer-ai/flashinfer/issues/587)) ([c7dc921](https://github.com/flashinfer-ai/flashinfer/commit/c7dc921f9323d2f767fd8e9d9d0ab4c1d95ad1b5))
28
+ * JIT compilation ([#507](https://github.com/flashinfer-ai/flashinfer/issues/507)) ([3613a5b](https://github.com/flashinfer-ai/flashinfer/commit/3613a5bd829234863a96bc23e3bd2a1da345a592))
29
+ * modify group-gemm stage number ([#497](https://github.com/flashinfer-ai/flashinfer/issues/497)) ([52dab1d](https://github.com/flashinfer-ai/flashinfer/commit/52dab1d4a4d7e5d910a8c695de911d979d6f2038))
30
+ * non-contiguous query with paged kv cache ([#553](https://github.com/flashinfer-ai/flashinfer/issues/553)) ([89f2c4a](https://github.com/flashinfer-ai/flashinfer/commit/89f2c4a816ff133e09cb9fc1d7c3de43d4431ffd))
31
+ * pass a dynamic token count to the cascade kernels ([#635](https://github.com/flashinfer-ai/flashinfer/issues/635)) ([5fe9f7d](https://github.com/flashinfer-ai/flashinfer/commit/5fe9f7d1d1ab8aa13cb6073a6447e383ad52b484))
32
+ * simplify prefill JIT compilation ([#605](https://github.com/flashinfer-ai/flashinfer/issues/605)) ([fe4f898](https://github.com/flashinfer-ai/flashinfer/commit/fe4f8980223a92cc918f2e6041df854fcebefbc9))
33
+ * specify gemm backend ([#648](https://github.com/flashinfer-ai/flashinfer/issues/648)) ([0cc1a51](https://github.com/flashinfer-ai/flashinfer/commit/0cc1a51757e73a4f4a1be9f2e7ac0e0f2c156056))
34
+ * support cached cos/sin in rope APIs ([#585](https://github.com/flashinfer-ai/flashinfer/issues/585)) ([83e541d](https://github.com/flashinfer-ai/flashinfer/commit/83e541d8fa2b15ff23c8c68c136fa5023e2c977d))
35
+ * support huggingface transformer style rope interface ([#568](https://github.com/flashinfer-ai/flashinfer/issues/568)) ([4f40420](https://github.com/flashinfer-ai/flashinfer/commit/4f40420e24d65cabd8be731e12f96a5ef0795a4b))
36
+ * support sm90 cutlass group gemm ([#509](https://github.com/flashinfer-ai/flashinfer/issues/509)) ([794bdda](https://github.com/flashinfer-ai/flashinfer/commit/794bdda1ea2d62d4d2c0e858553058ad890ee5e3))
37
+ * torch custom_op fix for rope ([#569](https://github.com/flashinfer-ai/flashinfer/issues/569)) ([3e104bc](https://github.com/flashinfer-ai/flashinfer/commit/3e104bc7769735af83ffc709fe1f7a641f2471da))
38
+ * torch custom_op support: norm ([#552](https://github.com/flashinfer-ai/flashinfer/issues/552)) ([f6e0010](https://github.com/flashinfer-ai/flashinfer/commit/f6e0010833f54a5b8181a9232588649f0b3c182e))
39
+ * torch.compile and custom_op support ([#554](https://github.com/flashinfer-ai/flashinfer/issues/554)) ([9bf916f](https://github.com/flashinfer-ai/flashinfer/commit/9bf916f236139f5b6410e298615d0db152e82409))
40
+ * warmup for jit kernel tests ([#629](https://github.com/flashinfer-ai/flashinfer/issues/629)) ([8f5f349](https://github.com/flashinfer-ai/flashinfer/commit/8f5f3491c523f5c43623d3cd3eaa42854f47ad76))
41
+
42
+
43
+ ### Bug Fixes
44
+
45
+ * AOT compiler flags on non-sm90 ([#522](https://github.com/flashinfer-ai/flashinfer/issues/522)) ([0aa4726](https://github.com/flashinfer-ai/flashinfer/commit/0aa47269f9f06f20e4a15662931972c9a2de482f))
46
+ * batch decode kernel redundant store output to gmem ([#505](https://github.com/flashinfer-ai/flashinfer/issues/505)) ([90e42a7](https://github.com/flashinfer-ai/flashinfer/commit/90e42a7307dad08bc1f800efb3d73a3bd22a0824))
47
+ * compatible with torch 2.2 ([#478](https://github.com/flashinfer-ai/flashinfer/issues/478)) ([ac41d1b](https://github.com/flashinfer-ai/flashinfer/commit/ac41d1bdc72ed4614c9eafb8644d45b234260005))
48
+ * https://github.com/flashinfer-ai/flashinfer/issues/452 ([b53a46f](https://github.com/flashinfer-ai/flashinfer/commit/b53a46f8b073e66fbc8fe888e87517b3aea8bd2d))
49
+ * remove redundant load ([#495](https://github.com/flashinfer-ai/flashinfer/issues/495)) ([2de16b0](https://github.com/flashinfer-ai/flashinfer/commit/2de16b0f4afbb9d3c5725187ee2f14ef08fa364f))
50
+ * update bmm fp8 test ([#487](https://github.com/flashinfer-ai/flashinfer/issues/487)) ([45eac04](https://github.com/flashinfer-ai/flashinfer/commit/45eac04f9420b2372737d16d51f4d07bf928d293))
51
+
52
+
53
+ ### Performance Improvements
54
+
55
+ * accelerate JIT compilation speed ([#618](https://github.com/flashinfer-ai/flashinfer/issues/618)) ([eaf73fd](https://github.com/flashinfer-ai/flashinfer/commit/eaf73fd0246f32f214f1db6ed8143bf8a503aae4))
56
+ * Dense and sparse customizable flashattention-3 template ([#667](https://github.com/flashinfer-ai/flashinfer/issues/667)) ([51236c9](https://github.com/flashinfer-ai/flashinfer/commit/51236c913107f2f6098ac039a4aaa4841a443c25))
57
+ * fix prefill kernel performance degradation (step 1) ([#602](https://github.com/flashinfer-ai/flashinfer/issues/602)) ([595cf60](https://github.com/flashinfer-ai/flashinfer/commit/595cf602e73688d2f96f8cf1aad7cb2fce689d41))
58
+ * fix the performance issue of `append_paged_kv_cache` ([#588](https://github.com/flashinfer-ai/flashinfer/issues/588)) ([e15f7c9](https://github.com/flashinfer-ai/flashinfer/commit/e15f7c984bc4152c0b65cfec916ace37c98668cd))
59
+ * improve parallelism in RoPE with pos_ids ([#609](https://github.com/flashinfer-ai/flashinfer/issues/609)) ([ff05155](https://github.com/flashinfer-ai/flashinfer/commit/ff05155581f5e085b573f803aed398434859e22f))
60
+ * improve plan performance by using non-blocking memcpy ([#547](https://github.com/flashinfer-ai/flashinfer/issues/547)) ([41ebe6d](https://github.com/flashinfer-ai/flashinfer/commit/41ebe6dce7c505801853a27246feea2e06500620))
61
+ * reduce the read and write of shared memory in the FusedAddRMSNormKernel ([#592](https://github.com/flashinfer-ai/flashinfer/issues/592)) ([2043ca2](https://github.com/flashinfer-ai/flashinfer/commit/2043ca2181d1e9119a1fb8b86a739c245be5b536))
62
+ * reduce total_num_tiles_q by one ([#644](https://github.com/flashinfer-ai/flashinfer/issues/644)) ([553ace5](https://github.com/flashinfer-ai/flashinfer/commit/553ace5eb91fc07681fa9edf8b6c09827a72617a))
63
+ * remove unnecessary contiguous operation in block sparse attention ([#561](https://github.com/flashinfer-ai/flashinfer/issues/561)) ([7a7ad46](https://github.com/flashinfer-ai/flashinfer/commit/7a7ad4659a7b7e1a78eebbb9bb8af6c21130f14e))
64
+ * speedup jit compilation of prefill attention kernels ([#632](https://github.com/flashinfer-ai/flashinfer/issues/632)) ([a059586](https://github.com/flashinfer-ai/flashinfer/commit/a0595866db384b4a782c1ec70df72251b17de287))
65
+ * use cuda-core implementation for io-bound block-sparse attention ([#560](https://github.com/flashinfer-ai/flashinfer/issues/560)) ([3fbf028](https://github.com/flashinfer-ai/flashinfer/commit/3fbf02800e6166d2bf9e1de1cfa6ac826fa4618d))
66
+
67
+ ## [0.1.6](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.5...v0.1.6) (2024-08-27)
68
+
69
+ ### SM75 Support
70
+
71
+ Starting from [0.1.6](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.5...v0.1.6), our pre-built wheels include experimental support sm75 (Turing architecture GPUs such as Tesla T4, Quadro RTX 6000 and RTX 2080).
72
+
73
+ ### API Changes
74
+
75
+ #### `plan`/`run`
76
+
77
+ Since [0.1.6](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.5...v0.1.6) on, `begin_forward`/`forward`/`end_forward` APIs are replaced with the new `plan`/`run` API.
78
+ - `forward` is renamed to `run`, which is more precise and consistent with the naming convention of cutlass's python API.
79
+ - `begin_forward` is renamed to `plan`, which is consistent with the naming convention of nvmath API.
80
+ - `end_forward` is deprecated and has no effect after this PR.
81
+
82
+ There is some slight difference between the old `forward` and the new `run` API:
83
+ - All extra arguments such as `causal` and `logits_soft_cap` will be provided in `plan` (previously `begin_forward`) API, and cached until next `plan` call, and we only need to provide query and KV-Cache tensors in `run` API.
84
+
85
+ The old `begin_forward`/`forward`/`end_forward` APIs are still functional, but we will gradually deprecate them in future releases.
86
+
87
+ Check [#466](https://github.com/flashinfer-ai/flashinfer/pull/466) for more details.
88
+
89
+ #### `MultiLevelCascadeAttentionWrapper`
90
+
91
+ Since [0.1.6](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.5...v0.1.6) on, we introduce a new `MultiLevelCascadeAttentionWrapper` API for cascade inference,
92
+ which supports multi-level cascade inference where all levels' KV-Cache can be managed in a unified Paged KV-Cache.
93
+
94
+ See [documentation](https://docs.flashinfer.ai/api/python/cascade.html#flashinfer.cascade.MultiLevelCascadeAttentionWrapper) and [tutorial](https://docs.flashinfer.ai/tutorials/kv_layout.html#multi-level-cascade-inference-data-layout) on API usage and layout explanation.
95
+
96
+ The old `BatchDecodeWithSharedPrefixPagedKVCacheWrapper` and `BatchPrefillWithSharedPrefixPagedKVCacheWrapper` will be deprecated in future releases.
97
+
98
+ ### Features
99
+
100
+ * sm75 support ([#448](https://github.com/flashinfer-ai/flashinfer/pull/448), [#449](https://github.com/flashinfer-ai/flashinfer/pull/449))
101
+ * add `MultiLevelCascadeAttentionWrapper` API ([#462](https://github.com/flashinfer-ai/flashinfer/issues/462)) ([1e37989](https://github.com/flashinfer-ai/flashinfer/commit/1e379898a589cdd4ff18a4621fcbe18d63501545))
102
+ * add accept num, emit num metric for ChainSpeculativeSampling ([#450](https://github.com/flashinfer-ai/flashinfer/issues/450)) ([fa38b5e](https://github.com/flashinfer-ai/flashinfer/commit/fa38b5e34b9591bd5ab07186bea229ea95307755))
103
+ * support bmm fp8 ([#469](https://github.com/flashinfer-ai/flashinfer/issues/469)) ([f1c0b68](https://github.com/flashinfer-ai/flashinfer/commit/f1c0b68d0f4a77ff3bf705307b3529b996fc9826))
104
+
105
+ ### Refactor
106
+
107
+ * refactor: replace `begin_forward`/`forward`/`end_forward` with `plan`/`run` [#466](https://github.com/flashinfer-ai/flashinfer/pull/466)
108
+
109
+ ### Misc
110
+
111
+ * misc: improve error handling of sampling kernels ([#456](https://github.com/flashinfer-ai/flashinfer/pull/456)) ([0dce178](https://github.com/flashinfer-ai/flashinfer/commit/0dce178389e5e85b1d40212b1d12d1754304e46))
112
+
113
+ ### Performance Improvements
114
+
115
+ * slight optimization on f16-&gt;f8 fragment layout swizzling ([#453](https://github.com/flashinfer-ai/flashinfer/issues/453)) ([0d61871](https://github.com/flashinfer-ai/flashinfer/commit/0d618712faff20a84bbd513d02ac01e16be19306))
116
+ * slight optimization on fragment layout swizzle ([#458](https://github.com/flashinfer-ai/flashinfer/issues/458)) ([7c397cb](https://github.com/flashinfer-ai/flashinfer/commit/7c397cbd81d4fa5da8aef9f105576dbe67f6c22b))
117
+ * use persistent kernel for merging attention states ([#459](https://github.com/flashinfer-ai/flashinfer/issues/459)) ([be6bf5b](https://github.com/flashinfer-ai/flashinfer/commit/be6bf5bb26f1f1b3edf094d903544600c574ee09))
118
+
119
+ ### Acknowledgement
120
+
121
+ We thank [@LiuXiaoxuanPKU](https://github.com/LiuXiaoxuanPKU) on enhance of speculative sampling operator, [@merrymercy](https://github.com/merrymercy) on API change suggestion and [@zhyncs](https://github.com/zhyncs) on integrating fp8 BMM cublas implementation.
122
+
123
+ ## [0.1.5](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.4...v0.1.5) (2024-08-13)
124
+
125
+
126
+ ### Bugfix
127
+
128
+ * resolve cu121 compile wired issue ([#446](https://github.com/flashinfer-ai/flashinfer/issues/446)) ([5f0159e](https://github.com/flashinfer-ai/flashinfer/commit/5f0159e6abeb7308d965bb1b9aef05547b8a57b3))
129
+ * Fix PagedPrefill python api and some typos ([#441](https://github.com/flashinfer-ai/flashinfer/pull/441)) ([3fff008](https://github.com/flashinfer-ai/flashinfer/commit/3fff008dc9af56c325d9c487bddf69ff014f3989))
130
+ * fix prefill kernels' lse result for empty kv-cache ([#440](https://github.com/flashinfer-ai/flashinfer/pull/440)) ([6ac28f4](https://github.com/flashinfer-ai/flashinfer/commit/6ac28f4dd3a9a34a2b4abcbe0a815fc59a2d74ad))
131
+
132
+ ### Features
133
+
134
+ * decouple float and int workspace buffer ([#442](https://github.com/flashinfer-ai/flashinfer/issues/442)) ([a7ee566](https://github.com/flashinfer-ai/flashinfer/commit/a7ee5662bf967ab1ee16910c73761d326fbeb9a0))
135
+
136
+
137
+ ### Performance Improvements
138
+
139
+ * faster fp8-&gt;fp16 dequantization for pre sm_90 arch ([#439](https://github.com/flashinfer-ai/flashinfer/issues/439)) ([c93f647](https://github.com/flashinfer-ai/flashinfer/commit/c93f647a0dd6b58c9ac20b39438316202358463c))
140
+
141
+ ### Acknowledgement
142
+
143
+ We thank contributions and feedbacks from the community: [@comaniac](https://github.com/comaniac), [@hnyls2002](https://github.com/hnyls2002), [@jianfei-wangg](https://github.com/jianfei-wangg), [@Yard1](https://github.com/Yard1).
144
+
145
+
146
+
147
+ ## [0.1.4](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.3...v0.1.4) (2024-08-09)
148
+
149
+
150
+ ### Features
151
+
152
+ * append attention kernels for fp8 kv-cache ([#420](https://github.com/flashinfer-ai/flashinfer/issues/420)) ([906c2f5](https://github.com/flashinfer-ai/flashinfer/commit/906c2f5df3b35df45a4fb2614815308b662099ea))
153
+ * support min_p sampling ([#422](https://github.com/flashinfer-ai/flashinfer/pull/422)) ([d52f2da](https://github.com/flashinfer-ai/flashinfer/commit/d52f2da6825f0fd7f614bf3a2db3b75c8fef961b))
154
+ * deterministic sampling ([#417](https://github.com/flashinfer-ai/flashinfer/issues/417)) ([0dd801d](https://github.com/flashinfer-ai/flashinfer/commit/0dd801d2027af89f3603cbbf68a76e9503bb2f57))
155
+ * more sampling operator options ([#431](https://github.com/flashinfer-ai/flashinfer/issues/431)) ([68df9c4](https://github.com/flashinfer-ai/flashinfer/commit/68df9c487e672b4a4ea3be97aed63a48aac5945b))
156
+ * support fused add rmsnorm ([#419](https://github.com/flashinfer-ai/flashinfer/issues/419)) ([b781513](https://github.com/flashinfer-ai/flashinfer/commit/b78151383d4a75094195cba29aba45d694d5fdb7))
157
+ * support fused silu mul ([#427](https://github.com/flashinfer-ai/flashinfer/issues/427)) ([ea0ba9a](https://github.com/flashinfer-ai/flashinfer/commit/ea0ba9a51238597bd7863b6e3c9bfda574df4df5))
158
+
159
+ ### Bug Fixes
160
+
161
+ * fix dispatch fp16 type when enable fp8 ([#430](https://github.com/flashinfer-ai/flashinfer/pull/430)) ([daa5566](https://github.com/flashinfer-ai/flashinfer/commit/daa556697fed849810745f0aae0015d8e4460050))
162
+ * improve numerical stability of sampling kernels ([#429](https://github.com/flashinfer-ai/flashinfer/pull/429)) ([898d8ea](https://github.com/flashinfer-ai/flashinfer/commit/898d8ea8a21f5850288bc4a860399678131a2d30))
163
+
164
+ ### Other improvements
165
+
166
+ * break up `_kernels` into multiple modules ([#428](https://github.com/flashinfer-ai/flashinfer/pull/428)) ([8e482d9](https://github.com/flashinfer-ai/flashinfer/commit/8e482d92cb0ad046ec5f57509f9473e76bd668fe))
167
+
168
+ ### Acknowledgement
169
+
170
+ We thank contributions and feedbacks from the community: [@comaniac](https://github.com/comaniac), [@esmeetu](https://github.com/esmeetu), [@LiuXiaoxuanPKU](https://github.com/LiuXiaoxuanPKU), [@peng1999](https://github.com/peng1999), [@xslingcn](https://github.com/xslingcn), [@Yard1](https://github.com/Yard1), [@zhyncs](https://github.com/zhyncs).
171
+
172
+
173
+ ## [0.1.3](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.2...v0.1.3) (2024-07-31)
174
+
175
+ ### Bugfix
176
+
177
+ * bugfix: Fix cudagraph mode of BatchPrefillWithRaggedKVCacheWrapper ([#412](https://github.com/flashinfer-ai/flashinfer/pull/412)) ([9907bc](https://github.com/flashinfer-ai/flashinfer/commit/9907bc163eec7677870014b6ed5bb1789cc584f0))
178
+ * fix cu118 cub usage for sampling kernels ([#410](https://github.com/flashinfer-ai/flashinfer/pull/410)) ([58d359](https://github.com/flashinfer-ai/flashinfer/commit/58d35930740083f27e65c9818ab857f9f4880aff))
179
+
180
+ ### Misc
181
+
182
+ * enhance allocator error info and add shape check for prefill begin forward functions ([#413](https://github.com/flashinfer-ai/flashinfer/pull/413)) ([5e36c5](https://github.com/flashinfer-ai/flashinfer/commit/5e36c527bb10c9331a17d4ecd609120406280979))
183
+
184
+ ## [0.1.2](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.1...v0.1.2) (2024-07-29)
185
+
186
+ ### Bugfix
187
+ * Fix the sampling kernel bug for cu118 ([#386](https://github.com/flashinfer-ai/flashinfer/pull/386), [#387](https://github.com/flashinfer-ai/flashinfer/pull/387)) ([0cd499](https://github.com/flashinfer-ai/flashinfer/commit/0cd49949e6c05a0c8f63d050ff96c8f6168cf914), [dc3f18](https://github.com/flashinfer-ai/flashinfer/commit/dc3f184eda83b9feb5c901606b3d8aede23a4a5f))
188
+
189
+ ### Features
190
+
191
+ * add llama 3.1 style rope ([#401](https://github.com/flashinfer-ai/flashinfer/issues/401)) ([4c89dec](https://github.com/flashinfer-ai/flashinfer/commit/4c89decadc8ae9f261cae97c350064156e66bc09))
192
+ * non-inplace rope operators ([#405](https://github.com/flashinfer-ai/flashinfer/issues/405)) ([74ffba1](https://github.com/flashinfer-ai/flashinfer/commit/74ffba1d1b946fcd3536b7637a4e1a999e5a5d3e))
193
+ * sliding window attention ([#406](https://github.com/flashinfer-ai/flashinfer/issues/406)) ([28cffd3](https://github.com/flashinfer-ai/flashinfer/commit/28cffd366888649a1e9d871efec32e67b88070cb))
194
+ * support non-contiguous (packed) input for prefill kernels ([#404](https://github.com/flashinfer-ai/flashinfer/issues/404)) ([68c3719](https://github.com/flashinfer-ai/flashinfer/commit/68c3719113f90bed5bf1a5d4990f8e2c0b0f5fd3))
195
+
196
+
197
+ ### Performance Improvements
198
+
199
+ * slight optimization on merge states ([#313](https://github.com/flashinfer-ai/flashinfer/issues/313)) ([701c813](https://github.com/flashinfer-ai/flashinfer/commit/701c813cb1266f8dd2b93d17978d35fd6fb975dd))
200
+
201
+ ## [0.1.1](https://github.com/flashinfer-ai/flashinfer/compare/v0.1.0...v0.1.1) (2024-07-20)
202
+
203
+ ### Bugfix
204
+
205
+ * fix the invalid kernel configuration for architectures with small shared memory size ([#385](https://github.com/flashinfer-ai/flashinfer/pull/385)) ([cdac57](https://github.com/flashinfer-ai/flashinfer/commit/cdac577011e8ab50aa26dfef0cecf77d92d2f804))
206
+
207
+ ### Features
208
+
209
+ * expose decoupled kv-cache to pytorch api ([#383](https://github.com/flashinfer-ai/flashinfer/issues/383)) ([457a0ae](https://github.com/flashinfer-ai/flashinfer/commit/457a0ae0c8a43bd95a803167e28be19555a2ebf8))
210
+
211
+
212
+ ### Performance Improvements
213
+
214
+ * use stmatrix in epilogue for sm90+ ([#380](https://github.com/flashinfer-ai/flashinfer/issues/380)) ([c6f20d1](https://github.com/flashinfer-ai/flashinfer/commit/c6f20d1406a3a8c4f134c4a764d16e157a184338))
215
+
216
+ ## [0.1.0](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.9...v0.1.0) (2024-07-17)
217
+
218
+
219
+ ### Features
220
+
221
+ * Add mask to `merge_state_in_place` ([#372](https://github.com/flashinfer-ai/flashinfer/issues/372)) ([e14fa81](https://github.com/flashinfer-ai/flashinfer/commit/e14fa8194cfc09c271e6f2c102060698f18297a9))
222
+ * expose pytorch api for block sparse attention ([#375](https://github.com/flashinfer-ai/flashinfer/issues/375)) ([4bba6fa](https://github.com/flashinfer-ai/flashinfer/commit/4bba6fa3aa848d2e43248bca8d959fd58a27cfa4))
223
+ * Fused GPU sampling kernel for joint top-k & top-p sampling ([#374](https://github.com/flashinfer-ai/flashinfer/issues/374)) ([6e028eb](https://github.com/flashinfer-ai/flashinfer/commit/6e028eb997173658832a66c7480cc9224d637a15))
224
+
225
+ ## [0.0.9](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.8...v0.0.9) (2024-07-12)
226
+
227
+ ### Bugfix
228
+
229
+ * fix the decode kernel segfault in cudagraph mode ([#368](https://github.com/flashinfer-ai/flashinfer/pull/368))([c69cfa](https://github.com/flashinfer-ai/flashinfer/commit/c69cfabc540e4a7edd991713df10d575ff3b0c21))
230
+ - fix decode kernels output for empty kv cache ([#363](https://github.com/flashinfer-ai/flashinfer/pull/363))([ac72b1](https://github.com/flashinfer-ai/flashinfer/commit/ac72b1cc14a6474d601f371c8d69e2600ac28d2f))
231
+ - check gpu id in PyTorch APIs and use input tensor's gpu default stream ([#361](https://github.com/flashinfer-ai/flashinfer/pull/361))([1b84fa](https://github.com/flashinfer-ai/flashinfer/commit/1b84fab3e4f53fb4fa26952fdb46fa8018634057))
232
+
233
+ ### Performance Improvements
234
+
235
+ * accelerate alibi ([#365](https://github.com/flashinfer-ai/flashinfer/issues/365)) ([4f0a9f9](https://github.com/flashinfer-ai/flashinfer/commit/4f0a9f987ad2036f3c466257459de823be85fcc6))
236
+ * accelerate gqa performance ([#356](https://github.com/flashinfer-ai/flashinfer/issues/356)) ([e56ddad](https://github.com/flashinfer-ai/flashinfer/commit/e56ddadf4bdbb164c3f1a03f9f69cb8a25621ef5))
237
+ * Optimize tensor conversions in C++ code to avoid unnecessary copies ([#366](https://github.com/flashinfer-ai/flashinfer/issues/366)) ([1116237](https://github.com/flashinfer-ai/flashinfer/commit/1116237ac1e5690cf404841327b58b1d268d9951))
238
+
239
+ ### Acknowledgement
240
+
241
+ We thank [@Yard1](https://github.com/Yard1), [@Ying1123](https://github.com/Ying1123) and [@zhyncs](https://github.com/zhyncs) for their contributions.
242
+
243
+ ## [0.0.8](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.7...v0.0.8) (2024-07-03)
244
+
245
+ ### Bugfix
246
+
247
+ * fix prefill/append kernel behavior for empty kv-cache ([#353](https://github.com/flashinfer-ai/flashinfer/pull/353)) ([7adc8c](https://github.com/flashinfer-ai/flashinfer/commit/7adc8cf01a029645307c321a7754d0b0a4f0f4de))
248
+ * fix decode attention kernel with logits cap ([#350](https://github.com/flashinfer-ai/flashinfer/pull/350)) ([f5f7a2](https://github.com/flashinfer-ai/flashinfer/commit/f5f7a2a23249fd0be5b30fd8fb3957ac3bb527ca))
249
+
250
+
251
+ ## [0.0.7](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.6...v0.0.7) (2024-06-28)
252
+
253
+ ### Breaking Changes
254
+ * `batch_decode_with_padded_kv_cache` was removed, we encourage user to use `BatchDecodeWithPagedKVCacheWrapper` instead. ([#343](https://github.com/flashinfer-ai/flashinfer/pull/343))
255
+
256
+ ### Bugfix
257
+
258
+ * fix the `forward_return_lse` function in `BatchPrefillWithRaggedKVCache` class ([#337](https://github.com/flashinfer-ai/flashinfer/pull/337))
259
+ * fix the scheduler behavior of large page size ([#333](https://github.com/flashinfer-ai/flashinfer/pull/333))
260
+
261
+ ### Features
262
+
263
+ * customize `logits_soft_cap` value ([#339](https://github.com/flashinfer-ai/flashinfer/issues/339)) ([a2498f5](https://github.com/flashinfer-ai/flashinfer/commit/a2498f511b354ce049bda6be320a24b73c719be3))
264
+
265
+
266
+ ### Performance Improvements
267
+
268
+ * change minimal `kv_chunk_size` back to 128 ([#329](https://github.com/flashinfer-ai/flashinfer/issues/329)) ([f237f5f](https://github.com/flashinfer-ai/flashinfer/commit/f237f5f80199e2c433fcca750713c6e774693b58))
269
+ * more options for kv tile size ([#336](https://github.com/flashinfer-ai/flashinfer/issues/336)) ([bf2a6c7](https://github.com/flashinfer-ai/flashinfer/commit/bf2a6c7c05a82e0ee0ea04381d04b84327355b69))
270
+
271
+ ## [0.0.6](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.5...v0.0.6) (2024-06-21)
272
+
273
+ ### Bugfix
274
+
275
+ Fix some bug in v0.0.5 that might lead to crashes and instable performance.
276
+
277
+ ### Performance Improvements
278
+
279
+ * use 1x4 warp layout for small query length ([#322](https://github.com/flashinfer-ai/flashinfer/issues/322)) ([4e89b4d](https://github.com/flashinfer-ai/flashinfer/commit/4e89b4dfdeb0c07b290ace9f82edf31e63136cfd))
280
+
281
+ ## [0.0.5](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.4...v0.0.5) (2024-06-20)
282
+
283
+ ### Highlights
284
+
285
+ * Support any GQA group size support for tensor-cores kernels.
286
+ * Support any page size support for tensor-cores kernels.
287
+ * Support CUDA-Graph for prefill/decode APIs.
288
+ * Add an option to accelerate decode kernels with Tensor Cores.
289
+ * Support custom attention mask. (https://docs.flashinfer.ai/tutorials/kv_layout.html#mask-layout-2d-ragged-tensor)
290
+ * Support logits cap in Grok-1 models.
291
+ * Fused GPU-sampling kernels: top-p, top-k, speculative verification. (https://docs.flashinfer.ai/api/python/sampling.html)
292
+ * PyTorch wrapper of group-gemm cutlass kernels. (https://docs.flashinfer.ai/api/python/group_gemm.html)
293
+
294
+ ### Acknowledgement
295
+
296
+ We thank [@ibsidorenko](https://github.com/ibsidorenko), [@LiuXiaoxuanPKU](https://github.com/LiuXiaoxuanPKU), [@Yard1](https://github.com/Yard1) [@AgrawalAmey](https://github.com/AgrawalAmey), [@xuzhenqi](https://github.com/xuzhenqi), [@mgerstgrasser](https://github.com/mgerstgrasser), [@esmeetu](https://github.com/esmeetu), [@yz-tang](https://github.com/yz-tang), [@HSQ79815](https://github.com/HSQ79815), [@Qubitium](https://github.com/Qubitium), [@shreygupta2809](https://github.com/shreygupta2809), [@sighingnow](https://github.com/sighingnow), [@vinx13](https://github.com/vinx13),
297
+ [@tqchen](https://github.com/tqchen), [@merrymercy](https://github.com/merrymercy), [@comaniac](https://github.com/comaniac) and many others for their contributions and helpful discussions for 0.0.5 release.
298
+
299
+ ### Refactor
300
+
301
+ * support any GQA group size for tensor-cores kernels ([#301](https://github.com/flashinfer-ai/flashinfer/pull/301)) ([c111ca](https://github.com/flashinfer-ai/flashinfer/commit/c111ca630d57bc4c301fff2599253a5d782a95c8))
302
+ * support any page size for tensor-cores kernels ([#306](https://github.com/flashinfer-ai/flashinfer/pull/306)) ([82fd8c](https://github.com/flashinfer-ai/flashinfer/commit/82fd8c7ee2d569b1876d547f73c7ad4b085a771e))
303
+
304
+
305
+ ### Features
306
+
307
+ * add `use_tensor_cores` option to decode kernels to accelerate GQA ([#317](https://github.com/flashinfer-ai/flashinfer/issues/317)) ([3b50dd5](https://github.com/flashinfer-ai/flashinfer/commit/3b50dd59b0e1f23905e583d5af069e43ff5e15a4))
308
+ * add group gemm operators ([#282](https://github.com/flashinfer-ai/flashinfer/issues/282)) ([e08ba42](https://github.com/flashinfer-ai/flashinfer/commit/e08ba4226f694d5469cce4233f1854c965f05197))
309
+ * initial support of distributed operators ([#289](https://github.com/flashinfer-ai/flashinfer/issues/289)) ([03553da](https://github.com/flashinfer-ai/flashinfer/commit/03553dac1dffff9a6867be0d5676d69d6eeae18c))
310
+ * initial support of logits hook ([#298](https://github.com/flashinfer-ai/flashinfer/issues/298)) ([ab1e2ad](https://github.com/flashinfer-ai/flashinfer/commit/ab1e2ad89f27319f5b4874c5e8b526c1cae43598))
311
+ * Separate Q and KV dtypes for decode ([#286](https://github.com/flashinfer-ai/flashinfer/issues/286)) ([5602659](https://github.com/flashinfer-ai/flashinfer/commit/5602659d8cd0616ec8214d056ea5c4078b21342b))
312
+ * support cuda graph for batched multi-query(prefill/append) attention ([#275](https://github.com/flashinfer-ai/flashinfer/issues/275)) ([83ceb67](https://github.com/flashinfer-ai/flashinfer/commit/83ceb67a5773b0447f5f0344411abfdbc53cf5f4))
313
+ * support cuda graph for batched multi-query(prefill/append) attention ([#277](https://github.com/flashinfer-ai/flashinfer/issues/277)) ([24cc583](https://github.com/flashinfer-ai/flashinfer/commit/24cc583cb6b1a205aa8aad53f56472305b73f5f4))
314
+ * support custom attention mask in prefill/append attention kernels ([#266](https://github.com/flashinfer-ai/flashinfer/issues/266)) ([7304282](https://github.com/flashinfer-ai/flashinfer/commit/7304282a8068942100f8e59adff533ce28f4d3e5))
315
+ * fused speculative sampilng kernels ([#259](https://github.com/flashinfer-ai/flashinfer/pull/259)) ([cea2bb](https://github.com/flashinfer-ai/flashinfer/commit/cea2bb9a836ba6d34d6667b8983ad79fa35cf933))
316
+ * expose sampling APIs in pytorch ([#238](https://github.com/flashinfer-ai/flashinfer/pull/238)) ([092902](https://github.com/flashinfer-ai/flashinfer/commit/0929023e5325a30357750eacec27b0d3a20d1254))
317
+
318
+
319
+ ### Performance Improvements
320
+
321
+ * initial cuda graph support ([#256](https://github.com/flashinfer-ai/flashinfer/issues/256)) ([7e9cc7f](https://github.com/flashinfer-ai/flashinfer/commit/7e9cc7ff42ca283c317061a877305d09a395fad2))
322
+ * split kv-cache for prefill/append kernels ([#310](https://github.com/flashinfer-ai/flashinfer/issues/310)) ([f0bb0a3](https://github.com/flashinfer-ai/flashinfer/commit/f0bb0a3a723cbe1a138c604680e6b573d877f210))
323
+ * use packed bit array for attention mask ([#308](https://github.com/flashinfer-ai/flashinfer/issues/308)) ([3d43dc9](https://github.com/flashinfer-ai/flashinfer/commit/3d43dc9dc1a2ae804eaa7e40b4555e471fd03fe3))
324
+
325
+ ## [0.0.4](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.3...v0.0.4) (2024-05-01)
326
+
327
+
328
+ ### Features
329
+
330
+ * pytorch 2.3 support
331
+ * gpu sampling kernels (top-p, top-k)
332
+ * more gqa group sizes
333
+ * add mma instructions for fp8 ([#179](https://github.com/flashinfer-ai/flashinfer/issues/179)) ([d305798](https://github.com/flashinfer-ai/flashinfer/commit/d3057983e6d47e857ec3956de94eb11f62d9d83e))
334
+ * mma rowsum for fp8 ([#180](https://github.com/flashinfer-ai/flashinfer/issues/180)) ([5af935c](https://github.com/flashinfer-ai/flashinfer/commit/5af935ca783d3487034110902c6406089c31acbc))
335
+ * support any num_heads for get_alibi_slope ([#200](https://github.com/flashinfer-ai/flashinfer/issues/200)) ([b217a6f](https://github.com/flashinfer-ai/flashinfer/commit/b217a6fefb7bd091469467d32b8aedde4a25cad7))
336
+
337
+ ### Bug Fixes
338
+
339
+ * fix python package dispatch error message ([#182](https://github.com/flashinfer-ai/flashinfer/issues/182)) ([8eed01c](https://github.com/flashinfer-ai/flashinfer/commit/8eed01c094ceb47375a1d4da8748c43a2947e959))
340
+
341
+ ## [0.0.3](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.2...v0.0.3) (2024-03-08)
342
+
343
+
344
+ ### Features
345
+
346
+ * adding `sm_scale` field for all attention APIs ([#145](https://github.com/flashinfer-ai/flashinfer/issues/145)) ([85d4018](https://github.com/flashinfer-ai/flashinfer/commit/85d4018de4766dafd1be60cf6d953cd9236a4058))
347
+ * enable `head_dim=256` for attention kernels ([#132](https://github.com/flashinfer-ai/flashinfer/issues/132)) ([0372acc](https://github.com/flashinfer-ai/flashinfer/commit/0372acc44d0d393af7fd9fb3dcef0ff25953d4e1))
348
+ * pytorch api of fp8 kv-cache ([#156](https://github.com/flashinfer-ai/flashinfer/issues/156)) ([66ee066](https://github.com/flashinfer-ai/flashinfer/commit/66ee06683eaea7efe724c46df528ae47aa75eca2))
349
+ * support ALiBi ([#146](https://github.com/flashinfer-ai/flashinfer/issues/146)) ([383518b](https://github.com/flashinfer-ai/flashinfer/commit/383518bdf1824f68d33a2eaafd72a780f195bdd4))
350
+
351
+
352
+ ### Bug Fixes
353
+
354
+ * bugfix to pr 135 ([#136](https://github.com/flashinfer-ai/flashinfer/issues/136)) ([3d55c71](https://github.com/flashinfer-ai/flashinfer/commit/3d55c71a62052c590c130897d3a3db49b14fcc34))
355
+ * fix bugs introduced in [#132](https://github.com/flashinfer-ai/flashinfer/issues/132) ([#135](https://github.com/flashinfer-ai/flashinfer/issues/135)) ([9b7b0b9](https://github.com/flashinfer-ai/flashinfer/commit/9b7b0b913e1fbef7aac6351109911c7ac08a8904))
356
+ * fix FindThrust.cmake ([#161](https://github.com/flashinfer-ai/flashinfer/issues/161)) ([30fa584](https://github.com/flashinfer-ai/flashinfer/commit/30fa5843aeb1ac48816967a63db140cff6044e13))
357
+
358
+
359
+ ### Misc
360
+ * add stream argument in BeginForwardFunction of TVMWrapper ([#164](https://github.com/flashinfer-ai/flashinfer/pull/164)) ([fabfcb5](https://github.com/flashinfer-ai/flashinfer/tree/fabfcb5751dcc003137a5a7d2d5514f3afe2e302))
361
+
362
+
363
+ ### Performance Improvements
364
+
365
+ * multiple q by sm_scale in decode kernels ([#144](https://github.com/flashinfer-ai/flashinfer/issues/144)) ([660c559](https://github.com/flashinfer-ai/flashinfer/commit/660c559348ba9710d0d81b53f710f7e4951eee2b))
366
+
367
+ ## [0.0.2](https://github.com/flashinfer-ai/flashinfer/compare/v0.0.1...v0.0.2) (2024-02-17)
368
+
369
+
370
+ ### Bug Fixes
371
+
372
+ * add python 3.9 wheels to ci/cd ([#114](https://github.com/flashinfer-ai/flashinfer/issues/114)) ([2d8807d](https://github.com/flashinfer-ai/flashinfer/commit/2d8807d1fb3359ace8a03b73c92bd0679b9d4b33))
373
+ * version names cannot include multiple `+` ([#118](https://github.com/flashinfer-ai/flashinfer/issues/118)) ([af6bd10](https://github.com/flashinfer-ai/flashinfer/commit/af6bd10db03fa1353699631f6b31eee52d343569))
374
+ * version naming issue ([#117](https://github.com/flashinfer-ai/flashinfer/issues/117)) ([c849a90](https://github.com/flashinfer-ai/flashinfer/commit/c849a90e6b6756a2ca87733782607796d8c7b85a))
sglang_repo/sgl-kernel/3rdparty/flashinfer/LICENSE ADDED
@@ -0,0 +1,223 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
202
+
203
+ -------------------------------------------------------------------------------------------------
204
+ Some of the code in this project is adapted from other open-source projects with different
205
+ licenses. This product also bundles some third-party components under other open source licenses.
206
+ This section summarizes those components and their licenses.
207
+ See licenses/ for text of these licenses.
208
+
209
+ BSD 3-Clause License
210
+ --------------------
211
+
212
+ include/flashinfer/attention/hopper/epilogue.cuh
213
+ include/flashinfer/attention/hopper/mainloop.cuh
214
+ include/flashinfer/attention/hopper/kernel_traits.cuh
215
+ include/flashinfer/attention/hopper/named_barrier.cuh
216
+ include/flashinfer/attention/hopper/tile_scheduler.cuh
217
+ include/flashinfer/attention/hopper/utils.cuh
218
+
219
+ BSD 3-Clause "New" License
220
+ --------------------------
221
+
222
+ 3rdparty/cutlass
223
+ include/flashinfer/attention/hopper/block_sparse_gather.cuh
sglang_repo/sgl-kernel/3rdparty/flashinfer/README.md ADDED
@@ -0,0 +1,169 @@
1
+ <p align="center">
2
+ <picture>
3
+ <source media="(prefers-color-scheme: dark)" srcset="https://github.com/flashinfer-ai/web-data/blob/main/logo/FlashInfer-black-background.png?raw=true">
4
+ <img alt="FlashInfer" src="https://github.com/flashinfer-ai/web-data/blob/main/logo/FlashInfer-white-background.png?raw=true" width=55%>
5
+ </picture>
6
+ </p>
7
+ <h1 align="center">
8
+ Kernel Library for LLM Serving
9
+ </h1>
10
+
11
+ <p align="center">
12
+ | <a href="https://flashinfer.ai"><b>Blog</b></a> | <a href="https://docs.flashinfer.ai"><b>Documentation</b></a> | <a href="https://join.slack.com/t/flashinfer/shared_invite/zt-2r93kj2aq-wZnC2n_Z2~mf73N5qnVGGA"><b>Slack</b></a>| <a href="https://github.com/orgs/flashinfer-ai/discussions"><b>Discussion Forum</b></a> |
13
+ </p>
14
+
15
+ [![Release](https://github.com/flashinfer-ai/flashinfer/actions/workflows/release_wheel.yml/badge.svg)](https://github.com/flashinfer-ai/flashinfer/actions/workflows/release_wheel.yml)
16
+ [![Documentation](https://github.com/flashinfer-ai/flashinfer/actions/workflows/build-doc.yml/badge.svg)](https://github.com/flashinfer-ai/flashinfer/actions/workflows/build-doc.yml)
17
+
18
+
19
+ FlashInfer is a library and kernel generator for Large Language Models that provides high-performance implementations of LLM GPU kernels such as FlashAttention, SparseAttention, PageAttention, Sampling, and more. FlashInfer focuses on LLM serving and inference, and delivers state-of-the-art performance across diverse scenarios.
20
+
21
+ Check our [v0.2 release blog](https://flashinfer.ai/2024/12/16/flashinfer-v02-release.html) for new features!
22
+
23
+ The core features of FlashInfer include:
24
+ 1. **Efficient Sparse/Dense Attention Kernels**: Efficient single/batch attention for sparse (paged) and dense KV storage on CUDA Cores and Tensor Cores (both FA2 & FA3 templates). The vector-sparse attention can achieve 90% of the bandwidth of dense kernels with the same problem size.
25
+ 2. **Load-Balanced Scheduling**: FlashInfer decouples the `plan`/`run` stages of attention computation: the computation of variable-length inputs is scheduled in the `plan` stage to alleviate load-imbalance issues.
26
+ 3. **Memory Efficiency**: FlashInfer offers [Cascade Attention](https://docs.flashinfer.ai/api/cascade.html#flashinfer.cascade.MultiLevelCascadeAttentionWrapper) for hierarchical KV-Cache, implements Head-Query fusion to accelerate Grouped-Query Attention, and provides efficient kernels for low-precision attention and fused-RoPE attention on compressed KV-Cache.
27
+ 4. **Customizable Attention**: Bring your own attention variants through JIT-compilation.
28
+ 5. **CUDAGraph and torch.compile Compatibility**: FlashInfer kernels can be captured by CUDAGraphs and torch.compile for low-latency inference.
29
+ 6. **Efficient LLM-specific Operators**: High-performance [fused kernels for Top-P, Top-K/Min-P sampling](https://docs.flashinfer.ai/api/sampling.html) without the need for sorting.
30
+
31
+ FlashInfer supports PyTorch, TVM, and C++ (header-only) APIs, and can be easily integrated into existing projects.
32
+
33
+ ## News
34
+ - [Dec 16, 2024] [Blog Post](https://flashinfer.ai/2024/12/16/flashinfer-v02-release.html) FlashInfer 0.2 - Efficient and Customizable Kernels for LLM Inference Serving
35
+ - [Sept 2024] We've launched a [Slack](https://join.slack.com/t/flashinfer/shared_invite/zt-2r93kj2aq-wZnC2n_Z2~mf73N5qnVGGA) workspace for FlashInfer users and developers. Join us for timely support, discussions, updates, and knowledge sharing!
36
+ - [Jan 31, 2024] [Blog Post](https://flashinfer.ai/2024/01/08/cascade-inference.html) Cascade Inference: Memory-Efficient Shared Prefix Batch Decoding
37
+ - [Jan 31, 2024] [Blog Post](https://flashinfer.ai/2024/01/03/introduce-flashinfer.html) Accelerating Self-Attentions for LLM Serving with FlashInfer
38
+
39
+ ## Getting Started
40
+
41
+ Using our PyTorch API is the easiest way to get started:
42
+
43
+ ### Installation
44
+
45
+ We provide prebuilt wheels for Linux. You can install FlashInfer with the following command:
46
+
47
+ ```bash
48
+ # For CUDA 12.4 & torch 2.4
49
+ pip install flashinfer -i https://flashinfer.ai/whl/cu124/torch2.4
50
+ # For other CUDA & torch versions, please check https://docs.flashinfer.ai/installation.html
51
+ ```
52
+
53
+ We also offer nightly-built wheels to try the latest features from the main branch:
54
+
55
+ ```bash
56
+ pip install flashinfer -i https://flashinfer.ai/whl/nightly/cu124/torch2.4
57
+ ```
58
+
59
+ Alternatively, you can build FlashInfer from source:
60
+
61
+ ```bash
62
+ git clone https://github.com/flashinfer-ai/flashinfer.git --recursive
63
+ cd flashinfer
64
+ pip install -e . -v
65
+ ```
66
+
67
+ By default, FlashInfer uses Just-In-Time (JIT) compilation for its kernels. To pre-compile essential kernels, set the environment variable `FLASHINFER_ENABLE_AOT=1` before running the installation command:
68
+
69
+ ```bash
70
+ FLASHINFER_ENABLE_AOT=1 pip install -e . -v
71
+ ```
72
+
73
+ For more details, refer to the [Install from Source documentation](https://docs.flashinfer.ai/installation.html#install-from-source).
74
+
75
+ ### Trying it out
76
+
77
+ Below is a minimal example of using FlashInfer's single-request decode/append/prefill attention kernels:
78
+
79
+ ```python
80
+ import torch
81
+ import flashinfer
82
+
83
+ kv_len = 2048
84
+ num_kv_heads = 32
85
+ head_dim = 128
86
+
87
+ k = torch.randn(kv_len, num_kv_heads, head_dim).half().to(0)
88
+ v = torch.randn(kv_len, num_kv_heads, head_dim).half().to(0)
89
+
90
+ # decode attention
91
+
92
+ num_qo_heads = 32
93
+ q = torch.randn(num_qo_heads, head_dim).half().to(0)
94
+
95
+ o = flashinfer.single_decode_with_kv_cache(q, k, v) # decode attention without RoPE on-the-fly
96
+ o_rope_on_the_fly = flashinfer.single_decode_with_kv_cache(q, k, v, pos_encoding_mode="ROPE_LLAMA") # decode with LLaMA style RoPE on-the-fly
97
+
98
+ # append attention
99
+ append_qo_len = 128
100
+ q = torch.randn(append_qo_len, num_qo_heads, head_dim).half().to(0) # append attention, the last 128 tokens in the KV-Cache are the new tokens
101
+ o = flashinfer.single_prefill_with_kv_cache(q, k, v, causal=True) # append attention without RoPE on-the-fly, apply causal mask
102
+ o_rope_on_the_fly = flashinfer.single_prefill_with_kv_cache(q, k, v, causal=True, pos_encoding_mode="ROPE_LLAMA") # append attention with LLaMA style RoPE on-the-fly, apply causal mask
103
+
104
+ # prefill attention
105
+ qo_len = 2048
106
+ q = torch.randn(qo_len, num_qo_heads, head_dim).half().to(0) # prefill attention
107
+ o = flashinfer.single_prefill_with_kv_cache(q, k, v, causal=False) # prefill attention without RoPE on-the-fly, do not apply causal mask
108
+ ```
109
+
110
+ Check out [documentation](https://docs.flashinfer.ai/) for usage of batch decode/append/prefill kernels and shared-prefix cascading kernels.
111
+
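+ Below is a minimal sketch of batch decode over a paged KV-cache using the wrapper API; the exact `plan`/`run` signatures, keyword arguments, and tensor layouts may vary between releases, so treat this as a starting point and consult the API documentation for the authoritative interface:
+
+ ```python
+ import torch
+ import flashinfer
+
+ batch_size = 4
+ num_qo_heads = 32
+ num_kv_heads = 8
+ head_dim = 128
+ page_size = 16
+ max_num_pages = 64
+
+ # workspace buffer shared by the wrapper's plan/run stages
+ workspace_buffer = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda:0")
+ wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(workspace_buffer, "NHD")
+
+ # toy page table: each request owns 16 pages and its last page is full
+ kv_page_indptr = torch.tensor([0, 16, 32, 48, 64], dtype=torch.int32, device="cuda:0")
+ kv_page_indices = torch.arange(max_num_pages, dtype=torch.int32, device="cuda:0")
+ kv_last_page_len = torch.full((batch_size,), page_size, dtype=torch.int32, device="cuda:0")
+
+ # plan: schedule the variable-length batch once per decode-step shape
+ wrapper.plan(
+     kv_page_indptr, kv_page_indices, kv_last_page_len,
+     num_qo_heads, num_kv_heads, head_dim, page_size,
+     pos_encoding_mode="NONE", data_type=torch.float16,
+ )
+
+ q = torch.randn(batch_size, num_qo_heads, head_dim, dtype=torch.float16, device="cuda:0")
+ # paged KV-cache in NHD layout: (max_num_pages, 2, page_size, num_kv_heads, head_dim)
+ kv_cache = torch.randn(
+     max_num_pages, 2, page_size, num_kv_heads, head_dim,
+     dtype=torch.float16, device="cuda:0",
+ )
+ o = wrapper.run(q, kv_cache)  # (batch_size, num_qo_heads, head_dim)
+ ```
+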
112
+ ## Run Benchmarks
113
+
114
+ We profile FlashInfer kernel performance with [nvbench](https://github.com/NVIDIA/nvbench). You can compile and run the benchmarks with the following commands:
115
+
116
+ ```bash
117
+ mkdir build
118
+ cp cmake/config.cmake build # you can modify the config.cmake to enable/disable benchmarks and change CUDA architectures
119
+ cd build
120
+ cmake ..
121
+ make -j12
122
+ ```
123
+
124
+ You can run `./bench_{single/batch}_{prefill/decode}` to benchmark the performance (e.g. `./bench_single_prefill` for single-request prefill attention). `./bench_{single/batch}_{prefill/decode} --help` will show you the available options.
125
+
126
+ ## C++ API and TVM Bindings
127
+
128
+ FlashInfer also provides a C++ API and TVM bindings; please refer to the [documentation](https://docs.flashinfer.ai/) for more details.
129
+
130
+ ## Adoption
131
+
132
+ We are thrilled to share that FlashInfer is being adopted by many cutting-edge projects, including but not limited to:
133
+ - [MLC-LLM](https://github.com/mlc-ai/mlc-llm)
134
+ - [Punica](https://github.com/punica-ai/punica)
135
+ - [SGLang](https://github.com/sgl-project/sglang)
136
+ - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM)
137
+ - [vLLM](https://github.com/vllm-project/vllm)
138
+ - [TGI](https://github.com/huggingface/text-generation-inference)
139
+ - [lorax](https://github.com/predibase/lorax)
140
+
141
+ ## Acknowledgement
142
+
143
+ FlashInfer is inspired by [FlashAttention 1&2](https://github.com/dao-AILab/flash-attention/), [vLLM](https://github.com/vllm-project/vllm), [stream-K](https://arxiv.org/abs/2301.03598), [cutlass](https://github.com/nvidia/cutlass) and [AITemplate](https://github.com/facebookincubator/AITemplate) projects.
144
+
145
+ ## Citation
146
+
147
+ If you find FlashInfer helpful in your project or research, please consider citing our [paper](https://arxiv.org/abs/2501.01005):
148
+
149
+ ```bibtex
150
+ @article{ye2025flashinfer,
151
+ title = {FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving},
152
+ author = {
153
+ Ye, Zihao and
154
+ Chen, Lequn and
155
+ Lai, Ruihang and
156
+ Lin, Wuwei and
157
+ Zhang, Yineng and
158
+ Wang, Stephanie and
159
+ Chen, Tianqi and
160
+ Kasikci, Baris and
161
+ Grover, Vinod and
162
+ Krishnamurthy, Arvind and
163
+ Ceze, Luis
164
+ },
165
+ journal = {arXiv preprint arXiv:2501.01005},
166
+ year = {2025},
167
+ url = {https://arxiv.org/abs/2501.01005}
168
+ }
169
+ ```
sglang_repo/sgl-kernel/3rdparty/flashinfer/custom_backend.py ADDED
@@ -0,0 +1,41 @@
1
+ import os
2
+ from pathlib import Path
3
+
4
+ from setuptools import build_meta as orig
5
+ from setuptools.build_meta import * # noqa: F403
6
+
7
+
8
+ def _get_requires_for_build():
9
+ requires = []
10
+ if os.environ.get("FLASHINFER_ENABLE_AOT", "0") == "1":
11
+ requires += ["torch", "ninja"]
12
+ return requires
13
+
14
+
15
+ def get_requires_for_build_wheel(config_settings=None):
16
+ return _get_requires_for_build()
17
+
18
+
19
+ def get_requires_for_build_editable(config_settings=None):
20
+ return _get_requires_for_build()
21
+
22
+
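+ # For editable installs, mirror the kernel sources into flashinfer/data via
+ # symlinks so that a `pip install -e .` checkout can locate csrc/, include/ and
+ # the bundled cutlass headers, then defer to the standard setuptools hook.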
23
+ def build_editable(wheel_directory, config_settings=None, metadata_directory=None):
24
+ root = Path(__file__).parent.resolve()
25
+ data_dir = root / "flashinfer" / "data"
26
+ data_dir.mkdir(parents=True, exist_ok=True)
27
+
28
+ def ln(src: str, dst: str) -> None:
29
+ src: Path = root / src
30
+ dst: Path = data_dir / dst
31
+ if dst.exists():
32
+ if dst.is_symlink():
33
+ dst.unlink()
34
+ elif dst.is_dir():
35
+ dst.rmdir()
36
+ dst.symlink_to(src, target_is_directory=True)
37
+
38
+ ln("3rdparty/cutlass", "cutlass")
39
+ ln("csrc", "csrc")
40
+ ln("include", "include")
41
+ return orig.build_editable(wheel_directory, config_settings, metadata_directory)
sglang_repo/sgl-kernel/3rdparty/flashinfer/pyproject.toml ADDED
@@ -0,0 +1,116 @@
1
+ # Copyright (c) 2024 by FlashInfer team.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ [project]
16
+ name = "flashinfer-python"
17
+ description = "FlashInfer: Kernel Library for LLM Serving"
18
+ requires-python = ">=3.8,<4.0"
19
+ authors = [{ name = "FlashInfer team" }]
20
+ license = { text = "Apache License 2.0" }
21
+ readme = "README.md"
22
+ urls = { Homepage = "https://github.com/flashinfer-ai/flashinfer" }
23
+ dynamic = ["dependencies", "version"]
24
+
25
+ [build-system]
26
+ requires = ["setuptools"]
27
+ build-backend = "custom_backend"
28
+ backend-path = ["."]
29
+
30
+ [tool.codespell]
31
+ ignore-words-list = "3nd"
32
+ skip = [
33
+ "build",
34
+ "3rdparty",
35
+ "dist",
36
+ ".venv"
37
+ ]
38
+
39
+ [tool.setuptools]
40
+ packages = [
41
+ "flashinfer",
42
+ "flashinfer.data",
43
+ "flashinfer.data.csrc",
44
+ "flashinfer.data.cutlass",
45
+ "flashinfer.data.include",
46
+ "flashinfer.jit",
47
+ "flashinfer.triton",
48
+ "flashinfer.triton.kernels",
49
+ ]
50
+ include-package-data = false
51
+
52
+ [tool.setuptools.package-dir]
53
+ "flashinfer.data" = "."
54
+ "flashinfer.data.cutlass" = "3rdparty/cutlass"
55
+
56
+ [tool.setuptools.package-data]
57
+ "flashinfer.data" = [
58
+ "csrc/**",
59
+ "include/**",
60
+ "version.txt"
61
+ ]
62
+ "flashinfer.data.cutlass" = [
63
+ "include/**",
64
+ "tools/util/include/**"
65
+ ]
66
+
67
+ [tool.mypy]
68
+ ignore_missing_imports = false
69
+ show_column_numbers = true
70
+ show_error_context = true
71
+ follow_imports = "skip"
72
+ ignore_errors = false
73
+ strict_optional = false
74
+
75
+
76
+ [tool.ruff.lint]
77
+ select = [
78
+ # pycodestyle
79
+ "E",
80
+ # Pyflakes
81
+ "F",
82
+ # pyupgrade
83
+ # "UP",
84
+ # flake8-bugbear
85
+ "B",
86
+ # flake8-simplify
87
+ "SIM",
88
+ # isort
89
+ # "I",
90
+ ]
91
+ ignore = [
92
+ # Module level import not at top of file
93
+ "E402",
94
+ # star imports
95
+ "F405", "F403",
96
+ # ambiguous name
97
+ "E741",
98
+ # line too long
99
+ "E501",
100
+ # key in dict.keys()
101
+ "SIM118",
102
+ # memory leaks
103
+ "B019",
104
+ # No such file or directory
105
+ "E902",
106
+ # nested `if` statements
107
+ "SIM102",
108
+ # `if`-`else`-block
109
+ "SIM108",
110
+ # assign `lambda` expressions
111
+ "E731",
112
+ # Loop control variable overrides iterable it iterates
113
+ "B020",
114
+ # Return te negated condition directly
115
+ "SIM103",
116
+ ]
sglang_repo/sgl-kernel/3rdparty/flashinfer/setup.py ADDED
@@ -0,0 +1,279 @@
 
1
+ """
2
+ Copyright (c) 2023 by FlashInfer team.
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ """
16
+
17
+ import argparse
18
+ import os
19
+ import platform
20
+ import re
21
+ import subprocess
22
+ import sys
23
+ from pathlib import Path
24
+
25
+ import setuptools
26
+
27
+ root = Path(__file__).parent.resolve()
28
+ gen_dir = root / "csrc" / "generated"
29
+
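+ # Build-time knobs, all read from the environment: FLASHINFER_HEAD_DIMS selects
+ # the head dimensions to instantiate, FLASHINFER_ENABLE_AOT switches from JIT to
+ # ahead-of-time kernel builds, and the FLASHINFER_ENABLE_{F16,BF16,FP8*,SM90}
+ # flags toggle the corresponding dtype / architecture instantiations.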
30
+ head_dims = os.environ.get("FLASHINFER_HEAD_DIMS", "64,128,256").split(",")
31
+ head_dims = list(map(int, head_dims))
32
+ SM90_ALLOWED_HEAD_DIMS = {64, 128, 256}
33
+ head_dims_sm90 = [d for d in head_dims if d in SM90_ALLOWED_HEAD_DIMS]
34
+
35
+ mask_modes = [0, 1, 2]
36
+
37
+ enable_aot = os.environ.get("FLASHINFER_ENABLE_AOT", "0") == "1"
38
+ enable_f16 = os.environ.get("FLASHINFER_ENABLE_F16", "1") == "1"
39
+ enable_bf16 = os.environ.get("FLASHINFER_ENABLE_BF16", "1") == "1"
40
+ enable_fp8 = os.environ.get("FLASHINFER_ENABLE_FP8", "1") == "1"
41
+ enable_fp8_e4m3 = (
42
+ os.environ.get("FLASHINFER_ENABLE_FP8_E4M3", "1" if enable_fp8 else "0") == "1"
43
+ )
44
+ enable_fp8_e5m2 = (
45
+ os.environ.get("FLASHINFER_ENABLE_FP8_E5M2", "1" if enable_fp8 else "0") == "1"
46
+ )
47
+ enable_sm90 = os.environ.get("FLASHINFER_ENABLE_SM90", "1") == "1"
48
+
49
+
50
+ def write_if_different(path: Path, content: str) -> None:
51
+ if path.exists() and path.read_text() == content:
52
+ return
53
+ path.parent.mkdir(parents=True, exist_ok=True)
54
+ path.write_text(content)
55
+
56
+
57
+ def get_version():
58
+ package_version = (root / "version.txt").read_text().strip()
59
+ local_version = os.environ.get("FLASHINFER_LOCAL_VERSION")
60
+ if local_version is None:
61
+ return package_version
62
+ return f"{package_version}+{local_version}"
63
+
64
+
65
+ def generate_build_meta(aot_build_meta: dict) -> None:
66
+ build_meta_str = f"__version__ = {get_version()!r}\n"
67
+ if len(aot_build_meta) != 0:
68
+ build_meta_str += f"build_meta = {aot_build_meta!r}\n"
69
+ write_if_different(root / "flashinfer" / "_build_meta.py", build_meta_str)
70
+
71
+
72
+ def generate_cuda() -> None:
73
+ try: # no aot_build_utils in sdist
74
+ sys.path.append(str(root))
75
+ from aot_build_utils import generate_dispatch_inc
76
+ from aot_build_utils.generate import get_instantiation_cu
77
+ from aot_build_utils.generate_aot_default_additional_params_header import (
78
+ get_aot_default_additional_params_header_str,
79
+ )
80
+ from aot_build_utils.generate_sm90 import get_sm90_instantiation_cu
81
+ except ImportError:
82
+ return
83
+
84
+ # dispatch.inc
85
+ write_if_different(
86
+ gen_dir / "dispatch.inc",
87
+ generate_dispatch_inc.get_dispatch_inc_str(
88
+ argparse.Namespace(
89
+ head_dims=head_dims,
90
+ head_dims_sm90=head_dims_sm90,
91
+ pos_encoding_modes=[0],
92
+ use_fp16_qk_reductions=[0],
93
+ mask_modes=mask_modes,
94
+ )
95
+ ),
96
+ )
97
+
98
+ # _kernels
99
+ aot_kernel_uris = get_instantiation_cu(
100
+ argparse.Namespace(
101
+ path=gen_dir,
102
+ head_dims=head_dims,
103
+ pos_encoding_modes=[0],
104
+ use_fp16_qk_reductions=[0],
105
+ mask_modes=mask_modes,
106
+ enable_f16=enable_f16,
107
+ enable_bf16=enable_bf16,
108
+ enable_fp8_e4m3=enable_fp8_e4m3,
109
+ enable_fp8_e5m2=enable_fp8_e5m2,
110
+ )
111
+ )
112
+
113
+ # _kernels_sm90
114
+ if enable_sm90:
115
+ aot_kernel_uris += get_sm90_instantiation_cu(
116
+ argparse.Namespace(
117
+ path=gen_dir,
118
+ head_dims=head_dims_sm90,
119
+ pos_encoding_modes=[0],
120
+ use_fp16_qk_reductions=[0],
121
+ mask_modes=mask_modes,
122
+ enable_f16=enable_f16,
123
+ enable_bf16=enable_bf16,
124
+ )
125
+ )
126
+ aot_config_str = f"""prebuilt_ops_uri = set({aot_kernel_uris})"""
127
+ write_if_different(root / "flashinfer" / "jit" / "aot_config.py", aot_config_str)
128
+ write_if_different(
129
+ root / "csrc" / "aot_default_additional_params.h",
130
+ get_aot_default_additional_params_header_str(),
131
+ )
132
+
133
+
134
+ ext_modules = []
135
+ cmdclass = {}
136
+ install_requires = ["torch", "ninja"]
137
+ generate_build_meta({})
138
+
139
+ if enable_aot:
140
+ import torch
141
+ import torch.utils.cpp_extension as torch_cpp_ext
142
+ from packaging.version import Version
143
+
144
+ generate_cuda()
145
+
146
+ def get_cuda_version() -> Version:
147
+ if torch_cpp_ext.CUDA_HOME is None:
148
+ nvcc = "nvcc"
149
+ else:
150
+ nvcc = os.path.join(torch_cpp_ext.CUDA_HOME, "bin/nvcc")
151
+ txt = subprocess.check_output([nvcc, "--version"], text=True)
152
+ return Version(re.findall(r"release (\d+\.\d+),", txt)[0])
153
+
154
+ class NinjaBuildExtension(torch_cpp_ext.BuildExtension):
155
+ def __init__(self, *args, **kwargs) -> None:
156
+ # do not override MAX_JOBS if it is already set in the environment
157
+ if not os.environ.get("MAX_JOBS"):
158
+ max_num_jobs_cores = max(1, os.cpu_count())
159
+ os.environ["MAX_JOBS"] = str(max_num_jobs_cores)
160
+
161
+ super().__init__(*args, **kwargs)
162
+
163
+ # cuda arch check for fp8 at the moment.
164
+ for cuda_arch_flags in torch_cpp_ext._get_cuda_arch_flags():
165
+ arch = int(re.search(r"compute_(\d+)", cuda_arch_flags).group(1))
166
+ if arch < 75:
167
+ raise RuntimeError("FlashInfer requires sm75+")
168
+
169
+ cuda_version = get_cuda_version()
170
+ torch_full_version = Version(torch.__version__)
171
+ torch_version = f"{torch_full_version.major}.{torch_full_version.minor}"
172
+ cmdclass["build_ext"] = NinjaBuildExtension
173
+ install_requires = [f"torch == {torch_version}.*"]
174
+
175
+ aot_build_meta = {}
176
+ aot_build_meta["cuda_major"] = cuda_version.major
177
+ aot_build_meta["cuda_minor"] = cuda_version.minor
178
+ aot_build_meta["torch"] = torch_version
179
+ aot_build_meta["python"] = platform.python_version()
180
+ aot_build_meta["TORCH_CUDA_ARCH_LIST"] = os.environ.get("TORCH_CUDA_ARCH_LIST")
181
+ generate_build_meta(aot_build_meta)
182
+
183
+ if enable_f16:
184
+ torch_cpp_ext.COMMON_NVCC_FLAGS.append("-DFLASHINFER_ENABLE_F16")
185
+ if enable_bf16:
186
+ torch_cpp_ext.COMMON_NVCC_FLAGS.append("-DFLASHINFER_ENABLE_BF16")
187
+ if enable_fp8_e4m3:
188
+ torch_cpp_ext.COMMON_NVCC_FLAGS.append("-DFLASHINFER_ENABLE_FP8_E4M3")
189
+ if enable_fp8_e5m2:
190
+ torch_cpp_ext.COMMON_NVCC_FLAGS.append("-DFLASHINFER_ENABLE_FP8_E5M2")
191
+
192
+ for flag in [
193
+ "-D__CUDA_NO_HALF_OPERATORS__",
194
+ "-D__CUDA_NO_HALF_CONVERSIONS__",
195
+ "-D__CUDA_NO_BFLOAT16_CONVERSIONS__",
196
+ "-D__CUDA_NO_HALF2_OPERATORS__",
197
+ ]:
198
+ try:
199
+ torch_cpp_ext.COMMON_NVCC_FLAGS.remove(flag)
200
+ except ValueError:
201
+ pass
202
+
203
+ cutlass = root / "3rdparty" / "cutlass"
204
+ include_dirs = [
205
+ root.resolve() / "include",
206
+ cutlass.resolve() / "include", # for group gemm
207
+ cutlass.resolve() / "tools" / "util" / "include",
208
+ ]
209
+ cxx_flags = [
210
+ "-O3",
211
+ "-Wno-switch-bool",
212
+ ]
213
+ nvcc_flags = [
214
+ "-O3",
215
+ "-std=c++17",
216
+ "--threads=1",
217
+ "-Xfatbin",
218
+ "-compress-all",
219
+ "-use_fast_math",
220
+ ]
221
+ sm90a_flags = "-gencode arch=compute_90a,code=sm_90a".split()
222
+ kernel_sources = [
223
+ "csrc/bmm_fp8.cu",
224
+ "csrc/cascade.cu",
225
+ "csrc/group_gemm.cu",
226
+ "csrc/norm.cu",
227
+ "csrc/page.cu",
228
+ "csrc/quantization.cu",
229
+ "csrc/rope.cu",
230
+ "csrc/sampling.cu",
231
+ "csrc/renorm.cu",
232
+ "csrc/activation.cu",
233
+ "csrc/batch_decode.cu",
234
+ "csrc/batch_prefill.cu",
235
+ "csrc/single_decode.cu",
236
+ "csrc/single_prefill.cu",
237
+ "csrc/flashinfer_ops.cu",
238
+ ]
239
+ kernel_sm90_sources = [
240
+ "csrc/group_gemm_sm90.cu",
241
+ "csrc/single_prefill_sm90.cu",
242
+ "csrc/batch_prefill_sm90.cu",
243
+ "csrc/flashinfer_ops_sm90.cu",
244
+ ]
245
+ decode_sources = list(gen_dir.glob("*decode_head*.cu"))
246
+ prefill_sources = [
247
+ f for f in gen_dir.glob("*prefill_head*.cu") if "_sm90" not in f.name
248
+ ]
249
+ prefill_sm90_sources = list(gen_dir.glob("*prefill_head*_sm90.cu"))
250
+ ext_modules = [
251
+ torch_cpp_ext.CUDAExtension(
252
+ name="flashinfer._kernels",
253
+ sources=kernel_sources + decode_sources + prefill_sources,
254
+ include_dirs=include_dirs,
255
+ extra_compile_args={
256
+ "cxx": cxx_flags,
257
+ "nvcc": nvcc_flags,
258
+ },
259
+ )
260
+ ]
261
+ if enable_sm90:
262
+ ext_modules += [
263
+ torch_cpp_ext.CUDAExtension(
264
+ name="flashinfer._kernels_sm90",
265
+ sources=kernel_sm90_sources + prefill_sm90_sources,
266
+ include_dirs=include_dirs,
267
+ extra_compile_args={
268
+ "cxx": cxx_flags,
269
+ "nvcc": nvcc_flags + sm90a_flags,
270
+ },
271
+ ),
272
+ ]
273
+
274
+ setuptools.setup(
275
+ version=get_version(),
276
+ ext_modules=ext_modules,
277
+ cmdclass=cmdclass,
278
+ install_requires=install_requires,
279
+ )
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_batch_decode_mla.cu ADDED
@@ -0,0 +1,122 @@
 
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <cstddef>
19
+ #include <cstdint>
20
+ #include <nvbench/nvbench.cuh>
21
+ #include <unordered_set>
22
+ #include <vector>
23
+
24
+ #include "flashinfer_ops.cuh"
25
+ #include "utils.h"
26
+
27
+ using utils::vec_bytes;
28
+ using namespace flashinfer;
29
+
30
+ std::unordered_set<int> dev_to_bench{0};
31
+
32
+ template <typename T>
33
+ void bench_flashinfer_batch_decode_mla(nvbench::state& state) {
34
+ int dev_id = state.get_device().value().get_id();
35
+ if (dev_to_bench.count(dev_id) == 0) return;
36
+
37
+ cudaSetDevice(dev_id);
38
+ cudaStream_t stream;
39
+ cudaStreamCreate(&stream);
40
+ state.set_cuda_stream(nvbench::make_cuda_stream_view(stream));
41
+
42
+ constexpr size_t head_dim_ckv = 512;
43
+ constexpr size_t head_dim_kpe = head_dim_ckv / 8;
44
+ const size_t num_qo_heads = state.get_int64("num_qo_heads");
46
+
47
+ size_t batch_size = state.get_int64("batch_size");
48
+ size_t seqlen = state.get_int64("seqlen");
49
+ size_t page_size = state.get_int64("page_size");
50
+
51
+ auto pages_per_seq = (seqlen + page_size - 1) / page_size;
52
+ auto num_pages = pages_per_seq * batch_size;
53
+ std::vector<int32_t> kv_indptr_host{0};
54
+ std::vector<int32_t> kv_indicies_host;
55
+ std::vector<int32_t> kv_last_page_len_host;
56
+ for (size_t i = 0; i < batch_size; ++i) {
57
+ for (size_t p = 0; p < pages_per_seq; ++p) {
58
+ kv_indicies_host.push_back(i * pages_per_seq + p);
59
+ }
60
+ kv_indptr_host.push_back(kv_indptr_host.back() + pages_per_seq);
61
+ kv_last_page_len_host.push_back((seqlen - 1) % page_size + 1);
62
+ }
63
+ thrust::device_vector<int32_t> kv_indptr(kv_indptr_host);
64
+ thrust::device_vector<int32_t> kv_indices(kv_indicies_host);
65
+ thrust::device_vector<int32_t> kv_last_page_len(kv_last_page_len_host);
66
+
67
+ thrust::device_vector<T> q_nope(batch_size * num_qo_heads * head_dim_ckv);
68
+ thrust::device_vector<T> q_pe(batch_size * num_qo_heads * head_dim_kpe);
69
+ thrust::device_vector<T> ckv_data(num_pages * page_size * head_dim_ckv);
70
+ thrust::device_vector<T> kpe_data(num_pages * page_size * head_dim_kpe);
71
+ thrust::device_vector<T> o(q_nope.size());
72
+
73
+ flashinfer::paged_kv_mla_t<T, int32_t> paged_kv_mla(
74
+ page_size, head_dim_ckv, head_dim_kpe, batch_size, thrust::raw_pointer_cast(ckv_data.data()),
75
+ thrust::raw_pointer_cast(kpe_data.data()), thrust::raw_pointer_cast(kv_indices.data()),
76
+ thrust::raw_pointer_cast(kv_indptr.data()),
77
+ thrust::raw_pointer_cast(kv_last_page_len.data()));
78
+
79
+ state.add_global_memory_reads<uint8_t>(vec_bytes(q_nope) + vec_bytes(q_pe) + vec_bytes(ckv_data) +
80
+ vec_bytes(kpe_data) + vec_bytes(kv_indptr) +
81
+ vec_bytes(kv_indices) + vec_bytes(kv_last_page_len),
82
+ "Read");
83
+ state.add_global_memory_writes<uint8_t>(vec_bytes(o), "Write");
84
+
85
+ flashinfer::BatchDecodeHandler handler;
86
+ handler.SetCUDAStream(stream);
87
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
88
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
89
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
90
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
91
+ flashinfer::BatchDecodeHandlerPlanMLA<T, T, T, int32_t>(
92
+ &handler, (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
93
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
94
+ kv_indptr_host.data(), kv_last_page_len_host.data(), batch_size, num_qo_heads, head_dim_ckv,
95
+ page_size);
96
+
97
+ state.exec([&](nvbench::launch&) {
98
+ cudaError_t status = flashinfer::BatchDecodeWithPagedKVCacheWrapperMLA<T, T, T, int32_t>(
99
+ &handler, thrust::raw_pointer_cast(q_nope.data()), thrust::raw_pointer_cast(q_pe.data()),
100
+ /*q_rope_offset=*/nullptr, paged_kv_mla, thrust::raw_pointer_cast(o.data()),
101
+ /*lse=*/nullptr, num_qo_heads, std::sqrt(192.0));
102
+ if (status != cudaSuccess) {
103
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
104
+ }
105
+ });
106
+
107
+ cudaStreamDestroy(stream);
108
+ }
109
+
110
+ #define STR_HELPER(x) #x
111
+ #define STR(x) STR_HELPER(x)
112
+
113
+ #define BENCH_FLASHINFER_BATCH_DECODE(dtype) \
114
+ auto bench_flashinfer_batch_decode_mla_##dtype##_ = bench_flashinfer_batch_decode_mla<dtype>; \
115
+ NVBENCH_BENCH(bench_flashinfer_batch_decode_mla_##dtype##_) \
116
+ .set_name("bench_flashinfer_batch_decode_mla_" STR(dtype)) \
117
+ .add_int64_axis("page_size", {64}) \
118
+ .add_int64_axis("batch_size", {16, 256}) \
119
+ .add_int64_axis("seqlen", {1024, 16384}) \
120
+ .add_int64_axis("num_qo_heads", {8, 16, 32, 40, 64, 128})
121
+
122
+ BENCH_FLASHINFER_BATCH_DECODE(half);
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_cascade.cu ADDED
@@ -0,0 +1,386 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <cstddef>
19
+ #include <flashinfer/attention/cascade.cuh>
20
+ #include <nvbench/nvbench.cuh>
21
+
22
+ #include "flashinfer_ops.cuh"
23
+ #include "utils.h"
24
+
25
+ using namespace flashinfer;
26
+
27
+ constexpr QKVLayout kv_layout = QKVLayout::kNHD;
28
+
29
+ template <typename T>
30
+ void bench_merge_states(nvbench::state& state) {
31
+ const auto num_index_sets = state.get_int64("num_index_sets");
32
+ const auto seq_len = state.get_int64("seq_len");
33
+ const auto num_heads = state.get_int64("num_heads");
34
+ const auto head_dim = state.get_int64("head_dim");
35
+
36
+ std::vector<T> V_host(seq_len * num_index_sets * num_heads * head_dim);
37
+ std::vector<float> S_host(seq_len * num_index_sets * num_heads);
38
+
39
+ utils::vec_normal_(V_host);
40
+ utils::vec_uniform_(S_host, 5, 10);
41
+
42
+ thrust::device_vector<T> V_device(V_host);
43
+ thrust::device_vector<float> S_device(S_host);
44
+ thrust::device_vector<T> V_merged(seq_len * num_heads * head_dim);
45
+ thrust::device_vector<float> S_merged(seq_len * num_heads);
46
+
47
+ state.add_global_memory_reads<T>(V_host.size(), "Read");
48
+ state.add_global_memory_writes<T>(V_merged.size(), "Write");
49
+
50
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
51
+ timer.start();
52
+ cudaError_t status = MergeStates(
53
+ thrust::raw_pointer_cast(V_device.data()), thrust::raw_pointer_cast(S_device.data()),
54
+ thrust::raw_pointer_cast(V_merged.data()), thrust::raw_pointer_cast(S_merged.data()),
55
+ num_index_sets, seq_len, num_heads, head_dim);
56
+ timer.stop();
57
+ });
58
+ }
59
+
60
+ template <typename T>
61
+ void bench_two_level_single_prefix_cascade_decode(nvbench::state& state) {
62
+ const auto batch_size = state.get_int64("batch_size");
63
+ const auto shared_prefix_length = state.get_int64("shared_prefix_length");
64
+ const auto unique_kv_length = state.get_int64("unique_kv_length");
65
+ const auto num_kv_heads = state.get_int64("num_kv_heads");
66
+ const auto num_qo_heads = state.get_int64("num_qo_heads");
67
+ const auto use_cascade = state.get_int64("use_cascade");
68
+ const auto head_dim = state.get_int64("head_dim");
69
+
70
+ constexpr uint32_t page_size = 16;
71
+
72
+ auto [testcase_float_data, testcase_int_data] = utils::create_shared_prefix_testcase_data<T>(
73
+ batch_size, shared_prefix_length, unique_kv_length,
74
+ /*qo_append_length=*/1, num_qo_heads, num_kv_heads, head_dim, page_size);
75
+
76
+ std::vector<T> q_h = std::move(testcase_float_data[0]),
77
+ shared_k_h = std::move(testcase_float_data[1]),
78
+ shared_v_h = std::move(testcase_float_data[2]),
79
+ k_data_h = std::move(testcase_float_data[3]),
80
+ v_data_h = std::move(testcase_float_data[4]);
81
+
82
+ std::vector<int32_t> kv_indices_combined_h = std::move(testcase_int_data[1]),
83
+ kv_indices_unique_h = std::move(testcase_int_data[2]),
84
+ kv_indptr_combined_h = std::move(testcase_int_data[3]),
85
+ kv_indptr_unique_h = std::move(testcase_int_data[4]),
86
+ kv_last_page_len_combined_h = std::move(testcase_int_data[5]),
87
+ kv_last_page_len_unique_h = std::move(testcase_int_data[6]);
88
+
89
+ thrust::device_vector<T> k_data_d(k_data_h), v_data_d(v_data_h);
90
+ thrust::device_vector<T> q_d(q_h);
91
+
92
+ state.add_global_memory_reads<T>(k_data_h.size() + v_data_h.size() + q_h.size(), "Read");
93
+ state.add_global_memory_writes<T>(q_h.size(), "Write");
94
+
95
+ if (use_cascade) {
96
+ thrust::device_vector<T> shared_k_d(shared_k_h), shared_v_d(shared_v_h),
97
+ o_cascade_0_d(q_h.size()), o_cascade_1_d(q_h.size());
98
+ thrust::device_vector<T> tmp_0_d(16 * 1024 * 1024);
99
+ thrust::device_vector<float> lse_cascade_0_d(batch_size * num_qo_heads),
100
+ lse_cascade_1_d(batch_size * num_qo_heads);
101
+ thrust::device_vector<int32_t> kv_indptr_unique_d(kv_indptr_unique_h),
102
+ kv_indices_unique_d(kv_indices_unique_h),
103
+ kv_last_page_len_unique_d(kv_last_page_len_unique_h);
104
+ paged_kv_t<T, int32_t> paged_kv_casacde_d(
105
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
106
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
107
+ thrust::raw_pointer_cast(kv_indices_unique_d.data()),
108
+ thrust::raw_pointer_cast(kv_indptr_unique_d.data()),
109
+ thrust::raw_pointer_cast(kv_last_page_len_unique_d.data()));
110
+ BatchDecodeHandler cascade_handler;
111
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
112
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
113
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
114
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
115
+ BatchDecodeHandlerPlan<T, T, T, int32_t>(
116
+ &cascade_handler, (void*)thrust::raw_pointer_cast(float_buffer.data()),
117
+ float_workspace_size_in_bytes, (void*)thrust::raw_pointer_cast(int_buffer.data()),
118
+ int_workspace_size_in_bytes, kv_indptr_unique_h.data(), kv_last_page_len_unique_h.data(),
119
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size, PosEncodingMode::kNone);
120
+
121
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
122
+ timer.start();
123
+ cudaError_t status = SinglePrefillWithKVCache(
124
+ thrust::raw_pointer_cast(q_d.data()), thrust::raw_pointer_cast(shared_k_d.data()),
125
+ thrust::raw_pointer_cast(shared_v_d.data()),
126
+ thrust::raw_pointer_cast(o_cascade_0_d.data()), thrust::raw_pointer_cast(tmp_0_d.data()),
127
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()), num_qo_heads, num_kv_heads,
128
+ /*qo_len=*/batch_size, /*kv_len=*/shared_prefix_length, head_dim,
129
+ /*causal=*/false, /*kv_layout=*/QKVLayout::kNHD,
130
+ /*pos_encoding_mode=*/PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
131
+
132
+ if (status != cudaSuccess) {
133
+ state.skip("Cascade implementation prefill failed with error: " +
134
+ std::string(cudaGetErrorString(status)));
135
+ }
136
+
137
+ status = BatchDecodeWithPagedKVCacheWrapper<T, T, T, int32_t>(
138
+ &cascade_handler, thrust::raw_pointer_cast(q_d.data()),
139
+ /*q_rope_offset=*/nullptr, paged_kv_casacde_d,
140
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
141
+ /*lse=*/thrust::raw_pointer_cast(lse_cascade_1_d.data()), num_qo_heads,
142
+ PosEncodingMode::kNone);
143
+
144
+ if (status != cudaSuccess) {
145
+ state.skip("Cascade implementation decode failed with error: " +
146
+ std::string(cudaGetErrorString(status)));
147
+ }
148
+
149
+ status = MergeStateInPlace(thrust::raw_pointer_cast(o_cascade_0_d.data()),
150
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()),
151
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
152
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()), batch_size,
153
+ num_qo_heads, head_dim);
154
+
155
+ if (status != cudaSuccess) {
156
+ state.skip("Cascade implementation merge failed with error: " +
157
+ std::string(cudaGetErrorString(status)));
158
+ }
159
+ timer.stop();
160
+ });
161
+ } else {
162
+ thrust::device_vector<T> o_baseline_d(q_h.size());
163
+ thrust::device_vector<int32_t> kv_indptr_combined_d(kv_indptr_combined_h),
164
+ kv_indices_combined_d(kv_indices_combined_h),
165
+ kv_last_page_len_combined_d(kv_last_page_len_combined_h);
166
+ paged_kv_t<T, int32_t> paged_kv_baseline_d(
167
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
168
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
169
+ thrust::raw_pointer_cast(kv_indices_combined_d.data()),
170
+ thrust::raw_pointer_cast(kv_indptr_combined_d.data()),
171
+ thrust::raw_pointer_cast(kv_last_page_len_combined_d.data()));
172
+ BatchDecodeHandler baseline_handler;
173
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
174
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
175
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
176
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
177
+ BatchDecodeHandlerPlan<T, T, T, int32_t>(
178
+ &baseline_handler, (void*)thrust::raw_pointer_cast(float_buffer.data()),
179
+ float_workspace_size_in_bytes, (void*)thrust::raw_pointer_cast(int_buffer.data()),
180
+ int_workspace_size_in_bytes, kv_indptr_combined_h.data(),
181
+ kv_last_page_len_combined_h.data(), batch_size, num_qo_heads, num_kv_heads, head_dim,
182
+ page_size, PosEncodingMode::kNone);
183
+
184
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
185
+ timer.start();
186
+ cudaError_t status = BatchDecodeWithPagedKVCacheWrapper<T, T, T, int32_t>(
187
+ &baseline_handler, thrust::raw_pointer_cast(q_d.data()),
188
+ /*q_rope_offset=*/nullptr, paged_kv_baseline_d,
189
+ thrust::raw_pointer_cast(o_baseline_d.data()),
190
+ /*lse=*/nullptr, num_qo_heads, PosEncodingMode::kNone);
191
+ if (status != cudaSuccess) {
192
+ state.skip("Cascade implementation decode failed with error: " +
193
+ std::string(cudaGetErrorString(status)));
194
+ }
195
+ timer.stop();
196
+ });
197
+ }
198
+ }
199
+
200
+ template <typename T>
201
+ void bench_two_level_single_prefix_cascade_append(nvbench::state& state) {
202
+ const auto batch_size = state.get_int64("batch_size");
203
+ const auto shared_prefix_length = state.get_int64("shared_prefix_length");
204
+ const auto unique_kv_length = state.get_int64("unique_kv_length");
205
+ const auto qo_append_length = state.get_int64("qo_append_length");
206
+ const auto num_kv_heads = state.get_int64("num_kv_heads");
207
+ const auto num_qo_heads = state.get_int64("num_qo_heads");
208
+ const auto use_cascade = state.get_int64("use_cascade");
209
+ const auto head_dim = state.get_int64("head_dim");
210
+
211
+ constexpr uint32_t page_size = 16;
212
+
213
+ auto [testcase_float_data, testcase_int_data] = utils::create_shared_prefix_testcase_data<T>(
214
+ batch_size, shared_prefix_length, unique_kv_length, qo_append_length, num_qo_heads,
215
+ num_kv_heads, head_dim, page_size);
216
+
217
+ std::vector<T> q_h = std::move(testcase_float_data[0]),
218
+ shared_k_h = std::move(testcase_float_data[1]),
219
+ shared_v_h = std::move(testcase_float_data[2]),
220
+ k_data_h = std::move(testcase_float_data[3]),
221
+ v_data_h = std::move(testcase_float_data[4]);
222
+
223
+ std::vector<int32_t> qo_indptr_h = std::move(testcase_int_data[0]),
224
+ kv_indices_combined_h = std::move(testcase_int_data[1]),
225
+ kv_indices_unique_h = std::move(testcase_int_data[2]),
226
+ kv_indptr_combined_h = std::move(testcase_int_data[3]),
227
+ kv_indptr_unique_h = std::move(testcase_int_data[4]),
228
+ kv_last_page_len_combined_h = std::move(testcase_int_data[5]),
229
+ kv_last_page_len_unique_h = std::move(testcase_int_data[6]);
230
+
231
+ thrust::device_vector<T> k_data_d(k_data_h), v_data_d(v_data_h);
232
+ thrust::device_vector<T> q_d(q_h);
233
+ thrust::device_vector<int32_t> qo_indptr_d(qo_indptr_h);
234
+
235
+ state.add_global_memory_reads<T>(k_data_h.size() + v_data_h.size() + q_h.size(), "Read");
236
+ state.add_global_memory_writes<T>(q_h.size(), "Write");
237
+
238
+ if (use_cascade) {
239
+ thrust::device_vector<T> shared_k_d(shared_k_h), shared_v_d(shared_v_h),
240
+ o_cascade_0_d(q_h.size()), o_cascade_1_d(q_h.size());
241
+ thrust::device_vector<T> tmp_0_d(8 * 1024 * 1024);
242
+ thrust::device_vector<float> lse_cascade_0_d((batch_size * qo_append_length) * num_qo_heads),
243
+ lse_cascade_1_d((batch_size * qo_append_length) * num_qo_heads);
244
+ thrust::device_vector<int32_t> kv_indptr_unique_d(kv_indptr_unique_h),
245
+ kv_indices_unique_d(kv_indices_unique_h),
246
+ kv_last_page_len_unique_d(kv_last_page_len_unique_h);
247
+ paged_kv_t<T, int32_t> paged_kv_casacde_d(
248
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
249
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
250
+ thrust::raw_pointer_cast(kv_indices_unique_d.data()),
251
+ thrust::raw_pointer_cast(kv_indptr_unique_d.data()),
252
+ thrust::raw_pointer_cast(kv_last_page_len_unique_d.data()));
253
+ BatchPrefillHandler cascade_handler;
254
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
255
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
256
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
257
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
258
+ cascade_handler.Plan<T, int32_t>(
259
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
260
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
261
+ qo_indptr_h.data(), kv_indptr_unique_h.data(),
262
+ /*total_num_rows=*/batch_size * qo_append_length, batch_size, num_qo_heads, num_kv_heads,
263
+ head_dim, page_size);
264
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
265
+ timer.start();
266
+ cudaError_t status = SinglePrefillWithKVCache(
267
+ thrust::raw_pointer_cast(q_d.data()), thrust::raw_pointer_cast(shared_k_d.data()),
268
+ thrust::raw_pointer_cast(shared_v_d.data()),
269
+ thrust::raw_pointer_cast(o_cascade_0_d.data()), thrust::raw_pointer_cast(tmp_0_d.data()),
270
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()), num_qo_heads, num_kv_heads,
271
+ /*qo_len=*/batch_size * qo_append_length,
272
+ /*kv_len=*/shared_prefix_length, head_dim,
273
+ /*causal=*/false, /*kv_layout=*/QKVLayout::kNHD,
274
+ /*pos_encoding_mode=*/PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
275
+
276
+ if (status != cudaSuccess) {
277
+ state.skip("Cascade implementation prefill failed with error: " +
278
+ std::string(cudaGetErrorString(status)));
279
+ }
280
+
281
+ status = BatchPrefillWithPagedKVCacheWrapper<T, T, T, int32_t>(
282
+ &cascade_handler, thrust::raw_pointer_cast(q_d.data()),
283
+ thrust::raw_pointer_cast(qo_indptr_d.data()),
284
+ /*q_rope_offset=*/nullptr, paged_kv_casacde_d,
285
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
286
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()), num_qo_heads, /*causal=*/true,
287
+ PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
288
+
289
+ if (status != cudaSuccess) {
290
+ state.skip("Cascade implementation unique kv prefill failed with error: " +
291
+ std::string(cudaGetErrorString(status)));
292
+ }
293
+
294
+ status = MergeStateInPlace(thrust::raw_pointer_cast(o_cascade_0_d.data()),
295
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()),
296
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
297
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()),
298
+ batch_size * qo_append_length, num_qo_heads, head_dim);
299
+ if (status != cudaSuccess) {
300
+ state.skip("Cascade implementation merge failed with error: " +
301
+ std::string(cudaGetErrorString(status)));
302
+ }
303
+ timer.stop();
304
+ });
305
+ } else {
306
+ thrust::device_vector<T> o_baseline_d(q_h.size());
307
+ thrust::device_vector<int32_t> kv_indptr_combined_d(kv_indptr_combined_h),
308
+ kv_indices_combined_d(kv_indices_combined_h),
309
+ kv_last_page_len_combined_d(kv_last_page_len_combined_h);
310
+ paged_kv_t<T, int32_t> paged_kv_baseline_d(
311
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
312
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
313
+ thrust::raw_pointer_cast(kv_indices_combined_d.data()),
314
+ thrust::raw_pointer_cast(kv_indptr_combined_d.data()),
315
+ thrust::raw_pointer_cast(kv_last_page_len_combined_d.data()));
316
+ BatchPrefillHandler baseline_handler;
317
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
318
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
319
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
320
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
321
+ baseline_handler.Plan<T, int32_t>(
322
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
323
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
324
+ qo_indptr_h.data(), kv_indptr_combined_h.data(),
325
+ /*total_num_rows=*/batch_size * qo_append_length, batch_size, num_qo_heads, num_kv_heads,
326
+ head_dim, page_size);
327
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
328
+ timer.start();
329
+ cudaError_t status = BatchPrefillWithPagedKVCacheWrapper<T, T, T, int32_t>(
330
+ &baseline_handler, thrust::raw_pointer_cast(q_d.data()),
331
+ thrust::raw_pointer_cast(qo_indptr_d.data()),
332
+ /*q_rope_offset=*/nullptr, paged_kv_baseline_d,
333
+ thrust::raw_pointer_cast(o_baseline_d.data()),
334
+ /*lse=*/nullptr, num_qo_heads, /*causal=*/true, PosEncodingMode::kNone,
335
+ /*use_fp16_qk_reduction=*/false);
336
+
337
+ if (status != cudaSuccess) {
338
+ state.skip("Baseline implementation failed with error: " +
339
+ std::string(cudaGetErrorString(status)));
340
+ }
341
+ timer.stop();
342
+ });
343
+ }
344
+ }
345
+
346
+ #define STR_HELPER(x) #x
347
+ #define STR(x) STR_HELPER(x)
348
+ #define BENCH_FLASHINFER_MERGE_KERNELS(T) \
349
+ auto bench_flashinfer_merge_states_##T##_ = bench_merge_states<T>; \
350
+ NVBENCH_BENCH(bench_flashinfer_merge_states_##T##_) \
351
+ .set_name("flashinfer_merge_states_" STR(T)) \
352
+ .add_int64_axis("num_index_sets", {2, 16, 64, 128, 256}) \
353
+ .add_int64_axis("seq_len", {1, 2, 4, 8, 16, 32, 64, 128, 256}) \
354
+ .add_int64_axis("num_heads", {32}) \
355
+ .add_int64_axis("head_dim", {128})
356
+
357
+ #define BENCH_FLASHINFER_TWO_LEVEL_SINGLE_PREFIX_CASCADE_DECODE_KERNELS(T) \
358
+ auto bench_flashinfer_two_level_single_prefix_cascade_decode_##T##_ = \
359
+ bench_two_level_single_prefix_cascade_decode<T>; \
360
+ NVBENCH_BENCH(bench_flashinfer_two_level_single_prefix_cascade_decode_##T##_) \
361
+ .set_name("flashinfer_two_level_single_prefix_cascade_decode_" STR(T)) \
362
+ .add_int64_axis("batch_size", {1, 8, 16, 64, 128, 256}) \
363
+ .add_int64_axis("shared_prefix_length", {1024, 2048, 8192, 32768}) \
364
+ .add_int64_axis("unique_kv_length", {128, 256, 512, 1024, 2048}) \
365
+ .add_int64_axis("num_kv_heads", {32}) \
366
+ .add_int64_axis("num_qo_heads", {32}) \
367
+ .add_int64_axis("use_cascade", {1, 0}) \
368
+ .add_int64_axis("head_dim", {128})
369
+
370
+ #define BENCH_FLASHINFER_TWO_LEVEL_SINGLE_PREFIX_CASCADE_APPEND_KERNELS(T) \
371
+ auto bench_flashinfer_two_level_single_prefix_cascade_append_##T##_ = \
372
+ bench_two_level_single_prefix_cascade_append<T>; \
373
+ NVBENCH_BENCH(bench_flashinfer_two_level_single_prefix_cascade_append_##T##_) \
374
+ .set_name("flashinfer_two_level_single_prefix_cascade_append_" STR(T)) \
375
+ .add_int64_axis("batch_size", {1, 8, 16, 64, 128, 256}) \
376
+ .add_int64_axis("shared_prefix_length", {1024, 2048, 8192, 32768}) \
377
+ .add_int64_axis("unique_kv_length", {128, 256, 512, 1024, 2048}) \
378
+ .add_int64_axis("qo_append_length", {128}) \
379
+ .add_int64_axis("num_kv_heads", {32}) \
380
+ .add_int64_axis("num_qo_heads", {32}) \
381
+ .add_int64_axis("use_cascade", {1, 0}) \
382
+ .add_int64_axis("head_dim", {128})
383
+
384
+ BENCH_FLASHINFER_MERGE_KERNELS(half);
385
+ BENCH_FLASHINFER_TWO_LEVEL_SINGLE_PREFIX_CASCADE_DECODE_KERNELS(half);
386
+ BENCH_FLASHINFER_TWO_LEVEL_SINGLE_PREFIX_CASCADE_APPEND_KERNELS(half);
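For context on what the cascade path in this benchmark is timing: SinglePrefillWithKVCache attends over the shared prefix, BatchPrefillWithPagedKVCacheWrapper attends over each request's unique KV pages, and MergeStateInPlace combines the two partial outputs using their log-sum-exp (LSE) values. Below is a minimal host-side sketch of that merge, written only to illustrate the math; the function name is hypothetical and the real kernel runs on the GPU inside flashinfer.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical CPU restatement of the LSE-weighted merge. Each "row" is one
// (query position, head) pair; lse_* holds that row's log-sum-exp.
void merge_state_inplace_cpu(std::vector<float>& o_a, std::vector<float>& lse_a,
                             const std::vector<float>& o_b, const std::vector<float>& lse_b,
                             size_t num_rows, size_t head_dim) {
  for (size_t r = 0; r < num_rows; ++r) {
    float m = std::max(lse_a[r], lse_b[r]);  // subtract the max to keep exp() stable
    float w_a = std::exp(lse_a[r] - m);
    float w_b = std::exp(lse_b[r] - m);
    float denom = w_a + w_b;
    for (size_t d = 0; d < head_dim; ++d) {
      o_a[r * head_dim + d] =
          (w_a * o_a[r * head_dim + d] + w_b * o_b[r * head_dim + d]) / denom;
    }
    lse_a[r] = m + std::log(denom);  // log-sum-exp of the merged state
  }
}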
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_norm.cu ADDED
@@ -0,0 +1,53 @@
1
+ /*
2
+ * Copyright (c) 2024 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <flashinfer/norm.cuh>
19
+ #include <nvbench/nvbench.cuh>
20
+
21
+ #include "utils.h"
22
+
23
+ using namespace flashinfer;
24
+
25
+ template <typename T>
26
+ void bench_rms_norm(nvbench::state& state) {
27
+ size_t batch_size = state.get_int64("batch_size");
28
+ size_t hidden_dim = state.get_int64("hidden_dim");
29
+
30
+ thrust::device_vector<T> x(batch_size * hidden_dim);
31
+ thrust::device_vector<T> w(hidden_dim);
32
+ thrust::device_vector<T> y(batch_size * hidden_dim);
33
+
34
+ state.add_global_memory_reads<T>(batch_size * hidden_dim + hidden_dim, "Read");
35
+ state.add_global_memory_writes<T>(batch_size * hidden_dim, "Write");
36
+
37
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
38
+ timer.start();
39
+ cudaError_t status =
40
+ norm::RMSNorm<T>(thrust::raw_pointer_cast(x.data()), thrust::raw_pointer_cast(w.data()),
41
+ thrust::raw_pointer_cast(y.data()), batch_size, hidden_dim, 1e-5);
42
+ timer.stop();
43
+ if (status != cudaSuccess) {
44
+ state.skip("RMSNorm kernel launch failed");
45
+ }
46
+ });
47
+ }
48
+
49
+ auto bench_rms_norm_f16 = bench_rms_norm<half>;
50
+ NVBENCH_BENCH(bench_rms_norm_f16)
51
+ .set_name("bench_rms_norm_f16")
52
+ .add_int64_axis("batch_size", {32, 128, 512, 2048})
53
+ .add_int64_axis("hidden_dim", {3072, 4096, 32768});
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_sampling.cu ADDED
@@ -0,0 +1,180 @@
1
+ /*
2
+ * Copyright (c) 2024 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <flashinfer/sampling.cuh>
19
+ #include <nvbench/nvbench.cuh>
20
+
21
+ #include "utils.h"
22
+
23
+ using namespace flashinfer;
24
+
25
+ template <typename T>
26
+ void bench_sampling_with_probability(nvbench::state& state) {
27
+ size_t batch_size = state.get_int64("batch_size");
28
+ size_t vocab_size = state.get_int64("vocab_size");
29
+ bool deterministic = state.get_int64("deterministic");
30
+
31
+ std::vector<T> probs_h(batch_size * vocab_size);
32
+ std::vector<T> uniform_samples_h(batch_size);
33
+ utils::vec_uniform_<T>(uniform_samples_h, 0, 1);
34
+ utils::vec_uniform_<T>(probs_h, 0, 1);
35
+
36
+ // normalize the probs_h
37
+ for (uint32_t i = 0; i < batch_size; ++i) {
38
+ T sum = 0;
39
+ for (uint32_t j = 0; j < vocab_size; ++j) {
40
+ sum += probs_h[i * vocab_size + j];
41
+ }
42
+ for (uint32_t j = 0; j < vocab_size; ++j) {
43
+ probs_h[i * vocab_size + j] /= sum;
44
+ }
45
+ }
46
+
47
+ thrust::device_vector<T> probs_d(probs_h);
48
+ thrust::device_vector<T> uniform_samples_d(uniform_samples_h);
49
+ thrust::device_vector<int32_t> output_d(batch_size);
50
+
51
+ state.add_global_memory_reads<T>(batch_size * vocab_size, "Read");
52
+ state.add_global_memory_writes<int32_t>(batch_size, "Write");
53
+
54
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
55
+ timer.start();
56
+ cudaError_t status = sampling::SamplingFromProb<T>(
57
+ thrust::raw_pointer_cast(probs_d.data()),
58
+ thrust::raw_pointer_cast(uniform_samples_d.data()),
59
+ thrust::raw_pointer_cast(output_d.data()), batch_size, vocab_size, deterministic);
60
+ timer.stop();
61
+ if (status != cudaSuccess) {
62
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
63
+ }
64
+ });
65
+ }
66
+
67
+ template <typename T>
68
+ void bench_top_p_sampling_with_probability(nvbench::state& state) {
69
+ size_t batch_size = state.get_int64("batch_size");
70
+ size_t vocab_size = state.get_int64("vocab_size");
71
+ bool deterministic = state.get_int64("deterministic");
72
+ double p = state.get_float64("p");
73
+ constexpr uint32_t max_top_p_rounds = 32;
74
+
75
+ std::vector<T> probs_h(batch_size * vocab_size);
76
+ std::vector<T> uniform_samples_h(max_top_p_rounds * batch_size);
77
+ utils::vec_uniform_<T>(uniform_samples_h, 0, 1);
78
+ utils::vec_uniform_<T>(probs_h, 0, 1);
79
+
80
+ // normalize the probs_h
81
+ for (uint32_t i = 0; i < batch_size; ++i) {
82
+ T sum = 0;
83
+ for (uint32_t j = 0; j < vocab_size; ++j) {
84
+ sum += probs_h[i * vocab_size + j];
85
+ }
86
+ for (uint32_t j = 0; j < vocab_size; ++j) {
87
+ probs_h[i * vocab_size + j] /= sum;
88
+ }
89
+ }
90
+
91
+ thrust::device_vector<T> probs_d(probs_h);
92
+ thrust::device_vector<T> uniform_samples_d(uniform_samples_h);
93
+ thrust::device_vector<int32_t> output_d(batch_size);
94
+ thrust::device_vector<bool> success_d(batch_size);
95
+
96
+ state.add_global_memory_reads<T>(batch_size * vocab_size, "Read");
97
+ state.add_global_memory_writes<int32_t>(batch_size, "Write");
98
+
99
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
100
+ timer.start();
101
+ cudaError_t status = sampling::TopPSamplingFromProb<T, int32_t>(
102
+ thrust::raw_pointer_cast(probs_d.data()),
103
+ thrust::raw_pointer_cast(uniform_samples_d.data()),
104
+ thrust::raw_pointer_cast(output_d.data()), thrust::raw_pointer_cast(success_d.data()),
105
+ /*top_p_arr=*/nullptr, batch_size, p, vocab_size, max_top_p_rounds, deterministic);
106
+ timer.stop();
107
+ if (status != cudaSuccess) {
108
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
109
+ }
110
+ });
111
+ }
112
+
113
+ template <typename T>
114
+ void bench_top_k_sampling_with_probability(nvbench::state& state) {
115
+ size_t batch_size = state.get_int64("batch_size");
116
+ size_t vocab_size = state.get_int64("vocab_size");
117
+ size_t k = state.get_int64("k");
118
+ bool deterministic = state.get_int64("deterministic");
119
+ constexpr uint32_t max_top_k_rounds = 32;
120
+
121
+ std::vector<T> probs_h(batch_size * vocab_size);
122
+ std::vector<T> uniform_samples_h(max_top_k_rounds * batch_size);
123
+ utils::vec_uniform_<T>(uniform_samples_h, 0, 1);
124
+ utils::vec_uniform_<T>(probs_h, 0, 1);
125
+
126
+ // normalize the probs_h
127
+ for (uint32_t i = 0; i < batch_size; ++i) {
128
+ T sum = 0;
129
+ for (uint32_t j = 0; j < vocab_size; ++j) {
130
+ sum += probs_h[i * vocab_size + j];
131
+ }
132
+ for (uint32_t j = 0; j < vocab_size; ++j) {
133
+ probs_h[i * vocab_size + j] /= sum;
134
+ }
135
+ }
136
+
137
+ thrust::device_vector<T> probs_d(probs_h);
138
+ thrust::device_vector<T> uniform_samples_d(uniform_samples_h);
139
+ thrust::device_vector<int32_t> output_d(batch_size);
140
+ thrust::device_vector<bool> success_d(batch_size);
141
+
142
+ state.add_global_memory_reads<T>(batch_size * vocab_size, "Read");
143
+ state.add_global_memory_writes<int32_t>(batch_size, "Write");
144
+
145
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
146
+ timer.start();
147
+ cudaError_t status = sampling::TopKSamplingFromProb<T, int32_t>(
148
+ thrust::raw_pointer_cast(probs_d.data()),
149
+ thrust::raw_pointer_cast(uniform_samples_d.data()),
150
+ thrust::raw_pointer_cast(output_d.data()), thrust::raw_pointer_cast(success_d.data()),
151
+ /*top_k_arr=*/nullptr, batch_size, k, vocab_size, max_top_k_rounds, deterministic);
152
+ timer.stop();
153
+ if (status != cudaSuccess) {
154
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
155
+ }
156
+ });
157
+ }
158
+
159
+ auto bench_sampling_with_probability_f32 = bench_sampling_with_probability<float>;
160
+ NVBENCH_BENCH(bench_sampling_with_probability_f32)
161
+ .set_name("bench_sampling_with_probability_f32")
162
+ .add_int64_axis("batch_size", {16, 32, 128, 512, 2048})
163
+ .add_int64_axis("vocab_size", {32000, 32001, 32002, 128000, 256000})
164
+ .add_int64_axis("determinisic", {0, 1});
165
+
166
+ auto bench_top_p_sampling_with_probability_f32 = bench_top_p_sampling_with_probability<float>;
167
+ NVBENCH_BENCH(bench_top_p_sampling_with_probability_f32)
168
+ .set_name("bench_top_p_sampling_with_probability_f32")
169
+ .add_int64_axis("batch_size", {16, 32, 128, 512, 2048})
170
+ .add_int64_axis("vocab_size", {32000, 32001, 32002, 128000, 256000})
171
+ .add_float64_axis("p", {0.1, 0.5, 0.9, 1.0})
172
+ .add_int64_axis("determinisic", {0, 1});
173
+
174
+ auto bench_top_k_sampling_with_probability_f32 = bench_top_k_sampling_with_probability<float>;
175
+ NVBENCH_BENCH(bench_top_k_sampling_with_probability_f32)
176
+ .set_name("bench_top_k_sampling_with_probability_f32")
177
+ .add_int64_axis("batch_size", {16, 32, 128, 512, 2048})
178
+ .add_int64_axis("vocab_size", {32000, 32001, 32002, 128000, 256000})
179
+ .add_int64_axis("k", {16, 32, 128, 1024})
180
+ .add_int64_axis("determinisic", {0, 1});
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_single_decode.cu ADDED
@@ -0,0 +1,141 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <nvbench/nvbench.cuh>
19
+
20
+ #include "flashinfer_ops.cuh"
21
+
22
+ using flashinfer::PosEncodingMode;
23
+ using flashinfer::QKVLayout;
24
+
25
+ template <typename dtype_qo, typename dtype_kv>
26
+ void bench_flashinfer_single_decode(nvbench::state& state) {
27
+ size_t seq_len = state.get_int64("seq_len");
28
+ size_t num_qo_heads = state.get_int64("num_qo_heads");
29
+ size_t num_kv_heads = state.get_int64("num_kv_heads");
30
+ size_t head_dim = state.get_int64("head_dim");
31
+ size_t pos_encoding_mode = state.get_int64("pos_encoding_mode");
32
+ size_t kv_layout = state.get_int64("kv_layout");
33
+ bool cooperative = state.get_int64("cooperative");
34
+ // Allocate input data:
35
+ thrust::device_vector<dtype_qo> Q(num_qo_heads * head_dim);
36
+ thrust::device_vector<dtype_kv> K(seq_len * num_kv_heads * head_dim);
37
+ thrust::device_vector<dtype_kv> V(seq_len * num_kv_heads * head_dim);
38
+ thrust::device_vector<dtype_qo> O(num_qo_heads * head_dim);
39
+ thrust::device_vector<dtype_qo> tmp(16 * 1024 * 1024);
40
+
41
+ // Provide throughput information:
42
+ state.add_global_memory_reads<dtype_kv>(
43
+ num_qo_heads * head_dim + 2 * seq_len * num_kv_heads * head_dim, "Read");
44
+ state.add_global_memory_writes<dtype_qo>(num_qo_heads * head_dim, "Write");
45
+
46
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
47
+ timer.start();
48
+ cudaError_t status = flashinfer::SingleDecodeWithKVCache(
49
+ thrust::raw_pointer_cast(Q.data()), thrust::raw_pointer_cast(K.data()),
50
+ thrust::raw_pointer_cast(V.data()), thrust::raw_pointer_cast(O.data()),
51
+ cooperative ? thrust::raw_pointer_cast(tmp.data()) : nullptr, num_qo_heads, num_kv_heads,
52
+ seq_len, head_dim, QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode),
53
+ /*maybe_sm_scale=*/std::nullopt,
54
+ /*rope_scale=*/1.f,
55
+ /*rope_theta=*/1e4, launch.get_stream());
56
+ if (status != cudaSuccess) {
57
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
58
+ }
59
+ timer.stop();
60
+ });
61
+ }
62
+
63
+ // Use prefill kernel for decoding, useful in GQA on GPUs with low non-tensor performance such as
64
+ // A100
65
+ template <typename dtype_in, typename dtype_out>
66
+ void bench_flashinfer_single_decode_with_prefill(nvbench::state& state) {
67
+ size_t seq_len = state.get_int64("seq_len");
68
+ size_t num_qo_heads = state.get_int64("num_qo_heads");
69
+ size_t num_kv_heads = state.get_int64("num_kv_heads");
70
+ size_t head_dim = state.get_int64("head_dim");
71
+ size_t pos_encoding_mode = state.get_int64("pos_encoding_mode");
72
+ size_t kv_layout = state.get_int64("kv_layout");
73
+ bool cooperative = state.get_int64("cooperative");
74
+ // Allocate input data:
75
+ thrust::device_vector<dtype_in> Q(num_qo_heads * head_dim);
76
+ thrust::device_vector<dtype_in> K(seq_len * num_kv_heads * head_dim);
77
+ thrust::device_vector<dtype_in> V(seq_len * num_kv_heads * head_dim);
78
+ thrust::device_vector<dtype_out> O(num_qo_heads * head_dim);
79
+ thrust::device_vector<dtype_out> tmp(16 * 1024 * 1024);
80
+
81
+ // Provide throughput information:
82
+ state.add_global_memory_reads<dtype_in>(
83
+ num_qo_heads * head_dim + 2 * seq_len * num_kv_heads * head_dim, "Read");
84
+ state.add_global_memory_writes<dtype_out>(num_qo_heads * head_dim, "Write");
85
+
86
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
87
+ timer.start();
88
+ cudaError_t status = flashinfer::SinglePrefillWithKVCache(
89
+ thrust::raw_pointer_cast(Q.data()), thrust::raw_pointer_cast(K.data()),
90
+ thrust::raw_pointer_cast(V.data()), thrust::raw_pointer_cast(O.data()),
91
+ /*tmp=*/cooperative ? thrust::raw_pointer_cast(tmp.data()) : nullptr,
92
+ /*lse=*/nullptr, num_qo_heads, num_kv_heads,
93
+ /*qo_len=*/1,
94
+ /*kv_len=*/seq_len, head_dim,
95
+ /*causal=*/false, QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode),
96
+ /*use_fp16_qk_reduction=*/false,
97
+ /*maybe_sm_scale=*/std::nullopt,
98
+ /*rope_scale=*/1.f,
99
+ /*rope_theta=*/1e4, launch.get_stream());
100
+ if (status != cudaSuccess) {
101
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
102
+ }
103
+ timer.stop();
104
+ });
105
+ }
106
+
107
+ #define STR_HELPER(x) #x
108
+ #define STR(x) STR_HELPER(x)
109
+ #define BENCH_FLASHINFER_SINGLE_DECODE(dtype_qo, dtype_kv) \
110
+ auto bench_flashinfer_single_decode_##dtype_qo##_##dtype_kv##_ = \
111
+ bench_flashinfer_single_decode<dtype_qo, dtype_kv>; \
112
+ NVBENCH_BENCH(bench_flashinfer_single_decode_##dtype_qo##_##dtype_kv##_) \
113
+ .set_name(("bench_flashinfer_single_decode_" STR(dtype_qo) "_" STR(dtype_kv))) \
114
+ .add_int64_axis("seq_len", \
115
+ {32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}) \
116
+ .add_int64_axis("num_qo_heads", {32}) \
117
+ .add_int64_axis("num_kv_heads", {32, 4}) \
118
+ .add_int64_axis("head_dim", {128}) \
119
+ .add_int64_axis("pos_encoding_mode", {0, 1}) \
120
+ .add_int64_axis("kv_layout", {0, 1}) \
121
+ .add_int64_axis("cooperative", {1})
122
+
123
+ #define BENCH_FLASHINFER_SINGLE_DECODE_WITH_PREFILL(dtype_in, dtype_out) \
124
+ auto bench_flashinfer_single_decode_with_prefill_##dtype_in##_##dtype_out##_ = \
125
+ bench_flashinfer_single_decode_with_prefill<dtype_in, dtype_out>; \
126
+ NVBENCH_BENCH(bench_flashinfer_single_decode_with_prefill_##dtype_in##_##dtype_out##_) \
127
+ .set_name(("bench_flashinfer_single_decode_with_prefill_" STR(dtype_in) "_" STR(dtype_out))) \
128
+ .add_int64_axis("seq_len", \
129
+ {32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}) \
130
+ .add_int64_axis("num_qo_heads", {32}) \
131
+ .add_int64_axis("num_kv_heads", {32, 4}) \
132
+ .add_int64_axis("head_dim", {128}) \
133
+ .add_int64_axis("pos_encoding_mode", {0, 1}) \
134
+ .add_int64_axis("kv_layout", {0, 1}) \
135
+ .add_int64_axis("cooperative", {1})
136
+
137
+ BENCH_FLASHINFER_SINGLE_DECODE(half, half);
138
+ BENCH_FLASHINFER_SINGLE_DECODE(half, __nv_fp8_e5m2);
139
+ // Use prefill kernel for decoding, useful in GQA on GPUs with low non-tensor performance such as
140
+ // A100
141
+ BENCH_FLASHINFER_SINGLE_DECODE_WITH_PREFILL(half, half);
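Single-request decode is bandwidth-bound, which is why the benchmark above declares its traffic as one Q/O vector per head plus the full K and V caches; a lower bound on achievable latency is simply that byte count divided by memory bandwidth. A small helper that restates the accounting (a sketch; bandwidth_gbps is an assumed hardware figure, not something flashinfer reports):

#include <cstddef>

// Bytes moved per decode step, mirroring the add_global_memory_reads/writes calls above.
size_t decode_bytes(size_t seq_len, size_t num_qo_heads, size_t num_kv_heads,
                    size_t head_dim, size_t elem_size) {
  size_t read = (num_qo_heads * head_dim + 2 * seq_len * num_kv_heads * head_dim) * elem_size;
  size_t write = num_qo_heads * head_dim * elem_size;
  return read + write;
}

// Roofline-style latency estimate in microseconds for a given memory bandwidth (GB/s).
double decode_latency_us(size_t bytes, double bandwidth_gbps) {
  return static_cast<double>(bytes) / (bandwidth_gbps * 1e9) * 1e6;
}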
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/bench_single_prefill.cu ADDED
@@ -0,0 +1,217 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <thrust/device_vector.h>
17
+
18
+ #include <nvbench/nvbench.cuh>
19
+
20
+ #include "flashinfer_ops.cuh"
21
+
22
+ using flashinfer::PosEncodingMode;
23
+ using flashinfer::QKVLayout;
24
+
25
+ inline uint32_t ceil_div(uint32_t a, uint32_t b) { return (a + b - 1) / b; }
26
+
27
+ template <bool append>
28
+ void bench_flashinfer_single_prefill_fp8(nvbench::state& state) {
29
+ size_t kv_len = state.get_int64("kv_len");
30
+ size_t qo_len = kv_len;
31
+ if (append) {
32
+ qo_len = state.get_int64("qo_len");
33
+ if (qo_len > kv_len) {
34
+ state.skip("qo_len > kv_len");
35
+ }
36
+ }
37
+ size_t num_qo_heads = state.get_int64("num_qo_heads");
38
+ size_t num_kv_heads = state.get_int64("num_kv_heads");
39
+ size_t head_dim = state.get_int64("head_dim");
40
+ size_t pos_encoding_mode = state.get_int64("pos_encoding_mode");
41
+ size_t kv_layout = state.get_int64("kv_layout");
42
+ bool causal = state.get_int64("causal");
43
+ bool cooperative = state.get_int64("cooperative");
44
+ bool use_fp16_qk_reduction = state.get_int64("use_fp16_qk_reduction");
45
+ // Allocate input data:
46
+ thrust::device_vector<half> Q(qo_len * num_qo_heads * head_dim);
47
+ thrust::device_vector<__nv_fp8_e4m3> K(kv_len * num_kv_heads * head_dim);
48
+ thrust::device_vector<__nv_fp8_e4m3> V(kv_len * num_kv_heads * head_dim);
49
+ thrust::device_vector<half> O(qo_len * num_qo_heads * head_dim);
50
+ thrust::device_vector<half> tmp(16 * 1024 * 1024);
51
+
52
+ // Provide throughput information:
53
+ state.add_global_memory_reads<uint8_t>(
54
+ (qo_len * num_qo_heads * sizeof(half) + 2 * kv_len * num_kv_heads) * head_dim, "Read");
55
+ state.add_global_memory_writes<half>(qo_len * num_qo_heads * head_dim, "Write");
56
+
57
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
58
+ timer.start();
59
+ cudaError_t status;
60
+ status = flashinfer::SinglePrefillWithKVCache<half, __nv_fp8_e4m3, half>(
61
+ thrust::raw_pointer_cast(Q.data()), thrust::raw_pointer_cast(K.data()),
62
+ thrust::raw_pointer_cast(V.data()), thrust::raw_pointer_cast(O.data()),
63
+ /*tmp=*/cooperative ? thrust::raw_pointer_cast(tmp.data()) : nullptr,
64
+ /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len, head_dim, causal,
65
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction,
66
+ /*maybe_sm_scale=*/std::nullopt,
67
+ /*rope_scale=*/1.f,
68
+ /*rope_theta=*/1e4, launch.get_stream());
69
+ if (status != cudaSuccess) {
70
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
71
+ }
72
+ timer.stop();
73
+ });
74
+
75
+ const auto measured_mean = static_cast<nvbench::float32_t>(
76
+ state.get_summary("nv/cold/time/gpu/mean").get_float64("value"));
77
+ auto& summ = state.add_summary("nv/tflops");
78
+ summ.set_string("description", "Achieved TFlops/s");
79
+ summ.set_string("name", "TFlops/s");
80
+ float tflops;
81
+ if (causal) {
82
+ tflops = qo_len * (2 * kv_len - qo_len) * 2 * num_qo_heads * head_dim / measured_mean / 1e12;
83
+ } else {
84
+ tflops = qo_len * kv_len * 4 * num_qo_heads * head_dim / measured_mean / 1e12;
85
+ }
86
+ summ.set_float64("value", tflops);
87
+ }
88
+
89
+ template <typename dtype_in, typename dtype_out, bool append>
90
+ void bench_flashinfer_single_prefill(nvbench::state& state) {
91
+ size_t kv_len = state.get_int64("kv_len");
92
+ size_t qo_len = kv_len;
93
+ if (append) {
94
+ qo_len = state.get_int64("qo_len");
95
+ if (qo_len > kv_len) {
96
+ state.skip("qo_len > kv_len");
97
+ }
98
+ }
99
+ size_t num_qo_heads = state.get_int64("num_qo_heads");
100
+ size_t num_kv_heads = state.get_int64("num_kv_heads");
101
+ size_t head_dim = state.get_int64("head_dim");
102
+ size_t pos_encoding_mode = state.get_int64("pos_encoding_mode");
103
+ size_t kv_layout = state.get_int64("kv_layout");
104
+ bool causal = state.get_int64("causal");
105
+ bool cooperative = state.get_int64("cooperative");
106
+ bool custom_mask = state.get_int64("custom_mask");
107
+ bool use_fp16_qk_reduction = state.get_int64("use_fp16_qk_reduction");
108
+ // Allocate input data:
109
+ thrust::device_vector<dtype_in> Q(qo_len * num_qo_heads * head_dim);
110
+ thrust::device_vector<dtype_in> K(kv_len * num_kv_heads * head_dim);
111
+ thrust::device_vector<dtype_in> V(kv_len * num_kv_heads * head_dim);
112
+ thrust::device_vector<uint8_t> mask(ceil_div(qo_len * kv_len, 8));
113
+ thrust::device_vector<dtype_out> O(qo_len * num_qo_heads * head_dim);
114
+ thrust::device_vector<dtype_out> tmp(16 * 1024 * 1024);
115
+
116
+ // Provide throughput information:
117
+ state.add_global_memory_reads<dtype_in>(
118
+ (qo_len * num_qo_heads + 2 * kv_len * num_kv_heads) * head_dim, "Read");
119
+ state.add_global_memory_writes<dtype_out>(qo_len * num_qo_heads * head_dim, "Write");
120
+
121
+ state.exec(nvbench::exec_tag::timer, [&](nvbench::launch& launch, auto& timer) {
122
+ timer.start();
123
+ cudaError_t status;
124
+ if (custom_mask) {
125
+ status = flashinfer::SinglePrefillWithKVCacheCustomMask<dtype_in, dtype_out>(
126
+ thrust::raw_pointer_cast(Q.data()), thrust::raw_pointer_cast(K.data()),
127
+ thrust::raw_pointer_cast(V.data()), thrust::raw_pointer_cast(mask.data()),
128
+ thrust::raw_pointer_cast(O.data()),
129
+ /*tmp=*/cooperative ? thrust::raw_pointer_cast(tmp.data()) : nullptr,
130
+ /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len, head_dim,
131
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction,
132
+ /*maybe_sm_scale=*/std::nullopt,
133
+ /*rope_scale=*/1.f,
134
+ /*rope_theta=*/1e4, launch.get_stream());
135
+ } else {
136
+ status = flashinfer::SinglePrefillWithKVCache<dtype_in, dtype_in, dtype_out>(
137
+ thrust::raw_pointer_cast(Q.data()), thrust::raw_pointer_cast(K.data()),
138
+ thrust::raw_pointer_cast(V.data()), thrust::raw_pointer_cast(O.data()),
139
+ /*tmp=*/cooperative ? thrust::raw_pointer_cast(tmp.data()) : nullptr,
140
+ /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len, head_dim, causal,
141
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction,
142
+ /*maybe_sm_scale=*/std::nullopt,
143
+ /*rope_scale=*/1.f,
144
+ /*rope_theta=*/1e4, launch.get_stream());
145
+ }
146
+ if (status != cudaSuccess) {
147
+ state.skip("CUDA error: " + std::string(cudaGetErrorString(status)));
148
+ }
149
+ timer.stop();
150
+ });
151
+
152
+ const auto measured_mean = static_cast<nvbench::float32_t>(
153
+ state.get_summary("nv/cold/time/gpu/mean").get_float64("value"));
154
+ auto& summ = state.add_summary("nv/tflops");
155
+ summ.set_string("description", "Achieved TFlops/s");
156
+ summ.set_string("name", "TFlops/s");
157
+ float tflops;
158
+ if (causal) {
159
+ tflops = qo_len * (2 * kv_len - qo_len) * 2 * num_qo_heads * head_dim / measured_mean / 1e12;
160
+ } else {
161
+ tflops = qo_len * kv_len * 4 * num_qo_heads * head_dim / measured_mean / 1e12;
162
+ }
163
+ summ.set_float64("value", tflops);
164
+ }
165
+
166
+ #define STR_HELPER(x) #x
167
+ #define STR(x) STR_HELPER(x)
168
+ #define BENCH_FLASHINFER_PREFILL(dtype_in, dtype_out) \
169
+ auto bench_flashinfer_single_prefill_##dtype_in##_##dtype_out##_ = \
170
+ bench_flashinfer_single_prefill<dtype_in, dtype_out, false>; \
171
+ NVBENCH_BENCH(bench_flashinfer_single_prefill_##dtype_in##_##dtype_out##_) \
172
+ .set_name(("bench_flashinfer_single_prefill_" STR(dtype_in) "_" STR(dtype_out))) \
173
+ .add_int64_axis("kv_len", \
174
+ {32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}) \
175
+ .add_int64_axis("num_qo_heads", {32}) \
176
+ .add_int64_axis("num_kv_heads", {32}) \
177
+ .add_int64_axis("head_dim", {128}) \
178
+ .add_int64_axis("causal", {0, 1}) \
179
+ .add_int64_axis("kv_layout", {0, 1}) \
180
+ .add_int64_axis("pos_encoding_mode", {0, 1}) \
181
+ .add_int64_axis("use_fp16_qk_reduction", {0, 1}) \
182
+ .add_int64_axis("custom_mask", {0}) \
183
+ .add_int64_axis("cooperative", {1})
184
+
185
+ auto bench_flashinfer_single_prefill_fp8_kv = bench_flashinfer_single_prefill_fp8<false>;
186
+ NVBENCH_BENCH(bench_flashinfer_single_prefill_fp8_kv)
187
+ .set_name(("bench_flashinfer_single_prefill_fp8_kv"))
188
+ .add_int64_axis("kv_len", {32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536})
189
+ .add_int64_axis("num_qo_heads", {32})
190
+ .add_int64_axis("num_kv_heads", {32})
191
+ .add_int64_axis("head_dim", {128})
192
+ .add_int64_axis("causal", {0, 1})
193
+ .add_int64_axis("kv_layout", {0, 1})
194
+ .add_int64_axis("pos_encoding_mode", {0, 1})
195
+ .add_int64_axis("use_fp16_qk_reduction", {0, 1})
196
+ .add_int64_axis("custom_mask", {0})
197
+ .add_int64_axis("cooperative", {1});
198
+
199
+ #define BENCH_FLASHINFER_APPEND_PREFILL(dtype_in, dtype_out) \
200
+ auto bench_flashinfer_single_append_prefill_##dtype_in##_##dtype_out##_ = \
201
+ bench_flashinfer_single_prefill<dtype_in, dtype_out, true>; \
202
+ NVBENCH_BENCH(bench_flashinfer_single_append_prefill_##dtype_in##_##dtype_out##_) \
203
+ .set_name(("bench_flashinfer_single_append_prefill_" STR(dtype_in) "_" STR(dtype_out))) \
204
+ .add_int64_axis("qo_len", {128}) \
205
+ .add_int64_axis("kv_len", {128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}) \
206
+ .add_int64_axis("num_qo_heads", {32}) \
207
+ .add_int64_axis("num_kv_heads", {32}) \
208
+ .add_int64_axis("head_dim", {128}) \
209
+ .add_int64_axis("causal", {0, 1}) \
210
+ .add_int64_axis("kv_layout", {0, 1}) \
211
+ .add_int64_axis("pos_encoding_mode", {0, 1}) \
212
+ .add_int64_axis("use_fp16_qk_reduction", {0, 1}) \
213
+ .add_int64_axis("custom_mask", {0}) \
214
+ .add_int64_axis("cooperative", {0, 1})
215
+
216
+ BENCH_FLASHINFER_PREFILL(half, half);
217
+ BENCH_FLASHINFER_APPEND_PREFILL(half, half);
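The TFLOPs summary in both prefill benchmarks comes from counting 4 floating-point operations per (query, key) pair per feature: 2 for the QK^T multiply-accumulate and 2 for the PV multiply-accumulate. With a causal mask, query position q_idx attends to only kv_len - qo_len + q_idx + 1 keys, which sums to roughly qo_len * (2 * kv_len - qo_len) / 2 pairs and gives the halved expression in the code. Restated as a standalone helper (a sketch mirroring the in-benchmark formulas):

#include <cstddef>

// FLOP count for single-request attention, matching the benchmark's accounting.
double attention_flops(size_t qo_len, size_t kv_len, size_t num_qo_heads, size_t head_dim,
                       bool causal) {
  double pairs = causal ? 0.5 * qo_len * (2.0 * kv_len - qo_len)  // triangular attended region
                        : 1.0 * qo_len * kv_len;                  // full rectangle
  return pairs * 4.0 * num_qo_heads * head_dim;                   // 2 GEMMs, 2 flops each
}

// Achieved TFLOP/s given a measured mean kernel time in seconds.
double achieved_tflops(double flops, double seconds) { return flops / seconds / 1e12; }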
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/cpu_reference.h ADDED
@@ -0,0 +1,192 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #pragma once
17
+
18
+ #include <flashinfer/exception.h>
19
+
20
+ #include <flashinfer/page.cuh>
21
+ #include <flashinfer/pos_enc.cuh>
22
+
23
+ #include "utils.h"
24
+
25
+ namespace cpu_reference {
26
+
27
+ using namespace flashinfer;
28
+
29
+ template <typename T>
30
+ inline std::vector<T> rms_norm(const T* input, const T* weight, size_t batch_size, size_t d,
31
+ float eps = 1e-5) {
32
+ std::vector<T> output(batch_size * d);
33
+ for (size_t i = 0; i < batch_size; ++i) {
34
+ float sum = 0;
35
+ for (size_t j = 0; j < d; ++j) {
36
+ sum += float(input[i * d + j]) * float(input[i * d + j]);
37
+ }
38
+ float rms_rcp = 1.f / (std::sqrt(sum / float(d)) + eps);
39
+ for (size_t j = 0; j < d; ++j) {
40
+ output[i * d + j] = (float(input[i * d + j]) * rms_rcp) * float(weight[j]);
41
+ }
42
+ }
43
+ return output;
44
+ }
45
+
46
+ template <typename T>
47
+ inline std::vector<T> exclusive_prefix_sum(const T* input, size_t batch_size, size_t d) {
48
+ std::vector<T> output(batch_size * d);
49
+ for (size_t i = 0; i < batch_size; ++i) {
50
+ for (size_t j = 0; j < d; ++j) {
51
+ output[i * d + j] = (j == 0) ? 0 : output[i * d + j - 1] + input[i * d + j - 1];
52
+ }
53
+ }
54
+ return output;
55
+ }
56
+
57
+ template <typename T>
58
+ inline std::vector<float> apply_llama_rope(const T* input, size_t D, size_t offset,
59
+ float rope_scale, float rope_theta) {
60
+ std::vector<float> rst(D);
61
+ std::vector<float> permuted_input(D);
62
+ for (size_t k = 0; k < D; ++k) {
63
+ permuted_input[k] = (k < D / 2) ? -float(input[k + D / 2]) : float(input[k - D / 2]);
64
+ }
65
+
66
+ for (size_t k = 0; k < D; ++k) {
67
+ float inv_freq =
68
+ (offset / rope_scale) / (std::pow(rope_theta, float(2 * (k % (D / 2))) / float(D)));
69
+ float cos = std::cos(inv_freq);
70
+ float sin = std::sin(inv_freq);
71
+ rst[k] = cos * float(input[k]) + sin * permuted_input[k];
72
+ }
73
+ return rst;
74
+ }
75
+
76
+ template <typename dtype_q, typename dtype_kv, typename dtype_out>
77
+ std::vector<dtype_out> single_mha(const std::vector<dtype_q>& q, const std::vector<dtype_kv>& k,
78
+ const std::vector<dtype_kv>& v, size_t qo_len, size_t kv_len,
79
+ size_t num_qo_heads, size_t num_kv_heads, size_t head_dim,
80
+ bool causal = true, QKVLayout kv_layout = QKVLayout::kHND,
81
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone,
82
+ float rope_scale = 1.f, float rope_theta = 1e4) {
83
+ assert(qo_len <= kv_len);
84
+ assert(num_qo_heads % num_kv_heads == 0);
85
+ float sm_scale = 1.f / std::sqrt(float(head_dim));
86
+ std::vector<dtype_out> o(qo_len * num_qo_heads * head_dim);
87
+ std::vector<float> att(kv_len);
88
+ std::vector<float> q_rotary_local(head_dim);
89
+ std::vector<float> k_rotary_local(head_dim);
90
+ DISPATCH_head_dim(head_dim, HEAD_DIM, {
91
+ tensor_info_t info(qo_len, kv_len, num_qo_heads, num_kv_heads, kv_layout, HEAD_DIM);
92
+ for (size_t qo_head_idx = 0; qo_head_idx < num_qo_heads; ++qo_head_idx) {
93
+ const size_t kv_head_idx = qo_head_idx / info.get_group_size();
94
+ for (size_t q_idx = 0; q_idx < qo_len; ++q_idx) {
95
+ float max_val = -5e4;
96
+ if (pos_encoding_mode == PosEncodingMode::kRoPELlama) {
97
+ q_rotary_local = std::move(cpu_reference::apply_llama_rope(
98
+ q.data() + info.get_q_elem_offset(q_idx, qo_head_idx, 0), head_dim,
99
+ q_idx + kv_len - qo_len, rope_scale, rope_theta));
100
+ }
101
+ for (size_t kv_idx = 0; kv_idx < kv_len; ++kv_idx) {
102
+ att[kv_idx] = 0.;
103
+ switch (pos_encoding_mode) {
104
+ case PosEncodingMode::kNone: {
105
+ for (size_t feat_idx = 0; feat_idx < head_dim; ++feat_idx) {
106
+ att[kv_idx] += float(q[info.get_q_elem_offset(q_idx, qo_head_idx, feat_idx)]) *
107
+ float(k[info.get_kv_elem_offset(kv_idx, kv_head_idx, feat_idx)]) *
108
+ sm_scale;
109
+ }
110
+ break;
111
+ }
112
+ case PosEncodingMode::kRoPELlama: {
113
+ k_rotary_local = std::move(cpu_reference::apply_llama_rope(
114
+ k.data() + info.get_kv_elem_offset(kv_idx, kv_head_idx, 0), head_dim, kv_idx,
115
+ rope_scale, rope_theta));
116
+ for (size_t feat_idx = 0; feat_idx < head_dim; ++feat_idx) {
117
+ att[kv_idx] += q_rotary_local[feat_idx] * k_rotary_local[feat_idx] * sm_scale;
118
+ }
119
+ break;
120
+ }
121
+ default: {
122
+ std::ostringstream err_msg;
123
+ err_msg << "Unsupported rotary mode.";
124
+ FLASHINFER_ERROR(err_msg.str());
125
+ }
126
+ }
127
+ // apply mask
128
+ if (causal && kv_idx > kv_len + q_idx - qo_len) {
129
+ att[kv_idx] = -5e4;
130
+ }
131
+ max_val = std::max(max_val, att[kv_idx]);
132
+ }
133
+ // exp minus max
134
+ float denom = 0;
135
+ for (size_t kv_idx = 0; kv_idx < kv_len; ++kv_idx) {
136
+ att[kv_idx] = std::exp(att[kv_idx] - max_val);
137
+ denom += att[kv_idx];
138
+ }
139
+
140
+ // divide by denom
141
+ for (size_t kv_idx = 0; kv_idx < kv_len; ++kv_idx) {
142
+ att[kv_idx] /= denom;
143
+ }
144
+
145
+ for (size_t feat_idx = 0; feat_idx < head_dim; ++feat_idx) {
146
+ float o_float = 0.;
147
+ for (size_t kv_idx = 0; kv_idx < kv_len; ++kv_idx) {
148
+ o_float +=
149
+ att[kv_idx] * float(v[info.get_kv_elem_offset(kv_idx, kv_head_idx, feat_idx)]);
150
+ }
151
+ o[info.get_o_elem_offset(q_idx, qo_head_idx, feat_idx)] = dtype_out(o_float);
152
+ }
153
+ }
154
+ }
155
+ });
156
+ return o;
157
+ }
158
+
159
+ template <typename T, typename IdxType>
160
+ void append_paged_kv_cache(paged_kv_t<T, IdxType> page_cpu, const std::vector<std::vector<T>>& keys,
161
+ const std::vector<std::vector<T>>& values,
162
+ const std::vector<IdxType>& append_indptr) {
163
+ size_t batch_size = page_cpu.batch_size;
164
+ size_t num_heads = page_cpu.num_heads;
165
+ size_t head_dim = page_cpu.head_dim;
166
+ size_t page_size = page_cpu.page_size;
167
+ for (size_t i = 0; i < batch_size; ++i) {
168
+ const std::vector<T>& ki = keys[i];
169
+ const std::vector<T>& vi = values[i];
170
+ size_t append_seq_len = append_indptr[i + 1] - append_indptr[i];
171
+ size_t num_pages_i = page_cpu.indptr[i + 1] - page_cpu.indptr[i];
172
+ size_t seq_len = (num_pages_i - 1) * page_size + page_cpu.last_page_len[i];
173
+ assert(append_seq_len <= seq_len);
174
+ size_t append_start = seq_len - append_seq_len;
175
+
176
+ for (size_t j = 0; j < append_seq_len; ++j) {
177
+ size_t page_seq_idx = j + append_start;
178
+ size_t page_idx = page_cpu.indices[page_cpu.indptr[i] + page_seq_idx / page_size];
179
+ size_t entry_idx = page_seq_idx % page_size;
180
+ for (size_t h = 0; h < num_heads; ++h) {
181
+ std::copy(ki.begin() + (j * num_heads + h) * head_dim,
182
+ ki.begin() + (j * num_heads + h + 1) * head_dim,
183
+ page_cpu.k_data + page_cpu.get_elem_offset(page_idx, h, entry_idx, 0));
184
+ std::copy(vi.begin() + (j * num_heads + h) * head_dim,
185
+ vi.begin() + (j * num_heads + h + 1) * head_dim,
186
+ page_cpu.v_data + page_cpu.get_elem_offset(page_idx, h, entry_idx, 0));
187
+ }
188
+ }
189
+ }
190
+ }
191
+
192
+ } // namespace cpu_reference
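append_paged_kv_cache above encodes the addressing rule for the paged KV cache: request i owns pages indices[indptr[i]] .. indices[indptr[i+1]-1], each holding page_size tokens, with only last_page_len[i] slots used on the final page, so token position t of request i lives in page indices[indptr[i] + t / page_size] at entry t % page_size. A tiny standalone restatement of that mapping (a sketch; the struct and function names are illustrative, though the field names match paged_kv_t as used above):

#include <cstdint>
#include <vector>

struct PagedLocation {
  int32_t page_idx;   // which physical page holds the token
  int32_t entry_idx;  // slot within that page
};

// Map (request i, token position t) to a physical page slot.
PagedLocation locate_token(const std::vector<int32_t>& indptr,
                           const std::vector<int32_t>& indices,
                           int32_t page_size, int32_t i, int32_t t) {
  PagedLocation loc;
  loc.page_idx = indices[indptr[i] + t / page_size];
  loc.entry_idx = t % page_size;
  return loc;
}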
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/flashinfer_ops.cuh ADDED
@@ -0,0 +1,647 @@
1
+ /*
2
+ * Copyright (c) 2024 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <flashinfer/attention/default_decode_params.cuh>
17
+ #include <flashinfer/attention/default_prefill_params.cuh>
18
+ #include <flashinfer/attention/scheduler.cuh>
19
+ #include <flashinfer/attention/variants.cuh>
20
+ #include <optional>
21
+
22
+ #include "flashinfer/allocator.h"
23
+ #include "flashinfer/attention/mask.cuh"
24
+ #include "flashinfer/attention/scheduler.cuh"
25
+ #include "flashinfer/exception.h"
26
+ #include "flashinfer/layout.cuh"
27
+ #include "utils.h"
28
+
29
+ namespace flashinfer {
30
+
31
+ template <uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE, typename AttentionVariant,
32
+ typename Params>
33
+ cudaError_t BatchDecodeWithPagedKVCacheDispatched(Params params, typename Params::DTypeO* tmp_v,
34
+ float* tmp_s, cudaStream_t stream);
35
+
36
+ template <uint32_t HEAD_DIM_CKV, uint32_t HEAD_DIM_KPE, typename AttentionVariant, typename Params>
37
+ cudaError_t BatchDecodeWithPagedKVCacheDispatchedMLA(Params params, typename Params::DTypeO* tmp_v,
38
+ float* tmp_s, cudaStream_t stream);
39
+
40
+ class BatchDecodeHandler {
41
+ public:
42
+ template <uint32_t GROUP_SIZE, uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE,
43
+ typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
44
+ cudaError_t PlanDispatched(void* float_buffer, size_t float_workspace_size_in_bytes,
45
+ void* int_buffer, size_t int_workspace_size_in_bytes, IdType* indptr_h,
46
+ IdType* last_page_len_h, uint32_t batch_size, uint32_t num_qo_heads,
47
+ uint32_t page_size) {
48
+ int_buffer_ = int_buffer;
49
+ float_buffer_ = float_buffer;
50
+ using Params = BatchDecodeParams<DTypeQ, DTypeKV, DTypeO, IdType>;
51
+ using AttentionVariant =
52
+ DefaultAttention</*use_custom_mask=*/false, /*use_sliding_window=*/false,
53
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
54
+
55
+ auto work_estimation_func =
56
+ BatchDecodeWithPagedKVCacheWorkEstimationDispatched<GROUP_SIZE, HEAD_DIM, POS_ENCODING_MODE,
57
+ AttentionVariant, Params>;
58
+ return DecodePlan<HEAD_DIM, POS_ENCODING_MODE, AttentionVariant, Params>(
59
+ float_buffer, float_workspace_size_in_bytes, int_buffer, page_locked_buffer_,
60
+ int_workspace_size_in_bytes, plan_info_, indptr_h, batch_size, num_qo_heads, page_size,
61
+ cuda_graph_enabled_, stream_, work_estimation_func);
62
+ }
63
+
64
+ template <uint32_t HEAD_DIM_CKV, uint32_t HEAD_DIM_KPE, typename DTypeQ, typename DTypeKV,
65
+ typename DTypeO, typename IdType>
66
+ cudaError_t PlanDispatchedMLA(void* float_buffer, size_t float_workspace_size_in_bytes,
67
+ void* int_buffer, size_t int_workspace_size_in_bytes,
68
+ IdType* indptr_h, IdType* last_page_len_h, uint32_t batch_size,
69
+ uint32_t num_qo_heads, uint32_t page_size) {
70
+ int_buffer_ = int_buffer;
71
+ float_buffer_ = float_buffer;
72
+ using Params = BatchDecodeParamsMLA<DTypeQ, DTypeKV, DTypeO, IdType>;
73
+ using AttentionVariant =
74
+ DefaultAttention</*use_custom_mask=*/false, /*use_sliding_window=*/false,
75
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
76
+
77
+ auto work_estimation_func =
78
+ BatchDecodeWithPagedKVCacheWorkEstimationDispatchedMLA<HEAD_DIM_CKV, HEAD_DIM_KPE,
79
+ AttentionVariant, Params>;
80
+ return DecodePlan<HEAD_DIM_CKV, flashinfer::PosEncodingMode::kRoPELlama, AttentionVariant,
81
+ Params>(float_buffer, float_workspace_size_in_bytes, int_buffer,
82
+ page_locked_buffer_, int_workspace_size_in_bytes, plan_info_,
83
+ indptr_h, batch_size, num_qo_heads, page_size, cuda_graph_enabled_,
84
+ stream_, work_estimation_func);
85
+ }
86
+
87
+ void UpdatePageLockedBufferSize(size_t int_workspace_size_in_bytes) {
88
+ cudaFreeHost(page_locked_buffer_);
89
+ cudaMallocHost(&page_locked_buffer_, int_workspace_size_in_bytes);
90
+ }
91
+
92
+ cudaStream_t GetCUDAStream() const { return stream_; }
93
+
94
+ void SetCUDAStream(cudaStream_t stream) { stream_ = stream; }
95
+
96
+ /*!
97
+ * \brief Constructor of BatchDecodeHandler
98
+ * \param enable_cuda_graph A boolean indicates whether to enable CUDA graph
99
+ * \param batch_size If enable_cuda_graph is true, we must specify a fixed batch_size
100
+ */
101
+ BatchDecodeHandler(bool enable_cuda_graph = false, uint32_t batch_size = 0)
102
+ : cuda_graph_enabled_(enable_cuda_graph), stream_(nullptr) {
103
+ cudaMallocHost(&page_locked_buffer_, 8 * 1024 * 1024);
104
+ }
105
+ ~BatchDecodeHandler() { cudaFreeHost(page_locked_buffer_); }
106
+
107
+ bool IsCUDAGraphEnabled() const { return cuda_graph_enabled_; }
108
+
109
+ DecodePlanInfo GetPlanInfo() const { return plan_info_; }
110
+
111
+ template <typename IdType>
112
+ IdType* GetRequestIndices() {
113
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.request_indices_offset);
114
+ }
115
+
116
+ template <typename IdType>
117
+ IdType* GetKVTileIndices() {
118
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.kv_tile_indices_offset);
119
+ }
120
+
121
+ template <typename IdType>
122
+ IdType* GetOIndptr() {
123
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.o_indptr_offset);
124
+ }
125
+
126
+ template <typename IdType>
127
+ IdType* GetKVChunkSizePtr() {
128
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.kv_chunk_size_ptr_offset);
129
+ }
130
+
131
+ template <typename DTypeO>
132
+ DTypeO* GetTmpV() {
133
+ if (plan_info_.split_kv) {
134
+ return GetPtrFromBaseOffset<DTypeO>(float_buffer_, plan_info_.v_offset);
135
+ }
136
+ return nullptr;
137
+ }
138
+
139
+ float* GetTmpS() {
140
+ if (plan_info_.split_kv) {
141
+ return GetPtrFromBaseOffset<float>(float_buffer_, plan_info_.s_offset);
142
+ }
143
+ return nullptr;
144
+ }
145
+
146
+ bool* GetBlockValidMask() {
147
+ if (plan_info_.split_kv && plan_info_.enable_cuda_graph) {
148
+ return GetPtrFromBaseOffset<bool>(int_buffer_, plan_info_.block_valid_mask_offset);
149
+ }
150
+ return nullptr;
151
+ }
152
+
153
+ protected:
154
+ void* page_locked_buffer_;
155
+ void* int_buffer_;
156
+ void* float_buffer_;
157
+ DecodePlanInfo plan_info_;
158
+ bool cuda_graph_enabled_;
159
+ cudaStream_t stream_;
160
+ };
161
+
162
+ template <uint32_t CTA_TILE_Q, uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE,
163
+ bool USE_FP16_QK_REDUCTION, MaskMode MASK_MODE, typename AttentionVariant,
164
+ typename Params>
165
+ cudaError_t BatchPrefillWithRaggedKVCacheDispatched(Params params, typename Params::DTypeO* tmp_v,
166
+ float* tmp_s, cudaStream_t stream);
167
+
168
+ template <uint32_t CTA_TILE_Q, uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE,
169
+ bool USE_FP16_QK_REDUCTION, MaskMode MASK_MODE, typename AttentionVariant,
170
+ typename Params>
171
+ cudaError_t BatchPrefillWithPagedKVCacheDispatched(Params params, typename Params::DTypeO* tmp_v,
172
+ float* tmp_s, cudaStream_t stream);
173
+
174
+ class BatchPrefillHandler {
175
+ public:
176
+ void UpdatePageLockedBufferSize(size_t int_workspace_size_in_bytes) {
177
+ cudaFreeHost(page_locked_buffer_);
178
+ cudaMallocHost(&page_locked_buffer_, int_workspace_size_in_bytes);
179
+ }
180
+
181
+ template <typename DTypeO, typename IdType>
182
+ cudaError_t Plan(void* float_buffer, size_t float_workspace_size_in_bytes, void* int_buffer,
183
+ size_t int_workspace_size_in_bytes, IdType* qo_indptr_h, IdType* kv_indptr_h,
184
+ uint32_t total_num_rows, uint32_t batch_size, uint32_t num_qo_heads,
185
+ uint32_t num_kv_heads, uint32_t head_dim, uint32_t page_size) {
186
+ int_buffer_ = int_buffer;
187
+ float_buffer_ = float_buffer;
188
+ return PrefillPlan<IdType>(float_buffer, float_workspace_size_in_bytes, int_buffer,
189
+ page_locked_buffer_, int_workspace_size_in_bytes, plan_info_,
190
+ qo_indptr_h, kv_indptr_h, total_num_rows, batch_size, num_qo_heads,
191
+ num_kv_heads, head_dim, page_size, enable_cuda_graph_,
192
+ sizeof(DTypeO), stream_);
193
+ }
194
+
195
+ cudaStream_t GetCUDAStream() const { return stream_; }
196
+
197
+ void SetCUDAStream(cudaStream_t stream) { stream_ = stream; }
198
+
199
+ bool IsCUDAGraphEnabled() const { return enable_cuda_graph_; }
200
+
201
+ BatchPrefillHandler(bool enable_cuda_graph = false)
202
+ : enable_cuda_graph_(enable_cuda_graph), stream_(nullptr) {
203
+ cudaMallocHost(&page_locked_buffer_, 8 * 1024 * 1024);
204
+ }
205
+ ~BatchPrefillHandler() { cudaFreeHost(page_locked_buffer_); }
206
+
207
+ PrefillPlanInfo GetPlanInfo() const { return plan_info_; }
208
+
209
+ template <typename IdType>
210
+ IdType* GetRequestIndices() {
211
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.request_indices_offset);
212
+ }
213
+
214
+ template <typename IdType>
215
+ IdType* GetQOTileIndices() {
216
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.qo_tile_indices_offset);
217
+ }
218
+
219
+ template <typename IdType>
220
+ IdType* GetKVTileIndices() {
221
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.kv_tile_indices_offset);
222
+ }
223
+
224
+ template <typename IdType>
225
+ IdType* GetOIndptr() {
226
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.o_indptr_offset);
227
+ }
228
+
229
+ template <typename IdType>
230
+ IdType* GetKVChunkSizePtr() {
231
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.kv_chunk_size_ptr_offset);
232
+ }
233
+
234
+ template <typename IdType>
235
+ IdType* GetMergeIndptr() {
236
+ if (plan_info_.split_kv) {
237
+ return GetPtrFromBaseOffset<IdType>(int_buffer_, plan_info_.merge_indptr_offset);
238
+ }
239
+ return nullptr;
240
+ }
241
+
242
+ template <typename DTypeO>
243
+ DTypeO* GetTmpV() {
244
+ if (plan_info_.split_kv) {
245
+ return GetPtrFromBaseOffset<DTypeO>(float_buffer_, plan_info_.v_offset);
246
+ }
247
+ return nullptr;
248
+ }
249
+
250
+ float* GetTmpS() {
251
+ if (plan_info_.split_kv) {
252
+ return GetPtrFromBaseOffset<float>(float_buffer_, plan_info_.s_offset);
253
+ }
254
+ return nullptr;
255
+ }
256
+
257
+ uint32_t* GetTotalNumRows() {
258
+ if (plan_info_.enable_cuda_graph) {
259
+ return GetPtrFromBaseOffset<uint32_t>(int_buffer_, plan_info_.total_num_rows_offset);
260
+ }
261
+ return nullptr;
262
+ }
263
+
264
+ bool* GetBlockValidMask() {
265
+ if (plan_info_.split_kv && plan_info_.enable_cuda_graph) {
266
+ return GetPtrFromBaseOffset<bool>(int_buffer_, plan_info_.block_valid_mask_offset);
267
+ }
268
+ return nullptr;
269
+ }
270
+
271
+ protected:
272
+ void* page_locked_buffer_;
273
+ void* int_buffer_;
274
+ void* float_buffer_;
275
+ PrefillPlanInfo plan_info_;
276
+ bool enable_cuda_graph_;
277
+ cudaStream_t stream_;
278
+ };
279
+
280
+ template <uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE, bool USE_FP16_QK_REDUCTION,
281
+ MaskMode MASK_MODE, typename AttentionVariant, typename Params>
282
+ cudaError_t SinglePrefillWithKVCacheDispatched(Params params, typename Params::DTypeO* tmp,
283
+ cudaStream_t stream);
284
+
285
+ template <typename DTypeIn, typename DTypeO>
286
+ cudaError_t SinglePrefillWithKVCacheCustomMask(
287
+ DTypeIn* q, DTypeIn* k, DTypeIn* v, uint8_t* custom_mask, DTypeO* o, DTypeO* tmp, float* lse,
288
+ uint32_t num_qo_heads, uint32_t num_kv_heads, uint32_t qo_len, uint32_t kv_len,
289
+ uint32_t head_dim, QKVLayout kv_layout = QKVLayout::kNHD,
290
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone, bool use_fp16_qk_reduction = false,
291
+ std::optional<float> maybe_sm_scale = std::nullopt, float rope_scale = 1.f,
292
+ float rope_theta = 1e4, cudaStream_t stream = nullptr) {
293
+ const float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(head_dim)));
294
+ auto [qo_stride_n, qo_stride_h, kv_stride_n, kv_stride_h] =
295
+ get_qkv_strides(kv_layout, kv_len, num_qo_heads, num_kv_heads, head_dim);
296
+ DISPATCH_use_fp16_qk_reduction(
297
+ use_fp16_qk_reduction, USE_FP16_QK_REDUCTION,
298
+ {DISPATCH_head_dim(
299
+ head_dim, HEAD_DIM, {DISPATCH_pos_encoding_mode(pos_encoding_mode, POS_ENCODING_MODE, {
300
+ using Params = SinglePrefillParams<DTypeIn, DTypeIn, DTypeO>;
301
+ using AttentionVariant = DefaultAttention<
302
+ /*use_custom_mask=*/true, /*use_sliding_window=*/false,
303
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
304
+ Params params(q, k, v, custom_mask, o, lse,
305
+ /*alibi_slopes=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len,
306
+ qo_stride_n, qo_stride_h, kv_stride_n, kv_stride_h, head_dim,
307
+ /*window_left=*/-1,
308
+ /*logits_soft_cap=*/0.f, sm_scale, rope_scale, rope_theta);
309
+ return SinglePrefillWithKVCacheDispatched<HEAD_DIM, POS_ENCODING_MODE,
310
+ USE_FP16_QK_REDUCTION, MaskMode::kCustom,
311
+ AttentionVariant>(params, tmp, stream);
312
+ })})});
313
+ return cudaSuccess;
314
+ }
315
+
316
+ /*!
317
+ * \brief FlashAttention prefill CUDA function for a single request.
318
+ * \tparam DTypeIn The data type of input
319
+ * \tparam DTypeO The data type of output
320
+ * \param q The query tensor.
321
+ * \param k The key tensor.
322
+ * \param v The value tensor.
323
+ * \param o The output tensor.
324
+ * \param tmp The temporary storage (only used for cooperative kernel).
325
+ * \param lse The logsumexp values.
326
+ * \param num_qo_heads The number of query and output heads.
327
+ * \param num_kv_heads The number of key and value heads.
328
+ * \param qo_len The length of query and output.
329
+ * \param kv_len The length of key and value.
330
+ * \param head_dim The dimension of each head.
331
+ * \param causal Whether to use causal attention.
332
+ * \param kv_layout The layout of input and output.
333
+ * \param pos_encoding_mode The positional encoding mode.
334
+ * \param use_fp16_qk_reduction Whether to allow accumulating q*k^T with fp16.
335
+ * \param rope_scale The scaling factor used in RoPE interpolation.
336
+ * \param rope_theta The theta used in RoPE.
337
+ * \param stream The cuda stream to execute the kernel on.
338
+ * \return status Indicates whether CUDA calls are successful
339
+ */
340
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO>
341
+ cudaError_t SinglePrefillWithKVCache(DTypeQ* q, DTypeKV* k, DTypeKV* v, DTypeO* o, DTypeO* tmp,
342
+ float* lse, uint32_t num_qo_heads, uint32_t num_kv_heads,
343
+ uint32_t qo_len, uint32_t kv_len, uint32_t head_dim,
344
+ bool causal = true, QKVLayout kv_layout = QKVLayout::kNHD,
345
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone,
346
+ bool use_fp16_qk_reduction = false,
347
+ std::optional<float> maybe_sm_scale = std::nullopt,
348
+ float rope_scale = 1.f, float rope_theta = 1e4,
349
+ cudaStream_t stream = nullptr) {
350
+ const float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(head_dim)));
351
+ const MaskMode mask_mode = causal ? MaskMode::kCausal : MaskMode::kNone;
352
+ auto [qo_stride_n, qo_stride_h, kv_stride_n, kv_stride_h] =
353
+ get_qkv_strides(kv_layout, kv_len, num_qo_heads, num_kv_heads, head_dim);
354
+ DISPATCH_use_fp16_qk_reduction(
355
+ use_fp16_qk_reduction, USE_FP16_QK_REDUCTION,
356
+ {DISPATCH_mask_mode(
357
+ mask_mode, MASK_MODE,
358
+ {DISPATCH_head_dim(
359
+ head_dim, HEAD_DIM,
360
+ {DISPATCH_pos_encoding_mode(pos_encoding_mode, POS_ENCODING_MODE, {
361
+ using Params = SinglePrefillParams<DTypeQ, DTypeKV, DTypeO>;
362
+ using AttentionVariant = DefaultAttention<
363
+ /*use_custom_mask=*/(MASK_MODE == MaskMode::kCustom),
364
+ /*use_sliding_window=*/false,
365
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
366
+ Params params(q, k, v, /*custom_mask=*/nullptr, o, lse,
367
+ /*alibi_slopes=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len,
368
+ qo_stride_n, qo_stride_h, kv_stride_n, kv_stride_h, head_dim,
369
+ /*window_left=*/-1,
370
+ /*logits_soft_cap=*/0.f, sm_scale, rope_scale, rope_theta);
371
+ return SinglePrefillWithKVCacheDispatched<HEAD_DIM, POS_ENCODING_MODE,
372
+ USE_FP16_QK_REDUCTION, MASK_MODE,
373
+ AttentionVariant, Params>(params, tmp,
374
+ stream);
375
+ })})})});
376
+ return cudaSuccess;
377
+ }
378
+
379
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
380
+ cudaError_t BatchPrefillWithRaggedKVCacheWrapper(
381
+ BatchPrefillHandler* handler, DTypeQ* q, IdType* qo_indptr, DTypeKV* k, DTypeKV* v,
382
+ IdType* kv_indptr, IdType* q_rope_offset, IdType* k_rope_offset, DTypeO* o, float* lse,
383
+ const uint32_t batch_size, const uint32_t num_qo_heads, const uint32_t num_kv_heads,
384
+ const uint32_t head_dim, bool causal = true, QKVLayout kv_layout = QKVLayout::kNHD,
385
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone, bool use_fp16_qk_reduction = false,
386
+ std::optional<float> maybe_sm_scale = std::nullopt, const float rope_scale = 1.f,
387
+ const float rope_theta = 1e4, cudaStream_t stream = nullptr) {
388
+ const float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(head_dim)));
389
+ const MaskMode mask_mode = causal ? MaskMode::kCausal : MaskMode::kNone;
390
+ auto [qo_stride_n, qo_stride_h, kv_stride_n, kv_stride_h] =
391
+ get_qkv_strides(kv_layout, 0, num_qo_heads, num_kv_heads, head_dim);
392
+ auto plan_info = handler->GetPlanInfo();
393
+ DISPATCH_head_dim(
394
+ head_dim, HEAD_DIM,
395
+ {DISPATCH_mask_mode(
396
+ mask_mode, MASK_MODE,
397
+ {DISPATCH_pos_encoding_mode(
398
+ pos_encoding_mode, POS_ENCODING_MODE,
399
+ {DISPATCH_use_fp16_qk_reduction(use_fp16_qk_reduction, USE_FP16_QK_REDUCTION, {
400
+ using Params = BatchPrefillRaggedParams<DTypeQ, DTypeKV, DTypeO, IdType>;
401
+ using AttentionVariant = DefaultAttention<
402
+ /*use_custom_mask=*/(MASK_MODE == MaskMode::kCustom),
403
+ /*use_sliding_window=*/false,
404
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
405
+ Params params(q, k, v, /*custom_mask=*/nullptr, qo_indptr, kv_indptr,
406
+ /*mask_indptr=*/nullptr, q_rope_offset, k_rope_offset, o, lse,
407
+ /*alibi_slopes=*/nullptr, num_qo_heads, num_kv_heads, qo_stride_n,
408
+ qo_stride_h, kv_stride_n, kv_stride_h, /*window_left=*/-1,
409
+ /*logits_soft_cap=*/0.f, sm_scale, rope_scale, rope_theta);
410
+ params.request_indices = handler->GetRequestIndices<IdType>();
411
+ params.qo_tile_indices = handler->GetQOTileIndices<IdType>();
412
+ params.kv_tile_indices = handler->GetKVTileIndices<IdType>();
413
+ params.o_indptr = handler->GetOIndptr<IdType>();
414
+ params.kv_chunk_size_ptr = handler->GetKVChunkSizePtr<IdType>();
415
+ params.merge_indptr = handler->GetMergeIndptr<IdType>();
416
+ params.block_valid_mask = handler->GetBlockValidMask();
417
+ params.max_total_num_rows = plan_info.total_num_rows;
418
+ params.total_num_rows = handler->GetTotalNumRows();
419
+ params.padded_batch_size = plan_info.padded_batch_size;
420
+
421
+ DISPATCH_CTA_TILE_Q(plan_info.cta_tile_q, CTA_TILE_Q, {
422
+ BatchPrefillWithRaggedKVCacheDispatched<CTA_TILE_Q, HEAD_DIM, POS_ENCODING_MODE,
423
+ USE_FP16_QK_REDUCTION, MASK_MODE,
424
+ AttentionVariant>(
425
+ params, handler->GetTmpV<DTypeO>(), handler->GetTmpS(), stream);
426
+ });
427
+ })})})});
428
+ return cudaSuccess;
429
+ }
430
+
431
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
432
+ cudaError_t BatchPrefillWithPagedKVCacheWrapper(
433
+ BatchPrefillHandler* handler, DTypeQ* q, IdType* qo_indptr, IdType* q_rope_offset,
434
+ paged_kv_t<DTypeKV, IdType> paged_kv, DTypeO* o, float* lse, uint32_t num_qo_heads,
435
+ bool causal = true, PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone,
436
+ bool use_fp16_qk_reduction = false, std::optional<float> maybe_sm_scale = std::nullopt,
437
+ float rope_scale = 1.f, float rope_theta = 1e4, cudaStream_t stream = nullptr) {
438
+ const float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(paged_kv.head_dim)));
439
+ const uint32_t num_kv_heads = paged_kv.num_heads;
440
+ const uint32_t head_dim = paged_kv.head_dim;
441
+ const MaskMode mask_mode = causal ? MaskMode::kCausal : MaskMode::kNone;
442
+ auto plan_info = handler->GetPlanInfo();
443
+ DISPATCH_head_dim(
444
+ head_dim, HEAD_DIM,
445
+ {DISPATCH_mask_mode(
446
+ mask_mode, MASK_MODE,
447
+ {DISPATCH_pos_encoding_mode(
448
+ pos_encoding_mode, POS_ENCODING_MODE,
449
+ {DISPATCH_use_fp16_qk_reduction(use_fp16_qk_reduction, USE_FP16_QK_REDUCTION, {
450
+ using Params = BatchPrefillPagedParams<DTypeQ, DTypeKV, DTypeO, IdType>;
451
+ using AttentionVariant = DefaultAttention<
452
+ /*use_custom_mask=*/(MASK_MODE == MaskMode::kCustom),
453
+ /*use_sliding_window=*/false,
454
+ /*use_logits_soft_cap=*/false,
455
+ /*use_alibi=*/false>;
456
+ Params params(q, paged_kv, /*custom_mask=*/nullptr, qo_indptr,
457
+ /*mask_indptr=*/nullptr, q_rope_offset, o, lse,
458
+ /*alibi_slopes=*/nullptr, num_qo_heads,
459
+ /*q_stride_n*/ num_qo_heads * HEAD_DIM, /*q_stride_h*/ HEAD_DIM,
460
+ /*window_left=*/-1, /*logits_soft_cap=*/0.f, sm_scale, rope_scale,
461
+ rope_theta);
462
+ params.request_indices = handler->GetRequestIndices<IdType>();
463
+ params.qo_tile_indices = handler->GetQOTileIndices<IdType>();
464
+ params.kv_tile_indices = handler->GetKVTileIndices<IdType>();
465
+ params.o_indptr = handler->GetOIndptr<IdType>();
466
+ params.kv_chunk_size_ptr = handler->GetKVChunkSizePtr<IdType>();
467
+ params.merge_indptr = handler->GetMergeIndptr<IdType>();
468
+ params.block_valid_mask = handler->GetBlockValidMask();
469
+ params.max_total_num_rows = plan_info.total_num_rows;
470
+ params.total_num_rows = handler->GetTotalNumRows();
471
+ params.padded_batch_size = plan_info.padded_batch_size;
472
+ DISPATCH_CTA_TILE_Q(plan_info.cta_tile_q, CTA_TILE_Q, {
473
+ return BatchPrefillWithPagedKVCacheDispatched<
474
+ CTA_TILE_Q, HEAD_DIM, POS_ENCODING_MODE, USE_FP16_QK_REDUCTION, MASK_MODE,
475
+ AttentionVariant>(params, handler->GetTmpV<DTypeO>(), handler->GetTmpS(),
476
+ stream);
477
+ })
478
+ })})})});
479
+ return cudaSuccess;
480
+ }
481
+
482
+ template <uint32_t HEAD_DIM, PosEncodingMode POS_ENCODING_MODE, typename AttentionVariant,
483
+ typename Params>
484
+ cudaError_t SingleDecodeWithKVCacheDispatched(Params params, typename Params::DTypeO* tmp,
485
+ cudaStream_t stream);
486
+
487
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO>
488
+ cudaError_t SingleDecodeWithKVCache(DTypeQ* q, DTypeKV* k, DTypeKV* v, DTypeO* o, DTypeO* tmp,
489
+ uint32_t num_qo_heads, uint32_t num_kv_heads, uint32_t seq_len,
490
+ uint32_t head_dim, QKVLayout kv_layout = QKVLayout::kNHD,
491
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone,
492
+ std::optional<float> maybe_sm_scale = std::nullopt,
493
+ float rope_scale = 1.f, float rope_theta = 1e4,
494
+ cudaStream_t stream = nullptr) {
495
+ float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(head_dim)));
496
+ if (num_qo_heads % num_kv_heads != 0) {
497
+ std::ostringstream err_msg;
498
+ err_msg << "num_qo_heads " << num_qo_heads << " is not a multiple of num_kv_heads "
499
+ << num_kv_heads;
500
+ FLASHINFER_ERROR(err_msg.str());
501
+ }
502
+
503
+ DISPATCH_head_dim(
504
+ head_dim, HEAD_DIM, {DISPATCH_pos_encoding_mode(pos_encoding_mode, POS_ENCODING_MODE, {
505
+ using Params = SingleDecodeParams<DTypeQ, DTypeKV, DTypeO>;
506
+ using AttentionVariant = DefaultAttention<
507
+ /*use_custom_mask=*/false, /*use_sliding_window=*/false,
508
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
509
+ Params params(q, k, v, o, /*alibi_slopes=*/nullptr, seq_len, num_qo_heads, num_kv_heads,
510
+ kv_layout, head_dim, /*window_left=*/-1, /*logits_soft_cap=*/0.f, sm_scale,
511
+ rope_scale, rope_theta);
512
+
513
+ SingleDecodeWithKVCacheDispatched<HEAD_DIM, POS_ENCODING_MODE, AttentionVariant>(
514
+ params, tmp, stream);
515
+ })});
516
+ return cudaSuccess;
517
+ }
518
+
519
+ /*!
520
+ * \brief Wrapper of the BatchDecodeWithPagedKVCache function that caches the temporary buffer
521
+ * for cooperative kernels.
522
+ * \tparam DTypeQ The data type of query tensor.
523
+ * \tparam DTypeKV The data type of key-value tensor.
524
+ * \tparam DTypeO The data type of output tensor.
525
+ * \tparam IdType The data type of index tensor.
526
+ * \param handler The handler for the batch decode forward request.
527
+ * \param q The input tensor.
528
+ * \param paged_kv The paged key-value tensor.
529
+ * \param o The output tensor.
530
+ * \param lse The logsumexp values.
531
+ * \param num_qo_heads The number of query/output heads.
532
+ * \param pos_encoding_mode The positional encoding mode.
533
+ * \param rope_scale The scale of rope.
534
+ * \param rope_theta The theta of rope.
535
+ * \param stream The CUDA stream.
536
+ */
537
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
538
+ cudaError_t BatchDecodeWithPagedKVCacheWrapper(
539
+ BatchDecodeHandler* handler, DTypeQ* q, IdType* q_rope_offset,
540
+ paged_kv_t<DTypeKV, IdType> paged_kv, DTypeO* o, float* lse, uint32_t num_qo_heads,
541
+ PosEncodingMode pos_encoding_mode = PosEncodingMode::kNone,
542
+ std::optional<float> maybe_sm_scale = std::nullopt, float rope_scale = 1.f,
543
+ float rope_theta = 1e4, cudaStream_t stream = nullptr) {
544
+ float sm_scale = maybe_sm_scale.value_or(1.f / std::sqrt(float(paged_kv.head_dim)));
545
+ const uint32_t num_kv_heads = paged_kv.num_heads;
546
+ if (num_qo_heads % num_kv_heads != 0) {
547
+ std::ostringstream err_msg;
548
+ err_msg << "num_qo_heads " << num_qo_heads << " is not a multiple of num_kv_heads "
549
+ << num_kv_heads;
550
+ FLASHINFER_ERROR(err_msg.str());
551
+ }
552
+
553
+ DISPATCH_head_dim(
554
+ paged_kv.head_dim, HEAD_DIM,
555
+ {DISPATCH_pos_encoding_mode(pos_encoding_mode, POS_ENCODING_MODE, {
556
+ using Params = BatchDecodeParams<DTypeQ, DTypeKV, DTypeO, IdType>;
557
+ using AttentionVariant = DefaultAttention<
558
+ /*use_custom_mask=*/false, /*use_sliding_window=*/false,
559
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
560
+ Params params(q, q_rope_offset, paged_kv, o, lse, /*alibi_slopes=*/nullptr, num_qo_heads,
561
+ /*q_stride_n*/ num_qo_heads * HEAD_DIM, /*q_stride_h*/ HEAD_DIM,
562
+ /*window_left=*/-1, /*logits_soft_cap=*/0.f, sm_scale, rope_scale,
563
+ rope_theta);
564
+ params.request_indices = handler->GetRequestIndices<IdType>();
565
+ params.kv_tile_indices = handler->GetKVTileIndices<IdType>();
566
+ params.o_indptr = handler->GetOIndptr<IdType>();
567
+ params.kv_chunk_size_ptr = handler->GetKVChunkSizePtr<IdType>();
568
+ params.block_valid_mask = handler->GetBlockValidMask();
569
+ params.padded_batch_size = handler->GetPlanInfo().padded_batch_size;
570
+
571
+ return BatchDecodeWithPagedKVCacheDispatched<HEAD_DIM, POS_ENCODING_MODE, AttentionVariant>(
572
+ params, handler->GetTmpV<DTypeO>(), handler->GetTmpS(), stream);
573
+ })});
574
+ return cudaSuccess;
575
+ }
576
+
577
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
578
+ cudaError_t BatchDecodeHandlerPlan(BatchDecodeHandler* handler, void* float_buffer,
579
+ size_t float_workspace_size_in_bytes, void* int_buffer,
580
+ size_t int_workspace_size_in_bytes, IdType* indptr_h,
581
+ IdType* last_page_len_h, uint32_t batch_size,
582
+ uint32_t num_qo_heads, uint32_t num_kv_heads, uint32_t head_dim,
583
+ uint32_t page_size, PosEncodingMode pos_encoding_mode) {
584
+ if (num_qo_heads % num_kv_heads != 0) {
585
+ std::ostringstream err_msg;
586
+ err_msg << "num_qo_heads " << num_qo_heads << " should be divisible by num_kv_heads "
587
+ << num_kv_heads;
588
+ FLASHINFER_ERROR(err_msg.str());
589
+ }
590
+ DISPATCH_head_dim(head_dim, HEAD_DIM, {
591
+ DISPATCH_pos_encoding_mode(pos_encoding_mode, POS_ENCODING_MODE, {
592
+ DISPATCH_GQA_GROUP_SIZE(num_qo_heads / num_kv_heads, GROUP_SIZE, {
593
+ return handler->PlanDispatched<GROUP_SIZE, HEAD_DIM, POS_ENCODING_MODE, DTypeQ, DTypeKV,
594
+ DTypeO, IdType>(
595
+ float_buffer, float_workspace_size_in_bytes, int_buffer, int_workspace_size_in_bytes,
596
+ indptr_h, last_page_len_h, batch_size, num_qo_heads, page_size);
597
+ });
598
+ });
599
+ });
600
+ }
601
+
602
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
603
+ cudaError_t BatchDecodeWithPagedKVCacheWrapperMLA(
604
+ BatchDecodeHandler* handler, DTypeQ* q_nope, DTypeQ* q_pe, IdType* q_rope_offset,
605
+ paged_kv_mla_t<DTypeKV, IdType> paged_kv, DTypeO* o, float* lse, uint32_t num_qo_heads,
606
+ float sm_scale, float rope_scale = 1.f, float rope_theta = 1e4, cudaStream_t stream = nullptr) {
607
+ DISPATCH_head_dim(paged_kv.head_dim_ckv, HEAD_DIM_CKV, {
608
+ // fixme: head_dim_ckv(kv_lora_rank) is 8 times the size of head_dim_kpe(qk_rope_head_dim) for
609
+ // all MLA models (DeepSeek-V2-Lite, DeepSeek-V2.5, MiniCPM3) as of Oct. 2024
610
+ constexpr auto HEAD_DIM_KPE = HEAD_DIM_CKV / 8;
611
+ using Params = BatchDecodeParamsMLA<DTypeQ, DTypeKV, DTypeO, IdType>;
612
+ using AttentionVariant = DefaultAttention<
613
+ /*use_custom_mask=*/false, /*use_sliding_window=*/false,
614
+ /*use_logits_soft_cap=*/false, /*use_alibi=*/false>;
615
+ Params params(q_nope, q_pe, q_rope_offset, paged_kv, o, lse, num_qo_heads,
616
+ /*window_left=*/-1, /*logits_soft_cap=*/0.f, sm_scale, rope_scale, rope_theta);
617
+ params.request_indices = handler->GetRequestIndices<IdType>();
618
+ params.kv_tile_indices = handler->GetKVTileIndices<IdType>();
619
+ params.o_indptr = handler->GetOIndptr<IdType>();
620
+ params.kv_chunk_size_ptr = handler->GetKVChunkSizePtr<IdType>();
621
+ params.block_valid_mask = handler->GetBlockValidMask();
622
+ params.padded_batch_size = handler->GetPlanInfo().padded_batch_size;
623
+
624
+ return BatchDecodeWithPagedKVCacheDispatchedMLA<HEAD_DIM_CKV, HEAD_DIM_KPE, AttentionVariant>(
625
+ params, handler->GetTmpV<DTypeO>(), handler->GetTmpS(), stream);
626
+ });
627
+ return cudaSuccess;
628
+ }
629
+
630
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO, typename IdType>
631
+ cudaError_t BatchDecodeHandlerPlanMLA(BatchDecodeHandler* handler, void* float_buffer,
632
+ size_t float_workspace_size_in_bytes, void* int_buffer,
633
+ size_t int_workspace_size_in_bytes, IdType* indptr_h,
634
+ IdType* last_page_len_h, uint32_t batch_size,
635
+ uint32_t num_qo_heads, uint32_t head_dim_ckv,
636
+ uint32_t page_size) {
637
+ DISPATCH_head_dim(head_dim_ckv, HEAD_DIM_CKV, {
638
+ // fixme: head_dim_ckv(kv_lora_rank) is 8 times the size of head_dim_kpe(qk_rope_head_dim) for
639
+ // all MLA models (DeepSeek-V2-Lite, DeepSeek-V2.5, MiniCPM3) as of Oct. 2024
640
+ constexpr auto HEAD_DIM_KPE = HEAD_DIM_CKV / 8;
641
+ return handler->PlanDispatchedMLA<HEAD_DIM_CKV, HEAD_DIM_KPE, DTypeQ, DTypeKV, DTypeO, IdType>(
642
+ float_buffer, float_workspace_size_in_bytes, int_buffer, int_workspace_size_in_bytes,
643
+ indptr_h, last_page_len_h, batch_size, num_qo_heads, page_size);
644
+ });
645
+ }
646
+
647
+ } // namespace flashinfer
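
The header above declares FlashInfer's single-request prefill entry point, SinglePrefillWithKVCache. Below is a minimal, hypothetical usage sketch (not part of the repository diff): it calls the function with half-precision tensors in the default NHD layout. The sequence lengths, head counts, and the size of the tmp split-KV workspace are illustrative assumptions, not values taken from the repository.

#include <cuda_fp16.h>
#include <thrust/device_vector.h>

#include "flashinfer_ops.cuh"

using namespace flashinfer;

cudaError_t ExampleSinglePrefill() {
  // Illustrative sizes only (assumptions, not taken from the repo).
  constexpr uint32_t num_qo_heads = 32, num_kv_heads = 8;
  constexpr uint32_t qo_len = 128, kv_len = 1024, head_dim = 128;
  // QKVLayout::kNHD: tensors are laid out as [seq_len, num_heads, head_dim].
  thrust::device_vector<half> q(qo_len * num_qo_heads * head_dim);
  thrust::device_vector<half> k(kv_len * num_kv_heads * head_dim);
  thrust::device_vector<half> v(kv_len * num_kv_heads * head_dim);
  thrust::device_vector<half> o(qo_len * num_qo_heads * head_dim);
  // Scratch buffer for the split-KV path; 16M elements is an assumed size.
  thrust::device_vector<half> tmp(16 * 1024 * 1024);
  // Causal prefill, no positional encoding applied inside the kernel.
  return SinglePrefillWithKVCache<half, half, half>(
      thrust::raw_pointer_cast(q.data()), thrust::raw_pointer_cast(k.data()),
      thrust::raw_pointer_cast(v.data()), thrust::raw_pointer_cast(o.data()),
      thrust::raw_pointer_cast(tmp.data()),
      /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len, head_dim,
      /*causal=*/true, QKVLayout::kNHD, PosEncodingMode::kNone);
}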
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_batch_decode.cu ADDED
@@ -0,0 +1,182 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <type_traits>
19
+
20
+ #include "cpu_reference.h"
21
+ #include "flashinfer_ops.cuh"
22
+ #include "utils.h"
23
+
24
+ using namespace flashinfer;
25
+
26
+ constexpr QKVLayout kv_layout = QKVLayout::kNHD;
27
+
28
+ template <typename DTypeQO, typename DTypeKV>
29
+ void _TestBatchDecodingKernelCorrectness(size_t page_size, size_t batch_size, size_t num_qo_heads,
30
+ size_t num_kv_heads, size_t head_dim,
31
+ flashinfer::PosEncodingMode pos_encoding_mode) {
32
+ std::vector<int32_t> seq_lens(batch_size);
33
+ utils::vec_randint_(seq_lens, 1, 1024);
34
+ std::vector<int32_t> append_indptr{0};
35
+ for (size_t i = 0; i < batch_size; ++i) {
36
+ append_indptr.push_back(append_indptr.back() + seq_lens[i]);
37
+ }
38
+ std::vector<DTypeQO> q;
39
+ std::vector<DTypeQO> o_ref;
40
+ std::vector<DTypeKV> k_data;
41
+ std::vector<DTypeKV> v_data;
42
+ std::vector<int32_t> kv_indptr{0};
43
+ std::vector<int32_t> kv_indices;
44
+ std::vector<int32_t> kv_last_page_len;
45
+ size_t page_counter = 0;
46
+
47
+ std::vector<std::vector<DTypeKV>> keys, values;
48
+ for (size_t i = 0; i < batch_size; ++i) {
49
+ size_t seq_len = seq_lens[i];
50
+ size_t num_pages = (seq_len + page_size - 1) / page_size;
51
+ size_t last_page_len = (seq_len - 1) % page_size + 1;
52
+ std::vector<DTypeQO> qi(num_qo_heads * head_dim);
53
+ std::vector<DTypeKV> ki(seq_len * num_kv_heads * head_dim),
54
+ vi(seq_len * num_kv_heads * head_dim);
55
+ utils::vec_normal_(qi);
56
+ utils::vec_normal_(ki);
57
+ utils::vec_normal_(vi);
58
+
59
+ // compute reference output
60
+ std::vector<DTypeQO> o_ref_i = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
61
+ qi, ki, vi, 1, seq_len, num_qo_heads, num_kv_heads, head_dim, false, QKVLayout::kNHD,
62
+ pos_encoding_mode);
63
+ keys.push_back(ki);
64
+ values.push_back(vi);
65
+ // append new q and o_ref
66
+ q.insert(q.end(), qi.begin(), qi.end());
67
+ o_ref.insert(o_ref.end(), o_ref_i.begin(), o_ref_i.end());
68
+ // append new kv_indptr, kv_indices and kv_last_page_len
69
+ kv_last_page_len.push_back(last_page_len);
70
+ kv_indptr.push_back(kv_indptr.back() + num_pages);
71
+ for (size_t j = 0; j < num_pages; ++j) {
72
+ kv_indices.push_back(page_counter++);
73
+ }
74
+ }
75
+ k_data.resize(page_counter * num_kv_heads * page_size * head_dim);
76
+ v_data.resize(page_counter * num_kv_heads * page_size * head_dim);
77
+ utils::vec_zero_(k_data);
78
+ utils::vec_zero_(v_data);
79
+ assert(q.size() == batch_size * num_qo_heads * head_dim);
80
+ assert(o_ref.size() == batch_size * num_qo_heads * head_dim);
81
+
82
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv_cpu(
83
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout, k_data.data(), v_data.data(),
84
+ kv_indices.data(), kv_indptr.data(), kv_last_page_len.data());
85
+ cpu_reference::append_paged_kv_cache<DTypeKV, int32_t>(paged_kv_cpu, keys, values, append_indptr);
86
+
87
+ // copy data to device
88
+ thrust::device_vector<DTypeKV> k_data_device(k_data);
89
+ thrust::device_vector<DTypeKV> v_data_device(v_data);
90
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
91
+ thrust::device_vector<int32_t> kv_indices_device(kv_indices);
92
+ thrust::device_vector<int32_t> kv_last_page_len_device(kv_last_page_len);
93
+ thrust::device_vector<DTypeQO> q_device(q);
94
+ thrust::device_vector<DTypeQO> o_device(o_ref.size());
95
+
96
+ // create paged_kv object
97
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv(
98
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
99
+ thrust::raw_pointer_cast(k_data_device.data()),
100
+ thrust::raw_pointer_cast(v_data_device.data()),
101
+ thrust::raw_pointer_cast(kv_indices_device.data()),
102
+ thrust::raw_pointer_cast(kv_indptr_device.data()),
103
+ thrust::raw_pointer_cast(kv_last_page_len_device.data()));
104
+ flashinfer::BatchDecodeHandler handler;
105
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
106
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
107
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
108
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
109
+ BatchDecodeHandlerPlan<DTypeQO, DTypeKV, DTypeQO, int32_t>(
110
+ &handler, (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
111
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
112
+ kv_indptr.data(), kv_last_page_len.data(), batch_size, num_qo_heads, num_kv_heads, head_dim,
113
+ page_size, pos_encoding_mode);
114
+
115
+ cudaError_t status =
116
+ flashinfer::BatchDecodeWithPagedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
117
+ &handler, thrust::raw_pointer_cast(q_device.data()), /*q_rope_offset=*/nullptr, paged_kv,
118
+ thrust::raw_pointer_cast(o_device.data()), /*lse=*/nullptr, num_qo_heads,
119
+ pos_encoding_mode);
120
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
121
+ // compare result
122
+ thrust::host_vector<DTypeQO> o_host = o_device;
123
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
124
+ bool nan_detected = false;
125
+ for (size_t i = 0; i < batch_size * num_qo_heads * head_dim; ++i) {
126
+ if (std::isnan(float(o_host[i]))) {
127
+ nan_detected = true;
128
+ }
129
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
130
+ (!utils::isclose(float(o_host[i]), float(o_ref[i]), 1e-3, 1e-3));
131
+ }
132
+ float result_accuracy = 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) /
133
+ float(batch_size * num_qo_heads * head_dim);
134
+ std::cout << "page_size=" << page_size << ", num_qo_heads=" << num_qo_heads
135
+ << ", num_kv_heads=" << num_kv_heads << ", batch_size=" << batch_size
136
+ << ", head_dim=" << head_dim
137
+ << ", pos_encoding_mode=" << flashinfer::PosEncodingModeToString(pos_encoding_mode)
138
+ << ", result accuracy (atol=1e-3, rtol=1e-3): " << result_accuracy << std::endl;
139
+ EXPECT_GT(result_accuracy, 0.90) << "Result correctness test failed.";
140
+ EXPECT_EQ(nan_detected, false) << "NaN detected.";
141
+ }
142
+
143
+ template <typename DTypeQO, typename DTypeKV>
144
+ void TestBatchDecodeKernelCorrectness() {
145
+ for (size_t page_size : {1, 3, 7, 16}) {
146
+ for (size_t batch_size : {1, 2, 4, 8}) {
147
+ for (size_t num_qo_heads : {32}) {
148
+ for (size_t num_kv_heads : {32, 8, 4}) {
149
+ for (size_t head_dim : {64, 128, 256}) {
150
+ for (size_t pos_encoding_mode : {0U, 1U}) {
151
+ _TestBatchDecodingKernelCorrectness<DTypeQO, DTypeKV>(
152
+ page_size, batch_size, num_qo_heads, num_kv_heads, head_dim,
153
+ flashinfer::PosEncodingMode(pos_encoding_mode));
154
+ }
155
+ }
156
+ }
157
+ }
158
+ }
159
+ }
160
+ }
161
+
162
+ TEST(FlashInferCorrectnessTest, BatchDecodeKernelCorrectnessTestFP16) {
163
+ TestBatchDecodeKernelCorrectness<half, half>();
164
+ }
165
+
166
+ #ifdef FLASHINFER_ENABLE_BF16
167
+ TEST(FlashInferCorrectnessTest, TestBatchDecodeKernelCorrectnessBF16) {
168
+ TestBatchDecodeKernelCorrectness<__nv_bfloat16, __nv_bfloat16>();
169
+ }
170
+ #endif
171
+
172
+ #ifdef FLASHINFER_ENABLE_FP8_E4M3
173
+ TEST(FlashInferCorrectnessTest, TestBatchDecodeKernelCorrectnessE4M3) {
174
+ TestBatchDecodeKernelCorrectness<half, __nv_fp8_e4m3>();
175
+ }
176
+ #endif
177
+
178
+ #ifdef FLASHINFER_ENABLE_FP8_E5M2
179
+ TEST(FlashInferCorrectnessTest, TestBatchDecodeKernelCorrectnessE5M2) {
180
+ TestBatchDecodeKernelCorrectness<half, __nv_fp8_e5m2>();
181
+ }
182
+ #endif
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_batch_prefill.cu ADDED
@@ -0,0 +1,811 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <cstdint>
19
+
20
+ #include "cpu_reference.h"
21
+ #include "flashinfer/pos_enc.cuh"
22
+ #include "flashinfer_ops.cuh"
23
+ #include "utils.h"
24
+
25
+ using namespace flashinfer;
26
+ constexpr QKVLayout kv_layout = QKVLayout::kNHD;
27
+
28
+ template <typename DTypeQO, typename DTypeKV>
29
+ void _TestBatchPagedPrefillKernelOneHotCorrectness(size_t num_kv_heads, size_t num_qo_heads,
30
+ size_t page_size, size_t head_dim, bool causal,
31
+ PosEncodingMode pos_encoding_mode,
32
+ bool use_fp16_qk_reduction) {
33
+ uint32_t batch_size = 9;
34
+ std::vector<int32_t> q_lens(batch_size), kv_lens(batch_size);
35
+ utils::vec_randint_(q_lens, 1, 15);
36
+ utils::vec_randint_(kv_lens, 15, 257);
37
+ std::vector<int32_t> append_indptr{0};
38
+ for (size_t request_idx = 0; request_idx < batch_size; ++request_idx) {
39
+ append_indptr.push_back(append_indptr.back() + kv_lens[request_idx]);
40
+ }
41
+ std::vector<DTypeKV> k_data;
42
+ std::vector<DTypeKV> v_data;
43
+ std::vector<int32_t> kv_indptr{0};
44
+ std::vector<int32_t> kv_indices;
45
+ std::vector<int32_t> kv_last_page_len;
46
+ size_t page_counter = 0;
47
+
48
+ std::vector<std::vector<DTypeKV>> key, value;
49
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
50
+ size_t kv_len = kv_lens[request_idx];
51
+ size_t num_pages = (kv_len + page_size - 1) / page_size;
52
+ size_t last_page_len = (kv_len - 1) % page_size + 1;
53
+ std::vector<DTypeKV> k(kv_len * num_kv_heads * head_dim), v(kv_len * num_kv_heads * head_dim);
54
+ utils::vec_normal_(k);
55
+ utils::vec_normal_(v);
56
+ key.push_back(k);
57
+ value.push_back(v);
58
+ kv_last_page_len.push_back(last_page_len);
59
+ kv_indptr.push_back(kv_indptr.back() + num_pages);
60
+ for (size_t j = 0; j < num_pages; ++j) {
61
+ kv_indices.push_back(page_counter++);
62
+ }
63
+ }
64
+
65
+ k_data.resize(page_counter * num_kv_heads * page_size * head_dim);
66
+ v_data.resize(page_counter * num_kv_heads * page_size * head_dim);
67
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv_cpu(
68
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout, k_data.data(), v_data.data(),
69
+ kv_indices.data(), kv_indptr.data(), kv_last_page_len.data());
70
+ cpu_reference::append_paged_kv_cache<DTypeKV, int32_t>(paged_kv_cpu, key, value, append_indptr);
71
+
72
+ // copy data to device
73
+ thrust::device_vector<DTypeKV> k_data_device(k_data);
74
+ thrust::device_vector<DTypeKV> v_data_device(v_data);
75
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
76
+ thrust::device_vector<int32_t> kv_indices_device(kv_indices);
77
+ thrust::device_vector<int32_t> kv_last_page_len_device(kv_last_page_len);
78
+
79
+ // create paged_kv object
80
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv = paged_kv_cpu;
81
+ paged_kv.k_data = thrust::raw_pointer_cast(k_data_device.data());
82
+ paged_kv.v_data = thrust::raw_pointer_cast(v_data_device.data());
83
+ paged_kv.indices = thrust::raw_pointer_cast(kv_indices_device.data());
84
+ paged_kv.indptr = thrust::raw_pointer_cast(kv_indptr_device.data());
85
+ paged_kv.last_page_len = thrust::raw_pointer_cast(kv_last_page_len_device.data());
86
+
87
+ BatchPrefillHandler handler;
88
+ size_t float_workspace_size_in_bytes = 128 * 1024 * 1024;
89
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
90
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
91
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
92
+
93
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
94
+ // create one-hot queries
95
+ int32_t q_len = q_lens[request_idx], kv_len = kv_lens[request_idx];
96
+ std::vector<int32_t> q_indptr{0};
97
+ for (uint32_t i = 0; i < batch_size; ++i) {
98
+ q_indptr.push_back(i >= request_idx ? q_len : 0);
99
+ }
100
+ std::vector<DTypeQO> q(q_len * num_qo_heads * head_dim);
101
+ utils::vec_normal_(q);
102
+
103
+ std::vector<DTypeQO> o_ref = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
104
+ q, key[request_idx], value[request_idx], q_len, kv_len, num_qo_heads, num_kv_heads,
105
+ head_dim, causal, QKVLayout::kNHD, pos_encoding_mode);
106
+
107
+ thrust::device_vector<int32_t> q_indptr_device(q_indptr);
108
+ thrust::device_vector<DTypeQO> q_device(q);
109
+ thrust::device_vector<DTypeQO> o_device(q_len * num_qo_heads * head_dim);
110
+
111
+ handler.Plan<DTypeQO, int32_t>(
112
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
113
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
114
+ q_indptr.data(), kv_indptr.data(), /*total_num_rows=*/q_indptr.back(), batch_size,
115
+ num_qo_heads, num_kv_heads, head_dim, page_size);
116
+
117
+ for (uint32_t num_runs = 0; num_runs < 10; ++num_runs) {
118
+ auto status =
119
+ flashinfer::BatchPrefillWithPagedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
120
+ &handler, thrust::raw_pointer_cast(q_device.data()),
121
+ thrust::raw_pointer_cast(q_indptr_device.data()), /*q_rope_offset=*/nullptr, paged_kv,
122
+ thrust::raw_pointer_cast(o_device.data()),
123
+ /*lse=*/nullptr, num_qo_heads, causal, pos_encoding_mode, use_fp16_qk_reduction);
124
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
125
+ }
126
+
127
+ thrust::host_vector<DTypeQO> o_host(o_device);
128
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
129
+ bool nan_detected = false;
130
+ for (size_t i = 0; i < q_len * num_qo_heads * head_dim; ++i) {
131
+ if (std::isnan(float(o_host[i]))) {
132
+ nan_detected = true;
133
+ }
134
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
135
+ (!utils::isclose(float(o_host[i]), float(o_ref[i]), 1e-3, 1e-3));
136
+ }
137
+ float result_accuracy = 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) /
138
+ max(float(q_len * num_qo_heads * head_dim), 1.f);
139
+ std::cout << "request_idx=" << request_idx << ", page_size=" << page_size
140
+ << ", num_qo_heads=" << num_qo_heads << ", num_kv_heads=" << num_kv_heads
141
+ << ", q_len=" << q_len << ", kv_len=" << kv_len << ", head_dim=" << head_dim
142
+ << ", causal=" << causal
143
+ << ", pos_encoding_mode=" << PosEncodingModeToString(pos_encoding_mode)
144
+ << ", result_accuracy=" << result_accuracy << std::endl;
145
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
146
+ EXPECT_EQ(nan_detected, false) << "NaN detected in output.";
147
+ }
148
+ }
149
+
150
+ template <typename DTypeQO, typename DTypeKV>
151
+ void _TestBatchRaggedPrefillKernelCorrectness(size_t num_kv_heads, size_t num_qo_heads,
152
+ size_t head_dim, bool causal,
153
+ PosEncodingMode pos_encoding_mode,
154
+ bool use_fp16_qk_reduction) {
155
+ uint32_t batch_size = 9;
156
+ std::vector<int32_t> q_lens(batch_size), kv_lens(batch_size);
157
+ utils::vec_randint_(q_lens, 10, 15);
158
+ utils::vec_randint_(kv_lens, 128, 2048);
159
+ std::vector<int32_t> append_indptr{0}, kv_indptr{0};
160
+
161
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
162
+ append_indptr.push_back(append_indptr.back() + q_lens[request_idx]);
163
+ kv_indptr.push_back(kv_indptr.back() + kv_lens[request_idx]);
164
+ }
165
+
166
+ std::vector<DTypeQO> queries;
167
+ std::vector<DTypeKV> keys;
168
+ std::vector<DTypeKV> values;
169
+ std::vector<DTypeKV> output_refs;
170
+
171
+ BatchPrefillHandler handler;
172
+ size_t float_workspace_size_in_bytes = 128 * 1024 * 1024;
173
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
174
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
175
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
176
+
177
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
178
+ std::vector<DTypeQO> q(q_lens[request_idx] * num_qo_heads * head_dim);
179
+ std::vector<DTypeKV> k(kv_lens[request_idx] * num_kv_heads * head_dim),
180
+ v(kv_lens[request_idx] * num_kv_heads * head_dim);
181
+ uint32_t q_len = q_lens[request_idx], kv_len = kv_lens[request_idx];
182
+ utils::vec_normal_(q);
183
+ utils::vec_normal_(k);
184
+ utils::vec_normal_(v);
185
+ std::vector<DTypeQO> o_ref = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
186
+ q, k, v, q_len, kv_len, num_qo_heads, num_kv_heads, head_dim, causal, QKVLayout::kNHD,
187
+ pos_encoding_mode);
188
+ // NOTE(Zihao): The following code is only compatible with kv_layout = QKVLayout::kNHD
189
+ std::copy(q.begin(), q.end(), std::back_inserter(queries));
190
+ std::copy(k.begin(), k.end(), std::back_inserter(keys));
191
+ std::copy(v.begin(), v.end(), std::back_inserter(values));
192
+ std::copy(o_ref.begin(), o_ref.end(), std::back_inserter(output_refs));
193
+ }
194
+
195
+ thrust::device_vector<DTypeQO> queries_device(queries);
196
+ thrust::device_vector<DTypeKV> keys_device(keys);
197
+ thrust::device_vector<DTypeKV> values_device(values);
198
+ thrust::device_vector<DTypeQO> output_device(queries.size());
199
+ thrust::device_vector<int32_t> append_indptr_device(append_indptr);
200
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
201
+
202
+ handler.Plan<DTypeQO, int32_t>(
203
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
204
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
205
+ append_indptr.data(), kv_indptr.data(), /*total_num_rows=*/append_indptr.back(), batch_size,
206
+ num_qo_heads, num_kv_heads, head_dim, /*page_size=*/1);
207
+
208
+ auto status = BatchPrefillWithRaggedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
209
+ &handler, thrust::raw_pointer_cast(queries_device.data()),
210
+ thrust::raw_pointer_cast(append_indptr_device.data()),
211
+ thrust::raw_pointer_cast(keys_device.data()), thrust::raw_pointer_cast(values_device.data()),
212
+ thrust::raw_pointer_cast(kv_indptr_device.data()),
213
+ /*q_rope_offset=*/nullptr,
214
+ /*k_rope_offset=*/nullptr, thrust::raw_pointer_cast(output_device.data()),
215
+ /*lse=*/nullptr, batch_size, num_qo_heads, num_kv_heads, head_dim, causal, kv_layout,
216
+ pos_encoding_mode, use_fp16_qk_reduction);
217
+
218
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
219
+
220
+ thrust::host_vector<DTypeQO> output_host(output_device);
221
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
222
+ bool nan_detected = false;
223
+ for (size_t i = 0; i < output_refs.size(); ++i) {
224
+ if (std::isnan(float(output_host[i]))) {
225
+ nan_detected = true;
226
+ }
227
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
228
+ (!utils::isclose(float(output_host[i]), float(output_refs[i]), 1e-3, 1e-3));
229
+ }
230
+
231
+ float result_accuracy =
232
+ 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) / max(float(output_refs.size()), 1.f);
233
+ std::cout << "num_qo_heads=" << num_qo_heads << ", num_kv_heads=" << num_kv_heads
234
+ << ", head_dim=" << head_dim << ", causal=" << causal
235
+ << ", pos_encoding_mode=" << PosEncodingModeToString(pos_encoding_mode)
236
+ << ", result_accuracy=" << result_accuracy << std::endl;
237
+
238
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
239
+ EXPECT_EQ(nan_detected, false) << "NaN detected in output.";
240
+ }
241
+
242
+ template <typename DTypeQO, typename DTypeKV>
243
+ void _TestBatchPagedPrefillKernelShortContextCorrectness(size_t num_kv_heads, size_t num_qo_heads,
244
+ size_t page_size, size_t head_dim,
245
+ bool causal,
246
+ PosEncodingMode pos_encoding_mode,
247
+ bool use_fp16_qk_reduction) {
248
+ const uint32_t batch_size = 7;
249
+ std::vector<int32_t> q_lens(batch_size);
250
+ utils::vec_randint_(q_lens, 1, 64);
251
+ std::vector<int32_t> kv_lens(q_lens);
252
+
253
+ std::vector<int32_t> q_indptr{0};
254
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
255
+ q_indptr.push_back(q_indptr.back() + q_lens[request_idx]);
256
+ }
257
+ std::vector<int32_t> append_indptr{0};
258
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
259
+ append_indptr.push_back(append_indptr.back() + kv_lens[request_idx]);
260
+ }
261
+ std::vector<DTypeKV> k_data;
262
+ std::vector<DTypeKV> v_data;
263
+ std::vector<int32_t> kv_indptr{0};
264
+ std::vector<int32_t> kv_indices;
265
+ std::vector<int32_t> kv_last_page_len;
266
+ size_t page_counter = 0;
267
+ std::vector<std::vector<DTypeKV>> key, value;
268
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
269
+ size_t kv_len = kv_lens[request_idx];
270
+ size_t num_pages = (kv_len + page_size - 1) / page_size;
271
+ size_t last_page_len = (kv_len - 1) % page_size + 1;
272
+ std::vector<DTypeKV> k(kv_len * num_kv_heads * head_dim), v(kv_len * num_kv_heads * head_dim);
273
+ utils::vec_normal_(k);
274
+ utils::vec_normal_(v);
275
+ key.push_back(k);
276
+ value.push_back(v);
277
+ kv_last_page_len.push_back(last_page_len);
278
+ kv_indptr.push_back(kv_indptr.back() + num_pages);
279
+ for (size_t j = 0; j < num_pages; ++j) {
280
+ kv_indices.push_back(page_counter++);
281
+ }
282
+ }
283
+
284
+ k_data.resize(page_counter * num_kv_heads * page_size * head_dim);
285
+ v_data.resize(page_counter * num_kv_heads * page_size * head_dim);
286
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv_cpu(
287
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout, k_data.data(), v_data.data(),
288
+ kv_indices.data(), kv_indptr.data(), kv_last_page_len.data());
289
+ cpu_reference::append_paged_kv_cache<DTypeKV, int32_t>(paged_kv_cpu, key, value, append_indptr);
290
+
291
+ // copy data to device
292
+ thrust::device_vector<DTypeKV> k_data_device(k_data);
293
+ thrust::device_vector<DTypeKV> v_data_device(v_data);
294
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
295
+ thrust::device_vector<int32_t> kv_indices_device(kv_indices);
296
+ thrust::device_vector<int32_t> kv_last_page_len_device(kv_last_page_len);
297
+
298
+ // create paged_kv object
299
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv = paged_kv_cpu;
300
+ paged_kv.k_data = thrust::raw_pointer_cast(k_data_device.data());
301
+ paged_kv.v_data = thrust::raw_pointer_cast(v_data_device.data());
302
+ paged_kv.indices = thrust::raw_pointer_cast(kv_indices_device.data());
303
+ paged_kv.indptr = thrust::raw_pointer_cast(kv_indptr_device.data());
304
+ paged_kv.last_page_len = thrust::raw_pointer_cast(kv_last_page_len_device.data());
305
+
306
+ std::vector<std::vector<DTypeQO>> q, o_ref;
307
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
308
+ int32_t q_len = q_lens[request_idx];
309
+ std::vector<DTypeQO> qi(q_len * num_qo_heads * head_dim);
310
+ utils::vec_normal_(qi);
311
+ q.push_back(qi);
312
+ }
313
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
314
+ int32_t q_len = q_lens[request_idx], kv_len = kv_lens[request_idx];
315
+ std::vector<DTypeQO> o_ref_i = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
316
+ q[request_idx], key[request_idx], value[request_idx], q_len, kv_len, num_qo_heads,
317
+ num_kv_heads, head_dim, causal, QKVLayout::kNHD, pos_encoding_mode);
318
+ o_ref.push_back(o_ref_i);
319
+ }
320
+
321
+ std::vector<DTypeQO> q_concat, o_concat_ref;
322
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
323
+ q_concat.insert(q_concat.end(), q[request_idx].begin(), q[request_idx].end());
324
+ o_concat_ref.insert(o_concat_ref.end(), o_ref[request_idx].begin(), o_ref[request_idx].end());
325
+ }
326
+ thrust::device_vector<DTypeQO> q_device(q_concat);
327
+
328
+ thrust::device_vector<int32_t> q_indptr_device(q_indptr);
329
+ thrust::device_vector<DTypeQO> o_device(o_concat_ref.size());
330
+
331
+ BatchPrefillHandler handler;
332
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
333
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
334
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
335
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
336
+
337
+ handler.Plan<DTypeQO, int32_t>(
338
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
339
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
340
+ q_indptr.data(), kv_indptr.data(), /*total_num_rows=*/q_indptr.back(), batch_size,
341
+ num_qo_heads, num_kv_heads, head_dim, page_size);
342
+
343
+ auto status = BatchPrefillWithPagedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
344
+ &handler, thrust::raw_pointer_cast(q_device.data()),
345
+ thrust::raw_pointer_cast(q_indptr_device.data()),
346
+ /*q_rope_offset=*/nullptr, paged_kv, thrust::raw_pointer_cast(o_device.data()),
347
+ /*lse=*/nullptr, num_qo_heads, causal, pos_encoding_mode, use_fp16_qk_reduction);
348
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
349
+
350
+ thrust::host_vector<DTypeQO> o_host(o_device);
351
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
352
+ bool nan_detected = false;
353
+ for (size_t i = 0; i < o_concat_ref.size(); ++i) {
354
+ if (std::isnan(float(o_host[i]))) {
355
+ nan_detected = true;
356
+ }
357
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
358
+ (!utils::isclose(float(o_host[i]), float(o_concat_ref[i]), 1e-3, 1e-3));
359
+ }
360
+ float result_accuracy =
361
+ 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) / max(float(o_concat_ref.size()), 1.f);
362
+ std::cout << "page_size=" << page_size << ", num_qo_heads=" << num_qo_heads
363
+ << ", num_kv_heads=" << num_kv_heads << ", head_dim=" << head_dim
364
+ << ", causal=" << causal
365
+ << ", pos_encoding_mode=" << PosEncodingModeToString(pos_encoding_mode)
366
+ << ", result_accuracy=" << result_accuracy << std::endl;
367
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
368
+ EXPECT_EQ(nan_detected, false) << "NaN detected in output.";
369
+ }
370
+
371
+ template <typename DTypeQO, typename DTypeKV>
372
+ void _TestBatchPagedPrefillKernelQMinMaxKVMinMaxCorrectness(
373
+ size_t batch_size, size_t num_kv_heads, size_t num_qo_heads, size_t page_size, size_t head_dim,
374
+ bool use_fp16_qk_reduction, uint32_t q_len_min, uint32_t q_len_max, uint32_t kv_len_min,
375
+ uint32_t kv_len_max) {
376
+ std::vector<int32_t> q_lens(batch_size);
377
+ utils::vec_randint_(q_lens, q_len_min, q_len_max);
378
+ std::vector<int32_t> kv_lens(batch_size);
379
+ utils::vec_randint_(kv_lens, kv_len_min, kv_len_max);
380
+
381
+ std::vector<int32_t> q_indptr{0};
382
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
383
+ q_indptr.push_back(q_indptr.back() + q_lens[request_idx]);
384
+ }
385
+ std::vector<int32_t> append_indptr{0};
386
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
387
+ append_indptr.push_back(append_indptr.back() + kv_lens[request_idx]);
388
+ }
389
+ std::vector<DTypeKV> k_data;
390
+ std::vector<DTypeKV> v_data;
391
+ std::vector<int32_t> kv_indptr{0};
392
+ std::vector<int32_t> kv_indices;
393
+ std::vector<int32_t> kv_last_page_len;
394
+ size_t page_counter = 0;
395
+ std::vector<std::vector<DTypeKV>> key, value;
396
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
397
+ size_t kv_len = kv_lens[request_idx];
398
+ size_t num_pages = (kv_len + page_size - 1) / page_size;
399
+ size_t last_page_len = num_pages == 0 ? 0 : (kv_len - 1) % page_size + 1;
400
+ std::vector<DTypeKV> k(kv_len * num_kv_heads * head_dim), v(kv_len * num_kv_heads * head_dim);
401
+ utils::vec_normal_(k);
402
+ utils::vec_normal_(v);
403
+ key.push_back(k);
404
+ value.push_back(v);
405
+ kv_last_page_len.push_back(last_page_len);
406
+ kv_indptr.push_back(kv_indptr.back() + num_pages);
407
+ for (size_t j = 0; j < num_pages; ++j) {
408
+ kv_indices.push_back(page_counter++);
409
+ }
410
+ }
411
+
412
+ k_data.resize(page_counter * num_kv_heads * page_size * head_dim);
413
+ v_data.resize(page_counter * num_kv_heads * page_size * head_dim);
414
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv_cpu(
415
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout, k_data.data(), v_data.data(),
416
+ kv_indices.data(), kv_indptr.data(), kv_last_page_len.data());
417
+ cpu_reference::append_paged_kv_cache<DTypeKV, int32_t>(paged_kv_cpu, key, value, append_indptr);
418
+
419
+ // copy data to device
420
+ thrust::device_vector<DTypeKV> k_data_device(k_data);
421
+ thrust::device_vector<DTypeKV> v_data_device(v_data);
422
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
423
+ thrust::device_vector<int32_t> kv_indices_device(kv_indices);
424
+ thrust::device_vector<int32_t> kv_last_page_len_device(kv_last_page_len);
425
+
426
+ // create paged_kv object
427
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv = paged_kv_cpu;
428
+ paged_kv.k_data = thrust::raw_pointer_cast(k_data_device.data());
429
+ paged_kv.v_data = thrust::raw_pointer_cast(v_data_device.data());
430
+ paged_kv.indices = thrust::raw_pointer_cast(kv_indices_device.data());
431
+ paged_kv.indptr = thrust::raw_pointer_cast(kv_indptr_device.data());
432
+ paged_kv.last_page_len = thrust::raw_pointer_cast(kv_last_page_len_device.data());
433
+
434
+ std::vector<std::vector<DTypeQO>> q, o_ref;
435
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
436
+ int32_t q_len = q_lens[request_idx];
437
+ std::vector<DTypeQO> qi(q_len * num_qo_heads * head_dim);
438
+ utils::vec_normal_(qi);
439
+ q.push_back(qi);
440
+ }
441
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
442
+ int32_t q_len = q_lens[request_idx], kv_len = kv_lens[request_idx];
443
+ std::vector<DTypeQO> o_ref_i = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
444
+ q[request_idx], key[request_idx], value[request_idx], q_len, kv_len, num_qo_heads,
445
+ num_kv_heads, head_dim, /*causal=*/false, QKVLayout::kNHD,
446
+ /*pos_encoding_mode*/ PosEncodingMode::kNone);
447
+ o_ref.push_back(o_ref_i);
448
+ }
449
+
450
+ std::vector<DTypeQO> q_concat, o_concat_ref;
451
+ for (uint32_t request_idx = 0; request_idx < batch_size; ++request_idx) {
452
+ q_concat.insert(q_concat.end(), q[request_idx].begin(), q[request_idx].end());
453
+ o_concat_ref.insert(o_concat_ref.end(), o_ref[request_idx].begin(), o_ref[request_idx].end());
454
+ }
455
+ thrust::device_vector<DTypeQO> q_device(q_concat);
456
+
457
+ thrust::device_vector<int32_t> q_indptr_device(q_indptr);
458
+ thrust::device_vector<DTypeQO> o_device(o_concat_ref.size());
459
+
460
+ BatchPrefillHandler handler;
461
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
462
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
463
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
464
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
465
+
466
+ handler.Plan<DTypeQO, int32_t>(
467
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
468
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
469
+ q_indptr.data(), kv_indptr.data(), /*total_num_rows=*/q_indptr.back(), batch_size,
470
+ num_qo_heads, num_kv_heads, head_dim, page_size);
471
+
472
+ auto status = BatchPrefillWithPagedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
473
+ &handler, thrust::raw_pointer_cast(q_device.data()),
474
+ thrust::raw_pointer_cast(q_indptr_device.data()),
475
+ /*q_rope_offset=*/nullptr, paged_kv, thrust::raw_pointer_cast(o_device.data()),
476
+ /*lse=*/nullptr, num_qo_heads, /*causal=*/false,
477
+ /*pos_encoding_mode*/ PosEncodingMode::kNone);
478
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
479
+
480
+ thrust::host_vector<DTypeQO> o_host(o_device);
481
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
482
+ bool nan_detected = false;
483
+ for (size_t i = 0; i < o_concat_ref.size(); ++i) {
484
+ if (std::isnan(float(o_host[i]))) {
485
+ nan_detected = true;
486
+ }
487
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
488
+ (!utils::isclose(float(o_host[i]), float(o_concat_ref[i]), 1e-3, 1e-3));
489
+ }
490
+ float result_accuracy =
491
+ 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) / max(float(o_concat_ref.size()), 1.f);
492
+ std::cout << "batch_size=" << batch_size << ", page_size=" << page_size
493
+ << ", num_qo_heads=" << num_qo_heads << ", num_kv_heads=" << num_kv_heads
494
+ << ", head_dim=" << head_dim << ", result_accuracy=" << result_accuracy << std::endl;
495
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
496
+ EXPECT_EQ(nan_detected, false) << "NaN detected in output.";
497
+ }
498
+
499
+ template <typename DTypeQO, typename DTypeKV>
500
+ void _TestBatchPagedPrefillKernelLongContextCorrectness(size_t num_kv_heads, size_t num_qo_heads,
501
+ size_t page_size, size_t head_dim,
502
+ bool causal,
503
+ PosEncodingMode pos_encoding_mode,
504
+ bool use_fp16_qk_reduction) {
505
+ std::vector<std::vector<std::vector<DTypeKV>>> keys, values;
506
+ std::vector<int32_t> q_lens{33}, kv_lens{32768};
507
+ std::vector<int32_t> q_indptr{0, 33};
508
+ std::vector<int32_t> append_indptr{0, 32768};
509
+ std::vector<DTypeKV> k_data;
510
+ std::vector<DTypeKV> v_data;
511
+ std::vector<int32_t> kv_indptr{0};
512
+ std::vector<int32_t> kv_indices;
513
+ std::vector<int32_t> kv_last_page_len;
514
+ size_t page_counter = 0;
515
+
516
+ size_t num_pages = (kv_lens[0] + page_size - 1) / page_size;
517
+ size_t last_page_len = (kv_lens[0] - 1) % page_size + 1;
518
+ std::vector<DTypeKV> k(kv_lens[0] * num_kv_heads * head_dim),
519
+ v(kv_lens[0] * num_kv_heads * head_dim);
520
+ utils::vec_normal_(k);
521
+ utils::vec_normal_(v);
522
+ kv_last_page_len.push_back(last_page_len);
523
+ kv_indptr.push_back(kv_indptr.back() + num_pages);
524
+ for (size_t j = 0; j < num_pages; ++j) {
525
+ kv_indices.push_back(page_counter++);
526
+ }
527
+
528
+ k_data.resize(page_counter * 1 * num_kv_heads * page_size * head_dim);
529
+ v_data.resize(page_counter * 1 * num_kv_heads * page_size * head_dim);
530
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv_cpu(
531
+ num_kv_heads, page_size, head_dim, 1, kv_layout, k_data.data(), v_data.data(),
532
+ kv_indices.data(), kv_indptr.data(), kv_last_page_len.data());
533
+ cpu_reference::append_paged_kv_cache<DTypeKV, int32_t>(paged_kv_cpu, {k}, {v}, append_indptr);
534
+
535
+ // copy data to device
536
+ thrust::device_vector<DTypeKV> k_data_device(k_data);
537
+ thrust::device_vector<DTypeKV> v_data_device(v_data);
538
+ thrust::device_vector<int32_t> kv_indptr_device(kv_indptr);
539
+ thrust::device_vector<int32_t> kv_indices_device(kv_indices);
540
+ thrust::device_vector<int32_t> kv_last_page_len_device(kv_last_page_len);
541
+
542
+ // create paged_kv object
543
+ flashinfer::paged_kv_t<DTypeKV, int32_t> paged_kv = paged_kv_cpu;
544
+ paged_kv.k_data = thrust::raw_pointer_cast(k_data_device.data());
545
+ paged_kv.v_data = thrust::raw_pointer_cast(v_data_device.data());
546
+ paged_kv.indices = thrust::raw_pointer_cast(kv_indices_device.data());
547
+ paged_kv.indptr = thrust::raw_pointer_cast(kv_indptr_device.data());
548
+ paged_kv.last_page_len = thrust::raw_pointer_cast(kv_last_page_len_device.data());
549
+
550
+ // create one-hot queries
551
+ std::vector<DTypeQO> q(q_lens[0] * num_qo_heads * head_dim);
552
+ utils::vec_normal_(q);
553
+
554
+ std::vector<DTypeQO> o_ref = cpu_reference::single_mha<DTypeQO, DTypeKV, DTypeQO>(
555
+ q, k, v, q_lens[0], kv_lens[0], num_qo_heads, num_kv_heads, head_dim, causal, QKVLayout::kNHD,
556
+ pos_encoding_mode);
557
+
558
+ thrust::device_vector<int32_t> q_indptr_device(q_indptr);
559
+ thrust::device_vector<DTypeQO> q_device(q);
560
+ thrust::device_vector<DTypeQO> o_device(q_lens[0] * num_qo_heads * head_dim);
561
+
562
+ BatchPrefillHandler handler;
563
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
564
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
565
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
566
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
567
+
568
+ handler.Plan<DTypeQO, int32_t>(
569
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
570
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
571
+ append_indptr.data(), kv_indptr.data(), /*total_num_rows=*/append_indptr.back(),
572
+ /*batch_size=*/1, num_qo_heads, num_kv_heads, head_dim, page_size);
573
+
574
+ auto status = BatchPrefillWithPagedKVCacheWrapper<DTypeQO, DTypeKV, DTypeQO, int32_t>(
575
+ &handler, thrust::raw_pointer_cast(q_device.data()),
576
+ thrust::raw_pointer_cast(q_indptr_device.data()),
577
+ /*q_rope_offset=*/nullptr, paged_kv, thrust::raw_pointer_cast(o_device.data()),
578
+ /*lse=*/nullptr, num_qo_heads, causal, pos_encoding_mode, use_fp16_qk_reduction);
579
+ EXPECT_EQ(status, cudaSuccess) << "CUDA error: " + std::string(cudaGetErrorString(status));
580
+
581
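+ // Copy the result back and count elements outside atol=1e-3 / rtol=1e-3 of the CPU reference;
+ // the test requires more than 99% of the elements to match and no NaNs in the output.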
+ thrust::host_vector<DTypeQO> o_host(o_device);
582
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
583
+ bool nan_detected = false;
584
+ for (size_t i = 0; i < q_lens[0] * num_qo_heads * head_dim; ++i) {
585
+ if (std::isnan(float(o_host[i]))) {
586
+ nan_detected = true;
587
+ }
588
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
589
+ (!utils::isclose(float(o_host[i]), float(o_ref[i]), 1e-3, 1e-3));
590
+ }
591
+ float result_accuracy = 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) /
592
+ max(float(q_lens[0] * num_qo_heads * head_dim), 1.f);
593
+ std::cout << "page_size=" << page_size << ", num_qo_heads=" << num_qo_heads
594
+ << ", num_kv_heads=" << num_kv_heads << ", q_len=" << q_lens[0]
595
+ << ", kv_len=" << kv_lens[0] << ", head_dim=" << head_dim << ", causal=" << causal
596
+ << ", pos_encoding_mode=" << PosEncodingModeToString(pos_encoding_mode)
597
+ << ", result_accuracy=" << result_accuracy << std::endl;
598
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
599
+ EXPECT_EQ(nan_detected, false) << "NaN detected in output.";
600
+ }
601
+
602
+ template <typename T>
603
+ void TestBatchPagedPrefillKernelOneHotCorrectness(bool use_fp16_qk_reduction) {
604
+ for (size_t num_kv_heads : {4, 8, 32}) {
605
+ for (size_t num_qo_heads : {32}) {
606
+ for (size_t page_size : {1, 16}) {
607
+ for (size_t head_dim : {64, 128, 256}) {
608
+ for (size_t causal : {false, true}) {
609
+ for (size_t pos_encoding_mode : {0, 1}) {
610
+ _TestBatchPagedPrefillKernelOneHotCorrectness<T, T>(
611
+ num_kv_heads, num_qo_heads, page_size, head_dim, causal,
612
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
613
+ }
614
+ }
615
+ }
616
+ }
617
+ }
618
+ }
619
+ }
620
+
621
+ template <typename T>
622
+ void TestBatchPagedPrefillKernelShortContextCorrectness(bool use_fp16_qk_reduction) {
623
+ for (size_t num_kv_heads : {4, 8, 32}) {
624
+ for (size_t num_qo_heads : {32}) {
625
+ for (size_t page_size : {1, 16}) {
626
+ for (size_t head_dim : {64, 128, 256}) {
627
+ for (size_t causal : {false, true}) {
628
+ for (size_t pos_encoding_mode : {0, 1}) {
629
+ _TestBatchPagedPrefillKernelShortContextCorrectness<T, T>(
630
+ num_kv_heads, num_qo_heads, page_size, head_dim, causal,
631
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
632
+ }
633
+ }
634
+ }
635
+ }
636
+ }
637
+ }
638
+ }
639
+
640
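+ // FP8 variant: the KV cache is stored in FP8 while Q/O stay in half precision, and only
+ // pos_encoding_mode 0 (no positional encoding applied inside the kernel) is exercised.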
+ template <typename DTypeKV>
641
+ void TestBatchPagedPrefillFP8KernelShortContextCorrectness(bool use_fp16_qk_reduction) {
642
+ for (size_t num_kv_heads : {4, 8, 32}) {
643
+ for (size_t num_qo_heads : {32}) {
644
+ for (size_t page_size : {1, 16}) {
645
+ for (size_t head_dim : {64, 128, 256}) {
646
+ for (size_t causal : {false, true}) {
647
+ for (size_t pos_encoding_mode : {0}) {
648
+ _TestBatchPagedPrefillKernelShortContextCorrectness<half, DTypeKV>(
649
+ num_kv_heads, num_qo_heads, page_size, head_dim, causal,
650
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
651
+ }
652
+ }
653
+ }
654
+ }
655
+ }
656
+ }
657
+ }
658
+
659
+ template <typename T>
660
+ void TestBatchPagedPrefillKernelLongContextCorrectness(bool use_fp16_qk_reduction) {
661
+ for (size_t num_kv_heads : {1, 2, 8}) {
662
+ for (size_t group_size : {1, 3, 4, 5, 6, 7, 8}) {
663
+ size_t num_qo_heads = num_kv_heads * group_size;
664
+ for (size_t page_size : {1, 16}) {
665
+ for (size_t head_dim : {64, 128, 256}) {
666
+ for (size_t causal : {false, true}) {
667
+ for (size_t pos_encoding_mode : {0, 1}) {
668
+ _TestBatchPagedPrefillKernelLongContextCorrectness<T, T>(
669
+ num_kv_heads, num_qo_heads, page_size, head_dim, causal,
670
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
671
+ }
672
+ }
673
+ }
674
+ }
675
+ }
676
+ }
677
+ }
678
+
679
+ template <typename DTypeKV>
680
+ void TestBatchPagedPrefillFP8KernelLongContextCorrectness(bool use_fp16_qk_reduction) {
681
+ for (size_t num_kv_heads : {1, 2, 8}) {
682
+ for (size_t group_size : {1, 3, 4, 5, 6, 7, 8}) {
683
+ size_t num_qo_heads = num_kv_heads * group_size;
684
+ for (size_t page_size : {1, 16}) {
685
+ for (size_t head_dim : {64, 128, 256}) {
686
+ for (size_t causal : {false, true}) {
687
+ for (size_t pos_encoding_mode : {0}) {
688
+ _TestBatchPagedPrefillKernelLongContextCorrectness<half, DTypeKV>(
689
+ num_kv_heads, num_qo_heads, page_size, head_dim, causal,
690
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
691
+ }
692
+ }
693
+ }
694
+ }
695
+ }
696
+ }
697
+ }
698
+
699
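+ // Edge-case sweep: some requests have empty KV caches (kv_len drawn from [0, kv_len_max])
+ // and very short queries, exercising the zero-length-context path of the batched prefill.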
+ template <typename T>
700
+ void TestBatchPagedPrefillKernelZeroContextCorrectness(bool use_fp16_qk_reduction) {
701
+ for (size_t batch_size : {1, 4, 7, 11, 19, 37, 99}) {
702
+ for (size_t num_kv_heads : {1, 4}) {
703
+ for (size_t group_size : {1, 8}) {
704
+ size_t num_qo_heads = num_kv_heads * group_size;
705
+ for (size_t page_size : {1, 16}) {
706
+ for (size_t head_dim : {64, 128, 256}) {
707
+ for (size_t kv_len_max : {0, 3}) {
708
+ _TestBatchPagedPrefillKernelQMinMaxKVMinMaxCorrectness<T, T>(
709
+ batch_size, num_kv_heads, num_qo_heads, page_size, head_dim,
710
+ use_fp16_qk_reduction,
711
+ /*q_len_min=*/1, /*q_len_max=*/3, /*kv_len_min=*/0, kv_len_max);
712
+ }
713
+ }
714
+ }
715
+ }
716
+ }
717
+ }
718
+ }
719
+
720
+ template <typename T>
721
+ void TestBatchRaggedPrefillKernelCorrectness(bool use_fp16_qk_reduction) {
722
+ for (size_t num_kv_heads : {4, 8, 32}) {
723
+ for (size_t num_qo_heads : {32}) {
724
+ for (size_t head_dim : {64, 128, 256}) {
725
+ for (size_t causal : {false, true}) {
726
+ for (size_t pos_encoding_mode : {0, 1}) {
727
+ _TestBatchRaggedPrefillKernelCorrectness<T, T>(
728
+ num_kv_heads, num_qo_heads, head_dim, causal, PosEncodingMode(pos_encoding_mode),
729
+ use_fp16_qk_reduction);
730
+ }
731
+ }
732
+ }
733
+ }
734
+ }
735
+ }
736
+
737
+ template <typename DTypeKV>
738
+ void TestBatchRaggedPrefillFP8KernelCorrectness(bool use_fp16_qk_reduction) {
739
+ for (size_t num_kv_heads : {4, 8, 32}) {
740
+ for (size_t num_qo_heads : {32}) {
741
+ for (size_t head_dim : {64, 128, 256}) {
742
+ for (size_t causal : {false, true}) {
743
+ for (size_t pos_encoding_mode : {0}) {
744
+ _TestBatchRaggedPrefillKernelCorrectness<half, DTypeKV>(
745
+ num_kv_heads, num_qo_heads, head_dim, causal, PosEncodingMode(pos_encoding_mode),
746
+ use_fp16_qk_reduction);
747
+ }
748
+ }
749
+ }
750
+ }
751
+ }
752
+ }
753
+
754
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillShortContextTestFP16) {
755
+ TestBatchPagedPrefillKernelShortContextCorrectness<half>(false);
756
+ }
757
+
758
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillShortContextTestFP16QKHalfAccum) {
759
+ TestBatchPagedPrefillKernelShortContextCorrectness<half>(true);
760
+ }
761
+
762
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillLongContextTestFP16) {
763
+ TestBatchPagedPrefillKernelLongContextCorrectness<half>(false);
764
+ }
765
+
766
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillZeroContextTestFP16) {
767
+ TestBatchPagedPrefillKernelZeroContextCorrectness<half>(false);
768
+ }
769
+
770
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillLongContextTestFP16QKHalfAccum) {
771
+ TestBatchPagedPrefillKernelLongContextCorrectness<half>(true);
772
+ }
773
+
774
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillKernelCorrectnessTestOneHotFP16) {
775
+ TestBatchPagedPrefillKernelOneHotCorrectness<half>(false);
776
+ }
777
+
778
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillKernelCorrectnessTestOneHotFP16QKHalfAccum) {
779
+ TestBatchPagedPrefillKernelOneHotCorrectness<half>(true);
780
+ }
781
+
782
+ TEST(FlashInferCorrectnessTest, BatchRaggedPrefillTestFP16) {
783
+ TestBatchRaggedPrefillKernelCorrectness<half>(false);
784
+ }
785
+
786
+ TEST(FlashInferCorrectnessTest, BatchRaggedPrefillTestFP16QKHalfAccum) {
787
+ TestBatchRaggedPrefillKernelCorrectness<half>(true);
788
+ }
789
+
790
+ #ifdef FLASHINFER_ENABLE_FP8_E4M3
791
+
792
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillShortContextTestE4M3) {
793
+ TestBatchPagedPrefillFP8KernelShortContextCorrectness<__nv_fp8_e4m3>(false);
794
+ }
795
+
796
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillLongContextTestE4M3) {
797
+ TestBatchPagedPrefillFP8KernelLongContextCorrectness<__nv_fp8_e4m3>(false);
798
+ }
799
+
800
+ #endif
801
+
802
+ #ifdef FLASHINFER_ENABLE_FP8_E5M2
803
+
804
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillShortContextTestE5M2) {
805
+ TestBatchPagedPrefillFP8KernelShortContextCorrectness<__nv_fp8_e5m2>(false);
806
+ }
807
+
808
+ TEST(FlashInferCorrectnessTest, BatchPagedPrefillLongContextTestE5M2) {
809
+ TestBatchPagedPrefillFP8KernelLongContextCorrectness<__nv_fp8_e5m2>(false);
810
+ }
811
+ #endif
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_cascade.cu ADDED
@@ -0,0 +1,657 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <flashinfer/attention/cascade.cuh>
19
+
20
+ #include "flashinfer_ops.cuh"
21
+ #include "utils.h"
22
+
23
+ using namespace flashinfer;
24
+ constexpr QKVLayout kv_layout = QKVLayout::kHND;
25
+
26
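+ // Trial-division primality check; used below only to pick alternating fill values (+10 / -10)
+ // for the sparse-S test patterns.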
+ bool is_prime(int x) {
27
+ for (int i = 2; i * i <= x; ++i) {
28
+ if (x % i == 0) return false;
29
+ }
30
+ return true;
31
+ }
32
+
33
+ template <typename T>
34
+ void _TestVariableLengthMergeKernelCorrectness(size_t seq_len, size_t num_heads, size_t head_dim,
35
+ bool sparse_s) {
36
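+ // Each position j carries lengths[j] partial attention states. The test checks that merging
+ // the ragged layout (Method 1) matches merging a padded layout (Method 0) in which missing
+ // entries get a large negative score (-5e4) so they contribute essentially zero weight.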
+ const uint32_t max_num_index_sets = 512;
37
+ std::vector<int32_t> lengths(seq_len);
38
+ utils::vec_randint_(lengths, 1, max_num_index_sets);
39
+ std::vector<int32_t> indptr{0};
40
+ for (size_t i = 0; i < seq_len; ++i) {
41
+ indptr.push_back(indptr.back() + lengths[i]);
42
+ }
43
+ std::vector<T> V_padded_host(seq_len * max_num_index_sets * num_heads * head_dim);
44
+ std::vector<T> V_ragged_host(indptr.back() * num_heads * head_dim);
45
+ std::vector<float> S_padded_host(seq_len * max_num_index_sets * num_heads);
46
+ std::vector<float> S_ragged_host(indptr.back() * num_heads);
47
+
48
+ utils::vec_normal_(V_ragged_host);
49
+ for (uint32_t j = 0; j < seq_len; ++j) {
50
+ std::copy(V_ragged_host.begin() + indptr[j] * num_heads * head_dim,
51
+ V_ragged_host.begin() + indptr[j + 1] * num_heads * head_dim,
52
+ V_padded_host.begin() + j * max_num_index_sets * num_heads * head_dim);
53
+ }
54
+ if (sparse_s) {
55
+ for (uint32_t i = 0; i < max_num_index_sets; ++i) {
56
+ float fill_val = is_prime(i) ? 10 : -10;
57
+ for (uint32_t j = 0; j < seq_len; ++j) {
58
+ if (i < lengths[j]) {
59
+ std::fill(S_ragged_host.begin() + (indptr[j] + i) * num_heads,
60
+ S_ragged_host.begin() + (indptr[j] + i + 1) * num_heads, fill_val);
61
+ std::fill(S_padded_host.begin() + (j * max_num_index_sets + i) * num_heads,
62
+ S_padded_host.begin() + (j * max_num_index_sets + i + 1) * num_heads, fill_val);
63
+ } else {
64
+ std::fill(S_padded_host.begin() + (j * max_num_index_sets + i) * num_heads,
65
+ S_padded_host.begin() + (j * max_num_index_sets + i + 1) * num_heads, -5e4);
66
+ }
67
+ }
68
+ }
69
+ } else {
70
+ utils::vec_uniform_(S_ragged_host, -10, 10);
71
+ for (uint32_t j = 0; j < seq_len; ++j) {
72
+ std::copy(S_ragged_host.begin() + indptr[j] * num_heads,
73
+ S_ragged_host.begin() + indptr[j + 1] * num_heads,
74
+ S_padded_host.begin() + (j * max_num_index_sets) * num_heads);
75
+ std::fill(
76
+ S_padded_host.begin() + (j * max_num_index_sets + indptr[j + 1] - indptr[j]) * num_heads,
77
+ S_padded_host.begin() + (j + 1) * max_num_index_sets * num_heads, -5e4);
78
+ }
79
+ }
80
+
81
+ thrust::device_vector<T> V_padded_device(V_padded_host);
82
+ thrust::device_vector<T> V_ragged_device(V_ragged_host);
83
+ thrust::device_vector<float> S_padded_device(S_padded_host);
84
+ thrust::device_vector<float> S_ragged_device(S_ragged_host);
85
+ thrust::device_vector<int32_t> indptr_device(indptr);
86
+ thrust::device_vector<T> V_merged_0_device(seq_len * num_heads * head_dim);
87
+ thrust::device_vector<T> V_merged_1_device(seq_len * num_heads * head_dim);
88
+ thrust::device_vector<float> S_merged_0_device(seq_len * num_heads);
89
+ thrust::device_vector<float> S_merged_1_device(seq_len * num_heads);
90
+
91
+ // Method 0: use MergeStates on padded data
92
+ MergeStates(thrust::raw_pointer_cast(V_padded_device.data()),
93
+ thrust::raw_pointer_cast(S_padded_device.data()),
94
+ thrust::raw_pointer_cast(V_merged_0_device.data()),
95
+ thrust::raw_pointer_cast(S_merged_0_device.data()), max_num_index_sets, seq_len,
96
+ num_heads, head_dim);
97
+
98
+ // Method 1: use VariableLengthMergeStates on ragged data
99
+ VariableLengthMergeStates(thrust::raw_pointer_cast(V_ragged_device.data()),
100
+ thrust::raw_pointer_cast(S_ragged_device.data()),
101
+ thrust::raw_pointer_cast(indptr_device.data()),
102
+ thrust::raw_pointer_cast(V_merged_1_device.data()),
103
+ thrust::raw_pointer_cast(S_merged_1_device.data()), seq_len, nullptr,
104
+ num_heads, head_dim);
105
+
106
+ thrust::host_vector<T> V_merged_0_host(V_merged_0_device), V_merged_1_host(V_merged_1_device);
107
+ thrust::host_vector<float> S_merged_0_host(S_merged_0_device), S_merged_1_host(S_merged_1_device);
108
+
109
+ // Compare results
110
+ size_t num_V_result_errors_atol_1e_3_rtol_1e_3 = 0, num_S_result_errors_atol_1e_3_rtol_1e_3 = 0;
111
+ for (size_t i = 0; i < seq_len * num_heads * head_dim; ++i) {
112
+ EXPECT_FALSE(std::isnan(float(V_merged_0_host[i]))) << "V_merged_0_host[" << i << "] is nan";
113
+ EXPECT_FALSE(std::isnan(float(V_merged_1_host[i]))) << "V_merged_1_host[" << i << "] is nan";
114
+ num_V_result_errors_atol_1e_3_rtol_1e_3 +=
115
+ (!utils::isclose(float(V_merged_0_host[i]), float(V_merged_1_host[i]), 1e-3, 1e-3));
116
+ }
117
+ for (size_t i = 0; i < seq_len * num_heads; ++i) {
118
+ EXPECT_FALSE(std::isnan(float(S_merged_0_host[i]))) << "S_merged_0_host[" << i << "] is nan";
119
+ EXPECT_FALSE(std::isnan(float(S_merged_1_host[i]))) << "S_merged_1_host[" << i << "] is nan";
120
+ num_S_result_errors_atol_1e_3_rtol_1e_3 +=
121
+ (!utils::isclose(float(S_merged_0_host[i]), float(S_merged_1_host[i]), 1e-3, 1e-3));
122
+ }
123
+ float V_result_accuracy =
124
+ 1.0 - float(num_V_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads * head_dim);
125
+ float S_result_accuracy =
126
+ 1.0 - float(num_S_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads);
127
+ std::cout << "seq_len=" << seq_len << ", num_heads=" << num_heads << ", head_dim=" << head_dim
128
+ << ", sparse_s=" << sparse_s
129
+ << ", V accuracy (atol=1e-3, rtol=1e-3)=" << V_result_accuracy
130
+ << ", S accuracy (atol=1e-3, rtol=1e-3)=" << S_result_accuracy << std::endl;
131
+
132
+ EXPECT_GT(V_result_accuracy, 0.99) << "V result correctness test failed.";
133
+ EXPECT_GT(S_result_accuracy, 0.99) << "S result correctness test failed.";
134
+ }
135
+
136
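+ // Checks that VariableLengthMergeStates with an explicit device-side seq_len (a padded launch
+ // over max_seq_len rows) matches a launch sized to exactly seq_len rows.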
+ template <typename T>
137
+ void _TestVariableLengthMergeKernelPaddedCorrectness(size_t max_seq_len, size_t seq_len) {
138
+ ASSERT_LE(seq_len, max_seq_len);
139
+
140
+ const size_t num_heads = 4;
141
+ const size_t head_dim = 64;
142
+ const uint32_t max_num_index_sets = 512;
143
+
144
+ std::vector<int32_t> lengths(max_seq_len);
145
+ utils::vec_randint_(lengths, 1, max_num_index_sets);
146
+ std::vector<int32_t> indptr(max_seq_len + 1, 0);
147
+ for (size_t i = 0; i < seq_len; ++i) {
148
+ indptr[i + 1] = indptr[i] + lengths[i];
149
+ }
150
+
151
+ uint32_t last_indptr = indptr[seq_len];
152
+ std::vector<T> V_ragged_host(last_indptr * num_heads * head_dim);
153
+ std::vector<float> S_ragged_host(last_indptr * num_heads);
154
+
155
+ utils::vec_normal_(V_ragged_host);
156
+ utils::vec_uniform_(S_ragged_host, -10, 10);
157
+
158
+ thrust::device_vector<T> V_ragged_device(V_ragged_host);
159
+ thrust::device_vector<float> S_ragged_device(S_ragged_host);
160
+ thrust::device_vector<int32_t> indptr_device(indptr);
161
+ thrust::device_vector<T> V_merged_0_device(max_seq_len * num_heads * head_dim);
162
+ thrust::device_vector<T> V_merged_1_device(max_seq_len * num_heads * head_dim);
163
+ thrust::device_vector<float> S_merged_0_device(max_seq_len * num_heads);
164
+ thrust::device_vector<float> S_merged_1_device(max_seq_len * num_heads);
165
+ thrust::device_vector<uint32_t> seq_len_device(
166
+ std::vector<uint32_t>{static_cast<uint32_t>(seq_len)});
167
+
168
+ // Reference: use VariableLengthMergeStates on the precisely-sized input.
169
+ VariableLengthMergeStates(thrust::raw_pointer_cast(V_ragged_device.data()),
170
+ thrust::raw_pointer_cast(S_ragged_device.data()),
171
+ thrust::raw_pointer_cast(indptr_device.data()),
172
+ thrust::raw_pointer_cast(V_merged_0_device.data()),
173
+ thrust::raw_pointer_cast(S_merged_0_device.data()), seq_len, nullptr,
174
+ num_heads, head_dim);
175
+ // Expected: use VariableLengthMergeStates on a padded input
176
+ VariableLengthMergeStates(thrust::raw_pointer_cast(V_ragged_device.data()),
177
+ thrust::raw_pointer_cast(S_ragged_device.data()),
178
+ thrust::raw_pointer_cast(indptr_device.data()),
179
+ thrust::raw_pointer_cast(V_merged_1_device.data()),
180
+ thrust::raw_pointer_cast(S_merged_1_device.data()), max_seq_len,
181
+ thrust::raw_pointer_cast(seq_len_device.data()), num_heads, head_dim);
182
+
183
+ thrust::host_vector<T> V_merged_0_host(V_merged_0_device), V_merged_1_host(V_merged_1_device);
184
+ thrust::host_vector<float> S_merged_0_host(S_merged_0_device), S_merged_1_host(S_merged_1_device);
185
+
186
+ // Compare results
187
+ size_t num_V_result_errors_atol_1e_3_rtol_1e_3 = 0, num_S_result_errors_atol_1e_3_rtol_1e_3 = 0;
188
+ for (size_t i = 0; i < seq_len * num_heads * head_dim; ++i) {
189
+ EXPECT_FALSE(std::isnan(float(V_merged_1_host[i]))) << "V_merged_1_host[" << i << "] is nan";
190
+ num_V_result_errors_atol_1e_3_rtol_1e_3 +=
191
+ (!utils::isclose(float(V_merged_0_host[i]), float(V_merged_1_host[i]), 1e-3, 1e-3));
192
+ }
193
+ for (size_t i = 0; i < seq_len * num_heads; ++i) {
194
+ EXPECT_FALSE(std::isnan(float(S_merged_0_host[i]))) << "S_merged_0_host[" << i << "] is nan";
195
+ EXPECT_FALSE(std::isnan(float(S_merged_1_host[i]))) << "S_merged_1_host[" << i << "] is nan";
196
+ num_S_result_errors_atol_1e_3_rtol_1e_3 +=
197
+ (!utils::isclose(float(S_merged_0_host[i]), float(S_merged_1_host[i]), 1e-3, 1e-3));
198
+ }
199
+ float V_result_accuracy =
200
+ 1.0 - float(num_V_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads * head_dim);
201
+ float S_result_accuracy =
202
+ 1.0 - float(num_S_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads);
203
+ std::cout << "seq_len=" << seq_len << ", num_heads=" << num_heads << ", head_dim=" << head_dim
204
+ << ", V accuracy (atol=1e-3, rtol=1e-3)=" << V_result_accuracy
205
+ << ", S accuracy (atol=1e-3, rtol=1e-3)=" << S_result_accuracy << std::endl;
206
+
207
+ EXPECT_GT(V_result_accuracy, 0.99) << "V result correctness test failed.";
208
+ EXPECT_GT(S_result_accuracy, 0.99) << "S result correctness test failed.";
209
+ }
210
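+ // Compares two ways of merging num_index_sets attention states per position: Method 0 folds
+ // them in one at a time with MergeState/MergeStateInPlace on a transposed fp32 copy, while
+ // Method 1 merges them all at once with MergeStates.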
+
211
+ template <typename T>
212
+ void _TestMergeKernelCorrectness(size_t num_index_sets, size_t seq_len, size_t num_heads,
213
+ size_t head_dim, bool sparse_s) {
214
+ std::vector<T> V_host(seq_len * num_index_sets * num_heads * head_dim);
215
+ std::vector<float> V_host_trans_f32(num_index_sets * seq_len * num_heads * head_dim);
216
+ std::vector<float> S_host(seq_len * num_index_sets * num_heads);
217
+ std::vector<float> S_host_trans(num_index_sets * seq_len * num_heads);
218
+
219
+ utils::vec_normal_(V_host);
220
+ if (sparse_s) {
221
+ for (uint32_t i = 0; i < num_index_sets; ++i) {
222
+ float fill_val = is_prime(i) ? 10 : -10;
223
+ for (uint32_t j = 0; j < seq_len; ++j) {
224
+ for (uint32_t k = 0; k < num_heads; ++k) {
225
+ S_host[(j * num_index_sets + i) * num_heads + k] = fill_val;
226
+ }
227
+ }
228
+ }
229
+ } else {
230
+ utils::vec_uniform_(S_host, -10, 10);
231
+ }
232
+
233
+ for (uint32_t i = 0; i < num_index_sets; ++i) {
234
+ for (uint32_t j = 0; j < seq_len; ++j) {
235
+ std::transform(V_host.begin() + (j * num_index_sets + i) * num_heads * head_dim,
236
+ V_host.begin() + (j * num_index_sets + i + 1) * num_heads * head_dim,
237
+ V_host_trans_f32.begin() + (i * seq_len + j) * num_heads * head_dim,
238
+ [](T x) { return static_cast<float>(x); });
239
+ std::copy(S_host.begin() + (j * num_index_sets + i) * num_heads,
240
+ S_host.begin() + (j * num_index_sets + i + 1) * num_heads,
241
+ S_host_trans.begin() + (i * seq_len + j) * num_heads);
242
+ }
243
+ }
244
+
245
+ thrust::device_vector<T> V_device(V_host);
246
+ thrust::device_vector<float> V_device_trans_f32(V_host_trans_f32);
247
+ thrust::device_vector<float> S_device(S_host);
248
+ thrust::device_vector<float> S_device_trans(S_host_trans);
249
+
250
+ thrust::device_vector<float> V_merged_0_device(seq_len * num_heads * head_dim);
251
+ thrust::device_vector<float> S_merged_0_device(seq_len * num_heads);
252
+ thrust::device_vector<T> V_merged_1_device(seq_len * num_heads * head_dim);
253
+ thrust::device_vector<float> S_merged_1_device(seq_len * num_heads);
254
+
255
+ if (num_index_sets > 1) {
256
+ // Method 0: use MergeState
257
+ MergeState(thrust::raw_pointer_cast(V_device_trans_f32.data()),
258
+ thrust::raw_pointer_cast(S_device_trans.data()),
259
+ thrust::raw_pointer_cast(V_device_trans_f32.data() + seq_len * num_heads * head_dim),
260
+ thrust::raw_pointer_cast(S_device_trans.data() + seq_len * num_heads),
261
+ thrust::raw_pointer_cast(V_merged_0_device.data()),
262
+ thrust::raw_pointer_cast(S_merged_0_device.data()), seq_len, num_heads, head_dim);
263
+ for (uint i = 2; i < num_index_sets; ++i) {
264
+ MergeStateInPlace(
265
+ thrust::raw_pointer_cast(V_merged_0_device.data()),
266
+ thrust::raw_pointer_cast(S_merged_0_device.data()),
267
+ thrust::raw_pointer_cast(V_device_trans_f32.data() + i * seq_len * num_heads * head_dim),
268
+ thrust::raw_pointer_cast(S_device_trans.data() + i * seq_len * num_heads), seq_len,
269
+ num_heads, head_dim);
270
+ }
271
+ } else {
272
+ V_merged_0_device = V_device;
273
+ S_merged_0_device = S_device;
274
+ }
275
+
276
+ // Method 1: use MergeStates
277
+ MergeStates(thrust::raw_pointer_cast(V_device.data()), thrust::raw_pointer_cast(S_device.data()),
278
+ thrust::raw_pointer_cast(V_merged_1_device.data()),
279
+ thrust::raw_pointer_cast(S_merged_1_device.data()), num_index_sets, seq_len,
280
+ num_heads, head_dim);
281
+
282
+ thrust::host_vector<float> V_merged_0_host(V_merged_0_device);
283
+ thrust::host_vector<T> V_merged_1_host(V_merged_1_device);
284
+ thrust::host_vector<float> S_merged_0_host(S_merged_0_device), S_merged_1_host(S_merged_1_device);
285
+ size_t num_V_result_errors_atol_1e_3_rtol_1e_3 = 0, num_S_result_errors_atol_1e_3_rtol_1e_3 = 0;
286
+ for (size_t i = 0; i < seq_len * num_heads * head_dim; ++i) {
287
+ EXPECT_FALSE(std::isnan(float(V_merged_0_host[i]))) << "V_merged_0_host[" << i << "] is nan";
288
+ EXPECT_FALSE(std::isnan(float(V_merged_1_host[i]))) << "V_merged_1_host[" << i << "] is nan";
289
+ num_V_result_errors_atol_1e_3_rtol_1e_3 +=
290
+ (!utils::isclose(float(V_merged_0_host[i]), float(V_merged_1_host[i]), 1e-3, 1e-3));
291
+ }
292
+ for (size_t i = 0; i < seq_len * num_heads; ++i) {
293
+ EXPECT_FALSE(std::isnan(float(S_merged_0_host[i]))) << "S_merged_0_host[" << i << "] is nan";
294
+ EXPECT_FALSE(std::isnan(float(S_merged_1_host[i]))) << "S_merged_1_host[" << i << "] is nan";
295
+ num_S_result_errors_atol_1e_3_rtol_1e_3 +=
296
+ (!utils::isclose(float(S_merged_0_host[i]), float(S_merged_1_host[i]), 1e-3, 1e-3));
297
+ }
298
+ float V_result_accuracy =
299
+ 1.0 - float(num_V_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads * head_dim);
300
+ float S_result_accuracy =
301
+ 1.0 - float(num_S_result_errors_atol_1e_3_rtol_1e_3) / (seq_len * num_heads);
302
+ std::cout << "num_index_sets=" << num_index_sets << ", seq_len=" << seq_len
303
+ << ", num_heads=" << num_heads << ", head_dim=" << head_dim << ", sparse_s=" << sparse_s
304
+ << ", V accuracy (atol=1e-3, rtol=1e-3)=" << V_result_accuracy
305
+ << ", S accuracy (atol=1e-3, rtol=1e-3)=" << S_result_accuracy << std::endl;
306
+ EXPECT_GT(V_result_accuracy, 0.99) << "V result correctness test failed.";
307
+ EXPECT_GT(S_result_accuracy, 0.99) << "S result correctness test failed.";
308
+ }
309
+
310
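+ // Two-level cascade decode: attention over the shared prefix is computed once for the whole
+ // batch with SinglePrefillWithKVCache, attention over each request's unique suffix uses the
+ // batched decode kernel, and the two partial states are combined with MergeStateInPlace using
+ // their log-sum-exp values. The result is compared against a single baseline decode over the
+ // combined paged KV cache.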
+ template <typename T>
311
+ void _TestTwoLevelSinglePrefixCascadeDecodeCorrectness(size_t batch_size,
312
+ size_t shared_prefix_length,
313
+ size_t unique_kv_length, size_t num_qo_heads,
314
+ size_t num_kv_heads, size_t head_dim) {
315
+ constexpr uint32_t page_size = 16;
316
+ auto [testcase_float_data, testcase_int_data] = utils::create_shared_prefix_testcase_data<T>(
317
+ batch_size, shared_prefix_length, unique_kv_length,
318
+ /*qo_append_length=*/1, num_qo_heads, num_kv_heads, head_dim, page_size);
319
+
320
+ std::vector<T> q_h = std::move(testcase_float_data[0]),
321
+ shared_k_h = std::move(testcase_float_data[1]),
322
+ shared_v_h = std::move(testcase_float_data[2]),
323
+ k_data_h = std::move(testcase_float_data[3]),
324
+ v_data_h = std::move(testcase_float_data[4]);
325
+
326
+ std::vector<int32_t> kv_indices_combined_h = std::move(testcase_int_data[1]),
327
+ kv_indices_unique_h = std::move(testcase_int_data[2]),
328
+ kv_indptr_combined_h = std::move(testcase_int_data[3]),
329
+ kv_indptr_unique_h = std::move(testcase_int_data[4]),
330
+ kv_last_page_len_combined_h = std::move(testcase_int_data[5]),
331
+ kv_last_page_len_unique_h = std::move(testcase_int_data[6]);
332
+
333
+ thrust::device_vector<T> shared_k_d(shared_k_h), shared_v_d(shared_v_h), k_data_d(k_data_h),
334
+ v_data_d(v_data_h), q_d(q_h), o_baseline_d(q_h.size()), o_cascade_0_d(q_h.size()),
335
+ o_cascade_1_d(q_h.size());
336
+ thrust::device_vector<T> tmp_0_d(16 * 1024 * 1024);
337
+ thrust::device_vector<float> lse_cascade_0_d(batch_size * num_qo_heads),
338
+ lse_cascade_1_d(batch_size * num_qo_heads);
339
+
340
+ thrust::device_vector<int32_t> kv_indptr_combined_d(kv_indptr_combined_h),
341
+ kv_indptr_unique_d(kv_indptr_unique_h), kv_indices_combined_d(kv_indices_combined_h),
342
+ kv_indices_unique_d(kv_indices_unique_h),
343
+ kv_last_page_len_combined_d(kv_last_page_len_combined_h),
344
+ kv_last_page_len_unique_d(kv_last_page_len_unique_h);
345
+
346
+ paged_kv_t<T, int32_t> paged_kv_baseline_d(
347
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
348
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
349
+ thrust::raw_pointer_cast(kv_indices_combined_d.data()),
350
+ thrust::raw_pointer_cast(kv_indptr_combined_d.data()),
351
+ thrust::raw_pointer_cast(kv_last_page_len_combined_d.data()));
352
+
353
+ paged_kv_t<T, int32_t> paged_kv_cascade_d(
354
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
355
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
356
+ thrust::raw_pointer_cast(kv_indices_unique_d.data()),
357
+ thrust::raw_pointer_cast(kv_indptr_unique_d.data()),
358
+ thrust::raw_pointer_cast(kv_last_page_len_unique_d.data()));
359
+
360
+ BatchDecodeHandler baseline_handler, cascade_handler;
361
+
362
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
363
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
364
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
365
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
366
+
367
+ BatchDecodeHandlerPlan<T, T, T, int32_t>(
368
+ &baseline_handler, (void*)thrust::raw_pointer_cast(float_buffer.data()),
369
+ float_workspace_size_in_bytes, (void*)thrust::raw_pointer_cast(int_buffer.data()),
370
+ int_workspace_size_in_bytes, kv_indptr_combined_h.data(), kv_last_page_len_combined_h.data(),
371
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size, PosEncodingMode::kNone);
372
+
373
+ BatchDecodeHandlerPlan<T, T, T, int32_t>(
374
+ &cascade_handler, (void*)thrust::raw_pointer_cast(float_buffer.data()),
375
+ float_workspace_size_in_bytes, (void*)thrust::raw_pointer_cast(int_buffer.data()),
376
+ int_workspace_size_in_bytes, kv_indptr_unique_h.data(), kv_last_page_len_unique_h.data(),
377
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size, PosEncodingMode::kNone);
378
+
379
+ // Compute result using baseline implementation
380
+ cudaError_t status = BatchDecodeWithPagedKVCacheWrapper<T, T, T, int32_t>(
381
+ &baseline_handler, thrust::raw_pointer_cast(q_d.data()),
382
+ /*q_rope_offset=*/nullptr, paged_kv_baseline_d, thrust::raw_pointer_cast(o_baseline_d.data()),
383
+ /*lse=*/nullptr, num_qo_heads, PosEncodingMode::kNone);
384
+
385
+ EXPECT_EQ(status, cudaSuccess) << "Baseline implementation failed with error: "
386
+ << cudaGetErrorString(status);
387
+
388
+ // Compute result using cascade implementation
389
+ status = SinglePrefillWithKVCache(
390
+ thrust::raw_pointer_cast(q_d.data()), thrust::raw_pointer_cast(shared_k_d.data()),
391
+ thrust::raw_pointer_cast(shared_v_d.data()), thrust::raw_pointer_cast(o_cascade_0_d.data()),
392
+ thrust::raw_pointer_cast(tmp_0_d.data()), thrust::raw_pointer_cast(lse_cascade_0_d.data()),
393
+ num_qo_heads, num_kv_heads, /*qo_len=*/batch_size, /*kv_len=*/shared_prefix_length, head_dim,
394
+ /*causal=*/false, /*kv_layout=*/QKVLayout::kNHD,
395
+ /*pos_encoding_mode=*/PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
396
+
397
+ EXPECT_EQ(status, cudaSuccess) << "Cascade implementation prefill failed with error: "
398
+ << cudaGetErrorString(status);
399
+
400
+ status = BatchDecodeWithPagedKVCacheWrapper<T, T, T, int32_t>(
401
+ &cascade_handler, thrust::raw_pointer_cast(q_d.data()),
402
+ /*q_rope_offset=*/nullptr, paged_kv_cascade_d, thrust::raw_pointer_cast(o_cascade_1_d.data()),
403
+ /*lse=*/thrust::raw_pointer_cast(lse_cascade_1_d.data()), num_qo_heads,
404
+ PosEncodingMode::kNone);
405
+
406
+ EXPECT_EQ(status, cudaSuccess) << "Cascade implementation decode failed with error: "
407
+ << cudaGetErrorString(status);
408
+
409
+ status = MergeStateInPlace(thrust::raw_pointer_cast(o_cascade_0_d.data()),
410
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()),
411
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
412
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()), batch_size,
413
+ num_qo_heads, head_dim);
414
+
415
+ EXPECT_EQ(status, cudaSuccess) << "Cascade implementation merge failed with error: "
416
+ << cudaGetErrorString(status);
417
+
418
+ thrust::host_vector<T> o_baseline_h(o_baseline_d), o_cascade_h(o_cascade_0_d);
419
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
420
+ for (size_t i = 0; i < o_baseline_h.size(); ++i) {
421
+ EXPECT_FALSE(std::isnan(float(o_baseline_h[i]))) << "o_baseline_h[" << i << "] is nan";
422
+ EXPECT_FALSE(std::isnan(float(o_cascade_h[i]))) << "o_cascade_h[" << i << "] is nan";
423
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
424
+ (!utils::isclose(float(o_baseline_h[i]), float(o_cascade_h[i]), 1e-3, 1e-3));
425
+ }
426
+ float result_accuracy =
427
+ 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) / float(o_baseline_h.size());
428
+ std::cout << "batch_size=" << batch_size << ", shared_prefix_length=" << shared_prefix_length
429
+ << ", unique_kv_length=" << unique_kv_length << ", num_qo_heads=" << num_qo_heads
430
+ << ", num_kv_heads=" << num_kv_heads << ", head_dim=" << head_dim
431
+ << ", result_accuracy (atol=1e-3, rtol=1e-3)=" << result_accuracy << std::endl;
432
+ EXPECT_GT(result_accuracy, 0.90) << "Result correctness test failed.";
433
+ }
434
+
435
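+ // Same two-level cascade structure as the decode test, but each request appends
+ // qo_append_length query tokens, so the unique-suffix pass uses the batched prefill kernel
+ // with causal masking instead of the decode kernel.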
+ template <typename T>
436
+ void _TestTwoLevelSinglePrefixCascadeAppendCorrectness(size_t batch_size,
437
+ size_t shared_prefix_length,
438
+ size_t unique_kv_length,
439
+ size_t qo_append_length, size_t num_qo_heads,
440
+ size_t num_kv_heads, size_t head_dim) {
441
+ constexpr uint32_t page_size = 16;
442
+
443
+ auto [testcase_float_data, testcase_int_data] = utils::create_shared_prefix_testcase_data<T>(
444
+ batch_size, shared_prefix_length, unique_kv_length, qo_append_length, num_qo_heads,
445
+ num_kv_heads, head_dim, page_size);
446
+
447
+ std::vector<T> q_h = std::move(testcase_float_data[0]),
448
+ shared_k_h = std::move(testcase_float_data[1]),
449
+ shared_v_h = std::move(testcase_float_data[2]),
450
+ k_data_h = std::move(testcase_float_data[3]),
451
+ v_data_h = std::move(testcase_float_data[4]);
452
+
453
+ std::vector<int32_t> qo_indptr_h = std::move(testcase_int_data[0]),
454
+ kv_indices_combined_h = std::move(testcase_int_data[1]),
455
+ kv_indices_unique_h = std::move(testcase_int_data[2]),
456
+ kv_indptr_combined_h = std::move(testcase_int_data[3]),
457
+ kv_indptr_unique_h = std::move(testcase_int_data[4]),
458
+ kv_last_page_len_combined_h = std::move(testcase_int_data[5]),
459
+ kv_last_page_len_unique_h = std::move(testcase_int_data[6]);
460
+
461
+ thrust::device_vector<T> shared_k_d(shared_k_h), shared_v_d(shared_v_h), k_data_d(k_data_h),
462
+ v_data_d(v_data_h), q_d(q_h), o_baseline_d(q_h.size()), o_cascade_0_d(q_h.size()),
463
+ o_cascade_1_d(q_h.size());
464
+ thrust::device_vector<T> tmp_0_d(16 * 1024 * 1024);
465
+ thrust::device_vector<float> lse_cascade_0_d((batch_size * qo_append_length) * num_qo_heads),
466
+ lse_cascade_1_d((batch_size * qo_append_length) * num_qo_heads);
467
+
468
+ thrust::device_vector<int32_t> qo_indptr_d(qo_indptr_h),
469
+ kv_indptr_combined_d(kv_indptr_combined_h), kv_indptr_unique_d(kv_indptr_unique_h),
470
+ kv_indices_combined_d(kv_indices_combined_h), kv_indices_unique_d(kv_indices_unique_h),
471
+ kv_last_page_len_combined_d(kv_last_page_len_combined_h),
472
+ kv_last_page_len_unique_d(kv_last_page_len_unique_h);
473
+
474
+ paged_kv_t<T, int32_t> paged_kv_baseline_d(
475
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
476
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
477
+ thrust::raw_pointer_cast(kv_indices_combined_d.data()),
478
+ thrust::raw_pointer_cast(kv_indptr_combined_d.data()),
479
+ thrust::raw_pointer_cast(kv_last_page_len_combined_d.data()));
480
+
481
+ paged_kv_t<T, int32_t> paged_kv_cascade_d(
482
+ num_kv_heads, page_size, head_dim, batch_size, kv_layout,
483
+ thrust::raw_pointer_cast(k_data_d.data()), thrust::raw_pointer_cast(v_data_d.data()),
484
+ thrust::raw_pointer_cast(kv_indices_unique_d.data()),
485
+ thrust::raw_pointer_cast(kv_indptr_unique_d.data()),
486
+ thrust::raw_pointer_cast(kv_last_page_len_unique_d.data()));
487
+
488
+ BatchPrefillHandler baseline_handler, cascade_handler;
489
+ size_t float_workspace_size_in_bytes = 32 * 1024 * 1024;
490
+ thrust::device_vector<char> float_buffer(float_workspace_size_in_bytes);
491
+ size_t int_workspace_size_in_bytes = 8 * 1024 * 1024;
492
+ thrust::device_vector<char> int_buffer(int_workspace_size_in_bytes);
493
+
494
+ baseline_handler.Plan<T, int32_t>(
495
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
496
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
497
+ qo_indptr_h.data(), kv_indptr_combined_h.data(), /*total_num_rows=*/qo_indptr_h.back(),
498
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size);
499
+ cascade_handler.Plan<T, int32_t>(
500
+ (void*)thrust::raw_pointer_cast(float_buffer.data()), float_workspace_size_in_bytes,
501
+ (void*)thrust::raw_pointer_cast(int_buffer.data()), int_workspace_size_in_bytes,
502
+ qo_indptr_h.data(), kv_indptr_unique_h.data(), /*total_num_rows=*/qo_indptr_h.back(),
503
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size);
504
+
505
+ cudaError_t status = BatchPrefillWithPagedKVCacheWrapper<T, T, T, int32_t>(
506
+ &baseline_handler, thrust::raw_pointer_cast(q_d.data()),
507
+ thrust::raw_pointer_cast(qo_indptr_d.data()),
508
+ /*q_rope_offset=*/nullptr, paged_kv_baseline_d, thrust::raw_pointer_cast(o_baseline_d.data()),
509
+ /*lse=*/nullptr, num_qo_heads, /*causal=*/true, PosEncodingMode::kNone,
510
+ /*use_fp16_qk_reduction=*/false);
511
+
512
+ EXPECT_EQ(status, cudaSuccess) << "Baseline implementation failed with error: "
513
+ << cudaGetErrorString(status);
514
+
515
+ status = SinglePrefillWithKVCache(
516
+ thrust::raw_pointer_cast(q_d.data()), thrust::raw_pointer_cast(shared_k_d.data()),
517
+ thrust::raw_pointer_cast(shared_v_d.data()), thrust::raw_pointer_cast(o_cascade_0_d.data()),
518
+ thrust::raw_pointer_cast(tmp_0_d.data()), thrust::raw_pointer_cast(lse_cascade_0_d.data()),
519
+ num_qo_heads, num_kv_heads, /*qo_len=*/batch_size * qo_append_length,
520
+ /*kv_len=*/shared_prefix_length, head_dim,
521
+ /*causal=*/false, /*kv_layout=*/QKVLayout::kNHD,
522
+ /*pos_encoding_mode=*/PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
523
+
524
+ EXPECT_EQ(status, cudaSuccess)
525
+ << "Cascade implementation shared prefix prefill failed with error: "
526
+ << cudaGetErrorString(status);
527
+
528
+ status = BatchPrefillWithPagedKVCacheWrapper<T, T, T, int32_t>(
529
+ &cascade_handler, thrust::raw_pointer_cast(q_d.data()),
530
+ thrust::raw_pointer_cast(qo_indptr_d.data()),
531
+ /*q_rope_offset=*/nullptr, paged_kv_cascade_d,
532
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
533
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()), num_qo_heads, /*causal=*/true,
534
+ PosEncodingMode::kNone, /*use_fp16_qk_reduction=*/false);
535
+
536
+ EXPECT_EQ(status, cudaSuccess) << "Cascade implementation unique kv prefill failed with error: "
537
+ << cudaGetErrorString(status);
538
+
539
+ status = MergeStateInPlace(thrust::raw_pointer_cast(o_cascade_0_d.data()),
540
+ thrust::raw_pointer_cast(lse_cascade_0_d.data()),
541
+ thrust::raw_pointer_cast(o_cascade_1_d.data()),
542
+ thrust::raw_pointer_cast(lse_cascade_1_d.data()),
543
+ batch_size * qo_append_length, num_qo_heads, head_dim);
544
+ EXPECT_EQ(status, cudaSuccess) << "Cascade implementation merge failed with error: "
545
+ << cudaGetErrorString(status);
546
+
547
+ thrust::host_vector<T> o_baseline_h(o_baseline_d), o_cascade_h(o_cascade_0_d);
548
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
549
+ for (size_t i = 0; i < o_baseline_h.size(); ++i) {
550
+ EXPECT_FALSE(std::isnan(float(o_baseline_h[i]))) << "o_baseline_h[" << i << "] is nan";
551
+ EXPECT_FALSE(std::isnan(float(o_cascade_h[i]))) << "o_cascade_h[" << i << "] is nan";
552
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
553
+ (!utils::isclose(float(o_baseline_h[i]), float(o_cascade_h[i]), 1e-3, 1e-3));
554
+ }
555
+ float result_accuracy =
556
+ 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) / float(o_baseline_h.size());
557
+ std::cout << "batch_size=" << batch_size << ", shared_prefix_length=" << shared_prefix_length
558
+ << ", unique_kv_length=" << unique_kv_length
559
+ << ", qo_append_length=" << qo_append_length << ", num_qo_heads=" << num_qo_heads
560
+ << ", num_kv_heads=" << num_kv_heads << ", head_dim=" << head_dim
561
+ << ", result_accuracy (atol=1e-3, rtol=1e-3)=" << result_accuracy << std::endl;
562
+ EXPECT_GT(result_accuracy, 0.90) << "Result correctness test failed.";
563
+ }
564
+
565
+ template <typename T>
566
+ void TestMergeKernelCorrectness() {
567
+ for (size_t num_index_sets : {1, 2, 9, 81, 513}) {
568
+ for (size_t seq_len : {4, 16, 77}) {
569
+ for (size_t num_heads : {1, 21, 32}) {
570
+ for (size_t head_dim : {64, 128, 256}) {
571
+ for (bool sparse_s : {false, true}) {
572
+ _TestMergeKernelCorrectness<T>(num_index_sets, seq_len, num_heads, head_dim, sparse_s);
573
+ }
574
+ }
575
+ }
576
+ }
577
+ }
578
+ }
579
+
580
+ template <typename T>
581
+ void TestVariableLengthMergeKernelCorrectness() {
582
+ for (size_t seq_len : {1, 3, 77, 191}) {
583
+ for (size_t num_heads : {1, 4, 32}) {
584
+ for (size_t head_dim : {64, 128, 256}) {
585
+ for (bool sparse_s : {false, true}) {
586
+ _TestVariableLengthMergeKernelCorrectness<T>(seq_len, num_heads, head_dim, sparse_s);
587
+ }
588
+ }
589
+ }
590
+ }
591
+ }
592
+
593
+ template <typename T>
594
+ void TestVariableLengthMergeKernelPaddedCorrectness() {
595
+ _TestVariableLengthMergeKernelPaddedCorrectness<T>(8, 1);
596
+ _TestVariableLengthMergeKernelPaddedCorrectness<T>(128, 77);
597
+ }
598
+
599
+ template <typename T>
600
+ void TestTwoLevelSinglePrefixCascadeDecodeCorrectness() {
601
+ for (size_t batch_size : {1, 8, 16, 64, 128}) {
602
+ for (size_t shared_prefix_length : {1024, 2048, 8192, 32768}) {
603
+ for (size_t unique_kv_length : {128, 256, 512, 1024}) {
604
+ for (size_t num_qo_heads : {32}) {
605
+ for (size_t num_kv_heads : {32}) {
606
+ for (size_t head_dim : {128}) {
607
+ _TestTwoLevelSinglePrefixCascadeDecodeCorrectness<T>(batch_size, shared_prefix_length,
608
+ unique_kv_length, num_qo_heads,
609
+ num_kv_heads, head_dim);
610
+ }
611
+ }
612
+ }
613
+ }
614
+ }
615
+ }
616
+ }
617
+
618
+ template <typename T>
619
+ void TestTwoLevelSinglePrefixCascadeAppendCorrectness() {
620
+ for (size_t batch_size : {1, 8, 16, 64, 128}) {
621
+ for (size_t shared_prefix_length : {1024, 2048, 8192, 32768}) {
622
+ for (size_t unique_kv_length : {128, 256, 512, 1024}) {
623
+ for (size_t qo_append_length : {128}) {
624
+ for (size_t num_qo_heads : {32}) {
625
+ for (size_t num_kv_heads : {32}) {
626
+ for (size_t head_dim : {128}) {
627
+ _TestTwoLevelSinglePrefixCascadeAppendCorrectness<T>(
628
+ batch_size, shared_prefix_length, unique_kv_length, qo_append_length,
629
+ num_qo_heads, num_kv_heads, head_dim);
630
+ }
631
+ }
632
+ }
633
+ }
634
+ }
635
+ }
636
+ }
637
+ }
638
+
639
+ TEST(FlashInferCorrectnessTest, MergeKernelCorrectnessTestFP16) {
640
+ TestMergeKernelCorrectness<half>();
641
+ }
642
+
643
+ TEST(FlashInferCorrectnessTest, VariableLengthMergeKernelCorrectnessTestFP16) {
644
+ TestVariableLengthMergeKernelCorrectness<half>();
645
+ }
646
+
647
+ TEST(FlashInferCorrectnessTest, VariableLengthMergeKernelPaddedCorrectnessTestFP16) {
648
+ TestVariableLengthMergeKernelPaddedCorrectness<half>();
649
+ }
650
+
651
+ TEST(FlashInferCorrectnessTest, TwoLevelSinglePrefixCascadeDecodeTestFP16) {
652
+ TestTwoLevelSinglePrefixCascadeDecodeCorrectness<half>();
653
+ }
654
+
655
+ TEST(FlashInferCorrectnessTest, TwoLevelSinglePrefixCascadeAppendTestFP16) {
656
+ TestTwoLevelSinglePrefixCascadeAppendCorrectness<half>();
657
+ }
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_fastdiv.cu ADDED
@@ -0,0 +1,73 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <flashinfer/fastdiv.cuh>
19
+
20
+ #include "gtest/gtest.h"
21
+ #include "utils.h"
22
+
23
+ using namespace flashinfer;
24
+
25
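+ // uint_fastdiv replaces division/modulo by a runtime constant with a precomputed
+ // multiply-and-shift sequence. kernel_0 exercises the overloaded / and % operators,
+ // kernel_1 the fused divmod() method; both are checked against ordinary integer division.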
+ __global__ void test_fastdiv_kernel_0(uint_fastdiv fd, uint32_t* q, uint32_t* r) {
26
+ uint32_t global_rank = blockIdx.x * blockDim.x + threadIdx.x;
27
+ q[global_rank] = global_rank / fd;
28
+ r[global_rank] = global_rank % fd;
29
+ }
30
+
31
+ __global__ void test_fastdiv_kernel_1(uint_fastdiv fd, uint32_t* q, uint32_t* r) {
32
+ uint32_t global_rank = blockIdx.x * blockDim.x + threadIdx.x;
33
+ fd.divmod(global_rank, q[global_rank], r[global_rank]);
34
+ }
35
+
36
+ void _TestFastDivU32Correctness(uint32_t d) {
37
+ uint_fastdiv fd(d);
38
+ thrust::device_vector<uint32_t> q(1024 * 1024), r(1024 * 1024);
39
+
40
+ {
41
+ test_fastdiv_kernel_0<<<1024, 1024>>>(fd, thrust::raw_pointer_cast(q.data()),
42
+ thrust::raw_pointer_cast(r.data()));
43
+
44
+ thrust::host_vector<uint32_t> q_h(q), r_h(r);
45
+
46
+ for (size_t i = 0; i < q_h.size(); ++i) {
47
+ EXPECT_EQ(q_h[i], i / d);
48
+ EXPECT_EQ(r_h[i], i % d);
49
+ }
50
+ }
51
+
52
+ {
53
+ test_fastdiv_kernel_1<<<1024, 1024>>>(fd, thrust::raw_pointer_cast(q.data()),
54
+ thrust::raw_pointer_cast(r.data()));
55
+
56
+ thrust::host_vector<uint32_t> q_h(q), r_h(r);
57
+
58
+ for (size_t i = 0; i < q_h.size(); ++i) {
59
+ EXPECT_EQ(q_h[i], i / d);
60
+ EXPECT_EQ(r_h[i], i % d);
61
+ }
62
+ }
63
+
64
+ std::cout << "FastDivU32 correctness test passed for d = " << d << std::endl;
65
+ }
66
+
67
+ void TestFastDivU32Correctness() {
68
+ for (uint32_t d = 1; d < 127; ++d) {
69
+ _TestFastDivU32Correctness(d);
70
+ }
71
+ }
72
+
73
+ TEST(FlashInferCorrectnessTest, TestFastDivU32Correctness) { TestFastDivU32Correctness(); }
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_norm.cu ADDED
@@ -0,0 +1,76 @@
1
+ /*
2
+ * Copyright (c) 2024 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <flashinfer/norm.cuh>
19
+
20
+ #include "cpu_reference.h"
21
+ #include "utils.h"
22
+
23
+ using namespace flashinfer;
24
+
25
+ template <typename T>
26
+ void _TestRMSNormCorrectness(uint32_t batch_size, uint32_t d) {
27
+ std::vector<T> x_host(batch_size * d);
28
+ std::vector<T> w_host(d);
29
+
30
+ utils::vec_normal_(x_host);
31
+ utils::vec_normal_(w_host);
32
+
33
+ std::vector<T> y_ref_host =
34
+ std::move(cpu_reference::rms_norm<T>(x_host.data(), w_host.data(), batch_size, d, 1e-5));
35
+
36
+ thrust::device_vector<T> x_device(x_host);
37
+ thrust::device_vector<T> w_device(w_host);
38
+ thrust::device_vector<T> y_device(batch_size * d);
39
+
40
+ cudaError_t status = norm::RMSNorm<T>(
41
+ thrust::raw_pointer_cast(x_device.data()), thrust::raw_pointer_cast(w_device.data()),
42
+ thrust::raw_pointer_cast(y_device.data()), batch_size, d, 1e-6);
43
+ EXPECT_EQ(status, cudaSuccess) << "RMSNorm kernel launch failed, error message: "
44
+ << cudaGetErrorString(status);
45
+
46
+ thrust::host_vector<T> y_host(y_device);
47
+ bool nan_detected = false;
48
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
49
+ for (uint i = 0; i < batch_size * d; i++) {
50
+ if (isnan(float(y_host[i]))) {
51
+ nan_detected = true;
52
+ }
53
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
54
+ (!utils::isclose(float(y_host[i]), float(y_ref_host[i]), 1e-3, 1e-3));
55
+ if (!utils::isclose(float(y_host[i]), float(y_ref_host[i]), 1e-3, 1e-3)) {
56
+ std::cout << "i: " << i << ", y_host[i]: " << float(y_host[i])
57
+ << ", y_ref_host[i]: " << float(y_ref_host[i]) << std::endl;
58
+ }
59
+ }
60
+ float result_accuracy = 1.0f - float(num_result_errors_atol_1e_3_rtol_1e_3) / (batch_size * d);
61
+ std::cout << "batch_size: " << batch_size << ", d: " << d
62
+ << ", RMSNorm correctness: " << result_accuracy << std::endl;
63
+ EXPECT_GT(result_accuracy, 0.99f) << "RMSNorm correctness test failed";
64
+ EXPECT_FALSE(nan_detected) << "Nan detected in RMSNorm output";
65
+ }
66
+
67
+ template <typename T>
68
+ void TestRMSNormCorrectness() {
69
+ for (size_t batch_size : {1, 3, 7, 19, 733}) {
70
+ for (size_t d : {37, 128, 512, 1002, 3072, 4096, 8192, 16384}) {
71
+ _TestRMSNormCorrectness<T>(batch_size, d);
72
+ }
73
+ }
74
+ }
75
+
76
+ TEST(FlashInferCorrectnessTests, TestRMSNormFP16) { TestRMSNormCorrectness<half>(); }
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_page.cu ADDED
@@ -0,0 +1,208 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <flashinfer/page.cuh>
19
+ #include <type_traits>
20
+
21
+ #include "cpu_reference.h"
22
+ #include "utils.h"
23
+
24
+ using namespace flashinfer;
25
+
26
+ template <typename T>
27
+ void _TestAppendPagedKVKernelCorrectness(size_t page_size, size_t batch_size, size_t num_heads,
28
+ size_t head_dim, QKVLayout kv_layout) {
29
+ // number of conversation rounds
30
+ size_t num_conv_rounds = 3;
31
+ size_t max_decode_len = 1;
32
+ size_t max_prefill_len = 128;
33
+ size_t max_num_pages =
34
+ num_conv_rounds * batch_size * ((max_decode_len + max_prefill_len) / page_size + 1);
35
+ std::vector<T> k_data_cpu(max_num_pages * page_size * num_heads * head_dim);
36
+ std::vector<T> v_data_cpu(max_num_pages * page_size * num_heads * head_dim);
37
+ utils::vec_zero_(k_data_cpu);
38
+ utils::vec_zero_(v_data_cpu);
39
+ thrust::device_vector<T> k_data_gpu(k_data_cpu), v_data_gpu(v_data_cpu);
40
+ std::vector<int32_t> seq_len(batch_size);
41
+ utils::vec_fill_(seq_len, 0);
42
+ std::vector<std::vector<int32_t>> page_indices(batch_size);
43
+ std::vector<int32_t> last_page_len(batch_size);
44
+ utils::vec_fill_(last_page_len, 0);
45
+ size_t page_counter = 0;
46
+
47
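+ // Simulate num_conv_rounds conversation rounds: even rounds append a random-length prefill
+ // chunk per request, odd rounds append a single decode token, growing the paged cache and the
+ // per-request page lists as needed.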
+ for (size_t round = 0; round < 2 * num_conv_rounds; ++round) {
48
+ std::vector<int32_t> append_len(batch_size);
49
+ std::vector<int32_t> append_indptr{0};
50
+ std::vector<int32_t> batch_indices;
51
+ std::vector<int32_t> positions;
52
+ std::vector<std::vector<T>> keys;
53
+ std::vector<std::vector<T>> values;
54
+ if (round % 2 == 0) {
55
+ utils::vec_randint_(append_len, 1, max_prefill_len + 1);
56
+ } else {
57
+ utils::vec_fill_<int32_t>(append_len, max_decode_len);
58
+ }
59
+ for (size_t i = 0; i < batch_size; ++i) {
60
+ append_indptr.push_back(append_indptr.back() + append_len[i]);
61
+ seq_len[i] += append_len[i];
62
+ for (size_t j = 0; j < append_len[i]; ++j) {
63
+ if (last_page_len[i] % page_size == 0) {
64
+ page_indices[i].push_back(page_counter++);
65
+ last_page_len[i] = 1;
66
+ } else {
67
+ last_page_len[i] += 1;
68
+ }
69
+ batch_indices.push_back(i);
70
+ positions.push_back(seq_len[i] - append_len[i] + j);
71
+ }
72
+ std::vector<T> ki(append_len[i] * num_heads * head_dim),
73
+ vi(append_len[i] * num_heads * head_dim);
74
+ utils::vec_normal_(ki);
75
+ utils::vec_normal_(vi);
76
+ keys.push_back(ki);
77
+ values.push_back(vi);
78
+ }
79
+
80
+ std::vector<int32_t> indptr_cpu{0};
81
+ std::vector<int32_t> indices_cpu;
82
+ for (size_t i = 0; i < batch_size; ++i) {
83
+ for (size_t j = 0; j < page_indices[i].size(); ++j) {
84
+ indices_cpu.push_back(page_indices[i][j]);
85
+ }
86
+ indptr_cpu.push_back(indptr_cpu.back() + page_indices[i].size());
87
+ }
88
+ paged_kv_t<T, int32_t> paged_kv_cpu(num_heads, page_size, head_dim, batch_size, kv_layout,
89
+ /*k_data=*/k_data_cpu.data(),
90
+ /*v_data=*/v_data_cpu.data(), indices_cpu.data(),
91
+ indptr_cpu.data(), last_page_len.data());
92
+ cpu_reference::append_paged_kv_cache(paged_kv_cpu, keys, values, append_indptr);
93
+
94
+ thrust::device_vector<int32_t> indptr_gpu(indptr_cpu);
95
+ thrust::device_vector<int32_t> indices_gpu(indices_cpu);
96
+ thrust::device_vector<int32_t> last_page_len_gpu(last_page_len);
97
+ paged_kv_t<T, int32_t> paged_kv_gpu(num_heads, page_size, head_dim, batch_size, kv_layout,
98
+ /*k_data=*/thrust::raw_pointer_cast(k_data_gpu.data()),
99
+ /*v_data=*/thrust::raw_pointer_cast(v_data_gpu.data()),
100
+ thrust::raw_pointer_cast(indices_gpu.data()),
101
+ thrust::raw_pointer_cast(indptr_gpu.data()),
102
+ thrust::raw_pointer_cast(last_page_len_gpu.data()));
103
+
104
+ thrust::device_vector<int32_t> batch_indices_gpu(batch_indices);
105
+ thrust::device_vector<int32_t> positions_gpu(positions);
106
+ thrust::device_vector<T> keys_gpu(append_indptr.back() * num_heads * head_dim);
107
+ thrust::device_vector<T> values_gpu(append_indptr.back() * num_heads * head_dim);
108
+ for (size_t i = 0; i < batch_size; ++i) {
109
+ thrust::device_vector<T> ki(keys[i]);
110
+ thrust::device_vector<T> vi(values[i]);
111
+ thrust::copy(ki.begin(), ki.end(),
112
+ keys_gpu.begin() + append_indptr[i] * num_heads * head_dim);
113
+ thrust::copy(vi.begin(), vi.end(),
114
+ values_gpu.begin() + append_indptr[i] * num_heads * head_dim);
115
+ }
116
+
117
+ if (round % 2 == 0) {
118
+ // call prefill kernel
119
+ cudaError_t status =
120
+ AppendPagedKVCache(paged_kv_gpu, thrust::raw_pointer_cast(keys_gpu.data()),
121
+ thrust::raw_pointer_cast(values_gpu.data()),
122
+ thrust::raw_pointer_cast(batch_indices_gpu.data()),
123
+ thrust::raw_pointer_cast(positions_gpu.data()),
124
+ /*nnz=*/append_indptr.back(),
125
+ /*append_k_stride_n=*/num_heads * head_dim,
126
+ /*append_k_stride_h=*/head_dim,
127
+ /*append_v_stride_n=*/num_heads * head_dim,
128
+ /*append_v_stride_h=*/head_dim);
129
+ EXPECT_EQ(status, cudaSuccess) << "AppendPagedKVCache kernel launch failed, error message: "
130
+ << cudaGetErrorString(status);
131
+ } else {
132
+ // call decode kernel
133
+ cudaError_t status =
134
+ AppendPagedKVCacheDecode(paged_kv_gpu, thrust::raw_pointer_cast(keys_gpu.data()),
135
+ thrust::raw_pointer_cast(values_gpu.data()));
136
+ EXPECT_EQ(status, cudaSuccess)
137
+ << "AppendPagedKVCacheDecode kernel launch failed, error message: "
138
+ << cudaGetErrorString(status);
139
+ }
140
+ }
141
+
142
+ thrust::host_vector<T> k_data_gpu_h(k_data_gpu), v_data_gpu_h(v_data_gpu);
143
+ size_t num_result_errors_atol_1e_3_rtol_1e_3 = 0;
144
+ bool nan_detected = false;
145
+ for (size_t i = 0; i < k_data_cpu.size(); ++i) {
146
+ if (std::isnan(float(k_data_gpu_h[i]))) {
147
+ nan_detected = true;
148
+ }
149
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
150
+ (!utils::isclose(float(k_data_cpu[i]), float(k_data_gpu_h[i]), 1e-3, 1e-3));
151
+ }
152
+ for (size_t i = 0; i < v_data_cpu.size(); ++i) {
153
+ if (std::isnan(float(v_data_gpu_h[i]))) {
154
+ nan_detected = true;
155
+ }
156
+ num_result_errors_atol_1e_3_rtol_1e_3 +=
157
+ (!utils::isclose(float(v_data_cpu[i]), float(v_data_gpu_h[i]), 1e-3, 1e-3));
158
+ }
159
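+ // fraction of K/V elements that match the CPU reference within atol=1e-3, rtol=1e-3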
+ float result_accuracy = 1. - float(num_result_errors_atol_1e_3_rtol_1e_3) /
160
+ float(k_data_cpu.size() + v_data_cpu.size());
161
+ std::cout << "kv_layout=" << QKVLayoutToString(kv_layout) << ", page_size=" << page_size
162
+ << ", batch_size=" << batch_size << ", num_heads=" << num_heads
163
+ << ", head_dim=" << head_dim << ", result_accuracy=" << result_accuracy << std::endl;
164
+ EXPECT_GT(result_accuracy, 0.99) << "Result correctness test failed.";
165
+ EXPECT_EQ(nan_detected, false) << "Nan detected in the result.";
166
+ }
167
+
168
+ template <typename T>
169
+ void TestAppendPagedKVKernelCorrectness() {
170
+ for (size_t page_size : {1, 3, 7, 17}) {
171
+ for (size_t batch_size : {1, 2, 3, 5, 7, 23, 79, 91}) {
172
+ for (size_t num_heads : {32}) {
173
+ for (QKVLayout kv_layout : {QKVLayout::kNHD, QKVLayout::kHND}) {
174
+ for (size_t head_dim : {64, 128, 256}) {
175
+ _TestAppendPagedKVKernelCorrectness<T>(page_size, batch_size, num_heads, head_dim,
176
+ kv_layout);
177
+ }
178
+ }
179
+ }
180
+ }
181
+ }
182
+ }
183
+
184
+ TEST(FlashInferCorrectnessTest, AppendPagedKVKernelCorrectnessTestFP16) {
185
+ TestAppendPagedKVKernelCorrectness<half>();
186
+ }
187
+
188
+ TEST(FlashInferCorrectnessTest, AppendPagedKVKernelCorrectnessTestFP32) {
189
+ TestAppendPagedKVKernelCorrectness<float>();
190
+ }
191
+
192
+ #ifdef FLASHINFER_ENABLE_BF16
193
+ TEST(FlashInferCorrectnessTest, AppendPagedKVKernelCorrectnessTestBF16) {
194
+ TestAppendPagedKVKernelCorrectness<__nv_bfloat16>();
195
+ }
196
+ #endif
197
+
198
+ #ifdef FLASHINFER_ENABLE_FP8_E4M3
199
+ TEST(FlashInferCorrectnessTest, AppendPagedKVKernelCorrectnessTestE4M3) {
200
+ TestAppendPagedKVKernelCorrectness<__nv_fp8_e4m3>();
201
+ }
202
+ #endif
203
+
204
+ #ifdef FLASHINFER_ENABLE_FP8_E5M2
205
+ TEST(FlashInferCorrectnessTest, AppendPagedKVKernelCorrectnessTestE5M2) {
206
+ TestAppendPagedKVKernelCorrectness<__nv_fp8_e5m2>();
207
+ }
208
+ #endif
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_sampling.cu ADDED
The diff for this file is too large to render. See raw diff
 
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/test_single_prefill.cu ADDED
@@ -0,0 +1,276 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <gtest/gtest.h>
17
+
18
+ #include <cstdint>
19
+
20
+ #include "cpu_reference.h"
21
+ #include "flashinfer_ops.cuh"
22
+ #include "utils.h"
23
+
24
+ using namespace flashinfer;
25
+
26
+ template <typename DTypeQ, typename DTypeKV, typename DTypeO>
27
+ void _TestSinglePrefillKernelCorrectness(size_t qo_len, size_t kv_len, size_t num_qo_heads,
28
+ size_t num_kv_heads, size_t head_dim, bool causal,
29
+ QKVLayout kv_layout, PosEncodingMode pos_encoding_mode,
30
+ bool use_fp16_qk_reduction, float rtol = 1e-3,
31
+ float atol = 1e-3) {
32
+ std::vector<DTypeQ> q(qo_len * num_qo_heads * head_dim);
33
+ std::vector<DTypeKV> k(kv_len * num_kv_heads * head_dim);
34
+ std::vector<DTypeKV> v(kv_len * num_kv_heads * head_dim);
35
+ std::vector<DTypeO> o(qo_len * num_qo_heads * head_dim);
36
+
37
+ utils::vec_normal_(q);
38
+ utils::vec_normal_(k);
39
+ utils::vec_normal_(v);
40
+ utils::vec_zero_(o);
41
+
42
+ thrust::device_vector<DTypeQ> q_d(q);
43
+ thrust::device_vector<DTypeKV> k_d(k);
44
+ thrust::device_vector<DTypeKV> v_d(v);
45
+ thrust::device_vector<DTypeO> o_d(o);
46
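+ // scratch workspace for the prefill kernel; 16M elements is ample for every shape exercised below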
+ thrust::device_vector<DTypeO> tmp_d(16 * 1024 * 1024);
47
+
48
+ cudaError_t status = flashinfer::SinglePrefillWithKVCache<DTypeQ, DTypeKV, DTypeO>(
49
+ thrust::raw_pointer_cast(q_d.data()), thrust::raw_pointer_cast(k_d.data()),
50
+ thrust::raw_pointer_cast(v_d.data()), thrust::raw_pointer_cast(o_d.data()),
51
+ thrust::raw_pointer_cast(tmp_d.data()),
52
+ /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len, head_dim, causal, kv_layout,
53
+ pos_encoding_mode, use_fp16_qk_reduction);
54
+
55
+ EXPECT_EQ(status, cudaSuccess) << "SinglePrefillWithKVCache kernel launch failed, error message: "
56
+ << cudaGetErrorString(status);
57
+
58
+ thrust::host_vector<DTypeO> o_h(o_d);
59
+ std::vector<DTypeO> o_ref = cpu_reference::single_mha<DTypeQ, DTypeKV, DTypeO>(
60
+ q, k, v, qo_len, kv_len, num_qo_heads, num_kv_heads, head_dim, causal, kv_layout,
61
+ pos_encoding_mode);
62
+ size_t num_results_error_atol = 0;
63
+ bool nan_detected = false;
64
+
65
+ for (size_t i = 0; i < o_ref.size(); ++i) {
66
+ if (isnan(float(o_h[i]))) {
67
+ nan_detected = true;
68
+ }
69
+ num_results_error_atol += (!utils::isclose(float(o_ref[i]), float(o_h[i]), rtol, atol));
70
+ if (!utils::isclose(float(o_ref[i]), float(o_h[i]), rtol, atol)) {
71
+ std::cout << "i=" << i << ", o_ref[i]=" << float(o_ref[i]) << ", o_h[i]=" << float(o_h[i])
72
+ << std::endl;
73
+ }
74
+ }
75
+
76
+ float result_accuracy = 1. - float(num_results_error_atol) / float(o_ref.size());
77
+ std::cout << "num_qo_heads=" << num_qo_heads << ", num_kv_heads=" << num_kv_heads
78
+ << ", qo_len=" << qo_len << ", kv_len=" << kv_len << ", head_dim=" << head_dim
79
+ << ", causal=" << causal << ", kv_layout=" << QKVLayoutToString(kv_layout)
80
+ << ", pos_encoding_mode=" << PosEncodingModeToString(pos_encoding_mode)
81
+ << ", result_accuracy=" << result_accuracy << std::endl;
82
+ EXPECT_GT(result_accuracy, 0.90) << "Result correctness test failed.";
83
+ EXPECT_FALSE(nan_detected) << "Nan detected in the result.";
84
+ }
85
+
86
+ template <typename DTypeIn, typename DTypeO>
87
+ void TestSinglePrefillKernelLongContextCorrectness(bool use_fp16_qk_reduction) {
88
+ for (size_t qo_len : {1, 31, 63, 127}) {
89
+ for (size_t kv_len : {31717}) {
90
+ for (size_t num_heads : {1}) {
91
+ for (size_t head_dim : {64, 128, 256}) {
92
+ for (bool causal : {false, true}) {
93
+ for (size_t pos_encoding_mode : {0, 1}) {
94
+ for (size_t kv_layout : {0, 1}) {
95
+ _TestSinglePrefillKernelCorrectness<DTypeIn, DTypeIn, DTypeO>(
96
+ qo_len, kv_len, num_heads, num_heads, head_dim, causal, QKVLayout(kv_layout),
97
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
98
+ }
99
+ }
100
+ }
101
+ }
102
+ }
103
+ }
104
+ }
105
+ }
106
+
107
+ template <typename DTypeKV>
108
+ void TestSinglePrefillFP8KernelLongContextCorrectness(bool use_fp16_qk_reduction) {
109
+ for (size_t qo_len : {1, 31, 63, 127}) {
110
+ for (size_t kv_len : {31717}) {
111
+ for (size_t num_heads : {1}) {
112
+ for (size_t head_dim : {64, 128, 256}) {
113
+ for (bool causal : {false, true}) {
114
+ for (size_t pos_encoding_mode : {0}) {
115
+ for (size_t kv_layout : {0, 1}) {
116
+ _TestSinglePrefillKernelCorrectness<half, DTypeKV, half>(
117
+ qo_len, kv_len, num_heads, num_heads, head_dim, causal, QKVLayout(kv_layout),
118
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
119
+ }
120
+ }
121
+ }
122
+ }
123
+ }
124
+ }
125
+ }
126
+ }
127
+
128
+ template <typename DTypeIn, typename DTypeO>
129
+ void TestSinglePrefillKernelShortContextCorrectness(bool use_fp16_qk_reduction) {
130
+ float rtol = std::is_same<DTypeO, nv_bfloat16>::value ? 1e-2 : 1e-3;
131
+ float atol = std::is_same<DTypeO, nv_bfloat16>::value ? 1e-2 : 1e-3;
132
+ for (size_t qkv_len : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
133
+ for (size_t num_qo_heads : {32}) {
134
+ for (size_t num_kv_heads : {4, 8, 32}) {
135
+ for (size_t head_dim : {64, 128, 256}) {
136
+ for (bool causal : {false, true}) {
137
+ for (size_t pos_encoding_mode : {0, 1}) {
138
+ for (size_t kv_layout : {0, 1}) {
139
+ _TestSinglePrefillKernelCorrectness<DTypeIn, DTypeIn, DTypeO>(
140
+ qkv_len, qkv_len, num_qo_heads, num_kv_heads, head_dim, causal,
141
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction,
142
+ rtol, atol);
143
+ }
144
+ }
145
+ }
146
+ }
147
+ }
148
+ }
149
+ }
150
+ }
151
+
152
+ template <typename DTypeKV>
153
+ void TestSinglePrefillFP8KernelShortContextCorrectness(bool use_fp16_qk_reduction) {
154
+ float rtol = 1e-3;
155
+ float atol = 1e-3;
156
+ for (size_t qkv_len : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
157
+ for (size_t num_qo_heads : {32}) {
158
+ for (size_t num_kv_heads : {4, 8, 32}) {
159
+ for (size_t head_dim : {64, 128, 256}) {
160
+ for (bool causal : {false, true}) {
161
+ for (size_t pos_encoding_mode : {0}) {
162
+ for (size_t kv_layout : {0, 1}) {
163
+ _TestSinglePrefillKernelCorrectness<half, DTypeKV, half>(
164
+ qkv_len, qkv_len, num_qo_heads, num_kv_heads, head_dim, causal,
165
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction,
166
+ rtol, atol);
167
+ }
168
+ }
169
+ }
170
+ }
171
+ }
172
+ }
173
+ }
174
+ }
175
+
176
+ template <typename DTypeIn, typename DTypeO>
177
+ void TestSinglePrefillKernelCorrectness(bool use_fp16_qk_reduction) {
178
+ for (size_t qo_len : {399, 400, 401}) {
179
+ for (size_t kv_len : {533, 534, 535}) {
180
+ for (size_t num_heads : {12}) {
181
+ for (size_t head_dim : {64, 128, 256}) {
182
+ for (bool causal : {false, true}) {
183
+ for (size_t pos_encoding_mode : {0, 1}) {
184
+ for (size_t kv_layout : {0, 1}) {
185
+ _TestSinglePrefillKernelCorrectness<DTypeIn, DTypeIn, DTypeO>(
186
+ qo_len, kv_len, num_heads, num_heads, head_dim, causal, QKVLayout(kv_layout),
187
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
188
+ }
189
+ }
190
+ }
191
+ }
192
+ }
193
+ }
194
+ }
195
+ }
196
+
197
+ template <typename DTypeKV>
198
+ void TestSinglePrefillFP8KernelCorrectness(bool use_fp16_qk_reduction) {
199
+ for (size_t qo_len : {399, 400, 401}) {
200
+ for (size_t kv_len : {533, 534, 535}) {
201
+ for (size_t num_heads : {12}) {
202
+ for (size_t head_dim : {64, 128, 256}) {
203
+ for (bool causal : {false, true}) {
204
+ for (size_t pos_encoding_mode : {0}) {
205
+ for (size_t kv_layout : {0, 1}) {
206
+ _TestSinglePrefillKernelCorrectness<half, DTypeKV, half>(
207
+ qo_len, kv_len, num_heads, num_heads, head_dim, causal, QKVLayout(kv_layout),
208
+ PosEncodingMode(pos_encoding_mode), use_fp16_qk_reduction);
209
+ }
210
+ }
211
+ }
212
+ }
213
+ }
214
+ }
215
+ }
216
+ }
217
+
218
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelLongContextCorrectnessFP16) {
219
+ TestSinglePrefillKernelLongContextCorrectness<half, half>(false);
220
+ }
221
+
222
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelLongContextCorrectnessFP16QKHalfAccum) {
223
+ TestSinglePrefillKernelLongContextCorrectness<half, half>(true);
224
+ }
225
+
226
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelShortContextCorrectnessFP16) {
227
+ TestSinglePrefillKernelShortContextCorrectness<half, half>(false);
228
+ }
229
+
230
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelShortContextCorrectnessFP16QKHalfAccum) {
231
+ TestSinglePrefillKernelShortContextCorrectness<half, half>(true);
232
+ }
233
+
234
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelCorrectnessTestFP16) {
235
+ TestSinglePrefillKernelCorrectness<half, half>(false);
236
+ }
237
+
238
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelCorrectnessTestFP16QKHalfAccum) {
239
+ TestSinglePrefillKernelCorrectness<half, half>(true);
240
+ }
241
+
242
+ #ifdef FLASHINFER_ENABLE_BF16
243
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelLongContextCorrectnessBF16) {
244
+ TestSinglePrefillKernelLongContextCorrectness<nv_bfloat16, nv_bfloat16>(false);
245
+ }
246
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelShortContextCorrectnessBF16) {
247
+ TestSinglePrefillKernelShortContextCorrectness<nv_bfloat16, nv_bfloat16>(false);
248
+ }
249
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelCorrectnessTestBF16) {
250
+ TestSinglePrefillKernelCorrectness<nv_bfloat16, nv_bfloat16>(false);
251
+ }
252
+ #endif
253
+
254
+ #ifdef FLASHINFER_ENABLE_FP8_E4M3
255
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelShortContextCorrectnessE4M3) {
256
+ TestSinglePrefillFP8KernelShortContextCorrectness<__nv_fp8_e4m3>(false);
257
+ }
258
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelCorrectnessTestE4M3) {
259
+ TestSinglePrefillFP8KernelCorrectness<__nv_fp8_e4m3>(false);
260
+ }
261
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelLongContextCorrectnessE4M3) {
262
+ TestSinglePrefillFP8KernelLongContextCorrectness<__nv_fp8_e4m3>(false);
263
+ }
264
+ #endif
265
+
266
+ #ifdef FLASHINFER_ENABLE_FP8_E5M2
267
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelShortContextCorrectnessE5M2) {
268
+ TestSinglePrefillFP8KernelShortContextCorrectness<__nv_fp8_e5m2>(false);
269
+ }
270
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelCorrectnessTestE5M2) {
271
+ TestSinglePrefillFP8KernelCorrectness<__nv_fp8_e5m2>(false);
272
+ }
273
+ TEST(FlashInferCorrectnessTest, TestSinglePrefillKernelLongContextCorrectnessE5M2) {
274
+ TestSinglePrefillFP8KernelLongContextCorrectness<__nv_fp8_e5m2>(false);
275
+ }
276
+ #endif
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/tvm_wrapper.cu ADDED
@@ -0,0 +1,830 @@
1
+ /*
2
+ * Copyright (c) 2023 by FlashInfer team.
3
+ *
4
+ * Licensed under the Apache License, Version 2.0 (the "License");
5
+ * you may not use this file except in compliance with the License.
6
+ * You may obtain a copy of the License at
7
+ *
8
+ * http://www.apache.org/licenses/LICENSE-2.0
9
+ *
10
+ * Unless required by applicable law or agreed to in writing, software
11
+ * distributed under the License is distributed on an "AS IS" BASIS,
12
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ * See the License for the specific language governing permissions and
14
+ * limitations under the License.
15
+ */
16
+ #include <dlpack/dlpack.h>
17
+ #include <tvm/runtime/logging.h>
18
+ #include <tvm/runtime/module.h>
19
+ #include <tvm/runtime/ndarray.h>
20
+ #include <tvm/runtime/packed_func.h>
21
+ #include <tvm/runtime/registry.h>
22
+
23
+ #include <flashinfer/attention/cascade.cuh>
24
+ #include <flashinfer/sampling.cuh>
25
+ #include <optional>
26
+
27
+ #include "flashinfer_ops.cuh"
28
+
29
+ using tvm::runtime::Array;
30
+ using tvm::runtime::DataType;
31
+ using tvm::runtime::NDArray;
32
+ using tvm::runtime::ShapeTuple;
33
+ using namespace flashinfer;
34
+
35
+ #define DISPATCH_TVM_CUDA_DTYPE(dl_dtype, cuda_dtype, ...) \
36
+ if (dl_dtype.code == kDLFloat && dl_dtype.bits == 16) { \
37
+ using cuda_dtype = half; \
38
+ __VA_ARGS__ \
39
+ } else { \
40
+ LOG(FATAL) << "Unsupported data type " << dl_dtype.code; \
41
+ }
42
+
43
+ #define DISPATCH_TVM_CUDA_IDTYPE(dl_dtype, cuda_dtype, ...) \
44
+ if (dl_dtype.code == kDLInt && dl_dtype.bits == 32) { \
45
+ using cuda_dtype = int32_t; \
46
+ __VA_ARGS__ \
47
+ } else { \
48
+ LOG(FATAL) << "Unsupported data type " << dl_dtype.code; \
49
+ }
50
+
51
+ int _FlashInferSinglePrefillWithKVCache(DLTensor* q, DLTensor* k, DLTensor* v, DLTensor* tmp,
52
+ bool causal, int64_t kv_layout, int64_t pos_encoding_mode,
53
+ bool use_fp16_qk_reduction, double rope_scale,
54
+ double rope_theta, DLTensor* o) {
55
+ // `tmp` is user-provided scratch space of at least 16MB, e.g. 4 * 1024 * 1024 float32.
56
+ CHECK_EQ(q->device.device_type, kDLCUDA) << "The device of q matrix must be CUDA.";
57
+ CHECK_EQ(k->device.device_type, kDLCUDA) << "The device of k matrix must be CUDA.";
58
+ CHECK_EQ(v->device.device_type, kDLCUDA) << "The device of v matrix must be CUDA.";
59
+ CHECK_EQ(o->device.device_type, kDLCUDA) << "The device of o matrix must be CUDA.";
60
+
61
+ size_t dev_id = q->device.device_id;
62
+ CHECK_EQ(k->device.device_id, dev_id) << "The device id of q and k matrix doesn't match.";
63
+ CHECK_EQ(v->device.device_id, dev_id) << "The device id of q and v matrix doesn't match.";
64
+ CHECK_EQ(o->device.device_id, dev_id) << "The device id of q and o matrix doesn't match.";
65
+
66
+ CHECK_GE(q->ndim, 3);
67
+ size_t qo_len = q->shape[q->ndim - 3];
68
+ size_t num_qo_heads = q->shape[q->ndim - 2];
69
+ size_t head_dim = q->shape[q->ndim - 1];
70
+
71
+ CHECK_GE(k->ndim, 3);
72
+ size_t kv_len = k->shape[k->ndim - 3];
73
+ size_t num_kv_heads = k->shape[k->ndim - 2];
74
+ CHECK_EQ(head_dim, k->shape[k->ndim - 1]);
75
+
76
+ CHECK_GE(v->ndim, 3);
77
+ CHECK_EQ(kv_len, v->shape[v->ndim - 3]);
78
+ CHECK_EQ(num_kv_heads, v->shape[v->ndim - 2]);
79
+ CHECK_EQ(head_dim, v->shape[v->ndim - 1]);
80
+
81
+ CHECK_GE(o->ndim, 2);
82
+ CHECK_EQ(qo_len, o->shape[o->ndim - 2]);
83
+ CHECK_EQ(num_qo_heads * head_dim, o->shape[o->ndim - 1]);
84
+
85
+ CHECK(q->dtype.lanes == 1 && k->dtype.lanes == 1 && v->dtype.lanes == 1);
86
+ CHECK(q->dtype.bits == k->dtype.bits && q->dtype.code == k->dtype.code);
87
+ CHECK(q->dtype.bits == v->dtype.bits && q->dtype.code == v->dtype.code);
88
+
89
+ DISPATCH_TVM_CUDA_DTYPE(
90
+ q->dtype, dtype_in, {DISPATCH_TVM_CUDA_DTYPE(o->dtype, dtype_out, {
91
+ cudaError_t status = SinglePrefillWithKVCache(
92
+ (dtype_in*)q->data, (dtype_in*)k->data, (dtype_in*)v->data, (dtype_out*)o->data,
93
+ (dtype_out*)tmp->data, /*lse=*/nullptr, num_qo_heads, num_kv_heads, qo_len, kv_len,
94
+ head_dim, causal, QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode),
95
+ use_fp16_qk_reduction, std::nullopt, rope_scale, rope_theta, 0);
96
+ if (status != cudaSuccess) {
97
+ LOG(FATAL) << "FlashInfer CUDA kernel error " << cudaGetErrorString(status);
98
+ }
99
+ })});
100
+ return 0;
101
+ }
102
+
103
+ int _FlashInferSingleDecodeWithKVCache(DLTensor* q, DLTensor* k, DLTensor* v, DLTensor* tmp,
104
+ int64_t kv_layout, int64_t pos_encoding_mode,
105
+ double rope_scale, double rope_theta, DLTensor* o) {
106
+ // `tmp` is user-provided scratch space of at least 16MB, e.g. 4 * 1024 * 1024 float32.
107
+ CHECK_EQ(q->device.device_type, kDLCUDA) << "The device of q matrix must be CUDA.";
108
+ CHECK_EQ(k->device.device_type, kDLCUDA) << "The device of k matrix must be CUDA.";
109
+ CHECK_EQ(v->device.device_type, kDLCUDA) << "The device of v matrix must be CUDA.";
110
+ CHECK_EQ(o->device.device_type, kDLCUDA) << "The device of o matrix must be CUDA.";
111
+
112
+ size_t dev_id = q->device.device_id;
113
+ CHECK_EQ(k->device.device_id, dev_id) << "The device id of q and k matrix doesn't match.";
114
+ CHECK_EQ(v->device.device_id, dev_id) << "The device id of q and v matrix doesn't match.";
115
+ CHECK_EQ(o->device.device_id, dev_id) << "The device id of q and o matrix doesn't match.";
116
+
117
+ CHECK_GE(q->ndim, 2);
118
+ size_t num_qo_heads = q->shape[q->ndim - 2];
119
+ size_t head_dim = q->shape[q->ndim - 1];
120
+
121
+ CHECK_GE(k->ndim, 3);
122
+ size_t seq_len = k->shape[k->ndim - 3];
123
+ size_t num_kv_heads = k->shape[k->ndim - 2];
124
+ CHECK_EQ(head_dim, k->shape[k->ndim - 1]);
125
+
126
+ CHECK_GE(v->ndim, 3);
127
+ CHECK_EQ(seq_len, v->shape[v->ndim - 3]);
128
+ CHECK_EQ(num_kv_heads, v->shape[v->ndim - 2]);
129
+ CHECK_EQ(head_dim, v->shape[v->ndim - 1]);
130
+
131
+ CHECK_GE(o->ndim, 1);
132
+ CHECK_EQ(num_qo_heads * head_dim, o->shape[o->ndim - 1]);
133
+
134
+ CHECK(q->dtype.lanes == 1 && k->dtype.lanes == 1 && v->dtype.lanes == 1);
135
+ CHECK(q->dtype.bits == k->dtype.bits && q->dtype.code == k->dtype.code);
136
+ CHECK(q->dtype.bits == v->dtype.bits && q->dtype.code == v->dtype.code);
137
+
138
+ DISPATCH_TVM_CUDA_DTYPE(
139
+ q->dtype, dtype_in, {DISPATCH_TVM_CUDA_DTYPE(o->dtype, dtype_out, {
140
+ cudaError_t status = SingleDecodeWithKVCache(
141
+ (dtype_in*)q->data, (dtype_in*)k->data, (dtype_in*)v->data, (dtype_out*)o->data,
142
+ (dtype_out*)tmp->data, num_qo_heads, num_kv_heads, seq_len, head_dim,
143
+ QKVLayout(kv_layout), PosEncodingMode(pos_encoding_mode), rope_scale, rope_theta, 0);
144
+ if (status != cudaSuccess) {
145
+ LOG(FATAL) << "FlashInfer CUDA kernel error " << cudaGetErrorString(status);
146
+ }
147
+ })});
148
+ return 0;
149
+ }
150
+
151
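+ // Pool of handlers so that several paged-KV prefill workloads can be planned and run independently.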
+ constexpr uint32_t max_num_handlers = 8;
152
+ thread_local BatchPrefillHandler batch_prefill_paged_kv_handlers[max_num_handlers];
153
+ thread_local BatchPrefillHandler batch_prefill_ragged_kv_handler;
154
+
155
+ void _FlashInferAttentionPrefillWithPagedKVCache(int64_t handler_id, DLTensor* q_data,
156
+ DLTensor* qo_indptr, //
157
+ DLTensor* pages, //
158
+ DLTensor* page_table_indptr, //
159
+ DLTensor* page_table_values, //
160
+ DLTensor* last_page_len, //
161
+ DLTensor* k_rope_offset, //
162
+ DLTensor* q_rope_offset, //
163
+ DLTensor* output, //
164
+ DLTensor* lse, //
165
+ int64_t causal, //
166
+ int64_t pos_encoding_mode, //
167
+ double rope_scale, //
168
+ double rope_theta,
169
+ double attn_score_scaling_factor = 1.0f) {
170
+ CHECK(handler_id < max_num_handlers) << "The handler id must be less than " << max_num_handlers;
171
+ CHECK_EQ(q_data->device.device_type, kDLCUDA) << "The device of q_data must be CUDA.";
172
+ CHECK_EQ(pages->device.device_type, kDLCUDA) << "The device of kv pages must be CUDA.";
173
+ CHECK_EQ(page_table_indptr->device.device_type, kDLCUDA)
174
+ << "The device of page_table_indptr matrix must be CUDA.";
175
+ CHECK_EQ(page_table_values->device.device_type, kDLCUDA)
176
+ << "The device of page_table_values matrix must be CUDA.";
177
+ CHECK_EQ(last_page_len->device.device_type, kDLCUDA)
178
+ << "The device of last_page_len matrix must be CUDA.";
179
+ CHECK_EQ(q_rope_offset->device.device_type, kDLCUDA)
180
+ << "The device of q_rope_offset matrix must be CUDA.";
181
+ CHECK_EQ(k_rope_offset->device.device_type, kDLCUDA)
182
+ << "The device of k_rope_offset matrix must be CUDA.";
183
+ CHECK_EQ(qo_indptr->device.device_type, kDLCUDA)
184
+ << "The device of qo_indptr matrix must be CUDA.";
185
+ CHECK_EQ(output->device.device_type, kDLCUDA) << "The device of output must be CUDA.";
186
+
187
+ int32_t dev_id = q_data->device.device_id;
188
+ CHECK_EQ(pages->device.device_id, dev_id);
189
+ CHECK_EQ(page_table_indptr->device.device_id, dev_id);
190
+ CHECK_EQ(page_table_values->device.device_id, dev_id);
191
+ CHECK_EQ(last_page_len->device.device_id, dev_id);
192
+ CHECK_EQ(q_rope_offset->device.device_id, dev_id);
193
+ CHECK_EQ(k_rope_offset->device.device_id, dev_id);
194
+ CHECK_EQ(qo_indptr->device.device_id, dev_id);
195
+ CHECK_EQ(output->device.device_id, dev_id);
196
+
197
+ CHECK(q_data->dtype.lanes == 1 && pages->dtype.lanes == 1 && output->dtype.lanes == 1);
198
+ CHECK(q_data->dtype.bits == pages->dtype.bits && q_data->dtype.code == pages->dtype.code);
199
+ CHECK(page_table_indptr->dtype.lanes == 1 && page_table_values->dtype.lanes == 1 &&
200
+ last_page_len->dtype.lanes == 1 && q_rope_offset->dtype.lanes == 1 &&
201
+ k_rope_offset->dtype.lanes == 1 && qo_indptr->dtype.lanes == 1);
202
+ CHECK(page_table_indptr->dtype.bits == page_table_values->dtype.bits &&
203
+ page_table_indptr->dtype.bits == last_page_len->dtype.bits &&
204
+ page_table_indptr->dtype.bits == qo_indptr->dtype.bits &&
205
+ page_table_indptr->dtype.code == page_table_values->dtype.code &&
206
+ page_table_indptr->dtype.code == last_page_len->dtype.code &&
207
+ page_table_indptr->dtype.code == q_rope_offset->dtype.code &&
208
+ page_table_indptr->dtype.code == k_rope_offset->dtype.code &&
209
+ page_table_indptr->dtype.code == qo_indptr->dtype.code);
210
+
211
+ CHECK_EQ(pages->ndim, 5);
212
+ CHECK_EQ(pages->shape[1], 2);
213
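+ // pages layout: (num_pages, 2, num_kv_heads, page_size, head_dim); index 0 along dim 1 stores K, index 1 stores V (kHND)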
+ int64_t nhead_kv = pages->shape[2];
214
+ int64_t nhead_qo = q_data->shape[1];
215
+ int64_t nfeat = pages->shape[4];
216
+ int64_t page_size = pages->shape[3];
217
+
218
+ CHECK_EQ(last_page_len->ndim, 1);
219
+ int64_t num_total_seqs = last_page_len->shape[0];
220
+
221
+ CHECK_EQ(qo_indptr->ndim, 1);
222
+ CHECK_EQ(qo_indptr->shape[0], num_total_seqs + 1);
223
+
224
+ CHECK_EQ(page_table_indptr->ndim, 1);
225
+ CHECK_EQ(page_table_indptr->shape[0], num_total_seqs + 1);
226
+ CHECK_EQ(page_table_values->ndim, 1);
227
+
228
+ CHECK_EQ(q_data->ndim, 3);
229
+ CHECK_EQ(output->ndim, 3);
230
+ CHECK_EQ(q_data->shape[2], nfeat);
231
+ CHECK_EQ(output->shape[1], nhead_qo);
232
+ CHECK_EQ(output->shape[2], nfeat);
233
+ CHECK_EQ(q_rope_offset->ndim, 1);
234
+ CHECK_EQ(q_rope_offset->shape[0], q_data->shape[0]);
235
+
236
+ CHECK_EQ(k_rope_offset->ndim, 1);
237
+ CHECK_EQ(k_rope_offset->shape[0], num_total_seqs);
238
+
239
+ constexpr QKVLayout kv_layout = QKVLayout::kHND;
240
+ const float sm_scale = attn_score_scaling_factor / std::sqrt(static_cast<float>(nfeat));
241
+
242
+ DISPATCH_TVM_CUDA_DTYPE(
243
+ pages->dtype, dtype_in,
244
+ {DISPATCH_TVM_CUDA_DTYPE(
245
+ output->dtype, dtype_out, {DISPATCH_TVM_CUDA_IDTYPE(page_table_values->dtype, dtype_idx, {
246
+ paged_kv_t<dtype_in, dtype_idx> cache(
247
+ nhead_kv, page_size, nfeat, num_total_seqs, kv_layout,
248
+ /*k_data=*/static_cast<dtype_in*>(pages->data),
249
+ /*v_data=*/static_cast<dtype_in*>(pages->data) + pages->strides[1],
250
+ static_cast<dtype_idx*>(page_table_values->data) +
251
+ page_table_values->byte_offset / sizeof(dtype_idx),
252
+ static_cast<dtype_idx*>(page_table_indptr->data) +
253
+ page_table_indptr->byte_offset / sizeof(dtype_idx),
254
+ static_cast<dtype_idx*>(last_page_len->data) +
255
+ last_page_len->byte_offset / sizeof(dtype_idx),
256
+ static_cast<dtype_idx*>(k_rope_offset->data) +
257
+ k_rope_offset->byte_offset / sizeof(dtype_idx));
258
+ cudaError_t status =
259
+ BatchPrefillWithPagedKVCacheWrapper<dtype_in, dtype_in, dtype_out, dtype_idx>(
260
+ &batch_prefill_paged_kv_handlers[handler_id],
261
+ static_cast<dtype_in*>(q_data->data),
262
+ static_cast<dtype_idx*>(qo_indptr->data) +
263
+ qo_indptr->byte_offset / sizeof(dtype_idx),
264
+ static_cast<dtype_idx*>(q_rope_offset->data) +
265
+ q_rope_offset->byte_offset / sizeof(dtype_idx),
266
+ cache, static_cast<dtype_out*>(output->data),
267
+ /*lse=*/static_cast<float*>(lse->data), nhead_qo,
268
+ /*causal=*/causal, PosEncodingMode(pos_encoding_mode),
269
+ /*use_fp16_qk_reduction=*/false, sm_scale, rope_scale, rope_theta,
270
+ /*stream=*/0);
271
+ if (status != cudaSuccess) {
272
+ LOG(FATAL) << "FlashInfer CUDA kernel error " << cudaGetErrorString(status);
273
+ }
274
+ })})});
275
+ }
276
+
277
+ void _FlashInferAttentionPrefillWithPagedKVCachePlan(
278
+ int64_t handler_idx, DLTensor* float_workspace_buffer, DLTensor* int_workspace_buffer,
279
+ DLTensor* qo_indptr, DLTensor* kv_indptr, int64_t batch_size, int64_t num_qo_heads,
280
+ int64_t num_kv_heads, int64_t head_dim, int64_t page_size, TVMStreamHandle copy_stream) {
281
+ CHECK_EQ(float_workspace_buffer->ndim, 1) << "The float workspace buffer must be a 1-D tensor";
282
+ size_t float_workspace_size_in_bytes =
283
+ float_workspace_buffer->shape[0] * float_workspace_buffer->dtype.bits / 8;
284
+ CHECK_EQ(int_workspace_buffer->ndim, 1) << "The int workspace buffer must be a 1-D tensor";
285
+ size_t int_workspace_size_in_bytes =
286
+ int_workspace_buffer->shape[0] * int_workspace_buffer->dtype.bits / 8;
287
+ CHECK(handler_idx < max_num_handlers) << "The handler id must be less than " << max_num_handlers;
288
+
289
+ // NOTE(Zihao): here we presume the input data type is half, in the future we should
290
+ // leave a parameter for the input data type.
291
+ using dtype_in = half;
292
+ cudaStream_t original_stream = batch_prefill_paged_kv_handlers[handler_idx].GetCUDAStream();
293
+ batch_prefill_paged_kv_handlers[handler_idx].SetCUDAStream(
294
+ static_cast<cudaStream_t>(copy_stream));
295
+ DISPATCH_TVM_CUDA_IDTYPE(qo_indptr->dtype, dtype_idx, {
296
+ cudaError_t status = batch_prefill_paged_kv_handlers[handler_idx].Plan<dtype_in, dtype_idx>(
297
+ static_cast<void*>(float_workspace_buffer->data), float_workspace_size_in_bytes,
298
+ static_cast<void*>(int_workspace_buffer->data), int_workspace_size_in_bytes,
299
+ static_cast<dtype_idx*>(qo_indptr->data) + qo_indptr->byte_offset / sizeof(dtype_idx),
300
+ static_cast<dtype_idx*>(kv_indptr->data) + kv_indptr->byte_offset / sizeof(dtype_idx),
301
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size);
302
+ if (status != cudaSuccess) {
303
+ LOG(FATAL) << "FlashInfer prefill Plan error " << cudaGetErrorString(status);
304
+ }
305
+ });
306
+ batch_prefill_paged_kv_handlers[handler_idx].SetCUDAStream(original_stream);
307
+ }
308
+
309
+ // Creates a pool of handlers with a fixed size to independently handle decoding forward passes.
310
+ thread_local BatchDecodeHandler batch_decode_handlers[max_num_handlers];
311
+
312
+ void _FlashInferAttentionDecodeWithPagedKVCache(int64_t handler_id, DLTensor* q_data,
313
+ DLTensor* pages,
314
+ DLTensor* page_table_indptr, //
315
+ DLTensor* page_table_values, //
316
+ DLTensor* last_page_len, //
317
+ DLTensor* k_rope_offset, //
318
+ DLTensor* q_rope_offset, //
319
+ DLTensor* output, //
320
+ DLTensor* lse, //
321
+ int64_t pos_encoding_mode = 0, //
322
+ double rope_scale = 1.0f, //
323
+ double rope_theta = 1e4,
324
+ double attn_score_scaling_factor = 1.0f) {
325
+ CHECK_LT(handler_id, max_num_handlers) << "The handler id must be less than " << max_num_handlers;
326
+ CHECK_EQ(q_data->device.device_type, kDLCUDA) << "The device of q_data must be CUDA.";
327
+ CHECK_EQ(pages->device.device_type, kDLCUDA) << "The device of kv pages must be CUDA.";
328
+ CHECK_EQ(page_table_indptr->device.device_type, kDLCUDA)
329
+ << "The device of page_table_indptr matrix must be CUDA.";
330
+ CHECK_EQ(page_table_values->device.device_type, kDLCUDA)
331
+ << "The device of page_table_values matrix must be CUDA.";
332
+ CHECK_EQ(last_page_len->device.device_type, kDLCUDA)
333
+ << "The device of last_page_len matrix must be CUDA.";
334
+ CHECK_EQ(q_rope_offset->device.device_type, kDLCUDA)
335
+ << "The device of q_rope_offset matrix must be CUDA.";
336
+ CHECK_EQ(k_rope_offset->device.device_type, kDLCUDA)
337
+ << "The device of k_rope_offset matrix must be CUDA.";
338
+ CHECK_EQ(output->device.device_type, kDLCUDA) << "The device of output must be CUDA.";
339
+
340
+ int32_t dev_id = q_data->device.device_id;
341
+ CHECK_EQ(pages->device.device_id, dev_id);
342
+ CHECK_EQ(page_table_indptr->device.device_id, dev_id);
343
+ CHECK_EQ(page_table_values->device.device_id, dev_id);
344
+ CHECK_EQ(last_page_len->device.device_id, dev_id);
345
+ CHECK_EQ(q_rope_offset->device.device_id, dev_id);
346
+ CHECK_EQ(k_rope_offset->device.device_id, dev_id);
347
+ CHECK_EQ(output->device.device_id, dev_id);
348
+
349
+ CHECK(q_data->dtype.lanes == 1 && pages->dtype.lanes == 1 && output->dtype.lanes == 1);
350
+ CHECK(q_data->dtype.bits == pages->dtype.bits && q_data->dtype.code == pages->dtype.code);
351
+ CHECK(page_table_indptr->dtype.lanes == 1 && page_table_values->dtype.lanes == 1 &&
352
+ last_page_len->dtype.lanes == 1 && q_rope_offset->dtype.lanes == 1 &&
353
+ k_rope_offset->dtype.lanes == 1);
354
+ CHECK(page_table_indptr->dtype.bits == page_table_values->dtype.bits &&
355
+ page_table_indptr->dtype.bits == last_page_len->dtype.bits &&
356
+ page_table_indptr->dtype.code == page_table_values->dtype.code &&
357
+ page_table_indptr->dtype.code == last_page_len->dtype.code &&
358
+ page_table_indptr->dtype.code == q_rope_offset->dtype.code &&
359
+ page_table_indptr->dtype.code == k_rope_offset->dtype.code);
360
+
361
+ CHECK_EQ(pages->ndim, 5);
362
+ CHECK_EQ(pages->shape[1], 2);
363
+ int64_t nhead_kv = pages->shape[2];
364
+ int64_t nfeat = pages->shape[4];
365
+ int64_t page_size = pages->shape[3];
366
+
367
+ CHECK_EQ(last_page_len->ndim, 1);
368
+ int64_t num_total_seqs = last_page_len->shape[0];
369
+
370
+ CHECK_EQ(page_table_indptr->ndim, 1);
371
+ CHECK_EQ(page_table_indptr->shape[0], num_total_seqs + 1);
372
+ CHECK_EQ(page_table_values->ndim, 1);
373
+
374
+ CHECK_EQ(q_data->ndim, 3);
375
+ CHECK_EQ(output->ndim, 3);
376
+ CHECK_GE(q_data->shape[0], 1);
377
+ CHECK_EQ(q_data->shape[0], output->shape[0]);
378
+ CHECK_EQ(q_data->shape[2], nfeat);
379
+ int64_t nhead_qo = q_data->shape[1];
380
+ CHECK_EQ(output->shape[1], nhead_qo);
381
+ CHECK_EQ(output->shape[2], nfeat);
382
+ CHECK_EQ(q_rope_offset->ndim, 1);
383
+ CHECK_EQ(q_rope_offset->shape[0], num_total_seqs);
384
+
385
+ CHECK_EQ(k_rope_offset->ndim, 1);
386
+ CHECK_EQ(k_rope_offset->shape[0], num_total_seqs);
387
+
388
+ constexpr QKVLayout kv_layout = QKVLayout::kHND;
389
+ const float sm_scale = attn_score_scaling_factor / std::sqrt(static_cast<float>(nfeat));
390
+
391
+ DISPATCH_TVM_CUDA_DTYPE(
392
+ pages->dtype, dtype_in,
393
+ {DISPATCH_TVM_CUDA_DTYPE(
394
+ output->dtype, dtype_out, {DISPATCH_TVM_CUDA_IDTYPE(page_table_values->dtype, dtype_idx, {
395
+ paged_kv_t<dtype_in, dtype_idx> cache(
396
+ nhead_kv, page_size, nfeat, num_total_seqs, kv_layout,
397
+ /*k_data=*/static_cast<dtype_in*>(pages->data),
398
+ /*v_data=*/static_cast<dtype_in*>(pages->data) + pages->strides[1],
399
+ static_cast<dtype_idx*>(page_table_values->data) +
400
+ page_table_values->byte_offset / sizeof(dtype_idx),
401
+ static_cast<dtype_idx*>(page_table_indptr->data) +
402
+ page_table_indptr->byte_offset / sizeof(dtype_idx),
403
+ static_cast<dtype_idx*>(last_page_len->data) +
404
+ last_page_len->byte_offset / sizeof(dtype_idx),
405
+ static_cast<dtype_idx*>(k_rope_offset->data) +
406
+ k_rope_offset->byte_offset / sizeof(dtype_idx));
407
+ cudaError_t status =
408
+ BatchDecodeWithPagedKVCacheWrapper<dtype_in, dtype_in, dtype_out, dtype_idx>(
409
+ &batch_decode_handlers[handler_id], static_cast<dtype_in*>(q_data->data),
410
+ static_cast<dtype_idx*>(q_rope_offset->data) +
411
+ q_rope_offset->byte_offset / sizeof(dtype_idx),
412
+ cache, static_cast<dtype_out*>(output->data),
413
+ /*lse=*/static_cast<float*>(lse->data), nhead_qo,
414
+ PosEncodingMode(pos_encoding_mode), sm_scale, rope_scale, rope_theta,
415
+ /*stream=*/0);
416
+ if (status != cudaSuccess) {
417
+ LOG(FATAL) << "FlashInfer CUDA kernel error " << cudaGetErrorString(status);
418
+ }
419
+ })})});
420
+ }
421
+
422
+ void _FlashInferAttentionDecodeWithPagedKVCachePlan(
423
+ int64_t handler_idx, DLTensor* float_workspace_buffer, DLTensor* int_workspace_buffer,
424
+ DLTensor* page_table_indptr, DLTensor* last_page_len, int64_t num_qo_heads,
425
+ int64_t num_kv_heads, int64_t head_dim, int64_t page_size, int64_t pos_encoding_mode,
426
+ TVMStreamHandle copy_stream) {
427
+ CHECK_EQ(float_workspace_buffer->ndim, 1) << "The float workspace buffer must be a 1-D tensor";
428
+ size_t float_workspace_size_in_bytes =
429
+ float_workspace_buffer->shape[0] * float_workspace_buffer->dtype.bits / 8;
430
+ CHECK_EQ(int_workspace_buffer->ndim, 1) << "The int workspace buffer must be a 1-D tensor";
431
+ size_t int_workspace_size_in_bytes =
432
+ int_workspace_buffer->shape[0] * int_workspace_buffer->dtype.bits / 8;
433
+ CHECK_LT(handler_idx, max_num_handlers)
434
+ << "The handler id must be less than " << max_num_handlers;
435
+ // NOTE(Zihao): here we presume the input data type is half, in the future we should
436
+ // leave a parameter for the input data type.
437
+ using dtype_in = half;
438
+ const uint32_t batch_size = page_table_indptr->shape[0] - 1;
439
+ cudaStream_t original_stream = batch_decode_handlers[handler_idx].GetCUDAStream();
440
+ batch_decode_handlers[handler_idx].SetCUDAStream(static_cast<cudaStream_t>(copy_stream));
441
+ DISPATCH_TVM_CUDA_IDTYPE(page_table_indptr->dtype, dtype_idx, {
442
+ cudaError_t status = BatchDecodeHandlerPlan<dtype_in, dtype_in, dtype_in, dtype_idx>(
443
+ batch_decode_handlers + handler_idx, static_cast<void*>(float_workspace_buffer->data),
444
+ float_workspace_size_in_bytes, static_cast<void*>(int_workspace_buffer->data),
445
+ int_workspace_size_in_bytes,
446
+ static_cast<dtype_idx*>(page_table_indptr->data) +
447
+ page_table_indptr->byte_offset / sizeof(dtype_idx),
448
+ static_cast<dtype_idx*>(last_page_len->data) +
449
+ last_page_len->byte_offset / sizeof(dtype_idx),
450
+ batch_size, num_qo_heads, num_kv_heads, head_dim, page_size,
451
+ PosEncodingMode(pos_encoding_mode));
452
+ if (status != cudaSuccess) {
453
+ LOG(FATAL) << "FlashInfer decode Plan error " << cudaGetErrorString(status);
454
+ }
455
+ });
456
+ batch_decode_handlers[handler_idx].SetCUDAStream(original_stream);
457
+ }
458
+
459
+ void _FlashInferAttentionPrefillWithRaggedKVCache(
460
+ DLTensor* q_data, DLTensor* qo_indptr, DLTensor* k_data, DLTensor* v_data, DLTensor* kv_indptr,
461
+ DLTensor* q_rope_offset_map, DLTensor* k_rope_offset, DLTensor* output, DLTensor* lse,
462
+ int64_t causal = 1, int64_t pos_encoding_mode = 0, double rope_scale = 1.0f,
463
+ double rope_theta = 1e4, double attn_score_scaling_factor = 1.0f) {
464
+ CHECK_EQ(q_data->device.device_type, kDLCUDA) << "The device of q_data must be CUDA.";
465
+ CHECK_EQ(qo_indptr->device.device_type, kDLCUDA) << "The device of qo_indptr must be CUDA.";
466
+ CHECK_EQ(k_data->device.device_type, kDLCUDA) << "The device of k_data must be CUDA.";
467
+ CHECK_EQ(v_data->device.device_type, kDLCUDA) << "The device of v_data must be CUDA.";
468
+ CHECK_EQ(kv_indptr->device.device_type, kDLCUDA) << "The device of kv_indptr must be CUDA.";
469
+ CHECK_EQ(output->device.device_type, kDLCUDA) << "The device of output must be CUDA.";
470
+ CHECK_EQ(lse->device.device_type, kDLCUDA) << "The device of lse must be CUDA.";
471
+ CHECK_EQ(q_rope_offset_map->device.device_type, kDLCUDA)
472
+ << "The device of q_rope_offset_map must be CUDA.";
473
+ CHECK_EQ(k_rope_offset->device.device_type, kDLCUDA)
474
+ << "The device of k_rope_offset must be CUDA.";
475
+
476
+ int dev_id = q_data->device.device_id;
477
+ CHECK_EQ(qo_indptr->device.device_id, dev_id);
478
+ CHECK_EQ(k_data->device.device_id, dev_id);
479
+ CHECK_EQ(v_data->device.device_id, dev_id);
480
+ CHECK_EQ(kv_indptr->device.device_id, dev_id);
481
+ CHECK_EQ(output->device.device_id, dev_id);
482
+ CHECK_EQ(lse->device.device_id, dev_id);
483
+ CHECK_EQ(q_rope_offset_map->device.device_id, dev_id);
484
+ CHECK_EQ(k_rope_offset->device.device_id, dev_id);
485
+
486
+ CHECK(q_data->dtype.lanes == 1 && qo_indptr->dtype.lanes == 1 && k_data->dtype.lanes == 1 &&
487
+ v_data->dtype.lanes == 1 && kv_indptr->dtype.lanes == 1 && output->dtype.lanes == 1 &&
488
+ lse->dtype.lanes == 1 && q_rope_offset_map->dtype.lanes == 1 &&
489
+ k_rope_offset->dtype.lanes == 1);
490
+ CHECK(q_data->dtype.bits == k_data->dtype.bits && q_data->dtype.code == v_data->dtype.code);
491
+ CHECK(qo_indptr->dtype.bits == kv_indptr->dtype.bits);
492
+ CHECK(lse->dtype.bits == 32);
493
+ CHECK(q_data->dtype.code == k_data->dtype.code && q_data->dtype.code == v_data->dtype.code);
494
+ CHECK(qo_indptr->dtype.code == kv_indptr->dtype.code);
495
+ CHECK(q_rope_offset_map->dtype.code == kv_indptr->dtype.code);
496
+ CHECK(k_rope_offset->dtype.code == kv_indptr->dtype.code);
497
+ CHECK(lse->dtype.code == kDLFloat);
498
+
499
+ CHECK_EQ(q_data->ndim, 3); // qo_nnz, nhead_qo, nfeat
500
+ CHECK_EQ(output->ndim, 3); // qo_nnz, nhead_qo, nfeat
501
+ CHECK_EQ(lse->ndim, 2); // qo_nnz, nhead_qo
502
+ CHECK_EQ(k_data->ndim, 3); // kv_nnz, nhead_kv, nfeat
503
+ CHECK_EQ(v_data->ndim, 3); // kv_nnz, nhead_kv, nfeat
504
+ int64_t nhead_qo = q_data->shape[1];
505
+ int64_t nfeat = q_data->shape[2];
506
+ int64_t nhead_kv = k_data->shape[1];
507
+ CHECK_EQ(output->shape[0], q_data->shape[0]);
508
+ CHECK_EQ(output->shape[1], nhead_qo);
509
+ CHECK_EQ(output->shape[2], nfeat);
510
+ CHECK_EQ(lse->shape[0], q_data->shape[0]);
511
+ CHECK_EQ(lse->shape[1], nhead_qo);
512
+ CHECK_EQ(k_data->shape[2], nfeat);
513
+ CHECK_EQ(v_data->shape[0], k_data->shape[0]);
514
+ CHECK_EQ(v_data->shape[1], nhead_kv);
515
+ CHECK_EQ(v_data->shape[2], nfeat);
516
+
517
+ CHECK_EQ(qo_indptr->ndim, 1);
518
+ CHECK_EQ(kv_indptr->ndim, 1);
519
+ int64_t batch_size = qo_indptr->shape[0] - 1;
520
+ CHECK_EQ(kv_indptr->shape[0], batch_size + 1);
521
+
522
+ CHECK_EQ(q_rope_offset_map->ndim, 1);
523
+ CHECK_EQ(q_rope_offset_map->shape[0], q_data->shape[0]);
524
+ CHECK_EQ(k_rope_offset->ndim, 1);
525
+ CHECK_EQ(k_rope_offset->shape[0], batch_size);
526
+
527
+ const float sm_scale = attn_score_scaling_factor / std::sqrt(static_cast<float>(nfeat));
528
+
529
+ DISPATCH_TVM_CUDA_DTYPE(
530
+ q_data->dtype, dtype_in,
531
+ {DISPATCH_TVM_CUDA_DTYPE(
532
+ output->dtype, dtype_out, {DISPATCH_TVM_CUDA_IDTYPE(qo_indptr->dtype, dtype_idx, {
533
+ cudaError_t status =
534
+ BatchPrefillWithRaggedKVCacheWrapper<dtype_in, dtype_in, dtype_out, dtype_idx>(
535
+ &batch_prefill_ragged_kv_handler, static_cast<dtype_in*>(q_data->data),
536
+ static_cast<dtype_idx*>(qo_indptr->data) +
537
+ qo_indptr->byte_offset / sizeof(dtype_idx),
538
+ static_cast<dtype_in*>(k_data->data), static_cast<dtype_in*>(v_data->data),
539
+ static_cast<dtype_idx*>(kv_indptr->data) +
540
+ kv_indptr->byte_offset / sizeof(dtype_idx),
541
+ static_cast<dtype_idx*>(q_rope_offset_map->data) +
542
+ q_rope_offset_map->byte_offset / sizeof(dtype_idx),
543
+ static_cast<dtype_idx*>(k_rope_offset->data) +
544
+ k_rope_offset->byte_offset / sizeof(dtype_idx),
545
+ static_cast<dtype_out*>(output->data),
546
+ /*lse=*/static_cast<float*>(lse->data), batch_size, nhead_qo, nhead_kv, nfeat,
547
+ /*causal=*/bool(causal), QKVLayout::kNHD, PosEncodingMode(pos_encoding_mode),
548
+ /*use_fp16_qk_reduction=*/false, sm_scale, rope_scale, rope_theta,
549
+ /*stream=*/0);
550
+ if (status != cudaSuccess) {
551
+ LOG(FATAL) << "FlashInfer AttentionPrefillWithRaggedKVCache error "
552
+ << cudaGetErrorString(status);
553
+ }
554
+ })})})
555
+ }
556
+
557
+ void _FlashInferAttentionPrefillWithRaggedKVCachePlan(DLTensor* float_workspace_buffer,
558
+ DLTensor* int_workspace_buffer,
559
+ DLTensor* qo_indptr, DLTensor* kv_indptr,
560
+ int64_t batch_size, int64_t num_qo_heads,
561
+ int64_t num_kv_heads, int64_t head_dim,
562
+ TVMStreamHandle copy_stream) {
563
+ CHECK_EQ(float_workspace_buffer->ndim, 1) << "The workspace buffer must be a 1-D tensor";
564
+ size_t float_workspace_size_in_bytes =
565
+ float_workspace_buffer->shape[0] * float_workspace_buffer->dtype.bits / 8;
566
+ CHECK_EQ(int_workspace_buffer->ndim, 1) << "The workspace buffer must be a 1-D tensor";
567
+ size_t int_workspace_size_in_bytes =
568
+ int_workspace_buffer->shape[0] * int_workspace_buffer->dtype.bits / 8;
569
+ cudaStream_t original_stream = batch_prefill_ragged_kv_handler.GetCUDAStream();
570
+ batch_prefill_ragged_kv_handler.SetCUDAStream(static_cast<cudaStream_t>(copy_stream));
571
+
572
+ // NOTE(Zihao): here we presume the input data type is half, in the future we should
573
+ // leave a parameter for the input data type.
574
+ using dtype_in = half;
575
+
576
+ DISPATCH_TVM_CUDA_IDTYPE(qo_indptr->dtype, dtype_idx, {
577
+ cudaError_t status = batch_prefill_ragged_kv_handler.Plan<dtype_in, dtype_idx>(
578
+ static_cast<void*>(float_workspace_buffer->data), float_workspace_size_in_bytes,
579
+ static_cast<void*>(int_workspace_buffer->data), int_workspace_size_in_bytes,
580
+ static_cast<dtype_idx*>(qo_indptr->data) + qo_indptr->byte_offset / sizeof(dtype_idx),
581
+ static_cast<dtype_idx*>(kv_indptr->data) + kv_indptr->byte_offset / sizeof(dtype_idx),
582
+ batch_size, num_qo_heads, num_kv_heads, head_dim,
583
+ /*page_size=*/1);
584
+ if (status != cudaSuccess) {
585
+ LOG(FATAL) << "FlashInfer PrefillWithRaggedKVCache Plan error " << cudaGetErrorString(status);
586
+ }
587
+ });
588
+ batch_prefill_ragged_kv_handler.SetCUDAStream(original_stream);
589
+ }
590
+
591
+ void _FlashInferMergeState(DLTensor* v_a, DLTensor* s_a, DLTensor* v_b, DLTensor* s_b,
592
+ DLTensor* v_merged, DLTensor* s_merged) {
593
+ CHECK_EQ(v_a->device.device_type, kDLCUDA) << "The device of v_a must be CUDA.";
594
+ CHECK_EQ(s_a->device.device_type, kDLCUDA) << "The device of s_a must be CUDA.";
595
+ CHECK_EQ(v_b->device.device_type, kDLCUDA) << "The device of v_b must be CUDA.";
596
+ CHECK_EQ(s_b->device.device_type, kDLCUDA) << "The device of s_b must be CUDA.";
597
+ CHECK_EQ(v_merged->device.device_type, kDLCUDA) << "The device of v_merged must be CUDA.";
598
+ CHECK_EQ(s_merged->device.device_type, kDLCUDA) << "The device of s_merged must be CUDA.";
599
+ int32_t dev_id = v_a->device.device_id;
600
+ CHECK_EQ(s_a->device.device_id, dev_id);
601
+ CHECK_EQ(v_b->device.device_id, dev_id);
602
+ CHECK_EQ(s_b->device.device_id, dev_id);
603
+ CHECK_EQ(v_merged->device.device_id, dev_id);
604
+ CHECK_EQ(s_merged->device.device_id, dev_id);
605
+
606
+ CHECK(v_a->dtype.lanes == 1 && s_a->dtype.lanes == 1 && v_b->dtype.lanes == 1 &&
607
+ s_b->dtype.lanes == 1 && v_merged->dtype.lanes == 1 && s_merged->dtype.lanes == 1);
608
+ CHECK(v_a->dtype.bits == v_b->dtype.bits && v_a->dtype.code == v_b->dtype.code);
609
+ CHECK(s_a->dtype.bits == 32 && s_a->dtype.code == kDLFloat);
610
+ CHECK(s_b->dtype.bits == 32 && s_b->dtype.code == kDLFloat);
611
+ CHECK(s_merged->dtype.bits == 32 && s_merged->dtype.code == kDLFloat);
612
+
613
+ CHECK_EQ(v_a->ndim, 3);
614
+ int64_t batch_size = v_a->shape[0];
615
+ int64_t num_heads = v_a->shape[1];
616
+ int64_t head_dim = v_a->shape[2];
617
+ CHECK_EQ(s_a->shape[0], batch_size);
618
+ CHECK_EQ(s_a->shape[1], num_heads);
619
+ CHECK_EQ(v_b->shape[0], batch_size);
620
+ CHECK_EQ(v_b->shape[1], num_heads);
621
+ CHECK_EQ(v_b->shape[2], head_dim);
622
+ CHECK_EQ(s_b->shape[0], batch_size);
623
+ CHECK_EQ(s_b->shape[1], num_heads);
624
+ CHECK_EQ(v_merged->shape[0], batch_size);
625
+ CHECK_EQ(v_merged->shape[1], num_heads);
626
+ CHECK_EQ(v_merged->shape[2], head_dim);
627
+ CHECK_EQ(s_merged->shape[0], batch_size);
628
+ CHECK_EQ(s_merged->shape[1], num_heads);
629
+
630
+ DISPATCH_TVM_CUDA_DTYPE(
631
+ v_a->dtype, dtype_in, {DISPATCH_TVM_CUDA_DTYPE(v_merged->dtype, dtype_out, {
632
+ cudaError_t status =
633
+ MergeState(static_cast<dtype_in*>(v_a->data), static_cast<float*>(s_a->data),
634
+ static_cast<dtype_in*>(v_b->data), static_cast<float*>(s_b->data),
635
+ static_cast<dtype_out*>(v_merged->data), static_cast<float*>(s_merged->data),
636
+ batch_size, num_heads, head_dim);
637
+ if (status != cudaSuccess) {
638
+ LOG(FATAL) << "FlashInfer CUDA MergeState error " << cudaGetErrorString(status);
639
+ }
640
+ })});
641
+ }
642
+
643
+ void _FlashInferMergeStateInPlace(DLTensor* v, DLTensor* s, DLTensor* v_other, DLTensor* s_other) {
644
+ CHECK_EQ(v->device.device_type, kDLCUDA) << "The device of v must be CUDA.";
645
+ CHECK_EQ(s->device.device_type, kDLCUDA) << "The device of s must be CUDA.";
646
+ CHECK_EQ(v_other->device.device_type, kDLCUDA) << "The device of v_other must be CUDA.";
647
+ CHECK_EQ(s_other->device.device_type, kDLCUDA) << "The device of s_other must be CUDA.";
648
+ int32_t dev_id = v->device.device_id;
649
+ CHECK_EQ(s->device.device_id, dev_id);
650
+ CHECK_EQ(v_other->device.device_id, dev_id);
651
+ CHECK_EQ(s_other->device.device_id, dev_id);
652
+
653
+ CHECK(v->dtype.lanes == 1 && s->dtype.lanes == 1 && v_other->dtype.lanes == 1 &&
654
+ s_other->dtype.lanes == 1);
655
+ CHECK(v->dtype.bits == v_other->dtype.bits && v->dtype.code == v_other->dtype.code);
656
+ CHECK(s->dtype.bits == 32 && s->dtype.code == kDLFloat);
657
+ CHECK(s_other->dtype.bits == 32 && s_other->dtype.code == kDLFloat);
658
+
659
+ CHECK_EQ(v->ndim, 3);
660
+ CHECK_EQ(v_other->ndim, 3);
661
+ CHECK_EQ(s->ndim, 2); // qo_nnz, nhead_qo
662
+ CHECK_EQ(s_other->ndim, 2); // qo_nnz, nhead_qo
663
+ int64_t batch_size = v->shape[0];
664
+ int64_t num_heads = v->shape[1];
665
+ int64_t head_dim = v->shape[2];
666
+ CHECK_EQ(s->shape[0], batch_size);
667
+ CHECK_EQ(s->shape[1], num_heads);
668
+ CHECK_EQ(v_other->shape[0], batch_size);
669
+ CHECK_EQ(v_other->shape[1], num_heads);
670
+ CHECK_EQ(v_other->shape[2], head_dim);
671
+ CHECK_EQ(s_other->shape[0], batch_size);
672
+ CHECK_EQ(s_other->shape[1], num_heads);
673
+
674
+ DISPATCH_TVM_CUDA_DTYPE(v->dtype, dtype, {
675
+ cudaError_t status =
676
+ MergeStateInPlace(static_cast<dtype*>(v->data), static_cast<float*>(s->data),
677
+ static_cast<dtype*>(v_other->data), static_cast<float*>(s_other->data),
678
+ batch_size, num_heads, head_dim);
679
+ if (status != cudaSuccess) {
680
+ LOG(FATAL) << "FlashInfer CUDA MergeStateInPlace error " << cudaGetErrorString(status);
681
+ }
682
+ });
683
+ }
684
+
685
+ void _FlashInferBatchQKApplyRotaryInPlace(DLTensor* q, DLTensor* k, DLTensor* indptr,
686
+ DLTensor* offsets, int64_t batch_size,
687
+ int64_t num_qo_heads, int64_t num_kv_heads,
688
+ int64_t head_dim, double rope_scale, double rope_theta) {
689
+ size_t q_stride_n = q->strides[0];
690
+ size_t q_stride_h = q->strides[1];
691
+ size_t k_stride_n = k->strides[0];
692
+ size_t k_stride_h = k->strides[1];
693
+ DISPATCH_TVM_CUDA_DTYPE(
694
+ q->dtype, dtype, {DISPATCH_TVM_CUDA_IDTYPE(indptr->dtype, idtype, {
695
+ cudaError_t status = BatchQKApplyRotaryInPlace(
696
+ static_cast<dtype*>(q->data), static_cast<dtype*>(k->data),
697
+ static_cast<idtype*>(indptr->data), static_cast<idtype*>(offsets->data), batch_size,
698
+ num_qo_heads, num_kv_heads, /*rotary_dim=*/head_dim, head_dim, q_stride_n, q_stride_h,
699
+ k_stride_n, k_stride_h,
700
+ /*interleave=*/false, rope_scale, rope_theta);
701
+ if (status != cudaSuccess) {
702
+ LOG(FATAL) << "FlashInfer CUDA kernel error " << cudaGetErrorString(status);
703
+ }
704
+ })});
705
+ }
706
+
707
+ void _FlashInferParallelSamplingFromProb(DLTensor* probs, DLTensor* uniform_samples,
708
+ DLTensor* row_indices, DLTensor* sampled_token_ids) {
709
+ CHECK_EQ(probs->device.device_type, kDLCUDA) << "The device of probs must be CUDA.";
710
+ CHECK_EQ(uniform_samples->device.device_type, kDLCUDA)
711
+ << "The device of uniform_samples must be CUDA.";
712
+ CHECK_EQ(row_indices->device.device_type, kDLCUDA) << "The device of row_indices must be CUDA.";
713
+ CHECK_EQ(sampled_token_ids->device.device_type, kDLCUDA)
714
+ << "The device of sampled_token_ids must be CUDA.";
715
+
716
+ int dev_id = probs->device.device_id;
717
+ CHECK_EQ(uniform_samples->device.device_id, dev_id);
718
+ CHECK_EQ(row_indices->device.device_id, dev_id);
719
+ CHECK_EQ(sampled_token_ids->device.device_id, dev_id);
720
+
721
+ CHECK(probs->dtype.lanes == 1 && uniform_samples->dtype.lanes == 1 &&
722
+ row_indices->dtype.lanes == 1 && sampled_token_ids->dtype.lanes == 1);
723
+ CHECK(probs->dtype.code == kDLFloat && probs->dtype.bits == 32);
724
+ CHECK(uniform_samples->dtype.code == kDLFloat && uniform_samples->dtype.bits == 32);
725
+ CHECK(row_indices->dtype.code == kDLInt && row_indices->dtype.bits == 32);
726
+ CHECK(sampled_token_ids->dtype.code == kDLInt && sampled_token_ids->dtype.bits == 32);
727
+
728
+ CHECK_EQ(probs->ndim, 2); // num_probs, vocab_size
729
+ CHECK_EQ(uniform_samples->ndim, 1); // batch_size,
730
+ CHECK_EQ(row_indices->ndim, 1); // batch_size,
731
+ CHECK_EQ(sampled_token_ids->ndim, 1); // batch_size,
732
+ int64_t num_probs = probs->shape[0];
733
+ int64_t vocab_size = probs->shape[1];
734
+ int64_t batch_size = row_indices->shape[0];
735
+ CHECK_EQ(uniform_samples->shape[0], batch_size);
736
+ CHECK_EQ(sampled_token_ids->shape[0], batch_size);
737
+
738
+ cudaError_t status = sampling::ParallelSamplingFromProb<float, int32_t>(
739
+ static_cast<float*>(probs->data), static_cast<float*>(uniform_samples->data),
740
+ static_cast<int32_t*>(sampled_token_ids->data), static_cast<int32_t*>(row_indices->data),
741
+ batch_size, vocab_size, /*deterministic=*/true);
742
+ if (status != cudaSuccess) {
743
+ LOG(FATAL) << "FlashInfer ParallelTopPSamplingFromProb error " << cudaGetErrorString(status);
744
+ }
745
+ }
+
+ void _FlashInferParallelTopPSamplingFromProb(DLTensor* probs, DLTensor* uniform_samples,
+                                              DLTensor* row_indices, DLTensor* top_p,
+                                              DLTensor* sampled_token_ids) {
+   CHECK_EQ(probs->device.device_type, kDLCUDA) << "The device of probs must be CUDA.";
+   CHECK_EQ(uniform_samples->device.device_type, kDLCUDA)
+       << "The device of uniform_samples must be CUDA.";
+   CHECK_EQ(row_indices->device.device_type, kDLCUDA) << "The device of row_indices must be CUDA.";
+   CHECK_EQ(top_p->device.device_type, kDLCUDA) << "The device of top_p must be CUDA.";
+   CHECK_EQ(sampled_token_ids->device.device_type, kDLCUDA)
+       << "The device of sampled_token_ids must be CUDA.";
+
+   int dev_id = probs->device.device_id;
+   CHECK_EQ(uniform_samples->device.device_id, dev_id);
+   CHECK_EQ(row_indices->device.device_id, dev_id);
+   CHECK_EQ(top_p->device.device_id, dev_id);
+   CHECK_EQ(sampled_token_ids->device.device_id, dev_id);
+
+   CHECK(probs->dtype.lanes == 1 && uniform_samples->dtype.lanes == 1 &&
+         row_indices->dtype.lanes == 1 && top_p->dtype.lanes == 1 &&
+         sampled_token_ids->dtype.lanes == 1);
+   CHECK(probs->dtype.code == kDLFloat && probs->dtype.bits == 32);
+   CHECK(uniform_samples->dtype.code == kDLFloat && uniform_samples->dtype.bits == 32);
+   CHECK(top_p->dtype.code == kDLFloat && top_p->dtype.bits == 32);
+   CHECK(row_indices->dtype.code == kDLInt && row_indices->dtype.bits == 32);
+   CHECK(sampled_token_ids->dtype.code == kDLInt && sampled_token_ids->dtype.bits == 32);
+
+   CHECK_EQ(probs->ndim, 2);              // num_probs, vocab_size
+   CHECK_EQ(uniform_samples->ndim, 2);    // num_rounds, batch_size
+   CHECK_EQ(row_indices->ndim, 1);        // batch_size,
+   CHECK_EQ(top_p->ndim, 1);              // num_probs,
+   CHECK_EQ(sampled_token_ids->ndim, 1);  // batch_size,
+   int64_t num_probs = probs->shape[0];
+   int64_t vocab_size = probs->shape[1];
+   int64_t batch_size = row_indices->shape[0];
+   int64_t num_rounds = uniform_samples->shape[0];
+   CHECK_EQ(uniform_samples->shape[1], batch_size);
+   CHECK_EQ(top_p->shape[0], num_probs);
+   CHECK_EQ(sampled_token_ids->shape[0], batch_size);
+
+   cudaError_t status = sampling::ParallelTopPSamplingFromProb<float, int32_t>(
+       static_cast<float*>(probs->data), static_cast<float*>(uniform_samples->data),
+       static_cast<int32_t*>(sampled_token_ids->data), /*success=*/nullptr,
+       static_cast<int32_t*>(row_indices->data), static_cast<float*>(top_p->data), batch_size,
+       vocab_size, num_rounds, /*deterministic=*/true);
+   if (status != cudaSuccess) {
+     LOG(FATAL) << "FlashInfer ParallelTopPSamplingFromProb error " << cudaGetErrorString(status);
+   }
+ }
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_prefill_with_paged_kv_cache")
+     .set_body_typed(_FlashInferAttentionPrefillWithPagedKVCache);
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_prefill_with_paged_kv_cache_begin_forward")
+     .set_body_typed(_FlashInferAttentionPrefillWithPagedKVCachePlan);
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_decode_with_paged_kv_cache")
+     .set_body_typed(_FlashInferAttentionDecodeWithPagedKVCache);
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_decode_with_paged_kv_cache_begin_forward")
+     .set_body_typed(_FlashInferAttentionDecodeWithPagedKVCachePlan);
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_prefill_with_ragged_kv_cache")
+     .set_body_typed(_FlashInferAttentionPrefillWithRaggedKVCache);
+
+ TVM_REGISTER_GLOBAL("flashinfer.attention_kernel_prefill_with_ragged_kv_cache_begin_forward")
+     .set_body_typed(_FlashInferAttentionPrefillWithRaggedKVCachePlan);
+
+ TVM_REGISTER_GLOBAL("flashinfer.merge_state").set_body_typed(_FlashInferMergeState);
+
+ TVM_REGISTER_GLOBAL("flashinfer.merge_state_in_place").set_body_typed(_FlashInferMergeStateInPlace);
+
+ TVM_REGISTER_GLOBAL("flashinfer.batch_qk_apply_rotary_in_place")
+     .set_body_typed(_FlashInferBatchQKApplyRotaryInPlace);
+
+ TVM_REGISTER_GLOBAL("flashinfer.single_prefill")
+     .set_body_typed(_FlashInferSinglePrefillWithKVCache);
+
+ TVM_REGISTER_GLOBAL("flashinfer.single_decode").set_body_typed(_FlashInferSingleDecodeWithKVCache);
+
+ TVM_REGISTER_GLOBAL("flashinfer.sampling.parallel_sampling_from_prob")
+     .set_body_typed(_FlashInferParallelSamplingFromProb);
+
+ TVM_REGISTER_GLOBAL("flashinfer.sampling.parallel_top_p_sampling_from_prob")
+     .set_body_typed(_FlashInferParallelTopPSamplingFromProb);
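
Note: the `TVM_REGISTER_GLOBAL(...).set_body_typed(...)` calls above expose each wrapper as a named packed function in TVM's global registry. Below is a minimal sketch of how a caller might look one of them up and invoke it on DLTensor handles; it is illustrative only (the `sample_tokens` helper and its arguments are not part of this commit) and assumes the standard TVM C++ runtime API.

```cpp
// Sketch: invoking the registered FlashInfer sampling kernel via TVM's registry.
#include <dlpack/dlpack.h>
#include <tvm/runtime/logging.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

void sample_tokens(DLTensor* probs, DLTensor* uniform_samples, DLTensor* row_indices,
                   DLTensor* sampled_token_ids) {
  // Look up the packed function registered by tvm_wrapper.cu.
  const tvm::runtime::PackedFunc* f =
      tvm::runtime::Registry::Get("flashinfer.sampling.parallel_sampling_from_prob");
  ICHECK(f != nullptr) << "FlashInfer sampling kernel is not registered";
  // Argument order matches _FlashInferParallelSamplingFromProb above;
  // all tensors must already live on the same CUDA device.
  (*f)(probs, uniform_samples, row_indices, sampled_token_ids);
}
```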
sglang_repo/sgl-kernel/3rdparty/flashinfer/src/utils.h ADDED
@@ -0,0 +1,209 @@
+ /*
+  * Copyright (c) 2023 by FlashInfer team.
+  *
+  * Licensed under the Apache License, Version 2.0 (the "License");
+  * you may not use this file except in compliance with the License.
+  * You may obtain a copy of the License at
+  *
+  *   http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ #pragma once
+
+ #include <cuda_bf16.h>
+ #include <cuda_fp16.h>
+ #include <cuda_fp8.h>
+ #include <cuda_runtime.h>
+ #include <thrust/device_vector.h>
+ #include <thrust/execution_policy.h>
+ #include <thrust/host_vector.h>
+ #include <thrust/iterator/counting_iterator.h>
+ #include <thrust/random.h>
+ #include <thrust/transform.h>
+
+ #include <random>
+ #include <sstream>
+
+ #include "flashinfer/exception.h"
+ #include "generated/dispatch.inc"
+
+ #define _DISPATCH_SWITCH(var_name, cond, ...) \
+   switch (cond) { \
+     __VA_ARGS__ \
+     default: \
+       std::ostringstream oss; \
+       oss << __PRETTY_FUNCTION__ << " failed to dispatch " var_name " " << int(cond); \
+       FLASHINFER_ERROR(oss.str()); \
+   }
+
+ #define _DISPATCH_CASE(case_expr, case_var, ...) \
+   case case_expr: { \
+     constexpr auto case_var = case_expr; \
+     __VA_ARGS__ \
+     break; \
+   }
+
+ #define DISPATCH_group_size(expr, const_expr, ...) \
+   _DISPATCH_SWITCH("group_size", expr, _DISPATCH_CASES_group_size(const_expr, __VA_ARGS__))
+
+ #define DISPATCH_head_dim(expr, const_expr, ...) \
+   _DISPATCH_SWITCH("head_dim", expr, _DISPATCH_CASES_head_dim(const_expr, __VA_ARGS__))
+
+ #define DISPATCH_pos_encoding_mode(expr, const_expr, ...) \
+   _DISPATCH_SWITCH("positional encoding mode", expr, \
+                    _DISPATCH_CASES_pos_encoding_mode(const_expr, __VA_ARGS__))
+
+ #define DISPATCH_use_fp16_qk_reduction(expr, const_expr, ...) \
+   _DISPATCH_SWITCH("use_fp16_qk_reduction", expr, \
+                    _DISPATCH_CASES_use_fp16_qk_reduction(const_expr, __VA_ARGS__))
+
+ #define DISPATCH_mask_mode(expr, const_expr, ...) \
+   _DISPATCH_SWITCH("mask_mode", expr, _DISPATCH_CASES_mask_mode(const_expr, __VA_ARGS__))
+
+ namespace utils {
+
+ template <typename T>
+ void vec_normal_(std::vector<T>& vec, float mean = 0.f, float std = 1.f) {
+   std::random_device rd{};
+   std::mt19937 gen{rd()};
+   std::normal_distribution d{mean, std};
+   for (size_t i = 0; i < vec.size(); ++i) {
+     vec[i] = T(d(gen));
+   }
+ }
+
+ template <typename T>
+ void vec_uniform_(std::vector<T>& vec, float a = 0.f, float b = 1.f) {
+   std::random_device rd{};
+   std::mt19937 gen{rd()};
+   std::uniform_real_distribution d{a, b};
+   for (size_t i = 0; i < vec.size(); ++i) {
+     vec[i] = T(d(gen));
+   }
+ }
+
+ template <typename T>
+ void vec_zero_(std::vector<T>& vec) {
+   std::fill(vec.begin(), vec.end(), T(0));
+ }
+
+ template <typename T>
+ void vec_fill_(std::vector<T>& vec, T val) {
+   std::fill(vec.begin(), vec.end(), val);
+ }
+
+ template <typename T>
+ void vec_randint_(std::vector<T>& vec, int low, int high) {
+   std::random_device rd{};
+   std::mt19937 gen{rd()};
+   std::uniform_int_distribution d{low, high};
+   for (size_t i = 0; i < vec.size(); ++i) {
+     vec[i] = T(d(gen));
+   }
+ }
+
+ template <typename T>
+ size_t vec_bytes(const T& vec) {
+   return vec.size() * sizeof(typename T::value_type);
+ }
+
+ template <typename T>
+ bool isclose(T a, T b, float rtol = 1e-5, float atol = 1e-8) {
+   return fabs(a - b) <= (atol + rtol * fabs(b));
+ }
+
+ template <typename T>
+ std::tuple<std::vector<std::vector<T>>, std::vector<std::vector<int32_t>>>
+ create_shared_prefix_testcase_data(size_t batch_size, size_t shared_prefix_length,
+                                    size_t unique_kv_length, size_t qo_append_length,
+                                    size_t num_qo_heads, size_t num_kv_heads, size_t head_dim,
+                                    size_t page_size) {
+   uint32_t num_pages = ((shared_prefix_length + unique_kv_length * batch_size) / page_size);
+   std::vector<T> shared_k_h(shared_prefix_length * num_kv_heads * head_dim);
+   std::vector<T> shared_v_h(shared_prefix_length * num_kv_heads * head_dim);
+   std::vector<T> q_h((batch_size * qo_append_length) * num_qo_heads * head_dim);
+
+   utils::vec_normal_(shared_k_h);
+   utils::vec_normal_(shared_v_h);
+   utils::vec_normal_(q_h);
+
+   std::vector<int32_t> qo_indptr{0};
+   std::vector<int32_t> kv_indptr_combined_h{0};
+   std::vector<int32_t> kv_indptr_unique_h{0};
+   std::vector<int32_t> kv_last_page_len_combined_h;
+   std::vector<int32_t> kv_last_page_len_unique_h;
+
+   for (uint32_t request_id = 0; request_id < batch_size; ++request_id) {
+     qo_indptr.push_back(qo_indptr.back() + qo_append_length);
+     kv_indptr_combined_h.push_back(kv_indptr_combined_h.back() +
+                                    (shared_prefix_length + unique_kv_length) / page_size);
+     kv_indptr_unique_h.push_back(kv_indptr_unique_h.back() + unique_kv_length / page_size);
+     kv_last_page_len_combined_h.push_back(page_size);
+     kv_last_page_len_unique_h.push_back(page_size);
+   }
+
+   std::vector<int32_t> kv_indices_combined_h(kv_indptr_combined_h.back());
+   std::vector<int32_t> kv_indices_unique_h(kv_indptr_unique_h.back());
+
+   std::vector<T> k_data_h(num_pages * num_kv_heads * page_size * head_dim);
+   std::vector<T> v_data_h(num_pages * num_kv_heads * page_size * head_dim);
+   uint32_t page_id = 0;
+
+   for (; page_id < (shared_prefix_length / page_size); page_id++) {
+     for (uint32_t entry_idx = 0; entry_idx < page_size; entry_idx++) {
+       for (uint32_t head_idx = 0; head_idx < num_kv_heads; head_idx++) {
+         std::copy(shared_k_h.begin() +
+                       ((page_id * page_size + entry_idx) * num_kv_heads + head_idx) * head_dim,
+                   shared_k_h.begin() +
+                       ((page_id * page_size + entry_idx) * num_kv_heads + head_idx + 1) * head_dim,
+                   k_data_h.begin() +
+                       ((page_id * num_kv_heads + head_idx) * page_size + entry_idx) * head_dim);
+         std::copy(shared_v_h.begin() +
+                       ((page_id * page_size + entry_idx) * num_kv_heads + head_idx) * head_dim,
+                   shared_v_h.begin() +
+                       ((page_id * page_size + entry_idx) * num_kv_heads + head_idx + 1) * head_dim,
+                   v_data_h.begin() +
+                       ((page_id * num_kv_heads + head_idx) * page_size + entry_idx) * head_dim);
+       }
+     }
+     for (uint32_t request_id = 0; request_id < batch_size; ++request_id) {
+       kv_indices_combined_h[request_id * ((shared_prefix_length + unique_kv_length) / page_size) +
+                             page_id] = page_id;
+     }
+   }
+
+   for (uint32_t request_id = 0; request_id < batch_size; ++request_id) {
+     for (uint32_t page_iter = 0; page_iter < (unique_kv_length / page_size);
+          ++page_iter, ++page_id) {
+       for (uint32_t entry_idx = 0; entry_idx < page_size; entry_idx++) {
+         for (uint32_t head_idx = 0; head_idx < num_kv_heads; head_idx++) {
+           std::vector<T> k(head_dim), v(head_dim);
+           utils::vec_normal_(k);
+           utils::vec_normal_(v);
+           std::copy(k.begin(), k.end(),
+                     k_data_h.begin() +
+                         ((page_id * num_kv_heads + head_idx) * page_size + entry_idx) * head_dim);
+           std::copy(v.begin(), v.end(),
+                     v_data_h.begin() +
+                         ((page_id * num_kv_heads + head_idx) * page_size + entry_idx) * head_dim);
+         }
+       }
+       kv_indices_combined_h[request_id * ((shared_prefix_length + unique_kv_length) / page_size) +
+                             (shared_prefix_length / page_size) + page_iter] = page_id;
+       kv_indices_unique_h[request_id * (unique_kv_length / page_size) + page_iter] = page_id;
+     }
+   }
+   return std::make_tuple<std::vector<std::vector<T>>, std::vector<std::vector<int32_t>>>(
+       {std::move(q_h), std::move(shared_k_h), std::move(shared_v_h), std::move(k_data_h),
+        std::move(v_data_h)},
+       {std::move(qo_indptr), std::move(kv_indices_combined_h), std::move(kv_indices_unique_h),
+        std::move(kv_indptr_combined_h), std::move(kv_indptr_unique_h),
+        std::move(kv_last_page_len_combined_h), std::move(kv_last_page_len_unique_h)});
+ }
+
+ }  // namespace utils
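
For orientation, the helpers in this header are host-side test utilities. A minimal usage sketch follows; it is not part of the commit and assumes it is compiled inside this test tree (the header also pulls in CUDA, Thrust, and the generated `dispatch.inc`, so it needs nvcc and the generated headers on the include path).

```cpp
// Sketch: exercising the utils:: helpers from a test translation unit.
#include <cstdio>
#include <vector>

#include "utils.h"

int main() {
  std::vector<float> x(1024);
  utils::vec_normal_(x);  // fill with N(0, 1) samples
  std::printf("buffer bytes: %zu\n", utils::vec_bytes(x));
  std::printf("isclose: %d\n", int(utils::isclose(x[0], x[0] + 1e-7f)));
  return 0;
}
```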
sglang_repo/sgl-kernel/LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2023-2024 SGLang Team
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
sglang_repo/sgl-kernel/Makefile ADDED
@@ -0,0 +1,28 @@
+ .PHONY: tree ln submodule install build clean rebuild test format
+
+ tree:
+ 	@tree --prune -I "__pycache__|*.egg-info|*.so|build|3rdparty|dist"
+
+ submodule:
+ 	@git submodule update --init --recursive
+
+ ln: submodule
+ 	@rm -rf build && bear python3 setup.py build
+
+ install: submodule
+ 	@pip install -e .
+
+ build: submodule
+ 	@rm -rf dist/* || true && export MAX_JOBS=$(nproc) && python3 setup.py bdist_wheel && pip3 install dist/*whl --force-reinstall --no-deps
+
+ clean:
+ 	@rm -rf build dist *.egg-info
+
+ rebuild: clean submodule build
+ 	@echo "Succeed to rebuild"
+
+ test:
+ 	@find tests -name "test_*.py" | xargs -n 1 python3
+
+ format:
+ 	@find src tests -name '*.cc' -o -name '*.cu' -o -name '*.cuh' -o -name '*.h' -o -name '*.hpp' | xargs clang-format -i && find src tests -name '*.py' | xargs isort && find src tests -name '*.py' | xargs black
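
In practice these targets cover the usual workflow: `make submodule` initializes the vendored git submodules, `make install` performs an editable `pip install`, `make build` rebuilds the wheel with `MAX_JOBS` exported and force-reinstalls it, `make test` runs each `tests/test_*.py` file individually, and `make format` applies clang-format, isort, and black across `src` and `tests`.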