Add files using upload-large-folder tool
- -dFLT4oBgHgl3EQfCy73/content/tmp_files/2301.11977v1.pdf.txt +1236 -0
- -dFLT4oBgHgl3EQfCy73/content/tmp_files/load_file.txt +0 -0
- -tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss +3 -0
- .gitattributes +61 -0
- 19AzT4oBgHgl3EQf8_5O/content/tmp_files/2301.01912v1.pdf.txt +1268 -0
- 19AzT4oBgHgl3EQf8_5O/content/tmp_files/load_file.txt +0 -0
- 1tFIT4oBgHgl3EQf4CvP/vector_store/index.pkl +3 -0
- 2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf +3 -0
- 2NE1T4oBgHgl3EQfAAJE/vector_store/index.pkl +3 -0
- 2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/2301.02382v1.pdf.txt +1049 -0
- 2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/load_file.txt +0 -0
- 39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf +3 -0
- 39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss +3 -0
- 39E3T4oBgHgl3EQfQAlh/vector_store/index.pkl +3 -0
- 3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss +3 -0
- 4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf +3 -0
- 4tE2T4oBgHgl3EQfOQbV/vector_store/index.pkl +3 -0
- 5NE2T4oBgHgl3EQfOgaA/content/tmp_files/2301.03749v1.pdf.txt +2470 -0
- 5NE2T4oBgHgl3EQfOgaA/content/tmp_files/load_file.txt +0 -0
- 5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss +3 -0
- 6tFKT4oBgHgl3EQf_i4o/content/tmp_files/2301.11962v1.pdf.txt +1774 -0
- 6tFKT4oBgHgl3EQf_i4o/content/tmp_files/load_file.txt +0 -0
- 7NE4T4oBgHgl3EQfcgzB/vector_store/index.faiss +3 -0
- 8dE4T4oBgHgl3EQfdQww/vector_store/index.faiss +3 -0
- 9NE0T4oBgHgl3EQffwBQ/content/2301.02408v1.pdf +3 -0
- 9NE0T4oBgHgl3EQffwBQ/vector_store/index.faiss +3 -0
- 9NE0T4oBgHgl3EQffwBQ/vector_store/index.pkl +3 -0
- 9tAzT4oBgHgl3EQf-_7r/vector_store/index.faiss +3 -0
- ANFAT4oBgHgl3EQfrR6g/content/2301.08652v1.pdf +3 -0
- ANFAT4oBgHgl3EQfrR6g/vector_store/index.pkl +3 -0
- CdE1T4oBgHgl3EQfWAQw/content/tmp_files/2301.03108v1.pdf.txt +490 -0
- CdE1T4oBgHgl3EQfWAQw/content/tmp_files/load_file.txt +214 -0
- CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf +3 -0
- CtAzT4oBgHgl3EQfwf4q/vector_store/index.pkl +3 -0
- CtE4T4oBgHgl3EQf5w5z/vector_store/index.pkl +3 -0
- DNE4T4oBgHgl3EQfew0W/content/tmp_files/2301.05101v1.pdf.txt +0 -0
- DNE4T4oBgHgl3EQfew0W/content/tmp_files/load_file.txt +0 -0
- E9FLT4oBgHgl3EQfFi_K/content/tmp_files/2301.11988v1.pdf.txt +970 -0
- E9FLT4oBgHgl3EQfFi_K/content/tmp_files/load_file.txt +0 -0
- EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss +3 -0
- FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf +0 -0
- FNE0T4oBgHgl3EQfhAFe/content/tmp_files/2301.02425v1.pdf.txt +242 -0
- FNE0T4oBgHgl3EQfhAFe/content/tmp_files/load_file.txt +196 -0
- FNE2T4oBgHgl3EQfSwf4/content/tmp_files/2301.03797v1.pdf.txt +2048 -0
- FNE2T4oBgHgl3EQfSwf4/content/tmp_files/load_file.txt +0 -0
- GdFIT4oBgHgl3EQfWysJ/content/tmp_files/2301.11240v1.pdf.txt +1565 -0
- GdFIT4oBgHgl3EQfWysJ/content/tmp_files/load_file.txt +0 -0
- H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf +3 -0
- H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss +3 -0
- H9E3T4oBgHgl3EQfuQuc/vector_store/index.pkl +3 -0
-dFLT4oBgHgl3EQfCy73/content/tmp_files/2301.11977v1.pdf.txt
ADDED
@@ -0,0 +1,1236 @@
| 1 |
+
A Memory Efficient Deep Reinforcement Learning
|
| 2 |
+
Approach For Snake Game Autonomous Agents
|
| 3 |
+
Md. Rafat Rahman Tushar1
|
| 4 |
+
Department of Electrical and Computer Engineering
|
| 5 |
+
North South University
|
| 6 |
+
Dhaka, Bangladesh
|
| 7 | |
| 8 |
+
Shahnewaz Siddique2
|
| 9 |
+
Department of Electrical and Computer Engineering
|
| 10 |
+
North South University
|
| 11 |
+
Dhaka, Bangladesh
|
| 12 | |
| 13 |
+
Abstract—To perform well, Deep Reinforcement Learning
|
| 14 |
+
(DRL)
|
| 15 |
+
methods
|
| 16 |
+
require
|
| 17 |
+
significant
|
| 18 |
+
memory
|
| 19 |
+
resources
|
| 20 |
+
and
|
| 21 |
+
computational
|
| 22 |
+
time.
|
| 23 |
+
Also,
|
| 24 |
+
sometimes
|
| 25 |
+
these
|
| 26 |
+
systems
|
| 27 |
+
need
|
| 28 |
+
additional environment information to achieve a good reward.
|
| 29 |
+
However, it is more important for many applications and devices
|
| 30 |
+
to reduce memory usage and computational times than to achieve
|
| 31 |
+
the maximum reward. This paper presents a modified DRL
|
| 32 |
+
method that performs reasonably well with compressed imagery
|
| 33 |
+
data without requiring additional environment information and
|
| 34 |
+
also uses less memory and time. We have designed a lightweight
|
| 35 |
+
Convolutional Neural Network (CNN) with a variant of the
|
| 36 |
+
Q-network that efficiently takes preprocessed image data as
|
| 37 |
+
input and uses less memory. Furthermore, we use a simple
|
| 38 |
+
reward mechanism and small experience replay memory so as to
|
| 39 |
+
provide only the minimum necessary information. Our modified
|
| 40 |
+
DRL method enables our autonomous agent to play Snake, a
|
| 41 |
+
classical control game. The results show our model can achieve
|
| 42 |
+
similar performance as other DRL methods.
|
| 43 |
+
Index Terms—Deep Reinforcement Learning, Convolutional
|
| 44 |
+
Neural Network, Deep Q Learning, Hyperparameter Tuning,
|
| 45 |
+
Replay Size, Image Preprocessing
|
| 46 |
+
I. INTRODUCTION

Complex problems can be solved in real-world applications by carefully designing Deep Reinforcement Learning (DRL) models that take high dimensional input data and produce discrete or continuous outputs. It is challenging to build an agent using sensory data that is capable of controlling and acting in an environment. The environment is also complex and primarily unknown to the acting agent. The agent needs to learn the underlying distribution of the state and action spaces, and the distribution changes as the agent encounters new data from an environment. Previously, reinforcement learning algorithms [1]–[5] were presented with lower constraint problems to demonstrate the algorithms' effectiveness. However, these systems were not well generalized for high dimensional inputs; thus, they could not meet the requirements of practical applications.

Recently, DRL has had success in CNN-based vision problems [6]–[8]. These works have successfully implemented DRL methods that learn control from image pixels. Although the image-based DRL methods have enjoyed considerable success, they are memory intensive during training as well as deployment. Since they require a massive amount of memory, they are not suitable for implementation in mobile devices or mid-range autonomous robots for training and deployment.

All modern reinforcement learning algorithms use a replay buffer for sampling uncorrelated data for online training, mainly in off-policy algorithms. The experience replay buffer also improves data efficiency [9] during data sampling. Since the use of neural networks in various DRL algorithms is increasing, it is necessary to stabilize the neural network with uncorrelated data. That is why the experience replay buffer is a desirable property of various reinforcement learning algorithms. The first successful implementation of DRL in a high dimensional observation space, Deep Q-learning [6], used a replay buffer of size 10^6. After that, [8], [10]–[12], to name a few, have solved complex high dimensional problems but still use a replay buffer of the same size.

The experience replay buffer suffers from two types of issues: one is choosing the size of the replay buffer, and the second is the method of sampling data from the buffer. [13]–[15] consider the latter problem of how to best sample from the replay buffer, but the favorable size for the replay buffer remains unknown. Although [15] points out that the learning algorithm is sensitive to the size of the replay buffer, they have not come to a firm conclusion on the size of the buffer.

In this paper, we tackle the memory usage of DRL algorithms by implementing a modified approach to image preprocessing and replay buffer sizing. Although we want the agent to obtain a decent score, we are more concerned about memory usage. We choose a Deep Q-Network (DQN) [6] for our algorithm, with some variations. Our objective is to design a DRL model that can be implemented on mobile devices during training and deployment. To be deployed on mobile devices, memory consumption must be minimized, as traditional DRL models with visual inputs sometimes need half a terabyte of memory. We achieve low memory consumption by preprocessing the visual image data and tuning the replay buffer size together with other hyperparameters. Then, we evaluate our model in our simulation environment using the classical control game Snake.* The results show that our model can achieve similar performance as other DRL methods.

*GitHub implementation: https://github.com/rafattushar/rl-snake
II. RELATED WORK

The core idea of reinforcement learning is a sequential decision making process involving an agent that learns from experience and acts in uncertain environments. After the development of a formal framework for reinforcement learning, many algorithms have been introduced, such as [1]–[5].

Q-learning [1] is a model-free asynchronous dynamic programming algorithm of reinforcement learning. Q-learning proposes that by sampling all the actions in states and iterating the action-value functions repeatedly, convergence can be achieved. Q-learning works perfectly on limited state and action spaces while collapsing with high dimensional infinite state spaces. Then, [6] proposed the Deep Q-network algorithm, which demonstrates significant results with image data. Among other variations, they use a convolutional neural network and a replay buffer. Double Q-learning [16] is applied with DQN to overcome the overestimation of the action-value function and is named Deep Reinforcement Learning with Double Q-Learning (DDQN) [8]. DDQN proposes another neural network with the same structure as DQN that gets updated less frequently. Refined DQN [17] proposes another DRL method that involves a carefully designed reward mechanism and a dual experience replay structure. Refined DQN evaluates its approach by enabling the agent to play the snake game.

The experience replay buffer is a desirable property of modern DRL algorithms. It provides powerful, model-free, off-policy DRL algorithms with uncorrelated data and improves data efficiency [9] during data sampling. DQN [6] shows the power of the replay buffer in sampling data. DQN uses a replay buffer of size 10^6. After that, [8], [10]–[12], [17], among others, have presented their work with the same size and structure of replay buffer. Schaul et al. propose an efficient sampling strategy in their prioritized experience replay (PER) [13]. PER shows that instead of sampling data uniformly at random, the latest data gets the most priority; hence the latest data have a higher probability of being selected, and this selection method seems to improve results. [15] shows that a large experience replay buffer can hurt performance. They also propose that when sampling data to train DRL algorithms, the most recent data should be appended to the batch.
III. METHOD

Our objective is to reduce memory usage during training while achieving the best performance possible. The replay memory takes a considerable amount of memory, as described later. We achieve memory efficiency by shrinking the massive replay buffer requirement through image preprocessing and a smaller buffer size. The buffer size is carefully chosen so that the agent has the necessary information to train well and achieves a moderate score. We use a slight variation of the deep Q-learning algorithm for this purpose.
TABLE I
REWARD MECHANISM FOR SNAKE GAME

| Moves | Reward | Result |
|---|---|---|
| Eats an apple | +1 | Score increases |
| Hits the wall or itself | -1 | End of episode |
| Neither eats nor hits the wall or itself | -0.1 | Game continues |

TABLE II
MEMORY REQUIREMENT FOR DIFFERENT PIXEL DATA

| | RGB | Grayscale | Binary |
|---|---|---|---|
| Data type | float | float | int |
| Size (kB) | 165.375 | 55.125 | 6.890 |
| Memory saved w.r.t. RGB | 0% | 67% | 96% |
| Memory saved w.r.t. grayscale | - | 0% | 87.5% |
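As a sanity check on Table II (our own arithmetic, assuming 8-byte floats and 1-byte integers for a single 84 × 84 frame): RGB needs 84 × 84 × 3 × 8 B = 165.375 kB, grayscale needs 84 × 84 × 8 B = 55.125 kB, and binary needs 84 × 84 × 1 B ≈ 6.890 kB, which matches the saving percentages shown.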
Fig. 1. Visual image data before and after preprocessing: (a) before preprocessing; (b) after preprocessing.
A. Image Preprocessing

The agent gets the RGB values in a 3-D array format from the game's environment. We convert the RGB array into grayscale because doing so does not affect the performance [18] and it cuts the memory requirement to one third. We resize the grayscale data into 84 × 84 pixels. Finally, for further memory reduction, we convert this resized grayscale data into binary data (values of only 0 and 1). The memory requirement for storing the various image data types (scaled down between 0 and 1) is given in Table II. Table II shows that converting RGB into grayscale saves around 67% and converting RGB into binary saves around 96%. Also, the memory requirement reduces by around 87.5% when converting from grayscale into binary. The visual pixel data transformation with preprocessing is shown in Fig. 1. The preprocessing method is presented as a flowchart in Fig. 2.
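As an illustration, a minimal version of this pipeline could look like the sketch below; the use of OpenCV and the binarization threshold are our assumptions, not details given in the paper.

```python
import cv2
import numpy as np

def preprocess(rgb_frame: np.ndarray, threshold: int = 1) -> np.ndarray:
    """Reduce an RGB game frame to an 84 x 84 binary array (values 0 or 1)."""
    # RGB -> grayscale: one channel instead of three.
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2GRAY)
    # Downscale to the 84 x 84 input size used by the network.
    gray = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    # Grayscale -> binary: any non-background pixel becomes 1 (hypothetical threshold).
    return (gray >= threshold).astype(np.uint8)
```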
B. Game Selection and Their Environments

Our target applications involve less complex tasks. For this reason, we implemented the classical Snake game [19] with the 'pygame' module. The game screen is divided into a 12 × 12 grid. The resolution of the game is set to 252 × 252. The initial snake size is 3. The controller has four inputs to navigate. Table I shows the valid actions and the respective rewards for the snake game environment.

Fig. 2. Diagram of image preprocessing: game environment → grayscale → resize to 84 × 84 → pixel values 0 or 1.
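The reward mechanism of Table I can be stated compactly in code; this is a sketch under our own naming, not the paper's implementation.

```python
def snake_reward(ate_apple: bool, collided: bool) -> float:
    """Reward mechanism from Table I."""
    if ate_apple:
        return 1.0    # score increases
    if collided:      # hit the wall or the snake's own body
        return -1.0   # episode ends
    return -0.1       # ordinary move: small penalty to discourage stalling
```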
C. Reinforcement Learning Preliminary

Any reinforcement learning or sequential decision-making problem can be formulated as a Markov Decision Process (MDP). An MDP is a triplet M = (X, A, P_0), where X is a set of valid states, A is a set of valid actions, and P_0 is the transition probability kernel that maps X × A into the next-state transition probability. For a deterministic system, the state transition is defined as

$$s_{t+1} = f(s_t, a_t) \quad (1)$$

The reward is defined as

$$r_t = R(s_t, a_t) \quad (2)$$

The cumulative reward over a trajectory or episode is called the return, R(τ). The discounted return is given by

$$R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t \quad (3)$$
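For a finite episode, Eq. (3) is straightforward to compute directly; a small sketch (our own helper, with γ = 0.99 as in Table VI):

```python
def discounted_return(rewards, gamma: float = 0.99) -> float:
    """R(tau) = sum over t of gamma^t * r_t, per Eq. (3), for a finite episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```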
D. Deep Q-Learning

The goal of the RL agent is to maximize the expected return. Following a policy π, the expected return J(π) is defined as

$$J(\pi) = \mathbb{E}_{\tau \sim \pi}[R(\tau)] \quad (4)$$

The optimal action-value (or Q) function Q*(s, a) maximizes the expected return by taking any action at state s and acting optimally in the following states:

$$Q^*(s, a) = \max_{\pi} \, \mathbb{E}_{\tau \sim \pi}[R(\tau) \mid s_0 = s, a_0 = a] \quad (5)$$

For finding the optimal actions based on an optimal action-value function at time t, Q* must satisfy the Bellman equation:

$$Q^*(s, a) = \mathbb{E}_{s' \sim \rho}\left[ r(s, a) + \gamma \max_{a'} Q^*(s', a') \right] \quad (6)$$

The optimal action-value function gives rise to the optimal action a*(s), which can be described as

$$a^*(s) = \arg\max_{a} Q^*(s, a) \quad (7)$$

For training an optimal action-value function, sometimes a non-linear function approximator such as a neural network [6] is used. We used a convolutional neural network.
TABLE III
THE ARCHITECTURE OF THE NEURAL NETWORK

| Layer Name | Filter | Stride | Units | Activation | Zero Padding | Output |
|---|---|---|---|---|---|---|
| Input | - | - | - | - | - | 84×84×4 |
| Max Pool | 2×2 | 2 | - | - | Yes | 11×11×32 |
| Conv1 | 8×8 | 4 | 32 | ReLU | Yes | 21×21×32 |
| Conv2 | 4×4 | 2 | 64 | ReLU | Yes | 6×6×64 |
| Max Pool | 2×2 | 2 | - | - | Yes | 3×3×64 |
| Batch Norm | - | - | - | - | - | 3×3×64 |
| Conv3 | 3×3 | 2 | 128 | ReLU | Yes | 2×2×128 |
| Max Pool | 2×2 | 2 | - | - | Yes | 1×1×128 |
| Batch Norm | - | - | - | - | - | 1×1×128 |
| Flatten | - | - | - | - | - | 128 |
| FC | - | - | 512 | ReLU | - | 512 |
| FC | - | - | 512 | ReLU | - | 512 |
| Output | - | - | No. of actions | Linear | - | No. of actions |
TABLE IV
MEMORY REQUIREMENT FOR EXPERIENCE REPLAY

| | RGB | Grayscale | Binary |
|---|---|---|---|
| Memory usage (GB) | 1261.71 | 420.57 | 2.628 |
| Memory saved w.r.t. RGB | 0% | 67% | 99.7% |
| Memory saved w.r.t. grayscale | - | 0% | 99.4% |
E. Neural Network

The action-value function is iteratively updated to achieve the optimal action-value function. The neural network used to approximate the action-value function and updated at each iteration is called the Q-network. We train the Q-network, parameterized by θ, by minimizing a loss function L_i(θ_i) at the i-th iteration:

$$L_i(\theta_i) = \mathbb{E}_{s,a \sim \rho}\left[ (y_i - Q(s, a; \theta_i))^2 \right] \quad (8)$$

where $y_i = \mathbb{E}_{s' \sim \rho}\left[ r(s, a) + \gamma \max_{a'} Q'(s', a'; \theta'_k) \right]$ is the target for that update. Here Q′ is another Q-network with the same shape as the Q-network but with frozen parameters θ′_k, called the target Q-network, which is used for training stability. We train the Q-network by minimizing this loss function (8) w.r.t. the parameter θ_i. We use the Adam [20] optimizer for fast convergence. Our convolutional neural network structure is shown in Table III.

Fig. 3. Structure of the experience replay memory and flowchart: random actions or actions taken by the agent on the environment yield screen data and rewards, which are stored as experience tuples E_t = (s_t, a_t, r_{t+1}, s_{t+1}) in the replay memory.

Fig. 4. The deep reinforcement learning design structure of our model: preprocessed states flow into the online DQN, experiences are drawn as random mini-batches, the target DQN supplies y_t = R_{t+1} + γ max_a Q′(a) for the loss [y_t − Q(A_t)]², and the weights are synced every p steps.
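For concreteness, here is one way the Table III architecture might be written; the framework choice (PyTorch) and the specific padding values, which we picked to reproduce the table's output shapes, are our assumptions.

```python
import torch.nn as nn

def build_q_network(n_actions: int) -> nn.Sequential:
    """Q-network following Table III; comments show the output shape per layer."""
    return nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=2), nn.ReLU(),   # 21x21x32
        nn.MaxPool2d(2, stride=2, padding=1),                               # 11x11x32
        nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=2), nn.ReLU(),  # 6x6x64
        nn.MaxPool2d(2, stride=2),                                          # 3x3x64
        nn.BatchNorm2d(64),
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(), # 2x2x128
        nn.MaxPool2d(2, stride=2),                                          # 1x1x128
        nn.BatchNorm2d(128),
        nn.Flatten(),                                                       # 128
        nn.Linear(128, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, n_actions),                                          # linear output
    )
```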
F. Experience Replay Buffer

As our focus is to keep the memory requirement as low as possible during training, choosing the size of the replay buffer is one of the critical design decisions. The size of the replay buffer directly determines the memory requirement. We use a replay buffer of size 50,000, requiring only 5% of the memory used by [6], [8], [17], which use a replay buffer of size 1,000,000. [6], [8], [17] store grayscale data in the replay buffer; Table IV shows that we use 99.4% less memory compared to these works. The replay buffer stores data in FIFO (first in, first out) order so that the buffer contains only the latest data. We present the complete cycle of the experience replay buffer in Fig. 3. Fig. 4 illustrates our complete design diagram.
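A minimal sketch of such a FIFO buffer, assuming Python and uniform random sampling (the paper does not give its implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    """FIFO experience replay keeping only the most recent transitions."""

    def __init__(self, capacity: int = 50_000):
        # A bounded deque evicts the oldest experience automatically (FIFO).
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # States are 84 x 84 binary uint8 arrays, keeping per-item memory small.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int = 32):
        # Uniform random sampling decorrelates consecutive transitions.
        return random.sample(self.buffer, batch_size)
```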
IV. EXPERIMENTS

A. Training

For training our model, we take a random batch of 32 experiences from the replay buffer at each iteration. Our model has two convolutional neural networks (an online DQN and a target DQN) that share the same structure but are not synced automatically. The weights of the target network are frozen so that it cannot be trained. The state history from the mini-batch is fed into the online DQN, which outputs the Q-values Q(s_t, a_t).

$$\text{Loss} = [y_t - Q(s_t, a_t)]^2 \quad (9)$$

The y_t is calculated from the target Q-network. We pass the next-state values to the target Q-network and, for each next state in the batch, obtain the corresponding Q-value. That is the max_{a'} Q(s', a') term in the equation below:

$$y_t = R_{t+1} + \gamma \max_{a'} Q(s', a') \quad (10)$$

Here γ is the discount factor, one of the many hyperparameters in our model; we set γ to 0.99. R_{t+1} is the reward in each experience tuple, from which we get the y_t value. The loss is formed by substituting these values into (9). Then we use this loss to backpropagate through our online DQN with the Adam optimizer, which is used instead of classical stochastic gradient descent for faster convergence. The target DQN is synced with the online DQN every 10,000 steps. The values of the hyperparameters we chose are listed in Table VI.

Fig. 5. Results of our agent playing the Snake game during training: (a) score vs. episode; (b) reward vs. episode.

Fig. 6. Results of the baseline DQN model playing the Snake game during training: (a) score vs. episode; (b) reward vs. episode.
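Putting Eqs. (9) and (10) together, one training iteration could look like the following sketch; the terminal-state masking via (1 − done) and the framework are our additions, while the batch size of 32, γ = 0.99, and the clip-norm of 1.0 come from Table VI.

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99

def train_step(online_dqn, target_dqn, optimizer, batch):
    """One DQN update on a mini-batch of 32 transitions (Eqs. (9)-(10))."""
    states, actions, rewards, next_states, dones = batch
    with torch.no_grad():
        # y_t = R_{t+1} + gamma * max_a' Q'(s', a') from the frozen target net;
        # (1 - done) zeroes the bootstrap term at the end of an episode.
        next_q = target_dqn(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_q * (1.0 - dones)
    # Q(s_t, a_t) from the online network for the actions actually taken.
    q_taken = online_dqn(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q_taken, targets)  # Eq. (9)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(online_dqn.parameters(), 1.0)  # clip-norm 1.0
    optimizer.step()
    return loss.item()
```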
B. Results and Comparisons

We allow DRL agents to play 140,000 episodes of games to match the training results presented in [17]. We train one agent with our method and another with the DQN method presented in [6]; we refer to [6] as the baseline DQN model. Next, we compare our model with the baseline DQN model [6] and the refined DQN model [17]. The results of training the snake game with our model are shown in Fig. 5. Fig. 5(a) shows the game's score with our model during training. Fig. 5(b) shows that even though our reward mechanism is simpler than the refined DQN model's, the agent maximizes the cumulative reward optimally.

In Section III-F we showed that our model is more memory efficient than the baseline DQN model and the refined DQN model during training. In this section we show that despite low memory usage, our model can achieve similar if not better results than the baseline and refined DQN models.

Fig. 7. Comparison between our model and the baseline DQN model: (a) score comparison; (b) reward comparison.

Fig. 8. Comparison between the refined DQN model and our model: (a) score graph of refined DQN (graph taken from [17]); (b) score graph of our model.

Fig. 9. Testing evaluation by playing 50 random episodes: (a) refined DQN score (taken from [17]); (b) our model's score.
Fig. 6 displays the baseline DQN results during training on the snake game. In Fig. 7 we present the score and reward comparison between our model and the baseline DQN model. The blue line in Fig. 7(a) represents our model's score, and the purple line represents the score of the baseline DQN model. Over 140,000 training episodes, our model maintains a better episode score even though it requires fewer resources. Fig. 7(b) demonstrates that our model is capable of achieving higher cumulative rewards than the baseline DQN model.

We also compare the results between our model and the refined DQN model [17]. Refined DQN follows a dual experience replay memory architecture and a complex reward mechanism. However, our model surpasses their score. Since their game is similar to ours, we compare our results with the results provided in their paper. Fig. 8(a) shows the results presented in [17], and Fig. 8(b) shows our model's results during training.
By comparing Fig. 8(a) and Fig. 8(b), we can safely say that our model achieves better scores despite having a simple replay buffer, a simple reward mechanism, and less memory consumption.

Fig. 9(a) and Fig. 9(b) show the scores of 50 random episodes during testing of the refined DQN and our model, respectively. Table V summarizes the scores of the refined DQN and our model. We can see from Table V that their refined DQN average is 9.04 while ours is 9.53, and their refined DQN best score is 17 while ours is 20. So our model also performs better in both the training and testing phases.

TABLE V
PERFORMANCE COMPARISON OF DIFFERENT AGENTS

| Performance | Score |
|---|---|
| Human Average | 1.98 * |
| Baseline Average | 0.26 * |
| Refined DQN Average | 9.04 * |
| Our Average | 9.53 |
| Human Best | 15 * |
| Baseline Best | 2 * |
| Refined DQN Best | 17 * |
| Our Best | 20 |

* Data taken from [17]

TABLE VI
LIST OF HYPERPARAMETERS

| Hyperparameter | Value | Description |
|---|---|---|
| Discount Factor | 0.99 | γ value in the max Q-function |
| Initial Epsilon | 1.0 | Initial exploration epsilon |
| Final Epsilon | 0.01 | Final exploration epsilon |
| Batch Size | 32 | Mini-batch size from replay memory |
| Max Steps | 10,000 | Maximum number of steps allowed per episode |
| Learning Rate | 0.0025 | Learning rate for the Adam optimizer |
| Clip-Norm | 1.0 | Clipping value for the Adam optimizer |
| Random Frames | 50,000 | Number of random initial steps |
| Epsilon Greedy Frames | 500,000 | Number of frames over which epsilon decays from its initial to its final value |
| Experience Replay Memory | 50,000 | Capacity of the experience replay memory |
| Update of DQN | 4 | Number of steps between updates of the online DQN |
| Update Target DQN | 10,000 | Number of steps between syncs of the target and online DQN |
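The exploration schedule implied by Table VI (1.0 → 0.01 over 500,000 frames, after 50,000 random frames) can be written as a small helper; the linear decay shape and the point where decay starts are our reading of the table, not stated explicitly in the paper.

```python
EPS_START, EPS_FINAL = 1.0, 0.01
RANDOM_FRAMES = 50_000        # initial frames with purely random actions
EPS_GREEDY_FRAMES = 500_000   # frames over which epsilon anneals to its final value

def epsilon_at(frame: int) -> float:
    """Linearly anneal epsilon according to the Table VI schedule (assumed linear)."""
    if frame < RANDOM_FRAMES:
        return EPS_START
    progress = (frame - RANDOM_FRAMES) / EPS_GREEDY_FRAMES
    return max(EPS_FINAL, EPS_START - progress * (EPS_START - EPS_FINAL))
```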
V. CONCLUSION

In this paper, we have shown that better image preprocessing and a better replay buffer mechanism can reduce the memory consumption of DRL algorithms during training. We have also demonstrated that, using our method, the performance of the DRL agent on a lower constraint application is comparable, if not better. We combined our method with the DQN algorithm (with some modification) to observe the method's effectiveness. Our presented design requires less memory and a simple CNN. We established that our method's result is as good as other DRL approaches for the snake game autonomous agent.

ACKNOWLEDGMENT

This work was supported by North South University research grant CTRG-21-SEPS-18.

The authors would like to gratefully acknowledge that the computing resources used in this work were housed at the National University of Sciences and Technology (NUST), Pakistan. The cooperation was pursued under the South Asia Regional Development Center (RDC) framework of the Belt & Road Aerospace Innovation Alliance (BRAIA).
REFERENCES
[1] C. J. C. H. Watkins and P. Dayan, "Q-learning," Machine Learning, 1992, pp. 279–292.
[2] G. Tesauro, "Temporal difference learning and TD-Gammon," Commun. ACM, vol. 38, no. 3, pp. 58–68, Mar. 1995.
[3] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, "Policy gradient methods for reinforcement learning with function approximation," in Advances in Neural Information Processing Systems, S. Solla, T. Leen, and K. Müller, Eds., vol. 12. MIT Press, 1999.
[4] J. Peters, S. Vijayakumar, and S. Schaal, "Natural actor-critic," in Machine Learning: ECML 2005. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 280–291.
[5] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, "Deterministic policy gradient algorithms," in Proceedings of the 31st International Conference on Machine Learning - Volume 32, ser. ICML'14. JMLR.org, 2014, pp. I-387–I-395.
[6] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, "Playing Atari with deep reinforcement learning," Computing Research Repository, vol. abs/1312.5602, 2013.
[7] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, pp. 529–533, Feb. 2015.
[8] H. van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, ser. AAAI'16. AAAI Press, 2016, pp. 2094–2100.
[9] L.-J. Lin, "Self-improving reactive agents based on reinforcement learning, planning and teaching," Mach. Learn., vol. 8, no. 3–4, pp. 293–321, May 1992.
[10] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," Computing Research Repository, 2019.
[11] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, and S. Russell, "Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 4213–4220, Jul. 2019.
[12] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in ICML, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1856–1865.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, "Prioritized experience replay," 2015. [Online]. Available: https://arxiv.org/abs/1511.05952
[14] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba, "Hindsight experience replay," in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30. Curran Associates, Inc., 2017.
[15] S. Zhang and R. S. Sutton, "A deeper look at experience replay," Computing Research Repository, vol. abs/1712.01275, 2017.
[16] H. Hasselt, "Double Q-learning," in Advances in Neural Information Processing Systems, J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., vol. 23. Curran Associates, Inc., 2010.
[17] Z. Wei, D. Wang, M. Zhang, A.-H. Tan, C. Miao, and Y. Zhou, "Autonomous agents in snake game via deep reinforcement learning," in 2018 IEEE International Conference on Agents (ICA), 2018, pp. 20–25.
[18] T. D. Nguyen, K. Mori, and R. Thawonmas, "Image colorization using a deep convolutional neural network," Computing Research Repository, vol. abs/1604.07904, 2016.
[19] A. Punyawee, C. Panumate, and H. Iida, "Finding comfortable settings of snake game using game refinement measurement," in Advances in Computer Science and Ubiquitous Computing. Singapore: Springer Singapore, 2017, pp. 66–73.
[20] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015.
-dFLT4oBgHgl3EQfCy73/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2e052a07b2202f6af43331e81e092230447f957a77064d6fa520cf89fb631dc
size 6160429
.gitattributes
CHANGED
@@ -4181,3 +4181,64 @@ DNE2T4oBgHgl3EQfoQhP/content/2301.04016v1.pdf filter=lfs diff=lfs merge=lfs -text
 btAzT4oBgHgl3EQfLfs5/content/2301.01114v1.pdf filter=lfs diff=lfs merge=lfs -text
 gNE0T4oBgHgl3EQfpAFc/content/2301.02533v1.pdf filter=lfs diff=lfs merge=lfs -text
 odE1T4oBgHgl3EQf1wVk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+iNE2T4oBgHgl3EQfHwaf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+TtAzT4oBgHgl3EQfXvxz/content/2301.01323v1.pdf filter=lfs diff=lfs merge=lfs -text
+7NE4T4oBgHgl3EQfcgzB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+EtE1T4oBgHgl3EQfEgOL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ndE2T4oBgHgl3EQfewdN/content/2301.03919v1.pdf filter=lfs diff=lfs merge=lfs -text
+yNFST4oBgHgl3EQfTDh5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+oNE1T4oBgHgl3EQfOwM_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+-tAzT4oBgHgl3EQfvf0n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+gtE1T4oBgHgl3EQfzAWY/content/2301.03440v1.pdf filter=lfs diff=lfs merge=lfs -text
+SNFJT4oBgHgl3EQfLSwz/content/2301.11468v1.pdf filter=lfs diff=lfs merge=lfs -text
+mtE1T4oBgHgl3EQf1AXN/content/2301.03464v1.pdf filter=lfs diff=lfs merge=lfs -text
+htE0T4oBgHgl3EQfXwCw/content/2301.02298v1.pdf filter=lfs diff=lfs merge=lfs -text
+TtAzT4oBgHgl3EQfXvxz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+otAyT4oBgHgl3EQfzPl9/content/2301.00698v1.pdf filter=lfs diff=lfs merge=lfs -text
+gNE0T4oBgHgl3EQfpAFc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ctE3T4oBgHgl3EQfeAqc/content/2301.04540v1.pdf filter=lfs diff=lfs merge=lfs -text
+9tAzT4oBgHgl3EQf-_7r/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+gtE1T4oBgHgl3EQfzAWY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QtE1T4oBgHgl3EQfHQOA/content/2301.02924v1.pdf filter=lfs diff=lfs merge=lfs -text
+P9E5T4oBgHgl3EQfZA8C/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+lNE1T4oBgHgl3EQfNwNu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ttE_T4oBgHgl3EQf9hxa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+k9AzT4oBgHgl3EQfNfvd/content/2301.01151v1.pdf filter=lfs diff=lfs merge=lfs -text
+QtE1T4oBgHgl3EQfHQOA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+P9E5T4oBgHgl3EQfZA8C/content/2301.05577v1.pdf filter=lfs diff=lfs merge=lfs -text
+8dE4T4oBgHgl3EQfdQww/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9E0T4oBgHgl3EQfYQAP/content/2301.02303v1.pdf filter=lfs diff=lfs merge=lfs -text
+eNE0T4oBgHgl3EQfWwDh/content/2301.02284v1.pdf filter=lfs diff=lfs merge=lfs -text
+9NE0T4oBgHgl3EQffwBQ/content/2301.02408v1.pdf filter=lfs diff=lfs merge=lfs -text
+9NE0T4oBgHgl3EQffwBQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+JNE2T4oBgHgl3EQf_wld/content/2301.04251v1.pdf filter=lfs diff=lfs merge=lfs -text
+htE0T4oBgHgl3EQfXwCw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9AzT4oBgHgl3EQfM_vp/content/2301.01143v1.pdf filter=lfs diff=lfs merge=lfs -text
+UtFJT4oBgHgl3EQfNixZ/content/2301.11478v1.pdf filter=lfs diff=lfs merge=lfs -text
+39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf filter=lfs diff=lfs merge=lfs -text
+mtE1T4oBgHgl3EQf1AXN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ItFIT4oBgHgl3EQfYivh/content/2301.11249v1.pdf filter=lfs diff=lfs merge=lfs -text
+otAyT4oBgHgl3EQfzPl9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+SNFJT4oBgHgl3EQfLSwz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf filter=lfs diff=lfs merge=lfs -text
+H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf filter=lfs diff=lfs merge=lfs -text
+TdE3T4oBgHgl3EQfzguF/content/2301.04729v1.pdf filter=lfs diff=lfs merge=lfs -text
+UtFJT4oBgHgl3EQfNixZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+RdFQT4oBgHgl3EQfajZo/content/2301.13320v1.pdf filter=lfs diff=lfs merge=lfs -text
+yNE3T4oBgHgl3EQf_wth/content/2301.04837v1.pdf filter=lfs diff=lfs merge=lfs -text
+YdFPT4oBgHgl3EQftTXv/content/2301.13152v1.pdf filter=lfs diff=lfs merge=lfs -text
+ltFLT4oBgHgl3EQfeC95/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9E0T4oBgHgl3EQfYQAP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf filter=lfs diff=lfs merge=lfs -text
+W9E2T4oBgHgl3EQfYQe0/content/2301.03853v1.pdf filter=lfs diff=lfs merge=lfs -text
+2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf filter=lfs diff=lfs merge=lfs -text
+39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_NAyT4oBgHgl3EQfRfYM/content/2301.00065v1.pdf filter=lfs diff=lfs merge=lfs -text
+yNE3T4oBgHgl3EQf_wth/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ANFAT4oBgHgl3EQfrR6g/content/2301.08652v1.pdf filter=lfs diff=lfs merge=lfs -text
+eNE0T4oBgHgl3EQfWwDh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+stE5T4oBgHgl3EQfmQ9N/content/2301.05677v1.pdf filter=lfs diff=lfs merge=lfs -text
+TdE3T4oBgHgl3EQfzguF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
19AzT4oBgHgl3EQf8_5O/content/tmp_files/2301.01912v1.pdf.txt
ADDED
@@ -0,0 +1,1268 @@
Observation of room temperature anomalous Hall effect in graphene-WSe2 heterostructures

Priya Tiwari1†, Divya Sahani1†, Atasi Chakraborty2, Kamal Das2, Kenji Watanabe3, Takashi Taniguchi4, Amit Agarwal2*, and Aveek Bid1*

1 Department of Physics, Indian Institute of Science, Bangalore 560012, India
2 Department of Physics, Indian Institute of Technology Kanpur, Kanpur-208016, India
3 Research Center for Functional Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
4 International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
† These authors contributed equally.
E-mail: [email protected],[email protected]

arXiv:2301.01912v1 [cond-mat.mes-hall] 5 Jan 2023

Abstract
Proximity-induced spin–orbit coupling in graphene offers an exciting platform to probe spin-based effects in chiral Dirac fermionic systems. These systems are believed to be intrinsically time-reversal symmetric, which should ensure that the charge Hall response vanishes without a magnetic field. In contrast to this expectation, we report the first observation of the anomalous Hall effect (AHE) in single-layer graphene/single-layer WSe2 heterostructures that persists up to room temperature. The magnitude and the sign of the AHE can be tuned using an external perpendicular electric field. Our joint experimental and theoretical study establishes that the observed anomalous Hall signal arises from the combined effect of strain and spin-orbit coupling in graphene, which induces time-reversal symmetry breaking and manifests as a valley asymmetry. Our observation broadens the prospects of realizing high-temperature anomalous Hall effects in a completely new system, namely graphene-transition metal dichalcogenide-based heterostructures.

Introduction
Topological and band geometric effects in two-dimensional systems have attracted significant attention due to their fascinating physics and potential applications in spintronics and novel electronic devices [1-5]. Graphene-based heterostructures offer one such exciting platform for studying band geometric effects. The coupling to the charge, spin, and valley degrees of freedom in graphene gives rise to, among other things, a multitude of Hall effects, such as the spin Hall [6-9] and the valley Hall effects [10-15]. A possible common origin of these effects is the emergence of a non-trivial Berry curvature on breaking the inversion symmetry, which induces opposite anomalous velocities in the two valleys of graphene [16-18]. Note that in the absence of exchange interactions, time-reversal symmetry (TRS) forces the Berry curvatures at the K and K′ valleys to be equal and opposite, $\Omega_z(K) = -\Omega_z(K')$, causing signatures of the anomalous Hall effect (AHE) in the charge sector to vanish [19].

Several other unconventional Hall effects have been predicted and explored in graphene. Some prominent examples include the nonlinear anomalous Hall effect [20-23], the layer-contrasted Hall effect [3,24], and the linear Hall effect in corrugated systems [25]. The study in corrugated systems is particularly fascinating, as it demonstrates the appearance of a linear Hall response even under time-reversal symmetric conditions for systems with tilted bands in a reduced-symmetry scenario. More recently, AHE has been observed in graphene-based moiré heterostructures at half- or quarter-filling of the bands, owing to the spontaneously broken time-reversal symmetry and magnetization arising from the enhancement of the exchange interactions by the large density of states of the flat bands [26-33]. Several studies have reported extrinsic AHE in graphene, where suitable dopants or a magnetic substrate induce an exchange interaction (see, for example, [15,34,35]). However, despite being a testbed for band geometric effects, the observation of intrinsic AHE in graphene-based non-magnetic heterostructures remains rare.

In this letter, we report the observation of a large linear AHE originating from lifting the valley degeneracy in high-mobility heterostructures of single-layer graphene (SLG) with proximity-induced spin-orbit coupling (SOC) from single-layer WSe2. We find that the dependence of the transverse resistance at zero magnetic field, Rxy(B = 0), on the charge carrier density mimics the finite-B-field classical Hall signal in graphene and is observed up to room temperature.

Single-layer WSe2 used as a substrate influences the graphene bands in two significant ways. The first of these is well studied: graphene on WSe2 possesses spin-split bands owing to the Ising-like SOC, which gives rise to the spin Hall effect [36-38]. The second effect, equally vital for our purposes but ill-explored to date, is the appearance of a substantial lateral strain in the graphene layer. We propose that the combined effect of this proximity-induced SOC and lattice-induced strain lifts the valley degeneracy in graphene, leading to the appearance of the AHE signal near the Dirac point. We establish that the AHE is zero in the absence of the WSe2 layer. Note that previous studies on the SLG-WSe2 heterostructure (or graphene on transition metal dichalcogenides in general) focused primarily on the spin aspects of the transport [36,37,39-41], where a non-local signal is measured as a signature of the spin Hall effect and weak (anti-)localization measurements were used to quantify the spin-orbit coupling strength [38,42-47]. Interestingly, these studies did not probe the finite Hall effect without a magnetic field. This makes our observation of AHE in this system unique.

Results

Device characteristics
Heterostructures of SLG and single-layer WSe2, encapsulated by crystalline hexagonal boron nitride (hBN), were fabricated using a dry transfer technique [48,49]. One-dimensional electrical contacts were formed by electron beam lithography, followed by etching (using a mixture of CHF3 and O2) and deposition of 5 nm/60 nm Cr/Au contacts and a top-gate electrode (see Section S3 of the Supplementary Information for details). A schematic of the device structure is shown in Fig. 1(a), and an optical image of the device is shown in Fig. 1(b). The dual-gated architecture of the devices allows independent control of the charge-carrier density n and the vertical displacement field D: $n = (C_{tg}V_{tg} + C_{bg}V_{bg})/e - n_0$ and $D = (C_{tg}V_{tg} - C_{bg}V_{bg})/2\epsilon_0 - D_0$. Here $C_{bg}$ ($C_{tg}$) is the capacitance per unit area of the back-gate (top-gate), $V_{bg}$ ($V_{tg}$) is the back-gate (top-gate) bias, and $n_0$ and $D_0$ are the residual charge carrier density and residual vertical displacement field induced by impurities in the device channel.
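For readers converting gate voltages into the (n, D) coordinates used throughout, the sketch below implements the two relations above. It is a minimal illustration, not the authors' analysis code; the capacitance values are hypothetical placeholders, since the actual device capacitances are not quoted in the text.

```python
# Minimal sketch of the dual-gate conversion described above (not the authors' code).
# The capacitances are hypothetical placeholders; real values depend on the hBN
# thicknesses of a given device.
E0 = 8.854e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C

C_TG = 1.2e-3         # top-gate capacitance per unit area, F/m^2 (assumed)
C_BG = 0.4e-3         # back-gate capacitance per unit area, F/m^2 (assumed)
N0, D0 = 0.0, 0.0     # residual density and displacement field (assumed zero)

def carrier_density(v_tg: float, v_bg: float) -> float:
    """Carrier density n = (C_tg*V_tg + C_bg*V_bg)/e - n0, in m^-2."""
    return (C_TG * v_tg + C_BG * v_bg) / E_CHARGE - N0

def displacement_field(v_tg: float, v_bg: float) -> float:
    """Displacement field D = (C_tg*V_tg - C_bg*V_bg)/(2*eps0) - D0, in V/m."""
    return (C_TG * v_tg - C_BG * v_bg) / (2 * E0) - D0

print(carrier_density(1.0, -40.0), displacement_field(1.0, -40.0))
```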
Electrical transport measurements were performed at 10 nA source-drain current using low-frequency lock-in detection techniques. All data were obtained at 20 mK unless specified otherwise. The measurements were performed on multiple devices, and the results were similar. In the main manuscript, we present the data from a single device, SW1. The data from another device, SW2, are shown in the Supplementary Information.

A map of the measured longitudinal conductance Gxx as a function of charge carrier density n and perpendicular magnetic field B is shown in Fig. 1(c). The appearance of broken-symmetry quantum Hall states at low B-fields implies a complete lifting of the spin and valley degeneracies in the SLG bands. The splitting of the spin-degenerate bands in SLG (shown schematically in Fig. 1(f)) is also evident from the beating pattern seen in the Shubnikov-de Haas oscillations [Fig. 1(d)] and the double periodicity in the corresponding Fourier spectrum [Fig. 1(e)]. Fig. 1(g) is a representation of the lifting of the valley degeneracy; the valley-splitting energy scale is marked as ∆vs. The lifting of the spin and valley degeneracies in the band dispersion (along with the high field-effect mobility µ ∼ 140,000 cm²V⁻¹s⁻¹ of the device) shows that the graphene-WSe2 interface is atomically clean, with significant interfacial coupling and minimal random potential fluctuations.

Room temperature anomalous Hall effect at B = 0 T
In Fig. 2(a), we present the data for the longitudinal resistance, Rxx (left axis, red line), and the transverse resistance, Rxy (right axis, blue line), measured at B = 0 T. We observe a finite Rxy signal in a narrow range of charge carrier densities ∆n = ±10¹⁵ m⁻² centered about the charge neutrality point, a feature conspicuously absent in hBN/graphene/hBN heterostructures. The Rxy features an evident change in sign about the charge neutrality point: it is positive for n < 0 (hole band) and negative for n > 0 (electron band). The current independence of Rxy establishes it to be a linear anomalous Hall effect (see Fig. 2(c) for the data at two representative values of current, 30 nA and 120 nA). The finite Rxy(B = 0) survives at least to room temperature with diminished amplitude, as shown in Figs. 2(b) and (d). This observation of a room temperature B = 0 anomalous Hall effect in hBN/graphene/WSe2/hBN heterostructures is the central result of this letter.

We find the nonlinear anomalous Hall resistance (quantified by the second-harmonic $R^{2\omega}_{xy}$ signal) to be negligibly small for our device (Fig. S5 of the Supplementary Information). To establish that the absence of the second-harmonic signal is not an experimental artifact, we present in the same figure data from similar measurements on hBN/graphene moiré devices, where a small but finite nonlinear signal does show up in the measured $R^{2\omega}_{xy}$ near the primary Dirac point, as per previous reports [50]. Note also that the data for Rxy(B = 0) were reproduced in cryostats without a superconducting magnet, ruling out the remnant field of a magnet as the origin of the AHE.

We attribute the observed zero-field anomalous Hall effect (AHE) to an effective time-reversal symmetry breaking of the system captured by valley splitting. In the presence of time-reversal symmetry, the anomalous Hall conductivity, defined as
$$\sigma_{xy} = -\frac{e^2}{\hbar}\int \frac{d\mathbf{k}}{(2\pi)^2}\,\Omega_z\, f(\mathbf{k})\,,$$
vanishes. Here $f(\mathbf{k})$ is the Fermi distribution function. The vanishing of the AHE can be understood by recalling that, since $\Omega_z(K) = -\Omega_z(K')$ in the presence of time-reversal symmetry, the contributions of the two valleys to the AHE are equal and opposite, making the total AHE zero. However, on breaking the valley degeneracy, the valleys have different fillings, as shown in Fig. 2(e). In this case, the resulting total anomalous Hall response is finite. We calculate this non-zero AHE explicitly for the graphene-WSe2 heterostructure (see Supplementary Information for the details of the calculation), and the theoretical results for the Hall conductivity (which has the opposite sign to the Hall resistivity) are shown in Fig. 2(f). Our calculations capture the existence of the zero-field AHE in the valley-split graphene-WSe2 device, along with the sign reversal in the AHE on going from the hole (valence) band to the electron (conduction) band. We emphasize that here we aim for a qualitative match with the experimental data, as the microscopic origin of the valley splitting (and hence the magnitude of the split) is not evident.
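To make the valley-cancellation argument concrete, consider a minimal gapped Dirac model per valley, $H_\xi = \hbar v(\xi k_x s_x + k_y s_y) + \Delta s_z$. The expressions below are the standard textbook results for this toy model (a simplification of the full heterostructure Hamiltonian given in the Supplementary Information), not a derivation specific to this device:
$$\Omega_z^{\xi}(\mathbf{k}) = \mp\,\xi\,\frac{\hbar^2 v^2 \Delta}{2\left(\Delta^2 + \hbar^2 v^2 k^2\right)^{3/2}}\,,$$
with the upper (lower) sign for the conduction (valence) band and $\xi = \pm 1$ for the K (K′) valley. At equal valley filling, the two valley integrals in $\sigma_{xy}$ cancel exactly. A valley splitting $\Delta_{vs}$ shifts the two Fermi seas, $f^{K}(\mathbf{k}) \neq f^{K'}(\mathbf{k})$, and using $\Omega_z^{K'} = -\Omega_z^{K}$ the residual response can be written as the uncompensated piece
$$\sigma_{xy} = -\frac{e^2}{\hbar}\int \frac{d\mathbf{k}}{(2\pi)^2}\,\Omega_z^{K}\left[f^{K}(\mathbf{k}) - f^{K'}(\mathbf{k})\right] \neq 0\,.$$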
The valley polarization can arise from different physical mechanisms, such as enhanced impurity-induced inter-valley scattering, selective exchange coupling of the two valleys, or non-periodic lattice deformations [51-54]. However, we do not find evidence of valley splitting or a finite AHE in hBN/graphene/hBN devices without the intervening WSe2 layer. Thus, it is evident that the valley-specific asymmetry is induced by the WSe2-graphene combination. The lattice constant of graphene is ∼2.46 Å, while that of WSe2 is ∼3.27 Å. The large lattice mismatch generates a significant strain across the graphene flake as the heterostructure relaxes to its stable ground state. From Raman spectroscopy, we estimate the magnitude of the strain on the SLG layer in our hBN/SLG/WSe2/hBN heterostructure to be ≈ 0.15%-0.20% (see Section S6 of the Supplementary Information). This combination of strain and spin-orbit coupling feasibly lifts the valley degeneracy. While the microscopic origin of the valley splitting is not completely clear, we model it by shifting the two valleys in energy, as indicated in Fig. 1(f).

Hall response with vertical displacement and magnetic field
Having demonstrated the AHE, we now focus on the dependence of the AHE on a perpendicular displacement field D (Fig. 3). It is illuminating to map the zero-B-field transverse resistance Rxy(B = 0) data in the n-D plane (Fig. 3(a)). The plot shows Rxy(B = 0) to be finite only at the band edges, consistent with the idea of the Berry curvature hot spots lying in the vicinity of the band edges. This can be seen clearly in the line plots of Rxy(B = 0) for different values of D shown in Fig. 3(b). Note that the plots are vertically offset by 200 Ω for clarity. The measured Rxy(B = 0) has an intriguing D dependence: it changes its sign as the direction of D flips [Fig. 3(a-b)]. To understand this, we analyze the dependence of the Berry curvature near the band edges on the direction of D. Our theoretical calculations show that as the polarity of D changes, the Berry curvature near the band edges changes sign. Consequently, the sign of the anomalous Hall voltage (determined by the sign of the Berry curvature) in the SLG/WSe2 heterostructure flips. This is reminiscent of the change in the sign of the gap in bilayer graphene on flipping the direction of D, which changes the sign of the Berry curvature.

Measurements in a finite magnetic field B applied perpendicular to the device interface (see Section S5 of the Supplementary Information) reveal the interplay between the classical Hall effect and the B = 0 AHE. The data smoothly cross over from the anomalous Hall phase at B = 0 to the conventional Hall phase at finite B-field, with an anti-crossing feature. This feature resembles the planar Hall effect in corrugated bilayer graphene [25]. A non-zero intercept on the B-axis of the plot of Rxy versus B [shown for a fixed n in Fig. 3(c)] captures the AHE. We note that Rxy is non-hysteretic in the presence of a small non-quantizing magnetic field (see Section S7 of the Supplementary Information), ruling out emergent ferromagnetism in the system.

In Fig. 4(a), we present a plot of Rxx in the n-D plane measured at B = 0. We observe that with increasing D, the resistance peak at the charge neutrality point splits into two maxima. This feature can be better appreciated from Fig. 4(b), where we show individual plots of Rxx(B = 0) versus n at several representative values of D. At higher values of |D|, we find two distinct peaks in Rxx separated by a shallow valley. Such a displacement-field-dependent dispersion of the bands near the Dirac point is not captured by the existing models for graphene/WSe2 heterostructures [42,55-61]. To remedy this, we construct a new model Hamiltonian for the graphene/WSe2 system, retaining both the WSe2 and the graphene Hamiltonian blocks, which allows us to include the impact of a vertical displacement field systematically (see Sections S1 and S2 of the Supplementary Information for details). Fig. 4(c) is a plot of the theoretically calculated σxx as a function of the chemical potential; the panels show the splitting of the conductivity minimum into two asymmetric conductivity minima at finite D. Our model thus reproduces the prominent features of σxx both at zero displacement field [55,57] and at finite D, along with the observed AHE.

Discussion
To summarize, we report the first observation of a room temperature anomalous Hall effect in heterostructures of graphene/WSe2. Primarily known for their promising spintronic aspects, the charge Hall response of such heterostructures was expected to be relatively mundane. Contrary to this, we show that the dual effect of spin-orbit coupling and strain in the system gives rise to time-reversal symmetry breaking through valley splitting. Combined with a finite Berry curvature, this results in a finite anomalous Hall effect in the system. The anomalous Hall response persists at least to room temperature and features a unique perpendicular electric field tunability. Our work establishes the graphene-WSe2 heterostructure as an excellent platform for further exploration of the band geometry-induced interplay of charge, spin, and valley responses in two-dimensional systems.

AUTHOR INFORMATION
Author Contributions
A.B., P.T., and D.S. conceptualized the study, performed the measurements, and analyzed the data. A.A., A.C., and K.D. performed the theoretical analysis. K.W. and T.T. grew the hBN single crystals. All the authors contributed to preparing the manuscript.

Notes
The authors declare no competing financial interest.

Acknowledgement
A.B. acknowledges funding from the DST FIST program, DST fellowship (DST/SJF/PSA01/2016-17), and US Army DEVCOM and ITC IPAC (project: FA520922P0166/2232). A.C. acknowledges the Indian Institute of Technology, Kanpur, and the Science and Engineering Research Board (SERB) National Postdoctoral Fellowship (PDF/2021/000346), India, for financial support. A.A. acknowledges the Science and Engineering Research Board for Project No. MTR/2019/001520, and the Department of Science and Technology for Project No. DST/NM/TUE/QM-6/2019(G)-IIT Kanpur of the Government of India, for funding. K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354, and 21H05233).

Supporting Information Available
The Supporting Information contains detailed discussions of (a) the model Hamiltonian of the graphene/WSe2 heterostructure, (b) the anomalous Hall effect and Drude conductivity, (c) data from other devices, and (d) device fabrication and characterization details.

[Figure 1 appears here; only the caption survives text extraction.]
Figure 1: Device characteristics and band dispersion: (a) Schematic of the graphene/WSe2 layers encapsulated in hBN, illustrating the sequence of crystal stacking. (b) Optical image of the device. (c) Map of the longitudinal conductance Gxx(B) with varying carrier density n and perpendicular magnetic field B at T ∼ 20 mK. The thicker dashed lines correspond to the signature plateaus of single-layer graphene. Thinner lines mark the broken-symmetry phases, indicating complete lifting of the spin and valley degeneracies at low B. (d) SdH oscillations versus 1/B at Vbg = −40 V. (e) Fourier spectrum of the SdH oscillations; two peaks are distinctly visible, establishing the presence of two Fermi surfaces. (f) Schematic of the band dispersion of the K valley of monolayer graphene (left panel) and of graphene on WSe2 (right panel). The WSe2 layer essentially lifts the spin degeneracy of the low-lying energy bands and opens up a gap at the Fermi energy. (g) The impact of valley splitting (denoted by ∆vs) on the band structure of the K (left) and the K′ (right) valleys of the graphene/WSe2 heterostructure. The color map of the lines indicates the Berry curvature, which is concentrated near the band edges.

[Figure 2 appears here; only the caption survives text extraction.]
Figure 2: Anomalous Hall effect. (a) Plots of the zero magnetic-field longitudinal resistance Rxx(B = 0) (left axis, red line) and the zero magnetic-field transverse resistance Rxy(B = 0) (right axis, blue line) versus n; the data were measured at T = 20 mK. (b) Rxy(B = 0) response as a function of n at a few representative values of temperature; the AHE persists up to 300 K. (c) Plot of Rxy(B = 0) as a function of n for two different values of electrical current; the data were taken at T = 142 K. (d) Plot of the peak value of Rxy(B = 0) versus T. The dotted line is a guide to the eye. (e) The bell-shaped surface represents the opposite Berry curvatures of the two valleys. The positions of the Fermi surfaces for the K and K′ valleys (indicated by the black circle) differ due to the valley population imbalance. The top insets show the schematic of the Dirac crossing for the K and K′ valleys for the effective graphene sector. The valley splitting introduces a population imbalance between the two valleys of the Dirac cones. (f) Theoretically calculated anomalous Hall conductivity (σxy ∝ −ρxy) in the absence (black dashed line) and in the presence (solid lines) of valley splitting (∆vs ∼ 4 meV). The y-axis is scaled w.r.t. σ0 ≡ e²/h. The increase in temperature diminishes the height of the σxy peak.

[Figure 3 appears here; only the caption survives text extraction.]
Figure 3: Dependence of the transverse resistance Rxy on D and B. (a) A two-dimensional contour map of Rxy(B = 0) plotted in the n-D plane. (b) Plots of Rxy(B = 0) versus n for different values of D. The data have been vertically shifted by 200 Ω for clarity. The dashed horizontal line for each plot marks the zero of Rxy(B = 0). (c) A representative plot of Rxy versus B measured at n = −0.18 × 10¹⁶ m⁻²; an arrow marks the value of the anomalous Hall resistance.

[Figure 4 appears here; only the caption survives text extraction.]
Figure 4: Dependence of Rxx(B = 0) on D. (a) A two-dimensional contour map of Rxx(B = 0) plotted in the n-D plane. (b) Plots of Rxx(B = 0) versus n for different values of D. The data have been vertically shifted by 1 kΩ for clarity. The dashed horizontal line for each plot is the zero of the y-axis. (c) Variation of the calculated Drude conductivity σxx with energy µ for three different values of the interlayer potential induced by the applied electric field, ∆ = 300 meV (red line), 0 meV (blue line), and −300 meV (green line), respectively. The values of σxx have been scaled by σv, where σv = e²τ/4π²ℏ².

Supplementary Information

Model Hamiltonian of the graphene-WSe2 heterostructure
In this section, we construct the low-energy model Hamiltonian of monolayer graphene on a WSe2 layer. Going beyond the effective graphene model reported in the recent literature [55,57,62], we explicitly solve the composite low-energy Hamiltonian of the graphene-WSe2 heterostructure to capture the effect of a perpendicular electric field correctly. We solve the following low-energy Hamiltonian:
$$H_{\rm tot} = \begin{pmatrix} H^{g}_{k} & H_t \\ H^{\dagger}_t & H^{ws}_{\rm tot} \end{pmatrix} + H_{\perp}\ . \qquad (1)$$
Here, $H^{g}_{k}$ and $H^{ws}_{\rm tot}$ are the onsite Hamiltonians of graphene and WSe2, respectively. The interaction between the graphene and WSe2 layers has been included through the spin- and valley-conserving off-diagonal hopping $H_t$. The effect of the perpendicular electric field is captured through the diagonal matrix $H_{\perp}$.

We consider the monolayer of WSe2 in the x-y plane in the presence of intrinsic spin-orbit coupling (SOC) ($H^{ws}_{\rm sym}$) and a spin Zeeman field ($\Delta^{ws}_{0}$). In addition, a finite Rashba SOC term ($H^{ws}_{R}$) is also considered within the WSe2 sector. Including all these effects, the two-dimensional extended Dirac Hamiltonian $H^{ws}_{\rm tot}$ of the WSe2 monolayer can be written as
$$H^{ws}_{\rm tot} = H^{ws}_{k} + H^{ws}_{\rm sym} + H^{ws}_{R}\ . \qquad (2)$$
The explicit forms of each term are as follows:
$$H^{ws}_{k} = v^{ws}_{F}[\xi \sigma_x k_x + \sigma_y k_y] + \Delta^{ws}_{0}\sigma_z\ ,\quad
H_{\rm sym} = \tfrac{1}{2}[\lambda_c(\sigma_z + \sigma_0) + \lambda_v(\sigma_z - \sigma_0)]\ ,\quad
H^{ws}_{R} = \lambda_R[\xi \sigma_x S_y - \sigma_y S_x]\ , \qquad (3)$$
where ξ = ±1 for the K and K′ valleys, respectively. As the two degenerate but inequivalent valleys (K and K′) of monolayer WSe2 are separated by a large momentum, we can split the total Hamiltonian into two valley-specific parts. Here, we have taken $v^{ws}_{F} \equiv 1.83$ eV·Å as the Fermi velocity of WSe2. $\Delta_0$ represents the mass term that breaks the inversion symmetry, and $\lambda_c$ and $\lambda_v$ correspond to the SOC strengths of the conduction and valence bands. In general, the valence band of WSe2 ($\lambda_v \sim 112.5$ meV) possesses a larger SOC strength than the conduction band ($\lambda_c \sim 7.5$ meV), promoting a relatively larger splitting in the valence band [63]. For simplicity of the calculation, we choose the SOC strengths of both the conduction and valence bands to be equal, $\lambda_c = \lambda_v = 7.5$ meV. We set $\Delta_0 = 250$ meV, which induces a large gap between the conduction and valence bands of WSe2. To model the low-energy physics of graphene, we choose a valley-specific Hamiltonian of the following form:
$$H^{g}_{k} = v^{g}_{F}[\xi \sigma_x k_x + \sigma_y k_y]\ . \qquad (4)$$
Here, $v^{g}_{F} = 3.46$ eV·Å is the Fermi velocity of graphene. Equation (4) represents a gapless Dirac dispersion for the graphene sector. The coupling between the two layers is captured by
$$H_t = t \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes \sigma_0\ . \qquad (5)$$
For our calculation, we set the hopping strength t = 50 meV. The proximity effect of the WSe2 layer essentially opens up a gap at the Dirac crossing of the graphene bands. The induced band gap of graphene is enhanced with an increase in the hopping strength.

The effect of the external perpendicular electric field is introduced by adding a diagonal Hamiltonian:
$$H_{\perp} = \begin{pmatrix} \Delta I & 0 \\ 0 & -\Delta I \end{pmatrix}\ . \qquad (6)$$

[Figure 5 appears here; only the caption survives text extraction.]
Figure 5: Impact of the electric field on the band structure of the graphene/WSe2 heterostructure. (a), (b), and (c) show the band dispersion in the presence of electric field values ∆ = 300 meV, 0 meV, and −300 meV, respectively. The external electric field changes the low-energy band dispersion of the composite graphene-WSe2 heterostructure, inducing a metal-insulator transition.

Figure 5 shows the evolution of the band dispersion with a perpendicular electric field. The band dispersion essentially undergoes an insulator-to-metal transition with the electric field (see Fig. 5).
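A compact numerical rendering of Eqs. (1)-(6) makes the block structure explicit. The sketch below assembles the 4×4 Hamiltonian for one valley and one spin sector and diagonalizes it on a k-grid. It is our illustrative reading of the equations above using the parameter values quoted in the text, not the authors' published code, and it omits the Rashba term of Eq. (3), which couples the two spin sectors.

```python
import numpy as np

# Illustrative sketch of the composite graphene/WSe2 Hamiltonian, Eqs. (1)-(6),
# for one valley (xi = +1) and one spin sector; the Rashba term is omitted so
# that the spin sectors decouple. Parameters as quoted in the text (in eV).
VF_G, VF_WS = 3.46, 1.83      # Fermi velocities (eV * Angstrom)
D0_WS = 0.250                 # WSe2 mass term Delta_0
LAM_C = LAM_V = 0.0075        # conduction/valence SOC strengths
T_HOP = 0.050                 # interlayer hopping t

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.diag([1.0, -1.0]).astype(complex)
S0 = np.eye(2, dtype=complex)

def h_total(kx, ky, delta, xi=+1):
    """4x4 Hamiltonian: graphene block, WSe2 block, hopping, E-field (eV)."""
    h_g = VF_G * (xi * kx * SX + ky * SY)                     # Eq. (4)
    h_ws = VF_WS * (xi * kx * SX + ky * SY) + D0_WS * SZ      # Eq. (3), H_k
    h_ws += 0.5 * (LAM_C * (SZ + S0) + LAM_V * (SZ - S0))     # Eq. (3), H_sym
    h = np.block([[h_g, T_HOP * S0], [T_HOP * S0, h_ws]])     # Eqs. (1), (5)
    h += np.kron(np.diag([delta, -delta]), S0)                # Eq. (6)
    return h

# Band dispersion along k_x for Delta = 300 meV (cf. Fig. 5(a)).
ks = np.linspace(-0.02, 0.02, 201)   # 1/Angstrom
bands = np.array([np.linalg.eigvalsh(h_total(k, 0.0, 0.300)) for k in ks])
print(bands.shape)  # (201, 4): four bands along the k_x cut
```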
Anomalous Hall effect and Drude conductivity

We attribute the observed Hall effect to the anomalous Hall effect induced by the Berry curvature. The anomalous Hall conductivity of the system is defined as

    σxy = −(e²/ℏ) Σ_{n,ξ} ∫∫ [dkx dky/(2π)²] Ω_z^{n,ξ} f^{n,ξ} ,   (7)

where n is the band index. As observed in our experiments, a Hall current can only be generated through a population imbalance due to the valley gap difference. The van der Waals stacking of graphene onto hexagonal boron nitride offers a natural platform for valley control [?]. To induce a finite valley splitting, we have incorporated a term ∆vs = 10 meV between the two valleys, as shown in Fig. 1(f) of the main manuscript. It is important to note that ϵK ≠ ϵK′ even without external perturbations such as an electric field. As a result of this valley splitting, a finite anomalous Hall conductivity σxy is generated within the system (see Fig. 2(f) in the main manuscript).
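Equation (7) can be evaluated numerically on a discrete k-grid. The sketch below uses the standard plaquette (Fukui-Hatsugai-Suzuki) discretization of the Berry curvature. It reuses the hypothetical H_total from the earlier sketch, works at zero temperature, and suppresses the spin sum; these are simplifying assumptions for illustration, not the procedure actually used in the paper.

    def berry_curvature_grid(Hk, nk=101, kmax=0.06):
        """Plaquette (Fukui-Hatsugai-Suzuki) Berry curvature of every band of
        Hk(kx, ky) on an nk x nk grid around one valley."""
        ks = np.linspace(-kmax, kmax, nk)
        dk = ks[1] - ks[0]
        nb = Hk(0.0, 0.0).shape[0]
        E = np.empty((nk, nk, nb))
        V = np.empty((nk, nk, nb, nb), dtype=complex)
        for i, kx in enumerate(ks):
            for j, ky in enumerate(ks):
                E[i, j], V[i, j] = np.linalg.eigh(Hk(kx, ky))
        Om = np.zeros((nk - 1, nk - 1, nb))
        for i in range(nk - 1):
            for j in range(nk - 1):
                for n in range(nb):
                    # U(1) link variables around one plaquette, band n
                    u12 = np.vdot(V[i, j, :, n],         V[i + 1, j, :, n])
                    u23 = np.vdot(V[i + 1, j, :, n],     V[i + 1, j + 1, :, n])
                    u34 = np.vdot(V[i + 1, j + 1, :, n], V[i, j + 1, :, n])
                    u41 = np.vdot(V[i, j + 1, :, n],     V[i, j, :, n])
                    Om[i, j, n] = -np.angle(u12 * u23 * u34 * u41) / dk**2
        return ks, E, Om

    def sigma_xy(mu):
        """sigma_xy of Eq. (7) in units of e^2/hbar (T = 0, spin sum omitted)."""
        total = 0.0
        for xi in (+1, -1):                               # valley sum
            ks, E, Om = berry_curvature_grid(lambda kx, ky: H_total(kx, ky, xi))
            dk = ks[1] - ks[0]
            occ = (E[:-1, :-1, :] < mu)                   # zero-T occupation f
            total += np.sum(Om * occ) * dk**2 / (2 * np.pi)**2
        return -total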
We calculate σxx using the expression for the Drude conductivity,

    σxx = e²τ Σ_{n,ξ} ∫∫ [dkx dky/(4π²)] v_x^{n,ξ} v_x^{n,ξ} (−∂f/∂ϵ)_{ϵ=ϵn(k)} .   (8)

The band velocity is defined as ℏ v_x^{n,ξ} = ∂ϵ^{n,ξ}/∂kx, where n is the band index. The longitudinal conductivity (σxx), which follows the density of states (DOS), shows a W-like pattern with an increase in the electric field. The calculated σxx captures the qualitative behavior of the inverse of the experimental resistivity (Rxx) plot of Fig. 4(a) of the main manuscript. The pseudogap between the first and second valence (conduction) bands produces the low-conductance dips below (above) the Fermi energy, whereas for a finite electric field the substantial DOS at the Fermi energy produces the metallic behavior indicated by a peak in the σxx of Fig. 4(c) of the main manuscript.
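A matching sketch for Eq. (8) is given below: the band velocity is taken from a finite difference of the band energies, and −∂f/∂ϵ is a thermally broadened Fermi-function derivative. It again reuses the hypothetical helpers above and only illustrates the formula.

    kB = 8.617e-5                                         # Boltzmann constant, eV/K

    def sigma_xx(mu, T=10.0, tau=1.0):
        """Drude sigma_xx of Eq. (8), in units of e^2*tau/hbar^2;
        v_x from a central difference of the band energies."""
        total = 0.0
        for xi in (+1, -1):
            ks, E, _ = berry_curvature_grid(lambda kx, ky: H_total(kx, ky, xi))
            dk = ks[1] - ks[0]
            vx = np.gradient(E, dk, axis=0)               # hbar*v_x = dE/dk_x
            # -df/de written in a numerically stable sech^2 form:
            mdf = 1.0 / (4 * kB * T * np.cosh((E - mu) / (2 * kB * T))**2)
            total += np.sum(vx**2 * mdf) * dk**2 / (4 * np.pi**2)
        return tau * total

Scanning mu (or the field term DI) in this sketch produces the W-like dips and the field-induced peak discussed above.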
Device fabrication

Thin flakes of WSe2, hBN, and graphene were mechanically exfoliated on Si/SiO2 substrates. The thickness of the flakes was initially estimated from the color contrast under an optical microscope and later confirmed using Raman spectroscopy. This was followed by sequential pickup of each flake using a polycarbonate (PC) film at 90 °C. The assembled heterostructure was transferred onto a new Si/SiO2 substrate and then cleaned in chloroform, acetone, and IPA to remove the PC residue. The heterostructure was then annealed at 250 °C for 3 hours. Electron beam lithography was used to define the contact and top-gate electrodes. We used reactive ion etching (a mixture of CHF3 and O2 gas) to etch the top hBN and make one-dimensional edge contacts to graphene. For the electrical contacts, Cr/Au (5 nm/60 nm) was deposited, followed by liftoff in hot acetone and cleaning in IPA. The unwanted hBN and graphene were removed using e-beam lithography and dry etching to define the Hall bar. We then transferred an hBN flake on top of the device and fabricated a metallic top gate using lithography and thermal deposition.
Figure 6: Data on device SW2. (a) Plot of longitudinal and transverse resistivity versus number density for device SW2. (b) Plot of transverse resistance versus number density in two different configurations for device SW2. Configuration 1 measures Rxy(B = 0) and configuration 2 measures Ryx(B = 0). Axes: Rxx (kΩ) and Rxy (Ω) versus n (×10^16 m−2).
Data on device SW2

Fig. 6(a) shows the data for the zero-field longitudinal and transverse resistance in device SW2; one can see the appearance of a finite Rxy(B = 0) that changes its sign near the Dirac point. Fig. 6(b) presents the B = 0 transverse signal measured in two different configurations: configuration 1 measures Rxy(B = 0), while configuration 2 measures Ryx(B = 0). The two signals overlap exactly with each other. Note that this is what one expects from the Onsager relation Rxy(B) = Ryx(−B) at B = 0.
Low-field magnetoresistance

Fig. 7(a) shows line plots of the transverse signal measured in device SW2 in the presence of a small perpendicular magnetic field. The data show the smooth evolution of the anomalous Hall signal into the classical Hall signal. This can be better appreciated from Fig. 7(b), which is a 2D map of the transverse signal in the n-B plane.

Figure 7: Dependence of Rxy on B. (a) Plot of Rxy at small magnetic field values measured for device SW2. (b) A 2D map of the transverse resistance Rxy(B) in the n − B plane; the data show a finite Hall signal at B = 0 T. Axes: Rxy (Ω), B (T), n (×10^16 m−2).
Raman shift and strain

We used low-temperature Raman spectroscopy on the graphene/WSe2 stack to estimate the strain in graphene. High-quality single-layer graphene has two prominent Raman-active modes, the G-mode (1580 cm−1) and the 2D-mode (2690 cm−1). In the presence of a uniaxial strain ϵ, the shift in the 2D peak has been measured to be δω^SLG_2D/ϵ ∼ −64 cm−1/% [?]. Fig. 8(a) shows a comparison of the temperature dependence of the Raman shift of the 2D band measured for graphene, ω^SLG_2D, and for graphene on WSe2, ω^{SLG/WSe2}_2D. In Fig. 8(b), we show a plot of the T-dependence of δω2D = ω^{SLG/WSe2}_2D − ω^SLG_2D. One can see that the difference in the Raman shift of the 2D peak increases rapidly with decreasing T; the positive value of δω2D indicates that the strain is compressive. The temperature dependence of the strain in graphene was extracted from the data in Fig. 8(b); its magnitude is plotted in Fig. 8(c). The data show that SLG on single-layer WSe2 undergoes a significant compressive strain of about 0.2% at 4 K (for example, δω2D ≈ 13 cm−1 at low temperature corresponds to |ϵ| ≈ 13/64 ≈ 0.2%).

Figure 8: Raman shift in the 2D band of graphene. (a) Temperature variation of the measured Raman shift of the 2D peak of graphene (blue filled circles) and of graphene on single-layer WSe2 (red filled circles). (b) Plot of δω2D versus T. (c) Plot of the T-dependence of the magnitude of the strain |ϵ| in SLG on single-layer WSe2.
Absence of ferromagnetism and nonlinear AHE

The measured magnetoresistance in our devices is non-hysteretic (Fig. 9(a)), which is clear evidence of the absence of ferromagnetism in the system. We also find the second-harmonic R^2ω_xy signal to be negligibly small for our device (Fig. 9(b)), which establishes that there is no nonlinear anomalous Hall effect in this system. To establish that the absence of the second-harmonic signal is real and not an experimental artifact, we plot for comparison in Fig. 9(b) the data from similar measurements on hBN/graphene moiré devices. In the moiré device, we measure a finite nonlinear signal R^2ω_xy near the primary Dirac point (as expected from previous reports [50]).

Figure 9: Nonlinear AHE and MR. (a) Plot of magnetoresistance in a small magnetic field at a displacement field D = −0.3 V/nm. The data were taken at n = −2 × 10^16 m−2. (b) Plot of the nonlinear AHE R^2ω_xy(B = 0) for SLG/WSe2 (red line), contrasted with that obtained for a graphene/hBN moiré device (black line).
References

(1) Xiao, D.; Chang, M.-C.; Niu, Q. Berry phase effects on electronic properties. Rev. Mod. Phys. 2010, 82, 1959–2007.
(2) Ahn, J.; Guo, G.-Y.; Nagaosa, N.; Vishwanath, A. Riemannian geometry of resonant optical responses. Nature Physics 2022, 18, 290–295.
(3) Gao, A. et al. Layer Hall effect in a 2D topological axion antiferromagnet. Nature 2021, 595, 521–525.
(4) Bhalla, P.; Das, K.; Culcer, D.; Agarwal, A. Resonant Second-Harmonic Generation as a Probe of Quantum Geometry. Phys. Rev. Lett. 2022, 129, 227401.
(5) Han, W.; Kawakami, R. K.; Gmitra, M.; Fabian, J. Graphene spintronics. Nature Nanotechnology 2014, 9, 794–807.
(6) Sinova, J.; Valenzuela, S. O.; Wunderlich, J.; Back, C.; Jungwirth, T. Spin Hall effects. Reviews of Modern Physics 2015, 87, 1213.
(7) Hirsch, J. Spin Hall effect. Physical Review Letters 1999, 83, 1834.
(8) Bernevig, B. A.; Zhang, S.-C. Quantum spin Hall effect. Physical Review Letters 2006, 96, 106802.
(9) Tiwari, P.; Jat, M. K.; Udupa, A.; Narang, D. S.; Watanabe, K.; Taniguchi, T.; Sen, D.; Bid, A. Experimental observation of spin-split energy dispersion in high-mobility single-layer graphene/WSe2 heterostructures. npj 2D Materials and Applications 2022, 6, 68.
(10) Xiao, D.; Liu, G.-B.; Feng, W.; Xu, X.; Yao, W. Coupled Spin and Valley Physics in Monolayers of MoS2 and Other Group-VI Dichalcogenides. Phys. Rev. Lett. 2012, 108, 196802.
(11) Cresti, A.; Nikolić, B. K.; García, J. H.; Roche, S. Charge, spin and valley Hall effects in disordered graphene. La Rivista del Nuovo Cimento 2016, 39, 587–667.
(12) Mak, K. F.; McGill, K. L.; Park, J.; McEuen, P. L. The valley Hall effect in MoS2 transistors. Science 2014, 344, 1489–1492.
(13) Lee, J.; Mak, K. F.; Shan, J. Electrical control of the valley Hall effect in bilayer MoS2 transistors. Nature Nanotechnology 2016, 11, 421–425.
(14) Liu, J.; Ma, Z.; Gao, J.; Dai, X. Quantum valley Hall effect, orbital magnetism, and anomalous Hall effect in twisted multilayer graphene systems. Physical Review X 2019, 9, 031021.
(15) Qiao, Z.; Yang, S. A.; Feng, W.; Tse, W.-K.; Ding, J.; Yao, Y.; Wang, J.; Niu, Q. Quantum anomalous Hall effect in graphene from Rashba and exchange effects. Phys. Rev. B 2010, 82, 161414.
(16) Shimazaki, Y.; Yamamoto, M.; Borzenets, I. V.; Watanabe, K.; Taniguchi, T.; Tarucha, S. Generation and detection of pure valley current by electrically induced Berry curvature in bilayer graphene. Nature Physics 2015, 11, 1032–1036.
(17) Sui, M.; Chen, G.; Ma, L.; Shan, W.-Y.; Tian, D.; Watanabe, K.; Taniguchi, T.; Jin, X.; Yao, W.; Xiao, D.; Zhang, Y. Gate-tunable topological valley transport in bilayer graphene. Nature Physics 2015, 11, 1027–1031.
(18) Wallbank, J. R. et al. Tuning the valley and chiral quantum state of Dirac electrons in van der Waals heterostructures. Science 2016, 353, 575–579.
(19) Xiao, D.; Yao, W.; Niu, Q. Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport. Phys. Rev. Lett. 2007, 99, 236809.
(20) Sodemann, I.; Fu, L. Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials. Phys. Rev. Lett. 2015, 115, 216806.
(21) Du, Z. Z.; Wang, C. M.; Li, S.; Lu, H.-Z.; Xie, X. C. Disorder-induced nonlinear Hall effect with time-reversal symmetry. Nature Communications 2019, 10, 3047.
(22) Sinha, S.; Adak, P. C.; Chakraborty, A.; Das, K.; Debnath, K.; Sangani, L. D. V.; Watanabe, K.; Taniguchi, T.; Waghmare, U. V.; Agarwal, A.; Deshmukh, M. M. Berry curvature dipole senses topological transition in a moiré superlattice. Nature Physics 2022, 18, 765–770.
(23) Chakraborty, A.; Das, K.; Sinha, S.; Adak, P. C.; Deshmukh, M. M.; Agarwal, A. Nonlinear anomalous Hall effects probe topological phase-transitions in twisted double bilayer graphene. 2D Materials 2022, 9, 045020.
(24) Zhai, D.; Chen, C.; Xiao, C.; Yao, W. Layer-Contrasted Hall Effect in Twisted Bilayers with Time Reversal Symmetry. 2022; https://arxiv.org/abs/2207.14644.
(25) Ho, S.-C.; Chang, C.-H.; Hsieh, Y.-C.; Lo, S.-T.; Huang, B.; Vu, T.-H.-Y.; Ortix, C.; Chen, T.-M. Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry. Nature Electronics 2021, 4, 116–125.
(26) Sharpe, A. L.; Fox, E. J.; Barnard, A. W.; Finney, J.; Watanabe, K.; Taniguchi, T.; Kastner, M. A.; Goldhaber-Gordon, D. Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene. Science 2019, 365, 605–608.
(27) Serlin, M.; Tschirhart, C. L.; Polshyn, H.; Zhang, Y.; Zhu, J.; Watanabe, K.; Taniguchi, T.; Balents, L.; Young, A. F. Intrinsic quantized anomalous Hall effect in a moiré heterostructure. Science 2020, 367, 900–903.
(28) Li, T.; Jiang, S.; Shen, B.; Zhang, Y.; Li, L.; Tao, Z.; Devakul, T.; Watanabe, K.; Taniguchi, T.; Fu, L.; Shan, J.; Mak, K. F. Quantum anomalous Hall effect from intertwined moiré bands. Nature 2021, 600, 641–646.
(29) Lin, J.-X.; Zhang, Y.-H.; Morissette, E.; Wang, Z.; Liu, S.; Rhodes, D.; Watanabe, K.; Taniguchi, T.; Hone, J.; Li, J. I. A. Spin-orbit-driven ferromagnetism at half moiré filling in magic-angle twisted bilayer graphene. Science 2022, 375, 437–441.
(30) Kuiri, M.; Coleman, C.; Gao, Z.; Vishnuradhan, A.; Watanabe, K.; Taniguchi, T.; Zhu, J.; MacDonald, A. H.; Folk, J. Spontaneous time-reversal symmetry breaking in twisted double bilayer graphene. Nature Communications 2022, 13, 6468.
(31) Xie, Y.-M.; Zhang, C.-P.; Hu, J.-X.; Mak, K. F.; Law, K. T. Valley-Polarized Quantum Anomalous Hall State in Moiré MoTe2/WSe2 Heterobilayers. Phys. Rev. Lett. 2022, 128, 026402.
(32) Kang, J.; Vafek, O. Strong Coupling Phases of Partially Filled Twisted Bilayer Graphene Narrow Bands. Phys. Rev. Lett. 2019, 122, 246401.
(33) Liu, J.; Dai, X. Anomalous Hall effect, magneto-optical properties, and nonlinear optical properties of twisted graphene systems. npj Computational Materials 2020, 6, 57.
(34) Qiao, Z.; Ren, W.; Chen, H.; Bellaiche, L.; Zhang, Z.; MacDonald, A.; Niu, Q. Quantum anomalous Hall effect in graphene proximity coupled to an antiferromagnetic insulator. Physical Review Letters 2014, 112, 116404.
(35) Song, G.; Ranjbar, M.; Daughton, D. R.; Kiehl, R. A. Nanoparticle-induced anomalous Hall effect in graphene. Nano Letters 2019, 19, 7112–7118.
(36) Avsar, A.; Tan, J. Y.; Taychatanapat, T.; Balakrishnan, J.; Koon, G.; Yeo, Y.; Lahiri, J.; Carvalho, A.; Rodin, A.; O'Farrell, E., et al. Spin-orbit proximity effect in graphene. Nature Communications 2014, 5, 1–6.
(37) Ghiasi, T. S.; Kaverzin, A. A.; Blah, P. J.; van Wees, B. J. Charge-to-spin conversion by the Rashba–Edelstein effect in two-dimensional van der Waals heterostructures up to room temperature. Nano Letters 2019, 19, 5959–5966.
(38) Tiwari, P.; Srivastav, S. K.; Ray, S.; Das, T.; Bid, A. Observation of Time-Reversal Invariant Helical Edge-Modes in Bilayer Graphene/WSe2 Heterostructure. ACS Nano 2021, 15, 916–922, PMID: 33378173.
(39) Herling, F.; Safeer, C. K.; Ingla-Aynés, J.; Ontoso, N.; Hueso, L. E.; Casanova, F. Gate tunability of highly efficient spin-to-charge conversion by spin Hall effect in graphene proximitized with WSe2. APL Materials 2020, 8, 071103.
(40) Dastgeer, G.; Afzal, A. M.; Jaffery, S. H. A.; Imran, M.; Assiri, M. A.; Nisar, S. Gate modulation of the spin current in graphene/WSe2 van der Waals heterostructure at room temperature. Journal of Alloys and Compounds 2022, 919, 165815.
(41) Lee, S.; de Sousa, D. J. P.; Kwon, Y.-K.; de Juan, F.; Chi, Z.; Casanova, F.; Low, T. Charge-to-spin conversion in twisted graphene/WSe2 heterostructures. Phys. Rev. B 2022, 106, 165420.
(42) Wang, Z.; Ki, D.-K.; Chen, H.; Berger, H.; MacDonald, A. H.; Morpurgo, A. F. Strong interface-induced spin–orbit interaction in graphene on WS2. Nature Communications 2015, 6, 8339.
(43) Wang, Z.; Ki, D.-K.; Khoo, J. Y.; Mauro, D.; Berger, H.; Levitov, L. S.; Morpurgo, A. F. Origin and Magnitude of 'Designer' Spin-Orbit Interaction in Graphene on Semiconducting Transition Metal Dichalcogenides. Phys. Rev. X 2016, 6, 041020.
(44) Völkl, T.; Rockinger, T.; Drienovsky, M.; Watanabe, K.; Taniguchi, T.; Weiss, D.; Eroms, J. Magnetotransport in heterostructures of transition metal dichalcogenides and graphene. Phys. Rev. B 2017, 96, 125405.
(45) Wakamura, T.; Reale, F.; Palczynski, P.; Zhao, M. Q.; Johnson, A. T. C.; Guéron, S.; Mattevi, C.; Ouerghi, A.; Bouchiat, H. Spin-orbit interaction induced in graphene by transition metal dichalcogenides. Phys. Rev. B 2019, 99, 245402.
(46) Fülöp, B.; Márffy, A.; Zihlmann, S.; Gmitra, M.; Tóvári, E.; Szentpéteri, B.; Kedves, M.; Watanabe, K.; Taniguchi, T.; Fabian, J.; Schönenberger, C.; Makk, P.; Csonka, S. Boosting proximity spin–orbit coupling in graphene/WSe2 heterostructures via hydrostatic pressure. npj 2D Materials and Applications 2021, 5, 82.
(47) Tiwari, P.; Srivastav, S. K.; Bid, A. Electric-Field-Tunable Valley Zeeman Effect in Bilayer Graphene Heterostructures: Realization of the Spin-Orbit Valve Effect. Phys. Rev. Lett. 2021, 126, 096801.
(48) Pizzocchero, F.; Gammelgaard, L.; Jessen, B. S.; Caridad, J. M.; Wang, L.; Hone, J.; Bøggild, P.; Booth, T. J. The hot pick-up technique for batch assembly of van der Waals heterostructures. Nature Communications 2016, 7, 1–10.
(49) Wang, L.; Meric, I.; Huang, P.; Gao, Q.; Gao, Y.; Tran, H.; Taniguchi, T.; Watanabe, K.; Campos, L.; Muller, D., et al. One-dimensional electrical contact to a two-dimensional material. Science 2013, 342, 614–617.
(50) He, P.; Koon, G. K. W.; Isobe, H.; Tan, J. Y.; Hu, J.; Neto, A. H. C.; Fu, L.; Yang, H. Graphene moiré superlattices with giant quantum nonlinearity of chiral Bloch electrons. Nature Nanotechnology 2022, 17, 378–383.
(51) Nakamura, M.; Castro, E. V.; Dóra, B. Valley Symmetry Breaking in Bilayer Graphene: A Test of the Minimal Model. Phys. Rev. Lett. 2009, 103, 266804.
(52) Yang, Z.; Han, J. H. Hierarchy of spin and valley symmetry breaking in quantum Hall single-layer graphene. Phys. Rev. B 2010, 81, 115405.
(53) Farajollahpour, T.; Phirouznia, A. The role of the strain induced population imbalance in valley polarization of graphene: Berry curvature perspective. Scientific Reports 2017, 7, 17878.
(54) Freitag, N. M.; Reisch, T.; Chizhova, L. A.; Nemes-Incze, P.; Holl, C.; Woods, C. R.; Gorbachev, R. V.; Cao, Y.; Geim, A. K.; Novoselov, K. S.; Burgdörfer, J.; Libisch, F.; Morgenstern, M. Large tunable valley splitting in edge-free graphene quantum dots on boron nitride. Nature Nanotechnology 2018, 13, 392–397.
(55) Gmitra, M.; Kochan, D.; Högl, P.; Fabian, J. Trivial and inverted Dirac bands and the emergence of quantum spin Hall states in graphene on transition-metal dichalcogenides. Phys. Rev. B 2016, 93, 155104.
(56) Offidani, M.; Milletarì, M.; Raimondi, R.; Ferreira, A. Optimal Charge-to-Spin Conversion in Graphene on Transition-Metal Dichalcogenides. Phys. Rev. Lett. 2017, 119, 196801.
(57) Cummings, A. W.; Garcia, J. H.; Fabian, J.; Roche, S. Giant Spin Lifetime Anisotropy in Graphene Induced by Proximity Effects. Phys. Rev. Lett. 2017, 119, 206601.
(58) Garcia, J. H.; Vila, M.; Cummings, A. W.; Roche, S. Spin transport in graphene/transition metal dichalcogenide heterostructures. Chemical Society Reviews 2018, 47, 3359–3379.
(59) Li, Y.; Koshino, M. Twist-angle dependence of the proximity spin-orbit coupling in graphene on transition-metal dichalcogenides. Phys. Rev. B 2019, 99, 075438.
(60) Zubair, M.; Vasilopoulos, P.; Tahir, M. Influence of interface induced valley-Zeeman and spin-orbit couplings on transport in heterostructures of graphene on WSe2. Phys. Rev. B 2020, 101, 165436.
(61) Kumar, A.; Maiti, S.; Maslov, D. L. Zero-field spin resonance in graphene with proximity-induced spin-orbit coupling. Phys. Rev. B 2021, 104, 155138.
(62) Gmitra, M.; Fabian, J. Graphene on transition-metal dichalcogenides: A platform for proximity spin-orbit physics and optospintronics. Phys. Rev. B 2015, 92, 155403.
(63) Tahir, M.; Vasilopoulos, P. Magneto-optical transport properties of monolayer WSe2. Phys. Rev. B 2016, 94, 045415.
19AzT4oBgHgl3EQf8_5O/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

1tFIT4oBgHgl3EQf4CvP/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7dfaa2ba8e03efae28dbad253816baf5565e382534a570e0bbf009160b3bb56f
size 135915

2NE1T4oBgHgl3EQfAAJE/content/2301.02833v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94c02748995b5161a15c2061277c9df8166cf274cffab335c76f42a89e83f05d
size 653017

2NE1T4oBgHgl3EQfAAJE/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ba60bd80d7b9a55129f9593fb323927a5e666a6b60501dd9700a7dfe57fdf27
size 180108

2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/2301.02382v1.pdf.txt ADDED
@@ -0,0 +1,1049 @@
ReVoLT: Relational Reasoning and Voronoi Local Graph Planning for Target-driven Navigation

Junjia Liu 1,3, Jianfei Guo 2,3, Zehui Meng 3, Jingtao Xue 3
1 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong
2 School of Automation Science and Engineering, Xi'an Jiaotong University
3 Application Innovate Laboratory (2012 Laboratories), Huawei Technologies Co., Ltd.
Beijing, 100038, China
[email protected], [email protected], {mengzehui, xuejingtao}@huawei.com

Abstract—Embodied AI is an inevitable trend that emphasizes the interaction between intelligent entities and the real world, with broad applications in robotics, especially target-driven navigation. This task requires the robot to find an object of a certain category efficiently in an unknown domestic environment. Recent works focus on exploiting layout relationships by graph neural networks (GNNs). However, most of them obtain robot actions directly from observations in an end-to-end manner via an incomplete relation graph, which is neither interpretable nor reliable. We decouple this task and propose ReVoLT, a hierarchical framework: (a) an object-detection visual front-end, (b) a high-level reasoner (infers semantic sub-goals), (c) an intermediate-level planner (computes geometrical positions), and (d) a low-level controller (executes actions). ReVoLT operates with a multi-layer semantic-spatial topological graph. The reasoner uses multiform structured relations as priors, which are obtained from combinatorial relation extraction networks composed of unsupervised GraphSAGE, GCN, and GraphRNN-based Region Rollout. The reasoner performs with Upper Confidence Bound for Tree (UCT) to infer semantic sub-goals, accounting for trade-offs between exploitation (depth-first searching) and exploration (regretting). The lightweight intermediate-level planner generates instantaneous spatial sub-goal locations via an online constructed Voronoi local graph. The simulation experiments demonstrate that our framework achieves better performance in target-driven navigation tasks and generalizes well, with an 80% improvement over the existing state-of-the-art method. The code and result video will be released at https://ventusff.github.io/ReVoLT-website/.

Index Terms—Relational reasoning, combinatorial relation graph neural networks, UCT bandit, online Voronoi local graph
I. INTRODUCTION

Finding objects in complex houses efficiently is a prerequisite for domestic service robots. Robots need to reason and make dynamic decisions while interacting with the real-world environment. Embodied AI, proposed by Matej Hoffmann and Rolf Pfeifer [1], suggests that to truly understand how the human brain works, a brain should be embedded into a physical body and allowed to explore and interact with the real world. Among all the work practicing Embodied AI in recent years, target-driven navigation (TDN) is one of the most feasible and essential tasks; it combines techniques from both machine learning and robotics and is widely applicable in scenarios such as domestic service robots. It typically requires the robot to find a target object of a certain category in an unknown scene, demanding both high efficiency and a high success rate. Hence, the key problems of the TDN task are generalizing across unknown domains and exploring efficiently.

The traditional Simultaneous Localization and Mapping (SLAM) pipeline has already handled TDN to some extent [2], but numerous problems remain in its major modules. First, it remains troublesome for SLAM-based methods to acquire and maintain a lifelong-updating semantic map, which demands accurate sensors and semantic information. Second, SLAM-based methods are inherently less adaptive to posterior information, which keeps them from generalizing well in complicated environments, especially indoor scenes. Last but not least, SLAM-based methods are not specially designed for searching objects in unknown environments, which requires keeping a balance between exploitation (depth-first searching) and exploration (regretting).

Recently, learning-based methods have emerged and shown powerful capabilities for solving complicated tasks. However, these methods generally have problems with interpretability and generalization, especially in the TDN task, which requires robots to operate in unseen domains. We argue that it is more natural and empirical to introduce a priori knowledge [3] into the learning model instead of training from scratch, considering how humans teach ignorant babies. Introducing a priori knowledge enables algorithms to achieve higher data efficiency, better model interpretability, and better generalization. In indoor TDN tasks, one of the most useful pieces of prior information is the relationship among objects and rooms of different categories. Some recent works reason about the target direction using object relationships as a priori in single-room environments [4]–[6]. However, common domestic scenes are composed of multiple rooms, so more prior information such as room connections, object-in-room membership, and other implicitly structured relationships could be exploited, which is typically ignored in these works.

In this paper, we propose a hierarchical navigation framework, Relational Reasoning and Voronoi Local graph planning (ReVoLT), which comprises a combinatorial graph neural network for multiform domestic relation extraction, a UCT-based reasoning exploration, and an online Voronoi local graph for the semantic-spatial transition. The detailed contributions are as follows:

• The TDN task is concisely decomposed, allowing separate and special designs for different modules instead of operating in a mixed-up end-to-end manner. We focus our efforts on designing the reasoner and the planner.
• To extract multiform structural relations for reasoning, we propose combining unsupervised GraphSAGE [7], self-supervised GCN, and c-GraphRNN methods for learning object embedding, region embedding, and region rollout, respectively.
• Based on the relation priors, the high-level reasoner (semantic reasoning) is abstracted as a bandit problem and adopts UCT to balance exploitation (depth-first searching) and exploration (regretting).
• We construct Voronoi local graphs online using RGB-D observations and convert semantic sub-goals to spatial locations. We term this an intermediate-level planning process.
• The test results show that the proposed framework is superior to state-of-the-art methods, achieving a higher success rate and success weighted by path length (SPL) with good generalization.

Fig. 1. The main hierarchical framework of the ReVoLT method, which contains a high-level reasoner (infers semantic sub-goals), an intermediate-level planner (computes spatial location sub-goals), and a low-level controller (computes actions). The combinatorial relation extraction module provides a prior on the exploration value of observed objects and regions through embedding similarity. In particular, the Region Rollout model provides Monte Carlo simulation for UCT in a conditional GraphRNN (c-GraphRNN) way.
II. RELATED WORKS

Recently, many TDN solutions have been based on relational reasoning. They have the advantage of replacing the explicit metric map of SLAM-based methods by inferring the approximate position of the target object from observed objects. Most of these methods use GNNs to learn object-object proximity relationships but ignore the relationships between regions/rooms, which limits their task scenarios to a single room (using the AI2-THOR dataset [8] in simulation for training). For example, Yang et al. [4] propose using a Graph Convolutional Network (GCN) to incorporate prior knowledge about object relationships into a Deep Reinforcement Learning (DRL) framework as part of a joint embedding. Their priors are obtained from large-scale scene-understanding datasets and updated according to the current observation. Qiu et al. [6] share the same idea, but extract observations as context vectors that integrate the relationship strength between connected objects and their spatial information.

For navigation tasks in houses with multiple rooms, it is necessary to first reach the room that may contain the target object (e.g., refrigerator-kitchen), then search for the target among object cliques. Therefore, the learning of prior knowledge should consider more relationships, including room-to-room connection and object-in-room membership. Wu et al. [9] propose a memory structure based on a Bayesian graph model. It uses the probability relationship graph to get the prior house layout from the training set and estimates its posterior in the test set. However, this work does not combine object-level reasoning to achieve a complete TDN task. Chaplot et al. [10] build a topological representation with associated semantic features and learn a prior semantic score function to evaluate the probability of potential nodes in a graph with various directions. However, they provide target images, which is impractical in domestic scenarios, while our method only uses target labels. They subsequently extended the Active Neural SLAM system [2] to learn semantic priors using a semantically aware long-term policy for the label-target navigation task [11] and won the CVPR 2020 Habitat ObjectNav Challenge1 [12]. It is worth mentioning that they also point out that end-to-end learning-based methods suffer from large sample complexity and poor generalization, as they memorize object locations and appearance in training environments [11], which prompted us to consider a hierarchical framework with a topological graph. Table I lists only TDN methods with label targets and relational reasoning.
III. REVOLT REASONING & PLANNING WITH HIERARCHICAL FRAMEWORK

This task needs to be re-examined from the perspective of bionics. Imagine a human facing such a task when he enters an unknown house. He will not feel confused, thanks to the prior knowledge about domestic scenes he has. It is natural for us to first roughly determine the type of room based on the categories of multiple observed objects in the current room (e.g., a bedroom). According to the object-in-room membership, the exploration value V(t|cur room) of the target object t in the current room can be obtained. At the same time, some potential but unexplored passages (e.g., a door or hallway) can be determined as ghost nodes, as in [10]. The structural relationships of the house layout and room connections can help us predict the categories and values V(t|next room) of the next rooms connected by ghost nodes.

Beyond these priors, dynamic decisions must also be made in each specific task, rather than just applying experience mechanically. A reasoning procedure that combines intelligent exploration and exploitation is one of the winning strategies. Thus, our approach focuses on solving the following two problems:

• How to obtain a more effective prior conditional exploration value in a structured form?
• How to make efficient decisions between multiple feasible paths based on exploration values?

The remainder of this section is organized as follows. In subsections III-A, III-B, and III-C, we present a combinatorial relation extraction module (Fig. 2) using GNNs, which learns three different relationships in a unified paradigm. A UCT-based online reasoner is described in subsection III-D. In III-E, we consider the coarse spatial information and build an intermediate-level planner through online Voronoi construction. Finally, the whole ReVoLT hierarchical framework is summarized in subsection III-F (Fig. 1).

1 https://aihabitat.org/challenge/2020/
Fig. 2. Combinatorial relation extraction module. (a) Obtain object embedding via unsupervised weighted-GraphSAGE; (b) Region embedding is obtained by passing a sub-graph with object embeddings to GCN layers; (c) According to the house structure of region connectivity, a GraphRNN-based model is used to learn the structure distribution and generate possible features of future regions node by node.
TABLE I
PERFORMANCE OF EXISTING TDN METHODS WITH VARIOUS EXPERIMENT SETTINGS

Method            Room Scale   Dataset        SR (%)   SPL (%)
Scene-prior [4]   Single       AI2-THOR       35.4     10.9
SAVN [13]         Single       AI2-THOR       35.7     9.3
MJOLNIR [6]       Single       AI2-THOR       65.3     21.1
BRM [9]           Multiple     House3D        -        -
SemExp† [11]      Multiple     Matterport3D   36.0     14.4

† SemExp won the first place in the CVPR Habitat 2020 competition.
A. Object Embedding learning

As illustrated in Fig. 2(a), the object-to-object relationship consists not only of pair-wise semantic similarity, but also of distances and the number of hops between object pairs. We first extract an object-level graph Go(Vo, Eo) from the object positions pos and categories Co in the Matterport3D dataset. Objects in the same room are fully connected; for object pairs in different rooms, only those closest to a common door are connected by an edge. This is useful for the robot to infer objects that are strongly related to the target using only object-level embeddings.

GraphSAGE [7] is a popular model in the node-embedding field. We adopt it to obtain an embedding of each object category that fuses semantics and proximity relationships with other categories. Our node-embedding procedure uses GloVe [14] as the initial node semantic feature {xv, ∀v ∈ Vo}, and employs an unsupervised form of GraphSAGE with a loss that penalizes the embedding similarity between two objects far apart and rewards two adjacent ones. Unlike the original GraphSAGE, edge features {ωe:u→v, ∀e ∈ Eo} are also taken into account in the aggregation and loss calculations. For each search depth k, with weight matrices Wk, ∀k ∈ {1, . . . , K}, we employ an edge-weighted mean aggregator which simply takes the element-wise mean of the vectors in {h^{k−1}_u, ∀u ∈ N(v)} to aggregate information from node neighbors:

    h^0_v ← xv, ∀v ∈ V
    h^k_v ← σ( Wk · mean({h^{k−1}_v} ∪ {ω_{u→v} · h^{k−1}_u}) )   (1)

Then an edge-weighted loss function is applied to the output {zv, ∀v ∈ Vo} to tune the weight matrices Wk:

    L_{Go}(zv) = −log( σ(ω_{u→v} z_v^⊤ z_u) ) − Q · E_{un∼Pn(v)} log( σ(−ω_{u→v} z_v^⊤ z_{un}) )   (2)

where Pn is a negative sampling distribution, Q defines the number of negative samples, and σ is the sigmoid function.

Since object embeddings of the same category {zc, ∀c ∈ Co} should have a consistent representation, another mean aggregation is performed over the embeddings of the same category between the final GraphSAGE aggregation and the loss calculation. This overwrites the original value with the final embedding for each category: {zc ← mean(h^K_v), if Co(v) = c}.
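As an illustration of Eqs. (1) and (2), a minimal sketch of the edge-weighted mean aggregation and the unsupervised pair loss is given below (plain NumPy, no autograd; all function and variable names are ours and not part of the released code, and we take σ = tanh in the aggregator as an assumption):

    import numpy as np

    def aggregate(h, nbrs, w, Wk):
        """One edge-weighted mean-aggregation step, Eq. (1).
        h: (N, d) node features; nbrs[v]: neighbour ids of v;
        w[(u, v)]: edge weight; Wk: (d, d) trainable matrix."""
        out = np.empty_like(h)
        for v in range(h.shape[0]):
            msgs = [w[(u, v)] * h[u] for u in nbrs[v]] + [h[v]]
            out[v] = np.tanh(np.mean(msgs, axis=0) @ Wk)
        return out

    def pair_loss(z, u, v, negs, w_uv, Q=5):
        """Edge-weighted unsupervised loss of Eq. (2) for one positive
        pair (u, v); negs are ids drawn from the distribution P_n(v)."""
        sig = lambda x: 1.0 / (1.0 + np.exp(-x))
        pos = -np.log(sig(w_uv * (z[v] @ z[u])))
        neg = -Q * np.mean([np.log(sig(-w_uv * (z[v] @ z[un]))) for un in negs])
        return pos + neg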
B. Region Embedding learning

Apart from the pairwise relationship between objects, the many-to-one relationship between an object and a room or region is also indispensable for inferring the possibility that the target object exists in a certain room or among multiple observed objects. Besides, to evaluate similarity, relationships of different levels should share a unified paradigm so that their representations have consistent metrics. Therefore, for region-level sub-graphs, we opt for the same embedding representation procedure. This part is shown in Fig. 2(b).

Region embedding is carried out in a self-supervised form. We take the sub-graph Gr(Vr, Er) as input, with the embeddings of objects in the same region {zc, ∀c ∈ Co} as nodes and weighted spatial distances as edges. The batch composed of these sub-graphs is passed into a GCN [15], and the corresponding region embeddings {rv, ∀v ∈ Vr} are obtained. As in the previous procedure, a mean aggregation is performed over region embeddings with the same label to obtain a uniform vector representation {rl, ∀l ∈ Lr}. Since there is no need for multi-hop aggregation at the region level, a simple GCN layer is applied rather than GraphSAGE.

To enable membership calculation between a region embedding rl and an object embedding zc, and to distinguish regions with different labels, we use a combined loss comprising two parts: the classification loss of the embedding label and the membership loss of object-in-region:

    L_{Gr}(rv) = −log( σ(r_v^⊤ z_u) ) − Q · E_{un∼Pn(v)} log( σ(−r_v^⊤ z_{un}) ) − (1/n) Σ_{i=1}^n l_v log( l̂(rv) )   (3)

where Pn(v) represents objects not in region v, and l̂(·) is a multi-layer perceptron (MLP) network.
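A hedged sketch of Eq. (3) follows, showing how the membership terms and the label cross-entropy combine; `mlp` stands in for the classifier l̂(·) of the text, and all names are hypothetical:

    import numpy as np

    def region_loss(r_v, z_in, z_out, label_onehot, mlp, Q=5):
        """Combined region loss of Eq. (3): object-in-region membership
        terms plus a cross-entropy on the predicted region label.
        z_in: embeddings of objects in region v; z_out: samples from P_n(v)."""
        sig = lambda x: 1.0 / (1.0 + np.exp(-x))
        member = -np.mean([np.log(sig(r_v @ z)) for z in z_in])      # positives
        non_member = -Q * np.mean([np.log(sig(-(r_v @ z))) for z in z_out])
        logits = mlp(r_v)
        logp = logits - np.log(np.sum(np.exp(logits)))               # log-softmax
        return member + non_member - np.sum(label_onehot * logp)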
C. Region Rollout learning
|
| 453 |
+
As the third and most important part of relation extraction,
|
| 454 |
+
the structural relationship reasoning ability plays a crucial
|
| 455 |
+
role in understanding the correct direction of navigation and
|
| 456 |
+
shortening the exploration period. To achieve this, the joint
|
| 457 |
+
probability p(Gh) of houses need to be learned to conceive a
|
| 458 |
+
probable house layout memory Gh ∼ p(Gh|Gsub) conditioned
|
| 459 |
+
on observed regions Gsub. However, its sample space might not
|
| 460 |
+
be easily characterized. Thus, the house graphs are modeled
|
| 461 |
+
as sequences by following the idea of GraphRNN [16], and
|
| 462 |
+
redefine some concepts to make it more suitable for conditional
|
| 463 |
+
reasoning with embedding. This part is shown in Fig. 2 (c).
|
| 464 |
+
Sπ = fs(Gh, π) = (Aπ
|
| 465 |
+
1, . . . , Aπ
|
| 466 |
+
n)
|
| 467 |
+
(4)
|
| 468 |
+
where π represents the node order, and each element Aπ
|
| 469 |
+
i ∈
|
| 470 |
+
{0, 1}(i−1)×(i−1), i ∈ {1, . . . , n} is an adjacent matrix refer-
|
| 471 |
+
ring the edges between node π(vi) and its previous nodes
|
| 472 |
+
π(vj), j ∈ {1, . . . , i − 1} already in the graph.
|
| 473 |
+
Since each Aπ
|
| 474 |
+
i has variable dimensions, we first fill them up
|
| 475 |
+
to the maximum dimension n and then repeat the 2D matrix
|
| 476 |
+
16 times to form a 3D matrix with n × n × 16 dimensions as
|
| 477 |
+
an edge mask where 16 is the embedding length. Therefore, a
|
| 478 |
+
featured graph can be expressed as the element-wise product
|
| 479 |
+
of the region embedding matrix Xπ under corresponding order
|
| 480 |
+
and sequence matrix {Sπ}3D:
|
| 481 |
+
p(G) =
|
| 482 |
+
n+1
|
| 483 |
+
�
|
| 484 |
+
i=1
|
| 485 |
+
p
|
| 486 |
+
�
|
| 487 |
+
xπ
|
| 488 |
+
i | ({Sπ
|
| 489 |
+
1 }3D, . . . , {Sπ
|
| 490 |
+
i−1}3D) ⊙ Xπ
|
| 491 |
+
i−1
|
| 492 |
+
�
|
| 493 |
+
(5)
|
| 494 |
+
where Xπ
|
| 495 |
+
i−1 is the embedding matrix with (i − 1) × (i − 1) ×
|
| 496 |
+
16 dimensions referring to region embeddings before region
|
| 497 |
+
π(vi), and xπ
|
| 498 |
+
i refers to the embedding of π(vi).
Passing {S^{\pi}}_{3D} ⊙ X^{\pi} as a sequence into a GRU or LSTM, we can obtain the structure distribution of houses. This allows us to predict the next region embedding and label under the condition of the observed subgraph. The loss function of the Region Rollout network is a cross-entropy between the generated embedding label and the real label:

\mathcal{L}_{G_h}(x_i^{\pi}) = -\frac{1}{n} \sum_{i=1}^{n} l_i \, \text{log-softmax}\big[(x_i^{\pi})^{\top} r_j\big], \quad \forall j \in L_r    (6)
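A minimal sketch of this rollout and loss is shown below; the GRU hidden size of 128, the 10 region labels, and the random placeholder tensors are our illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

emb_len, n, n_labels = 16, 8, 10          # placeholder sizes
gru = nn.GRU(input_size=n * n * emb_len, hidden_size=128, batch_first=True)
head = nn.Linear(128, emb_len)            # maps hidden state to an embedding

seq = torch.randn(1, n, n * n * emb_len)  # flattened masked-embedding steps
out, _ = gru(seq)
pred = head(out[0, -1])                   # predicted next region embedding x_i

prototypes = torch.randn(n_labels, emb_len)   # label embeddings r_j
logits = prototypes @ pred                    # (x_i)^T r_j for every label j
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))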
In conclusion, with the combination of the III-A unsupervised edge-weighted GraphSAGE object embedding learning, the III-B self-supervised GCN region embedding learning, and the III-C c-GraphRNN conditional region rollout, we can now extract multiform structural relationships. Meanwhile, embedding is used as a unified paradigm for representation, and the similarity between the embeddings of objects or regions (either observed or predicted) and the target object embedding is used as a prior to guide the exploration in an unknown domain.
D. Reasoning and Exploring as a Bandit Problem
A prior alone cannot lead to success. Inspired by [10], a posterior topological representation is also constructed in each specific task to combine experience with practice. Specifically, we build a multi-layer posterior topological graph covering the object level, clique level, and vertex level. A clique divides rooms into small clustered regions and reduces the burden of the visual front-end. Each vertex governs the three nearest cliques. The Object Embedding network provides the object node features, and the Region Embedding network generates the features of both cliques and vertices from their attached objects. The Region Rollout network gives an evaluation of ghost nodes. However, there are always situations contrary to experience in reality. In other words, robots must have the ability to balance exploration and exploitation online.
We adopt the Upper Confidence Bound for Trees (UCT) method [17] to set an online bonus. The simulation procedure of UCT is supported by the Region Rollout network; thus the robot is not only able to obtain the bonus from the reached count, but can also estimate the future exploration value inductive bias ω_i of the selected path. This can effectively prevent the robot from being trapped in a useless area.

Fig. 3. In a specific task, a multi-layer topological graph is constructed based on the visual front-end, and a tree with the birthplace as the root node is abstracted from the graph. The clique refers to a collection of adjacent objects or a bunch of non-semantic obstacles, and the vertex refers to an observed navigable location. Each gray ghost node connects two vertices and only stores the relative position of the connected vertices to assist localization, without being used as a navigation sub-goal. The black ghost nodes refer to unknown areas and promote exploration.

The combined effect of the inductive bias ω and the bonus will discourage repetitive search near negative (non-success) sub-goals and drive the robot to return to parent nodes for back-tracking, which we term Revolt Reasoning. The word Revolt summarizes the characteristics of our method vividly: it allows robots to regret at nodes with low exploration value, discarding them and returning to previous paths. To avoid robots wandering between two goals, it is necessary to introduce a navigation loss term L_dis to penalize node distances. Hence, we can finally obtain the exploration value V of node i as:

\mathcal{V}(t \mid i) = \frac{\sum_{i \to j}^{m} \omega_j}{m} + c_1 \sqrt{\frac{\ln N_i}{n_i}} - c_2 L_{dis}    (7)

where the factors c_1 and c_2 are set to 1 and 0.5, j refers to one of node i's descendants in the tree, and m is their total number. N_i is the total number of arrivals at node i and its descendants, while n_i represents arrivals at node i only.
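For clarity, Eq. (7) can be computed as in the following sketch; the function name and argument layout are illustrative.

import math

def exploration_value(omega_desc, N_i, n_i, L_dis, c1=1.0, c2=0.5):
    # omega_desc: inductive-bias values omega_j of node i's m descendants.
    m = len(omega_desc)
    prior = sum(omega_desc) / m                  # averaged inductive bias
    bonus = c1 * math.sqrt(math.log(N_i) / n_i)  # UCT visitation bonus
    return prior + bonus - c2 * L_dis            # Eq. (7)

# e.g. exploration_value([0.4, 0.7, 0.2], N_i=12, n_i=3, L_dis=1.5)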
E. Online constructed Voronoi local graph
The reasoner only gives a semantic node id in a graph as a sub-goal. If the low-level controller directly used it as a navigation goal, it would inevitably lead to over-coupling and increase the difficulty of navigation success. We can refer to the hierarchical human central nervous system composed of the brain, cerebellum, brain-stem, and spinal cord [18]: if the high-level reasoner is compared to the brain, then the skeletal muscle is the low-level motor controller. The brain does not directly transmit motion instructions to the skeletal muscles, but passes them through the brain-stem, spinal cord, and other lower-level central nervous structures for information conversion [19]. Besides, the brain does not actually support high-speed, low-latency information interaction while controlling a motion [20]. Therefore, it is necessary to use an RGB-D camera and an odometer to construct a local Voronoi graph, offering approximate relative coordinates of the sub-goal within a reachable range as an input to the low-level controller. The Voronoi graph can record the relationship between the robot and obstacles, and provide an available path. Since the TDN task is map-less, we construct a local Voronoi graph online within a fixed number of steps.

Fig. 4. Combining the depth information with the robot's pose over a short period, we can get a simple 3D reconstruction result. A Voronoi local graph can be constructed through DBSCAN clustering after projecting the 3D map as a 2D obstacle scatter plot.

Conditioning on the depth information, the camera parameters (intrinsic and extrinsic), and the odometer information, obstacles in depth images can easily be converted into coordinates in a world coordinate system. This system is derived from the birth pose of the robot. Projecting this partially reconstructed 3D map onto a 2D plane along the vertical axis forms a scatter diagram depicting obstacles. We can construct a Voronoi diagram online by segmenting navigable paths and explorable cliques with multiple related objects. Different from traditional methods [21], we use DBSCAN [22], [23] (a density-based clustering algorithm) to first cluster the scattered points of adjacent obstacles into convex hulls and filter out noise points. We then construct a Delaunay triangulation with the centers of the scattered points in the convex hulls, thereby generating a Voronoi diagram.
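A minimal sketch of this pipeline with scikit-learn and SciPy is shown below; the eps and min_samples values are placeholders that would need tuning to the sensor scale, and scipy.spatial.Voronoi is used directly since the Voronoi diagram is the dual of the Delaunay triangulation of the cluster centers (at least four centers are required in 2D).

import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import Voronoi

def local_voronoi(obstacle_pts, eps=0.3, min_samples=5):
    # Cluster the 2D obstacle scatter; DBSCAN labels noise points as -1.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_pts)
    centers = np.stack([obstacle_pts[labels == k].mean(axis=0)
                        for k in np.unique(labels) if k != -1])
    # The Voronoi diagram of the cluster centers traces navigable ridges
    # between obstacle cliques.
    return Voronoi(centers)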
Fig. 5. The semantic sub-goal is converted into relative coordinates by the Voronoi-based intermediate-level planner.
F. Hierarchical reasoning and planning for navigation
In this section, we summarize how the proposed reasoner and planner cooperate to complete navigation tasks. The curves in Fig. 5 show the correspondence of concepts between the topological graph in the reasoner and the Voronoi diagram in the planner. An aggregation of obstacles is regarded as a clique, each of which attaches and records all objects in its convex hull, and evaluates its inductive bias value according to the object-in-region membership via the Region Embedding network. The position of a vertex is generated by the Voronoi diagram. The multiple cliques and their subordinate objects surrounding a vertex jointly determine its general room label, which is used for the inductive bias evaluation. Relative directions and distances between two adjacent vertex nodes are stored in gray ghost nodes. Since the robot exploits relative coordinates and directions, it effectively avoids the influence of odometer and depth camera errors, and is thus insensitive to cumulative error. Besides, thanks to the local Voronoi diagram, only short-period scatter data need to be saved, and there is no need to consider the closed-loop matching problem as in SLAM.
With the construction of the Voronoi diagram and its transformation into a hierarchical topology, we can conduct reasoning at the vertex/clique level and the object level, searching for the best vertex position and the most likely clique based on the exploration value. After selecting a clique, the robot will navigate towards it and explore it more explicitly as object-level reasoning. Besides, the Voronoi diagram provides the evidence for choosing the next best view of a clique. By changing multiple perspectives, the robot can find the target object in a clique more efficiently.
IV. EXPERIMENTS
A. Experiment Setup
We use the Habitat simulator [24] with the Matterport3D [25] environment as our experiment platform. The Habitat simulator is a 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling. The Matterport3D dataset contains 90 houses with 40 categories of objects and 31 labels of regions. It also provides detailed object and region segmentation information. Here we focus on the 21 categories of target objects required by the task: chair, table, picture, cabinet, cushion, sofa, bed, chest of drawers, plant, sink, toilet, stool, towel, tv monitor, shower, bathtub, counter, fireplace, gym equipment, seating, and clothes, and we ignore some meaningless room labels, such as outdoor, no label, other room, and empty room. We use YOLOv4 [26] as our object detection module, which is fine-tuned on objects in the Matterport3D dataset. Because the aim of the low-level controller is the same as that of the PointNav task [27], we adopt a pre-trained state-of-the-art PointNav method, occupancy anticipation [28], as our controller.
During a specific TDN task, the robot is spawned at a random location in a certain house and is required to find an object of a given category as quickly as possible. The task is evaluated with three commonly used indicators: Success Rate (SR), Success weighted by Path Length (SPL), and Distance to Success (DTS). SR represents the fraction of episodes in which the target was found and is defined as \frac{1}{N} \sum_{i=1}^{N} Su_i, where N is the number of total episodes and Su_i is a binary value representing the success or failure of the i-th episode. SPL accounts for both success and the optimal path length; it is defined as \frac{1}{N} \sum_{i=1}^{N} S_i \frac{L_i}{\max(P_i, L_i)}, where we use the shortest length provided by the simulator as L_i and the path length of the robot as P_i in episode i. DTS is the distance of the agent from the success threshold boundary when the episode ends. The boundary is set to 1 m and the maximum episode length is 500 steps, the same as in [11].
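The two aggregate metrics can be computed as in this short sketch (the helper names are our own):

def success_rate(successes):
    # SR = (1/N) * sum_i Su_i, with Su_i in {0, 1}
    return sum(successes) / len(successes)

def spl(successes, shortest, actual):
    # SPL = (1/N) * sum_i S_i * L_i / max(P_i, L_i)
    return sum(s * l / max(p, l)
               for s, l, p in zip(successes, shortest, actual)) / len(successes)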
Furthermore, our navigation task has two modes: independent (ReVoLT-i) and continuous (ReVoLT-c). The independent mode is the traditional one: the environment is reset after each episode and the robot clears its memory. The continuous mode instead allows the robot to keep the topological graph if it resets in the same house. It is used for evaluating the robot's capability of keeping and updating the environment memory.
B. Baselines
Random: At each step, the agent randomly samples an action from the action space with a uniform distribution.
RGBD + DD-PPO: This baseline is provided by the ObjectNav Challenge 2020 [24]. It directly passes RGB-D information to an end-to-end DD-PPO and outputs an action from the policy.
Active Neural SLAM: This baseline uses an exploration policy trained to maximize coverage from [2], followed by the heuristic-based local policy described above.
SemExp: Since [11] has not open-sourced their code, we directly use the results in their paper as the state-of-the-art method.
C. Results
|
| 775 |
+
1) results of combinatorial relation embeddings: The Ob-
|
| 776 |
+
ject Embedding network obtains classification accuracy of
|
| 777 |
+
91%. The Region Embedding network obtains membership
|
| 778 |
+
accuracy of 78% and classification accuracy of 75%. The
|
| 779 |
+
Region Rollout network reaches prediction accuracy of 45%
|
| 780 |
+
in the test set, which is acceptable since room relationships
|
| 781 |
+
are not significant inherently.
|
| 782 |
+
2) results of the whole TDN task: The results of baseline
|
| 783 |
+
methods and ReVoLT is shown in Table II. It can be seen
|
| 784 |
+
that both of our models significantly outperform the current
|
| 785 |
+
state-of-the-art. ReVoLT-i small has ≈ 80% increase in SR
|
| 786 |
+
and nearly twice than SemExp in SPL. This confirms our
|
| 787 |
+
hypothesis that separating prior learning and control policy in a
|
| 788 |
+
hierarchical framework is indeed a wise approach than directly
|
| 789 |
+
|
| 790 |
+
13
|
| 791 |
+
12
|
| 792 |
+
11
|
| 793 |
+
10
|
| 794 |
+
6
|
| 795 |
+
8
|
| 796 |
+
6
|
| 797 |
+
5
|
| 798 |
+
0
|
| 799 |
+
8
|
| 800 |
+
10
|
| 801 |
+
12���������
|
| 802 |
+
���������
|
| 803 |
+
���������
|
| 804 |
+
���������
|
| 805 |
+
���������
|
| 806 |
+
���������
|
| 807 |
+
���������
|
| 808 |
+
���������
|
| 809 |
+
Fig. 6.
|
| 810 |
+
Top-down maps of four successful tasks while using ReVoLT-i.
|
| 811 |
+
The blue squares are the beginning positions, the blue curves are the robot
|
| 812 |
+
trajectories, and arrows represent the robot’s current positions. Targets are
|
| 813 |
+
highlighted with green boxes, and pink areas refer to the success threshold
|
| 814 |
+
boundary. The color of the trajectory is a gradient from dark to light, and the
|
| 815 |
+
brighter the end indicates the longer the path.
|
| 816 |
+
TABLE II
|
| 817 |
+
PERFORMANCE COMPARISON
|
| 818 |
+
Method
|
| 819 |
+
SR(%)
|
| 820 |
+
SPL
|
| 821 |
+
DTS (m)
|
| 822 |
+
Random
|
| 823 |
+
0
|
| 824 |
+
0
|
| 825 |
+
10.3298
|
| 826 |
+
RGBD + DD-PPO
|
| 827 |
+
6.2
|
| 828 |
+
0.021
|
| 829 |
+
9.3162
|
| 830 |
+
Active Neural SLAM
|
| 831 |
+
32.1
|
| 832 |
+
0.119
|
| 833 |
+
7.056
|
| 834 |
+
SemExp1
|
| 835 |
+
36.0
|
| 836 |
+
0.144
|
| 837 |
+
6.733
|
| 838 |
+
ReVoLT-i small∗
|
| 839 |
+
66.7
|
| 840 |
+
0.265
|
| 841 |
+
0.9762
|
| 842 |
+
ReVoLT-i∗
|
| 843 |
+
62.5
|
| 844 |
+
0.102
|
| 845 |
+
1.0511
|
| 846 |
+
ReVoLT-c∗
|
| 847 |
+
85.7
|
| 848 |
+
0.070
|
| 849 |
+
0.0253
|
| 850 |
+
1 The 1st prize of AI Habitat 2020
|
| 851 |
+
* These three refer to small mode with only 6 categories target like SemExp,
|
| 852 |
+
independence mode (-i) and continuous mode (-c) of ReVoLT.
|
| 853 |
+
learning a semantically-aware policy. Besides, the standard
|
| 854 |
+
ReVoLT-i with 19 categories of targets still achieve a higher SR
|
| 855 |
+
and SPL. By applying the continuous mode, the robot retains
|
| 856 |
+
a memory belonging to the same house, which allows it to find
|
| 857 |
+
observed targets with a higher SR.
V. ABLATION STUDY
The success of ReVoLT is attributed to the relationship priors provided by the combinatorial graph neural networks, the online bonus by UCT, and the distance penalty. Therefore, we set up three extra experiments with the same Voronoi-based planner and low-level controller to reveal their respective impacts. Moreover, the results of the continuous mode are also presented below. The performance of all variants is listed in Table III.
ReVoLT w/o relationship priors. Sub-goals in navigation without priors are generated according to the distance of the observed cliques. Comparing Fig. 7 (a) with Fig. 6, we find that the lack of semantic relationships profoundly affects the robot's path decisions, making it uninterested in the region containing a target even though it is just nearby. Besides, the lack of region classification and region rollout makes the robot unable to use the observed semantic information to reason about relationships, resulting in longer paths.

Fig. 7. In response to the three parts of the exploration value function, we conduct ablation experiments respectively and illustrate them in top-down maps.

TABLE III
PERFORMANCE OF ABLATION EXPERIMENTS

Method                SR (%)   SPL     DTS (m)
ReVoLT-i              62.5     0.102   1.0511
ReVoLT-c              85.7     0.070   0.0253
ReVoLT w/o priors     25.0     0.003   1.4129
ReVoLT w/o bonus      60.0     0.034   0.8139
ReVoLT w/o distance   54.5     0.030   1.2689

ReVoLT w/o UCT bonus. The bonus is replaced with a fixed threshold: if the robot reaches the same clique or vertex node more than twice, that node will no longer be selected as a sub-goal. The corresponding top-down maps are illustrated in Fig. 7 (b). Without a UCT bonus, the robot lingers in an unpromising local region until the threshold is reached.
ReVoLT w/o distance penalty. In Fig. 7 (c), using only priors and bonuses can also complete tasks, but the paths are longer due to fluctuating decisions.
ReVoLT with continuous mode. The left figure of Fig. 7 (d) is the same as the one in Fig. 6. However, when searching for the second target in this house, once the robot associates current observations with its memory, it can find the target with a higher success rate. However, this also makes the robot focus more on exploitation and reduces exploration, which may cause it to ignore closer targets and leads to a lower SPL.
To sum up, relationship priors are essential for robots to understand environment semantics, and they are the major factor affecting SR. The UCT bonus and distance penalty contribute to the improvement of SPL. ReVoLT-c maintains a long-term scene memory and achieves outstanding performance.
VI. CONCLUSION
We propose ReVoLT, a hierarchical reasoning target-driven navigation framework that combines combinatorial graph relation extraction with online UCT decision-making operating on a multi-layer topological graph. ReVoLT shows better performance in exploiting prior relationships, and its bandit reasoning is more reasonable and efficient. To bridge the gap between existing point-goal controllers and our reasoner, we adopt the Voronoi local graph for the semantic-spatial transition. However, some significant challenges remain in this field. Our future directions lie in using representation learning techniques to introduce richer object information such as shape, color, and size; using scene graph detection to introduce richer semantic relation information such as furniture layout; and tackling richer tasks such as object instance navigation.
REFERENCES
[1] M. Hoffmann and R. Pfeifer, "The implications of embodiment for behavior and cognition: animal and robotic case studies," arXiv preprint arXiv:1202.0440, 2012.
[2] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov, "Learning to explore using active neural slam," in International Conference on Learning Representations, 2019.
[3] K. Chatzilygeroudis, V. Vassiliades, F. Stulp, S. Calinon, and J.-B. Mouret, "A survey on policy search algorithms for learning robot controllers in a handful of trials," IEEE Transactions on Robotics, vol. 36, no. 2, pp. 328–347, 2019.
[4] W. Yang, X. Wang, A. Farhadi, A. Gupta, and R. Mottaghi, "Visual semantic navigation using scene priors," arXiv preprint arXiv:1810.06543, 2018.
[5] H. Du, X. Yu, and L. Zheng, "Learning object relation graph and tentative policy for visual navigation," in European Conference on Computer Vision, pp. 19–34, Springer, 2020.
[6] Y. Qiu, A. Pal, and H. I. Christensen, "Learning hierarchical relationships for object-goal navigation," 2020.
[7] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Advances in Neural Information Processing Systems (NeurIPS), 2017.
[8] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi, "Ai2-thor: An interactive 3d environment for visual ai," arXiv preprint arXiv:1712.05474, 2017.
[9] Y. Wu, Y. Wu, A. Tamar, S. Russell, G. Gkioxari, and Y. Tian, "Bayesian relational memory for semantic visual navigation," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2769–2779, 2019.
[10] D. S. Chaplot, R. Salakhutdinov, A. Gupta, and S. Gupta, "Neural topological slam for visual navigation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12875–12884, 2020.
[11] D. S. Chaplot, D. P. Gandhi, A. Gupta, and R. R. Salakhutdinov, "Object goal navigation using goal-oriented semantic exploration," Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020.
[12] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans, "ObjectNav Revisited: On Evaluation of Embodied Agents Navigating to Objects," arXiv:2006.13171, 2020.
[13] M. Wortsman, K. Ehsani, M. Rastegari, A. Farhadi, and R. Mottaghi, "Learning to learn how to learn: Self-adaptive visual navigation using meta-learning," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6743–6752, 2019.
[14] J. Pennington, R. Socher, and C. D. Manning, "Glove: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014.
[15] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations (ICLR), 2017.
[16] J. You, R. Ying, X. Ren, W. Hamilton, and J. Leskovec, "Graphrnn: Generating realistic graphs with deep auto-regressive models," in International Conference on Machine Learning, pp. 5708–5717, 2018.
[17] P.-A. Coquelin and R. Munos, "Bandit algorithms for tree search," in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, pp. 67–74, 2007.
[18] D. Purves, R. Cabeza, S. A. Huettel, K. S. LaBar, M. L. Platt, M. G. Woldorff, and E. M. Brannon, Cognitive Neuroscience. Sunderland: Sinauer Associates, Inc, 2008.
[19] E. Bizzi, M. C. Tresch, P. Saltiel, and A. d'Avella, "New perspectives on spinal motor systems," Nature Reviews Neuroscience, vol. 1, no. 2, pp. 101–108, 2000.
[20] D. A. Rosenbaum, Human Motor Control. Academic Press, 2009.
[21] R. Mahkovic and T. Slivnik, "Generalized local voronoi diagram of visible region," in Proceedings of the 1998 IEEE International Conference on Robotics and Automation, vol. 1, pp. 349–355, IEEE, 1998.
[22] K. Khan, S. U. Rehman, K. Aziz, S. Fong, and S. Sarasvady, "Dbscan: Past, present and future," in The Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2014), pp. 232–238, IEEE, 2014.
[23] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu, "Dbscan revisited, revisited: why and how you should (still) use dbscan," ACM Transactions on Database Systems (TODS), vol. 42, no. 3, pp. 1–21, 2017.
[24] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra, "Habitat: A Platform for Embodied AI Research," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[25] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, and Y. Zhang, "Matterport3D: Learning from RGB-D data in indoor environments," International Conference on 3D Vision (3DV), 2017.
[26] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "Yolov4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[27] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra, "Sim2real predictivity: Does evaluation in simulation predict real-world performance?," 2019.
[28] S. K. Ramakrishnan, Z. Al-Halah, and K. Grauman, "Occupancy anticipation for efficient exploration and navigation," in European Conference on Computer Vision, pp. 400–418, Springer, 2020.
2tE0T4oBgHgl3EQfdwDZ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
39E3T4oBgHgl3EQfQAlh/content/2301.04408v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:951bd9bc89217cb496e7f9b4d64a8f16b01c43f14e5c8352e1e72cafeef4a045
size 222924
39E3T4oBgHgl3EQfQAlh/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9237b06040f0468c198f7fc043ef0a089b7ffa148d0242e7af0f4c6d1f33998c
size 2490413
39E3T4oBgHgl3EQfQAlh/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67093fffd9c4ec2720489de53113f242425f7ecbd42adbf3c2d6ffb0a5469b8f
size 103685
3NAzT4oBgHgl3EQf9P6r/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b96c03cdc0d6f4bd82b5b51529cfb9b6ff0a5125e7c53626e3581855e14d270
size 3604525
4tE2T4oBgHgl3EQfOQbV/content/2301.03747v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8fbb34cca9ed18d4bb687d691b8e307578cb9d8554c646db4773fd4eb5d6e0e0
size 1190020
4tE2T4oBgHgl3EQfOQbV/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:537df5ca6bff95fd77a61132eaa398abe5eafede2bb8cab340251d47e4ec6380
size 203671
5NE2T4oBgHgl3EQfOgaA/content/tmp_files/2301.03749v1.pdf.txt ADDED
@@ -0,0 +1,2470 @@
Markovian Sliced Wasserstein Distances: Beyond Independent Projections

Khai Nguyen    Tongzheng Ren    Nhat Ho
The University of Texas at Austin

January 11, 2023

Abstract

Sliced Wasserstein (SW) distance suffers from redundant projections due to independent uniform random projecting directions. To partially overcome the issue, the max K sliced Wasserstein (Max-K-SW) distance (K ≥ 1) seeks the best discriminative orthogonal projecting directions. Despite being able to reduce the number of projections, the metricity of Max-K-SW cannot be guaranteed in practice due to the non-optimality of the optimization. Moreover, the orthogonality constraint is also computationally expensive and might not be effective. To address the problem, we introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distance, which imposes a first-order Markov structure on projecting directions. We discuss various members of MSW by specifying the Markov structure, including the prior distribution, the transition distribution, and the burning and thinning technique. Moreover, we investigate the theoretical properties of MSW, including topological properties (metricity, weak convergence, and connection to other distances), statistical properties (sample complexity and Monte Carlo estimation error), and computational properties (computational complexity and memory complexity). Finally, we compare MSW distances with previous SW variants in various applications such as gradient flows, color transfer, and deep generative modeling to demonstrate the favorable performance of MSW¹.

¹Code for the experiments will be published at https://github.com/UT-Austin-Data-Science-Group/MSW.

1 Introduction
Sliced Wasserstein (SW) [7] distance has been well-known as a great alternative statistical distance to the Wasserstein distance [60, 52]. In short, SW takes the average of Wasserstein distances between corresponding pairs of one-dimensional projected measures as the distance between the two original measures. Because of that, SW has a low computational complexity compared to the conventional Wasserstein distance due to the closed-form solution of optimal transport in one dimension. When the probability measures have at most n supports, the computational complexity of SW is only O(n log n). This complexity is much lower than the computational complexity O(n³ log n) of the Wasserstein distance and the complexity O(n²) [1, 34, 35, 33] of entropic Wasserstein [11] (Sinkhorn divergence). Moreover, the memory complexity of SW is O(n), which is lower than the memory complexity O(n²) of the Wasserstein (Sinkhorn) distance. The reason is that SW does not need to store the cost matrix between supports, which costs O(n²). An additional appealing property of SW is that it does not suffer from the curse of dimensionality; namely, its sample complexity is O(n^{-1/2}) [40, 49] compared to O(n^{-1/d}) [19] for the Wasserstein distance (d is the number of dimensions).
Due to its scalability, SW has been applied to almost all applications where the Wasserstein distance is used. For example, we refer to some applications of SW such as generative modeling [63, 15, 27, 42], domain adaptation [30], clustering [28], approximate Bayesian computation [39], gradient flows [36, 5], and variational inference [64]. Moreover, there have been many attempts to improve SW. The generalized sliced Wasserstein (GSW) distance, which uses non-linear projections, is proposed in [26]. The distributional sliced Wasserstein distance is proposed in [44, 47] by replacing the uniform distribution on the projecting directions in SW with an estimated distribution that puts high probabilities on discriminative directions. The spherical sliced Wasserstein, which is defined between distributions that have their supports on the hypersphere, is introduced in [4]. A sliced Wasserstein variant between probability measures over images with convolution is defined in [43].
Despite these many improvements, one property common to previous variants of SW is that they use independent projecting directions sampled from a distribution over a space of projecting directions, e.g., the unit hypersphere. Those projecting directions are further utilized to project the two measures of interest to corresponding pairs of one-dimensional measures. Due to the independence, practitioners have reported that many projections do not have the power to discriminate between two input probability measures [26, 15]. Moreover, having a lot of projections leads to redundancy and wasted computation on uninformative pairs of projected measures. This problem is known as the projection complexity limitation of SW.
To partially address the issue, the max sliced Wasserstein (Max-SW) distance is introduced in [14]. Max-SW seeks the best projecting direction that maximizes the projected Wasserstein distance. Since Max-SW contains a constrained optimization problem, a projected sub-gradient ascent algorithm is performed. Because the algorithm is only guaranteed to reach a local maximum [49], the performance of the empirical estimation of Max-SW is not stable in practice [42], since the metricity of Max-SW is only attained at the global optimum. Another approach is to force orthogonality between projecting directions. In particular, K-sliced Wasserstein [53] (K-SW) uses K > 1 orthogonal projecting directions. Moreover, to generalize Max-SW and K-SW, the max-K sliced Wasserstein (Max-K-SW) distance (K > 1) appears in [12] to find the best K projecting directions that are orthogonal to each other via a projected sub-gradient ascent algorithm. Nevertheless, the orthogonality constraint is computationally expensive and might not reflect the discrepancy between general measures well. Moreover, Max-K-SW also suffers from the non-optimality problem, which leads to losing the metricity property in practice.
To avoid the independence and to satisfy the requirement of creating informative projecting directions efficiently, we propose to impose a sequential structure on projecting directions. Namely, we choose a new projecting direction based on the previously chosen directions. For more efficiency in computation, we consider a first-order Markovian structure in this paper, which means that a projecting direction can be sampled by using only the previous direction. The first projecting direction can follow any type of distribution on the unit hypersphere that has been used in the literature, e.g., the uniform distribution [7] or the von Mises-Fisher distribution [23, 47], to guarantee metricity. For the transition distribution of the second and later projecting directions, we propose three types of families: random walk transitions, orthogonal-based transitions, and input-awared transitions. For the random walk transition, we use the von Mises-Fisher distribution with the previous projecting direction as the mean as the conditional distribution. For the orthogonal-based transition, we choose the projecting direction uniformly on the unit hypersphere such that it is orthogonal to the previous direction. In contrast to the previous two transitions, which do not use information from the two input measures, the input-awared transition uses the sub-gradient, with respect to the previous projecting direction, of the corresponding projected Wasserstein distance between the two measures to design the transition. In particular, a projected sub-gradient update is used to create the new projecting direction. Moreover, we further improve the computational time and memory by introducing burning and thinning techniques to reduce the number of random projecting directions.
|
| 97 |
+
Contribution: In summary, our contributions are two-fold:
|
| 98 |
+
1. We propose a novel family of distances on the space of probability measures, named Markovian
|
| 99 |
+
sliced Wasserstein (MSW) distances. MSW considers a first-order Markovian structure on random
|
| 100 |
+
projecting directions. Moreover, we derive three variants of MSW that use three different types of
|
| 101 |
+
conditional transition distributions: random walk, orthogonal-based, and input-awared. We investigate
|
| 102 |
+
the theoretical properties of MSW including topological properties (metricity, weak convergence,
|
| 103 |
+
and connection to other distances), statistical properties (sample complexity, and Monte Carlo
|
| 104 |
+
estimation error), and computational properties (computational complexity and memory complexity).
|
| 105 |
+
Moreover, we introduce a burning and thinning approach to further reduce computational and
|
| 106 |
+
memory complexity, and we discuss the properties of the resulting distances.
|
| 107 |
+
2. We conduct experiments to compare MSW with SW, Max-SW, K-SW, and Max-K-SW in various
|
| 108 |
+
applications, namely, gradient flows, color transfer, and deep generative models on standard image
|
| 109 |
+
datasets: CIFAR10 and CelebA. We show that the input-awared MSW can yield better qualitative
|
| 110 |
+
and quantitative performance while consuming less computation than previous distances in gradient
|
| 111 |
+
flows and color transfer, and comparable computation in deep generative modeling. Finally, we
|
| 112 |
+
investigate the role of hyper-parameters of distances e.g., the number of projections, the number of
|
| 113 |
+
time-steps, and so on, in applications.
|
| 114 |
+
Organization. We first provide background on the Wasserstein distance, the sliced Wasserstein distance, and the max sliced Wasserstein distance in Section 2. In Section 3, we propose Markovian sliced Wasserstein distances and derive their theoretical properties. Section 4 contains the comparison of MSW to previous SW variants in gradient flows, color transfer, and deep generative modeling. We then conclude the paper in Section 5. Finally, we defer the proofs of key results and supplementary materials to the Appendices.
Notation. For p ≥ 1, P_p(R^d) is the set of all probability measures on R^d that have finite p-th moment. For any d ≥ 2, we denote by U(S^{d−1}) the uniform measure over the unit hypersphere S^{d−1} := {θ ∈ R^d | ∥θ∥_2^2 = 1}. For any two sequences a_n and b_n, the notation a_n = O(b_n) means that a_n ≤ C b_n for all n ≥ 1, where C is some universal constant. We denote by θ♯µ the push-forward measure of µ through the function f : R^d → R with f(x) = θ^⊤x.
2 Background

We start by reviewing the Wasserstein distance, sliced Wasserstein distances, their computation techniques, and their limitations.
Wasserstein distance: Given two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d), the Wasserstein distance [60, 51] between µ and ν is:

W_p^p(µ, ν) = inf_{π∈Π(µ,ν)} ∫_{R^d×R^d} ∥x − y∥_p^p dπ(x, y),   (1)

where Π(µ, ν) is the set of all couplings whose marginals are µ and ν, respectively. The computational complexity and memory complexity of the Wasserstein distance are O(n³ log n) and O(n²), respectively, when µ and ν have at most n supports. When d = 1, the Wasserstein distance can be computed in closed form: W_p^p(µ, ν) = ∫_0^1 |F_µ^{−1}(z) − F_ν^{−1}(z)|^p dz, where F_µ and F_ν are the cumulative distribution functions (CDF) of µ and ν, respectively.
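This closed form is what makes slicing computationally attractive. As an illustration, here is a minimal sketch (ours, not from the paper) of W_p^p between two one-dimensional empirical measures with the same number of uniformly weighted supports, where the quantile formula reduces to matching sorted samples:

import numpy as np

def wasserstein_1d(x, y, p=2):
    """W_p^p between two 1-D empirical measures with equal numbers of
    uniformly weighted support points: match sorted samples (quantiles)."""
    x_sorted = np.sort(x)
    y_sorted = np.sort(y)
    return np.mean(np.abs(x_sorted - y_sorted) ** p)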
Sliced Wasserstein distance: By randomly projecting two high-dimensional measures of interest to corresponding pairs of one-dimensional measures, the sliced Wasserstein (SW) distance exploits the closed-form benefit of the Wasserstein distance in one dimension. The sliced Wasserstein distance [7] between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is defined as:

SW_p^p(µ, ν) = E_{θ∼U(S^{d−1})} [W_p^p(θ♯µ, θ♯ν)].   (2)

Monte Carlo samples are often used to approximate the intractable expectation unbiasedly: ŜW_p^p(µ, ν) = (1/L) Σ_{l=1}^{L} W_p^p(θ_l♯µ, θ_l♯ν), where θ_1, . . . , θ_L are drawn randomly from U(S^{d−1}). When µ and ν are discrete measures with at most n supports in d dimensions, the computational complexity of SW is O(Ln log₂ n + Ldn), and the memory complexity for storing the projecting directions and the projected supports of SW is O(L(d + n)). Here, Ln log₂ n is for sorting the L sets of projected supports and Ldn is for projecting the supports to L sets of scalars.
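A minimal sketch of this Monte Carlo estimator (our illustration; function and variable names are not from the paper):

import numpy as np

def sliced_wasserstein(X, Y, L=100, p=2, rng=None):
    """Monte Carlo estimate of SW_p^p between two empirical measures,
    given as (n, d) arrays X and Y with uniform weights."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Uniform directions on S^{d-1}: normalize Gaussian samples.
    thetas = rng.standard_normal((L, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    total = 0.0
    for theta in thetas:
        proj_x = np.sort(X @ theta)  # 1-D projections of the supports
        proj_y = np.sort(Y @ theta)
        total += np.mean(np.abs(proj_x - proj_y) ** p)
    return total / L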
Max sliced Wasserstein distance: To select the most discriminative projecting direction, the max sliced Wasserstein (Max-SW) distance [14] between µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is introduced as follows:

Max-SW_p(µ, ν) = max_{θ∈S^{d−1}} W_p(θ♯µ, θ♯ν).   (3)

Computing Max-SW requires solving a constrained optimization problem. In practice, a projected sub-gradient ascent algorithm with T > 1 iterations is often used to obtain a surrogate projecting direction θ̂_T for the global optimum. Hence, the empirical Max-SW distance is Max-SW̑_p(µ, ν) = W_p(θ̂_T♯µ, θ̂_T♯ν). The details of the projected sub-gradient ascent algorithm are given in Algorithm 1 in Appendix A.1. The computational complexity of Max-SW is O(Tn log₂ n + Tdn) and the memory complexity of Max-SW is O(d + n). It is worth noting that projected sub-gradient ascent can only guarantee a local maximum [49]. Therefore, the empirical Max-SW might not be a distance even when T → ∞, since the metricity of Max-SW is only obtained at the global maximum.
K sliced Wasserstein distance: The authors in [53] propose to estimate the sliced Wasserstein distance based on orthogonal projecting directions. We refer to this distance as the K sliced Wasserstein distance (K-SW). The definition of K-SW between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

K-SW_p(µ, ν) = E[ (1/K) Σ_{i=1}^{K} W_p^p(θ_i♯µ, θ_i♯ν) ],   (4)

where the expectation is with respect to (θ_1, . . . , θ_K) ∼ U(V_K(R^d)), and V_K(R^d) = {(θ_1, . . . , θ_K) ∈ (S^{d−1})^K | ⟨θ_i, θ_j⟩ = 0 ∀ i ≠ j ≤ K} is the Stiefel manifold. The expectation can be approximated with Monte Carlo samples (θ_{1l}, . . . , θ_{Kl})_{l=1}^{L} from U(V_K(R^d)). In the original paper, L is set to 1. Sampling from the uniform distribution over the Stiefel manifold U(V_K(R^d)) requires the Gram–Schmidt orthogonalization process, which has computational complexity O(K²d) (quadratic in K). Therefore, the total computational complexity of K-SW is O(LKn log₂ n + LKdn + LK²d) and the memory complexity of K-SW is O(LK(d + n)). More details related to K-SW, including the Gram–Schmidt process and sampling uniformly from the Stiefel manifold, are given in Appendix A.1.
Max K sliced Wasserstein distance: To generalize both Max-SW and K-SW, the max K sliced Wasserstein distance is introduced in [12]. Its definition between µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

Max-K-SW_p^p(µ, ν) = max_{(θ_1,...,θ_K)∈V_K(R^d)} [ (1/K) Σ_{i=1}^{K} W_p^p(θ_i♯µ, θ_i♯ν) ].   (5)

Similar to Max-SW, a projected sub-gradient ascent algorithm with T > 1 iterations is used to approximate Max-K-SW. We refer the reader to Algorithm 4 in Appendix A.1 for greater detail. Since the projection operator onto the Stiefel manifold is the Gram–Schmidt process, the computational complexity of Max-K-SW is O(TKn log₂ n + TKdn + TK²d). The memory complexity of Max-K-SW is O(K(d + n)). Similar to Max-SW, the metricity of Max-K-SW is only obtained at the global optimum; hence, the empirical estimation might not be stable. Moreover, the orthogonality constraint is also computationally expensive, i.e., quadratic in the number of orthogonal projections K.
3 Markovian Sliced Wasserstein Distances

As discussed, the limitations of the previous works are independent projecting directions, computationally expensive dependency, and the loss of asymptotic metricity. In order to address those limitations, we propose to impose a dependency between projecting directions via a first-order Markov chain. By doing so, a new projecting direction can be created efficiently while being dependent on previous projecting directions. In this section, we first define the Markovian sliced Wasserstein (MSW) distance and discuss its theoretical properties, including topological, statistical, and computational properties, in Section 3.1. In Section 3.2, we discuss some choices in designing the Markov chain, including the prior distribution and the transition distribution. Finally, we discuss the burned and thinned variant of MSW, which can reduce the computational and memory complexity, in Section 3.3.
3.1 Definitions, Topological, Statistical, and Computational Properties

We first start with a general definition of the Markovian sliced Wasserstein distance in Definition 1.
Definition 1. For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, the Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T}^p(µ, ν) = E[ (1/T) Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) ],   (6)

where T is the number of time steps, the expectation is under the projecting distribution θ_{1:T} ∼ σ(θ_{1:T}) with σ(θ_{1:T}) = σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), and σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, . . . , T.
The first projecting direction θ_1 follows the distribution σ_1(θ_1), where σ_1(θ_1) can be any distribution on the unit hypersphere, e.g., the uniform distribution, a von Mises-Fisher distribution, and so on. By designing the transition distribution σ_t(θ_t|θ_{t−1}), we can obtain various variants of MSW. Before going into the specific design of those distributions, we first discuss the empirical estimation of MSW and investigate its theoretical properties, including topological, statistical, and computational properties.
Monte Carlo estimation: Similar to SW, we also need Monte Carlo samples to approximate the expectation in Definition 1. We first sample θ_{11}, . . . , θ_{L1} ∼ σ_1(θ_1) for L ≥ 1; then we sample θ_{lt} ∼ σ_t(θ_t|θ_{l(t−1)}) for t = 2, . . . , T and l = 1, . . . , L. After that, we can form an unbiased empirical estimate of MSW as follows:

MSW̑_{p,T}^p(µ, ν) = (1/(LT)) Σ_{l=1}^{L} Σ_{t=1}^{T} W_p^p(θ_{lt}♯µ, θ_{lt}♯ν).
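As an illustration of this estimator, the following sketch (ours, not from the paper) computes the MSW Monte Carlo approximation for any transition rule; plugging in the transitions of Section 3.2 yields rMSW-, oMSW-, and iMSW-style variants. It assumes equal-size, uniformly weighted empirical measures:

import numpy as np

def msw_estimate(X, Y, transition, L=10, T=10, p=2, rng=None):
    """Monte Carlo estimate of MSW_p^p. `transition(theta, rng)` maps the
    previous direction to the next one (the first-order Markov step)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(d)  # prior: uniform on S^{d-1}
        theta /= np.linalg.norm(theta)
        for _ in range(T):
            px, py = np.sort(X @ theta), np.sort(Y @ theta)
            total += np.mean(np.abs(px - py) ** p)
            theta = transition(theta, rng)
    return total / (L * T)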
Topological properties: We first state the following assumption. A1: In MSW, the prior distribution σ_1(θ_1) is supported on the whole unit hypersphere, or there exists a transition distribution σ_t(θ_t|θ_{t−1}) that is supported on the whole unit hypersphere. Assumption A1 is easy to satisfy, and it holds for all later choices of the prior distribution and transition distribution. We now consider the metricity of the Markovian sliced Wasserstein distance.
Theorem 1 (Metricity). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the Markovian sliced Wasserstein MSW_{p,T}(·, ·) is a valid metric on the space of probability measures P_p(R^d); namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Theorem 1 is in Appendix B.1. Next, we show that convergence in MSW implies the weak convergence of probability measures, and that the reverse also holds.
Theorem 2 (Weak Convergence). For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds, the convergence of probability measures in P_p(R^d) under the Markovian sliced Wasserstein distance MSW_{p,T}(·, ·) implies weak convergence of probability measures, and vice versa.

Theorem 2 means that, for any sequence of probability measures (µ_k)_{k∈N} and µ in P_p(R^d), we have lim_{k→+∞} MSW_{p,T}(µ_k, µ) = 0 if and only if, for any continuous and bounded function f : R^d → R, lim_{k→+∞} ∫ f dµ_k = ∫ f dµ. The proof of Theorem 2 is in Appendix B.2. Next, we discuss the connection of MSW to previous sliced Wasserstein variants.
Proposition 1. For any p ≥ 1 and dimension d ≥ 1,

(i) For any T ≥ 1 and µ, ν ∈ P_p(R^d), MSW_{p,T}(µ, ν) ≤ Max-SW_p(µ, ν) ≤ W_p(µ, ν).

(ii) If T = 1 and the prior σ_1(θ_1) := U(S^{d−1}), then MSW_{p,T}(µ, ν) = SW_p(µ, ν).

The proof of Proposition 1 is in Appendix B.3.

Statistical properties: We first investigate the sample complexity, i.e., the empirical estimation rate, of MSW.
Proposition 2 (Sample Complexity). Let X_1, X_2, . . . , X_n be i.i.d. samples from a probability measure µ supported on a compact set of R^d. We denote the empirical measure µ_n = (1/n) Σ_{i=1}^{n} δ_{X_i}. Then, for any p ≥ 1 and T ≥ 1, there exists a universal constant C > 0 such that

E[MSW_{p,T}(µ_n, µ)] ≤ C √((d + 1) log n / n),

where the outer expectation is taken with respect to the data X_1, X_2, . . . , X_n.

The proof of Proposition 2 is in Appendix B.4. The above sample complexity suggests that MSW does not suffer from the curse of dimensionality. Next, we investigate the Monte Carlo approximation error for MSW.
Proposition 3 (Monte Carlo error). For any p ≥ 1, T ≥ 1, dimension d ≥ 1, and µ, ν ∈ P_p(R^d), we have:

E| MSW̑_{p,T}^p(µ, ν) − MSW_{p,T}^p(µ, ν) | ≤ (1/(T√L)) Var[ Σ_{t=1}^{T} W_p^p(θ_t♯µ, θ_t♯ν) ]^{1/2},

where the variance is with respect to σ(θ_1, . . . , θ_T).

The proof of Proposition 3 is in Appendix B.5. From the above proposition, we know that increasing the number of projections L reduces the approximation error.
Computational properties: When µ and ν are two discrete probability measures in P_p(R^d) that have at most n supports, the computational complexity of the Monte Carlo approximation of MSW is O(TLn log₂ n + TLdn), where O(TLn log₂ n) is for computing the TL one-dimensional Wasserstein distances and O(TLdn) is the projecting complexity for the TL projections from d dimensions to one dimension. The memory complexity of MSW is O(TL(d + n)) for storing the projecting directions and the projections.
3.2 Specific Choices of the Projecting Distribution

Designing the projecting distribution σ(θ_1, . . . , θ_T) is the central task in using MSW, since it controls the projecting behavior. Each choice of σ(θ_1, . . . , θ_T) yields a variant of MSW. Since we impose the first-order Markov structure σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), there are two types of distributions that we need to choose: the prior distribution σ_1(θ_1) and the transition distributions σ_t(θ_t|θ_{t−1}) for t = 2, . . . , T.
Prior distribution: The simplest choice of σ_1(θ_1), when we know nothing about the probability measures that we want to compare, is the uniform distribution over the unit hypersphere, U(S^{d−1}). Moreover, with this choice, the metricity of MSW is guaranteed regardless of the transition distribution. Therefore, the uniform distribution is the choice that we use in the experiments in this paper. It is worth noting that we could also use a distribution that is estimated from the two probability measures of interest [44]; however, this approach costs more computation.

We now discuss some specific choices of the transition distributions σ_t(θ_t|θ_{t−1}). Detailed algorithms for computing MSW with specific transitions are given in Appendix A.3.
Random walk transition: Motivated by the Gaussian random walk in the MCMC literature [37], we use a version of the Gaussian on the unit hypersphere, namely the von Mises-Fisher (vMF) distribution [23]. Details about the vMF distribution, including its probability density function, its sampling procedure, and its properties, are given in Appendix A.2. In summary, the vMF distribution has two parameters: the location parameter ϵ ∈ S^{d−1}, which is the mean, and the concentration parameter κ ∈ R_+, which plays a role analogous to a variance parameter. The transition distribution is therefore σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = θ_{t−1}, κ), where κ is a hyperparameter.
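A sketch of the resulting chain of projecting directions (ours); `sample_vmf(mean, kappa, rng)` is an assumed helper implementing a vMF sampler such as Algorithm 5 in Appendix A.2:

import numpy as np

def random_walk_chain(d, T, kappa, sample_vmf, rng):
    """Generate directions theta_1, ..., theta_T where theta_1 is uniform
    on S^{d-1} and theta_t ~ vMF(theta_{t-1}, kappa)."""
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    chain = [theta]
    for _ in range(T - 1):
        theta = sample_vmf(theta, kappa, rng)
        chain.append(theta)
    return np.stack(chain)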
Orthogonal-based transition: Motivated by the orthogonality constraint in Max-K-SW and K-SW, we can design a transition distribution that gives a projecting direction orthogonal to the previous one. In particular, given a previous projecting direction θ_{t−1}, we want θ_t such that ⟨θ_t, θ_{t−1}⟩ = 0; namely, we want to sample from the subsphere S^{d−1}_{θ_{t−1}} := {θ_t ∈ S^{d−1} | ⟨θ_t, θ_{t−1}⟩ = 0}. To the best of our knowledge, there is no distribution with an explicit (known) density defined on that set. However, we can still sample from the uniform distribution over that set, U(S^{d−1}_{θ_{t−1}}), since it can be constructed by pushing the uniform distribution over the whole unit hypersphere, U(S^{d−1}), through the projection operator:

Proj_{θ_{t−1}}(θ_t) = Proj_{S^{d−1}}( θ_t − (⟨θ_{t−1}, θ_t⟩/⟨θ_{t−1}, θ_{t−1}⟩) θ_{t−1} ), where Proj_{S^{d−1}}(θ) = θ/∥θ∥_2 is the normalizing operator.

In greater detail, we first sample θ′_t ∼ U(S^{d−1}) and then set θ_t = Proj_{θ_{t−1}}(θ′_t). Therefore, in this case, we have σ_t(θ_t|θ_{t−1}) = U(S^{d−1}_{θ_{t−1}}) = Proj_{θ_{t−1}}♯U(S^{d−1}).
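In code, this push-forward construction is a projection followed by a renormalization; the sketch below (ours) assumes `theta_prev` has unit norm:

import numpy as np

def orthogonal_transition(theta_prev, rng):
    """Sample uniformly from the subsphere orthogonal to theta_prev by
    projecting a uniform draw and mapping it back to the sphere."""
    d = theta_prev.shape[0]
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    # Remove the component along theta_prev (unit-norm), then renormalize.
    theta = theta - (theta @ theta_prev) * theta_prev
    return theta / np.linalg.norm(theta)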
Input-aware transition: The above two transition distributions do not take into account information about the two probability measures µ and ν that we want to compare. Hence, they could be inefficient at exploring projecting directions that are good for comparing µ and ν. Motivated by the projected sub-gradient ascent update [9] for finding the “max” projecting direction, we can design the transition distribution as follows: σ_t(θ_t|θ_{t−1}) = δ_{f(θ_{t−1}|η,µ,ν)}, where δ denotes the Dirac delta function and the transition function is

f(θ_{t−1}|η, µ, ν) = Proj_{S^{d−1}}( θ_{t−1} + η ∇_{θ_{t−1}} W_p(θ_{t−1}♯µ, θ_{t−1}♯ν) ),

with η > 0 the step-size hyperparameter. Since this choice is a deterministic transition, it requires the prior distribution to be supported on all of S^{d−1} to obtain the metricity of MSW. A choice that guarantees metricity regardless of the prior distribution is the vMF distribution, namely σ_t(θ_t|θ_{t−1}) = vMF(θ_t|ϵ = f(θ_{t−1}|η, µ, ν), κ). Thanks to the interpolation properties of the vMF distribution, lim_{κ→0} vMF(θ|ϵ, κ) = U(S^{d−1}) and lim_{κ→∞} vMF(θ|ϵ, κ) = δ_ϵ, this transition distribution can balance between heading towards the “max” projecting direction and exploring the space of directions.
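For the deterministic transition with p = 2 and equal-size, uniformly weighted supports, a sub-gradient of the projected Wasserstein distance has a simple closed form via the sorted matching; a sketch (ours; the analytic sub-gradient assumes exactly this special case):

import numpy as np

def input_aware_transition(theta_prev, X, Y, eta):
    """One projected sub-gradient ascent step on
    theta -> W_2^2(theta#mu, theta#nu), then renormalization."""
    order_x = np.argsort(X @ theta_prev)
    order_y = np.argsort(Y @ theta_prev)
    diff = X[order_x] - Y[order_y]            # optimal 1-D matching pairs
    grad = 2.0 * (diff.T @ (diff @ theta_prev)) / X.shape[0]
    theta = theta_prev + eta * grad
    return theta / np.linalg.norm(theta)      # project back to S^{d-1}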
Stationarity of σ_T(θ_T): A natural and important question arises: what is the distribution σ_T(θ_T) = ∫ · · · ∫ σ(θ_1, . . . , θ_T) dθ_1 . . . dθ_{T−1} when T → ∞? The answer depends on the choice of the projecting distribution, which is discussed in Section 3.2. For the random walk and orthogonal-based transitions with the uniform prior, it is unclear whether a stationary distribution exists. For the deterministic input-aware transition with the uniform prior, we have lim_{T→∞} σ_T(θ_T) = Σ_{a=1}^{A} α_a δ_{θ*_a} with Σ_{a=1}^{A} α_a = 1, where θ*_a (a = 1, . . . , A) are the local maxima of the optimization problem max_{θ∈S^{d−1}} W_p(θ♯µ, θ♯ν) and the α_a are some unknown weights that depend on µ and ν. This property follows from the fact that projected sub-gradient ascent guarantees convergence to local maxima [49]. For the input-aware vMF transition, it is also unclear whether a stationary distribution exists when the parameter κ < ∞.
3.3 Burning and Thinning

In the definition of MSW in Definition 1, we take the expectation over the joint distribution of all time steps, σ(θ_{1:T}), which makes the time and memory complexities of the Monte Carlo approximation linear in T. Therefore, we can adapt a practical technique from MCMC methods, namely burn-in and thinning, to reduce the number of random variables while still keeping a dependency structure.
Definition 2. For any p ≥ 1, T ≥ 1, dimension d ≥ 1, number of burned steps M ≥ 0, and number of thinned steps N ≥ 1, the burned thinned Markovian sliced Wasserstein distance of order p between two probability measures µ ∈ P_p(R^d) and ν ∈ P_p(R^d) is:

MSW_{p,T,N,M}(µ, ν) = E[ (N/(T − M)) Σ_{t=1}^{(T−M)/N} W_p^p(θ′_t♯µ, θ′_t♯ν) ],   (7)

where the expectation is under the projection distribution θ′_{1:(T−M)/N} ∼ σ(θ′_{1:(T−M)/N}), the marginal distribution obtained by integrating out the random projecting directions at every time step t such that t ≤ M or t % N ≠ 0 from σ(θ_1, . . . , θ_T) = σ_1(θ_1) Π_{t=2}^{T} σ_t(θ_t|θ_{t−1}), with σ_1(θ_1), σ_t(θ_t|θ_{t−1}) ∈ P(S^{d−1}) for all t = 1, . . . , T.
Similar to MSW, the burned thinned MSW is also a metric on P_p(R^d) when there exists a time step t that is neither burned nor thinned and for which θ_t is a random variable supported on all of S^{d−1}. We discuss more details about the burned thinned MSW, including its topological and statistical properties, in Appendix A.4. The Monte Carlo estimation of the burned thinned MSW is given in Equation (9) in Appendix A.4. The approximation is the average of the projected Wasserstein distances from the θ_{tl} with t ≥ M and t % N = 0. By reducing the number of random projecting directions, the computational complexity of the burned thinned MSW improves to O(((T − M)Ln log₂ n + (T − M)Ldn)/N) for the random walk and orthogonal-based transitions. In the case of the input-aware transition, the computational complexity is still O(TLn log₂ n + TLdn), since the transition requires computing the gradient of the projected Wasserstein distance. However, in all cases, the memory complexity is reduced to O((T − M)L(d + n)/N).
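A sketch (ours) of the burn-and-thin selection applied to the per-step projected distances of one chain; the exact indexing convention (t > M versus t ≥ M) is our assumption:

import numpy as np

def burn_thin_average(projected_dists, M, N):
    """Given the T projected Wasserstein distances of one chain
    (shape (T,)), keep only steps after the burn-in M and at every
    N-th step, mirroring the estimator in Definition 2."""
    T = projected_dists.shape[0]
    kept = [projected_dists[t - 1] for t in range(1, T + 1)
            if t > M and t % N == 0]
    return np.mean(kept)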
Burned thinned MSW is a generalization of Max-SW: the empirical computation of Max-SW [14] with projected sub-gradient ascent and uniform random initialization can be viewed as a special case of burned thinned MSW with the input-aware transition and with the number of burned samples M = T − 1. The difference is that Max-SW uses only one local maximum to compute the distance, while the burned thinned MSW uses L ≥ 1 maxima (which might not be unique).

More discussions: We refer the reader to Appendix A.5 for other related discussions, e.g., “K-SW is an autoregressive decomposition of the projecting distribution”, “sequential generalization of Max-K-SW”, and related literature.
4 Experiments

In this section, we refer to MSW with the random walk transition as rMSW, MSW with the orthogonal-based transition as oMSW, and MSW with the input-aware transition as iMSW (using the Dirac distribution) or viMSW (using the vMF distribution). We compare the MSW variants to SW, Max-SW, K-SW, and Max-K-SW in standard applications, e.g., gradient flows, color transfer, and deep generative models. Moreover, we also investigate the role of the hyperparameters, e.g., the concentration parameter κ, the number of projections L, the number of time steps T, the number of burning steps M, and the number of thinning steps N, in applications.
4.1 Gradient Flows and Color Transfer

[Figure 1 panels (values at steps 0, 200, 300): SW (L=30): W2 = 25.3149×10⁻² (0s), 0.5913×10⁻² (1.07s), 0.0099×10⁻² (1.55s); Max-SW (T=30): 25.3149×10⁻² (0s), 0.1091×10⁻² (2.37s), 0.0098×10⁻² (3.48s); iMSW (L=2, T=5): 25.3149×10⁻² (0s), 0.0483×10⁻² (0.99s), 0.0064×10⁻² (1.41s); viMSW (L=2, T=5, κ=50): 25.3149×10⁻² (0s), 0.0512×10⁻² (2.05s), 0.0043×10⁻² (2.94s).]
Figure 1: The figures show the gradient flows from the empirical distribution over the colored points to the empirical distribution over the S-shape points, produced by different distances. The Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution, together with the computational time (in seconds) to reach that step, is reported at the top of each panel.

Gradient flows: We follow the setting of [17]. The gradient flow models a distribution µ(t) flowing with time t along the gradient flow of a loss functional µ(t) → D(µ(t), ν) that drives
top of the figure.
|
| 571 |
+
it towards a target distribution ν [56] where D is a given distance between probability measures.
|
| 572 |
+
In this setup, we consider ν = 1
|
| 573 |
+
n
|
| 574 |
+
�n
|
| 575 |
+
i=1 δYi as a fixed empirical target distribution and the model
|
| 576 |
+
distribution µ(t) = 1
|
| 577 |
+
n
|
| 578 |
+
�n
|
| 579 |
+
i=1 δXi(t). Here, the model distribution is parameterized by a time-varying
|
| 580 |
+
point cloud X(t) = (Xi(t))n
|
| 581 |
+
i=1 ∈
|
| 582 |
+
�
|
| 583 |
+
Rd�n. Starting from an initial condition at time t = 0, we integrate
|
| 584 |
+
the ordinary differential equation ˙X(t) = −n∇X(t)
|
| 585 |
+
�
|
| 586 |
+
D
|
| 587 |
+
� 1
|
| 588 |
+
n
|
| 589 |
+
�n
|
| 590 |
+
i=1 δXi(t), ν
|
| 591 |
+
��
|
| 592 |
+
for each iteration. In the
|
| 593 |
+
experiments, we utilze the Euler scheme with 300 timesteps and the step size is 10−3 to move the
|
| 594 |
+
empirical distribution over colorful points µ(0) to the distribution over S-shape points (ν) (see
|
| 595 |
+
Figure 1). For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate parameter
|
| 596 |
+
for projecting directions η = 0.1. We report the Wasserstein-2 distances between the empirical
|
| 597 |
+
distribution µ(t) and the target empirical distribution ν, and the computational time in Table 1.
|
| 598 |
+
We also give the visualization of some obtained flows in Figure 1. We refer the reader to Figure 5 in
|
| 599 |
+
Appendix C.1 for the full visualization of all flows and detailed algorithms. We observe that iMSW
|
| 600 |
+
gives better flows than SW, Max-SW, K-SW, and Max-K-SW. Namely, the empirical distribution
|
| 601 |
+
µ(t) (t = 300) with iMSW is closer to ν in terms of Wasserstein distance. More importantly, iMSW
|
| 602 |
+
consumes less computation than its competitors since it can use a smaller number of projections
|
| 603 |
+
due to more informative projecting directions. Furthermore, viMSW gives better final results than
|
| 604 |
+
iMSW, however, the trade-off is doubling the time computation due to the sampling step of vMF
|
| 605 |
+
distribution. We also observe that rMSW does not give good results in both Wasserstein-2 and
|
| 606 |
+
computational time due to the random walk transition. In this case, K-SW is equivalent to our
|
| 607 |
+
oMSW with T=K=2 since the dimension d = 2. We refer the reader to Appendix C.1 for more
|
| 608 |
+
discussion.
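A sketch (ours) of the Euler scheme used in these experiments; `sw_grad` is an assumed helper returning the gradient of the chosen sliced distance with respect to the point cloud (in practice obtained by automatic differentiation):

import numpy as np

def sw_gradient_flow(X0, Y, sw_grad, steps=300, step_size=1e-3):
    """Euler integration of X'(t) = -n * grad_X D(mu(t), nu) for a
    point cloud X and a fixed target point cloud Y."""
    X = X0.copy()
    n = X.shape[0]
    for _ in range(steps):
        X = X - step_size * n * sw_grad(X, Y)
    return X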
Studies on hyperparameters: From Table 3 in Appendix C.1, increasing the number of projections L yields better performance for SW, K-SW, and iMSW. Similarly, increasing the number of time steps T also helps Max-SW and iMSW. Moreover, we find that for the same total number of projections, e.g., L = 5, T = 2 versus L = 2, T = 5, a larger number of time steps T might lead to a better result for iMSW. For burning and thinning, we see that they help to reduce the computation while the performance stays comparable, or even improves, when M and N are chosen well. Also, iMSW with M = T − 1 burning steps is still better than Max-SW with T time steps. For the concentration parameter κ in rMSW and viMSW, a larger value of κ leads to faster computation due to faster sampling; however, the performance of viMSW is not monotonic in κ.

[Figure 2 panels, from source to target: SW (L=45), 37.97(s), W2 = 414.51; Max-SW (T=45), 57.48(s), W2 = 449.42; K-SW (L=15, K=3), 38.21(s), W2 = 411.74; Max-K-SW (K=3, T=15), 52.6(s), W2 = 479.43; rMSW (L=3, T=5, κ=50), 15.65(s), W2 = 444.35; oMSW (L=3, T=5), 14.17(s), W2 = 415.06; iMSW (L=3, T=5), 25.39(s), W2 = 16.97; viMSW (L=3, T=5, κ=50), 29.27(s), W2 = 16.48.]
Figure 2: The figures show the source image, the target image, and the images transferred by different distances. The Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, together with the computational time (in seconds), is reported at the top of each figure.

Table 1: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in gradient flow.

Distances | Wasserstein-2 (↓) | Time (↓)
SW (L=30) | 0.0099 × 10⁻² | 1.55
Max-SW (T=30) | 0.0098 × 10⁻² | 3.48
K-SW (L=15, K=2) | 0.0098 × 10⁻² | 1.71
Max-K-SW (K=2, T=15) | 0.0146 × 10⁻² | 3.35
rMSW (L=2, T=5, κ=50) (ours) | 0.0157 × 10⁻² | 2.16
iMSW (L=2, T=5) (ours) | 0.0064 × 10⁻² | 1.41
viMSW (L=2, T=5, κ=50) (ours) | 0.0043 × 10⁻² | 2.94

Table 2: Summary of FID and IS scores of methods on CIFAR10 (32x32) and CelebA (64x64).

Method | CIFAR10 FID (↓) | CIFAR10 IS (↑) | CelebA FID (↓)
SW | 14.21±1.12 | 8.19±0.07 | 8.93±0.23
Max-SW | 14.38±0.08 | 8.15±0.02 | 8.94±0.35
K-SW | 15.24±0.02 | 8.15±0.03 | 9.41±0.16
Max-K-SW | 14.83±1.01 | 8.17±0.03 | 9.29±0.29
rMSW (ours) | 14.33±0.51 | 8.15±0.06 | 9.12±0.44
oMSW (ours) | 14.12±0.54 | 8.20±0.05 | 9.68±0.55
iMSW (ours) | 14.12±0.48 | 8.24±0.09 | 8.89±0.23
viMSW (ours) | 13.98±0.59 | 8.12±0.20 | 8.91±0.11
[Figure 3 consists of three panels plotting scores against training epochs for SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW: FID on CIFAR10, IS on CIFAR10, and FID on CelebA.]
Figure 3: The FID scores over epochs of different distances.

Color transfer: We aim to transfer the color palette (RGB) of a source image to the color palette (RGB) of a target image. Therefore, it is natural to build a gradient flow that starts from the empirical distribution over the color palette of the source image and moves to the empirical distribution over the color palette of the target image. Since the color palette values lie in the set {0, . . . , 255}³, we round the
values of the supports of the empirical distribution at the final step of the Euler scheme, which uses 2000 steps and a step size of 10⁻³. Greater detail can be found in Appendix C.2. For Max-SW, Max-K-SW, iMSW, and viMSW, we use the learning rate η = 0.1 for the projecting directions. We show the transferred images, the corresponding Wasserstein-2 distances between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, and the corresponding computational time in Figure 2. From the figures, iMSW and viMSW give the best transferred images, both quantitatively and qualitatively. Moreover, oMSW and rMSW are comparable to SW, Max-SW, and K-SW, and are better than Max-K-SW while consuming much less computation. We refer the reader to Figure 6 in Appendix C.2 for the color palette visualization and to Figure 7 for another choice of source and target images. We also conduct studies on hyperparameters in Appendix C.2, where we observe phenomena similar to those in the gradient flow experiments.
4.2 Deep Generative Models

We follow the setup of sliced Wasserstein deep generative models in [15]. The full settings of the framework, including neural network architectures, training procedure, and hyperparameters, are given in Appendix C.3. We compare MSW with previous baselines, including SW, Max-SW, K-SW, and Max-K-SW, on benchmark datasets: CIFAR10 (image size 32x32) [29] and CelebA (image size 64x64). The evaluation metrics are the FID score [21] and the Inception score (IS) [54] (except on CelebA, since the IS score poorly captures the perceptual quality of face images [21]). A notable change in computing Max-SW is that we do not use momentum in the optimization of the max projecting direction, unlike previous works [26, 42], which leads to a better result.
Summary of generative performance: We train generative models with SW (L ∈ {100, 1000, 10000}), Max-SW (T ∈ {10, 100, 1000}, learning rate for the projected gradient ascent algorithm η ∈ {0.01, 0.1}), K-SW (L ∈ {1, 10, 100}, K = 10), Max-K-SW (K = 10, η ∈ {0.01, 0.1}), MSW (all variants, L ∈ {10, 100}, T ∈ {10, 100}), iMSW and viMSW (η ∈ {0.01, 0.1}), and rMSW and viMSW (κ ∈ {10, 50}). We report the best FID score and the best IS score for each distance in Table 2. In addition, we show how the scores change with respect to the training epochs in Figure 3. Overall, we observe that viMSW and iMSW give the best generative performance in terms of final scores and fast convergence on CIFAR10 and CelebA. The other MSW variants, rMSW and oMSW, give results comparable to the baselines. Since most of the computation in training deep generative models goes into updating the neural networks, the computational time is almost the same across distances. Furthermore, we show some generated images on CelebA in Figure 4, and all generated images on CIFAR10 and CelebA in Figure 8 and Figure 9 in Appendix C.3. We visually observe that the qualitative results are consistent with the quantitative results in Table 2.

[Figure 4 panels: random CelebA samples from models trained with SW, Max-K-SW, and iMSW.]
Figure 4: Randomly generated images of different distances on CelebA.
Studies on hyperparameters: We conduct experiments to understand the behavior of the burning and thinning technique, and to compare the roles of L and T, in Table 5 in Appendix C.3. Overall, burning (and thinning) sometimes helps to improve the performance of training generative models; there is no clear sign of superiority between burning and thinning. We compare two settings with the same total number of projections (hence the same complexities): L = 10, T = 100 and L = 100, T = 10. On CIFAR10, the first setting is better, while the reverse holds on CelebA.
5 Conclusion

We have introduced the Markovian sliced Wasserstein (MSW) distance, a novel family of sliced Wasserstein (SW) distances which imposes a first-order Markov structure on projecting directions. We have investigated the theoretical properties of MSW, including topological, statistical, and computational properties. Moreover, we have discussed three types of transition distributions for MSW, namely, random walk, orthogonal-based, and input-aware transitions. In addition, we have proposed a burning and thinning technique to improve the computational time and memory of MSW. Finally, we have compared MSW to previous variants of SW in gradient flows, color transfer, and generative modeling to show that MSW distances are both effective and efficient.
References

[1] J. Altschuler, J. Niles-Weed, and P. Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Advances in Neural Information Processing Systems, pages 1964–1974, 2017. (Cited on page 1.)
[2] Y. Bai, B. Schmitzer, M. Thorpe, and S. Kolouri. Sliced optimal partial transport. arXiv preprint arXiv:2212.08049, 2022. (Cited on page 23.)
[3] V. I. Bogachev and M. A. S. Ruas. Measure theory, volume 1. Springer, 2007. (Cited on page 25.)
[4] C. Bonet, P. Berg, N. Courty, F. Septier, L. Drumetz, and M.-T. Pham. Spherical sliced-Wasserstein. arXiv preprint arXiv:2206.08780, 2022. (Cited on page 2.)
[5] C. Bonet, N. Courty, F. Septier, and L. Drumetz. Efficient gradient flows in sliced-Wasserstein space. Transactions on Machine Learning Research, 2022. (Cited on page 2.)
[6] N. Bonneel and D. Coeurjolly. SPOT: Sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1–13, 2019. (Cited on page 23.)
[7] N. Bonneel, J. Rabin, G. Peyré, and H. Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 1(51):22–45, 2015. (Cited on pages 1, 2, and 4.)
[8] N. Bonnotte. Unidimensional and evolution methods for optimal transportation. PhD thesis, Paris 11, 2013. (Cited on pages 24 and 32.)
[9] S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015. (Cited on page 8.)
[10] X. Chen, Y. Yang, and Y. Li. Augmented sliced Wasserstein distances. In International Conference on Learning Representations, 2022. (Cited on page 23.)
[11] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013. (Cited on page 1.)
[12] B. Dai and U. Seljak. Sliced iterative normalizing flows. In International Conference on Machine Learning, pages 2352–2364. PMLR, 2021. (Cited on pages 2, 5, and 19.)
[13] T. R. Davidson, L. Falorsi, N. De Cao, T. Kipf, and J. M. Tomczak. Hyperspherical variational auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018. (Cited on page 21.)
[14] I. Deshpande, Y.-T. Hu, R. Sun, A. Pyrros, N. Siddiqui, S. Koyejo, Z. Zhao, D. Forsyth, and A. G. Schwing. Max-sliced Wasserstein distance and its use for GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10648–10656, 2019. (Cited on pages 2, 4, and 9.)
[15] I. Deshpande, Z. Zhang, and A. G. Schwing. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3483–3491, 2018. (Cited on pages 2, 12, 34, and 35.)
[16] K. Fatras, Y. Zine, R. Flamary, R. Gribonval, and N. Courty. Learning with minibatch Wasserstein: asymptotic and gradient properties. In AISTATS 2020 – 23rd International Conference on Artificial Intelligence and Statistics, volume 108, pages 1–20, 2020. (Cited on page 34.)
[17] J. Feydy, T. Séjourné, F.-X. Vialard, S.-i. Amari, A. Trouve, and G. Peyré. Interpolating between optimal transport and MMD using Sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2681–2690, 2019. (Cited on page 9.)
[18] R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, and T. Vayer. POT: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021. (Cited on page 30.)
[19] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162:707–738, 2015. (Cited on page 1.)
[20] A. Genevay, G. Peyré, and M. Cuturi. Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pages 1608–1617. PMLR, 2018. (Cited on page 34.)
[21] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017. (Cited on page 12.)
[22] M. Huang, S. Ma, and L. Lai. A Riemannian block coordinate descent method for computing the projection robust Wasserstein distance. In International Conference on Machine Learning, pages 4446–4455. PMLR, 2021. (Cited on page 23.)
[23] P. E. Jupp and K. V. Mardia. Maximum likelihood estimators for the matrix von Mises-Fisher and Bingham distributions. The Annals of Statistics, 7(3):599–606, 1979. (Cited on pages 2, 7, and 20.)
[24] O. Kallenberg and O. Kallenberg. Foundations of modern probability, volume 2. Springer, 1997. (Cited on page 25.)
[25] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. (Cited on page 36.)
[26] S. Kolouri, K. Nadjahi, U. Simsekli, R. Badeau, and G. Rohde. Generalized sliced Wasserstein distances. In Advances in Neural Information Processing Systems, pages 261–272, 2019. (Cited on pages 2, 12, 19, and 23.)
[27] S. Kolouri, P. E. Pope, C. E. Martin, and G. K. Rohde. Sliced Wasserstein auto-encoders. In International Conference on Learning Representations, 2018. (Cited on page 2.)
[28] S. Kolouri, G. K. Rohde, and H. Hoffmann. Sliced Wasserstein distance for learning Gaussian mixture models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3427–3436, 2018. (Cited on pages 2 and 24.)
[29] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009. (Cited on page 12.)
[30] C.-Y. Lee, T. Batra, M. H. Baig, and D. Ulbricht. Sliced Wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10285–10295, 2019. (Cited on page 2.)
[31] J. Lezama, W. Chen, and Q. Qiu. Run-sort-rerun: Escaping batch size limitations in sliced Wasserstein generative models. In International Conference on Machine Learning, pages 6275–6285. PMLR, 2021. (Cited on page 24.)
[32] T. Lin, C. Fan, N. Ho, M. Cuturi, and M. Jordan. Projection robust Wasserstein distance and Riemannian optimization. Advances in Neural Information Processing Systems, 33:9383–9397, 2020. (Cited on page 23.)
[33] T. Lin, N. Ho, X. Chen, M. Cuturi, and M. I. Jordan. Fixed-support Wasserstein barycenters: Computational hardness and fast algorithm. In NeurIPS, pages 5368–5380, 2020. (Cited on page 1.)
[34] T. Lin, N. Ho, and M. Jordan. On efficient optimal transport: An analysis of greedy and accelerated mirror descent algorithms. In International Conference on Machine Learning, pages 3982–3991, 2019. (Cited on page 1.)
[35] T. Lin, N. Ho, and M. I. Jordan. On the efficiency of entropic regularized algorithms for optimal transport. Journal of Machine Learning Research (JMLR), 23:1–42, 2022. (Cited on page 1.)
[36] A. Liutkus, U. Simsekli, S. Majewski, A. Durmus, and F.-R. Stöter. Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions. In International Conference on Machine Learning, pages 4104–4113. PMLR, 2019. (Cited on page 2.)
[37] K. P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012. (Cited on page 7.)
[38] N. Naderializadeh, J. Comer, R. Andrews, H. Hoffmann, and S. Kolouri. Pooling by sliced-Wasserstein embedding. Advances in Neural Information Processing Systems, 34, 2021. (Cited on page 24.)
[39] K. Nadjahi, V. De Bortoli, A. Durmus, R. Badeau, and U. Şimşekli. Approximate Bayesian computation with the sliced-Wasserstein distance. In ICASSP 2020 – 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5470–5474. IEEE, 2020. (Cited on pages 2 and 24.)
[40] K. Nadjahi, A. Durmus, L. Chizat, S. Kolouri, S. Shahrampour, and U. Simsekli. Statistical and topological properties of sliced probability divergences. Advances in Neural Information Processing Systems, 33:20802–20812, 2020. (Cited on page 1.)
[41] K. Nadjahi, A. Durmus, U. Simsekli, and R. Badeau. Asymptotic guarantees for learning generative models with the sliced-Wasserstein distance. In Advances in Neural Information Processing Systems, pages 250–260, 2019. (Cited on pages 25 and 34.)
[42] K. Nguyen and N. Ho. Amortized projection optimization for sliced Wasserstein generative models. Advances in Neural Information Processing Systems, 2022. (Cited on pages 2, 12, 19, and 34.)
[43] K. Nguyen and N. Ho. Revisiting sliced Wasserstein on images: From vectorization to convolution. Advances in Neural Information Processing Systems, 2022. (Cited on pages 2, 23, and 28.)
[44] K. Nguyen, N. Ho, T. Pham, and H. Bui. Distributional sliced-Wasserstein and applications to generative modeling. In International Conference on Learning Representations, 2021. (Cited on pages 2, 7, and 19.)
[45] K. Nguyen, D. Nguyen, Q. Nguyen, T. Pham, H. Bui, D. Phung, T. Le, and N. Ho. On transportation of mini-batches: A hierarchical approach. In Proceedings of the 39th International Conference on Machine Learning, 2022. (Cited on page 34.)
[46] K. Nguyen, D. Nguyen, T. Pham, and N. Ho. Improving mini-batch optimal transport via partial transportation. In Proceedings of the 39th International Conference on Machine Learning, 2022. (Cited on page 34.)
[47] K. Nguyen, S. Nguyen, N. Ho, T. Pham, and H. Bui. Improving relational regularized autoencoders with spherical sliced fused Gromov-Wasserstein. In International Conference on Learning Representations, 2021. (Cited on pages 2, 19, and 21.)
[48] K. Nguyen, T. Ren, H. Nguyen, L. Rout, T. Nguyen, and N. Ho. Hierarchical sliced Wasserstein distance. arXiv preprint arXiv:2209.13570, 2022. (Cited on page 23.)
[49] S. Nietert, R. Sadhu, Z. Goldfeld, and K. Kato. Statistical, robustness, and computational guarantees for sliced Wasserstein distances. Advances in Neural Information Processing Systems, 2022. (Cited on pages 1, 2, 4, and 8.)
[50] F.-P. Paty and M. Cuturi. Subspace robust Wasserstein distances. In International Conference on Machine Learning, pages 5072–5081, 2019. (Cited on page 23.)
[51] G. Peyré and M. Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019. (Cited on page 3.)
[52] G. Peyré and M. Cuturi. Computational optimal transport, 2020. (Cited on page 1.)
[53] M. Rowland, J. Hron, Y. Tang, K. Choromanski, T. Sarlos, and A. Weller. Orthogonal estimation of Wasserstein distances. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 186–195. PMLR, 2019. (Cited on pages 2, 4, and 19.)
[54] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016. (Cited on page 12.)
[55] T. Salimans, H. Zhang, A. Radford, and D. Metaxas. Improving GANs using optimal transport. In International Conference on Learning Representations, 2018. (Cited on page 34.)
[56] F. Santambrogio. Optimal transport for applied mathematicians. Birkäuser, NY, 55(58-63):94, 2015. (Cited on page 10.)
[57] M. Sommerfeld and A. Munk. Inference for empirical Wasserstein distances on finite spaces. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):219–238, 2018. (Cited on page 34.)
[58] S. Sra. Directional statistics in machine learning: a brief review. arXiv preprint arXiv:1605.00316, 2016. (Cited on page 21.)
[59] N. M. Temme. Special functions: An introduction to the classical functions of mathematical physics. John Wiley & Sons, 2011. (Cited on page 20.)
[60] C. Villani. Optimal transport: Old and New. Springer, 2008. (Cited on pages 1 and 3.)
[61] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008. (Cited on pages 25 and 27.)
[62] M. J. Wainwright. High-dimensional statistics: A non-asymptotic viewpoint. Cambridge University Press, 2019. (Cited on page 29.)
[63] J. Wu, Z. Huang, D. Acharya, W. Li, J. Thoma, D. P. Paudel, and L. V. Gool. Sliced Wasserstein generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3713–3722, 2019. (Cited on pages 2 and 24.)
[64] M. Yi and S. Liu. Sliced Wasserstein variational inference. In Fourth Symposium on Advances in Approximate Bayesian Inference, 2021. (Cited on pages 2 and 24.)
Supplement to “Markovian Sliced Wasserstein Distances: Beyond Independent Projections”

In this supplementary material, we present additional materials in Appendix A. In particular, we provide additional background on sliced Wasserstein variants in Appendix A.1, background on the von Mises-Fisher distribution in Appendix A.2, algorithms for computing Markovian sliced Wasserstein distances in Appendix A.3, additional information about the burned thinned MSW in Appendix A.4, and a discussion of related works in Appendix A.5. We then provide the proofs skipped in the main paper in Appendix B. Additional experiments are presented in Appendix C.
A Additional Materials

A.1 Background on Sliced Wasserstein Variants

We review the computational aspects of sliced Wasserstein variants.
Computation of the max sliced Wasserstein distance: We demonstrate the empirical estimation of Max-SW via the projected sub-gradient ascent algorithm in Algorithm 1. The initialization step for θ̂_0 is rarely discussed in previous works; normally, θ̂_0 is randomly initialized by drawing from the uniform distribution over the unit hypersphere. Many previous works [26, 44, 47, 42] use the Adam update instead of the standard gradient ascent update for Max-SW. In this work, we find that using the standard gradient ascent update is more stable and effective.
Algorithm 1 Max sliced Wasserstein distance
Input: Probability measures µ, ν, learning rate η, the order p, and the number of iterations T.
Initialize θ̂_0.
for t = 1 to T − 1 do
    θ̂_t = θ̂_{t−1} + η · ∇_{θ̂_{t−1}} W_p(θ̂_{t−1}♯µ, θ̂_{t−1}♯ν)
    θ̂_t = θ̂_t / ∥θ̂_t∥_2
end for
Return: W_p(θ̂_T♯µ, θ̂_T♯ν)
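A Python sketch of Algorithm 1 (ours) for p = 2 and equal-size, uniformly weighted empirical measures, where a sub-gradient of the projected distance is available in closed form through the sorted matching:

import numpy as np

def max_sw(X, Y, eta=0.1, T=100, rng=None):
    """Projected sub-gradient ascent for Max-SW (p = 2 special case)."""
    rng = np.random.default_rng(rng)
    theta = rng.standard_normal(X.shape[1])
    theta /= np.linalg.norm(theta)
    for _ in range(T - 1):
        diff = X[np.argsort(X @ theta)] - Y[np.argsort(Y @ theta)]
        grad = 2.0 * (diff.T @ (diff @ theta)) / X.shape[0]  # d W_2^2 / d theta
        theta += eta * grad
        theta /= np.linalg.norm(theta)          # project back onto S^{d-1}
    proj_x, proj_y = np.sort(X @ theta), np.sort(Y @ theta)
    return np.sqrt(np.mean((proj_x - proj_y) ** 2))  # W_2 along theta_T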
K sliced Wasserstein distance: We first review the Gram–Schmidt process in Algorithm 2. With the Gram–Schmidt process, sampling from U(V_K(R^d)) can be done by sampling θ_1, . . . , θ_K i.i.d. from N(0, I_d) and then applying the Gram–Schmidt process to them. We present the computation of the K sliced Wasserstein distance in Algorithm 3. We recall that the original work on K-SW [53] uses only one set of orthogonal projecting directions; here, we generalize the original work by using L sets of orthogonal projecting directions.

Max K sliced Wasserstein distance: We now present the empirical estimation of Max-K-SW via the projected sub-gradient ascent algorithm in Algorithm 4. This algorithm is first discussed in the original paper on Max-K-SW [12]. The optimization of Max-K-SW could also be solved with Riemannian optimization, since the Stiefel manifold is a Riemannian manifold. However, to the best of our knowledge, Riemannian optimization has not been applied to Max-K-SW.
Algorithm 2 Gram–Schmidt process
Input: K vectors $\theta_1, \ldots, \theta_K$
$\theta_1 = \theta_1 / \|\theta_1\|_2$
for k = 2 to K do
    for i = 1 to k − 1 do
        $\theta_k = \theta_k - \frac{\langle\theta_i, \theta_k\rangle}{\langle\theta_i, \theta_i\rangle}\theta_i$
    end for
    $\theta_k = \theta_k / \|\theta_k\|_2$
end for
Return: $\theta_1, \ldots, \theta_K$
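A direct transcription of Algorithm 2, assuming the input vectors are stacked as the rows of a (K, d) array:

```python
import numpy as np

def gram_schmidt(thetas):
    """Orthonormalize the rows of a (K, d) array (Algorithm 2)."""
    Q = np.array(thetas, dtype=float)
    Q[0] = Q[0] / np.linalg.norm(Q[0])
    for k in range(1, Q.shape[0]):
        for i in range(k):
            # Remove the component of theta_k along theta_i.
            Q[k] = Q[k] - (Q[i] @ Q[k]) / (Q[i] @ Q[i]) * Q[i]
        Q[k] = Q[k] / np.linalg.norm(Q[k])
    return Q
```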
Algorithm 3 K sliced Wasserstein distance
Input: Probability measures µ, ν, the dimension d, the order p, the number of projections L, the number of orthogonal projections K.
for l = 1 to L do
    Draw $\theta_{l1}, \ldots, \theta_{lK}$ i.i.d. from $N(0, I_d)$.
    $\theta_{l1}, \ldots, \theta_{lK}$ = Gram–Schmidt($\theta_{l1}, \ldots, \theta_{lK}$)
end for
Return: $\left(\frac{1}{LK}\sum_{l=1}^{L}\sum_{k=1}^{K} W_p^p(\theta_{lk}\sharp\mu, \theta_{lk}\sharp\nu)\right)^{\frac{1}{p}}$
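A sketch of Algorithm 3 under the same empirical-measure assumptions as before; we use a QR factorization in place of the explicit Gram–Schmidt loop, which yields the same orthonormalization of the Gaussian draws:

```python
import numpy as np

def k_sw(X, Y, L=10, K=2, p=2, rng=None):
    """K-SW (Algorithm 3) between empirical measures X, Y of shape (n, d)."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    total = 0.0
    for _ in range(L):
        G = rng.standard_normal((d, K))
        Q, _ = np.linalg.qr(G)                 # QR plays the role of Algorithm 2
        for k in range(K):
            u = np.sort(X @ Q[:, k])
            v = np.sort(Y @ Q[:, k])
            total += np.mean(np.abs(u - v) ** p)   # closed-form one-dimensional W_p^p
    return (total / (L * K)) ** (1.0 / p)
```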
A.2 Von Mises-Fisher Distribution

We first start with the definition of the von Mises-Fisher (vMF) distribution.

Definition 3. The von Mises–Fisher distribution (vMF) [23] is a probability distribution on the unit hypersphere $S^{d-1}$ with the density function:
$$f(x|\epsilon, \kappa) := C_d(\kappa) \exp(\kappa \epsilon^\top x), \quad (8)$$
where $\epsilon \in S^{d-1}$ is the location vector, $\kappa \ge 0$ is the concentration parameter, and $C_d(\kappa) := \frac{\kappa^{d/2-1}}{(2\pi)^{d/2} I_{d/2-1}(\kappa)}$ is the normalization constant. Here, $I_v$ is the modified Bessel function of the first kind at order v [59].
Algorithm 4 Max-K sliced Wasserstein distance
Input: Probability measures µ, ν, learning rate η, the dimension d, the order p, the number of iterations T > 1, and the number of orthogonal projections K > 1.
Initialize $\hat{\theta}_{01}, \ldots, \hat{\theta}_{0K}$ to be orthogonal.
for t = 1 to T − 1 do
    for k = 1 to K do
        $\hat{\theta}_{tk} = \hat{\theta}_{t-1,k} + \eta \cdot \nabla_{\hat{\theta}_{t-1,k}} W_p(\hat{\theta}_{t-1,k}\sharp\mu, \hat{\theta}_{t-1,k}\sharp\nu)$
    end for
    $\hat{\theta}_{t1}, \ldots, \hat{\theta}_{tK}$ = Gram–Schmidt($\hat{\theta}_{t1}, \ldots, \hat{\theta}_{tK}$)
end for
Return: $\left(\frac{1}{K}\sum_{k=1}^{K} W_p^p(\hat{\theta}_{Tk}\sharp\mu, \hat{\theta}_{Tk}\sharp\nu)\right)^{\frac{1}{p}}$
Algorithm 5 Sampling from the vMF distribution
Input: location ϵ, concentration κ, dimension d, unit vector $e_1 = (1, 0, \ldots, 0)$
Draw $v \sim U(S^{d-2})$
$b \leftarrow \frac{-2\kappa + \sqrt{4\kappa^2 + (d-1)^2}}{d-1}$,  $a \leftarrow \frac{(d-1) + 2\kappa + \sqrt{4\kappa^2 + (d-1)^2}}{4}$,  $m \leftarrow \frac{4ab}{1+b} - (d-1)\log(d-1)$
repeat
    Draw $\psi \sim \text{Beta}\left(\frac{1}{2}(d-1), \frac{1}{2}(d-1)\right)$
    $\omega \leftarrow h(\psi, \kappa) = \frac{1 - (1+b)\psi}{1 - (1-b)\psi}$
    $t \leftarrow \frac{2ab}{1 - (1-b)\psi}$
    Draw $u \sim U([0, 1])$
until $(d-1)\log(t) - t + m \ge \log(u)$
$h_1 \leftarrow (\omega, \sqrt{1 - \omega^2}\, v^\top)^\top$
$\epsilon' \leftarrow e_1 - \epsilon$
$u = \epsilon' / \|\epsilon'\|_2$
$U = I - 2uu^\top$
Output: $Uh_1$
The vMF distribution is a continuous distribution; its mass concentrates around the mean ϵ, and its density decreases as x moves away from ϵ. When κ → 0, vMF converges in distribution to $U(S^{d-1})$, and when κ → ∞, vMF converges in distribution to the Dirac distribution centered at ϵ [58].

Sampling: We review the sampling process in Algorithm 5 [13, 47]. The sampling process of the vMF distribution is based on a rejection sampling procedure. It is worth noting that the sampling algorithm performs reparameterization implicitly. However, we only use the algorithm to obtain random samples without estimating stochastic gradients.
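For concreteness, the following is a NumPy sketch of Algorithm 5, assuming d ≥ 2; it mirrors the rejection loop and the Householder reflection $U = I - 2uu^\top$ that maps $e_1$ to the location ϵ. Names and the numerical guards are ours.

```python
import numpy as np

def sample_vmf(eps, kappa, rng=None):
    """Draw one sample from vMF(eps, kappa) on S^{d-1} (sketch of Algorithm 5)."""
    rng = np.random.default_rng() if rng is None else rng
    d = eps.shape[0]
    v = rng.standard_normal(d - 1)
    v /= np.linalg.norm(v)                              # v ~ U(S^{d-2})
    root = np.sqrt(4 * kappa**2 + (d - 1) ** 2)
    b = (-2 * kappa + root) / (d - 1)
    a = ((d - 1) + 2 * kappa + root) / 4
    m = 4 * a * b / (1 + b) - (d - 1) * np.log(d - 1)
    while True:                                         # rejection loop for omega
        psi = rng.beta(0.5 * (d - 1), 0.5 * (d - 1))
        omega = (1 - (1 + b) * psi) / (1 - (1 - b) * psi)
        t = 2 * a * b / (1 - (1 - b) * psi)
        if (d - 1) * np.log(t) - t + m >= np.log(rng.uniform()):
            break
    h1 = np.concatenate(([omega], np.sqrt(max(1 - omega**2, 0.0)) * v))
    e1 = np.zeros(d)
    e1[0] = 1.0
    w = e1 - eps                                        # Householder direction
    if np.linalg.norm(w) < 1e-12:                       # eps already equals e1
        return h1
    w /= np.linalg.norm(w)
    return h1 - 2 * np.dot(w, h1) * w                   # apply U = I - 2ww^T
```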
A.3 Algorithms for Computing Markovian Sliced Wasserstein Distances

We first start with the general computation of MSW in Algorithm 6. For the random walk transition in rMSW, we replace the line $\theta_{lt} \sim \sigma_t(\theta_t|\theta_{lt-1})$ with $\theta_{lt} \sim \text{vMF}(\theta_t|\epsilon = \theta_{lt-1}, \kappa)$ (Algorithm 5) with the concentration hyperparameter κ. For the orthogonal-based transition in oMSW, we obtain $\theta_{lt} \sim U(S^{d-1}_{\theta_{lt-1}})$ by first sampling $\theta'_{lt} \sim U(S^{d-1})$, then setting $\theta_{lt} = \theta'_{lt} - \frac{\langle\theta_{lt-1}, \theta'_{lt}\rangle}{\langle\theta_{lt-1}, \theta_{lt-1}\rangle}\theta_{lt-1}$, and then normalizing $\theta_{lt} = \frac{\theta_{lt}}{\|\theta_{lt}\|_2}$. For the deterministic input-awared transition in iMSW, we set $\theta_{lt} = \theta_{lt-1} + \eta\nabla_{\theta_{lt-1}} W_p(\theta_{lt-1}\sharp\mu, \theta_{lt-1}\sharp\nu)$ and then normalize $\theta_{lt} = \frac{\theta_{lt}}{\|\theta_{lt}\|_2}$. For the probabilistic input-awared transition in viMSW, $\theta_{lt} \sim \text{vMF}(\theta_t|\epsilon = \text{Proj}_{S^{d-1}}\theta'_{lt}, \kappa)$ with $\theta'_{lt} = \theta_{lt-1} + \eta\nabla_{\theta_{lt-1}} W_p(\theta_{lt-1}\sharp\mu, \theta_{lt-1}\sharp\nu)$.
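As a small illustration, here is a sketch of the orthogonal-based transition, assuming $\theta_{lt-1}$ has unit norm so that the denominator $\langle\theta_{lt-1}, \theta_{lt-1}\rangle$ equals one:

```python
import numpy as np

def orthogonal_transition(theta_prev, rng=None):
    """oMSW transition: a uniform direction orthogonal to the unit vector theta_prev."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.standard_normal(theta_prev.shape[0])
    theta = theta - (theta_prev @ theta) * theta_prev   # project out the theta_prev component
    return theta / np.linalg.norm(theta)                # renormalize onto the sphere
```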
A.4 Burned Thinned Markovian Sliced Wasserstein Distance

We continue the discussion on burned-thinned MSW from Section 3.3. We first start with the Monte Carlo estimation of burned-thinned MSW.

Monte Carlo Estimation: We sample $\theta_{11}, \ldots, \theta_{L1} \sim \sigma_1(\theta_1)$ for L ≥ 1, then we sample $\theta_{lt} \sim \sigma_t(\theta_t|\theta_{lt-1})$ for t = 1, . . . , T and l = 1, . . . , L. We then obtain the samples $\theta'_{lt}$ by filtering out t < M and t%N ≠ 0 from the set $\{\theta_{lt}\}$ for l = 1, . . . , L and t = 1, . . . , T.
Algorithm 6 Markovian sliced Wasserstein distance
Input: Probability measures µ, ν, the dimension d, the order p, the number of projections L, and the number of timesteps T.
for l = 1 to L do
    Draw $\theta_{l0} \sim \sigma(\theta_0)$
    for t = 1 to T − 1 do
        Draw $\theta_{lt} \sim \sigma_t(\theta_t|\theta_{lt-1})$
    end for
end for
Return: $\left(\frac{1}{LT}\sum_{l=1}^{L}\sum_{t=1}^{T} W_p^p(\theta_{lt}\sharp\mu, \theta_{lt}\sharp\nu)\right)^{\frac{1}{p}}$
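Putting the pieces together, here is a NumPy sketch of Algorithm 6 instantiated with the deterministic input-awared (iMSW) transition for p = 2; the sub-gradient of the one-dimensional $W_2^2$ reuses the sorted-coupling form from the Max-SW sketch above, and all names are ours.

```python
import numpy as np

def imsw(X, Y, L=2, T=5, eta=0.1, p=2, rng=None):
    """Sketch of Algorithm 6 with the iMSW transition (p = 2 gradients)."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(L):
        theta = rng.standard_normal(X.shape[1])
        theta /= np.linalg.norm(theta)                 # prior draw on the sphere
        for _ in range(T):
            u, v = np.sort(X @ theta), np.sort(Y @ theta)
            total += np.mean(np.abs(u - v) ** p)       # accumulate W_p^p along the chain
            i, j = np.argsort(X @ theta), np.argsort(Y @ theta)
            diff = X[i] @ theta - Y[j] @ theta
            grad = 2.0 * (diff[:, None] * (X[i] - Y[j])).mean(axis=0)
            theta = theta + eta * grad                  # input-awared transition
            theta /= np.linalg.norm(theta)
    return (total / (L * T)) ** (1.0 / p)
```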
The Monte Carlo approximation of the burned-thinned Markovian sliced Wasserstein distance is:
$$\widehat{\mathrm{MSW}}_{p,T,N,M}(\mu, \nu) = \left(\frac{N}{L(T-M)}\sum_{l=1}^{L}\sum_{t=1}^{(T-M)/N} W_p^p\left(\theta'_{lt}\sharp\mu, \theta'_{lt}\sharp\nu\right)\right)^{\frac{1}{p}}. \quad (9)$$
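Assuming the per-timestep projected distances from Algorithm 6 have been stored in an (L, T) array, the burning and thinning of Equation (9) reduces to a boolean filter; the mean over the kept entries matches the N/(L(T − M)) normalization when T − M is divisible by N. This is a sketch with names of our own choosing.

```python
import numpy as np

def burned_thinned_estimate(wpp, M=2, N=2, p=2):
    """Sketch of the estimator in Eq. (9).

    wpp: (L, T) array whose (l, t) entry is W_p^p(theta_{lt} mu, theta_{lt} nu)
    for chain l at timestep t = 1, ..., T, precomputed while running Algorithm 6.
    """
    L, T = wpp.shape
    t = np.arange(1, T + 1)
    keep = (t >= M) & (t % N == 0)       # burn the first M steps, keep every N-th one
    return wpp[:, keep].mean() ** (1.0 / p)
```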
Theoretical properties: We first state the following assumption. A2: Given T > M ≥ 0 and N ≥ 1, the prior distribution $\sigma_1(\theta_1)$ and the transition distributions $\sigma_t(\theta_t|\theta_{t-1})$ are chosen such that the marginals $\sigma_t(\theta_t) = \int_{t_-} \sigma(\theta_1, \ldots, \theta_t)\, dt_-$ exist for t ≥ M and t%N = 0, where $t_- = \{t' = 1, \ldots, T \mid t' \neq t\}$.

The assumption A2 can be easily satisfied by using the vMF transition, e.g., in the random walk transition and the probabilistic input-awared transition. From this assumption, we can derive theoretical properties of burned-thinned MSW, including topological properties and statistical complexity.
Proposition 4. For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, and dimension d ≥ 1, if A2 holds, the burned-thinned Markovian sliced Wasserstein distance $\mathrm{MSW}_{p,T,N,M}(\cdot, \cdot)$ is a valid metric on the space of probability measures $\mathcal{P}_p(\mathbb{R}^d)$; namely, it satisfies (i) non-negativity, (ii) symmetry, (iii) the triangle inequality, and (iv) identity.

The proof of Proposition 4 follows directly from the proof of Theorem 1 in Appendix B.1.

Proposition 5 (Weak Convergence). For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, and dimension d ≥ 1, if A2 holds, the convergence of probability measures in $\mathcal{P}_p(\mathbb{R}^d)$ under the burned-thinned Markovian sliced Wasserstein distance $\mathrm{MSW}_{p,T,N,M}(\cdot, \cdot)$ implies weak convergence of probability measures, and vice versa.

The proof of Proposition 5 follows directly from the proof of Theorem 2 in Appendix B.2.

Proposition 6. For any p ≥ 1 and dimension d ≥ 1, for any T ≥ 1, M ≥ 0, N ≥ 1 and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, $\mathrm{MSW}_{p,T,N,M}(\mu, \nu) \le \text{Max-SW}_p(\mu, \nu) \le W_p(\mu, \nu)$.

The proof of Proposition 6 follows directly from the proof of Proposition 1 in Appendix B.3.
Proposition 7 (Sample Complexity). Let $X_1, X_2, \ldots, X_n$ be i.i.d. samples from a probability measure µ supported on a compact set of $\mathbb{R}^d$. We denote the empirical measure $\mu_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{X_i}$. Then, for any p ≥ 1 and T ≥ 1, M ≥ 0, N ≥ 1, there exists a universal constant C > 0 such that
$$\mathbb{E}[\mathrm{MSW}_{p,T,N,M}(\mu_n, \mu)] \le C\sqrt{(d+1)\log n / n},$$
where the outer expectation is taken with respect to the data $X_1, X_2, \ldots, X_n$.

The proof of Proposition 7 follows directly from the proof of Proposition 2 in Appendix B.4.
Proposition 8 (Monte Carlo error). For any p ≥ 1, T ≥ 1, M ≥ 0, N ≥ 1, dimension d ≥ 1, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\mathbb{E}\left|\widehat{\mathrm{MSW}}^p_{p,T,N,M}(\mu, \nu) - \mathrm{MSW}^p_{p,T,N,M}(\mu, \nu)\right| \le \frac{\sqrt{N}}{\sqrt{TL(T-M)}}\sum_{l=1}^{L}\mathrm{Var}\left[\sum_{t=1}^{(T-M)/N} W_p^p\left(\theta'_t\sharp\mu, \theta'_t\sharp\nu\right)\right]^{\frac{1}{2}},$$
where the variance is with respect to $\sigma(\theta'_1, \ldots, \theta'_{(T-M)/N})$.

The proof of Proposition 8 follows directly from the proof of Proposition 3 in Appendix B.5.
A.5 Discussions on Related Works

K-SW is an autoregressive decomposition: In MSW, we assume that the joint distribution over projecting directions has the first-order Markov structure $\sigma(\theta_1, \ldots, \theta_T) = \sigma_1(\theta_1)\prod_{t=2}^{T}\sigma_t(\theta_t|\theta_{t-1})$. However, we can consider the full autoregressive decomposition $\sigma(\theta_1, \ldots, \theta_T) = \sigma_1(\theta_1)\prod_{t=2}^{T}\sigma_t(\theta_t|\theta_1, \ldots, \theta_{t-1})$. Let T = K in K-SW; then the transition distribution used in K-SW is $\sigma_t(\theta_t|\theta_1, \ldots, \theta_{t-1}) = \text{Gram-Schmidt}_{\theta_1,\ldots,\theta_{t-1}}\sharp U(S^{d-1})$, where $\text{Gram-Schmidt}_{\theta_1,\ldots,\theta_{t-1}}(\theta_t)$ denotes the Gram-Schmidt update applied to $\theta_t$.

Generalization of Max-K-SW: Similar to Max-SW, we can derive a Markovian-based K-sliced Wasserstein distance that generalizes the idea of the projected gradient ascent update in Max-K-SW. However, this distance considers the transition on the Stiefel manifold instead of the unit hypersphere; hence, it will be more computationally expensive. Moreover, orthogonality might not be a good constraint. Therefore, the generalization of Max-K-SW might not have many advantages.

Beyond the projected sub-gradient ascent update: In the input-awared transition for MSW, we utilize the projected sub-gradient update as the transition function to create a new projecting direction. Therefore, we could use other optimization techniques, such as momentum and adaptive step sizes, to create the transition function. We leave the investigation of this direction to future work.

Applications to other sliced Wasserstein variants: The Markovian approach can be applied to other variants of sliced Wasserstein distances, e.g., generalized sliced Wasserstein [26], augmented sliced Wasserstein distance [10], projected robust Wasserstein (PRW) [50, 32, 22] (k > 1 dimensional projection), convolution sliced Wasserstein [43], sliced partial optimal transport [6, 2], hierarchical sliced Wasserstein [48], and so on.
Markovian sliced Wasserstein distances in other applications: We can apply MSW to the setting in [31], which is an implementation technique that utilizes both RAM and GPU memory for training sliced Wasserstein generative models. MSW can also replace the sliced Wasserstein distance in pooling in [38]. Similarly, MSW can be used in applications where the sliced Wasserstein distance appears, e.g., clustering [28], Bayesian inference [39, 64], domain adaptation [63], and so on.

B Proofs
B.1 Proof of Theorem 1

(i), (ii): The MSW is an expectation of the one-dimensional Wasserstein distance; hence, the non-negativity and symmetry of MSW follow directly from the non-negativity and symmetry of the Wasserstein distance.

(iii) From the definition of MSW in Definition 1, given three probability measures $\mu_1, \mu_2, \mu_3 \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\begin{aligned}
\mathrm{MSW}_{p,T}(\mu_1, \mu_3) &= \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_3)\right]\right)^{\frac{1}{p}} \\
&\le \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T}\left(W_p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_2) + W_p(\theta_t\sharp\mu_2, \theta_t\sharp\mu_3)\right)^p\right]\right)^{\frac{1}{p}} \\
&\le \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_1, \theta_t\sharp\mu_2)\right]\right)^{\frac{1}{p}} + \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu_2, \theta_t\sharp\mu_3)\right]\right)^{\frac{1}{p}} \\
&= \mathrm{MSW}_{p,T}(\mu_1, \mu_2) + \mathrm{MSW}_{p,T}(\mu_2, \mu_3),
\end{aligned}$$
where the first inequality is due to the triangle inequality of the Wasserstein distance and the second inequality is due to the Minkowski inequality. This completes the proof of the triangle inequality.
(iv) We need to show that $\mathrm{MSW}_{p,T}(\mu, \nu) = 0$ if and only if µ = ν. First, from the definition of MSW, µ = ν directly implies $\mathrm{MSW}_{p,T}(\mu, \nu) = 0$. For the reverse direction, we use the same proof technique as in [8]. If $\mathrm{MSW}_{p,T}(\mu, \nu) = 0$, we have $\int_{S^{(d-1)\otimes T}} \frac{1}{T}\sum_{t=1}^{T} W_p(\theta_t\sharp\mu, \theta_t\sharp\nu)\, d\sigma(\theta_{1:T}) = 0$. If A1 holds, namely, the prior distribution $\sigma_1(\theta_1)$ is supported on the whole unit hypersphere or there exists a transition distribution $\sigma_t(\theta_t|\theta_{t-1})$ supported on the whole unit hypersphere, we have $W_p(\theta\sharp\mu, \theta\sharp\nu) = 0$ for σ-a.e. $\theta \in S^{d-1}$, where σ denotes the prior or the transition distribution that satisfies assumption A1. From the identity property of the Wasserstein distance, we obtain $\theta\sharp\mu = \theta\sharp\nu$ for σ-a.e. $\theta \in S^{d-1}$. Therefore, for any $t \in \mathbb{R}$ and $\theta \in S^{d-1}$, we have:
$$\mathcal{F}[\mu](t\theta) = \int_{\mathbb{R}^d} e^{-it\langle\theta, x\rangle}\, d\mu(x) = \int_{\mathbb{R}} e^{-itz}\, d\theta\sharp\mu(z) = \mathcal{F}[\theta\sharp\mu](t) = \mathcal{F}[\theta\sharp\nu](t) = \int_{\mathbb{R}} e^{-itz}\, d\theta\sharp\nu(z) = \int_{\mathbb{R}^d} e^{-it\langle\theta, x\rangle}\, d\nu(x) = \mathcal{F}[\nu](t\theta),$$
where $\mathcal{F}[\gamma](w) = \int_{\mathbb{R}^{d'}} e^{-i\langle w, x\rangle}\, d\gamma(x)$ denotes the Fourier transform of $\gamma \in \mathcal{P}(\mathbb{R}^{d'})$. By the injectivity of the Fourier transform, we obtain µ = ν, which concludes the proof.
B.2 Proof of Theorem 2

Our goal is to show that for any sequence of probability measures $(\mu_k)_{k\in\mathbb{N}}$ and µ in $\mathcal{P}_p(\mathbb{R}^d)$, $\lim_{k\to+\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$ if and only if for any continuous and bounded function $f: \mathbb{R}^d \to \mathbb{R}$, $\lim_{k\to+\infty} \int f\, d\mu_k = \int f\, d\mu$. The proof follows the techniques in [41]. We first state the following lemma.

Lemma 1. For any p ≥ 1, T ≥ 1, and dimension d ≥ 1, if A1 holds and a sequence of probability measures $(\mu_k)_{k\in\mathbb{N}}$ satisfies $\lim_{k\to+\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$ with µ in $\mathcal{P}_p(\mathbb{R}^d)$, there exists an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$ such that the subsequence $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to µ.

Proof. We are given that $\lim_{k\to+\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$; therefore, $\lim_{k\to\infty} \int_{S^{(d-1)\otimes T}} \frac{1}{T}\sum_{t=1}^{T} W_p(\theta_t\sharp\mu_k, \theta_t\sharp\mu)\, d\sigma(\theta_{1:T}) = 0$. If A1 holds, namely, the prior distribution $\sigma_1(\theta_1)$ is supported on the whole unit hypersphere or there exists a transition distribution $\sigma_t(\theta_t|\theta_{t-1})$ supported on the whole unit hypersphere, we have
$$\lim_{k\to\infty} \int_{S^{d-1}} W_p(\theta\sharp\mu_k, \theta\sharp\mu)\, d\sigma(\theta) = 0,$$
where σ denotes the prior or the transition distribution that satisfies assumption A1. From Theorem 2.2.5 in [3], there exists an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$ such that $\lim_{k\to\infty} W_p(\theta\sharp\mu_{\varphi(k)}, \theta\sharp\mu) = 0$ for σ-a.e. $\theta \in S^{d-1}$. Since convergence in the Wasserstein distance of order p implies weak convergence in $\mathcal{P}_p(\mathbb{R}^d)$ [61], $(\theta\sharp\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to $\theta\sharp\mu$ for σ-a.e. $\theta \in S^{d-1}$.
Let $\Phi_\mu(v) = \int_{\mathbb{R}^d} e^{i\langle v, w\rangle}\, d\mu(w)$ be the characteristic function of $\mu \in \mathcal{P}_p(\mathbb{R}^d)$. Weak convergence implies the convergence of characteristic functions (Theorem 4.3 in [24]): $\lim_{k\to\infty} \Phi_{\theta\sharp\mu_{\varphi(k)}}(s) = \Phi_{\theta\sharp\mu}(s)$ for all $s \in \mathbb{R}$, for σ-a.e. $\theta \in S^{d-1}$. Therefore, $\lim_{k\to\infty} \Phi_{\mu_{\varphi(k)}}(z) = \Phi_\mu(z)$ for almost every $z \in \mathbb{R}^d$.

For any γ > 0 and a continuous function $f: \mathbb{R}^d \to \mathbb{R}$ with compact support, we denote $f_\gamma(x) = f * g_\gamma(x) = (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d} f(x - z)\exp\left(-\|z\|^2/(2\gamma^2)\right)dz$, where $g_\gamma$ is the density function of $N(0, \gamma I_d)$.
We have:
$$\begin{aligned}
\int_{\mathbb{R}^d} f_\gamma(z)\, d\mu_{\varphi(k)}(z) &= \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\, g_\gamma(z-w)\, dw\, d\mu_{\varphi(k)}(z) \\
&= \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\, (2\pi\gamma^2)^{-d/2}\exp(-\|z-w\|^2/(2\gamma^2))\, dw\, d\mu_{\varphi(k)}(z) \\
&= (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\int_{\mathbb{R}^d} e^{i\langle z-w, x\rangle} g_{1/\gamma}(x)\, dx\, dw\, d\mu_{\varphi(k)}(z) \\
&= (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\int_{\mathbb{R}^d} e^{-i\langle w, x\rangle} e^{i\langle z, x\rangle} g_{1/\gamma}(x)\, dx\, dw\, d\mu_{\varphi(k)}(z) \\
&= (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\, e^{-i\langle w, x\rangle} g_{1/\gamma}(x)\int_{\mathbb{R}^d} e^{i\langle z, x\rangle}\, d\mu_{\varphi(k)}(z)\, dx\, dw \\
&= (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(w)\, e^{-i\langle w, x\rangle} g_{1/\gamma}(x)\, \Phi_{\mu_{\varphi(k)}}(x)\, dx\, dw \\
&= (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d} \mathcal{F}[f](x)\, g_{1/\gamma}(x)\, \Phi_{\mu_{\varphi(k)}}(x)\, dx,
\end{aligned}$$
where the third equality is due to the fact that $\int_{\mathbb{R}^d} e^{i\langle z-w, x\rangle} g_{1/\gamma}(x)\, dx = \exp(-\|z-w\|^2/(2\gamma^2))$, and $\mathcal{F}[f](w) = \int_{\mathbb{R}^{d'}} f(x) e^{-i\langle w, x\rangle}\, dx$ denotes the Fourier transform of the bounded function f.
Similarly, by the same chain of equalities, we have:
$$\int_{\mathbb{R}^d} f_\gamma(z)\, d\mu(z) = (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d} \mathcal{F}[f](x)\, g_{1/\gamma}(x)\, \Phi_\mu(x)\, dx.$$
Since f is assumed to have compact support, $\mathcal{F}[f]$ exists and is bounded by $\int_{\mathbb{R}^d} |f(w)|\, dw < +\infty$. Hence, for any $k \in \mathbb{N}$ and $x \in \mathbb{R}^d$, we have $\left|\mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x)\right| \le g_{1/\gamma}(x)\int_{\mathbb{R}^d} |f(w)|\, dw$ and $\left|\mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_\mu(x)\right| \le g_{1/\gamma}(x)\int_{\mathbb{R}^d} |f(w)|\, dw$. Using the proved result $\lim_{k\to\infty} \Phi_{\mu_{\varphi(k)}}(z) = \Phi_\mu(z)$ and Lebesgue's dominated convergence theorem, we obtain
$$\lim_{k\to\infty}\int_{\mathbb{R}^d} f_\gamma(z)\, d\mu_{\varphi(k)}(z) = \lim_{k\to\infty}(2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_{\mu_{\varphi(k)}}(x)\, dx = (2\pi\gamma^2)^{-d/2}\int_{\mathbb{R}^d} \mathcal{F}[f](x) g_{1/\gamma}(x) \Phi_\mu(x)\, dx = \int_{\mathbb{R}^d} f_\gamma(z)\, d\mu(z).$$
Moreover, we have:
$$\lim_{\gamma\to 0}\limsup_{k\to+\infty}\left|\int_{\mathbb{R}^d} f(z)\, d\mu_{\varphi(k)}(z) - \int_{\mathbb{R}^d} f(z)\, d\mu(z)\right| \le \lim_{\gamma\to 0}\limsup_{k\to+\infty}\left(2\sup_{z\in\mathbb{R}^d}|f(z) - f_\gamma(z)| + \left|\int_{\mathbb{R}^d} f_\gamma(z)\, d\mu_{\varphi(k)}(z) - \int_{\mathbb{R}^d} f_\gamma(z)\, d\mu(z)\right|\right) = \lim_{\gamma\to 0} 2\sup_{z\in\mathbb{R}^d}|f(z) - f_\gamma(z)| = 0,$$
which implies that $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ converges weakly to µ.
We now continue the proof of Theorem 2. We first show that if $\lim_{k\to\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$, then $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to µ. We consider a sequence $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ such that $\lim_{k\to\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$, and we suppose $(\mu_{\varphi(k)})_{k\in\mathbb{N}}$ does not converge weakly to µ. Therefore, letting $d_P$ be the Lévy-Prokhorov metric, $\lim_{k\to\infty} d_P(\mu_k, \mu) \neq 0$, which implies that there exist ε > 0 and a subsequence $(\mu_{\psi(k)})_{k\in\mathbb{N}}$ with an increasing function $\psi: \mathbb{N} \to \mathbb{N}$ such that for any $k \in \mathbb{N}$: $d_P(\mu_{\psi(k)}, \mu) \ge \varepsilon$. However, we have
$$\mathrm{MSW}_{p,T}(\mu, \nu) = \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu)\right]\right)^{\frac{1}{p}} \ge \mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p(\theta_t\sharp\mu, \theta_t\sharp\nu)\right] \ge \mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_1(\theta_t\sharp\mu, \theta_t\sharp\nu)\right] = \mathrm{MSW}_{1,T}(\mu, \nu),$$
by the Hölder inequality, for $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$. Therefore, $\lim_{k\to\infty} \mathrm{MSW}_{1,T}(\mu_{\psi(k)}, \mu) = 0$, which implies that there exists a subsequence $(\mu_{\varphi(\psi(k))})_{k\in\mathbb{N}}$ with an increasing function $\varphi: \mathbb{N} \to \mathbb{N}$ such that $(\mu_{\varphi(\psi(k))})_{k\in\mathbb{N}}$ converges weakly to µ by Lemma 1. Hence, $\lim_{k\to\infty} d_P(\mu_{\varphi(\psi(k))}, \mu) = 0$, which contradicts our assumption. We conclude that if $\lim_{k\to\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$, then $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to µ.
Now, we show that if $(\mu_k)_{k\in\mathbb{N}}$ converges weakly to µ, then $\lim_{k\to\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$. By the continuous mapping theorem, $(\theta\sharp\mu_k)_{k\in\mathbb{N}}$ converges weakly to $\theta\sharp\mu$ for any $\theta \in S^{d-1}$. Since weak convergence implies convergence under the Wasserstein distance [61], we obtain $\lim_{k\to\infty} W_p(\theta\sharp\mu_k, \theta\sharp\mu) = 0$. Moreover, the Wasserstein distance is also bounded; hence, by the bounded convergence theorem:
$$\lim_{k\to\infty} \mathrm{MSW}^p_{p,T}(\mu_k, \mu) = \mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T}\lim_{k\to\infty} W_p^p(\theta_t\sharp\mu_k, \theta_t\sharp\mu)\right] = \mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} 0\right] = 0.$$
By the continuous mapping theorem with the function $x \to x^{1/p}$, we obtain $\lim_{k\to\infty} \mathrm{MSW}_{p,T}(\mu_k, \mu) = 0$, which completes the proof.
B.3 Proof of Proposition 1

(i) We recall the definition of Max-SW:
$$\text{Max-SW}_p(\mu, \nu) = \max_{\theta\in S^{d-1}} W_p(\theta\sharp\mu, \theta\sharp\nu).$$
Let $\theta^* = \text{argmax}_{\theta\in S^{d-1}} W_p(\theta\sharp\mu, \theta\sharp\nu)$. From Definition 1, for any p ≥ 1, T ≥ 1, dimension d ≥ 1, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, we have:
$$\mathrm{MSW}_{p,T}(\mu, \nu) = \left(\mathbb{E}_{(\theta_{1:T})\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu)\right]\right)^{\frac{1}{p}} \le \left(\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta^*\sharp\mu, \theta^*\sharp\nu)\right)^{\frac{1}{p}} = W_p(\theta^*\sharp\mu, \theta^*\sharp\nu) = \text{Max-SW}_p(\mu, \nu).$$
Furthermore, by applying the Cauchy-Schwartz inequality, we have:
$$\begin{aligned}
\text{Max-SW}_p^p(\mu, \nu) &= \max_{\theta\in S^{d-1}}\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^d\times\mathbb{R}^d}\left|\theta^\top x - \theta^\top y\right|^p d\pi(x, y) \\
&\le \max_{\theta\in S^{d-1}}\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^d\times\mathbb{R}^d}\|\theta\|^p\|x - y\|^p d\pi(x, y) \\
&= \inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^d\times\mathbb{R}^d}\|x - y\|^p d\pi(x, y) = W_p^p(\mu, \nu),
\end{aligned}$$
since $\|\theta\| = 1$, which completes the proof.

(ii) This result can be directly obtained from the definitions of MSW and SW.
B.4 Proof of Proposition 2

In this proof, we denote $\Theta \subset \mathbb{R}^d$ as the compact support of the probability measure with distribution P. From Proposition 1, we find that
$$\mathbb{E}[\mathrm{MSW}_{p,T}(\mu_n, \mu)] \le \mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)].$$
Therefore, the proposition follows as long as we can demonstrate that
$$\mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)] \le C\sqrt{(d+1)\log_2 n / n},$$
where C > 0 is some universal constant and the outer expectation is taken with respect to the data. The proof of this result follows from the proof of Proposition 3 in [43]. Here, we provide the proof for completeness. By defining $F_{n,\theta}$ and $F_\theta$ as the cumulative distribution functions of $\theta\sharp\mu_n$ and $\theta\sharp\mu$, the closed-form expression of the Wasserstein distance in one dimension leads to the following equations and inequalities:
$$\text{Max-SW}_p^p(\mu_n, \mu) = \max_{\theta\in S^{d-1}}\int_0^1 |F^{-1}_{n,\theta}(u) - F^{-1}_\theta(u)|^p\, du = \max_{\theta\in\mathbb{R}^d:\|\theta\|=1}\int_0^1 |F^{-1}_{n,\theta}(u) - F^{-1}_\theta(u)|^p\, du \le \text{diam}(\Theta)\max_{\theta\in\mathbb{R}^d:\|\theta\|\le 1}|F_{n,\theta}(x) - F_\theta(x)|^p.$$
We can check that
$$\max_{\theta\in\mathbb{R}^d:\|\theta\|\le 1}|F_{n,\theta}(x) - F_\theta(x)| = \sup_{B\in\mathcal{B}}|P_n(B) - P(B)|,$$
where $\mathcal{B}$ is the set of half-spaces $\{z \in \mathbb{R}^d : \theta^\top z \le x\}$ for all $\theta \in \mathbb{R}^d$ such that $\|\theta\| \le 1$. From [62], we can show that the Vapnik-Chervonenkis (VC) dimension of $\mathcal{B}$ is at most d + 1. Therefore, the following inequality holds:
$$\sup_{B\in\mathcal{B}}|P_n(B) - P(B)| \le \sqrt{\frac{32}{n}\left[(d+1)\log_2(n+1) + \log_2(8/\delta)\right]}$$
with probability at least 1 − δ. Putting the above results together leads to
$$\mathbb{E}[\text{Max-SW}_p(\mu_n, \mu)] \le C\sqrt{(d+1)\log_2 n / n},$$
where C > 0 is some universal constant. As a consequence, we obtain the conclusion of the proposition.
B.5 Proof of Proposition 3

For any p ≥ 1, T ≥ 1, dimension d ≥ 1, and $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, using Hölder's inequality, we have:
$$\begin{aligned}
\mathbb{E}\left|\widehat{\mathrm{MSW}}^p_{p,T}(\mu, \nu) - \mathrm{MSW}^p_{p,T}(\mu, \nu)\right| &\le \left(\mathbb{E}\left|\widehat{\mathrm{MSW}}^p_{p,T}(\mu, \nu) - \mathrm{MSW}^p_{p,T}(\mu, \nu)\right|^2\right)^{\frac{1}{2}} \\
&= \left(\mathbb{E}\left|\frac{1}{TL}\sum_{t=1}^{T}\sum_{l=1}^{L} W_p^p(\theta_{tl}\sharp\mu, \theta_{tl}\sharp\nu) - \mathbb{E}_{\theta_{1:T}\sim\sigma(\theta_{1:T})}\left[\frac{1}{T}\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu)\right]\right|^2\right)^{\frac{1}{2}} \\
&= \mathrm{Var}\left[\frac{1}{TL}\sum_{t=1}^{T}\sum_{l=1}^{L} W_p^p(\theta_{tl}\sharp\mu, \theta_{tl}\sharp\nu)\right]^{\frac{1}{2}} \\
&= \frac{1}{\sqrt{TL}}\sum_{l=1}^{L}\mathrm{Var}\left[\sum_{t=1}^{T} W_p^p(\theta_t\sharp\mu, \theta_t\sharp\nu)\right]^{\frac{1}{2}},
\end{aligned}$$
which completes the proof.
Algorithm 7 Gradient flow with the Euler scheme
Input: the start distribution $\mu = \frac{1}{n}\sum_{i=1}^{n}\delta_{X_i}$, the target distribution $\nu = \frac{1}{n}\sum_{i=1}^{n}\delta_{Y_i}$, number of Euler iterations T (abuse of notation), Euler step size η (abuse of notation), a metric D.
for t = 1 to T do
    $X = X - n \cdot \eta\nabla_X D(P_X, P_Y)$
end for
Output: $\mu = \frac{1}{n}\sum_{i=1}^{n}\delta_{X_i}$
Table 3: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in the gradient flow application.

Distances | Wasserstein-2 (↓) | Time (↓) | Distances | Wasserstein-2 (↓) | Time (↓)
SW (L=10) | 0.0113 × 10−2 | 0.85 | SW (L=100) | 0.0096 × 10−2 | 4.32
Max-SW (T=5) | 0.0231 × 10−2 | 1.02 | Max-SW (T=100) | 0.0083 × 10−2 | 10.46
K-SW (L=5,K=2) | 0.0104 × 10−2 | 0.92 | K-SW (L=20,K=2) | 0.0096 × 10−2 | 1.97
Max-K-SW (K=2,T=5) | 0.0152 × 10−2 | 1.41 | Max-K-SW (K=2,T=100) | 0.0083 × 10−2 | 10.46
rMSW (L=2,T=5,κ=10) | 0.0109 × 10−2 | 2.11 | rMSW (L=2,T=5,κ=100) | 0.0141 × 10−2 | 17.98
iMSW (L=1,T=5) | 0.0109 × 10−2 | 1.07 | iMSW (L=5,T=5) | 0.0055 × 10−2 | 2.44
iMSW (L=2,T=10) | 0.0052 × 10−2 | 2.79 | iMSW (L=5,T=2) | 0.0071 × 10−2 | 1.14
iMSW (L=2,T=5,M=4) | 0.0101 × 10−2 | 1.2 | iMSW (L=2,T=5,M=2) | 0.0055 × 10−2 | 1.25
iMSW (L=2,T=5,M=0,N=2) | 0.0066 × 10−2 | 1.28 | iMSW (L=2,T=5,M=2,N=2) | 0.0072 × 10−2 | 1.19
viMSW (L=2,T=5,κ=10) | 0.0052 × 10−2 | 3.12 | viMSW (L=2,T=5,κ=100) | 0.0053 × 10−2 | 2.76
C Additional Experiments

In this section, we present the details of the experimental frameworks and additional experiments on gradient flows, color transfer, and deep generative modeling which are not in the main paper.

C.1 Gradient Flows

Framework: We have discussed the framework of gradient flows in detail in Section 4.1 in the main paper. Here, we summarize the Euler scheme for solving the gradient flow in Algorithm 7.
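A minimal sketch of Algorithm 7 is given below; `distance_grad` is a hypothetical helper returning $\nabla_X D(P_X, P_Y)$ for the chosen (sliced) distance, e.g., obtained by automatic differentiation through one of the estimators sketched in Appendix A.

```python
import numpy as np

def euler_gradient_flow(X, Y, distance_grad, T=300, eta=1e-3):
    """Euler discretization of the flow min_X D(P_X, P_Y) (Algorithm 7)."""
    X = np.array(X, dtype=float)
    n = X.shape[0]
    for _ in range(T):
        X = X - n * eta * distance_grad(X, Y)   # Euler step of the flow
    return X
```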
Visualization of gradient flows: We show the visualization of gradient flows from all distances (Table 1) in Figure 5. Overall, we observe that the quality of the flows is consistent with the quantitative Wasserstein-2 score, which is computed using [18]. From the figures, we see that iMSW and viMSW help the flows converge very fast. Namely, the Wasserstein-2 scores at step 200 of iMSW and viMSW are much lower than those of the other distances. oMSW, with L = 5, T = 2, achieves a result comparable to SW, K-SW, and Max-SW while being faster. The random walk transition does not work well in rMSW with the concentration parameter κ = 50.

Studies on hyper-parameters: We run gradient flows with different values of the hyper-parameters and report the Wasserstein-2 scores and computational time in Table 3. From the table and Figure 5, we see that SW with L = 10 is worse than oMSW, iMSW, and viMSW with L = 2, T = 5 (10 total projections). Increasing the number of projections to 100 makes SW better; however, its Wasserstein-2 score is still higher than the scores of iMSW and viMSW while its computational time is larger.
[Figure 5 shows rows of gradient-flow panels at steps 0, 200, and 300 for SW (L=30), Max-SW (T=30), K-SW (L=15, K=2), Max-K-SW (K=2, T=15), rMSW (L=2, T=5, κ=50), oMSW (L=5, T=2), iMSW (L=2, T=5), and viMSW (L=2, T=5, κ=50).]

Figure 5: The figures show the gradient flows from the empirical distribution over the color points to the empirical distribution over the S-shape points produced by different distances. The corresponding Wasserstein-2 distance between the empirical distribution at the current step and the S-shape distribution, together with the computational time (in seconds) to reach the step, is reported at the top of each figure.
Similarly, Max-(K)-SW with T = 100 is better than Max-(K)-SW with T = 5 and T = 10; however, it is still worse than iMSW and viMSW in terms of both computation and performance. For burning and thinning, we see that the technique can help improve the computation considerably. More importantly, the burning and thinning techniques do not reduce the performance too much. For iMSW, increasing L and T leads to a better flow. For the same number of total projections, e.g., 10, L = 2, T = 5 is better than L = 5, T = 2. viMSW usually performs better than iMSW; however, its computation is worse due to the sampling complexity of the vMF distribution. We vary the concentration parameter κ ∈ {10, 50, 100} and find that κ = 50 is the best. Hence, this might suggest that a good balance between heading to the "max" projecting direction and exploring the space of projecting directions is the best strategy.
C.2 Color Transfer

Framework: In our experiments, we first compress the color palettes of the source image and the target image to 3000 colors by using K-Means clustering. After that, the color transfer application is conducted by using Algorithm 8, which is a modified version of the gradient flow algorithm, since the color palette contains only positive integers in {0, . . . , 255}. The flow can be seen as an incomplete transportation map that maps the source color palette to a color palette close to the target color palette. This is quite similar to the iterative distribution transfer algorithm [8]; however, the construction of the iterative map is different.
[Figure 6 shows the source and target images with transferred results for SW (L=45), Max-SW (T=45), K-SW (L=15, K=3), Max-K-SW (K=3, T=15), rMSW (L=3, T=5, κ=50), oMSW (L=3, T=5), iMSW (L=3, T=5), and viMSW (L=3, T=5, κ=50).]

Figure 6: The figures show the source image, the target image, and the transferred images from different distances. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, together with the computational time (in seconds), is reported at the top of each figure. The color palettes are given below the corresponding images.
Algorithm 8 Color Transfer
Input: source color palette $X \in \{0, \ldots, 255\}^{n\times 3}$, target color palette $Y \in \{0, \ldots, 255\}^{n\times 3}$, number of Euler iterations T (abuse of notation), Euler step size η (abuse of notation), a metric D.
for t = 1 to T do
    $X = X - n \cdot \eta\nabla_X D(P_X, P_Y)$
end for
$X = \text{round}(X, \{0, \ldots, 255\})$
Output: X
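Algorithm 8 is Algorithm 7 plus a final rounding step; a sketch under the same hypothetical `distance_grad` helper as in the gradient flow sketch:

```python
import numpy as np

def color_transfer(X, Y, distance_grad, T=300, eta=1e-3):
    """Gradient flow on color palettes with integer rounding (Algorithm 8)."""
    X = np.array(X, dtype=float)
    n = X.shape[0]
    for _ in range(T):
        X = X - n * eta * distance_grad(X, Y)
    return np.clip(np.rint(X), 0, 255).astype(np.uint8)   # round back into {0, ..., 255}
```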
Visualization of transferred images: We show the source image, the target image, and the corresponding transferred images from all distances in Figure 6 and Figure 7. The color palettes are given below the corresponding images. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, together with the computational time (in seconds), is reported at the top of each figure. First, we observe that the qualitative comparison (transferred images and color palettes) is consistent with the Wasserstein scores.
[Figure 7 shows a second pair of source and target images with transferred results for SW (L=45), Max-SW (T=45), K-SW (L=15, K=3), Max-K-SW (K=3, T=15), rMSW (L=3, T=5, κ=50), oMSW (L=3, T=5), iMSW (L=3, T=5), and viMSW (L=3, T=5, κ=50).]

Figure 7: The figures show the source image, the target image, and the transferred images from different distances. The corresponding Wasserstein-2 distance between the empirical distribution over the transferred color palette and the empirical distribution over the target color palette, together with the computational time (in seconds), is reported at the top of each figure. The color palettes are given below the corresponding images.
We observe that iMSW and viMSW produce transferred images closer to the target image in terms of color than the other distances. More importantly, iMSW and viMSW are faster than the other distances. Max-SW and Max-K-SW do not perform well in this application; namely, they are slow and give high Wasserstein distances. oMSW is comparable to SW and K-SW while being faster.
Studies on hyper-parameters: In addition to the results in Figure 6, we run color transfer with other settings of the distances in Table 4. From the table, increasing the number of projections L leads to a better result for SW and K-SW. However, they are still worse than iMSW and viMSW with a smaller number of projections.
Table 4: Summary of Wasserstein-2 scores and computational time in seconds (s) of different distances in the color transfer application.

Distances | Wasserstein-2 (↓) | Time (↓) | Distances | Wasserstein-2 (↓) | Time (↓)
SW (L=45) | 414.51 | 37.97 | SW (L=15) | 421.5 | 12.96
Max-SW (T=45) | 449.42 | 57.48 | Max-SW (T=15) | 450.37 | 19.03
K-SW (L=15,K=3) | 411.74 | 38.21 | K-SW (L=5,K=3) | 413.16 | 14.2
Max-K-SW (K=3,T=15) | 479.43 | 52.6 | Max-K-SW (K=3,T=5) | 510.43 | 17.46
rMSW (L=3,T=5,κ=50) | 444.35 | 15.65 | rMSW (L=3,T=5,κ=100) | 446.35 | 16.14
oMSW (L=3,T=5) | 415.06 | 14.17 | oMSW (L=3,T=15) | 414.29 | 38.51
iMSW (L=3,T=5) | 16.97 | 25.39 | iMSW (L=3,T=15) | 15.23 | 79.47
iMSW (L=5,T=5) | 21.63 | 39.82 | iMSW (L=5,T=3) | 24.02 | 22.27
iMSW (L=3,T=15,M=14) | 26.23 | 48.08 | iMSW (L=3,T=15,M=10) | 18.67 | 55.55
iMSW (L=3,T=15,M=0,N=2) | 16.6 | 62.66 | iMSW (L=3,T=15,M=10,N=2) | 19.2 | 50.1
viMSW (L=3,T=5,κ=50) | 16.48 | 29.27 | viMSW (L=3,T=5,κ=100) | 16.49 | 28.52
Similarly, increasing T improves Max-SW, Max-K-SW, and iMSW. As discussed in the main paper, the burning and thinning technique improves the computation and sometimes enhances the performance.
C.3 Deep Generative Models

Framework: We follow the generative modeling framework from [20, 42]. Here, we state an adapted formulation of the framework. We are given a data distribution $\mu \in \mathcal{P}(\mathcal{X})$ through its random samples (data). Our goal is to estimate a parametric distribution $\nu_\varphi$ that belongs to a family of distributions indexed by parameters φ in a parameter space Φ. Deep generative modeling constructs $\nu_\varphi$ via a pushforward measure. In particular, $\nu_\varphi$ is implicitly represented by pushing forward a random noise distribution $\nu_0 \in \mathcal{P}(\mathcal{Z})$, e.g., a standard multivariate Gaussian, through a parametric function $G_\varphi: \mathcal{Z} \to \mathcal{X}$ (a neural network with weights φ). To estimate φ (and hence $\nu_\varphi$), the expected distance estimator [57, 41] is used:
$$\text{argmin}_{\varphi\in\Phi}\, \mathbb{E}_{(X,Z)\sim\mu^{\otimes m}\otimes\nu_0^{\otimes m}}\left[D(P_X, P_{G_\varphi(Z)})\right],$$
where m ≥ 1, D can be any distance on the space of probability measures, $\mu^{\otimes m}$ is the product measure, namely, $X = (x_1, \ldots, x_m) \sim \mu^{\otimes m}$ is equivalent to $x_i \sim \mu$ for i = 1, . . . , m, and $P_X = \frac{1}{m}\sum_{i=1}^{m}\delta_{x_i}$. Similarly, $Z = (z_1, \ldots, z_m)$ with $z_i \sim \nu_0$ for i = 1, . . . , m, and $G_\varphi(Z)$ is the output of the neural network given the input mini-batch Z.
By using the Wasserstein distance, the sliced Wasserstein distance, and their variants as the distance D, we obtain the corresponding estimators. These estimators are sometimes known as mini-batch Wasserstein losses [16, 45, 46]. However, directly applying those estimators to natural image data cannot give perceptually good results [20, 15]. The reason is that the Wasserstein distance, sliced Wasserstein distances, and their variants require a ground metric as input, e.g., L2; however, those ground metrics are not meaningful on images. Therefore, previous works propose using a function that maps the original data space $\mathcal{X}$ to a feature space $\mathcal{F}$ where the L2 norm is meaningful [55]. We denote the feature function $F_\gamma: \mathcal{X} \to \mathcal{F}$. Now the estimator becomes:
$$\text{argmin}_{\varphi\in\Phi}\, \mathbb{E}_{(X,Z)\sim\mu^{\otimes m}\otimes\nu_0^{\otimes m}}\left[D(P_{F_\gamma(X)}, P_{F_\gamma(G_\varphi(Z))})\right].$$
The above optimization can be solved by the stochastic gradient descent algorithm with the following stochastic gradient estimator:
$$\nabla_\varphi\mathbb{E}_{(X,Z)\sim\mu^{\otimes m}\otimes\nu_0^{\otimes m}}\left[D(P_{F_\gamma(X)}, P_{F_\gamma(G_\varphi(Z))})\right] = \mathbb{E}_{(X,Z)\sim\mu^{\otimes m}\otimes\nu_0^{\otimes m}}\left[\nabla_\varphi D(P_{F_\gamma(X)}, P_{F_\gamma(G_\varphi(Z))})\right] \approx \frac{1}{K}\sum_{k=1}^{K}\nabla_\varphi D(P_{F_\gamma(X_k)}, P_{F_\gamma(G_\varphi(Z_k))}),$$
where $X_1, \ldots, X_K$ are drawn i.i.d. from $\mu^{\otimes m}$ and $Z_1, \ldots, Z_K$ are drawn i.i.d. from $\nu_0^{\otimes m}$.
There are several ways to estimate the feature function $F_\gamma$ in practice. In our experiments, we use the following objective [15]:
$$\min_\gamma\left\{\mathbb{E}_{X\sim\mu^{\otimes m}}\left[\min(0, -1 + H(F_\gamma(X)))\right] + \mathbb{E}_{Z\sim\nu_0^{\otimes m}}\left[\min(0, -1 - H(F_\gamma(G_\varphi(Z))))\right]\right\},$$
where $H: \mathcal{F} \to \mathbb{R}$. The above optimization problem is also solved by the stochastic gradient descent algorithm with the following gradient estimator:
$$\begin{aligned}
&\nabla_\gamma\left\{\mathbb{E}_{X\sim\mu^{\otimes m}}\left[\min(0, -1 + H(F_\gamma(X)))\right] + \mathbb{E}_{Z\sim\nu_0^{\otimes m}}\left[\min(0, -1 - H(F_\gamma(G_\varphi(Z))))\right]\right\} \\
&= \mathbb{E}_{X\sim\mu^{\otimes m}}\left[\nabla_\gamma\min(0, -1 + H(F_\gamma(X)))\right] + \mathbb{E}_{Z\sim\nu_0^{\otimes m}}\left[\nabla_\gamma\min(0, -1 - H(F_\gamma(G_\varphi(Z))))\right] \\
&\approx \frac{1}{K}\sum_{k=1}^{K}\nabla_\gamma\min(0, -1 + H(F_\gamma(X_k))) + \frac{1}{K}\sum_{k=1}^{K}\nabla_\gamma\min(0, -1 - H(F_\gamma(G_\varphi(Z_k)))),
\end{aligned}$$
where $X_1, \ldots, X_K$ are drawn i.i.d. from $\mu^{\otimes m}$ and $Z_1, \ldots, Z_K$ are drawn i.i.d. from $\nu_0^{\otimes m}$.
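As an illustration of the alternating scheme, here is a hedged PyTorch sketch of one training step with SW as the distance D. `G`, `F`, and `H` are assumed to be modules implementing $G_\varphi$, $F_\gamma$, and the scalar head H; the hinge objective is implemented, up to the sign convention above, as a loss to be minimized, and all names are ours rather than from any released codebase.

```python
import torch

def sw2_loss(u, v, n_proj=100):
    """Mini-batch SW_2^2 between two (m, d_f) feature batches, used as the distance D."""
    theta = torch.randn(u.shape[1], n_proj, device=u.device)
    theta = theta / theta.norm(dim=0, keepdim=True)      # random directions on the sphere
    pu = torch.sort(u @ theta, dim=0).values
    pv = torch.sort(v @ theta, dim=0).values
    return ((pu - pv) ** 2).mean()

def training_step(G, F, H, x, z, opt_g, opt_f):
    """One alternating update of the feature function and the generator."""
    # Feature update: ascend the hinge objective by descending its negation.
    opt_f.zero_grad()
    f_loss = -(torch.clamp(-1.0 + H(F(x)), max=0.0).mean()
               + torch.clamp(-1.0 - H(F(G(z).detach())), max=0.0).mean())
    f_loss.backward()
    opt_f.step()
    # Generator update: minimize the sliced distance between feature distributions.
    opt_g.zero_grad()
    g_loss = sw2_loss(F(x).detach(), F(G(z)))
    g_loss.backward()
    opt_g.step()
    return f_loss.item(), g_loss.item()
```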
Settings: We use the following neural networks for $G_\varphi$ and $F_\gamma$:
• CIFAR10:
  – $G_\varphi$: $z \in \mathbb{R}^{128}$ ($\sim \nu_0: N(0, 1)$) → 4 × 4 × 256 (Dense, Linear) → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → BN, ReLU → 3 × 3 conv, 3 Tanh.
  – $F_{\gamma_1}$: $x \in [-1, 1]^{32\times 32\times 3}$ → ResBlock down 128 → ResBlock down 128 → ResBlock down 128 → ResBlock 128 → ResBlock 128.
  – $F_{\gamma_2}$: $x \in \mathbb{R}^{128\times 8\times 8}$ → ReLU → Global sum pooling (128) → 1 (Spectral normalization).
  – $F_\gamma(x) = (F_{\gamma_1}(x), F_{\gamma_2}(F_{\gamma_1}(x)))$ and $H(F_\gamma(x)) = F_{\gamma_2}(F_{\gamma_1}(x))$.
• CelebA:
  – $G_\varphi$: $z \in \mathbb{R}^{128}$ ($\sim \nu_0: N(0, 1)$) → 4 × 4 × 256 (Dense, Linear) → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → ResBlock up 256 → BN, ReLU → 3 × 3 conv, 3 Tanh.
  – $F_{\gamma_1}$: $x \in [-1, 1]^{32\times 32\times 3}$ → ResBlock down 128 → ResBlock down 128 → ResBlock down 128 → ResBlock 128 → ResBlock 128.
  – $F_{\gamma_2}$: $x \in \mathbb{R}^{128\times 8\times 8}$ → ReLU → Global sum pooling (128) → 1 (Spectral normalization).
  – $F_\gamma(x) = (F_{\gamma_1}(x), F_{\gamma_2}(F_{\gamma_1}(x)))$ and $H(F_\gamma(x)) = F_{\gamma_2}(F_{\gamma_1}(x))$.
[Figure 8 shows grids of generated CIFAR10 images for SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW.]

Figure 8: Random generated images of distances on CIFAR10.
Table 5: Summary of FID and IS scores of methods on CIFAR10 (32x32) and CelebA (64x64).

Method | CIFAR10 (32x32) FID (↓) | CIFAR10 (32x32) IS (↑) | CelebA (64x64) FID (↓)
iMSW (L=100,T=10,M=0,N=1) | 14.61±0.72 | 8.15±0.15 | 9.73±0.33
iMSW (L=100,T=10,M=9,N=1) | 14.16±1.11 | 8.17±0.07 | 9.10±0.34
iMSW (L=100,T=10,M=5,N=1) | 13.93±0.21 | 8.15±0.05 | 9.49±0.52
iMSW (L=100,T=10,M=0,N=2) | 14.33±0.32 | 8.15±0.06 | 8.99±0.64
iMSW (L=10,T=100,M=0,N=1) | 14.26±0.74 | 8.15±0.07 | 8.89±0.23
iMSW (L=10,T=100,M=99,N=1) | 14.50±0.70 | 8.12±0.08 | 9.55±0.35
iMSW (L=10,T=100,M=50,N=1) | 14.41±0.58 | 8.12±0.06 | 9.46±0.73
iMSW (L=10,T=100,M=0,N=2) | 14.65±0.01 | 8.11±0.06 | 9.49±0.39
For all datasets, the number of training iterations is set to 50000. We update the generator $G_\varphi$ every 5 iterations, while we update the feature function $F_\gamma$ every iteration. The mini-batch size m is set to 128 in all datasets. The learning rate for $G_\varphi$ and $F_\gamma$ is 0.0002, and the optimizer is Adam [25] with parameters (β1, β2) = (0, 0.9). We use the order p = 2 for all sliced Wasserstein variants. We use 50000 random samples from the estimated generative models $G_\varphi$ for computing the FID scores and the Inception scores. In evaluating FID scores, we use all training samples for computing the statistics of the datasets².

Generated images: We show generated images on CIFAR10 and CelebA from the generative models trained with different distances in Figure 8 and Figure 9, respectively. Overall, the images are visually consistent with the quantitative FID scores in Table 2.

Studies on hyperparameters: We run some additional settings of iMSW to investigate the performance of the burning and thinning technique and to compare the roles of L and T in Table 5.

² We evaluate the scores based on the code from https://github.com/GongXinyuu/sngan.pytorch.
[Figure 9 shows grids of generated CelebA images for SW, Max-SW, K-SW, Max-K-SW, rMSW, oMSW, iMSW, and viMSW.]

Figure 9: Random generated images of distances on CelebA.
First, we see that burning and thinning helps to improve the FID score and IS score on CIFAR10 and CelebA in the settings with L = 100, T = 10. It is worth noting that the original purpose of burning and thinning is to reduce computational and memory complexity; the side benefit of improving performance requires more investigation, which is left for future work. In addition, we find that for the same number of total projections, 1000, without burning and thinning, the setting L = 10, T = 100 is better than the setting L = 100, T = 10 on CIFAR10. However, the reverse happens on CelebA. Therefore, on different datasets, hyperparameter tuning might be required to find the best setting of the number of projections L and the number of timesteps T.
5NE2T4oBgHgl3EQfOgaA/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

5tE5T4oBgHgl3EQfPQ4_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78c5dfea9ac3ddc456312924dd22ee19d9202d3a6dbf72a6cc52c439cf4ed6a5
+size 3080237
6tFKT4oBgHgl3EQf_i4o/content/tmp_files/2301.11962v1.pdf.txt ADDED
@@ -0,0 +1,1774 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
On the Feasibility of Machine Learning Augmented Magnetic Resonance for Point-of-Care Identification of Disease

Raghav Singhal1,∗ Mukund Sudarshan1,∗ Anish Mahishi1 Sri Kaushik1 Luke Ginnochio2 Angela Tong2 Hersh Chandarana2 Daniel Sodickson2 Rajesh Ranganath1,3 Sumit Chopra1,2

Abstract

Early detection of many life-threatening diseases (e.g., prostate and breast cancer) within the at-risk population can improve clinical outcomes and reduce the cost of care. While numerous disease-specific "screening" tests that are closer to Point-of-Care (POC) are in use for this task, their low specificity results in unnecessary biopsies, leading to avoidable patient trauma and wasteful healthcare spending. On the other hand, despite the high accuracy of Magnetic Resonance (MR) imaging in disease diagnosis, it is not used as a POC disease identification tool because of poor accessibility. The root cause of the poor accessibility of MR stems from the requirement to reconstruct high-fidelity images, as it necessitates a lengthy and complex process of acquiring large quantities of high-quality k-space measurements. In this study we explore the feasibility of an ML-augmented MR pipeline that directly infers the disease, sidestepping the image reconstruction process. We hypothesize that the disease classification task can be solved using a very small, tailored subset of the k-space data compared to what image reconstruction requires. Towards that end, we propose a method that performs two tasks: 1) it identifies a subset of the k-space that maximizes disease identification accuracy, and 2) it infers the disease directly using the identified k-space subset, bypassing the image reconstruction step. We validate our hypothesis by measuring the performance of the proposed system across multiple diseases and anatomies. We show that performance comparable to image-based classifiers, trained on images reconstructed with full k-space data, can be achieved using small quantities of data: 8% of the data for detecting multiple abnormalities in prostate and brain scans, and 5% of the data for detecting knee abnormalities. To better understand the proposed approach and instigate future research, we provide an extensive analysis and release code.
1 Introduction

Early and accurate identification of several terminal diseases, such as breast cancer [42], prostate cancer [27], and colon cancer [65], within the at-risk population, followed by appropriate intervention, leads to favorable clinical outcomes for patients by reducing mortality rates [57] and reducing the cost of care. In the current standard-of-care this goal is accomplished by subjecting at-risk but otherwise asymptomatic individuals within the population to clinical tests (a.k.a., "screening tests") that identify the presence of the disease under consideration: a process formally referred to as "Population-Level Screening (PLS)." Desiderata for an effective screening test are: 1) it should be safe for use, 2) it should be accurate (have high sensitivity and specificity), and 3) it should be fast and easily accessible to facilitate use at population-level. While numerous disease-specific screening tests that are administered closer to point-of-care (POC), and hence are accessible at population-level, have been proposed and are in use, most of them do not satisfy all three requirements mentioned above. For instance, prostate cancer [54] and breast cancer [16] have accessible tests, but these tests have low specificity, as shown by multiple clinical trials [33, 17]. The low specificity of these tests results in over-diagnosis and over-treatment of patients, leading to many unnecessary, risky, and expensive followup procedures, such as advanced imaging and/or invasive tissue biopsies. This in turn causes avoidable patient trauma and significant wasteful healthcare spending [33, 36, 5, 59].

∗Equal Contribution. 1 Department of Computer Science, New York University, New York, NY. 2 Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University Grossman School of Medicine, New York, NY, United States. 3 Center for Data Science, New York University, New York, NY, United States. Correspondence to: Raghav Singhal <[email protected]>.

Magnetic Resonance Imaging (MRI) has been shown to be a highly effective tool for accurately diagnosing multiple diseases, especially those involving soft tissues [51, 18, 60, 68, 48, 6]. While traditionally MRI is used to validate clinical hypotheses under a differential diagnosis regime and is typically used as a last-in-line tool, multiple recent studies have proposed new disease-specific data acquisition protocols that can potentially make MR useful for the purpose of early disease identification [15, 62, 41, 4]. These studies have shown that MR can outperform the screening tests being used as part of the current standard-of-care. However, despite its proven clinical benefits, the challenges associated with the accessibility of MRI limit its widespread use at population-level. As such, there is an unmet need for a POC tool that has the diagnostic accuracy of MR and yet is readily accessible at population-level. Such a tool can have a widespread positive impact on the standard-of-care for multiple life-threatening diseases. Specifically, patients will receive improved care via easy access to MR technology outside of the high-friction specialized environments of imaging centers for early and accurate identification of diseases; radiologists will see an increased diagnostic yield of expensive followup scans, since the tool will ensure that only patients with a high likelihood of the disease undergo full diagnostic imaging; and health systems will see a reduction in the overall cost of care with the decrease in the number of unnecessary, expensive follow-up diagnostic and treatment procedures.

To understand the reason behind the poor accessibility of MR, we first shed light on the workings of the pipeline. Figure 2(c) depicts the full MR pipeline. MR imaging is an indirect imaging process in which the MR scanner subjects the human body to magnetic field and radio-frequency signals and measures the subsequent electromagnetic response activity from within the body. These measurements are collected in the Fourier space, also known as k-space (see section 5.4 in [7]) (stage S1 in Figure 2(c)). The 3D volumetric image of the anatomy is reconstructed from these k-space measurements using a multi-dimensional inverse Fourier transform (stage S2). The images are then finally interpreted by sub-specialized radiologists who render the final diagnosis (stage S3). The reason behind MR's diagnostic success is its ability to generate these high-fidelity images with excellent soft-tissue contrast properties, because such images enable human radiologists to easily and accurately discern the pathology. The quality of the images is directly related to the quantity and the quality of the k-space measurements acquired: large quantities of high-quality measurements result in a high-quality image. This in turn necessitates 1) expensive specialized scanners installed in special-purpose imaging centers to collect large quantities of high-quality k-space data, 2) the execution of long and complex data acquisition protocols to reconstruct high-fidelity images exhibiting multiple contrasts, and 3) sub-specialized radiologists to interpret the reconstructed images. All these factors prevent MR scanning from being used as a tool closer to POC for early and accurate disease identification. Instead, its use is predominantly limited to validating a clinical hypothesis at the end of the diagnostic chain. With the motivation of improving the accessibility of MR, researchers have proposed multiple solutions to simplify the pipeline. These include designing novel acquisition protocols to acquire the k-space data [32, 14], learning the under-sampling pattern over k-space data matrices so that the image quality is not compromised [2, 73, 64, 25], faster data acquisition and image reconstruction from under-sampled k-space data, and simultaneous classification and image reconstruction using under-sampled k-space data [31, 39, 40, 70, 44, 20]. While these efforts have expedited the data acquisition process, the requirement to generate high-fidelity images still necessitates the use of expensive scanners and the need for a sub-specialized radiologist to interpret them. Furthermore, image generation also imposes limits on how much one can under-sample the k-space. For instance, [44] reports that reconstructed images started missing clinically relevant pathologies when sampling less than 25% of the data. This phenomenon can be observed in Figure 1, which shows images reconstructed by a state-of-the-art reconstruction model [56] using different levels of sampling. A lesion clearly visible in the high-resolution image is barely visible in the image generated using 8% of the data.
Figure 1: Deterioration in the quality of reconstructed images with decreasing sampling factors (from left to right). A lesion visible (red arrow) in the image reconstructed from the fully-sampled k-space data (left panel) is not visible in the images reconstructed from 12.5% (middle panel) or 8% sampled data (right panel), even when reconstructed with state-of-the-art methods.
This work is motivated by the goal of making the benefits of MR diagnostics available for population-wide identification of disease. Towards that end, we ask the following questions: "If the clinical goal is to merely identify the presence or absence of a specific disease (a binary task accomplished by a typical screening test), is it necessary to generate a high-fidelity image of the entire underlying anatomy? Instead, can we build an ML model that can accurately provide the final answer (whether a disease is present or not) from a carefully selected subset of the k-space data?" Specifically, we hypothesize that when the task is to infer the presence of a disease (a binary decision), we do not need all the k-space measurements that are otherwise acquired to generate a high-fidelity image. Instead, we can train an ML system that can accurately provide the binary answer directly from a carefully tailored small fraction of degraded k-space data that can potentially be acquired using low-grade, inexpensive scanning devices. To validate the above hypothesis, one needs to answer the following key questions:

Q1. Can we build an ML system that can accurately infer the presence of a disease using data from standard MR sequences without generating images?

Q2. Can we build an ML system that uses only a small, carefully tailored subset of the k-space data to infer the presence of a disease without images? If so, how little data do we need without compromising performance?

Q3. Can we build an ML system that can accurately infer the presence of a disease using degraded k-space data without generating images? What are the limits on signal quality we can afford to work with, without compromising performance?

Answers to these questions will shed light on the feasibility of making MR scanning accessible outside of its current specialized environments, to be potentially used for the purpose of early, efficient, and accurate identification of disease at population-level. In this study we answer Q1 and Q2 and leave the answer to Q3 as future work. Towards that end, we first propose a novel deep learning (DL) model that takes as input the raw k-space data and generates the final (binary) answer, skipping the image reconstruction step (Section 5). We show that it is indeed possible to train an ML model that can directly generate an answer from the k-space data without generating an image. This result is not surprising, because mapping the k-space data to image space is accomplished by simply applying an Inverse Fourier Transform (IFT) operation on the k-space data, which is a deterministic lossless mapping. Next, to answer question Q2, we propose a novel ML methodology that can accurately infer the presence of a disease directly from a small tailored subset of the k-space data, side-stepping the image reconstruction step (Section 6). We call this methodology End-to-end Magnetic Resonance Triaging (EMRT). Figure 2(d) provides an outline of our methodology in comparison to the current image reconstruction-based pipeline (Figure 2(c)). EMRT simultaneously accomplishes two tasks:

1. It identifies a small subset of the k-space that can provide sufficient signal for accurate prediction of the disease by an ML model, ignoring the quality of the image it would generate.

2. It then infers the presence of the disease directly using data from only the identified subset of the k-space, without generating an image.
We validate the efficacy of EMRT in identifying multiple diseases using scans from multiple anatomies: to detect the presence of ACL sprains and meniscal tears in slice-level knee MR scans, to detect enlarged ventricles and mass in slice-level brain MR scans, and to detect the presence of clinically significant prostate cancer (CS-PCA) in slice-level abdominal MR scans. The knee and brain scans are made available in the FastMRI data set [70], with labels provided by the FastMRI+ data set [74]. We use an internal data set for the prostate scans, acquired as part of clinical exams of real patients at the NYU Langone Health system. We compare the performance of EMRT against two types of benchmark methods.

Our first benchmark consists of a classifier trained with images reconstructed from fully-sampled k-space data. Since the prediction accuracy of this benchmark is the best one can hope for from any image-based classifier, we use this comparison to establish the limits of how much one can under-sample the k-space and still accurately infer the disease when not reconstructing images. Our results show that EMRT can achieve the same level of accuracy as this benchmark using only 5% of the data for knee scans and 8% of the data for brain and prostate scans. Our second benchmark is another image-based classifier that takes as input images reconstructed from under-sampled k-space data using state-of-the-art image reconstruction models proposed in the literature [56, 44]. The motivation behind this experiment is to show that, for the same disease identification accuracy, bypassing the image reconstruction step requires a significantly smaller fraction of the k-space data than reconstructing images. Our results also show that for all under-sampling rates in our experiments, EMRT outperforms the under-sampled image-reconstruction-based benchmarks even though the images are reconstructed using state-of-the-art reconstruction models. Lastly, we also provide an extensive analysis that sheds light on the workings of EMRT. Our contributions include:

• EMRT: a first-of-its-kind machine learning methodology that identifies a subset of k-space that maximizes disease classification accuracy, and then infers the presence of a disease directly from the k-space data of the identified subset, without reconstructing images.

• Rigorous comparison to state-of-the-art image reconstruction-based benchmark models to prove the efficacy of the proposed methodology.

• Extensive analysis of EMRT to understand the reasons behind its superior performance.

• Release of the code and data used to build EMRT, with the goal of facilitating further research in end-to-end methods like EMRT that have the potential to transform healthcare.
2 Clinical Vision

This study is motivated by the overarching goal of making MR scanning accessible outside of its current specialized environments, so that its diagnostic benefits can be realized at population-level for early, efficient, and accurate identification of life-threatening diseases. We argue that this poor accessibility is rooted in the requirement to generate high-fidelity images, because image generation necessitates acquiring large quantities of high-quality k-space data (forcing the use of expensive scanners installed in specialized environments running complex data acquisition protocols) and requires sub-specialized radiologists for interpretation. As such, we ask a sequence of questions pertaining to the k-space data requirements for accurate disease identification in the setting where we do not generate intermediate high-fidelity images. Answers to the questions posed in this study will shed light on the feasibility of accomplishing our end goals.

Assuming the answers to all the questions are favorable, one can imagine an ultra-low-field, inexpensive scanning device that is only capable of acquiring small quantities of low-quality k-space data, from which it is difficult to reconstruct an image with a clearly discernible pathology. An ML model embedded within the device, however, could accurately infer the presence of the disease directly from this data. Such an inexpensive system could be used clinically as a triaging tool in the following way: the system is placed in a primary care clinic, where it is used to test patients who are known to be at risk of the disease. Patients for whom the system provides a "yes" answer (possibly with some confidence score) are routed to the more thorough followup diagnostic procedures (full MR scan and/or biopsy). Others are sent back into the surveillance pipeline for subsequent periodic screening.
More specifically, in Figures 2(a) and (b) we depict the utility of such a device when screening for clinically significant prostate cancer (CSPCA), the second most common reason behind male mortality within the United States.

[Figure 2 appears here. Its panels show: (a) patient flow in the current standard of care; (b) patient flow in a future standard of care with EMRT; (c) the standard MR pipeline involving image reconstruction and a radiologist for diagnosis; and (d) the proposed End-to-end Magnetic Resonance Triaging (EMRT) pipeline using an ultra-low-field scanner.]

Figure 2: Overview of current and proposed standards of care for prostate cancer: Panel (a) depicts the current practice of testing for clinically significant prostate cancer (CS-PCA), which involves testing at-risk patients using a PSA test followed by an expensive multi-parametric MRI (Panel (c)) and a biopsy. In Panel (b), with our proposed triaging tool, patients who have a high PSA score undergo a subsequent test with the EMRT-embedded ultra-low-field MR device (Panel (d)). With the use of the triaging device, only high-risk patients get the expensive and inaccessible multi-parametric MRI and invasive biopsy, reducing waste in the healthcare system and preventing as many as 38% of biopsies [55].

Figure 2(a) depicts the current standard of care for CSPCA screening, where
at-risk people are ordered to take the PSA test to screen for the disease, followed by either an invasive biopsy or a 40-minute-long multi-parametric MRI exam (depending on the PSA value). Unfortunately, the high false-positive rate of the PSA test causes unnecessary patient trauma and wasteful healthcare spending, as 70% of patients who have a positive PSA test can get a negative biopsy. In Figure 2(b), we highlight how the proposed triaging tool can be placed in the pipeline. The PSA test can be followed up by another test using the ultra-low-field EMRT-embedded MRI device. Unlike a full MRI exam, the EMRT-embedded device does not have to produce an image, just a risk score. Such a triaging device can further filter high- and low-risk patients, and only select the high-risk patients for subsequent diagnostic tests such as a full MRI and/or biopsy. This in turn will reduce waste in the healthcare system and prevent patient trauma.
This scenario is not far from the realm of reality, as many organizations are manufacturing such
ultra-low-field specialized scanners, such as Promaxo [45] for the prostate and Hyperfine [19] for the brain, both of which are approved by the FDA. We note that while we are exploring the feasibility of ML-enabled MR scanning that generates an answer without images, the use-case of such a device does not replace the current practice of radiology, which requires the generation of high-fidelity images interpreted by sub-specialized radiologists to render the final diagnosis. Such an imaging and subsequent interpretation exercise is important for rendering the final diagnosis, staging, and planning treatment [8]. Instead, the existence of such a device can generate alternate use-cases for MR scanning technology.

3 Related Work
Applications of deep learning (DL) within MR can be grouped into two categories, namely image analysis and image reconstruction. Under the image analysis category, DL models take the spatially resolved gray-scale 2D or 3D MR images as input and perform tasks like tissue/organ segmentation [1, 10] or disease identification [53, 69, 67, 72, 46]. DL models have achieved radiologist-level performance in identifying numerous diseases [23, 24, 22] and are increasingly being deployed as part of computer-aided diagnostic systems [52, 66]. For instance, the authors in [35] examined the effect of DL assistance on both experienced and less-experienced radiologists; the DL-assisted radiologist surpassed the performance of both the individual radiologist and the DL system alone. These approaches have improved diagnostic accuracy but have so far required high-resolution images that are expensive to produce.
Most methods within the image reconstruction category are motivated by the goal of improving the accessibility of MR scanning by reducing the scanning time. Towards that end, researchers have proposed a variety of solutions to simplify and expedite the data acquisition process. Specifically, researchers have proposed machine learning models to enable rapid reconstruction of spatially resolved 2D images from under-sampled k-space data acquired by the scanner [40, 56, 10]. This task requires addressing two key questions, namely: 1) what sampling pattern to choose? and 2) given a sampling pattern, what reconstruction method to choose? For the first question, researchers have proposed ML methods that learn the sampling pattern over the k-space data matrices so that the image quality is not compromised [2, 71, 64, 3, 25]. In another line of work, researchers model the k-space acquisition process as a sequential decision-making process, where each sample is collected to improve reconstruction performance, and use reinforcement learning models to solve the task [47, 30, 3]. To answer the second question, DL models have been proposed that use under-sampled k-space data to reconstruct images of provable diagnostic quality [39, 40, 44, 20, 56, 34, 75, 49, 38, 26, 9]. Researchers have also proposed non-ML-based solutions to expedite the scanning time for MR. These solutions involve the design and execution of novel data acquisition protocols and sequences that enable rapid acquisition of the k-space data [32, 14]. Lastly, to facilitate research in image reconstruction, several data sets and challenges have been released, such as FASTMRI [70], FASTMRI+ [74], and the Stanford knee MRI with multi-task evaluation (SKM-TEA) [13]. These data sets provide raw k-space measurements for MR scans along with labels of abnormalities associated with those scans.
While these efforts have simplified and expedited the data acquisition process, the requirement to generate high-fidelity images still necessitates the use of expensive scanners and the need for a sub-specialized radiologist to interpret them. Furthermore, image generation imposes limits on how much one can under-sample the k-space.

Our work instead studies the problem of using DL models to infer the presence/absence of a disease directly from a small learned subset of the k-space data, a problem that, to the best of our knowledge, has never been considered.
4 MR Background and Notation

MR imaging is an indirect process, whereby spatially resolved images of a human subject's anatomy are reconstructed from frequency-space (a.k.a., k-space) measurements of the electromagnetic activity inside the subject's body after it is subjected to magnetic field and radio-frequency pulses. These measurements are captured by an instrument called a receiver coil, which is kept in the vicinity of the part of the body whose image is sought. The k-space measurements from a single coil
are represented as a 2-dimensional complex-valued matrix x ∈ C^{r×c}, where r is the number of rows and c is the number of columns. The spatial image z is reconstructed from the k-space matrix by a multi-dimensional inverse Fourier transform, z = F^{-1}(x). We denote by y ∈ {1, . . . , K} the clinically relevant response. In our case y will be a binary response variable (y ∈ {0, 1}) indicating the presence/absence of the disease being inferred.
Multi-Coil Data: In practice, to speed up the data acquisition process, most modern MR scanners acquire measurements in parallel using multiple receiver coils. In the case of multi-coil acquisition, the k-space matrix x_mc is 3-dimensional: x_mc ∈ C^{d_c×r×c} [70], where d_c is the number of coils used. The image produced by each coil has a slightly different view of the anatomy, since each coil has a different sensitivity to signals arising from different spatial locations. Multiple methods have been proposed to combine/use these images in ways that are conducive for ingestion into any downstream ML model. For instance, a commonly used method that combines these images from different coils into a single aggregate image is the root-sum-of-squares (RSS) method [37]. Given the multi-coil k-space matrix, the RSS method requires computing the inverse Fourier transform of each coil's k-space matrix, m̃_j = F^{-1}(x_j), and then generating the RSS image by

    m̃ = sqrt( Σ_{j=1}^{d_c} |m̃_j|² ).

Instead of combining the data from multiple coils in the image space, one can also combine the data in the original k-space. A method called Emulated Single Coil (ESC) [61] directly aggregates the k-space data from multiple coils and emulates it as coming from a single coil. This process reduces the dimension of the full matrix x_mc ∈ C^{d_c×r×c} to a matrix x̃_mc ∈ C^{r×c}. In the subsequent discussion pertaining to the direct k-space model, we will assume that we are working with the emulated single-coil data matrix x̃_mc of dimensions r × c.
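As an illustration, here is a minimal NumPy sketch of the RSS combination described above; the toy shapes and the use of np.fft.ifft2 as the inverse Fourier transform F^{-1} are assumptions made for this example, not the paper's released implementation.

```python
import numpy as np

def rss_image(x_mc):
    """Root-sum-of-squares: per-coil inverse FFT, then the square root of
    the summed squared magnitudes. x_mc: complex array (num_coils, r, c)."""
    coil_images = np.fft.ifft2(x_mc, axes=(-2, -1))          # m_j = IFFT(x_j)
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))   # real (r, c) image

rng = np.random.default_rng(0)
x_mc = rng.standard_normal((15, 320, 320)) + 1j * rng.standard_normal((15, 320, 320))
print(rss_image(x_mc).shape)  # (320, 320)
```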
Figure 3: Examples of k-space sampling patterns. The left panel shows an unconstrained sampling pattern with a 30% sampling rate, the middle panel shows a random Cartesian sampling pattern with a 30% sampling rate, and the right panel displays an equispaced Cartesian sampling pattern with a 25% sampling rate.
Under-Sampled Data: The notion of "under-sampling" refers to measuring only a subset of entries in the k-space matrix x. We represent the sampling pattern using a binary mask matrix s ∈ {0, 1}^{r×c} (sometimes also referred to as a sampling mask), where s_ij = 1 if and only if the measurement x_ij was acquired. The under-sampled k-space matrix is represented as x_s = x ◦ s, where ◦ is element-wise multiplication between the two matrices. In this work, we constrain the sampling pattern to be "Cartesian," which consists of sampling entire lines of the k-space matrix. More specifically, for a Cartesian sampling pattern, all the elements of some lines of the matrix s are 0 and all the elements of the other lines are set to 1. See Figure 3 for the structure of various sampling patterns. The k-space matrix has its origin in the center of the matrix. The sampling rate α is defined as the total percentage of measurements acquired.
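The following NumPy sketch constructs a random Cartesian mask and applies it to a toy k-space matrix. Treating the k-space "lines" as rows, and sampling them uniformly at random, are assumptions made for illustration; practical schemes often also force a few low-frequency center lines to be kept.

```python
import numpy as np

def random_cartesian_mask(r, c, rate, seed=0):
    """Binary mask s in {0,1}^{r x c} that keeps a `rate` fraction of
    whole k-space lines (rows here) and zeroes out the rest."""
    rng = np.random.default_rng(seed)
    kept_rows = rng.choice(r, size=max(1, int(rate * r)), replace=False)
    s = np.zeros((r, c), dtype=np.uint8)
    s[kept_rows, :] = 1
    return s

rng = np.random.default_rng(1)
x = rng.standard_normal((320, 320)) + 1j * rng.standard_normal((320, 320))
s = random_cartesian_mask(320, 320, rate=0.08)   # 8% sampling rate
x_s = x * s                                      # under-sampled k-space: x_s = x ∘ s
```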
4.1 Image-Based Disease Identification using Deep Learning Models

The conventional way of using DL models to infer the presence of a disease within the MR pipeline involves two steps. In the first step, a high-fidelity image is reconstructed from the acquired multi-coil k-space measurements using the RSS method described above. In the second step, the reconstructed image is provided as input to a DL model that is trained to infer the presence/absence of the disease. We refer to this model as MODELRSS. This is the best one can hope to achieve when using images, and we benchmark the accuracy of EMRT against it.

Figure 4: (a) k-space layer: the k-space layer makes use of the convolution theorem to perform an initial convolution operation between the complex-valued k-space input x and the kernel z. The resulting output is passed through an inverse Fourier transform operation to generate real-valued feature maps h_R of size k × r × c × 2. These feature maps are passed as input to the subsequent layers of KSPACE-NET. (b) KSPACE-NET: KSPACE-NET takes the k-space as input, followed by the k-space layer; it then applies a convolutional architecture on the feature maps h to make a classification.
Since the high-fidelity images used by methods such as MODELRSS require the acquisition of large quantities of high-quality k-space data, researchers have also proposed training image-based DL classifiers using images reconstructed from under-sampled k-space data. This approach requires one to make decisions at two levels, namely: 1) choosing the sampling pattern over the k-space, the data from which will be used to reconstruct the image, and 2) given the sampling pattern, choosing a method to reconstruct the image. Multiple methods have been proposed to learn the sampling pattern [2, 71, 64] and to reconstruct images using the under-sampled k-space data [34, 44, 20, 56, 75, 38, 26, 9]. We denote this class of models by MODEL<SAMP>:<RECON>, where <SAMP> refers to the method used to choose the sampling pattern and <RECON> refers to the image reconstruction method. We compare the performance of EMRT against a variety of these models with different combinations of sampling and reconstruction regimes (Section 7).
5 Direct k-Space Classifier

We now describe the proposed DL model that takes the k-space data as input and directly generates the final answer without reconstructing an intermediate image. The foundational block of our architecture is the convolution theorem, which states that for any given functions f, g we have:

    F(f ∗ g) = F(f) ◦ F(g),    (1)

where F is the Fourier transform, ∗ denotes the convolution operation, and ◦ denotes element-wise multiplication. Multiple researchers in the past have used this operator duality to accelerate convolutions in Convolutional Neural Networks (CNNs) [50, 43].
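As a quick sanity check of Eq. 1 in the discrete setting (where ∗ is circular convolution and F is the 2D DFT), the brute-force verification below confirms the identity on a tiny random example; it is purely illustrative and not part of the model.

```python
import numpy as np

def circular_conv2d(f, g):
    """Brute-force 2D circular convolution, used only to check Eq. 1."""
    r, c = f.shape
    out = np.zeros((r, c), dtype=complex)
    for i in range(r):
        for j in range(c):
            out[i, j] = sum(f[a, b] * g[(i - a) % r, (j - b) % c]
                            for a in range(r) for b in range(c))
    return out

rng = np.random.default_rng(0)
f, g = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
lhs = np.fft.fft2(circular_conv2d(f, g))   # F(f * g)
rhs = np.fft.fft2(f) * np.fft.fft2(g)      # F(f) ∘ F(g)
assert np.allclose(lhs, rhs)
```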
Since the k-space data is in the frequency domain, we can use Eq. 1 to adapt any convolutional neural network architecture to use the k-space data directly as input. Specifically, let x ∈ C^{r×c} denote the complex-valued k-space matrix of size r × c, and let z ∈ C^{k×k} be the kernel with which we want to convolve the input x. We accomplish this convolution by first zero-padding the kernel to the right and bottom to create a kernel z′ ∈ C^{r×c}, which is of the same size as the input (see Figure 4). We then take the Fourier transform of the padded kernel z′, such that z′_F = F(z′) is in the frequency space. Using Equation 1, we compute the convolution between the input x and the kernel z by taking the inverse Fourier transform of the element-wise multiplication of x and z′_F:

    h = F^{-1}(x) ∗ z = F^{-1}(x ◦ z′_F).    (2)

The matrix h ∈ C^{r×c} is a complex matrix in the spatial (image) domain and serves as input to the subsequent layers of the neural network. By design, the subsequent layers of our proposed network take real-valued inputs. As a result, we stack the real and imaginary components of h as two separate channels. The resulting tensor h_R is of size R^{r×c×2}, which is supplied as input to the downstream layers of the neural network. In practice, much like in real-valued convolutional neural networks, we convolve the k-space input x with p independent kernels {z_1, z_2, . . . , z_p} to extract different features from the input, resulting in feature maps of size h_R ∈ R^{p×r×c×2}, which are supplied as input to the subsequent layers of the neural network.
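A minimal PyTorch sketch of this k-space layer (Eq. 2) follows; the torch.fft-based implementation, tensor shapes, and initialization scale are illustrative assumptions rather than the authors' released code.

```python
import torch

class KSpaceLayer(torch.nn.Module):
    """Holds p complex k x k kernels; each is zero-padded to (r, c),
    moved to frequency space, multiplied element-wise with the k-space
    input, and mapped back with an inverse FFT (Eq. 2). Real and
    imaginary parts of the result become separate real channels."""
    def __init__(self, p=16, k=3, r=320, c=320):
        super().__init__()
        self.r, self.c = r, c
        # learnable complex kernels, stored as real/imag pairs
        self.kernels = torch.nn.Parameter(0.01 * torch.randn(p, k, k, 2))

    def forward(self, x):                                  # x: complex (B, r, c)
        z = torch.view_as_complex(self.kernels)            # (p, k, k)
        z_pad = torch.zeros(z.shape[0], self.r, self.c,
                            dtype=z.dtype, device=z.device)
        z_pad[:, : z.shape[1], : z.shape[2]] = z           # pad right/bottom
        z_f = torch.fft.fft2(z_pad)                        # kernels in k-space
        h = torch.fft.ifft2(x[:, None] * z_f[None])        # (B, p, r, c), Eq. 2
        return torch.cat([h.real, h.imag], dim=1)          # (B, 2p, r, c) real maps

x = torch.randn(4, 320, 320, dtype=torch.complex64)        # toy k-space batch
feats = KSpaceLayer()(x)                                   # ready for a CNN trunk
```

The 2p real output channels can then replace the usual 3-channel RGB input of any standard convolutional trunk, as the next paragraph describes.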
Following the k-space layer, we can adopt any standard architecture for the subsequent layers. The real-valued feature map h_R ∈ R^{p×r×c×2} from the k-space layer is used as input to the subsequent layers, where instead of a 3-channel input for RGB images, we have p × 2 input channels. In this work, we use a Preact-ResNet [21] for the subsequent layers. The output of this stage is a feature representation z ∈ R^{h_z}. This feature representation is used as input to a feed-forward network that outputs the probabilities of the positive and negative classes. Figure 4 depicts the full architecture, which we call KSPACE-NET. We can easily extend KSPACE-NET to predict multiple pathologies from the same input: for each pathology, we use a different feed-forward network with the feature representation z as input.
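Putting the pieces together, a sketch of the full classifier might look as follows; it reuses the KSpaceLayer sketch above and substitutes a torchvision ResNet-18 for the paper's Preact-ResNet trunk (an assumption), with one linear head per pathology.

```python
import torch
import torchvision

class KSpaceNet(torch.nn.Module):
    def __init__(self, p=16, n_pathologies=2):
        super().__init__()
        self.front = KSpaceLayer(p=p)                    # from the sketch above
        trunk = torchvision.models.resnet18(weights=None)
        # accept 2p channels instead of the usual 3 RGB channels
        trunk.conv1 = torch.nn.Conv2d(2 * p, 64, kernel_size=7,
                                      stride=2, padding=3, bias=False)
        trunk.fc = torch.nn.Identity()                   # expose the feature vector z
        self.trunk = trunk
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(512, 2) for _ in range(n_pathologies)])

    def forward(self, x):                                # x: complex (B, r, c) k-space
        z = self.trunk(self.front(x))                    # feature representation z
        return [head(z) for head in self.heads]          # per-pathology logits
```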
Lastly, extending KSPACE-NET to work with under-sampled data is straightforward: we simply replace the full k-space input x to the model with the under-sampled input x_s, which is obtained by taking an element-wise product with the sampling mask matrix s: x_s = x ◦ s (see Section 4).
6 End-to-End MR Triaging: EMRT

We now introduce End-to-End MR Triaging (EMRT): a novel method that infers the presence/absence of a disease (a binary decision) directly from a drastically small amount of k-space data, skipping the image reconstruction process. The underlying motivating hypothesis behind EMRT is that we can accurately infer the presence of a disease from a small amount of carefully selected k-space measurements, so long as we are not concerned with reconstructing high-fidelity images. Towards that end, at a high level, EMRT learns to identify the subsets of the k-space that carry the largest predictive signal pertaining to the disease being identified, without considering the quality of the image that would be generated using the data from the identified subset. This is in contrast to image reconstruction approaches, where the requirement to generate a high-quality image of the entire anatomy necessitates sampling a large portion of the k-space. Once the subset is identified, only the data from the identified k-space subset is used by a DL model to directly generate the final answer. To the best of our knowledge, EMRT is the first method to propose classifying a disease directly from a carefully chosen (learned) subset of the k-space. More formally, EMRT is a two-step algorithm.

Step 1: EMRT searches for a subset of the k-space that carries the strongest signal for accurately inferring the presence of the disease. This is accomplished by learning a sparse sampling pattern s∗ that maximizes the mutual information between the under-sampled k-space matrix x_{s∗} and the response variable y (a binary variable indicating the presence/absence of the disease).

Step 2: Once the sampling pattern s∗ is learned, the second step uses a KSPACE-NET classifier (Section 5) that takes as input the under-sampled k-space data matrix x_{s∗} to infer the presence of the disease y, without reconstructing an intermediate image.

To execute the above steps we need to answer the following questions, which we address in the following sub-sections: Q1. How do we learn a sparse sampling pattern s∗ of the k-space matrix that maximizes the mutual information between the under-sampled k-space x_{s∗} and the response variable y? Q2. How do we train the KSPACE-NET classifier that uses x_{s∗} as input to accurately infer the disease y?
Algorithm 1 Estimating the conditional likelihood qval(y | xs)
Input: Training data set Dtr = {(x^i, y^i)}_{i=1}^{Ntr}, model qval(y | x; λ) with initial parameters λ, mini-batch size M, sampling rate α, and prior distribution π over sampling patterns
Return: Trained model qval(y | xs; λ∗)
  while not converged do
    Sample a mini-batch of training points of size M
    Draw a sampling pattern s ∼ π such that ∥s∥0 / (r × c) = α
    Update the model parameters: λ_{t+1} = λ_t + (γ / M) Σ_{i=1}^{M} ∇_λ log qval(y^i | x^i_s; λ_t)
  end while
  Return the trained model qval(y | xs; λ∗)
6.1 Learning the Sparse k-Space Sampling Pattern
EMRT learns to identify a sampling pattern s∗ over the k-space matrix such that the k-space data xs∗ corresponding to this pattern carries the maximum information required to accurately infer the presence/absence of the disease. For any sampling pattern s, EMRT uses the mutual information between the output variable y and the corresponding under-sampled k-space data xs as a surrogate for the information content of xs for disease inference. For a given sampling rate α, identifying the optimal pattern s∗ then boils down to finding the sampling pattern that maximizes the mutual information between y and xs∗.
Specifically, let I(y; xs) denote the mutual information between y and xs. For a given sampling rate α, EMRT identifies a pattern s∗ such that:

s∗ = arg max_{s ∈ {0,1}^{r×c}} I(y; xs),    (3)

where ∥s∥0 / (r × c) = α and s is the binary mask matrix of dimensions r × c. The mutual information I(y; xs) [11] is defined as:

I(y; xs) = E_{xs} KL(p(y | xs) ∥ p(y))    (4)
         = E_{xs} E_{y|xs} [log p(y | xs) − log p(y)]    (5)
         = E_{xs} E_{y|xs} log p(y | xs) − C,    (6)
where C is a constant independent of the sampling pattern s, and p(y | xs) and p(y) are the conditional and the marginal distributions of the response variable, respectively. According to equation (6), we can estimate the mutual information I(y; xs) up to a constant if we are able to estimate p(y | xs). Since we do not have access to the true conditional distribution p(y | xs), we approximate the expected conditional log-likelihood by learning a probabilistic model q(y | xs; λ) parameterized by λ. However, learning a separate model for every sampling pattern s is infeasible even for moderately high dimensions. To address this issue we draw upon the works of [12, 28], where the authors show that, at optimality, a single model qval(y | xs; λ) trained with sampling patterns drawn independently of the data (x, y) is equivalent to the conditional distribution of y under each sampling pattern. This approach is in contrast to approaches that explicitly model x [58] and has been used in other applications [29]. As such, we train the model qval by minimizing the following loss function:
L(λ) = −E_{x,y} E_{s∼π} log qval(y | xs; λ),

where π is a distribution over sampling patterns that is independent of the data (x, y). In EMRT, the distribution π is a one-dimensional distribution, and the KSPACE-NET model (Section 5) is used as qval. The under-sampled data xs is created by masking the fully-sampled matrix x with a mask s ∈ {0, 1}^{r×c}. This masking ensures that the same model can be used for every pattern, as the input's dimensions stay fixed. This process is summarized in Algorithm 1.
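A minimal PyTorch sketch of this training loop is given below. The kspace_net classifier (two-class logits over complex k-space input) and the draw_mask helper implementing a simple column-sampling prior π are illustrative assumptions, not the paper's released implementation:

import torch
import torch.nn.functional as F

def draw_mask(r: int, c: int, alpha: float) -> torch.Tensor:
    # One simple one-dimensional prior pi: uniform over sets of whole k-space
    # columns (phase-encode lines), with ||s||_0 / (r * c) = alpha.
    s = torch.zeros(r, c)
    s[:, torch.randperm(c)[: max(1, int(alpha * c))]] = 1.0
    return s

def train_qval(kspace_net, loader, alpha=0.05, lr=1e-4, num_steps=10_000):
    # Algorithm 1: maximize E_{x,y} E_{s~pi} log q_val(y | x ◦ s; lambda).
    opt = torch.optim.Adam(kspace_net.parameters(), lr=lr)
    step = 0
    while step < num_steps:                      # "while not converged"
        for x, y in loader:                      # x: (M, r, c) complex, y: (M,)
            s = draw_mask(x.shape[-2], x.shape[-1], alpha)  # fresh pattern per mini-batch
            logits = kspace_net(x * s)           # classify directly from masked k-space
            loss = F.cross_entropy(logits, y)    # = -(1/M) sum_i log q_val(y_i | x_i ◦ s)
            opt.zero_grad(); loss.backward(); opt.step()
            step += 1
    return kspace_net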
Algorithm 2 Learning the sampling pattern s∗
Input: Validation data set Dval = {(x^i, y^i)}_{i=1}^{Nval}, trained model qval(y | x; λ∗), sampling rate α, number of candidate sampling patterns to generate N, and prior distribution π over the sampling patterns
Return: Sampling pattern s∗
  for j ∈ {1, . . . , N} do
    Sample s_j ∼ π such that ∥s_j∥0 / (r × c) = α
    Estimate the mutual information score in eq. (7) as
      V̂(s_j) = (1 / Nval) Σ_{i=1}^{Nval} log qval(y^i | x^i_{s_j}; λ∗)    (9)
  end for
  Let s∗ = arg max_{j ∈ {1,...,N}} V̂(s_j)
After training qval, EMRT uses it to define a scoring function V : {0, 1}^{r×c} → R that, for each sampling pattern s, estimates the mutual information carried by that subset of the k-space, up to a constant (eq. (6)). Specifically,

V(s) = E_x E_{y|x} log qval(y | xs; λ∗).    (7)

The higher the score achieved by a sampling pattern, the higher its diagnostic signal. Therefore the objective of Equation 3 can be rewritten as

s∗ = arg max_{s ∈ {0,1}^{r×c}} V(s), with ∥s∥0 / (r × c) = α.    (8)

In practice, s∗ is approximated by a Monte Carlo search within the space of all sampling patterns: N candidate sampling patterns are drawn from the prior distribution π, each drawn pattern is scored by the scoring function V, and the pattern with the highest score is selected as s∗. The details of the full algorithm are provided in Algorithm 2.
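A sketch of this Monte Carlo search, reusing the hypothetical draw_mask prior and a kspace_net trained as above in the role of qval, could look as follows:

import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pattern(kspace_net, val_loader, r, c, alpha=0.05, num_candidates=100):
    # Algorithm 2: score N candidate patterns by the average validation
    # log-likelihood (the estimate of V(s) in eq. (7)) and keep the best one.
    kspace_net.eval()
    best_score, best_mask = -float("inf"), None
    for _ in range(num_candidates):
        s = draw_mask(r, c, alpha)                # s_j ~ pi
        total_ll, total_n = 0.0, 0
        for x, y in val_loader:
            log_probs = F.log_softmax(kspace_net(x * s), dim=-1)
            total_ll += log_probs[torch.arange(len(y)), y].sum().item()
            total_n += len(y)
        score = total_ll / total_n                # V_hat(s_j), eq. (9)
        if score > best_score:
            best_score, best_mask = score, s
    return best_mask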
6.2 Training the Direct k-Space Classifier
For inference at test time, we use the KSPACE-NET classifier qval(y | xs∗; λ∗), trained using Algorithm 1, along with the optimized sampling pattern s∗. As specified in Algorithm 1, during the training of this classifier we randomly draw a different sampling pattern from the distribution π for every mini-batch. Through our experiments, we found that this is in fact the key to training a reliable classifier. We also explored retraining a classifier using data xs∗ obtained from a fixed, classification-optimized sampling pattern s∗; we compare these two approaches in Section 7. To summarize, the classifier qval(y | xs; λ∗) is trained with randomly drawn under-sampling patterns, but at test time we make inferences with a fixed under-sampling pattern.
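Test-time triaging then reduces to a single forward pass with the fixed pattern s∗. A minimal sketch, again with illustrative names and with the decision threshold chosen on the validation set as in Section 7:

import torch
import torch.nn.functional as F

@torch.no_grad()
def triage(kspace_net, x, s_star, threshold=0.5):
    # Inference with the fixed, learnt sampling pattern s_star.
    kspace_net.eval()
    p_disease = F.softmax(kspace_net(x * s_star), dim=-1)[..., 1]
    return p_disease >= threshold, p_disease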
7 Experiments
We evaluate the efficacy of EMRT by comparing its performance to several benchmark models across multiple clinical tasks. Our experiments are structured to answer the following questions, in order. Q1. Can we infer the presence/absence of a disease directly from the k-space data as accurately as a state-of-the-art image-based model trained on images reconstructed from the full k-space data? Q2. Using EMRT, how much can we under-sample the k-space input before we start to lose disease inference accuracy in comparison to the state-of-the-art image-based model trained on images reconstructed from the full k-space data? Q3. For the same under-sampling factor, how much better (or worse) is the disease inference accuracy of EMRT compared to an image-based model trained on images reconstructed from the under-sampled k-space data using a state-of-the-art image reconstruction method? Q4. Is there any benefit to learning the sampling pattern using EMRT, which seeks to maximize the disease inference signal, compared to sampling patterns proposed in the literature that optimize image reconstruction accuracy, or to heuristic sampling patterns?
                 Knee MR                       Abdominal MR    Brain MR
                 Mensc. Tear    ACL Sprain     CS-PCA          Enlg. Ventricles    Mass
Train slices     29100 (11%)    29100 (3.6%)   6649 (5%)       11002 (1.61%)       11002 (1.98%)
Val slices       6298 (11%)     6298 (2.4%)    1431 (4.5%)     2362 (1.52%)        2362 (2.03%)
Test slices      6281 (11%)     6281 (3%)      1462 (6%)       2366 (2.58%)        2366 (2.70%)

Table 1: Dataset statistics: number of slices in the training, validation, and test splits for each task. Numbers in brackets are the percentages of slices in which the disease is visible (positive examples).
7.1 Datasets
The efficacy of EMRT is assessed by comparing its performance to a variety of benchmark models on multiple clinical tasks across multiple anatomies. In particular, we train and test our models to identify pathologies in three anatomies: knee MR scans, brain MR scans, and abdominal MR scans. See Table 1 for the data statistics for each of the three anatomies.
Knee MR Scans. We use the k-space data of the knee MR scans provided as part of the FASTMRI dataset [70], along with the slice-level annotations provided by the FASTMRI+ dataset [74]. The dataset consists of multi-coil and single-coil coronal proton-density weighted scans, with and without fat suppression, acquired at the NYU Langone Health hospital system; further sequence details are available in [70]. The training, validation, and test sets consist of 816, 176, and 175 volumes respectively. The clinical task is to predict whether a two-dimensional slice has a Meniscal Tear and/or an ACL Sprain.
Brain MR Scans. We use the k-space data of the brain MR scans provided by the FASTMRI dataset [70], restricted to the slices annotated in the FASTMRI+ dataset [74]. A total of 1001 volumes were annotated in the FASTMRI+ dataset, out of the 5847 volumes present in the FASTMRI dataset. Each brain examination included a single axial multi-coil series (either T2-weighted FLAIR, T1-weighted without contrast, or T1-weighted with contrast). The training, validation, and test sets consist of 700, 150, and 151 volumes respectively. We predict whether a two-dimensional slice has Enlarged Ventricles and/or a Mass (including Mass and Extra-axial Mass, as in [74]).
Abdominal MR Scans. The clinical task for the abdominal MR scans is the identification of clinically significant prostate cancer (CS-PCA), defined as a lesion within the prostate to which a radiologist assigns a Prostate Imaging Reporting and Data System (PI-RADS) score [63] of 3 or more. We use retrospectively collected bi-parametric abdominal MR scans performed clinically at the NYU Langone Health hospital system, consisting of scans from 313 subjects referred for suspected prostate cancer. The scans were performed on a 3 Tesla Siemens scanner with a 30-element body coil array. Examinations included an axial T2-weighted TSE and an axial diffusion-weighted EPI sequence using b-values of 50 and 1000; for our experiments we only used the data obtained with the T2-weighted sequence. For each scan volume, a board-certified abdominal radiologist examined each slice to identify the presence of lesions and assigned a PI-RADS score to each. A slice is said to have CS-PCA if it contains at least one lesion with a PI-RADS score of 3 or more. We split the data into 218, 48, and 47 volumes for the training, validation, and test sets, respectively, ensuring that scans from the same patient appear in only one of the three splits. Since the data for these scans is acquired using multiple coils, following [70] we emulate it to be coming from a single coil using the emulated single-coil (ESC) method [61]. This results in a single k-space matrix that is provided as input to EMRT. The primary motivation for doing this was simplicity in testing our hypothesis; in future work we will propose models that work directly with multi-coil data.
7.2 Exp 1: Disease Inference Directly from k-Space
Figure 5: Performance of EMRT against MODELRSS: AUROC on the test set of EMRT (red) at different sampling rates (panels: ACL, Meniscal Tear, CS-PCA, Enlarged Ventricles, Mass) in comparison to the AUROC of MODELRSS (black) trained using the fully-sampled k-space data.

Our first set of experiments tests the feasibility of inferring a disease directly from the k-space data by comparing the performance of KSPACE-NET to a DL model that uses high-fidelity images as input. Towards that end, we train the KSPACE-NET model to solve the binary task of inferring the presence/absence of the disease, using as input the full k-space matrix x̃mc emulated to be coming from a single coil via the ESC algorithm [61]. The performance of the KSPACE-NET model is
compared against image-based deep learning models trained to infer the presence of the disease from images reconstructed with the RSS method from the full k-space data acquired using multiple coils. We train a pre-activation ResNet-50 [21] model using these m̃RSS images as input; we call this model MODELRSS. The disease inference accuracy of these models is the best one can hope to achieve from an image-based model, because the images are reconstructed from the full k-space data and the models are trained with a rigorous hyper-parameter search to find the best-performing configuration.
             Knee AUROC                   CS-PCA AUROC   Brain AUROC
             Mensc. Tear   ACL Sprain    CS-PCA         Enlg. Ventricles   Mass
KSPACE-NET   93.4 ± 0.7    90.8 ± 1.5    84.1 ± 0.4     92.3 ± 2.0         91.5 ± 1.0
MODELRSS     92.1 ± 1.0    90.6 ± 1.01   83.1 ± 1.6     93.8 ± 1.3         88.4 ± 5

Table 2: Disease inference directly from k-space: AUROC of the KSPACE-NET model in comparison to a DL model trained on high-fidelity images to infer the presence/absence of specific diseases. The results clearly show that it is indeed feasible to infer the disease directly from the k-space data as accurately as an image-based classifier.
Table 2 provides the AUROC of the KSPACE-NET model in comparison to MODELRSS. The results clearly show that it is indeed feasible to infer the presence of the disease directly from the k-space data as accurately as a finely tuned DL model trained on high-fidelity images. This result is not surprising, since the transformation from k-space to image space is achieved using the IFFT, which is a deterministic and lossless operation. What is surprising is that in some cases the KSPACE-NET model performs better than the image-based model. While a full investigation is left for future work, we conjecture that the reason for this performance gap is that the KSPACE-NET model uses the entire complex data as input, whereas the image-based model uses only the magnitude of the complex matrix in image space (as is the widespread norm in medical image analysis). Lastly, these results are particularly impressive when one takes into account that the KSPACE-NET model takes as input data emulated from a single coil (which has a lower SNR), whereas MODELRSS uses the full multi-coil data. As part of future work we are extending the KSPACE-NET model to ingest multi-coil data directly.
7.3 Exp 2: Exploring the Limits on Under-Sampling the k-Space Using EMRT
In our second set of experiments, we estimate the extent to which one can under-sample the k-space data and still infer the presence of the disease (using the KSPACE-NET model) as accurately as an image-based classifier that uses high-fidelity images as input. We sample the k-space at different sampling rates α ∈ {5%, 8%, 10%, 12.5%} and train a KSPACE-NET for each α. For a given sampling rate α, the sampling pattern is learnt using the EMRT procedure summarized in Algorithms 1 and 2.
Figure 5 and Table 3 give the AUC, sensitivity, and specificity of the EMRT model at different sampling rates and compare its performance to MODELRSS. We observe that across these sampling rates, the performance of EMRT, in terms of AUC and sensitivity-specificity, does not deteriorate significantly in comparison to the DL model trained on high-fidelity images reconstructed from the full k-space data. This experiment demonstrates that if the goal is simply to infer the presence/absence of the disease, without the need to reconstruct a high-fidelity image, then we can afford to significantly under-sample the k-space data (to as low as 5%) without a significant loss in performance. This is in contrast to [44], which reports that in the FASTMRI challenge, all submissions produced reconstructed images that started missing clinically relevant pathologies at sampling rates below 25%. Figure 1 shows the sequence of images reconstructed from the k-space data corresponding to the sampling patterns learnt by EMRT. One can clearly see that the pathology visible in the image reconstructed from the full k-space is hard to discern in the images generated from under-sampled data, and that it becomes successively harder to identify as we decrease the amount of data used.
            Knee SENS/SPEC             CS-PCA SENS/SPEC   Brain SENS/SPEC
            Mensc. Tear   ACL Sprain   CS-PCA             Enlg. Ventricles   Mass
EMRT        81/83         80/81        88/65              86/82              89/70
MODELRSS    83/86         81/82        88/60              78/94              82/80

Table 3: Performance of EMRT against MODELRSS: test sensitivity/specificity of EMRT and MODELRSS, obtained using an operating point with 85% sensitivity on the validation set. Results are reported using a sampling rate α = 5% for knee MR and 8% for brain and prostate MR scans. See Appendix A for confidence intervals.
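The 85%-sensitivity operating point used here and in Table 4 corresponds to a standard thresholding procedure: choose the decision threshold that reaches the target sensitivity on the validation set, then report the test sensitivity/specificity at that threshold. A minimal sketch using scikit-learn's ROC utilities (variable names are illustrative):

import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_sensitivity(y_val, p_val, target_sens=0.85):
    # Pick the first threshold whose validation sensitivity (TPR) reaches the target.
    fpr, tpr, thresholds = roc_curve(y_val, p_val)
    return thresholds[np.argmax(tpr >= target_sens)]

def sens_spec(y_test, p_test, threshold):
    pred = (p_test >= threshold).astype(int)
    tp = ((pred == 1) & (y_test == 1)).sum()
    fn = ((pred == 0) & (y_test == 1)).sum()
    tn = ((pred == 0) & (y_test == 0)).sum()
    fp = ((pred == 1) & (y_test == 0)).sum()
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity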
                    Knee SENS/SPEC             CS-PCA SENS/SPEC   Brain SENS/SPEC
                    Mensc. Tear   ACL Sprain   CS-PCA             Enlg. Ventricles   Mass
EMRT                81/83         80/81        88/65              86/82              89/70
MODELLOUPE:VARNET   81/79         74/81        86/54              84/72              74/56

Table 4: Performance of EMRT against MODELLOUPE:VARNET: test sensitivity/specificity of EMRT and MODELLOUPE:VARNET, obtained using an operating point with 85% sensitivity on the validation set. Results are reported using a sampling rate α = 5% for knee MR and 8% for brain and prostate MR scans. See Appendix A for confidence intervals.
7.4 Exp 3: Reconstructed Images vs. Direct k-Space When Under-Sampling
So far we have established that we can infer the presence/absence of a disease directly from k-space data, and that when we are not concerned with reconstructing intermediate images, we only need a fraction of the k-space data to infer the disease without compromising accuracy relative to a model trained on images reconstructed from the full k-space data. When using under-sampled k-space data, however, another way to infer the disease presence is to first reconstruct an intermediate image from the under-sampled data and then train a classifier on these images. Our third set of experiments is structured to answer the following question: how is the disease inference accuracy impacted if we use a DL model trained on images reconstructed from the under-sampled k-space data, in comparison to EMRT, which infers the disease directly from the k-space data?
Figure 6: Performance of EMRT against MODELLOUPE:VARNET: Top panel shows the AUROC on the test set of EMRT (red) at different sampling rates in comparison to the AUROC of MODELLOUPE:VARNET (blue), for ACL, Meniscal Tear, CS-PCA, Enlarged Ventricles, and Mass. Note that for all pathologies, all MR scans, and all sampling rates α ∈ {5%, 8%, 10%, 12.5%}, EMRT outperforms MODELLOUPE:VARNET.
Towards that end, we compare the performance of EMRT against image-based classifiers trained on images reconstructed from under-sampled k-space data. For the image-based classifiers, the sampling pattern used is the one obtained by the LOUPE method [2], a state-of-the-art method that learns a sampling pattern over the k-space such that the corresponding data yields the best possible reconstructed image. Furthermore, we use a state-of-the-art image reconstruction model, the VARNET model [56], to reconstruct the images from the under-sampled k-space data. We denote this benchmark by MODELLOUPE:VARNET, identifying the method used for learning the sampling pattern and the method used to reconstruct the images from the learnt pattern, respectively.
Figure 6 and Table 4 compare the performance of the two sets of models. We observe that for all abnormalities and all sampling rates, EMRT outperforms MODELLOUPE:VARNET. The bottom panel of Figure 6 shows the sensitivity and specificity of the models obtained at a 5% sampling rate for knees, and an 8% sampling rate for abdomen and brain. For a given sensitivity, EMRT has a significantly better specificity than MODELLOUPE:VARNET, translating to a lower number of false positive cases. Furthermore, we observe that for some pathologies, such as CS-PCA and Enlarged Ventricles, MODELLOUPE:VARNET shows a sharp decrease in AUROC relative to EMRT, which for the most part remains stable across all sampling rates and pathologies.
Lastly, to validate the correctness of our implementation of the image reconstruction method (VARNET [56]), we also report the structural similarity (SSIM) metric, a commonly used measure of reconstruction quality, in Figure 7. Our SSIM numbers are in the ballpark of the state of the art reported in the literature: at a 12.5% sampling rate, the knee reconstruction SSIM is 0.82 compared to 0.88 reported in [56], and the brain reconstruction SSIM is 0.89 compared to 0.94 reported in [56].
7.5 Exp 4: Benefits of Learning the Sampling Pattern Using EMRT
In our next set of experiments we show two things. First, the sampling pattern learnt by EMRT (which optimizes classification accuracy) differs from the patterns learnt by methods that optimize a reconstruction metric (such as LOUPE). Second, learning a sampling pattern that explicitly optimizes disease classification accuracy (as EMRT does) provides benefits over other sampling patterns.
Figure 7: Performance of image reconstruction: reconstruction methods are an essential component of the indirect classification benchmark. The figure plots the structural similarity (SSIM) of the best performing reconstruction methods at increasing sampling rates α ∈ {5%, 8%, 10%, 12.5%}, for brain, prostate T2, knee, and prostate b50 scans.
Figure 8 contrasts the classification-optimized sampling patterns learnt by EMRT with the reconstruction-optimized sampling patterns learnt by LOUPE. We clearly see that the pattern learnt by EMRT is composed of a mixture of low frequencies (red lines clustered around the center) and high frequencies (red lines spread away from the center). This is in contrast to the predominantly low frequencies selected by LOUPE, which are largely concentrated around the center.
Next, to show the benefits of learning a sampling pattern catered towards explicitly optimizing disease identification accuracy, we compare the performance of EMRT against another KSPACE-NET model trained to identify the disease using a fixed sampling pattern consisting of only low frequencies (center-focused k-space lines). We denote this model by MODELCENTER. Figure 9 compares the performance of the two classifiers. As evident from the figure, EMRT outperforms MODELCENTER across all tasks, pointing towards the benefits of learning a sampling pattern that optimizes classification accuracy. The performance gap is larger for tasks where the frequencies learnt by EMRT are spread farther from the center of the frequency spectrum, such as Mass in the brain scans and CS-PCA in the prostate scans.
Figure 8: Contrasting sampling patterns (a. Prostate T2, b. Brain, c. Knee): we compare the sampling patterns learnt by EMRT, which optimizes classification accuracy, with the patterns learnt by LOUPE, which optimizes a reconstruction metric, for different diseases. EMRT learns a mix of low and high frequencies (red lines spread across the spectrum), whereas LOUPE predominantly picks low frequencies (blue lines clustered around the center). The prostate and brain sampling patterns are sampled at an 8% sampling rate, the knee patterns at 5%.
Figure 9: Benefits of learning the sampling pattern: AUROC of EMRT (which learns a sampling pattern that optimizes disease classification accuracy) in comparison to the AUROC of MODELCENTER (which uses a fixed, center-focused sampling pattern) at sampling rates α ∈ {5%, 8%, 10%, 12.5%}, for Meniscal Tear, ACL, CS-PCa, Enlarged Ventricles (EV), and Mass. The superior performance of EMRT across all tasks and sampling rates is indicative of the benefits of learning a sampling pattern that explicitly optimizes classification accuracy.
Figure 10: The role of random subset training in EMRT: classification performance of MODELFIXED, a KSPACE-NET trained with a single fixed EMRT-learnt under-sampling pattern (dashed lines), against EMRT (solid lines), at sampling rates α ∈ {5%, 8%, 10%, 12.5%}, for Meniscal Tear, ACL, CS-PCa, Enlarged Ventricles (EV), and Mass.
7.6 Exp 5: The Role of Random Subset Training in EMRT
One of the key characteristics of the EMRT training methodology is the way the KSPACE-NET model qval is trained: every mini-batch is constructed by first randomly drawing a different sampling pattern from the distribution π and then applying the chosen pattern to all samples in the mini-batch (see Algorithm 1). To better understand the role of this specialized training procedure, we examine whether training a KSPACE-NET classifier with different sampling patterns across mini-batches has any benefit over training with the same fixed sampling pattern across all mini-batches. To that end, we compare the performance of the EMRT classifier qval to a model trained with a fixed but learnt sampling pattern; we use the sampling pattern learnt by EMRT as the input pattern for this classifier. The architectures of the two classifiers are identical. In Figure 10, we observe that for most sampling rates the classifier trained with varying sampling patterns across mini-batches outperforms the classifier trained with a single fixed pattern, even though the fixed pattern is learnt. Training with randomly drawn sampling patterns across mini-batches acts as a regularizer, leading to better generalization performance.
8 Conclusion and Limitations
MR imaging is the gold standard of diagnostic imaging, especially in a differential diagnosis setting, thanks to its excellent soft-tissue contrast. However, despite its proven diagnostic value, this imaging modality is not used as a first-line tool for early identification of life-threatening diseases, primarily because of its lack of accessibility at the population level. This lack of accessibility can be attributed to the need to generate high-fidelity images for examination by radiologists: high-fidelity image generation necessitates expensive scanning hardware to acquire large quantities of high-quality k-space data, and the execution of complex, time-consuming acquisition protocols to collect this data. Motivated by the goal of improving the accessibility of MR for early and accurate disease identification at the population level, in this study we propose to skip the image reconstruction step and instead infer the final answer (presence/absence of the disease) directly from the k-space data. We hypothesize that when image reconstruction is not a requirement, one can infer the presence/absence of the disease using a very small tailored fraction of the k-space data. Towards that end we propose a novel deep neural network methodology, EMRT, that first learns the subset of the k-space data carrying the largest diagnostic signal for the disease and then uses this data to directly infer the disease without generating images. We validate our hypothesis through a series of experiments using small sampling rates without suffering a significant drop in performance compared to models using the fully-sampled k-space. Models such as EMRT that infer the presence of a disease directly from the k-space data have the potential to bring MR scanners closer to deployment for population-level screening of disease.
Limitations. Despite encouraging preliminary results, much work needs to be done to get us closer to a system that can be clinically deployed. The present work is just a first step towards assessing whether it is possible to accurately infer the presence of a disease from a small tailored fraction of k-space data without generating images. Several limitations of the current work need to be addressed to bring us closer to developing scanning hardware that can operate outside of specialized imaging environments and yet capture k-space data of sufficient quantity and quality for the subsequent ML model to infer the disease accurately.
First, the current study works with data generated from an expensive high-field 3T scanner (the current standard of care) housed in specialized imaging environments; as a result, the underlying k-space data is of very high quality. For these results to generalize to data acquired by more accessible low-field scanners, one needs to account for the noise ingrained in the data those scanners acquire. The current work does not propose a mechanism to account for such noise; it focuses on establishing the limits on the quantity of data needed for accurate diagnosis.
Second, almost all modern scanners acquire data in parallel using multiple coils. This not only speeds up data acquisition but also increases the signal-to-noise ratio (SNR) of the acquired signal. However, in the current feasibility study, for the sake of simplicity, we worked with ESC data (multi-coil data emulated to be coming from a single coil). Future work will extend the EMRT methodology to multi-coil k-space data; we anticipate that working with multi-coil data will only improve performance, because of the larger effective SNR associated with it.
Third, MR imaging is a 3D modality in which the clinician renders a diagnosis after examining all the slices of a volumetric image; individual slices are seldom interpreted in isolation. In other words, the final diagnosis is at the volume level. However, because of a dearth of volume-level positive cases in our data set, we developed the EMRT methodology to classify individual slices. Volume-level labels can be derived from the slice-level labels within a volume using an aggregation scheme such as majority voting or averaging of slice probabilities; however, naively aggregating slice-level labels can increase the number of false positive volumes. As part of future work, with the help of additional data, we will explore extending EMRT to classify volumes directly.
Another limitation of EMRT comes from the type of k-space data it uses. In a typical clinical MR scan, multiple volumetric images are reconstructed, each with different contrast properties, with the goal of providing the radiologist with multiple visual facets of the same underlying anatomy. These different contrast images are reconstructed from k-space data corresponding to different acquisition sequences; for instance, prostate scans are typically acquired using T2-weighted (T2) and diffusion-weighted (DW) sequences. Again in the interest of simplicity, the EMRT methodology proposed in this study uses k-space data from a single sequence. In the future we plan to extend the methodology to incorporate data from multiple sequences, informed by what is used in real clinical settings. Lastly, the EMRT methodology is restricted to learning Cartesian sampling patterns; for a given disease identification accuracy, there might exist non-Cartesian sampling patterns that are even sparser than the corresponding Cartesian pattern. When learning such "arbitrary" sampling patterns, one needs to restrict sampling to the subset of patterns that respect the physical constraints of the scanner; in future work we will extend EMRT to learn such patterns as well. Furthermore, to facilitate further research in this potentially high-impact area, we are releasing a repository containing the data set and code for reproducing the experiments.
References
[1] Zeynettin Akkus, Alfiia Galimzianova, Assaf Hoogi, Daniel L Rubin, and Bradley J Erickson. Deep learning for brain mri segmentation: state of the art and future directions. Journal of digital imaging, 30(4):449–459, 2017.
[2] Cagla Deniz Bahadir, Adrian V Dalca, and Mert R Sabuncu. Learning-based optimization of the under-sampling pattern in mri. In International Conference on Information Processing in Medical Imaging, pages 780–792. Springer, 2019.
[3] Tim Bakker, Herke van Hoof, and Max Welling. Experimental design for mri by greedy policy search. Advances in Neural Information Processing Systems, 33, 2020.
[4] Juergen Biederer, Yoshiharu Ohno, Hiroto Hatabu, Mark L Schiebler, Edwin JR van Beek, Jens Vogel-Claussen, and Hans-Ulrich Kauczor. Screening for lung cancer: Does mri have a role? European journal of radiology, 86:353–360, 2017.
[5] John Brodersen and Volkert Dirk Siersma. Long-term psychosocial consequences of false-positive screening mammography. The Annals of Family Medicine, 11(2):106–115, 2013.
[6] Louise Clare Brown, Hashim U Ahmed, Rita Faria, Ahmed El-Shater Bosaily, Rhian Gabe, Richard S Kaplan, Mahesh Parmar, Yolanda Collaco-Moraes, Katie Ward, Richard Graham Hindley, Alex Freeman, Alexander Kirkham, Robert Oldroyd, Chris Parker, Simon Bott, Nick Burns-Cox, Tim Dudderidge, Maneesh Ghei, Alastair Henderson, Rajendra Persad, Derek J Rosario, Iqbal Shergill, Mathias Winkler, Marta Soares, Eldon Spackman, Mark Sculpher, and Mark Emberton. Multiparametric MRI to improve detection of prostate cancer compared with transrectal ultrasound-guided prostate biopsy alone: the PROMIS study. Health technology assessment (Winchester, England), 22(39):1–176, 2018.
[7] Mark A Brown and Richard C Semelka. MRI: basic principles and applications. John Wiley & Sons, 2011.
[8] Iztok Caglic, Viljem Kovac, and Tristan Barrett. Multiparametric mri-local staging of prostate cancer and beyond. Radiology and oncology, 53(2):159–170, 2019.
[9] Elizabeth K Cole, John M Pauly, Shreyas S Vasanawala, and Frank Ong. Unsupervised mri reconstruction with generative adversarial networks. arXiv preprint arXiv:2008.13065, 2020.
[10] Albert Comelli, Navdeep Dahiya, Alessandro Stefano, Federica Vernuccio, Marzia Portoghese, Giuseppe Cutaia, Alberto Bruno, Giuseppe Salvaggio, and Anthony Yezzi. Deep learning-based methods for prostate segmentation in magnetic resonance imaging. Applied Sciences, 11(2):782, 2021.
[11] Thomas M Cover. Elements of information theory. John Wiley & Sons, 1999.
[12] Ian Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for model explanation. arXiv preprint arXiv:2011.14878, 2020.
[13] Arjun D Desai, Andrew M Schmidt, Elka B Rubin, Christopher Michael Sandino, Marianne Susan Black, Valentina Mazzoli, Kathryn J Stevens, Robert Boutin, Christopher Re, Garry E Gold, et al. Skm-tea: A dataset for accelerated mri reconstruction with dense image labels for quantitative clinical evaluation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[14] D Eldred-Evans, P Burak, MJ Connor, E Day, M Evans, F Fiorentino, M Gammon, F Hosking-Jervis, N Klimowska-Nassar, W McGuire, AR Padhani, AT Prevost, D Price, H Sokhi, H Tam, M Winkler, and HU Ahmed. Population-Based Prostate Cancer Screening With Magnetic Resonance Imaging or Ultrasonography: The IP1-PROSTAGRAM Study. Jama Oncology, 7(3):395–402, 2021.
[15] David Eldred-Evans, Paula Burak, Martin J Connor, Emily Day, Martin Evans, Francesca Fiorentino, Martin Gammon, Feargus Hosking-Jervis, Natalia Klimowska-Nassar, William McGuire, et al. Population-based prostate cancer screening with magnetic resonance imaging or ultrasonography: the ip1-prostagram study. JAMA oncology, 7(3):395–402, 2021.
[16] Joann G Elmore, Mary B Barton, Victoria M Moceri, Sarah Polk, Philip J Arena, and Suzanne W Fletcher. Ten-year risk of false positive screening mammograms and clinical breast examinations. New England Journal of Medicine, 338(16):1089–1096, 1998.
[17] Joshua J Fenton, Meghan S Weyrich, Shauna Durbin, Yu Liu, Heejung Bang, and Joy Melnikow. Prostate-specific antigen–based screening for prostate cancer: evidence report and systematic review for the us preventive services task force. Jama, 319(18):1914–1931, 2018.
[18] Kirema Garcia-Reyes, Niccolò M Passoni, Mark L Palmeri, Christopher R Kauffman, Kingshuk Roy Choudhury, Thomas J Polascik, and Rajan T Gupta. Detection of prostate cancer with multiparametric mri (mpmri): effect of dedicated reader education on accuracy and confidence of index and anterior cancer diagnosis. Abdominal imaging, 40(1):134–142, 2015.
[19] Melanie Hamilton-Basich. Hyperfine receives fda clearance for portable mri technology. AXIS Imaging News, 2020.
[20] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine, 79(6):3055–3071, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer, 2016.
[22] Nils Hendrix, Ward Hendrix, Kees van Dijke, Bas Maresch, Mario Maas, Stijn Bollen, Alexander Scholtens, Milko de Jonge, Lee-Ling Sharon Ong, Bram van Ginneken, et al. Musculoskeletal radiologist-level performance by using deep learning for detection of scaphoid fractures on conventional multi-view radiographs of hand and wrist. European Radiology, pages 1–14, 2022.
[23] Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli, Roberto Lo Gullo, Isaac Daimiel Naranjo, Almir GV Bitencourt, Natsuko Onishi, Eun Sook Ko, Doris Leithner, et al. Radiologist-level performance by using deep learning for segmentation of breast cancers on mri scans. Radiology: Artificial Intelligence, 4(1):e200231, 2021.
[24] Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli, Roberto Lo Gullo, Isaac Daimiel Naranjo, Almir GV Bitencourt, Natsuko Onishi, Eun Sook Ko, Doris Leithner, et al. Deep learning achieves radiologist-level performance of tumor segmentation in breast mri. arXiv preprint arXiv:2009.09827, 2020.
[25] Iris AM Huijben, Bastiaan S Veeling, and Ruud JG van Sloun. Deep probabilistic subsampling for task-adaptive compressed sensing. In International Conference on Learning Representations, 2019.
[26] Chang Min Hyun, Hwa Pyung Kim, Sung Min Lee, Sungchul Lee, and Jin Keun Seo. Deep learning for undersampled mri reconstruction. Physics in Medicine & Biology, 63(13):135007, 2018.
[27] Dragan Ilic, Mia Djulbegovic, Jae Hung Jung, Eu Chang Hwang, Qi Zhou, Anne Cleves, Thomas Agoritsas, and Philipp Dahm. Prostate cancer screening with prostate-specific antigen (psa) test: a systematic review and meta-analysis. Bmj, 362, 2018.
[28] Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, and Rajesh Ranganath. Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations. In International Conference on Artificial Intelligence and Statistics, pages 1459–1467. PMLR, 2021.
[29] Neil Jethani, Mukund Sudarshan, Ian Connick Covert, Su-In Lee, and Rajesh Ranganath. Fastshap: Real-time shapley value estimation. In International Conference on Learning Representations, 2022.
[30] Kyong Hwan Jin, Michael Unser, and Kwang Moo Yi. Self-supervised deep active accelerated mri. arXiv preprint arXiv:1901.04547, 2019.
[31] Patricia M Johnson, Angela Tong, Awani Donthireddy, Kira Melamud, Robert Petrocelli, Paul Smereka, Kun Qian, Mahesh B Keerthivasan, Hersh Chandarana, and Florian Knoll. Deep learning reconstruction enables highly accelerated biparametric mr imaging of the prostate. Journal of Magnetic Resonance Imaging, 56(1):184–195, 2022.
[32] Veeru Kasivisvanathan, Antti S Rannikko, Marcelo Borghi, Valeria Panebianco, Lance A Mynderse, Markku H Vaarala, Alberto Briganti, Lars Budäus, Giles Hellawell, Richard G Hindley, et al. Mri-targeted or standard biopsy for prostate-cancer diagnosis. New England Journal of Medicine, 378(19):1767–1777, 2018.
[33] TP Kilpeläinen, TLJ Tammela, L Määttänen, P Kujala, Ulf-Håkan Stenman, M Ala-Opas, TJ Murtola, and A Auvinen. False-positive screening results in the finnish prostate cancer screening trial. British journal of cancer, 102(3):469–474, 2010.
[34] Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, Aaron Defazio, Matthew J Muckley, Daniel K Sodickson, C Lawrence Zitnick, et al. Advancing machine learning for mr image reconstruction with an open competition: Overview of the 2019 fastmri challenge. Magnetic resonance in medicine, 84(6):3054–3070, 2020.
[35] Sandra Labus, Martin M Altmann, Henkjan Huisman, Angela Tong, Tobias Penzkofer, Moon Hyung Choi, Ivan Shabunin, David J Winkel, Pengyi Xing, Dieter H Szolar, et al. A concurrent, deep learning–based computer-aided detection system for prostate multiparametric mri: a performance study involving experienced and less-experienced radiologists. European Radiology, pages 1–13, 2022.
[36] Jennifer Elston Lafata, Janine Simpkins, Lois Lamerato, Laila Poisson, George Divine, and Christine Cole Johnson. The economic impact of false-positive cancer screens. Cancer Epidemiology and Prevention Biomarkers, 13(12):2126–2132, 2004.
[37] Erik G Larsson, Deniz Erdogmus, Rui Yan, Jose C Principe, and Jeffrey R Fitzsimmons. Snr-optimality of sum-of-squares reconstruction for phased-array magnetic resonance imaging. Journal of Magnetic Resonance, 163(1):121–123, 2003.
[38] Dongwook Lee, Jaejun Yoo, and Jong Chul Ye. Deep residual learning for compressed sensing mri. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 15–18. IEEE, 2017.
[39] Michael Lustig, David Donoho, and John M Pauly. Sparse mri: The application of compressed sensing for rapid mr imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
[40] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. IEEE signal processing magazine, 25(2):72–82, 2008.
[41] Maria Adele Marino, Thomas Helbich, Pascal Baltzer, and Katja Pinker-Domenig. Multiparametric mri of the breast: A review. Journal of Magnetic Resonance Imaging, 47(2):301–315, 2018.
[42] Michael G Marmot, DG Altman, DA Cameron, JA Dewar, SG Thompson, and Maggie Wilcox. The benefits and harms of breast cancer screening: an independent review. British journal of cancer, 108(11):2205–2240, 2013.
[43] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through ffts. arXiv preprint arXiv:1312.5851, 2013.
[44] Matthew J Muckley, Bruno Riemenschneider, Alireza Radmanesh, Sunwoo Kim, Geunu Jeong, Jingyu Ko, Yohan Jun, Hyungseob Shin, Dosik Hwang, Mahmoud Mostapha, et al. Results of the 2020 fastmri challenge for machine learning mr image reconstruction. IEEE Transactions on Medical Imaging, 40(9):2306–2317, 2021.
[45] Jordan Nasri. Office-based, point-of-care, low-field mri system to guide prostate interventions: Recent developments. UROLOGY, 2021.
[46] Anwar R Padhani and Baris Turkbey. Detecting prostate cancer with deep learning for mri: a small step forward, 2019.
[47] Luis Pineda, Sumana Basu, Adriana Romero, Roberto Calandra, and Michal Drozdzal. Active mr k-space sampling with reinforcement learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 23–33. Springer, 2020.
[48] Ardeshir R Rastinehad, Baris Turkbey, Simpa S Salami, Oksana Yaskiv, Arvin K George, Mathew Fakhoury, Karin Beecher, Manish A Vira, Louis R Kavoussi, David N Siegel, et al. Improving detection of clinically significant prostate cancer: magnetic resonance imaging/transrectal ultrasound fusion guided prostate biopsy. The Journal of urology, 191(6):1749–1754, 2014.
[49] Michael P Recht, Jure Zbontar, Daniel K Sodickson, Florian Knoll, Nafissa Yakubova, Anuroop Sriram, Tullie Murrell, Aaron Defazio, Michael Rabbat, Leon Rybak, et al. Using deep learning to accelerate knee mri at 3 t: results of an interchangeability study. American Journal of Roentgenology, 215(6):1421–1429, 2020.
[50] Oren Rippel, Jasper Snoek, and Ryan P Adams. Spectral representations for convolutional neural networks. arXiv preprint arXiv:1506.03767, 2015.
[51] Andrew B Rosenkrantz, Fang-Ming Deng, Sooah Kim, Ruth P Lim, Nicole Hindman, Thais C Mussi, Bradley Spieler, Jason Oaks, James S Babb, Jonathan Melamed, et al. Prostate cancer: multiparametric mri for index lesion localization—a multiple-reader study. American Journal of Roentgenology, 199(4):830–837, 2012.
[52] V Sathiyamoorthi, AK Ilavarasi, K Murugeswari, Syed Thouheed Ahmed, B Aruna Devi, and Murali Kalipindi. A deep convolutional neural network based computer aided diagnosis system for the prediction of alzheimer's disease in mri images. Measurement, 171:108838, 2021.
[53] Li Shen, Laurie R Margolies, Joseph H Rothstein, Eugene Fluder, Russell McBride, and Weiva Sieh. Deep learning to improve breast cancer detection on screening mammography. Scientific reports, 9(1):1–12, 2019.
[54] Susan Slatkoff, Stephen Gamboa, Adam J Zolotor, Anne L Mounsey, and Kohar Jones. Psa testing: When it's useful, when it's not. The Journal of family practice, 60(6):357, 2011.
[55] Anita Slomski. Avoiding unnecessary prostate biopsies with mri. JAMA, 317(12):1206–1206, 2017.
[56] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated mri reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 64–73. Springer, 2020.
[57] Andreas Stang and Karl-Heinz Jöckel. The impact of cancer screening on all-cause mortality: what is the best we can expect? Deutsches Ärzteblatt International, 115(29-30):481, 2018.
[58] Mukund Sudarshan, Wesley Tansey, and Rajesh Ranganath. Deep direct likelihood knockoffs. Advances in neural information processing systems, 33:5036–5046, 2020.
[59] Glen B Taksler, Nancy L Keating, and Michael B Rothberg. Implications of false-positive results for future cancer screenings. Cancer, 124(11):2390–2398, 2018.
[60] JE Thompson, PJ Van Leeuwen, Daniel Moses, Ron Shnier, Phillip Brenner, Warick Delprado, M Pulbrook, Maret Böhm, Anne M Haynes, Andrew Hayen, et al. The diagnostic performance of multiparametric magnetic resonance imaging to detect significant prostate cancer. The Journal of urology, 195(5):1428–1435, 2016.
[61] Mark Tygert and Jure Zbontar. Simulating single-coil mri from the responses of multiple coils. Communications in Applied Mathematics and Computational Science, 15(2):115–127, 2020.
[62] Christopher JD Wallis, Masoom A Haider, and Robert K Nam. Role of mpmri of the prostate in screening for prostate cancer. Translational andrology and urology, 6(3):464, 2017.
[63] Jeffrey C Weinreb, Jelle O Barentsz, Peter L Choyke, Francois Cornud, Masoom A Haider,
|
| 1509 |
+
Katarzyna J Macura, Daniel Margolis, Mitchell D Schnall, Faina Shtern, Clare M Tempany,
|
| 1510 |
+
et al. Pi-rads prostate imaging–reporting and data system: 2015, version 2. European urology,
|
| 1511 |
+
69(1):16–40, 2016.
|
| 1512 |
+
[64] Tomer Weiss, Sanketh Vedula, Ortal Senouf, Oleg Michailovich, Michael Zibulevsky, and Alex
|
| 1513 |
+
Bronstein. Joint learning of cartesian under sampling andre construction for accelerated mri. In
|
| 1514 |
+
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing
|
| 1515 |
+
(ICASSP), pages 8653–8657. IEEE, 2020.
|
| 1516 |
+
[65] Sidney J Winawer, Robert H Fletcher, L Miller, Fiona Godlee, MH Stolar, CD Mulrow,
|
| 1517 |
+
SH Woolf, SN Glick, TG Ganiats, JH Bond, et al. Colorectal cancer screening: clinical
|
| 1518 |
+
guidelines and rationale. Gastroenterology, 112(2):594–642, 1997.
|
| 1519 |
+
[66] David J Winkel, Angela Tong, Bin Lou, Ali Kamen, Dorin Comaniciu, Jonathan A Disselhorst,
|
| 1520 |
+
Alejandro Rodríguez-Ruiz, Henkjan Huisman, Dieter Szolar, Ivan Shabunin, et al. A novel
|
| 1521 |
+
deep learning based computer-aided diagnosis system improves the accuracy and efficiency of
|
| 1522 |
+
radiologists in reading biparametric magnetic resonance images of the prostate: results of a
|
| 1523 |
+
multireader, multicase study. Investigative radiology, 56(10):605–613, 2021.
|
| 1524 |
+
[67] Tien Yin Wong and Neil M Bressler. Artificial intelligence with deep learning technology looks
|
| 1525 |
+
into diabetic retinopathy screening. Jama, 316(22):2366–2367, 2016.
|
| 1526 |
+
[68] JS Wysock, N Mendhiratta, F Zattoni, X Meng, M Bjurlin, WC Huang, H Lepor,
|
| 1527 |
+
AB Rosenkrantz, and SS. Taneja. Predictive Value of Negative 3T Multiparametric Mag-
|
| 1528 |
+
netic Resonance Imaging of the Prostate on 12-core Biopsy Results. BJU Int., 118(4):515–520,
|
| 1529 |
+
2016.
|
| 1530 |
+
[69] Sunghwan Yoo, Isha Gujrathi, Masoom A Haider, and Farzad Khalvati. Prostate cancer detection
|
| 1531 |
+
using deep convolutional neural networks. Scientific reports, 9(1):1–10, 2019.
|
| 1532 |
+
[70] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J
|
| 1533 |
+
Muckley, Aaron Defazio, Ruben Stern, Patricia Johnson, Mary Bruno, et al. fastmri: An open
|
| 1534 |
+
dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839, 2018.
|
| 1535 |
+
[71] Jinwei Zhang, Hang Zhang, Alan Wang, Qihao Zhang, Mert Sabuncu, Pascal Spincemaille,
|
| 1536 |
+
Thanh D Nguyen, and Yi Wang. Extending loupe for k-space under-sampling pattern optimiza-
|
| 1537 |
+
tion in multi-coil mri. In International Workshop on Machine Learning for Medical Image
|
| 1538 |
+
Reconstruction, pages 91–101. Springer, 2020.
|
| 1539 |
+
[72] Min Zhang, Geoffrey S Young, Huai Chen, Jing Li, Lei Qin, J Ricardo McFaline-Figueroa,
|
| 1540 |
+
David A Reardon, Xinhua Cao, Xian Wu, and Xiaoyin Xu. Deep-learning detection of cancer
|
| 1541 |
+
metastases to the brain on mri. Journal of Magnetic Resonance Imaging, 52(4):1227–1236,
|
| 1542 |
+
2020.
|
| 1543 |
+
[73] Zizhao Zhang, Adriana Romero, Matthew J Muckley, Pascal Vincent, Lin Yang, and Michal
|
| 1544 |
+
Drozdzal. Reducing uncertainty in undersampled mri reconstruction with active acquisition. In
|
| 1545 |
+
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages
|
| 1546 |
+
2049–2058, 2019.
|
| 1547 |
+
[74] Ruiyang Zhao, Burhaneddin Yaman, Yuxin Zhang, Russell Stewart, Austin Dixon, Florian
|
| 1548 |
+
Knoll, Zhengnan Huang, Yvonne W Lui, Michael S Hansen, and Matthew P Lungren. fastmri+:
|
| 1549 |
+
Clinical pathology annotations for knee and brain fully sampled multi-coil mri data. arXiv
|
| 1550 |
+
preprint arXiv:2109.03812, 2021.
|
| 1551 |
+
[75] Bo Zhu, Jeremiah Z Liu, Stephen F Cauley, Bruce R Rosen, and Matthew S Rosen. Image
|
| 1552 |
+
reconstruction by domain-transform manifold learning. Nature, 555(7697):487–492, 2018.
|
| 1553 |
+
24
|
| 1554 |
+
|
| 1555 |
+
A Classification Metrics

A.1 Knee Results

Sampling Rate | Pathology     | NPV / PPV (ARMS)        | NPV / PPV (Recon)       | NPV / PPV (RSS)
100%          | ACL           | –                       | –                       | 99.2 ± 0.2 / 14.5 ± 0.9
100%          | Meniscal Tear | –                       | –                       | 97.4 ± 0.3 / 44.9 ± 4.7
12.5%         | ACL           | 99.1 ± 0.1 / 13.1 ± 1.5 | 98.8 ± 0.3 / 10.9 ± 1.7 | –
12.5%         | Meniscal Tear | 97.0 ± 0.3 / 42.3 ± 1.7 | 96.9 ± 0.5 / 10.9 ± 1.7 | –
10%           | ACL           | 99.3 ± 0.2 / 12.7 ± 1.3 | 98.9 ± 0.2 / 11.5 ± 2.0 | –
10%           | Meniscal Tear | 97.6 ± 0.5 / 41.1 ± 2.0 | 97.1 ± 0.6 / 33.9 ± 2.5 | –
8%            | ACL           | 99.0 ± 0.3 / 13.0 ± 1.2 | 99.0 ± 0.2 / 11.1 ± 1.6 | –
8%            | Meniscal Tear | 97.8 ± 0.4 / 41.1 ± 2.0 | 97.1 ± 0.4 / 33.9 ± 2.4 | –
5%            | ACL           | 99.1 ± 0.1 / 13.3 ± 0.7 | 98.8 ± 0.3 / 11.5 ± 1.6 | –
5%            | Meniscal Tear | 97.0 ± 0.3 / 39.8 ± 1.3 | 96.8 ± 0.5 / 34.4 ± 2.9 | –
Table 5: Knee NPV/PPV Results

Sampling Rate | Pathology     | Sens / Spec (ARMS)      | Sens / Spec (Recon)     | Sens / Spec (RSS)
100%          | ACL           | –                       | –                       | 81.1 ± 4.4 / 82.2 ± 2.2
100%          | Meniscal Tear | –                       | –                       | 82.8 ± 2.2 / 86.0 ± 2.7
12.5%         | ACL           | 80.9 ± 4.6 / 79.7 ± 4.2 | 75.5 ± 8.2 / 76.2 ± 7.3 | –
12.5%         | Meniscal Tear | 82.2 ± 2.5 / 84.8 ± 0.8 | 81.4 ± 3.1 / 78.0 ± 2.8 | –
10%           | ACL           | 80.0 ± 2.6 / 80.7 ± 1.1 | 77.0 ± 4.1 / 77.2 ± 4.9 | –
10%           | Meniscal Tear | 81.0 ± 1.9 / 83.4 ± 1.2 | 82.8 ± 3.7 / 78.0 ± 2.3 | –
8%            | ACL           | 78.2 ± 7.3 / 80.5 ± 2.0 | 80.2 ± 4.2 / 75.6 ± 4.6 | –
8%            | Meniscal Tear | 80.6 ± 2.2 / 84.4 ± 0.9 | 82.8 ± 2.8 / 78.1 ± 2.4 | –
5%            | ACL           | 84.8 ± 3.5 / 78.1 ± 2.5 | 73.9 ± 8.2 / 78.1 ± 6.3 | –
5%            | Meniscal Tear | 80.8 ± 3.4 / 84.0 ± 1.1 | 81.0 ± 3.2 / 78.9 ± 2.8 | –
Table 6: Knee Sensitivity / Specificity Results

A.2 Brain Results

Sampling Rate | Pathology           | NPV / PPV (ARMS)        | NPV / PPV (Recon)      | NPV / PPV (RSS)
100%          | Enlarged Ventricles | –                       | –                      | 99.6 ± 0.2 / 18.3 ± 9.7
100%          | Mass                | –                       | –                      | 99.5 ± 0.3 / 8.1 ± 0.9
12.5%         | Enlarged Ventricles | 99.5 ± 0.1 / 15.3 ± 7.1 | 99.3 ± 0.3 / 5.9 ± 1.5 | –
12.5%         | Mass                | 99.5 ± 0.2 / 8.3 ± 1.4  | 98.8 ± 0.2 / 3.8       | –
10%           | Enlarged Ventricles | 99.5 ± 0.1 / 11.3 ± 4.0 | 99.4 ± 0.1 / 8.1 ± 2.5 | –
10%           | Mass                | 99.4 ± 0.2 / 6.7 ± 1.4  | 99.4 ± 0.2 / 5.1 ± 1.1 | –
8%            | Enlarged Ventricles | 99.6 ± 0.1 / 9.3 ± 3.7  | 99.4 ± 0.2 / 5.1 ± 1.1 | –
8%            | Mass                | 99.6 ± 0.1 / 6.8 ± 1.3  | 98.7 ± 0.2 / 4.4 ± 0.8 | –
5%            | Enlarged Ventricles | 99.5 ± 0.1 / 9.1 ± 2.2  | 99.4 ± 0.2 / 6.5 ± 2.2 | –
5%            | Mass                | 99.5 ± 0.3 / 7.0 ± 1.2  | 98.7 ± 0.2 / 4.4 ± 0.6 | –
Table 7: Brain NPV/PPV Results

Sampling Rate | Pathology           | Sens / Spec (ARMS)       | Sens / Spec (Recon)      | Sens / Spec (RSS)
100%          | Enlarged Ventricles | –                        | –                        | 84.9 ± 7.0 / 85.8 ± 7.9
100%          | Mass                | –                        | –                        | 86.2 ± 6.9 / 72.4 ± 3.8
12.5%         | Enlarged Ventricles | 83.3 ± 2.2 / 84.3 ± 7.5  | 83.4 ± 8.1 / 62.2 ± 11.7 | –
12.5%         | Mass                | 85.6 ± 4.7 / 73.0 ± 4.8  | 83.3 ± 5.0 / 38.8 ± 12.2 | –
10%           | Enlarged Ventricles | 83.9 ± 4.1 / 79.0 ± 10.4 | 84.1 ± 5.0 / 71.9 ± 10.3 | –
10%           | Mass                | 85.9 ± 4.9 / 65.3 ± 7.8  | 73.9 ± 4.6 / 56.3 ± 7.7  | –
8%            | Enlarged Ventricles | 88.2 ± 3.7 / 74.1 ± 8.2  | 88.5 ± 3.7 / 54.2 ± 11.1 | –
8%            | Mass                | 90.0 ± 2.1 / 64.7 ± 5.5  | 74.2 ± 5.4 / 53.5 ± 11.1 | –
5%            | Enlarged Ventricles | 86.2 ± 4.5 / 75.4 ± 7.4  | 84.8 ± 7.9 / 63.1 ± 14.8 | –
5%            | Mass                | 87.8 ± 7.7 / 66.3 ± 7.2  | 73.4 ± 3.9 / 55.2 ± 7.5  | –
Table 8: Brain Sensitivity / Specificity Results

A.3 Prostate Results

Sampling Rate | Pathology | Sens / Spec (ARMS)      | Sens / Spec (Recon)     | Sens / Spec (RSS)
100%          | CS-PCa    | –                       | –                       | 93.3 ± 0.5 / 59.3 ± 5.4
12.5%         | CS-PCa    | 91.1 ± 9.6 / 59.2 ± 1.9 | 90.0 ± 9.6 / 57.9 ± 1.9 | –
10%           | CS-PCa    | 88.0 ± 8.1 / 64.7 ± 5.1 | 86.0 ± 8.1 / 54.4 ± 2.3 | –
8%            | CS-PCa    | 91.3 ± 5.3 / 60.8 ± 2.1 | 89.0 ± 5.3 / 54.3 ± 2.1 | –
5%            | CS-PCa    | 88.5 ± 4.4 / 62.9 ± 1.5 | 88.6 ± 4.4 / 47.0 ± 1.5 | –
Table 9: Prostate Sensitivity/Specificity Results

Sampling Rate | Pathology | NPV / PPV (ARMS)        | NPV / PPV (Recon)       | NPV / PPV (RSS)
100%          | CS-PCa    | –                       | –                       | 99.2 ± 0.0 / 14.5 ± 1.6
12.5%         | CS-PCa    | 98.7 ± 0.2 / 13.4 ± 1.8 | 98.7 ± 0.6 / 12.2 ± 1.8 | –
10%           | CS-PCa    | 99.0 ± 0.3 / 12.8 ± 5.0 | 98.8 ± 0.6 / 11.7 ± 5.0 | –
8%            | CS-PCa    | 98.7 ± 0.6 / 13.8 ± 2.1 | 97.0 ± 0.3 / 11.8 ± 2.1 | –
5%            | CS-PCa    | 98.9 ± 0.6 / 12.1 ± 1.5 | 96.9 ± 0.1 / 10.0 ± 1.5 | –
Table 10: Prostate NPV/PPV Results
6tFKT4oBgHgl3EQf_i4o/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

7NE4T4oBgHgl3EQfcgzB/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be6a72a6855e09a360da033d2c713248519fe70dc2e13366641d045c0a576b81
size 10027053

8dE4T4oBgHgl3EQfdQww/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7897bc5d6e1911c552a6a0681fe16efeb415820615cd1ee3825948f08cb29d34
size 6619181

9NE0T4oBgHgl3EQffwBQ/content/2301.02408v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:271becf21ce495d91abbf9a8952fc13d8efda303e5c891572143f68d8e691ad9
size 1098449

9NE0T4oBgHgl3EQffwBQ/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4bb8adc46b1a95ed2809570d6fa51925f718f23a3953b7e1fdf6d6285cfb110e
size 1769517

9NE0T4oBgHgl3EQffwBQ/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50917a872ca2a8968fe591610c19eee6513b870766ed140478f0d4a6ef16035c
size 61822

9tAzT4oBgHgl3EQf-_7r/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63a3643a00f69a63b6ec5550354d9c497e8097ff4a8dc296c9a5754da925549d
size 6029357

ANFAT4oBgHgl3EQfrR6g/content/2301.08652v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:275f58c9422277549e14681f49b041638791779a0637adf3d110185a7f785ab6
size 645114

ANFAT4oBgHgl3EQfrR6g/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e33f1e765c9e254e2d14bf9f57eda69689b17feece2e3deacf5dceea3cc13f0f
size 96979

CdE1T4oBgHgl3EQfWAQw/content/tmp_files/2301.03108v1.pdf.txt ADDED
@@ -0,0 +1,490 @@
arXiv:2301.03108v1 [hep-th] 8 Jan 2023

Fields and strings on non commutative q-deformed spaces

Poula Tadros
Department of Applied Physics, Aalto University School of Science, FI-00076 Aalto, Finland.
email: [email protected]

Abstract
We study scalar field and string theory on non commutative q-deformed spaces. We define a product of functions on the non commutative algebra of functions resulting from the q-deformation, analogous to the Moyal product for canonically non commutative spaces. We then give the general procedure for defining scalar field and classical string theories on such spaces, and we argue that the resulting theories have enlarged sets of both spacetime and internal symmetries, which can be used to study gravitational effects due to the q-deformation.
1 Introduction

Non commutative geometry was introduced in string theory in [1], where it was shown that the coordinates of the endpoints of strings on D-branes in the presence of a Neveu-Schwarz field are non commutative. In field theory it is even older: Yang-Mills theory on a non commutative torus was introduced in [2].

The main motivation for introducing non commutative space times in field theory is explained in [3,4]. In quantum mechanics the Heisenberg uncertainty principle states that at small distance scales there is a large uncertainty in momentum measurement, i.e. energy can reach very high values within a small spatial distance (close to the Planck scale). According to the general theory of relativity, however, high energy concentrated in a sufficiently small distance scale creates a black hole, which prevents a position measurement from being fully certain; i.e. there is an uncertainty in position measurement at small scales. This can only be captured by introducing non commutativity in space time. Notice that this implies non locality in the theory.

Since the introduction of non commutativity in field and string theories, a lot of progress has been made in all directions, including classical and quantum field theories, theories of gravity, and string theory. However, the non commutativity used is usually the canonical one, which does not capture the mathematical structure of the given field or string theory and is clearly imposed by hand. In this article we use another type of non commutativity, the q-deformation, to study classical scalar field theory and the consequences for string theory. In section 2 we review the most popular types of non commutativity on space times and motivate the choice of q-deformation. In section 3 we define a product of functions on q-deformed spaces similar to the Moyal product on canonically non commutative spaces, and show why the procedure fails for Lie-type non commutativity. In section 4 we study scalar field theory on q-deformed space time. In section 5 we study string theory on the same space time. In section 6 we discuss the symmetries of the non commutative theories; we show that there are more symmetries than in the corresponding commutative theories, and use this to argue that theories with dynamical spacetimes can be defined by quantizing the spacetime symmetry group of the theory.
2 Types of non commutativity

This section reviews three types of non commutativity of space times.

2.1 Canonical non commutativity

This is the simplest type and the one most used in the physics literature; it was introduced in [5]. It is defined by imposing the following commutation relations on the space time:
$$[x^\mu, x^\nu] = i\,\theta^{\mu\nu},$$
where $x^\mu$ are the space time coordinates and $\theta^{\mu\nu}$ is a constant, antisymmetric matrix.

Canonical non commutativity corresponds to a smearing of the space time, as can be seen by comparing solutions of polynomial equations on the commutative space with those on its non commutative counterpart. As an example, consider the two dimensional Euclidean space with coordinates x and y, with the commutation relation [x, y] = k, where k is a positive constant, and consider the polynomial equation (x − y)^2 = 0. In the usual commutative space the solution is x = y, a straight line of slope 1 passing through the origin. In the corresponding non commutative space, however, the equation can be written as x^2 − 2yx + y^2 = k, whose solutions are two parallel straight lines separated by a distance proportional to k; when k = 0 the two lines coincide and we recover the solution on the commutative space.
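The rewriting used in this example is worth making explicit, since it is the only step where the commutator enters. Using $[x, y] = k$, i.e. $xy = yx + k$:
$$(x - y)^2 = x^2 - xy - yx + y^2 = x^2 - 2yx + y^2 - k,$$
so $(x - y)^2 = 0$ is equivalent to $x^2 - 2yx + y^2 = k$.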
Note that this procedure works the same way regardless of whether there are additional mathematical structures on the space; the smearing is carried out identically. That is why more refined notions of non commutativity are needed in physics.
2.2 Lie-type non commutativity

In this case the coordinates carry a Lie algebra structure, i.e. the commutation relations can capture a Lie algebra structure defined on the space time, as for example in field theories [6]. The commutation relations are given by
$$[x^\mu, x^\nu] = i f^{\mu\nu}{}_{\rho}\, x^\rho,$$
where $f^{\mu\nu}{}_{\rho}$ are the structure constants of the Lie algebra. However, this type is not very useful because Lie structures are rigid: any small deformation of a Lie algebra is isomorphic to the Lie algebra itself. This also leads to difficulties in defining products of functions on the resulting non commutative space, as we will see in the next section.
2.3 q-deformations

This type was introduced to solve the rigidity problem of Lie algebras. The main idea is to replace the Lie group with a more flexible structure called a quantum group; for details on the theory of quantum groups see [7,8]. The commutation relations are given by
$$x^\mu x^\nu = \frac{1}{q}\, R^{\mu\nu}{}_{\sigma\tau}\, x^\sigma x^\tau,$$
where q is a parameter and $R^{\mu\nu}{}_{\sigma\tau}$ is the R-matrix of the quantum group defined on the space.

Here the Lie algebra is replaced by a non commutative Hopf algebra with deformation parameter q; Hopf algebras can be viewed as deformations of the universal enveloping algebra of the Lie group. The resulting space is deformed according to the Lie group acting on it and to the parameter q. This is the simplest way to deform a space time while capturing the full structure of the space; more complicated approaches, such as deformations with more than one parameter [9,10], can also be studied but are beyond the scope of this article.
3 Moyal-like product on q-deformed spaces

Here we define a non commutative product of functions on q-deformed spaces, i.e. non commutative spaces in the R-matrix formalism.

3.1 Moyal product on canonically non commutative spaces

We begin by reviewing the original Moyal product. On canonically non commutative spaces the algebra of functions is replaced by a non commutative C* algebra, and the Moyal product is the product of functions on this non commutative algebra. Its formula can be derived as follows.
Consider two functions f(x) and g(x), with Fourier transforms
$$f(x) = \int \frac{d^D k}{(2\pi)^D}\, \bar f(k)\, e^{i k_i x^i}, \qquad g(x) = \int \frac{d^D k'}{(2\pi)^D}\, \bar g(k')\, e^{i k'_j x^j}.$$
The product on the non commutative space is
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}.$$
Using the Baker-Campbell-Hausdorff formula we get
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}\, e^{\frac{i}{2} k_i k'_j \theta^{ij}}$$
$$= \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j} \Big( 1 + \sum_{n=1}^{\infty} \Big(\frac{i}{2}\Big)^n \frac{1}{n!} \big(k_i k'_j \theta^{ij}\big)^n \Big)$$
$$= f(x)\, g(x) + \sum_{n=1}^{\infty} \Big(\frac{i}{2}\Big)^n \frac{1}{n!}\, \theta^{i_1 j_1} \cdots \theta^{i_n j_n}\, \partial_{i_1} \cdots \partial_{i_n} f\; \partial_{j_1} \cdots \partial_{j_n} g.$$
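For intuition, the following short script (an illustrative sketch, not part of the paper; the test functions and θ are chosen arbitrarily) evaluates the leading Moyal correction term $(i/2)\,\theta^{ij}\,\partial_i f\, \partial_j g$ for two polynomials on a 2D plane:

import sympy as sp

# Leading Moyal correction (i/2) theta^{ij} d_i f d_j g in 2D,
# with the only independent component theta^{12} = -theta^{21} = theta.
x1, x2, theta = sp.symbols('x1 x2 theta', real=True)
xs = [x1, x2]
th = {(0, 0): 0, (0, 1): theta, (1, 0): -theta, (1, 1): 0}

f = x1**2 * x2          # arbitrary test functions
g = x1 * x2**3

first_order = sp.Rational(1, 2) * sp.I * sum(
    th[(i, j)] * sp.diff(f, xs[i]) * sp.diff(g, xs[j])
    for i in range(2) for j in range(2)
)
print(sp.simplify(first_order))   # -> 5*I*theta*x1**2*x2**3/2

At θ = 0 the correction vanishes and the star product reduces to the pointwise product, consistent with the formula above.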
3.2 Product of functions on q-deformed space

We follow the same procedure to define a product on q-deformed spaces. The non commutativity is given by
$$x^\mu x^\nu = \frac{1}{q}\, R^{\mu\nu}{}_{\sigma\tau}\, x^\sigma x^\tau.$$
It can be written as a commutation relation,
$$[x^\mu, x^\nu] = Q^{\mu\nu}{}_{\sigma\tau}\, x^\sigma x^\tau, \qquad Q^{\mu\nu}{}_{\sigma\tau} = \frac{1}{q}\, R^{\mu\nu}{}_{\sigma\tau} - \delta^\mu_\tau\, \delta^\nu_\sigma.$$
Note that at q = 1 we have $Q^{\mu\nu}{}_{\sigma\tau} = 0$ and we recover commutativity.
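The paper leaves the quantum group generic; for concreteness, the following sketch assumes the standard R̂-matrix of $U_q(sl_2)$ in the fundamental representation and checks numerically that it satisfies the braid relation and that Q vanishes as q → 1 (here P denotes the flip $\delta^\mu_\tau \delta^\nu_\sigma$):

import numpy as np

def rhat(q):
    # Standard Rhat of U_q(sl2), basis order |11>, |12>, |21>, |22>.
    # With this Rhat, x^m x^n = (1/q) Rhat^{mn}_{st} x^s x^t gives the
    # quantum plane relation x1 x2 = (1/q) x2 x1.
    return np.array([
        [q, 0, 0,           0],
        [0, 0, 1,           0],
        [0, 1, q - 1.0 / q, 0],
        [0, 0, 0,           q],
    ])

P = rhat(1.0)                 # at q = 1, Rhat reduces to the flip P
q = 1.05
R = rhat(q)
I2 = np.eye(2)

# Braid relation: (R x I)(I x R)(R x I) = (I x R)(R x I)(I x R)
R12, R23 = np.kron(R, I2), np.kron(I2, R)
print(np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23))   # True

Q = R / q - P                 # the deformation operator Q above
print(np.max(np.abs(Q)))      # small, and -> 0 as q -> 1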
Now take again two functions f(x) and g(x). The product of their Fourier transforms is
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j},$$
and using the Baker-Campbell-Hausdorff formula,
$$e^{i k_i x^i}\, e^{i k'_j x^j} = \exp\Big( i k_i x^i + i k'_j x^j + \tfrac{1}{2}\big[i k_i x^i, i k'_j x^j\big] + \tfrac{1}{12}\big[i k_i x^i, [i k_i x^i, i k'_j x^j]\big] - \tfrac{1}{12}\big[i k'_j x^j, [i k_i x^i, i k'_j x^j]\big] + \cdots \Big).$$
After some calculations we have
$$[x^i, [x^i, x^j]] = \big( Q^{ij}{}_{nl}\, Q^{il}{}_{ab} + Q^{ij}{}_{cb}\, Q^{im}{}_{na} \big)\, x^n x^a x^b,$$
$$[x^j, [x^i, x^j]] = \big( Q^{ij}{}_{nl}\, Q^{jl}{}_{ab} + Q^{ij}{}_{cb}\, Q^{jm}{}_{na} \big)\, x^n x^a x^b.$$
Substituting, we get
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, \exp\Big( i x^i k_i + i x^j k'_j + \tfrac{1}{2}\, Q^{ij}{}_{mn}\, x^m x^n k_i k'_j + \tfrac{1}{12}\, k_i k'_j \big( Q^{ij}{}_{nl} Q^{il}{}_{ab} - Q^{ij}{}_{cb} Q^{im}{}_{na} - Q^{ij}{}_{nl} Q^{jl}{}_{ab} + Q^{ij}{}_{cb} Q^{jm}{}_{na} \big)\, x^n x^a x^b + \cdots \Big).$$
In string theory it is reasonable to assume that the parameter q is close to 1, since the operator Q is related to the string length, which is assumed to be very small. In field theory the assumption is reasonable as well, since there Q is related to the area of the space time region where general relativity breaks down, i.e. the quantum gravity scale, which is also assumed to be very small. Thus the series converges and all the exponentials are well defined. If we ignore the higher orders in the last exponent we get a formula similar to the Moyal product:
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}\, e^{\frac{1}{2} Q^{ij}{}_{mn} x^m x^n k_i k'_j}$$
$$= \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j} \Big( 1 + \sum_{n=1}^{\infty} \Big(\frac{1}{2}\Big)^n \frac{1}{n!} \big( Q^{ij}{}_{mn}\, x^m x^n k_i k'_j \big)^n \Big)$$
$$= f(x)\, g(x) + \sum_{p=1}^{\infty} \Big(\frac{1}{2}\Big)^p \frac{1}{p!}\, x^{m_1} x^{l_1} Q^{i_1 j_1}{}_{m_1 l_1} \cdots x^{m_p} x^{l_p} Q^{i_p j_p}{}_{m_p l_p}\, \partial_{i_1} \cdots \partial_{i_p} f(x)\; \partial_{j_1} \cdots \partial_{j_p} g(x).$$
The formula captures the mathematical structures on the space time in the form of the Q operator; the deformation is thereby transferred into these structures, leading to additional symmetries, at least in string theory.
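Continuing the sketch above (same assumed R̂ of $U_q(sl_2)$, so this is illustrative only), the leading correction term $\frac{1}{2}\, x^m x^l\, Q^{ij}{}_{ml}\, \partial_i f\, \partial_j g$ can be evaluated symbolically and is manifestly of order (q − 1):

import sympy as sp

q = sp.symbols('q', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)
xs = [x1, x2]

# Rhat of U_q(sl2) as before; Q = Rhat/q - P, with P = Rhat at q = 1 (the flip).
Rhat = sp.Matrix([[q, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, q - 1/q, 0],
                  [0, 0, 0, q]])
Qm = Rhat / q - Rhat.subs(q, 1)   # Q[(i,j),(m,l)] stored at Qm[2*i+j, 2*m+l]

f = x1**2 * x2                    # arbitrary test functions
g = x1 * x2**2

corr = sp.Rational(1, 2) * sum(
    xs[m] * xs[l] * Qm[2*i + j, 2*m + l] * sp.diff(f, xs[i]) * sp.diff(g, xs[j])
    for i in range(2) for j in range(2) for m in range(2) for l in range(2)
)
print(sp.simplify(corr.subs(q, 1)))   # -> 0: the correction vanishes at q = 1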
From this procedure we can also see why Lie-type non commutativity presents difficulties in defining the product: there, the product would be of the form
$$f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}\, e^{\frac{1}{2} f^{ij}{}_{m}\, x^m k_i k'_j + \frac{1}{12}\, k_i k'_j \left( f^{ij}{}_{nl} f^{il}{}_{ab} - f^{ij}{}_{cb} f^{im}{}_{na} \right) + \cdots}.$$
The series in the exponential generally diverges, and so the product cannot be defined.
4 Scalar field theory on q-deformed space

In this section we study massive scalar field theory on a flat, non dynamical, q-deformed non commutative space time. While there are attempts to define field theories on deformed spaces, the non commutativity used is usually the canonical type, or the use of quantum groups is limited to defining differential structures on specific examples of spaces [11-14]. Conformal field theory has also been studied on deformed spaces [15]; however, the deformations considered there are introduced by hand and do not introduce non commutativity, i.e. the deformed manifold is just another manifold with no additional structure.

The Lagrangian of the theory is
$$\mathcal{L} = \partial_\mu \varphi\, \partial^\mu \varphi - m^2 \varphi^2,$$
where φ is the scalar field, which we assume to be infinitely differentiable, and m is the mass. Now we perform the deformation quantization.
The first step is to perform the q-quantization of the symmetry group, in this case U(1). To do this we write its universal enveloping algebra, which is a C* algebra of functions generated by the map $z \in U(U(1)) \mapsto e^{iz} \in \mathbb{C}$. We notice that this algebra is commutative, so its deformations are all equivalent to the algebra itself; i.e. the symmetry group contributes nothing to the non commutativity, and the product on the non commutative space is equivalent to the product on the original manifold.
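Commutativity here is immediate from the exponential law,
$$e^{i z_1}\, e^{i z_2} = e^{i (z_1 + z_2)} = e^{i z_2}\, e^{i z_1},$$
which is why any deformation of this algebra is isomorphic to the algebra itself.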
The second step is to replace the manifold on which the field theory is defined with a non commutative, locally compact topological space. On this space the derivatives are q-deformed into Jackson derivatives.
The new q-deformed Lagrangian will be
$$\mathcal{L}_q = D_{q\,\mu} \varphi\, D_q^{\mu} \varphi - m^2 \varphi^2.$$
Now we relate the theory on the non commutative topological space to the theory on the commutative manifold (i.e. we transform the non commutative theory back to the commutative manifold) using the formula
$$D_{q\,\mu} f(x) = \partial_\mu f + \sum_{k=1}^{\infty} \frac{(q-1)^k}{(k+1)!}\, x_\mu^{\,k}\, f^{(k+1)}(x),$$
where $f^{(k)}$ denotes the k-th ordinary derivative of f.
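This expansion is just the Taylor series of the one-variable Jackson derivative $D_q f(x) = \frac{f(qx) - f(x)}{(q-1)\,x}$ applied direction by direction; a quick symbolic check with an arbitrary polynomial test function (the series terminates for polynomials):

import sympy as sp

x, q = sp.symbols('x q')
f = x**5 + 3*x**2                       # arbitrary test function

jackson = (f.subs(x, q*x) - f) / ((q - 1) * x)
series = sp.diff(f, x) + sum(
    (q - 1)**k / sp.factorial(k + 1) * x**k * sp.diff(f, x, k + 1)
    for k in range(1, 6)                # terminates for a degree-5 polynomial
)
print(sp.simplify(jackson - series))    # -> 0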
The resulting Lagrangian on the commutative manifold is
$$\mathcal{L}_q = \partial_\mu \varphi\, \partial^\mu \varphi - m^2 \varphi^2 + 2\, \partial^\mu \varphi \sum_{k=1}^{\infty} \frac{(q-1)^k}{(k+1)!}\, x_\mu^{\,k}\, \varphi^{(k+1)} + \sum_{l,m=1}^{\infty} \frac{(q-1)^{l+m}}{(m+1)!\,(l+1)!}\, \varphi^{(l+1)}\, x_\mu^{\,l}\, x^{\mu\,m}\, \varphi^{(m+1)}.$$
The first two terms are the original commutative theory, and the rest are the contributions of the non commutativity that arise when the non commutative topological space is mapped back to the original commutative manifold. The theory is q-deformed, i.e. if q = 1 we recover the original theory. The additional terms are non local, as expected, and contain an infinite series of higher (ordinary) derivatives of the field φ.
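To first order in (q − 1) only the k = 1 cross term survives (the double sum is $\mathcal{O}((q-1)^2)$); reading $x_\mu^{\,k}\, \varphi^{(k+1)}$ as repeated derivatives along the same direction $\mu$, as the notation suggests, this gives
$$\mathcal{L}_q = \partial_\mu \varphi\, \partial^\mu \varphi - m^2 \varphi^2 + (q-1) \sum_{\mu} \partial^\mu \varphi\; x_\mu\, \partial_\mu^2 \varphi + \mathcal{O}\big((q-1)^2\big).$$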
5 String theory on q-deformed space

String theory follows the same q-quantization procedure as field theory, but with richer geometry, since the fundamental object is one dimensional. Here we establish the connection between the Q operator defined above and the length of the string, and then give the general procedure for defining a string theory on a q-deformed space.
The uncertainty in position on a q-deformed space can be calculated to be
$$\Delta x^i\, \Delta x^j \geq \frac{1}{2}\, \langle x^\mu Q^{ij}{}_{\mu\nu}\, x^\nu \rangle,$$
where $\langle x^\mu Q^{ij}{}_{\mu\nu}\, x^\nu \rangle$ is the expectation value of the quadratic form of the operator Q.
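This is the Robertson uncertainty bound applied to the coordinate commutator; a one-line derivation (up to the modulus, which the text leaves implicit):
$$\Delta x^i\, \Delta x^j \;\geq\; \frac{1}{2}\, \big| \langle [x^i, x^j] \rangle \big| \;=\; \frac{1}{2}\, \big| \langle Q^{ij}{}_{\mu\nu}\, x^\mu x^\nu \rangle \big|.$$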
Following the same argument as [2], we find that the squared length of the string is proportional to this expectation value,
$$\langle x^\mu Q^{ij}{}_{\mu\nu}\, x^\nu \rangle \propto l_s^2.$$
This implies that the string's length depends on the geometry of the non commutative space, i.e. on the string theory in question, and is determined by the R-matrix of the quantized group.

The procedure on a static spacetime is as follows:
1. Determine the symmetry group of the theory and find the corresponding quantum group.
2. Use the product presented in section 3 instead of the usual product, and Jackson's derivative instead of the usual derivative.
3. Use the corresponding formulae to relate back to the original manifold, as we did in section 4; this usually leads to an infinite series of higher derivatives in the Lagrangian.
6 Symmetries and theories on dynamical spacetimes

The first step of q-quantization is to replace the symmetry group with a quantum group, which is a deformation of its universal enveloping algebra. By construction this gives more symmetries than the commutative theory. In field and string theory, symmetries are classified into spacetime symmetries and internal symmetries: spacetime symmetries relate directly to the ambient manifold on which the field/string theory is defined, while internal symmetries are additional structure on the manifold. On a static spacetime (disregarding gravity) only the internal symmetry group needs to be q-deformed, but if dynamical spacetimes are to be studied, the spacetime symmetry group must also contribute to the R-matrix. The deformations of the spacetime symmetry should then lead to effects on the gravitational aspects of the theory, such as changes in curvature, singularities, etc. Similar studies of the effects of non commutativity on gravity are found in [] but use the canonical non commutativity; using q-deformations to study gravity is a subject of future research.
7 Conclusion and outlook

The results presented in this paper show that a product of functions on a q-deformed space exists and is well defined, at least for small deformations; we give an explicit formula above. We also showed that field and string theories can be defined on q-deformed manifolds, with enlarged sets of symmetries and extra features depending on the theory and the manifold in question.

A possible direction for future research is to study the enlarged set of symmetries due to q-deformations, as well as their mathematical and phenomenological implications. Another direction is to study more complicated field/string theories and to find ways to define higher spin fields on such spaces.

Acknowledgments

We would like to thank Dr. Ivan Kolar for useful discussions on the topic.
References

[1] Seiberg, N. and Witten, E. (1999) "String theory and noncommutative geometry," Journal of High Energy Physics, 1999(09), pp. 032–032.
[2] Szabo, R. (2003) "Quantum field theory on noncommutative spaces," Physics Reports, 378(4), pp. 207–299.
[3] Doplicher, S., Fredenhagen, K. and Roberts, J.E. (1995) "The quantum structure of spacetime at the Planck scale and quantum fields," Communications in Mathematical Physics, 172(1), pp. 187–220.
[4] Ahluwalia, D.V. (1994) "Quantum measurement, gravitation, and locality," Physics Letters B, 339(4), pp. 301–303.
[5] Chu, C.S. and Ho, P.M. (1999) "Noncommutative open string and D-brane," Nuclear Physics B, 550, p. 151 [hep-th/9812219].
[6] Jurco, B., Schraml, S., Schupp, P. and Wess, J. (2000) "Enveloping algebra valued gauge transformations for non-Abelian gauge groups on non-commutative spaces," European Physical Journal C, 17, p. 521 [hep-th/0006246].
[7] Chaichian, M. and Demichev, A.P. (1996) Introduction to Quantum Groups. Singapore: World Scientific.
[8] Klimyk, A. and Schmüdgen, K. (1997) Quantum Groups and Their Representations. Springer.
[9] Hu, N.H. and Pei, Y.F. (2008) "Notes on 2-parameter quantum groups, (I)," Science in China Series A: Mathematics, 51(6), pp. 1101–1110.
[10] Hu, N. and Pei, Y. (2012) "Notes on two-parameter quantum groups, (II)," Communications in Algebra, 40(9), pp. 3202–3220.
[11] Wulkenhaar, R. (2006) "Field theories on deformed spaces," Journal of Geometry and Physics, 56(1), pp. 108–141.
[12] Grosse, H., Madore, J. and Steinacker, H. (2001) "Field theory on the q-deformed fuzzy sphere I," Journal of Geometry and Physics, 38(3-4), pp. 308–342.
[13] Grosse, H., Madore, J. and Steinacker, H. (2002) "Field theory on the q-deformed fuzzy sphere II: Quantization," Journal of Geometry and Physics, 43(2-3), pp. 205–240.
[14] Bardek, V., Dorešić, M. and Meljanac, S. (1994) "An example of q-deformed field theory," International Journal of Modern Physics A, 09(23), pp. 4185–4194.
[15] Minahan, J., Naseer, U. and Thull, C. (2021) "Conformal field theories on deformed spheres, anomalies, and supersymmetry," SciPost Physics, 10(3).
CdE1T4oBgHgl3EQfWAQw/content/tmp_files/load_file.txt
ADDED
|
@@ -0,0 +1,214 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf,len=213
|
| 2 |
+
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 3 |
+
page_content='03108v1 [hep-th] 8 Jan 2023 Fields and strings on non commutative q-deformed spaces Poula Tadros Department of Applied Physics, Aalto University School of Science, FI-00076 Aalto, Finland.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 4 |
+
page_content=' email:poulatadros9@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 5 |
+
page_content='com Abstract We study scalar field and string theory on non commutative q-deformed spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 6 |
+
page_content=' We define a product of functions on a non commutative algebra of functions resulting from the q-deformation analog to the Moyal prod- uct for canonically non commutative spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 7 |
+
page_content=' We then give the general procedure to define scalar field and classical string theories on the men- tioned spaces, we argue that the resulting theories will have enlarged sets of both spacetime and internal symmetries which can be used to study gravitational effects due to the q-deformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 8 |
+
page_content=' 1 Introduction Non commutative geometry was introduced in string theory in [1] where it was shown that the coordinates of the endpoints of strings on D-branes in presence of Neveu-Schwartz field is non commutative.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 9 |
+
page_content=' In field theory it was even older where Yang-Mills theory on non commutative torus was introduced [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 10 |
+
page_content=' The main motivation to introduce non commutative space times is field theory is explained in [3,4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 11 |
+
page_content=' In quantum mechanics Heisenberg uncertainty principle states that at small distance scales there is a large uncertainty in momentum measurement i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 12 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 13 |
+
page_content=' energy can reach very high values in small space distance (close to the Planck scale), but according to the general theory of relativity, high energy in sufficiently small distance scale creates a black hole preventing measurement of position to be fully certain i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 14 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 15 |
+
page_content=' there is uncertainty in position measurement in small scales, this can only be achieved by introducing non commutativity in space time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 16 |
+
page_content=' Notice that this implies non locality in the theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 17 |
+
page_content=' Since the introduction of non commutativity in field and string theories a lot of progress has been made in all directions including classical and quantum field theories, theories of gravity and string theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 18 |
+
page_content=' However, the non commutativity used is the canonical non commutativity which does not capture the mathematical structure of the given field or string theory and it is clear that it was imposed by hand.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 19 |
+
page_content=' In this article we use another type of non commutativity, the q-deformation, to study classical scalar field theory and the consequences on string theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 20 |
+
page_content=' In section 2, we review the most popular types of non commutativity on space times and motivate the choice of q-deformation as the non commutativity of choice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 21 |
+
page_content=' In section 3, we define a product of functions on q-deformed spaces sim- ilar to the Moyal product on canonically non commutative spaces and 1 show the procedure failed for Lie-type non commutativity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 22 |
+
page_content=' In section 4, we study scalar field theory on q-deformed space time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 23 |
+
page_content=' In section 5, we study string theory on the same space time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 24 |
+
page_content=' In section 6, we discuss the symmetries of the non commutative theories, we show that there are more symmetries in the non commutative theories than the corresponding commutative ones, then use this to argue that we can define theories with dynamical spacetimes by quantizing the spacetime symmetry group of the theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 25 |
+
page_content=' 2 Types of non commutativity This section is dedicated to review three types of non commutativity of space times 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 26 |
+
page_content='1 Canonical non commutativity It is the simplest type and is the one used in physics literature, it was introduced in [5], it is defined by imposing the following commutation relations on the space time [xµ, xν] = iθµν, where xµ are the space time coordinates and θµν is a constant, anti sym- metric matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 27 |
+
page_content=' Canonical non commutativity corresponds to smearing of the space time, it can be easily seen from solutions of polynomial equations on the commutative space compared to its non commutative counterpart.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 28 |
+
page_content=' As an example consider the two dimensional Euclidean space with co- ordinates x and y with the commutation relation [x, y] = k, where k is a positive constant, and consider the polynomial equation (x − y)2 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 29 |
+
page_content=' In the usual commutative space, the solution to the above equation is x = y which is a straight line with slope = 1 and passing through the origin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 30 |
+
page_content=' However, in the corresponding non commutative space the equa- tion can be written as x2 − 2yx + y2 = k whose solutions are two parallel straight lines separated by a distance proportional to k, when k = 0 the two straight lines coincide and we recover the solution on the commutative space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 31 |
+
page_content=' Note that this procedure is valid regardless of whether or not there are additional mathematical structures on the space, the smearing is carried out the same way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 32 |
+
page_content=' That is why we need more complicated non commuta- tivity to use in physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 33 |
+
page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 34 |
+
page_content='2 Lie-type non commutativity In this case the coordinates has a Lie algebra structure i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 35 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 36 |
+
page_content=' the commuta- tion relations can capture a Lie algebra structures if defined on the space 2 time for example like in field theories [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 37 |
+
page_content=' The commutation relations are given by [xµ, xν] = if µν ρ xρ, where f µν ρ are the structure constants of the defined Lie algebra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 38 |
+
page_content=' How- ever, this type is not useful because Lie structures are rigid i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 39 |
+
page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 40 |
+
page_content=' any small deformation of a Lie algebra is isomorphic to the Lie algebra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 41 |
+
page_content=' This leads to difficulties in defining products of functions on the resulting non com- mutative space as we will see in the next section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 42 |
+
page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 43 |
+
page_content='3 q-deformations This type was introduced to solve the rigidity problem for Lie algebras.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 44 |
+
page_content=' The main idea is to replace Lie group with a flexible structure which is called quantum groups, for more details on the theory of quantum groups see [7,8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 45 |
+
page_content=' The commutation relations are given by xµxν = 1 q Rµν στxσxτ, where q is a parameter and Rµν στ is the R-matrix of the quantum group defined on the space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 46 |
+
page_content=' In this space a Lie algebra is replaced by a non commutative Hopf alge- bra with deformation parameter q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 47 |
+
page_content=' Hopf algebras are considered deforma- tions of the universal enveloping algebra of the Lie group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 48 |
+
page_content=' The resulting space is deformed according to the Lie group on the space and on the pa- rameter q, this is the simplest way to deform a space time while capturing the full structure of the space, other more complicated approaches can be studied such as deforming with more than one parameter [9,10] but it is beyond the scope of the article.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CdE1T4oBgHgl3EQfWAQw/content/2301.03108v1.pdf'}
|
| 49 |
+
3 Moyal-like product on q-deformed spaces

Here we define a noncommutative product of functions on q-deformed spaces, i.e. noncommutative spaces in the R-matrix formalism.

3.1 Moyal product on canonically noncommutative spaces

We begin by reviewing the original Moyal product. On canonically noncommutative spaces, the algebra of functions is replaced by a noncommutative C*-algebra, and the Moyal product is the product of functions on that noncommutative algebra. Its formula can be derived as follows. Consider two functions f(x) and g(x), with Fourier representations
\[ f(x) = \int \frac{d^D k}{(2\pi)^D}\, \bar f(k)\, e^{i k_i x^i}, \qquad g(x) = \int \frac{d^D k'}{(2\pi)^D}\, \bar g(k')\, e^{i k'_j x^j}. \]
The product on the noncommutative space is
\[ f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}. \]
Using the Baker-Campbell-Hausdorff formula, we get
\[
f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i} e^{i k'_j x^j} e^{\frac{i}{2} k_i k'_j \theta^{ij}}
= \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i} e^{i k'_j x^j} \Big( 1 + \sum_{n=1}^{\infty} \Big(\frac{i}{2}\Big)^n \frac{1}{n!} \big(k_i k'_j \theta^{ij}\big)^n \Big)
= f(x) g(x) + \sum_{n=1}^{\infty} \Big(\frac{i}{2}\Big)^n \frac{1}{n!}\, \theta^{i_1 j_1} \cdots \theta^{i_n j_n}\, \partial_{i_1} \cdots \partial_{i_n} f\; \partial_{j_1} \cdots \partial_{j_n} g .
\]
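As a quick sanity check of this expansion (our illustration; the paper contains no code), the following Python sketch implements the star product truncated at low order in θ with sympy, and verifies the defining relation x ⋆ y − y ⋆ x = iθ on the two-dimensional plane with θ^{12} = θ.

```python
# Minimal sketch of the truncated Moyal product on the (x, y) plane,
# assuming [x^i, x^j] = i * theta^{ij} with constant theta^{12} = theta.
import itertools
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)
coords = (x, y)
Theta = sp.Matrix([[0, theta], [-theta, 0]])  # antisymmetric theta^{ij}

def star(f, g, order=2):
    """Moyal product f * g, truncated at the given order in theta."""
    total = f * g
    for n in range(1, order + 1):
        term = sp.Integer(0)
        for ii in itertools.product(range(2), repeat=n):
            for jj in itertools.product(range(2), repeat=n):
                coeff = sp.prod([Theta[ii[a], jj[a]] for a in range(n)])
                if coeff == 0:
                    continue
                df = sp.diff(f, *[coords[a] for a in ii])
                dg = sp.diff(g, *[coords[b] for b in jj])
                term += coeff * df * dg
        total += (sp.I / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)

# The star commutator of the coordinates reproduces the defining relation:
print(sp.simplify(star(x, y) - star(y, x)))  # -> I*theta
```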
3.2 Product of functions on q-deformed space

We follow the same procedure to define a product on q-deformed spaces. The noncommutativity is given by
\[ x^\mu x^\nu = \frac{1}{q}\, R^{\mu\nu}{}_{\sigma\tau}\, x^\sigma x^\tau . \]
It can be written as a commutation relation,
\[ [x^\mu, x^\nu] = Q^{\mu\nu}{}_{\sigma\tau}\, x^\sigma x^\tau, \qquad \text{where} \quad Q^{\mu\nu}{}_{\sigma\tau} = \frac{1}{q}\, R^{\mu\nu}{}_{\sigma\tau} - \delta^\mu_\tau\, \delta^\nu_\sigma . \]
Note that at q = 1 we have Q^{\mu\nu}{}_{\sigma\tau} = 0, and we recover commutativity. Consider again two functions f(x) and g(x). The product of their Fourier representations is
\[ f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i}\, e^{i k'_j x^j}, \]
and using the Baker-Campbell-Hausdorff formula,
\[ e^{x^i} e^{x^j} = \exp\big( x^i + x^j + \tfrac{1}{2}[x^i, x^j] + \tfrac{1}{12}[x^i,[x^i,x^j]] - \tfrac{1}{12}[x^j,[x^i,x^j]] + \cdots \big). \]
After some calculation, the nested commutators read
\[ [x^i,[x^i,x^j]] = \big( Q^{ij}{}_{nl} Q^{il}{}_{ab} + Q^{ij}{}_{cb} Q^{im}{}_{na} \big) x^n x^a x^b, \qquad [x^j,[x^i,x^j]] = \big( Q^{ij}{}_{nl} Q^{jl}{}_{ab} + Q^{ij}{}_{cb} Q^{jm}{}_{na} \big) x^n x^a x^b . \]
Substituting, we get
\[ f \star g = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, \exp\Big( x^i k_i + x^j k'_j + \tfrac{1}{2} Q^{ij}{}_{mn} x^m x^n k_i k'_j + \tfrac{1}{12} k_i k'_j \big( Q^{ij}{}_{nl} Q^{il}{}_{ab} - Q^{ij}{}_{cb} Q^{im}{}_{na} - Q^{ij}{}_{nl} Q^{jl}{}_{ab} + Q^{ij}{}_{cb} Q^{jm}{}_{na} \big) x^n x^a x^b + \cdots \Big). \]
In string theory it is reasonable to assume that the parameter q is close to 1, since the operator Q is related to the string length, which is assumed to be very small. In field theory the assumption is reasonable as well, since in that case Q is related to the area of spacetime at which general relativity breaks down, i.e. the quantum-gravity scale, which is also assumed to be very small. Thus the series converges and all the exponentials are well defined. If we ignore the higher orders in the last exponent, we get a formula similar to the Moyal product:
\[ f(x) \star g(x) = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i} e^{i k'_j x^j}\, e^{\frac{1}{2} Q^{ij}{}_{mn} x^m x^n k_i k'_j}
= f(x) g(x) + \sum_{p=1}^{\infty} \frac{1}{2^p\, p!}\, x^{m_1} x^{l_1} Q^{i_1 j_1}{}_{m_1 l_1} \cdots x^{m_p} x^{l_p} Q^{i_p j_p}{}_{m_p l_p}\, \partial_{i_1} \cdots \partial_{i_p} f(x)\; \partial_{j_1} \cdots \partial_{j_p} g(x) . \]
The formula captures the mathematical structure of the spacetime in the form of the Q operator, and the deformation is thereby transferred into this structure, leading to additional symmetries, at least in string theory. From this procedure we can also see why Lie-type noncommutativity presents difficulties in defining the product: there, the product would be of the form
\[ f \star g = \int \frac{d^D k\, d^D k'}{(2\pi)^{2D}}\, \bar f(k)\, \bar g(k')\, e^{i k_i x^i} e^{i k'_j x^j}\, e^{\frac{1}{2} f^{ij}{}_{m} x^m k_i k'_j + \frac{1}{12} k_i k'_j ( f^{ij}{}_{nl} f^{il}{}_{ab} - f^{ij}{}_{cb} f^{im}{}_{na} ) + \cdots }, \]
where f^{ij}{}_{m} are the structure constants. The series in the exponential generally diverges, and the product cannot be defined.
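To make the q → 1 limit concrete, here is a small sympy sketch (our illustration, not taken from the paper) using one common convention for the R̂-matrix of the GL_q(2) quantum plane. It checks that Q^{µν}_{στ} = (1/q) R^{µν}_{στ} − δ^µ_τ δ^ν_σ vanishes componentwise at q = 1, so commutativity is restored.

```python
# Sketch: the Q operator of the GL_q(2) quantum plane vanishes at q = 1.
# The specific R-hat matrix convention below is an assumption.
import sympy as sp

q = sp.symbols('q', positive=True)

# R-hat in the tensor basis (11, 12, 21, 22):
R = sp.Matrix([
    [q, 0,       0, 0],
    [0, q - 1/q, 1, 0],
    [0, 1,       0, 0],
    [0, 0,       0, q],
])

def Q(mu, nu, sigma, tau):
    """Q^{mu nu}_{sigma tau} = (1/q) R^{mu nu}_{sigma tau} - delta^mu_tau delta^nu_sigma."""
    row, col = 2 * mu + nu, 2 * sigma + tau   # flatten index pairs
    delta = int(mu == tau) * int(nu == sigma)
    return sp.simplify(R[row, col] / q - delta)

components = [Q(m, n, s, t).subs(q, 1)
              for m in (0, 1) for n in (0, 1)
              for s in (0, 1) for t in (0, 1)]
print(all(c == 0 for c in components))  # True: q = 1 is the commutative limit
```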
4 Scalar field theory on q-deformed space

In this section we study massive scalar field theory on a flat, non-dynamical, q-deformed noncommutative spacetime. While there are attempts to define field theories on deformed spaces, the noncommutativity used is usually of the canonical type, or the use of quantum groups is limited to defining a differential structure on specific examples of spaces [11-14]. Conformal field theory has also been studied on deformed spaces [15]; however, the deformations considered there are introduced by hand and do not introduce noncommutativity, i.e. the deformed manifold is just another manifold with no additional structure.

The Lagrangian of the theory is
\[ L = \partial_\mu \phi\, \partial^\mu \phi - m^2 \phi^2 , \]
where φ is the scalar field, which we assume to be infinitely differentiable, and m is the mass. Now we perform the deformation quantization. The first step is the q-quantization of the symmetry group, in this case U(1). To do this, we write its universal enveloping algebra, which is a C*-algebra of functions generated by the map z ∈ U(U(1)) → e^{iz} ∈ C. We notice that this algebra is commutative, so its deformations are all equivalent to the algebra itself; i.e. there is no contribution from the symmetry group to the noncommutativity, and the product on the noncommutative space is equivalent to the product on the original manifold.

The second step is to replace the manifold on which the field theory is defined by a noncommutative, locally compact topological space. On this space the derivatives are q-deformed into Jackson derivatives, and the new q-deformed Lagrangian is
\[ L_q = D_{q\,\mu} \phi\, D^{\mu}_{q} \phi - m^2 \phi^2 . \]
Now we relate the theory on the noncommutative topological space to the theory on the commutative manifold (i.e. we transform the noncommutative theory back to the commutative manifold) using the formula
\[ D_{q\,\mu} f(x) = \partial_\mu f + \sum_{k=1}^{\infty} \frac{(q-1)^k}{(k+1)!}\, x_\mu^k\, f^{(k+1)}(x), \]
where f^{(k)} is the k-th ordinary derivative of f. The resulting Lagrangian on the commutative manifold is
\[ L_q = \partial_\mu \phi\, \partial^\mu \phi - m^2 \phi^2 + 2\, \partial^\mu \phi \sum_{k=1}^{\infty} \frac{(q-1)^k}{(k+1)!}\, x_\mu^k\, \phi^{(k+1)} + \sum_{l,m=1}^{\infty} \frac{(q-1)^{l+m}}{(m+1)!\,(l+1)!}\, \phi^{(l+1)}\, x_\mu^{l}\, x^{\mu\, m}\, \phi^{(m+1)} . \]
The first two terms of the Lagrangian are the original commutative theory, and the remaining ones are the contributions of the noncommutativity arising from replacing the noncommutative topological space by the original commutative manifold. The theory is q-deformed, i.e. if q = 1 we recover the original theory. The additional terms are non-local, as expected, and contain an infinite series of higher (ordinary) derivatives of the field φ.
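Numerically, the series above is just the Taylor expansion of the q-difference quotient. The following Python snippet (our check, not from the paper) compares the Jackson derivative D_q f(x) = (f(qx) − f(x))/((q − 1)x) with the truncated series for f = sin.

```python
# Check: Jackson derivative vs. its series expansion in (q - 1).
import math

def jackson(f, x, q):
    """q-difference quotient (f(qx) - f(x)) / ((q - 1) * x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def jackson_series(derivs, x, q, terms=12):
    """partial f + sum_k (q-1)^k / (k+1)! * x^k * f^(k+1)(x)."""
    total = derivs[1](x)
    for k in range(1, terms):
        total += (q - 1) ** k / math.factorial(k + 1) * x ** k * derivs[k + 1](x)
    return total

# f = sin, whose ordinary derivatives cycle with period 4.
cycle = [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)]
derivs = {k: cycle[k % 4] for k in range(14)}

x, q = 0.7, 1.05
print(jackson(math.sin, x, q))        # direct definition
print(jackson_series(derivs, x, q))   # series: agrees to machine precision
```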
5 String theory on q-deformed space

String theory follows the same q-quantization procedure as field theory, but with richer geometry, since the fundamental object is one-dimensional. Here we establish the connection between the Q operator defined above and the length of the string, and then give the general procedure for defining a string theory on a q-deformed space.

The uncertainty in position on q-deformed spaces can be calculated to be
\[ \Delta x^i\, \Delta x^j \ge \tfrac{1}{2} \langle x^\mu Q^{ij}{}_{\mu\nu} x^\nu \rangle , \]
where ⟨x^µ Q^{ij}{}_{µν} x^ν⟩ is the expectation value of the quadratic form of the operator Q. Following the same argument as [2], we find that the square of the string length is proportional to this expectation value,
\[ \langle x^\mu Q^{ij}{}_{\mu\nu} x^\nu \rangle \propto l_s^2 . \]
This implies that the string length depends on the geometry of the noncommutative space, i.e. on the string theory in question, and is determined by the R-matrix of the quantized group.

The procedure on a static spacetime is as follows:
1. Determine the symmetry group of the theory and find the corresponding quantum group.
2. Use the product presented in Section 3 instead of the usual product, and Jackson's derivative instead of the usual derivative.
3. Use the corresponding formulae to relate back to the original manifold, as we did in Section 4; this usually leads to an infinite series of higher derivatives in the Lagrangian.
6 Symmetries and theories on dynamical spacetimes

The first step of q-quantization is to replace the symmetry group with a quantum group, which is a deformation of its universal enveloping algebra. By definition, this gives more symmetries than the commutative theory. In field and string theory, symmetries are classified into spacetime symmetries and internal symmetries: spacetime symmetries relate directly to the ambient manifold on which the field/string theory is defined, while internal symmetries are additional structure on the manifold. While on a static spacetime (disregarding gravity) only the internal symmetry group is to be q-deformed, the spacetime symmetry group must contribute to the R-matrix if dynamical spacetimes are to be studied; the deformation of the spacetime symmetries should lead to effects on the gravitational aspects of the theory, such as changes in curvature, singularities, etc. Similar studies of the effects of noncommutativity on gravity are found in [], but they use the canonical noncommutativity; using q-deformations to study gravity is a subject of future research.

7 Conclusion and outlook

The results presented in this paper show that a product of functions on a q-deformed space exists and is well defined, at least for small deformations; we gave an explicit formula above. We also showed that field and string theories can be defined on q-deformed manifolds, with an enlarged set of symmetries and extra features depending on the theory and the manifold in question. A possible direction of future research is to study the enlarged set of symmetries due to q-deformations, as well as their mathematical and phenomenological implications. Another direction is to study more complicated field/string theories and find ways to define higher-spin fields on such spaces.

Acknowledgments

We would like to thank Dr. Ivan Kolar for the useful discussions on the topic.
References

[1] Seiberg, N. and Witten, E. (1999) "String theory and noncommutative geometry," Journal of High Energy Physics, 1999(09), pp. 032-032.
[2] Szabo, R. (2003) "Quantum field theory on noncommutative spaces," Physics Reports, 378(4), pp. 207-299.
[3] Doplicher, S., Fredenhagen, K. and Roberts, J.E. (1995) "The quantum structure of spacetime at the Planck scale and quantum fields," Communications in Mathematical Physics, 172(1), pp. 187-220.
[4] Ahluwalia, D.V. (1994) "Quantum measurement, gravitation, and locality," Physics Letters B, 339(4), pp. 301-303.
[5] Chu, C.S. and Ho, P.M. (1999) "Noncommutative open string and D-brane," Nuclear Physics B, 550, p. 151 [hep-th/9812219].
[6] Jurco, B., Schraml, S., Schupp, P. and Wess, J. (2000) "Enveloping algebra valued gauge transformations for non-Abelian gauge groups on noncommutative spaces," European Physical Journal C, 17, p. 521 [hep-th/0006246].
[7] Chaichian, M. and Demichev, A.P. (1996) Introduction to Quantum Groups. Singapore: World Scientific.
[8] Klimyk, A. and Schmudgen, K. (1997) Quantum Groups and Their Representations. Springer.
[9] Hu, N.H. and Pei, Y.F. (2008) "Notes on 2-parameter quantum groups I," Science in China Series A: Mathematics, 51(6), pp. 1101-1110.
[10] Hu, N. and Pei, Y. (2012) "Notes on two-parameter quantum groups, (II)," Communications in Algebra, 40(9), pp. 3202-3220.
[11] Wulkenhaar, R. (2006) "Field theories on deformed spaces," Journal of Geometry and Physics, 56(1), pp. 108-141.
[12] Grosse, H., Madore, J. and Steinacker, H. (2001) "Field theory on the q-deformed fuzzy sphere I," Journal of Geometry and Physics, 38(3-4), pp. 308-342.
[13] Grosse, H., Madore, J. and Steinacker, H. (2002) "Field theory on the q-deformed fuzzy sphere II: Quantization," Journal of Geometry and Physics, 43(2-3), pp. 205-240.
[14] Bardek, V., Dorešić, M. and Meljanac, S. (1994) "An example of q-deformed field theory," International Journal of Modern Physics A, 09(23), pp. 4185-4194.
[15] Minahan, J., Naseer, U. and Thull, C. (2021) "Conformal field theories on deformed spheres, anomalies, and supersymmetry," SciPost Physics, 10(3).
CtAzT4oBgHgl3EQfwf4q/content/2301.01722v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:341cdc3f07e2612e2ebaad4beb8a5d263273b0bf35559101ee465d3f870330ae
+size 2082119

CtAzT4oBgHgl3EQfwf4q/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcebe0c2304c962070655ed02182c6dc3c801383cee428717f3cb245db863581
+size 857708

CtE4T4oBgHgl3EQf5w5z/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7101816dd329ed779542a57fdbf1918a453199c77f836a9f439861a3c974b40
+size 88937

DNE4T4oBgHgl3EQfew0W/content/tmp_files/2301.05101v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff

DNE4T4oBgHgl3EQfew0W/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

E9FLT4oBgHgl3EQfFi_K/content/tmp_files/2301.11988v1.pdf.txt ADDED
@@ -0,0 +1,970 @@
arXiv:2301.11988v1 [cs.DC] 27 Jan 2023

Energy-Efficient Distributed Algorithms for Synchronous Networks⋆

Pierre Fraigniaud1⋆⋆, Pedro Montealegre2, Ivan Rapaport3⋆⋆⋆, and Ioan Todinca4

1 Institut de Recherche en Informatique Fondamentale (IRIF), CNRS and Université Paris Cité, Paris, France. [email protected]
2 Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile
3 Departamento de Ingeniería Matemática - Centro de Modelamiento Matemático (UMI 2807 CNRS), Universidad de Chile, Santiago, Chile [email protected]
4 Laboratoire d'informatique fondamentale d'Orléans (LIFO), Université d'Orléans, Orléans, France [email protected]

Abstract. We study the design of energy-efficient algorithms for the LOCAL and CONGEST models. Specifically, as a measure of complexity, we consider the maximum, taken over all the edges, or over all the nodes, of the number of rounds at which an edge, or a node, is active in the algorithm. We first show that every Turing-computable problem has a CONGEST algorithm with constant node-activation complexity, and therefore constant edge-activation complexity as well. That is, every node (resp., edge) is active in sending (resp., transmitting) messages for only O(1) rounds during the whole execution of the algorithm. In other words, every Turing-computable problem can be solved by an algorithm consuming the least possible energy. In the LOCAL model, the same holds obviously, but with the additional feature that the algorithm runs in O(poly(n)) rounds in n-node networks. However, we show that insisting on algorithms running in O(poly(n)) rounds in the CONGEST model comes with a severe cost in terms of energy. Namely, there are problems requiring Ω(poly(n)) edge-activations (and thus Ω(poly(n)) node-activations as well) in the CONGEST model whenever solved by algorithms bounded to run in O(poly(n)) rounds. Finally, we demonstrate the existence of a sharp separation between the edge-activation complexity and the node-activation complexity in the CONGEST model, for algorithms bounded to run in O(poly(n)) rounds. Specifically, under this constraint, there is a problem with O(1) edge-activation complexity but ˜Ω(n^{1/4}) node-activation complexity.

Keywords: Synchronous distributed algorithms · LOCAL and CONGEST models · Energy efficiency.

⋆ This work was performed during the visit of the first and last authors to Universidad de Chile, and to Universidad Adolfo Ibañez, Chile.
⋆⋆ Additional support from ANR project DUCAT (ref. ANR-20-CE48-0006).
⋆⋆⋆ Additional support from ANID via PIA/Apoyo a Centros Cientificos y Tecnológicos de Excelencia AFB 170001 and Fondecyt 1220142.
1 Introduction

1.1 Objective

Designing computing environments consuming a limited amount of energy while achieving computationally complex tasks is an objective of utmost importance, especially in distributed systems involving a large number of computing entities. In this paper, we aim at designing energy-efficient algorithms for the standard LOCAL and CONGEST models of distributed computing in networks [11]. Both models assume a network modeled as an n-node graph G = (V, E), where each node is provided with an identifier, i.e., an integer that is unique in the network, which can be stored on O(log n) bits. All nodes are assumed to run the same algorithm, and computation proceeds as a series of synchronous rounds (all nodes start simultaneously at round 1). During a round, every node sends a message to each of its neighbors, receives the messages sent by its neighbors, and performs some individual computation. The two models LOCAL and CONGEST differ only in the amount of information that can be exchanged between nodes at each round.

The LOCAL model does not bound the size of the messages, whereas the CONGEST model allows only messages of size O(log n) bits. Initially, every node v ∈ V knows solely its identifier id(v), an upper bound of the number n of nodes, which is assumed to be polynomial in n and to be the same for all nodes, plus possibly some input bit-string x(v) depending on the task to be solved by the nodes. In this paper, we denote by N the maximum between the largest identifier and the upper bound on n given to all nodes. Hence N = O(poly(n)), and is supposed to be known by all nodes. After a certain number of rounds, every node outputs a bit-string y(v), where the correctness of the collection of outputs y = {y(v) : v ∈ V} is defined with respect to the specification of the task to be solved, and may depend on the collection of inputs x = {x(v) : v ∈ V} given to the nodes, as well as on the graph G (but not on the identifiers assigned to the nodes, nor on the upper bound N).
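For reference, the synchronous round structure just described can be phrased as a simple simulator (our sketch, not part of the paper): in each round, every node reads the messages delivered at the end of the previous round, computes, and emits at most one message per neighbor.

```python
def run_synchronous(graph, init_state, step, rounds):
    """Simulate a synchronous message-passing network.

    graph: dict mapping each node to its list of neighbors.
    step(v, state, inbox) -> (new_state, outbox), where inbox and outbox
    are dicts {neighbor: message}. Messages sent in round r are read in
    round r + 1, as in the LOCAL/CONGEST models."""
    state = dict(init_state)
    inbox = {v: {} for v in graph}
    for _ in range(rounds):
        outboxes = {}
        for v in graph:
            state[v], outboxes[v] = step(v, state[v], inbox[v])
        inbox = {v: {} for v in graph}
        for v, out in outboxes.items():
            for u, msg in out.items():
                inbox[u][v] = msg
    return state
```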
Activation complexity. We measure the energy consumption of an algorithm A by counting how many times each node and each edge is activated during the execution of the algorithm. More specifically, a node v (resp., an edge e) is said to be active at a given round r if v is sending a message to at least one of its neighbors at round r (resp., if a message traverses e at round r). The node-activation and the edge-activation of an algorithm A running in a graph G = (V, E) are respectively defined as
\[ \mathrm{nact}(A) := \max_{v \in V} \#\mathrm{activation}(v), \quad \text{and} \quad \mathrm{eact}(A) := \max_{e \in E} \#\mathrm{activation}(e), \]
where #activation(v) (resp., #activation(e)) denotes the number of rounds during which node v (resp., edge e) is active along the execution of the algorithm A. By definition, we have that, in any graph of maximum degree ∆,
\[ \mathrm{eact}(A) \le 2 \cdot \mathrm{nact}(A), \quad \text{and} \quad \mathrm{nact}(A) \le \Delta \cdot \mathrm{eact}(A). \tag{1} \]
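To make these definitions concrete, the following Python sketch (ours, not the paper's) computes nact(A) and eact(A) from an execution trace of (round, sender, receiver) triples and asserts both inequalities of Eq. (1).

```python
from collections import defaultdict

def activation_complexities(trace, max_degree):
    """Compute (nact, eact) from a trace of (round, u, v) send events."""
    node_rounds = defaultdict(set)   # node -> rounds in which it sends
    edge_rounds = defaultdict(set)   # undirected edge -> rounds with traffic
    for r, u, v in trace:
        node_rounds[u].add(r)
        edge_rounds[frozenset((u, v))].add(r)
    nact = max((len(s) for s in node_rounds.values()), default=0)
    eact = max((len(s) for s in edge_rounds.values()), default=0)
    assert eact <= 2 * nact and nact <= max_degree * eact   # Eq. (1)
    return nact, eact

# Toy trace on the path 0-1-2: node 1 sends in two rounds,
# each edge carries a message in three rounds.
trace = [(1, 0, 1), (1, 2, 1), (2, 1, 0), (2, 1, 2), (3, 1, 0), (3, 1, 2)]
print(activation_complexities(trace, max_degree=2))   # (2, 3)
```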
Objective. Our goal is to design frugal algorithms, that is, algorithms with constant node-activation, or at least constant edge-activation, independent of the number n of nodes and of the number m of edges. Indeed, such algorithms can be viewed as consuming the least possible energy for solving a given task. Moreover, even if the energy requirement for solving the task naturally grows with the number of components (nodes or edges) of the network, it grows linearly with this number whenever frugal algorithms are used. We refer to node-frugality or edge-frugality depending on whether we focus on node-activation or edge-activation, respectively.

1.2 Our Results

We first show that every Turing-computable problem^5 can thus be solved by a node-frugal algorithm in the LOCAL model as well as in the CONGEST model. It follows from Eq. 1 that every Turing-computable problem can be solved by an edge-frugal algorithm in both models. In other words, every problem can be solved by an energy-efficient distributed algorithm. One important question remains: what is the round complexity of frugal algorithms?
In the LOCAL model, our node-frugal algorithms run in O(poly(n)) rounds. However, they may run in exponentially many rounds in the CONGEST model. We show that this cannot be avoided. Indeed, even if many symmetry-breaking problems such as computing a maximal independent set (mis) and computing a (∆+1)-coloring can be solved by a node-frugal algorithm performing in O(poly(n)) rounds, we show that there exist problems (e.g., deciding C4-freeness or deciding the presence of symmetries in the graph) that cannot be solved in O(poly(n)) rounds in the CONGEST model by any edge-frugal algorithm.

Finally, we discuss the relation between node-activation complexity and edge-activation complexity. We show that the bounds given by Eq. 1 are essentially the best that can be achieved in general. Precisely, we identify a problem, namely Depth First Pointer Chasing (dfpc), which has edge-activation complexity O(1) for all graphs with an algorithm running in O(poly(n)) rounds in the CONGEST model, but satisfying that, for every $\Delta = O\big(n^{1/4}/\sqrt{\log n}\big)$, its node-activation complexity in graphs with maximum degree ∆ is Ω(∆) whenever solved by an algorithm bounded to run in O(poly(n)) rounds in the CONGEST model. In particular, Depth First Pointer Chasing has constant edge-activation complexity but node-activation complexity ˜Ω(n^{1/4}) in the CONGEST model (for O(poly(n))-round algorithms).

Our main results are summarized in Table 1.
Our Techniques. Our upper bounds are mostly based on upper-bound techniques similar to those used in the sleeping model [2,4] (cf. Section 1.3): they rely on constructing spanning trees along which information is gathered and broadcast. However, the models considered in this paper do not suffer from the same limitations as the sleeping model (cf. Section 2), and thus one can achieve activation complexity O(1) in scenarios where the sleeping model limits the awake complexity to Ω(log n).

^5 A problem is Turing-computable if there exists a Turing machine that, given any graph with identifiers and inputs assigned to the nodes, computes the output of each node in the graph.
| Model | Awakeness | Node-Activation | Edge-Activation |
|---|---|---|---|
| LOCAL | • ∀Π: Π ∈ O(log n) with O(poly(n)) rounds [2] • st ∈ Ω(log n) [2] | • ∀Π: Π ∈ O(1) with O(poly(n)) rounds | • ∀Π: Π ∈ O(1) with O(poly(n)) rounds |
| CONGEST | • mis ∈ O(polyloglog(n)) with O(polylog(n)) rounds [6] (randomized) • mst ∈ O(log n) with O(poly(n)) rounds [1] | • ∀Π: Π ∈ O(1) • poly(n) rounds ⇒ ∃Π ∈ Ω(poly(n)) • poly(n) rounds ⇒ dfpc ∈ ˜Ω(n^{1/4}) | • ∀Π: Π ∈ O(1) • poly(n) rounds ⇒ ∃Π ∈ Ω(poly(n)) • dfpc ∈ O(1) with O(poly(n)) rounds • Π ∈ FO and ∆ = O(1) ⇒ Π ∈ O(1) with O(poly(n)) rounds [8] |

Table 1. Summary of our results where, for a problem Π, Π ∈ O(f(n)) means that the corresponding complexity of Π is O(f(n)) (same shortcut for Ω).
Our lower bounds for CONGEST are based on reductions from 2-party communication complexity. However, as opposed to the standard CONGEST model, in which the simulation of a distributed algorithm by two players is straightforward (each player performs the rounds sequentially, one by one, and exchanges the messages sent across the cut between the two subsets of nodes handled by the players at each round), the simulation of distributed algorithms in which only subsets of nodes are active at various rounds requires more care. This is especially the case when the simulation must not only control the amount of information exchanged between these players, but also the number of communication steps performed by the two players. Indeed, there are 2-party communication complexity problems that are hard for k steps, but trivial for k + 1 steps [10], and some of our lower bounds rely on this fact.
1.3 Related Work

The study of frugal algorithms has been initiated in [8], which focuses on edge-frugality in the CONGEST model. It is shown that, for bounded-degree graphs, any problem expressible in first-order logic (e.g., C4-freeness) can be solved by an edge-frugal algorithm running in O(poly(n)) rounds in the CONGEST model. This also holds for planar graphs with no bounds on the maximum degree, whenever the nodes are provided with their local combinatorial embedding. Our results show that these statements cannot be extended to arbitrary graphs, as we prove that any algorithm solving C4-freeness in O(poly(n)) rounds in the CONGEST model has edge-activation ˜Ω(√n).
More generally, the study of energy-efficient algorithms in the context of distributed computing in networks has previously been considered in the framework of the sleeping model, introduced in [4]. This model assumes that nodes can be in two states: awake and asleep. A node in the awake state performs as in the LOCAL and CONGEST models, but may also decide to fall asleep, for a prescribed amount of rounds, controlled by each node, and depending on the algorithm executed at the nodes. A sleeping node is totally inactive in the sense that it does not send messages, it cannot receive messages (i.e., if a message is sent to a sleeping node by an awake neighbor, then the message is lost), and it is computationally idle (apart from counting rounds). The main measure of interest in the sleeping model is the awake complexity, defined as the maximum, taken over all nodes, of the number of rounds at which each node is awake during the execution of the algorithm.

In the LOCAL model, it is known [2] that all problems have awake complexity O(log n), using algorithms running in O(poly(n)) rounds. This bound is tight in the sense that there are problems (e.g., spanning tree construction) with awake complexity Ω(log n) [2,3].

In the CONGEST model, it was first shown [4] that mis has constant average awake complexity, thanks to a randomized algorithm running in O(polylog(n)) rounds. The round complexity was improved in [7] with a randomized algorithm running in O(log n) rounds. The (worst-case) awake complexity of mis was proved to be O(log log n) using a randomized Monte-Carlo algorithm running in O(poly(n)) rounds [6]. This (randomized) round complexity can even be reduced to O(log^3 n · log log n · log* n), at the cost of slightly increasing the awake complexity to O(log log n · log* n). mst has also been considered, and it was proved [1] that its (worst-case) awake complexity is O(log n) thanks to a (deterministic) algorithm running in O(poly(n)) rounds. The upper bound on the awake complexity of mst is tight, thanks to the lower bound for spanning tree (st) in [2].
2 Preliminaries
In this section, we illustrate the difference between the standard LOCAL and CONGEST models, their sleeping variants, and our node- and edge-activation variants. Fig. 1(a) displays the automaton corresponding to the behavior of a node in the standard models. A node is either active (A) or terminated (T). At each clock tick (i.e., round) a node is subject to message events corresponding to sending and receiving messages to/from neighbors. A node remains active until it terminates.

Fig. 1(b) displays the automaton corresponding to the behavior of a node in the sleeping variant. In this variant, a node can also be in a passive (P) state. In this state, the clock event can either leave the node passive, or awake the node, which then moves back to the active state.
Finally, Fig. 1(c) displays the automaton corresponding to the behavior of a node in our activation variants. It differs from the sleeping variant in that a passive node is also subject to message events, which can leave the node passive, but may also move the node to the active state. In particular, a node does not need to be active for receiving messages, and incoming messages may not trigger an immediate response from the node (e.g., forwarding information). Instead, a node can remain passive while collecting information from each of its neighbors, and eventually react by becoming active.

Fig. 1. (a) Classical model, (b) Sleeping model, (c) Activation model. [The three automata over the states A (active), P (passive), and T (terminated), with clock and msg transitions, are omitted here.]
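To make the distinction concrete, here is a minimal Python sketch (our naming, not part of the paper) of the node automaton of Fig. 1(c), where a message event may move a passive node back to the active state:

    from enum import Enum

    class State(Enum):
        ACTIVE = 0
        PASSIVE = 1
        TERMINATED = 2

    def on_message(state, wakes_up):
        # In the activation model, a passive node still receives messages;
        # the message may leave it passive or wake it up (Fig. 1(c)).
        if state is State.PASSIVE and wakes_up:
            return State.ACTIVE
        return state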
Example 1: Broadcast. Assume that one node of the n-node cycle Cn has a token to be broadcast to all the nodes. Initially, all nodes are active. However, all nodes but the one with the token become immediately passive when the clock ticks for entering the second round. The node with the token sends the token to one of its neighbors, and becomes passive at the next clock tick. Upon reception of the token, a passive node becomes active, forwards the token, and terminates. When the source node receives the token back, it becomes active, and terminates. The node-activation complexity of broadcast is therefore O(1), whereas it is known that broadcasting has awake complexity Ω(log n) in the sleeping model [2].
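As an illustration, the following Python sketch (ours; a toy round-by-round simulation, not the paper's formal model) counts activations for this broadcast on Cn: the source is activated twice (initial send, and upon the token's return) and every other node exactly once, so the node-activation complexity is O(1).

    def broadcast_on_cycle(n):
        activations = [0] * n
        for step in range(n + 1):      # the token travels once around the cycle
            holder = step % n          # node currently holding the token
            activations[holder] += 1   # it wakes up, forwards (or stops), terminates
        return activations             # source: 2 activations, every other node: 1

    assert max(broadcast_on_cycle(10)) == 2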
Example 2: At-least-one-leader. Assume that each node of the cycle Cn has an input bit specifying whether the node is a leader or not, and the nodes must collectively check that there is at least one leader. Every leader broadcasts a token, outputs accept, and terminates. Non-leader nodes become passive immediately after the beginning of the algorithm, and start waiting for N rounds (recall that N is an upper bound on the number n of nodes). Whenever the “sleep” of a (passive) non-leader is interrupted by the reception of a token, it becomes active, forwards the token, outputs accept, and terminates. After N rounds, a passive node that has not been “awakened” by a token becomes active, outputs reject, and terminates. This guarantees that there is at least one leader if and only if all nodes accept. The node-activation complexity of this algorithm is O(1), while the awake complexity of at-least-one-leader is Ω(log n) in the sleeping model, by reduction to broadcast.
The following observation holds for LOCAL and CONGEST, by noticing that every algorithm for the sleeping model can be implemented with no overhead in terms of node-activation.

Observation 1 In n-node graphs, every algorithm with awake complexity a(n) and round complexity r(n) has node-activation complexity a(n) and round complexity r(n).
It follows from Observation 1 that all upper bound results for the awake complexity directly transfer to the node-activation complexity. However, as we shall show in this paper, in contrast to the sleeping model in which some problems (e.g., spanning tree) have awake complexity Ω(log n), even in the LOCAL model, all problems admit a frugal algorithm in the CONGEST model, i.e., an algorithm with node-activation O(1).
Definition 1. A LOCAL or CONGEST algorithm is node-frugal (resp., edge-frugal) if the activation of every node (resp., edge) is upper-bounded by a constant independent of the graph, and of the identifiers and inputs given to the nodes.
3 Universality of Frugal Algorithms
In this section we show that every Turing-computable problem can be solved by frugal algorithms, both in the LOCAL and CONGEST models. Thanks to Eq. 1, it is sufficient to prove that this holds for node-frugality.
Lemma 1. There exists a CONGEST algorithm electing a leader, and constructing a BFS tree rooted at the leader, with node-activation complexity O(1), and performing in O(N^2) = O(poly(n)) rounds.
Proof. The algorithm elects as leader the node with smallest identifier, and initiates a breadth-first search from that node. At every node v, the protocol performs as follows.

– If v has received no messages until round id(v) · N, then v elects itself as leader, and starts a BFS by sending message (id(v), 0) to all its neighbors. Locally, v sets its parent in the BFS tree to ⊥, and the distance to the root to 0.

– Otherwise, let r be the first round at which vertex v receives a message. Such a message is of type (id(u), d), where u is the neighbor of v which sent the message to v, and d is the distance from u to the leader in the graph. Node v sets its parent in the BFS tree to id(u), its distance to the root to d + 1, and, at round r + 1, it sends the message (id(v), d + 1) to all its neighbors. (If v receives several messages at round r, from different neighbors, then v selects the message coming from the neighbor with the smallest identifier.)

The node v with smallest identifier is indeed the node initiating the BFS, as the whole BFS is constructed between rounds id(v) · N and id(v) · N + N − 1, and N ≥ n. The algorithm terminates at round at most O(N^2). ⊓⊔
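The following Python sketch (ours; a centralized simulation of the schedule, with assumed helper names) illustrates the two rules of the proof: the node of smallest identifier wakes at round id(v) · N and starts the BFS, and every other node is activated exactly once, when the wave first reaches it.

    from collections import deque

    def frugal_bfs(adj, ids, N):
        leader = min(adj, key=ids.get)        # wakes at round ids[leader] * N
        parent, dist = {leader: None}, {leader: 0}
        wave = deque([leader])
        while wave:                           # each node joins the tree once,
            u = wave.popleft()                # i.e., one activation per node
            for v in sorted(adj[u], key=ids.get):   # ties: smallest identifier
                if v not in dist:
                    parent[v], dist[v] = u, dist[u] + 1
                    wave.append(v)
        return parent, dist                   # BFS tree rooted at the leader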
An instance of a problem is a triple (G, id, x), where G = (V, E) is an n-node graph, id : V → [1, N] with N = O(poly(n)), and x : V → [1, ν] is the input assignment to the nodes. Note that the input range ν may depend on n, and may even be exponential in n, even for classical problems, e.g., whenever weights assigned to the edges are part of the input. A solution to a graph problem is an output assignment y : V → [1, µ], and the correctness of y depends on G and x only, with respect to the specification of the problem. We assume that µ and ν are initially known to the nodes, as is the case for, e.g., mst, in which the weights of the edges can be encoded on O(log n) bits.
Theorem 1. Every Turing-computable problem has a LOCAL algorithm with O(1) node-activation complexity, and running in O(N^2) = O(poly(n)) rounds.
Proof. Once the BFS tree T of Lemma 1 is constructed, the root can (1) gather the whole instance (G, id, x), (2) compute a solution y, and (3) broadcast y to all nodes. Specifically, every leaf v of T sends the set

E(v) = { {(id(v), x(v)), (id(w), x(w))} : w ∈ N(v) }

to its parent in T. An internal node v waits for receiving a set of edges S(u) from each of its children u in T, and then forwards the set

S(v) = E(v) ∪ (∪_{u ∈ child(v)} S(u))

to its parent. Each node of T is activated once during this phase, and thus the node-activation complexity of gathering is 1. Broadcasting the solution y from the leader to all the nodes is achieved along the edges of T, again with node-activation 1. ⊓⊔
The algorithm used in the proof of Theorem 1 cannot be implemented in CONGEST due to the size of the messages, which may require each node to be activated more than a constant number of times. To keep the node-activation constant, we increase the round complexity of the algorithm.
Lemma 2. Every node-frugal algorithm A performing in R rounds in the LOCAL model with messages of size at most M bits can be implemented by a node-frugal algorithm B performing in R · 2^M rounds in the CONGEST model.

Proof. Let v be a node sending a message m through an incident edge e at round r of A. Then, in B, v sends one “beep” through edge e at round r · 2^M + t, where t is the lexicographic rank of m among the at most 2^M messages generated by A. ⊓⊔
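For instance (our arithmetic, following the proof): with M = 4, the message m = 0110 sent at LOCAL round r = 3 becomes a single beep at CONGEST round r · 2^M + t = 3 · 16 + 6 = 54, since t = 6 is the rank of 0110 among the 16 possible 4-bit messages. In Python:

    def beep_round(r, message_bits, M):
        t = int(message_bits, 2)   # lexicographic rank among the 2**M messages
        return r * 2**M + t

    assert beep_round(3, "0110", 4) == 54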
Theorem 2. Every Turing-computable problem has a CONGEST algorithm with O(1) node-activation complexity, and running in 2^{poly(n)+O((ν+µ) log n)} rounds for inputs in the range [1, ν] and outputs in the range [1, µ].

Proof. The algorithm used in the proof of Theorem 1 uses messages of size at most 2N^2 + ν log N bits during the gathering phase, and of size at most µ log N bits during the broadcast phase. The result follows from Lemma 2. ⊓⊔
Of course, there are many problems that can be solved in the CONGEST model by a frugal algorithm much faster than the bound from Theorem 2. This is typically the case of all problems that can be solved by a sequential greedy algorithm visiting the nodes in arbitrary order, and producing a solution at the currently visited node based only on the partial solution in the neighborhood of the node. Examples of such problems are mis, (∆ + 1)-coloring, etc. We call such problems sequential-greedy.

Theorem 3. Every sequential-greedy problem whose solution at every node can be encoded on O(log n) bits has a node-frugal CONGEST algorithm running in O(N) = O(poly(n)) rounds.

Proof. Every node v ∈ V generates its output at round id(v) according to its current knowledge about its neighborhood, and sends this output to all its neighbors. ⊓⊔
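A concrete instance of this schedule (our sketch, using (∆ + 1)-coloring as the greedy problem): node v speaks only at round id(v), and picks the smallest color not already announced by a neighbor.

    def greedy_coloring(adj, ids):
        color = {}
        for v in sorted(adj, key=ids.get):     # node v acts at round ids[v]
            taken = {color[u] for u in adj[v] if u in color}
            color[v] = min(c for c in range(len(adj[v]) + 1) if c not in taken)
        return color                            # one activation per node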
4 Limits of CONGEST Algorithms with Polynomially Many Rounds
Given a graph G = (V, E) such that V is partitioned into two sets VA, VB, the set of edges with one endpoint in VA and the other in VB is called the cut. We denote by e(VA, VB) the number of edges in the cut, and by n(VA, VB) the number of nodes incident to an edge of the cut. Consider the situation where there are two players, namely Alice and Bob. We say that a player controls a node v if it knows all its incident edges and its input. For a CONGEST algorithm A, we denote by A(I) the output of A on input I = (G, id, x). We denote by RA(n) the round complexity of A on inputs of size n.
Lemma 3 (Simulation lemma). Let A be an algorithm in the CONGEST model, let I = (G, id, x) be an input for A, and let VA, VB be a partition of V (G). Suppose that Alice controls all the nodes in VA, and Bob controls all the nodes in VB. Then, there exists a communication protocol P between Alice and Bob with at most 2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)) rounds, and using total communication O(min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)) · log n · log RA(n)) bits, such that each player computes the value of A(I) at all nodes he or she controls.
Proof. In protocol P, Alice and Bob simulate the rounds of algorithm A at all the nodes they control. The simulation runs in phases. Each phase is used to simulate up to a certain number of rounds t of algorithm A, and takes two rounds of protocol P (one round for Alice, and one round for Bob). By simulating A up to t rounds, we mean that Alice and Bob know all the states of all the nodes they control, on every round up to round t.

In the first phase, players start simulating A from the initial state. Let us suppose that both Alice and Bob have already executed p ≥ 0 phases, meaning that they had correctly simulated A up to round t = t(p) ≥ 0. Let us explain phase p + 1 (see also Figure 2).
Fig. 2. Illustration of one phase of the simulation protocol. Assuming that the players agree on the simulation of algorithm A up to round t, each player runs an oblivious simulation at the nodes they control. In the example of the figure, the next message corresponds to a node controlled by Bob, who sends a message to a node in VA at round rb. The oblivious simulation of Alice is not aware of this message, and incorrectly considers that a message is sent from VA to VB at round ra > rb. Using the communication rounds in this phase, the players agree that the message of Bob was correct. Thus the simulation is correct up to round rb, for both players. [Diagram omitted.]
Starting from round t, Alice runs an oblivious simulation of algorithm A over all nodes that she controls. By oblivious, we mean that Alice assumes that no node of VB communicates a message to a node in VA in any round at least t. The oblivious simulation of Alice stops in one of the following two possible scenarios:

(1) All nodes that she controls either terminate or enter into a passive state that quits only on an incoming message from VB.

(2) The simulation reaches a round ra where a message is sent from a node in VA to a node in VB.

At the same time, Bob runs an oblivious simulation of A starting from round t (i.e., assuming that no node of VA sends a message to a node in VB in any round at least t). The oblivious simulation of Bob stops in one of two scenarios analogous to the ones above. In this case, we call rb the round reached by Bob in his version of scenario (2).
At the beginning of a phase, it is the turn of Alice to speak. Once the oblivious simulation of Alice stops, she is ready to send a message to Bob. If the simulation stopped in scenario (1), Alice sends the message "scenario 1" to Bob. Otherwise, Alice sends ra to Bob, together with all the messages sent from nodes in VA to nodes in VB at round ra. When Bob receives the message from Alice, one of the following situations holds:
Case 1: the oblivious simulations of both Alice and Bob stopped in the first scenario. In this case, since A is correct, there are no deadlocks. Therefore, all vertices of G reached a terminal state, meaning that the oblivious simulation of both players was in fact a real simulation of A, and the obtained states are the output states. Therefore, Bob sends a message to Alice indicating that the simulation is finished, and indeed Alice and Bob have correctly computed the output of A for all the nodes they control.

Case 2: the oblivious simulation of Alice stopped in scenario (1), and the one of Bob stopped in scenario (2). In this case, Bob infers that his oblivious simulation was correct. He sends rb and all the messages communicated in round rb through the cut to Alice. When Alice receives the message of Bob, she updates the state of the nodes she controls up to round rb. It follows that both players have correctly simulated algorithm A up to round rb > t.

Case 3: the oblivious simulation of Alice stopped in scenario (2), and the one of Bob stopped in scenario (1). In this case, Bob infers that the simulation of Alice was correct up to round ra. He sends a message to Alice indicating that she has correctly simulated A up to round ra, and he updates the states of all the nodes he controls up to round ra. It follows that both players have correctly simulated A up to round ra > t.

Case 4: the oblivious simulations of both players stopped in scenario (2), and ra > rb. Bob infers that his oblivious simulation was correct up to rb, and that the one of Alice was not correct after round rb. Then, the players act in the same way as described in Case 2. Thus, both players have correctly simulated A up to round rb.

Case 5: the oblivious simulations of both players stopped in scenario (2), and rb > ra. Bob infers that his oblivious simulation was incorrect after round ra, and that the one of Alice was correct up to round ra. Then, the players act in the same way as described in Case 3. Thus, both players have correctly simulated A up to round ra.

Case 6: the oblivious simulations of both players stopped in scenario (2), and rb = ra. Bob assumes that both oblivious simulations were correct. He sends rb together with all the messages communicated from his nodes at round rb through the cut. Then, he updates the states of all the nodes he controls up to round rb. When Alice receives the message from Bob, she updates the states of the nodes she controls up to round rb. It follows that both players have correctly simulated A up to round rb > t.
Observe that, except when the algorithm terminates, on each phase of the protocol, at least one node controlled by Alice or Bob is activated. Since the number of rounds of P is twice the number of phases, we deduce that the total number of rounds is at most

2 · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A)).
Moreover, on each round of P, the players communicate O(log RA(n) · log n · e(VA, VB)) bits. As a consequence, the total communication cost of P is

O(log RA(n) · log n · e(VA, VB) · min(n(VA, VB) · nact(A), e(VA, VB) · eact(A))),

which completes the proof. ⊓⊔
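The case analysis above boils down to a simple rule, sketched here in Python (our condensed view, not the protocol itself): whichever oblivious simulation sent the earlier cut message is the correct one, and both players can safely advance to that round.

    def resolve_phase(ra, rb):
        # ra (resp. rb): first round at which Alice's (resp. Bob's) oblivious
        # simulation sends a message across the cut; None for scenario (1).
        if ra is None and rb is None:
            return "done"        # Case 1: both terminated, outputs are final
        if ra is None:
            return rb            # Case 2: Bob's cut message was real
        if rb is None:
            return ra            # Case 3: Alice's cut message was real
        return min(ra, rb)       # Cases 4-6: the earlier cut message wins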
We use the simulation lemma to show that there are problems that cannot be solved by a frugal algorithm in a polynomial number of rounds. In problem C4-freeness, all nodes of the input graph G must accept if G has no cycle of 4 vertices, and at least one node must reject if such a cycle exists. Observe that this problem is expressible in first-order logic; in particular, it has an edge-frugal algorithm with a polynomial number of rounds in graphs of bounded degree [8]. We show that, in graphs of unbounded degree, this does not hold anymore.
We shall also consider problem Symmetry, where the input is a graph G with 2n nodes indexed from 1 to 2n, and with a unique edge {1, n + 1} between GA = G[{1, . . . , n}] and GB = G[{n + 1, . . . , 2n}]. Our lower bound holds even if every node is identified by its index. All nodes must output accept if the function f : {1, . . . , n} → {n + 1, . . . , 2n} defined by f(x) = x + n is an isomorphism from GA to GB; otherwise at least one node must output reject.
The proof of the following theorem is based on classic reductions from the communication complexity problems Equality and Set Disjointness (see, e.g., [9]), combined with Lemma 3.
Theorem 4. Any CONGEST algorithm solving Symmetry (resp., C4-freeness) in polynomially many rounds has node-activation and edge-activation at least Ω(n^2 / log^2 n) (resp., Ω(√n / log^2 n)).
Proof. In problem Equality, the two players Alice and Bob each have a boolean vector of size k, xA for Alice and xB for Bob. Their goal is to answer true if xA = xB, and false otherwise. The communication complexity of this problem is known to be Θ(k) [9]. Let k = n^2. We can interpret xA and xB as the adjacency matrices of two graphs GA and GB in an instance of Symmetry. It is a mere technicality to "shift" GB as if its vertices were indexed from 1 to n, such that Symmetry is true for G iff xA = xB. Moreover, Alice can construct GA from her input xA, and Bob can construct GB from xB. Both can simulate the unique edge joining the two graphs in G. Therefore, by Lemma 3 applied to G, if Alice controls the vertices of GA, and Bob controls the vertices of GB, then any CONGEST algorithm A solving Symmetry in polynomially many rounds yields a two-party protocol for Equality on n^2 bits. Since graphs GA and GB are linked by a unique edge, the total communication of the protocol is O(eact(A) · log^2 n) and O(nact(A) · log^2 n). The result follows.
In Set Disjointness, each of the two players Alice and Bob has a Boolean vector of size k, xA for Alice, and xB for Bob. Their goal is to answer true if there is no index i ∈ [k] such that both xA[i] and xB[i] are true (in which case, xA and xB are disjoint), and false otherwise. The communication complexity of this problem is known to be Θ(k) [9]. We use the technique in [5] to construct an instance G for C4-freeness, with a small cut, from two Boolean vectors xA, xB of size k = Θ(n^{3/2}). Consider a C4-free n-vertex graph H with a maximum number of edges. Such a graph has k = Θ(n^{3/2}) edges, as recalled in [5]. We can consider the edges E(H) as indexed from 1 to k, and V (H) as [n]. Let now xA and xB be two Boolean vectors of size k. These vectors can be interpreted as edge subsets E(xA) and E(xB) of H, in the sense that the edge indexed i in E(H) appears in E(xA) (resp., E(xB)) iff xA[i] (resp., xB[i]) is true. Graph G is constructed to have 2n vertices, formed by two sub-graphs GA = G[{1, . . . , n}] and GB = G[{n + 1, . . . , 2n}]. The edges of E(GA) are exactly the ones of E(xA). Similarly, the edges of E(GB) correspond to E(xB), modulo the fact that the vertex indexes are shifted by n, i.e., for each edge {u, v} ∈ E(xB), we add edge {u + n, v + n} to GB. Moreover, we add a perfect matching to G, between V (GA) and V (GB), by adding all edges {i, i + n}, for all i ∈ [n]. Note that G is C4-free if and only if vectors xA and xB are disjoint. Indeed, since GA, GB are isomorphic to sub-graphs of H, they are C4-free. Thus any C4 of G must contain two vertices in GA and two in GB, in which case the corresponding edges in GA and GB designate the same bit of xA and xB respectively. Moreover, Alice and Bob can construct GA and GB, as well as the edges in the matching, from their respective inputs xA and xB. Therefore, thanks to Lemma 3, a CONGEST algorithm A for C4-freeness running in a polynomial number of rounds can be used to design a protocol P solving Set Disjointness on k = Θ(n^{3/2}) bits, where Alice controls V (GA) and Bob controls V (GB). The communication complexity of the protocol is O(eact(A) · n · log^2 n), and O(nact(A) · n · log^2 n), since the cut between GA and GB is a matching. The result follows. ⊓⊔
5 Node versus Edge Activation
In this section we exhibit a problem that admits an edge-frugal CONGEST algorithm running in a polynomial number of rounds, for which any algorithm running in a polynomial number of rounds has large node-activation complexity.

We proceed by reduction from a two-party communication complexity problem. However, unlike in the previous section, we are now also interested in the number of rounds of the two-party protocols. We consider protocols in which the two players Alice and Bob do not communicate simultaneously. For such a protocol P, a round is defined as a maximal contiguous sequence of messages emitted by a same player. We denote by R(P) the number of rounds of P.

Let G be a graph, and S be a subset of nodes of G. We denote by ∂S the number of vertices in S with a neighbor in V \ S.
Lemma 4 (Round-Efficient Simulation lemma). Let A be an algorithm in the CONGEST model, let I = (G, id, x) be an input for A, and let VA, VB be a partition of V (G). Let us assume that Alice controls all the nodes in VA, and Bob controls all the nodes in VB, and both players know the value of nact(A). Then, there exists a communication protocol P between Alice and Bob such that, in at most min(∂VA, ∂VB) · nact(A) rounds, and using total communication O(((∂VA + ∂VB) · nact(A))^2 · log n · log RA(n)) bits, each player computes the value of A(I) at all the nodes he or she controls.
Proof. In protocol P, Alice and Bob simulate the rounds of algorithm A at all the nodes each player controls. Without loss of generality, we assume that algorithm A satisfies that the nodes send messages at different rounds, by merely multiplying by N the number of rounds.

Initially, Alice runs an oblivious simulation of A that stops when every node in VA either has terminated, or entered into the passive state that it may leave only after having received a message from a node in VB (this corresponds to what we call the first scenario in the proof of Lemma 3). Then, Alice sends to Bob the integer t1 = 0, and the set M^1_A of all messages sent from nodes in VA to nodes in VB in the communication rounds that she simulated, together with their corresponding timestamps. If the number of messages communicated by Alice exceeds nact(A) · ∂VA, we trim the list up to this threshold.
Let us suppose that the protocol P has run for p rounds, and let us assume that it is the turn of Bob to speak at round p + 1 — the case where Alice speaks at round p + 1 can be treated in the same way. Moreover, we assume that P satisfies the following two conditions:

1. At round p, Alice sent an integer tp ≥ 0, and a list of timestamped messages M^p_A corresponding to messages sent from nodes in VA to nodes in VB in an oblivious simulation of A starting from round tp.

2. Bob had correctly simulated A at all the nodes he controls, up to round tp.
We now describe round p + 1 (see also Figure 3). Bob initiates a simulation of A at all the nodes he controls. However, this simulation is not oblivious. Specifically, Bob simulates A from round tp taking into account all the messages sent from nodes in VA to nodes in VB, as listed in M^p_A. The simulation stops when Bob reaches a round tp+1 > tp at which a node in VB sends a message to a node in VA. Observe that, up to round tp+1, the oblivious simulation of Alice was correct. At this point, Bob initiates an oblivious simulation of A at all the nodes he controls, starting from tp+1. Finally, Bob sends to Alice tp+1, and the list M^{p+1}_B of all timestamped messages sent from nodes in VB to nodes in VA resulting from the oblivious simulation of the nodes he controls during rounds at least tp+1. Using this information, Alice infers that her simulation was correct up to round tp+1, and she starts the next round of protocol P.
The simulation carries on until one of the two players runs an oblivious simulation in which all the nodes he or she controls terminate, and no messages were sent through the cut at any intermediate round. In this case, this player sends a message "finish" to the other player, and both infer that their current simulations are correct. As a consequence, each player has correctly computed the output of A at all the nodes he or she controls.
Fig. 3. Illustration of the round-efficient simulation protocol for algorithm A. After round p, Alice has correctly simulated the algorithm up to round tp. It is the turn of Bob to speak in round p + 1. In round p, Alice sent to Bob the set of messages M^p_A, obtained from an oblivious simulation of A starting from tp. Only the first three messages are correct, since at round tp+1 Bob communicates a message to Alice. Then, Bob runs an oblivious simulation of A starting from tp+1, and communicates all the messages sent from nodes in VB to nodes in VA. In this case the two first messages are correct. [Diagram omitted.]

At every communication round during which Alice speaks, at least one vertex of VA which has a neighbor in VB is activated. Therefore, the number of rounds of
Alice is at most ∂VA · nact(A). By the same argument, the number of rounds of Bob is at most ∂VB · nact(A). It follows that

R(P) = min(∂VA, ∂VB) · nact(A).

At each communication round, Alice sends at most ∂VA · nact(A) timestamped messages, which can be encoded using O(∂VA · nact(A) · log n · log RA(n)) bits. Similarly, Bob sends O(∂VB · nact(A) · log n · log RA(n)) bits. It follows that

C(P) = O(((∂VA + ∂VB) · nact(A))^2 · log n · log RA(n)),

which completes the proof. ⊓⊔
In order to separate the node-activation complexity from the edge-activation complexity, we consider a problem called Depth First Pointer Chasing, and we show that this problem can be solved by an edge-frugal CONGEST algorithm running in O(poly(n)) rounds, whereas the node-activation complexity of any algorithm running in O(poly(n)) rounds for this problem is Ω(∆), for any ∆ ∈ O(√n/log n). The lower bound is proved thanks to the Round-Efficient Simulation Lemma (Lemma 4), by reduction from the two-party communication complexity problem Pointer Chasing, for which too few rounds imply large communication complexity [10].
In Depth First Pointer Chasing, each node v of the graph is given as input its index DFS(v) ∈ [n] in a depth-first search ordering (as usual, we denote [n] = {1, . . . , n}). Moreover, the vertex indexed i is given a function fi : [n] → [n], and the root (i.e., the node indexed 1) is given a value x ∈ [n] as part of its input. The goal is to compute the value of fn ◦ fn−1 ◦ · · · ◦ f1(x) at the root.
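For instance (illustrative Python, ours), the value the root must output is just the left fold of the local functions in DFS order:

    from functools import reduce

    def dfpc_value(fs, x):                            # fs = [f1, f2, ..., fn]
        return reduce(lambda acc, f: f(acc), fs, x)   # fn(...f2(f1(x))...)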
Lemma 5. There exists an edge-frugal CONGEST algorithm for problem Depth First Pointer Chasing, with a polynomial number of rounds.

Proof. The lemma is established using an algorithm that essentially traverses the DFS tree encoded by the indices of the nodes, and performs the due partial computation of the function at every node, that is, the node with index i computes fi ◦ fi−1 ◦ · · · ◦ f1(x), and forwards the result to the node with index i + 1.

At round 1, each node v transmits its depth-first search index DFS(v) to its neighbors. Therefore, after this round, every node knows its parent, and its children in the DFS tree. Then the algorithm merely forwards messages of type m(i) = fi ◦ fi−1 ◦ · · · ◦ f1(x), corresponding to iterated computations for increasing values i, along the DFS tree, using the DFS ordering. That is, for any node v, let MaxDFS(v) denote the maximum DFS index appearing in the subtree of the DFS tree rooted at v. We will not explicitly compute this quantity, but it will ease the notations. At some round, vertex v of DFS index i will receive a message m(i − 1) from its parent (of index i − 1). Then node v will be in charge of computing message m(MaxDFS(v)), by “calling” its children in the tree, and sending this message back to its parent. In this process, each edge in the subtree rooted at v is activated twice.

The vertex of DFS index 1 initiates the process at round 2, sending f1(x) to its child of DFS index 2. Any other node v waits until it receives a message from its parent, at a round that we denote r(v). This message is precisely m(i − 1) = fi−1 ◦ fi−2 ◦ · · · ◦ f1(x), for i = DFS(v). Then v computes message m(i) = fi ◦ fi−1 ◦ · · · ◦ f1(x) using its local function fi. If it has no children, then it sends this message m(i) to its parent at round r(v) + 1. Assume now that v has j children in the DFS tree, denoted u1, u2, . . . , uj, sorted by increasing DFS index. Observe that, by definition of DFS trees, DFS(uk) = MaxDFS(uk−1) + 1 for each k ∈ {2, . . . , j}. Node v will be activated j times, once for each edge {v, uk}, 1 ≤ k ≤ j, as follows. At round r(v) + 1 (right after receiving the message from its parent), v sends message m(i) to its child u1, then it waits until round r1(v), when it gets back a message from u1.

The process is repeated for k = 2, . . . , j: at round rk−1(v) + 1, node v sends the message m(DFS(uk) − 1) received from uk−1 to uk, and waits until it gets back a message from uk, at round rk(v). Note that if k < j then this message is m(DFS(uk+1) − 1), and if k = j then this message is m(MaxDFS(v)). At round rj(v) + 1, after having received messages from all its children, v backtracks message m(MaxDFS(v)) to its parent. If v is the root, then the process stops.

The process terminates in O(n) rounds, and, except for the first round, every edge of the DFS tree is activated twice: first, going downwards, from the root towards the leaves, and, second, going upwards. At the end, the root obtains the requested message m(n) = fn ◦ fn−1 ◦ · · · ◦ f1(x). ⊓⊔
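The traversal can be summarized by the following recursive Python sketch (ours; message passing is replaced by a recursive call, with one call per tree edge in each direction):

    def dfs_chase(children, fs, v, acc):
        # acc = m(DFS(v) - 1); children[v] is sorted by increasing DFS index
        acc = fs[v](acc)                       # v computes m(i) = f_i(m(i-1))
        for u in children[v]:
            acc = dfs_chase(children, fs, u, acc)   # edge {v,u}: down then up
        return acc                             # m(MaxDFS(v)), sent to the parent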
Let us recall the Pointer Chasing problem as defined in [10]. Alice is given a function fA : [n] → [n], and a number x0 ∈ [n]. Bob is given a function fB : [n] → [n]. Both players have a parameter k ∈ [n]. Note that the size of the input given to each player is Θ(n log n) bits. The goal is to compute (fA ◦ fB)^k(x0), i.e., k successive iterations of fA ◦ fB applied to x0. We give a slightly simplified version of the result in [10].

Lemma 6 (Nisan and Wigderson [10]). Any two-party protocol for Pointer Chasing using less than 2k rounds has communication complexity Ω(n − k log n).
We now have all the ingredients for proving the main result of this section.

Theorem 5. For every ∆ ∈ O(n^{1/4}/√log n), every CONGEST algorithm solving Depth First Pointer Chasing in graphs of maximum degree ∆ with polynomially many rounds has node-activation complexity Ω(∆).
Proof. Let k be the parameter of Pointer Chasing that will be fixed later. The lower bound is established for this specific parameter k. Let us consider an arbitrary instance of Pointer Chasing fA, fB : [n] → [n], and x0 ∈ [n], with parameter k. We reduce that instance to a particular instance of Depth First Pointer Chasing (see Fig. 4).

Fig. 4. Reduction from Pointer Chasing to Depth First Pointer Chasing. [Figure omitted: leaves a1, a2, . . . , ak and b1, b2, . . . , bk attached to the end vn−2k of the path v1, v2, . . . , vn−2k, with the aj's and the path controlled by Alice, and the bj's controlled by Bob.]
The graph is a tree T on n vertices, composed of a path (v1, . . . , vn−2k), and 2k leaves vn−2k+1, . . . , vn, all adjacent to vn−2k. Node v1 is called the root, and node vn−2k is called central. Note that the ordering obtained by taking DFS(vi) = i is a depth-first search of T, rooted at v1. The root v1 is given value x0 as input. If i ≤ n − 2k, then function fi is merely the identity function f (i.e., f(x) = x for all x). For every j ∈ [k], let aj = vn−k+2j−1, and bj = vn−k+2j. All nodes bj get as input the function fB, and all nodes aj get the function fA. Observe that the output of Depth First Pointer Chasing on this instance is precisely the same as the output of the initial instance of Pointer Chasing. Indeed, fn−2k ◦ fn−2k−1 ◦ · · · ◦ f1 is the identity function, and the sequence fn ◦ fn−1 ◦ · · · ◦ fn−2k+2 ◦ fn−2k+1 alternates nodes of “type” aj with nodes of “type” bj, for decreasing values of j ∈ [k], and thus corresponds to fA ◦ fB ◦ · · · ◦ fA ◦ fB, where the pair fA ◦ fB is repeated k times, exactly as in problem Pointer Chasing.
We can now apply the Round-Efficient Simulation Lemma. Let Alice control all vertices aj, j ∈ [k], together with the vertices v1, . . . , vn−2k, and let Bob control the vertices bj, j ∈ [k] (see Fig. 4). Note that Alice and Bob can construct the subgraphs that they control based only on their inputs in the considered Pointer Chasing instance, and they both know the value of k.
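A small Python sketch (ours; the exact leaf indexing is hedged, see the discussion above) of the function assignment handed to the players: the path carries identity functions, and the 2k leaves alternate fB and fA so that evaluating in increasing DFS order yields (fA ◦ fB)^k(x0).

    def build_dfpc_functions(n, k, fA, fB):
        fs = [lambda x: x] * (n - 2 * k)   # path nodes: identity function
        for _ in range(k):
            fs += [fB, fA]                 # fB applied first in each pair
        return fs                          # evaluate as in dfpc_value(fs, x0)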
Claim. If there exists a CONGEST algorithm A for Depth First Pointer Chasing on n-node graphs performing in RA rounds with node-activation smaller than 2k, then Pointer Chasing can be solved by a two-party protocol P in less than 2k rounds, with communication complexity O(k^4 log n log RA) bits.

The claim directly follows from Lemma 4. Indeed, by construction, ∂VA = 1 and ∂VB = k. Since we assumed nact(A) < 2k, the two-party protocol P provided by Lemma 4 solves the Pointer Chasing instance in less than 2k rounds, and uses O(k^4 log n log RA) bits.

By Lemma 6, we must have k^4 log n log RA ∈ Ω(n − k log n). Therefore, if our CONGEST algorithm A has polynomially many rounds, we must have k ∈ Ω(n^{1/4}/√log n). Since our graph has maximum degree ∆ = 2k + 1, the conclusion follows. ⊓⊔
6 Conclusion
In this paper, we have mostly focused on the round complexity of (deterministic) frugal algorithms solving general graph problems in the LOCAL or CONGEST model. It might be interesting to consider specific classical problems. As far as “local problems” are concerned, i.e., for locally checkable labeling (LCL) problems, we have shown that MIS and (∆+1)-coloring admit frugal algorithms with polynomial round complexities. It is easy to see, using the same arguments, that problems such as maximal matching share the same properties. It is however not clear that the same holds for (2∆ − 1)-edge coloring.
Open Problem 1 Is there a (node or edge) frugal algorithm solving (2∆ − 1)-edge-coloring with round complexity O(poly(n)) in the CONGEST model?
In fact, it would be desirable to design frugal algorithms with sub-polynomial round complexities for LCL problems in general. In particular:
Open Problem 2 Is there a (node or edge) frugal algorithm solving mis or (∆ + 1)-coloring with round complexity O(polylog(n)) in the LOCAL model?
The same type of questions can be asked for global problems. In particular, it is known that MST has no “awake frugal” algorithms, as MST has awake complexity Ω(log n), even in the LOCAL model. In contrast, frugal algorithms for MST do exist as far as node-activation complexity is concerned. The issue is about the round complexities of such algorithms.
Open Problem 3 Is there a (node or edge) frugal algorithm solving mst with round complexity O(poly(n)) in the CONGEST model?
Another intriguing global problem is depth-first search (dfs), say starting from an identified node. This can be performed by an edge-frugal algorithm performing in a linear number of rounds in CONGEST. However, it is not clear whether the same can be achieved by a node-frugal algorithm.
Open Problem 4 Is there a node-frugal algorithm solving dfs with round complexity O(poly(n)) in the CONGEST model?
Finally, we have restricted our analysis to deterministic algorithms, and it might obviously be worth considering randomized frugal algorithms as well.
References

1. Augustine, J., Moses, W.K., Pandurangan, G.: Brief announcement: Distributed MST computation in the sleeping model: Awake-optimal algorithms and lower bounds. In: 41st ACM Symposium on Principles of Distributed Computing (PODC). pp. 51–53 (2022). https://doi.org/10.1145/3519270.3538459
2. Barenboim, L., Maimon, T.: Deterministic logarithmic completeness in the distributed sleeping model. In: 35th International Symposium on Distributed Computing (DISC). LIPIcs, vol. 209, pp. 10:1–10:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021). https://doi.org/10.4230/LIPIcs.DISC.2021.10
3. Chang, Y., Dani, V., Hayes, T.P., He, Q., Li, W., Pettie, S.: The energy complexity of broadcast. In: 37th ACM Symposium on Principles of Distributed Computing (PODC). pp. 95–104 (2018). https://doi.org/10.1145/3212734.3212774
4. Chatterjee, S., Gmyr, R., Pandurangan, G.: Sleeping is efficient: MIS in O(1)-rounds node-averaged awake complexity. In: 39th ACM Symposium on Principles of Distributed Computing (PODC). pp. 99–108 (2020). https://doi.org/10.1145/3382734.3405718
5. Drucker, A., Kuhn, F., Oshman, R.: On the power of the congested clique model. In: 2014 ACM Symposium on Principles of Distributed Computing (PODC). pp. 367–376 (2014). https://doi.org/10.1145/2611462.2611493
6. Dufoulon, F., Moses, W.K., Pandurangan, G.: Sleeping is super-efficient: MIS in exponentially better awake complexity (2022). https://doi.org/10.48550/ARXIV.2204.08359
7. Ghaffari, M., Portmann, J.: Average awake complexity of MIS and matching. In: 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). pp. 45–55 (2022). https://doi.org/10.1145/3490148.3538566
8. Grumbach, S., Wu, Z.: Logical locality entails frugal distributed computation over graphs. In: 35th International Workshop on Graph-Theoretic Concepts in Computer Science (WG). LNCS, vol. 5911, pp. 154–165. Springer (2009). https://doi.org/10.1007/978-3-642-11409-0
9. Kushilevitz, E., Nisan, N.: Communication complexity. Cambridge University Press (1997)
10. Nisan, N., Wigderson, A.: Rounds in communication complexity revisited. SIAM Journal on Computing 22(1), 211–219 (1993). https://doi.org/10.1137/0222016
11. Peleg, D.: Distributed computing: a locality-sensitive approach. SIAM (2000)
Acknowledgements. The authors are thankful to Benjamin Jauregui for helpful discussions about the sleeping model.
E9FLT4oBgHgl3EQfFi_K/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:240cb08635f7c9bc86f2897b07707fd5194673fb8e34e83faf800efb733a4f43
|
| 3 |
+
size 3473453
|
FNE0T4oBgHgl3EQfhAFe/content/2301.02425v1.pdf
ADDED
Binary file (77.6 kB). View file
FNE0T4oBgHgl3EQfhAFe/content/tmp_files/2301.02425v1.pdf.txt
ADDED
@@ -0,0 +1,242 @@
arXiv:2301.02425v1 [hep-ph] 6 Jan 2023
January 2023

An SU(15) Approach to Bifermion Classification

Claudio Corianò∗
Paul H. Frampton†
Dario Melle‡
Dipartimento di Matematica e Fisica “Ennio De Giorgi”,
Università del Salento and INFN-Lecce,
Via Arnesano, 73100 Lecce, Italy
National Center for HPC, Big Data and Quantum Computing

Thomas W. Kephart§
Department of Physics and Astronomy, Vanderbilt University,
Nashville, TN 37235, USA.

Tzu-Chiang Yuan¶
Institute of Physics, Academia Sinica, Nangang, Taipei 11529, Taiwan.

Abstract

One interesting way to extend the standard model is the hypothesis of bifermions, which are bosons that couple to pairs of quarks and leptons. We point out that SU(15) grand unification gives a natural way to classify bifermions, and we discuss leptoquarks, biquarks and bileptons. In fact, SU(15) provides an ideal covering group, as it contains all possible bifermions within a single model.
The standard model (SM) of particle theory has remained robust, and only occasionally have tantalising hints appeared from experiment about how to extend it. If and when these hints become more definite, they are likely to influence all of theoretical physics by clarifying the choices which Nature has made. A recent disappointment was that the anomalies in B decays, which had stubbornly remained for the eight years 2014–2022 at the 3σ level, have now been withdrawn [1]. The present article is intended to be useful for the time when further discrepancies from the standard model appear. One attempt at grand unification [2] involves the gauge group SU(15), where all 15 states of a quark-lepton family are in the defining representation, and every possible leptoquark is present in the adjoint representation, which provides a useful classification. The adjoint appears in 15 × 15∗ = 1 + 224, and contains 72 leptoquarks which transform in irreducible representations of the standard model gauge group (SU(3)C, SU(2)L)Y with Q = T3 + Y/2, in four sets of 18, as follows:
B = +1/3, L = +1:
  2(3, 2)_{−5/3},  Q = (−1/3, −4/3):  ue−, de−
  (3, 2)_{+1/3},   Q = (2/3, −1/3):   uν, dν

B = −1/3, L = +1:
  2(3∗, 1)_{−4/3},  Q = (−2/3):  ¯uν
  (3∗, 1)_{−10/3},  Q = (−5/3):  ¯ue−
  (3∗, 3)_{−5/3},   Q = (−5/3, −2/3, +1/3):  ¯ue−, ¯uν, ¯dν

B = +1/3, L = −1:
  2(3, 1)_{+4/3},  Q = (2/3):  e+d
  (3, 1)_{+10/3},  Q = (5/3):  e+u
  (3, 3)_{+4/3},   Q = (−1/3, 2/3, 5/3):  νd, e+d, e+u

B = −1/3, L = −1:
  2(3∗, 2)_{+5/3},  Q = (1/3, 4/3):  e+¯u, e+¯d
  (3∗, 2)_{−1/3},   Q = (−2/3, 1/3):  ν¯u, e+¯u
The adjoint describes the spin-one gauge bosons of SU(15), and also a spin-zero Higgs if it is used [3] for symmetry breaking. A spin-one hypothesis would imply that a leptoquark is a gauge boson of SU(15). In that case, if at least the first two families are treated sequentially as 15's, then, unless there is an ad hoc assumption motivated by the data [4], muon-electron LFU (Lepton Flavour Universality), meaning that the leptons e, µ have identical properties in every way except for their different masses, will be an inevitable consequence. A spin-zero hypothesis would imply bifermions in the product 15 × 15 = 105_A + 120_S as per their Yukawa interactions, hence we examine the decompositions of 15, 105 and 120 into their SM components, which is easily done with the Mathematica package LieART [5,6]:
15 = (3, 2)_{+1/3} + (3∗, 1)_{−4/3} + (3∗, 1)_{+2/3} + (1, 2)_{−1} + (1, 1)_{+2}    (1)

105 = (3∗, 2)_{−1/3} + (3, 1)_{+4/3} + (1, 1)_{−2} + (3∗, 3)_{+2/3} + (1, 2)_{−1} + (3∗, 1)_{+2/3} + (3, 1)_{−8/3} + (3, 2)_{+7/3} + (6, 1)_{+2/3} + (8, 2)_{−1} + (6∗, 1)_{−2/3} + (3∗, 2)_{−7/3} + (3∗, 1)_{+8/3} + (3, 1)_{−2/3} + (3, 3)_{−2/3} + (1, 2)_{+1} + (8, 2)_{+1} + (3, 1)_{−2/3} + (1, 2)_{+1}    (2)

120 = (6∗, 1)_{+4/3} + (3∗, 2)_{−1/3} + (1, 3)_{−2} + (3∗, 1)_{+2/3} + (1, 2)_{−1} + (1, 1)_{+6} + (3∗, 1)_{+2/3} + (3, 2)_{+7/3} + (6∗, 1)_{−8/3} + (6, 3)_{+2/3} + (8, 2)_{−1} + (6∗, 1)_{−2/3} + (3∗, 2)_{−7/3} + (3∗, 1)_{+8/3} + (3, 1)_{−2/3} + (3, 3)_{−2/3} + (1, 2)_{+1} + (8, 2)_{+1} + (3, 1)_{−2/3} + (1, 2)_{+1}    (3)
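As a quick consistency check of this counting (ours, not from the paper), in Python:

    n = 15
    assert n * (n - 1) // 2 == 105    # antisymmetric part of 15 x 15
    assert n * (n + 1) // 2 == 120    # symmetric part of 15 x 15
    assert n * n - 1 == 224           # 15 x 15* = 1 + 224, the adjoint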
The leptoquark (3∗, 1)_{+2/3}, which could have fit the now non-existent B anomalies, is seen in both 105 and 120. Being a weak singlet, it doesn't contribute to the oblique parameters [7] that are tightly constrained by electroweak precision data. The one disadvantage of SU(15), but only an aesthetic one and a stumbling block we must initially ignore, is that anomaly cancellation requires the addition of mirror fermions. An advantage of SU(15) is the absence of proton decay, because all of the adjoint components have well-defined B and L quantum numbers. Even if one rejects the SU(15) model for being vector-like, it is still an ideal testing ground and classification system for leptoquarks, diquarks and dileptons, i.e., it is a perfect umbrella model for models with incomplete lists of bifermions. Smoking guns for SU(15) include a predicted
enhancement for B → K(∗)ν¯ν. Because of the lepton mass dependence in the
|
| 149 |
+
Higgs Yukawas, it predicts significant LFU-violating enhancements relative
|
| 150 |
+
to the SM for the decays B+ → K+τ +τ − and Bs → τ +τ −. In an ingenious
|
| 151 |
+
argument [8], it has been convincingly shown that violation of LFU implies
|
| 152 |
+
the occurrence of LFV decays which are vanishing in the standard model.
|
| 153 |
+
These will include the decays τ → µγ, τ → µφ and Bs → τµ. The dis-
|
| 154 |
+
covery of such LFV processes could lend support for the additional particles
|
| 155 |
+
we have discussed. It will be exciting to learn from experiments about more
|
| 156 |
+
violations of LFU, as well as any examples of LFV. Such additional input is
|
| 157 |
+
necessary to further evolve the theory. There has been extensive discussion
|
| 158 |
+
of leptoquarks because they were temporarily suggested by the now-defunct
|
| 159 |
+
B anomalies. Bileptons are suggested by the 331-model. We are tempted to
|
| 160 |
+
believe that the third and last type of bifermion, the biquark, appearing in
|
| 161 |
+
the 224 of SU(15) may also exist in Nature.
|
| 162 |
+
The 224 has 76 components with B = L = 0. The remaining 148 include the
|
| 163 |
+
72 leptoquarks listed ut supra, 72 biquarks and 4 bileptons.
|
| 164 |
+
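This budget of the 224 is simple arithmetic, verified for instance by:

adjoint = 15 * 15 - 1                                  # dim of the SU(15) adjoint: 224
leptoquarks, biquarks, bileptons = 72, 72, 4
assert leptoquarks + biquarks + bileptons == 148       # the "remaining 148"
assert adjoint - 148 == 76                             # components with B = L = 0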
The 72 biquarks fall into two sets of 36:

B = +2/3, L = 0:
(3∗ + 6, 2)5/3     Q = (4/3, 1/3)     uu, ud
(3∗ + 6, 2)−1/3    Q = (1/3, −2/3)    ud, dd

and

B = −2/3, L = 0:
(3 + 6∗, 2)−5/3    Q = (−4/3, −1/3)   ūū, ūd̄
(3 + 6∗, 2)1/3     Q = (−1/3, 2/3)    ūd̄, d̄d̄
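Here too the charge assignments follow directly from the quark charges Q(u) = 2/3 and Q(d) = −1/3; the short sketch below (again our own illustration) reproduces the Q entries of both tables:

from fractions import Fraction as F

q = {"u": F(2, 3), "d": F(-1, 3)}          # quark electric charges
for pair in ("uu", "ud", "dd"):
    Q = q[pair[0]] + q[pair[1]]
    print(pair, "Q =", Q, "   conjugate pair Q =", -Q)
# uu: 4/3, ud: 1/3, dd: -2/3, matching the Q values above;
# the barred pairs carry the opposite charges.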
In the phenomenological analysis of tetraquarks (first discovered in 2003) and pentaquarks (2015), the name "diquark" is used for two quarks behaving together like a molecule, so a diquark is definitely a bound state and not an elementary particle like a biquark. At present the study of tetraquarks and pentaquarks is successful [9] using only diquarks, without biquarks. It will be interesting to discover whether biquarks become necessary in these analyses. The distinction between diquark and biquark could be made using the same criterion as was used in [10] to decide whether the deuteron is a bound state or elementary.
Finally, we discuss the four bileptons in the 224, which form two SU(2) doublets: (Y−−, Y−) with B = 0, L = 2, and (Y++, Y+) with B = 0, L = −2. In the context of the 331-model, they lead [11] to the prediction of a resonance in same-sign leptons with mass between 1 TeV and 4 TeV and width ΓY ≃ 0.05–0.10 TeV. The bilepton resonance in µ±µ± has been the subject of searches by the ATLAS and CMS Collaborations at the LHC. In March 2022, ATLAS published an inconclusive result [12] about the existence of the bilepton, setting only a lower mass limit MY > 1.08 TeV. CMS may have better momentum resolution and charge identification than ATLAS and may therefore be able to investigate the bilepton resonance proper. At the time of writing, CMS has begun a search in earnest (started in October 2022), which is expected to be unblinded at some time in 2023. Of the three classes of elementary bifermion (biquark, leptoquark, bilepton), the one which appears nearest to confirmation at the present time is the bilepton.
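The bilepton quantum numbers can be read off from lepton pairs in the same bookkeeping style as before; as an illustrative check (ours), (Y−−, Y−) matches the same-sign-lepton assignments (e−e−, e−ν) with B = 0 and L = +2:

from fractions import Fraction as F

lepton = {"e-": (F(-1), +1), "nu": (F(0), +1)}   # (electric charge, lepton number)
doublet = {"Y--": ("e-", "e-"), "Y-": ("e-", "nu")}
for name, (a, b) in doublet.items():
    Q = lepton[a][0] + lepton[b][0]
    L = lepton[a][1] + lepton[b][1]
    print(name, "Q =", Q, " L =", L)             # Q = -2, -1 with L = +2
# (Y++, Y+) is the conjugate doublet with L = -2.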
Acknowledgements

The work of C. C. and R. T. is funded by the European Union, Next Generation EU, PNRR project "National Centre for HPC, Big Data and Quantum Computing", project code CN00000013, and by INFN iniziativa specifica QFT-HEP.
References

[1] LHCb Collaboration, arXiv:2212.09153 [hep-ex].
[2] P.H. Frampton and B.H. Lee, Phys. Rev. Lett. 64, 619 (1990).
[3] P.H. Frampton and T.W. Kephart, Phys. Rev. D 42, 3892 (1990).
[4] C. Cornella, D.A. Faroughy, J. Fuentes-Martin, G. Isidori and M. Neubert, JHEP 08, 050 (2021). arXiv:2103.16558 [hep-ph].
[5] R. Feger and T.W. Kephart, Comput. Phys. Commun. 192, 166 (2015).
[6] R. Feger, T.W. Kephart and R.J. Saskowski, Comput. Phys. Commun. 257, 107490 (2020).
[7] M.E. Peskin and T. Takeuchi, Phys. Rev. D 46, 381 (1992).
[8] S.L. Glashow, D. Guadagnoli and K. Lane, Phys. Rev. Lett. 114, 091801 (2015). arXiv:1411.0565 [hep-ph].
[9] L. Maiani and A. Pilloni, arXiv:2207.05141 [hep-ph].
[10] S. Weinberg, Phys. Rev. 137, B672 (1965).
[11] P.H. Frampton, Phys. Rev. Lett. 69, 2889 (1992).
[12] ATLAS Collaboration, ATLAS-CONF-2022-010 (11 March 2022).
FNE0T4oBgHgl3EQfhAFe/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,196 @@
FNE2T4oBgHgl3EQfSwf4/content/tmp_files/2301.03797v1.pdf.txt
ADDED
@@ -0,0 +1,2048 @@
Recommending Root-Cause and Mitigation Steps for Cloud Incidents using Large Language Models

Toufique Ahmed∗§, Supriyo Ghosh†, Chetan Bansal†, Thomas Zimmermann‡, Xuchao Zhang†, Saravan Rajmohan†
∗UC Davis  †Microsoft  ‡Microsoft Research
Abstract—Incident management for cloud services is a complex process involving several steps, and it has a huge impact on both service health and developer productivity. On-call engineers require a significant amount of domain knowledge and manual effort for root-causing and mitigating production incidents. Recent advances in artificial intelligence have resulted in state-of-the-art large language models like GPT-3.x (both GPT-3.0 and GPT-3.5), which have been used to solve a variety of problems ranging from question answering to text summarization. In this work, we present the first large-scale study evaluating the effectiveness of these models for helping engineers root-cause and mitigate production incidents. We conduct a rigorous study at Microsoft on more than 40,000 incidents and compare several large language models in zero-shot, fine-tuned, and multi-task settings using semantic and lexical metrics. Lastly, our human evaluation with actual incident owners shows the efficacy and future potential of using artificial intelligence for resolving cloud incidents.

Index Terms—Incident Management, Service Quality, GPT-3.x, Large Language Models
I. INTRODUCTION

Large IT enterprises such as Amazon, Google, Microsoft, and Salesforce have replaced traditional shrink-wrapped software and moved towards deploying applications and services on cloud platforms. In today's cloud systems, production incidents (e.g., outages, performance degradation, unplanned interruptions) adversely impact the customers and can be expensive in terms of the penalties associated with service-level agreement violations and the engineering effort required to mitigate the incidents. For example, one hour of downtime is estimated to cost Amazon US$100 million on major shopping days [1]. Despite continuous reliability efforts over the years, cloud services still experience inevitable severe incidents.

Artificial Intelligence (AI) for IT Operations, also known as AIOps, has increased in popularity. Data-driven and AI techniques have been leveraged for automating parts of the incident life-cycle, for example, incident prioritization [2], retrieval of incidents with similar symptoms [3], and reducing the time to mitigate incidents [4], [5]. However, on-call engineers (OCEs) still spend a significant amount of manual toil in multiple rounds of back-and-forth communication to identify root causes and mitigation steps. Motivated by the recent successes of leveraging GPT-3 models for non-trivial tasks [6], [7] and code generation [8], we apply such models to incident management. We identified the following two scenarios:

§This work was done during the author's internship at Microsoft Research.
1) Find the incident's root cause. Diagnosing incidents typically requires significant time and communication before engineers can identify the root cause of the incident. We investigate how effective large language models are at suggesting root causes for incidents (RQ1).
2) Suggest the mitigation steps for the incident. After a root cause has been located, engineers take actions to mitigate the problem. We investigate how effective large language models are at recommending the mitigation steps for incidents (RQ2).
When applying large language models, several considerations and decisions need to be made. Since the models were not trained with incident management data, is fine-tuning of the models necessary (RQ3)? Is it more effective to build one model for each task (single-task) or one combined model that supports both root causes and mitigations (multi-task) (RQ4)? Does the root cause help language models find better mitigation steps (RQ5)? Do the models perform better for certain types of incidents (RQ6)? We address these questions with a rigorous large-scale evaluation of 44,340 incidents from 1,759 services of Microsoft. In addition to the lexical and semantic evaluation metrics that are typically reported for such experiments, we present the results of a human validation, where we asked incident owners to assess the correctness and readability of suggested root causes and mitigation steps. The original incident owners are the most qualified to assess the performance of the models on their incidents. In this paper, we make the following contributions:
1) This is the first work to demonstrate the usefulness of state-of-the-art large language models (LLMs) such as GPT-3.x (both GPT-3.0 and GPT-3.5) for resolving production incidents in a real-world setting. (Section III)
2) We present a rigorous and large-scale study at Microsoft on over 40,000 incidents from 1000+ cloud services with six semantic and lexical metrics. (Section IV)
   • Fine-tuning significantly improves the effectiveness of LLMs for incident data.
   • GPT-3 and GPT-3.5 models significantly outperform encoder-decoder models in our experiments.
   • Metrics such as BLEU-4 are useful to measure the relative performance of models in different settings. However, manual inspection and validation with experts is needed to assess the actual performance.
3) Our human study with the actual owners of production incidents helps prove the efficacy of the proposed approach. (Section V)
II. OVERVIEW

A. Incident management

Production incidents are inevitable in large-scale cloud services and often severely affect the customer experience. They can also be extremely expensive in terms of the engineering resources required to root-cause and mitigate them. An incident life-cycle typically has the following four stages: (1) Detection: The first step in the incident life-cycle is detection, where incidents are reported by internal or external customers of a given service after they notice anomalous behavior. Incidents can also be reported via automated monitors configured by the service owners. (2) Triaging: Once an incident is reported, a team of OCEs analyzes the problem and routes the incident ticket to the appropriate engineering team. This process is often referred to as incident triaging. (3) Diagnosis: The incident diagnosis and root cause identification process requires multiple iterations of back-and-forth communication between engineers inspecting different aspects to understand the broad nature of the incident and identify the root cause. (4) Mitigation: Based on the identified root causes, actions are taken to mitigate the problem so as to recover the service health and minimize the impact on the service users.

Lately, AIOps (AI for IT Operations) has gained popularity for automating various parts of the incident life-cycle by combining data-driven and AI techniques with data sources like application logs, time-series performance metrics, and service traces [2], [4], [5], [9]. Despite significant efforts, incident management in large cloud systems still requires a huge amount of engineering effort and cost. More specifically, even with a plethora of historical incident data, root cause identification and mitigation remain notoriously challenging and time-consuming tasks. In this work, we propose to use large language models such as GPT-3.x to automatically recommend root causes and mitigations for new incidents by leveraging historical incident data.
B. The promise of LLMs/GPT-3.x models

Large language models (LLMs) such as GPT-3.x [7] have emerged as one of the hottest trends in natural language processing over the last few years. With 175 billion parameters, the GPT-3.x language models, which held the record for being the largest neural networks ever developed, are an order of magnitude larger than prior language models. Using this massive model architecture, the GPT-3.x models were trained on almost all accessible data from the Internet, including CommonCrawl [10], WebText [11], Wikipedia [12], and a corpus of books.
Title: Attach vm fails with connection timeout
Summary: The workspace is not associated with any vnet. Customer has a vm which is already running inside a vnet. They like to attach that vm into [product omitted]. We tried the UI and CLI route, but still fails with same connection timeout error. Error points that it resolves to some public ip [...]
Reference root cause: It is not supported to attach a private vm to a public workspace directly.
Reference mitigation: Open a task to provide better official document for customer on the topic of virtual machine.

Fig. 1: A sample production incident.
GPT-3.x models surpass the state-of-the-art models on a variety of NLP tasks, including machine translation, question answering, and cloze tasks. Furthermore, the GPT-3.x models achieved a significant milestone by showing that unsupervised language models trained with adequate data can multi-task to the same level as fine-tuned models using just a few examples of the new tasks. As a result of their powerful text generation capabilities on new tasks, GPT-3.x models are used across a wide range of categories and industries, from productivity and education to creativity and gaming. For instance, GPT-3.x models are used to produce creative writing, including blog posts, advertisements, and poetry, that mimics the literary style of well-known writers like Shakespeare.
C. Root-causing and mitigating incidents

Incident root-causing and mitigation is a complex process which requires a significant amount of manual effort as well as domain knowledge about the services. Incidents can be caused by various kinds of issues such as code bugs, dependency failures, infrastructure issues, configuration bugs, etc. Due to the vast number of possibilities, it is non-trivial for the OCEs to root-cause the incidents. Similarly, once the root cause is identified, various mitigation steps can be taken, such as code rollback, hotfix, infrastructure changes, configuration update, etc. Identifying the correct mitigation step is again non-trivial and requires domain knowledge and experience. Human errors in root-causing or mitigating incidents result not just in more effort and human toil but also in impact on the customers and the revenue. Fig. 1 shows a real incident from a service where we can see the title and summary provided by the customer along with the actual root cause and mitigation.

In this study, we evaluate the effectiveness of large language models like GPT-3.x and Codex for root-causing and mitigating production incidents. When an incident is created, the author specifies a title for the incident and describes any relevant details, such as error messages, anomalous behavior, and other information which could potentially help with resolution. Once the OCE starts investigating the incident, they might get more details by communicating with the incident author or by looking at telemetry and logs, and during the course of the investigation the OCE often updates the incident details. For our evaluation, we use the title and the summary of a given incident at the time of incident creation as input and generate the root cause and mitigation steps. This ensures that we only use the information which was available to the OCE when they started investigating the incident.
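To make the input format concrete, the following is a minimal sketch of how a fine-tuning example can be assembled from an incident record. The field layout, separators, and the leading-space completion convention are illustrative assumptions, not the exact format used in this study.

```python
# Sketch: turn an incident's creation-time title and summary into a
# prompt/completion pair for fine-tuning. Field names are hypothetical.
def build_example(title: str, summary: str, target: str) -> dict:
    prompt = f"Title: {title}\nSummary: {summary}\n\nRoot cause:"
    # Prompt/completion pairs; a leading space in the completion is the
    # commonly recommended convention for OpenAI-style fine-tuning data.
    return {"prompt": prompt, "completion": " " + target}

example = build_example(
    "Attach vm fails with connection timeout",
    "The workspace is not associated with any vnet. ...",
    "It is not supported to attach a private vm to a public workspace directly.",
)
```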
D. Research questions

We investigated several OpenAI GPT-3.x models (i.e., Curie, Codex-cushman, Davinci, Code-davinci-002) for generating root causes and mitigation plans for incidents. This leads to several RQs.

RQ1: Are fine-tuned GPT-3.x models effective at finding the incident's root cause?
The OpenAI models are not trained with incident management data, since the data contain sensitive private information and Microsoft follows standard protocols to ensure the security of the data. Therefore, the GPT-3.x models are not expected to perform well in zero-shot/few-shot settings. In this paper, we fine-tuned four different GPT-3.x models of different capacities and observed how the models performed at proposing the root causes of incidents.

RQ2: Are fine-tuned GPT-3.x models capable of suggesting the mitigation plan for the incident?
We are also interested in generating mitigation plans for incidents using GPT-3.x models. As for root cause generation, we fine-tune and evaluate the models using the input and criteria we use for RQ1.

RQ3: How much does fine-tuning improve over the zero-shot learning performance of GPT-3.x models?
Though we primarily focus on fine-tuning, GPT-3.x models are reported to be effective at various downstream tasks with zero-shot and few-shot training [7], [8]. In few-shot learning, we provide a few examples in the prompt as input to the model, and the model generates the expected output. Zero-shot is similar to few-shot training, but none of the examples are given. These two settings are economically and environmentally beneficial (reduced carbon footprint) because we are not updating any parameters of the models. This paper investigates how the models perform in the zero-shot setting. Note that few-shot learning is unsuitable for our project because we have long sequences in our dataset, and we observe truncation of the sequences even when we infer only one sequence after fine-tuning.

RQ4: Does multi-task learning improve the performance of GPT-3.x models at finding root causes and mitigation plans?
Multi-task learning is effective for some pre-trained models [13]. So far, we have discussed training separate models and using the input independently to generate the incident's root cause and mitigation plans. We are interested in how GPT-3.x models react to multi-task learning in our specific setting. For this experiment, we combine all the training data for both tasks. However, during evaluation, we used the same test sets used in RQ1 and RQ2.

RQ5: Do GPT-3.x models get better at proposing mitigation plans if the root cause is given?
Mitigation plans for an incident depend on the specific root cause, and different root causes may lead to different mitigation plans. Moreover, the GPT-3.x models can be improved by making the input larger or more informative. We therefore also investigate whether providing the root cause in the input helps the models find the mitigation plans.

RQ6: Do the models propose mitigation plans better for machine-detected incidents than for human-detected ones?
Incidents can be machine-detected (by some monitor) or human-detected. Both types of incidents have specific characteristics. Machine-detected incidents are generally triggered when the monitor observes system changes like build failures, resource availability, request counts, etc. On the contrary, human-detected incidents are unique and may apply to a specific customer (e.g., a webpage is not loading). In this research question, we investigate whether the models perform well for incidents belonging to a specific class.
E. Human validation

Root causes and mitigation plans can be written in different forms. Unlike natural language translation or code summarization, root causes and mitigation steps are much more open-ended. Depending on the author, the root causes and mitigation plans can vary from generic to specific. Automatic metrics may fail to reflect the overall performance of the models, because these metrics compare the models' suggestions with a single reference, which may be completely different from a correct and relevant output of the models. To better understand the models' performance, we went to the owners/resolvers of the specific incidents and presented the suggestions from our models and baselines. They assigned correctness and readability scores to the models' output. We discuss our methodology and findings from the human validation in Section V.
III. METHODOLOGY

A. Dataset Preparation

Thousands of incidents of different severities are detected (by both machines and humans) every day at Microsoft, and the on-call engineers (OCEs) work relentlessly to provide seamless service to the customers. To manage incidents at that scale, Microsoft has a well-designed website for reporting and managing incidents, and a database keeps track of the website's data insertions, modifications, and deletions from incident reporting to mitigation. One of the inputs to the model is the summary written at the time of incident reporting or creation, which prevents any data leakage from input to output.

In most cases, the OCEs do not follow any specific format when writing incident summaries, root causes, and mitigation plans. The fields, especially summaries, contain information in multiple forms, including tables, links to prior incidents, and images of individual monitor outputs or code snippets. This is because the incidents are very different from each other, and the utmost priority of the OCEs is to resolve the incident rather than document the symptoms. Also, some incidents are transient and auto-mitigated, and no post-mortem is done if the severity is low. Since GPT-3.x are text models, we discarded the tables and images from the summaries. Hence, there is a chance that we lost some critical information by discarding that content.
We collected incidents from the database with creation dates between January 1, 2018, and July 15, 2022. Initially, we collected 123,953 instances for root causes and 23,544 for mitigations from the "Resolved" or "Mitigated" incidents with severity levels 0-3 (the most severe incidents belong to level 0). The number of samples for mitigation is lower because mitigations are found in the post-mortem of an incident, and post-mortems are not done for every incident. After collecting the data, we observed many incidents with duplicate root causes and mitigations: some severe incidents or denials of service trigger hundreds of incident reports for the same event, all of which have the exact same root causes and mitigations. To fairly evaluate the model, we removed the exact duplicates for root causes and mitigation plans and ended up with 57,520 root causes and 8,300 mitigation plans. The average root cause and mitigation lengths are 87 and 12 tokens, respectively. Some root causes are very long, and it is difficult for the models and human evaluators to generate and evaluate such output. We kept root causes of up to 100 tokens, allowing us to retain 73% of the root cause instances. We also discarded root causes and mitigation plans with fewer than three tokens because those are not informative.

After deduplication and filtering, we sorted the data according to creation date so that only historical data is used for training the model. We selected 35,820, 3,000, and 2,000 root causes for training, testing, and validation, respectively. We have fewer instances for mitigations; hence, the training, test, and validation sets for mitigations have 5,455, 2,000, and 500 samples, respectively. Even after this rigorous filtering and deduplication, some root causes and mitigations do not carry any useful information (e.g., the root cause is in a different link, or the incident is transient and auto-mitigated). We manually went through the 3,000 root causes and 2,000 mitigation plans from the test sets and selected 2,621 root causes and 1,780 mitigation plans.¹

¹We cannot share the dataset because incident data can contain confidential and private information, and sharing such data would violate the terms of service.
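A minimal sketch of this preparation pipeline is shown below. The record fields, the whitespace token count, and the split boundaries are illustrative assumptions standing in for the internal tooling.

```python
# Sketch: deduplicate exact targets, drop uninformative or overlong ones,
# keep chronological order, and take historical train/test/validation splits.
from datetime import datetime

def prepare(records, max_tokens=100, min_tokens=3):
    seen, kept = set(), []
    for r in sorted(records, key=lambda r: r["created"]):  # historical order
        n_tokens = len(r["target"].split())                # crude token count
        if r["target"] in seen or not (min_tokens <= n_tokens <= max_tokens):
            continue                                       # duplicate or filtered
        seen.add(r["target"])
        kept.append(r)
    # Chronological splits (sizes here match the root cause dataset).
    return kept[:35820], kept[35820:38820], kept[38820:40820]

records = [{"created": datetime(2018, 5, 1),
            "target": "A bug in the retry logic caused repeated timeouts."}]
train, test, val = prepare(records)
```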
B. OpenAI models and baselines

The recent advancement of deep neural network models has been greatly influenced by the introduction of Transformer models [14]. Prior approaches (i.e., LSTM [15] and GRU [16]) modeled the sequential dependencies of the generated text using recurrent architectures. These recurrent models use "Back-Propagation Through Time" (BPTT) to recursively propagate loss values over gradients within the same recurrent units, prohibiting parallel computation while capturing the long-distance dependencies of the tokens in the sequence. Bahdanau et al. introduced an attention mechanism that works on top of the recurrent architecture and improves the performance of recurrent neural models by providing an attention vector that indicates the relevant part of the input for the target output [17]. The Transformer model completely removes the recurrence unit and relies entirely on the attention mechanism. It uses a multi-layer, multi-head self-attention architecture where the attention mechanism can relate different positions of a single sequence to compute a sequence representation.
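For reference, the scaled dot-product attention at the core of this architecture can be written in its standard form from [14] (not reproduced from this paper):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

where Q, K, and V are the query, key, and value projections of the token representations and d_k is the key dimension.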
Pre-trained models currently achieve state-of-the-art performance on various natural language and code tasks. These pre-trained models work in two stages (i.e., pre-training and fine-tuning). In the pre-training stage, we train the model to learn the statistics of language (or code) in a self-supervised fashion from large-scale corpora. After that, we use a smaller labeled dataset to fine-tune the model for specific tasks. It is nearly infeasible to obtain sufficient labeled data to train such high-capacity deep learning models; pre-training enables us to train such big models on unlabeled data in a self-supervised way. All the recent pre-trained encoder-only and encoder-decoder models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) and decoder-only generative models (e.g., GPT [22], GPT-2 [23], GPT-3 [7], OPT [24]) are basically Transformer models of various capacities trained with different pre-training objectives. The following subsections briefly discuss the baselines and OpenAI models we used for our experiments.
1) Baseline encoder-decoder models: We can apply encoder-decoder models to both root cause and mitigation generation: the encoder encodes the input, and the decoder generates the root cause or mitigation from the encoded representation provided by the encoder.

Pre-trained NLP models (e.g., BERT [18], RoBERTa [19], BART [20], T5 [21]) use different self-supervised pre-training objectives to learn robust language representations. These NLP models have programming language counterparts (e.g., CodeBERT [25], GraphCodeBERT [26], PLBART [27], CodeT5 [13], NatGen [28]) where the models are initialized with the NLP models' weights and pre-training is continued on code and, in most cases, its associated natural language comments. Though root causes and mitigations are natural language descriptions, their vocabulary (e.g., identifiers) overlaps more with the comments used in code models. We therefore picked both NLP and code models for the OpenAI and baseline pools to see whether performance differs depending on the domain used for pre-training. For baselining, we pick the RoBERTa [19] and CodeBERT [25] models for two reasons: i) the two models are architecturally identical, with 125M parameters each, and ii) both models are widely used as baselines (in fact, CodeBERT is the primary baseline model of the CodeXGLUE [29] dataset, a popular benchmark of 10 SE tasks including encoder-decoder tasks like code summarization and code translation). Note that many Transformer-based encoder-decoder models could be applied to this problem; however, comparing with all such models is beyond the scope of this paper. A sketch of such a baseline appears after the model descriptions below.
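The following sketch shows one way to instantiate a RoBERTa/CodeBERT-style encoder-decoder baseline with the Hugging Face transformers library. It mirrors the spirit of the CodeXGLUE code-summarization setup but does not reproduce its exact training code; the checkpoint names and generation settings are assumptions.

```python
# Sketch: CodeBERT encoder paired with a CodeBERT-initialized decoder,
# truncating inputs to the 512-token encoder limit and decoding with beam
# search, as in the code-summarization-style setup described above.
from transformers import AutoTokenizer, EncoderDecoderModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/codebert-base", "microsoft/codebert-base"
)
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

inputs = tok("Title: Attach vm fails ... Summary: ...",
             return_tensors="pt", truncation=True, max_length=512)
out = model.generate(**inputs, num_beams=10, max_length=64)  # one sample
print(tok.decode(out[0], skip_special_tokens=True))
```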
RoBERTa: BERT was the first model to introduce a pre-training strategy that outperforms traditional Transformer models. It applied two pre-training strategies: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In MLM pre-training, we randomly mask out 15% of the tokens and ask the model to recover those tokens, whereas in NSP, we train the model to predict the sentence following an input sentence. Liu et al. [19] propose RoBERTa (A Robustly Optimized BERT Pre-training Approach), which outperforms the BERT model with a few changes, such as dynamic masking and dropping NSP. We apply RoBERTa as our NLP baseline model.

CodeBERT: CodeBERT is architecturally identical to the RoBERTa model and uses two pre-training objectives: MLM and Replaced Token Detection (RTD) [30]. RTD can be defined as a binary classification problem where two data generators (i.e., NL and PL) generate plausible alternatives for a set of randomly masked positions, and a discriminator is trained to determine whether a word is the original one or not. CodeBERT is pre-trained on the CodeSearchNet [31] dataset.
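A toy illustration of the MLM objective described above is given below; whole-word tokenization and the literal `<mask>` symbol are simplifying assumptions.

```python
# Sketch: mask 15% of the tokens and record what the model must recover.
import random

def mask_tokens(tokens, mask_rate=0.15, mask_symbol="<mask>"):
    n = max(1, round(len(tokens) * mask_rate))
    positions = set(random.sample(range(len(tokens)), n))
    masked = [mask_symbol if i in positions else t for i, t in enumerate(tokens)]
    labels = {i: tokens[i] for i in positions}  # targets for the MLM loss
    return masked, labels

masked, labels = mask_tokens("the service returned a connection timeout".split())
```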
2) OpenAI generative models: Radford et al. introduced general task-agnostic generative pre-training of language models (GPT) and outperformed 9 out of 12 discriminatively trained models that use architectures designed for the specific task [22]. In generative pre-training, we autoregressively predict the probability of a token given the previous tokens, moving from left to right. This left-to-right autoregressive training prevents the model from retrieving information from future tokens. All subsequent generative models (e.g., GPT-2, GPT-3) use very similar pre-training objectives but have a higher capacity than their predecessors and are pre-trained on much larger datasets. Very large language models (LLMs) like GPT-3.x have 175 billion parameters and have been found to be effective with few-shot learning, replacing the need for fine-tuning on a specific set of tasks. However, fine-tuning GPT-3 models is still beneficial for some tasks [7]. This paper evaluates our approach using four OpenAI [32] GPT-3.x models: Curie, Codex, Davinci, and Code-davinci-002.

Curie: Curie is the fastest GPT-3 model, with 6.7B parameters. It is trained with natural language data and performs well on language translation, complex classification, text sentiment, and summarization tasks. This is the smallest model we use for our experiments.

Codex: The Codex models are also GPT-3 models, trained for understanding and generating code. The training data contain both natural language and billions of lines of public code from GitHub. We use one model from the Codex family, Codex-cushman, with 12 billion parameters. Though these models are pre-trained for code-related tasks, they are somewhat relevant to incident management: root causes and mitigations contain a lot of terminology (e.g., filenames, identifiers) which relates more closely to the comments used in software development projects.

Davinci: Davinci is the biggest GPT-3 model (175 billion parameters) we use for our experiments. It can perform tasks with fewer instructions than other GPT-3 models, usually performs better at content understanding and creative content generation, and is also very good at solving logic problems. However, training the 175-billion-parameter model is costly and requires a much longer period (almost four times that of Curie with the same dataset) and more resources. Davinci is not trained to understand or generate code.

Code-davinci-002: Code-davinci-002 is the 175-billion-parameter GPT-3.5 model we use for our experiments. It is an upgraded and more capable version of the Codex model, trained on a more recent corpus of text and code.
C. Model configuration

One limitation of the pre-trained encoder-decoder models is that they can only encode 512 tokens. We observe that several samples from our test set are truncated even in the GPT-3 models, though those models support from 2048 tokens (e.g., Curie, Codex) to 4000 tokens (e.g., Code-davinci-002). Therefore, we can assume that the traditional encoder-decoder models do not have enough capacity to encode our sequences. Encoder-decoder models have been successful for problems like code summarization [13], [25], [27], code translation [29], and natural language translation [14], [20], [21]. For those problems, we usually generate one sample per input using beam search and compare it with the reference. Generating one sample is sufficient because the target text is less open-ended, and most of the information needed for successful generation can be found in the input. For code translation, the models need to learn the syntactic alignment between two programming languages; learning to transform conditional statements and loops from one programming language to another may be enough for a successful translation, which is learnable from a few thousand samples. For natural language translation, learning the mapping between the words of the two languages is essential to generate a good-quality translation. Code summarization is slightly different from these two, as the input is much longer than the output; however, Ahmed and Devanbu found that all the necessary information for code summarization is extracted from the identifiers, and obfuscating the identifiers hurts the models [33]. Generating root causes and mitigation plans is much more complex than these problems, because the input may not contain readily usable information, and the models need to generate more diverse and creative suggestions. Our problem is thus more closely aligned with code generation problems, where the input does not carry most of the information. For these types of problems, it has been found that decoder-only models (e.g., GPT-3.x), which predict each following token given the prior tokens, are more successful than encoder-decoder models; it is well-established that encoder-decoder models are not as successful as decoder-only models in code generation tasks. Nevertheless, we still apply encoder-decoder models to our problems and discuss our findings in the following sections. For RoBERTa [19] and CodeBERT [25], we use the exact setup that is used for the code summarization task [31], [34]. We adjust the length to 512 tokens with a batch size of 8 to provide as much information as possible to the model.
Full fine-tuning, which retrains all the parameters, is very costly and challenging for OpenAI models with billions of parameters. We use LoRA (Low-Rank Adaptation), a novel approach that significantly reduces the number of trainable parameters by freezing the pre-trained model weights and injecting trainable rank-decomposition matrices into each layer of the Transformer architecture [35]. Even though LoRA reduces the number of trainable parameters, it performs on par with or better than full fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3.
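A minimal sketch of the LoRA idea follows: a frozen pre-trained weight W is augmented with a learned low-rank update, so the effective weight is W + (alpha/r) BA. This is illustrative of [35], not the training code used in this study.

```python
# Sketch: LoRA wrapper around a frozen linear layer; only A and B train.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus the scaled low-rank update x A^T B^T.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
```

Because B is initialized to zero, the wrapped layer initially behaves exactly like the frozen pre-trained layer, and the low-rank path is learned during fine-tuning.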
We fine-tuned the OpenAI GPT-3 (i.e., Curie, Codex, Davinci) and GPT-3.5 (Code-davinci-002) models for root cause and mitigation plan generation. We train the models for 2000 steps (4 epochs), as OpenAI recommends. For fine-tuning the smaller models (i.e., Curie and Codex), we use one NVIDIA V100 GPU, and for Davinci we use four NVIDIA V100 GPUs. For fine-tuning the Code-davinci-002 model, we use four NVIDIA A100 GPUs. We evaluated the models on the validation set after every 100 steps and chose the model that showed the minimum loss on the validation set.

As discussed earlier, the model needs to generate diverse and creative recommendations to solve problems like the prediction of root causes and mitigation plans. Two critical parameters that control the quality of the generated outputs are temperature and top_p, and it is recommended to update only one of them. Following prior works [8], [36], we decided to update the value of temperature. A higher temperature encourages the model to take more risks, which is necessary for creative applications [32], while a lower value approaches argmax sampling, which is very similar to what we do in encoder-decoder models like CodeBERT. Typically, a temperature between 0.50 and 0.90 is most common for creative tasks; however, too high a temperature is hurtful, as it makes the output diverge too much [36]. We performed a grid search and chose 0.7 for the Curie, Codex, and Davinci models and 0.5 for the Code-davinci-002 experiments to minimize the divergence issue when generating five samples.
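A sketch of drawing multiple candidates from a fine-tuned completion model with the legacy OpenAI completions API is shown below; the model identifier, prompt layout, and stop sequence are placeholders, while the temperature matches the grid-search choice above.

```python
# Sketch: sample several diverse candidates at a fixed temperature.
import openai

resp = openai.Completion.create(
    model="<fine-tuned-model-id>",   # placeholder, not a real model id
    prompt="Title: ...\nSummary: ...\n\nRoot cause:",
    temperature=0.7,                 # higher -> more diverse suggestions
    n=5,                             # five candidates per incident
    max_tokens=100,                  # root causes capped near 100 tokens
    stop=["\n\n"],
)
candidates = [c.text.strip() for c in resp.choices]
```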
D. Evaluation Metrics

We briefly describe the evaluation metrics used for the two downstream tasks, root cause and mitigation generation.

1) Lexical Metrics: For lexical metrics, we employ the smoothed sentence BLEU-4 (Bilingual Evaluation Understudy) [37] metric to calculate n-gram overlap, from 1-grams to 4-grams, between the reference and generated texts. In addition, the ROUGE metric (Recall-Oriented Understudy for Gisting Evaluation) [38] is used to compare a candidate document to a set of reference texts. Specifically, we choose ROUGE-L [38], which takes sentence-level structural similarity into account and identifies the longest co-occurring in-sequence n-grams based on Longest Common Subsequence (LCS) [39] statistics. METEOR (Metric for Evaluation of Translation with Explicit Ordering) [40] is the final lexical metric we selected; it is based on the harmonic mean of unigram precision and recall, with stemming and synonymy matching as extra features.

2) Semantic Metrics: Since the lexical metrics perform exact word matches and disregard the meaning of words, we choose three semantic metrics to evaluate our outcomes according to their semantic meaning. We use BERTScore [41], which leverages the pre-trained contextual embeddings from the BERT [18] model and matches candidate and reference sentence words based on cosine similarity. Then, the BLEURT score [42] is selected to measure to what extent the candidate is fluent and conveys the meaning of the reference. Last, we select NUBIA (NeUral Based Interchangeability Assessor) [43], a recent neural-based measure that incorporates semantic similarity, logical inference, and sentence legibility from exposed layers of pre-trained language models, including RoBERTa STS [19], RoBERTa MNLI, and GPT-2 [23].

The semantic metric calculation takes significant time and requires expensive GPU resources (Tables I and II took two days on a single GPU). Therefore, we report semantic metrics for the first two research questions, and for the remaining research questions, we restrict ourselves to the lexical metrics, which are computationally less expensive.
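As a sketch, the two main lexical metrics can be computed with the NLTK and rouge-score packages; the specific smoothing method is an assumption, and the semantic metrics (BERTScore, BLEURT, NUBIA) have their own packages, omitted here for brevity.

```python
# Sketch: smoothed sentence-level BLEU-4 and ROUGE-L for one candidate.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "it is not supported to attach a private vm to a public workspace"
candidate = "attaching a private vm to a public workspace is not supported"

bleu4 = sentence_bleu(
    [reference.split()], candidate.split(),
    weights=(0.25, 0.25, 0.25, 0.25),          # n-grams from 1 to 4
    smoothing_function=SmoothingFunction().method4,
)
rougeL = rouge_scorer.RougeScorer(["rougeL"]) \
    .score(reference, candidate)["rougeL"].fmeasure
```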
IV. RESULTS

A. How effective are fine-tuned GPT-3.x models in generating incidents' root cause recommendations? (RQ1)

Table I presents the effectiveness of our baseline encoder-decoder models and fine-tuned GPT-3.x models for root cause recommendation. We have 2,621 test samples for evaluating the models. We generated ten samples from the OpenAI models for two reasons: i) using temperature, we can generate very diverse and creative samples from GPT-3.x models, and ii) we found that GPT-3.x models can generate valuable suggestions even at lower ranks. We observed the average BLEU-4 of all the samples at each rank and found that the OpenAI GPT-3.x models produce examples with high BLEU-4 even at rank eight or lower. However, ten examples are too many for a human OCE, so we restrict ourselves to the top five suggestions from each model. In Table I, each metric has a Top 1 and a Top 5 column. Top 1 presents the mean over the first candidates of all test samples; for Top 5, we take the maximum value among the first five candidates of each sample and then average over all samples. Top 5 thus gives an overall view of how the models are performing. For our baseline encoder-decoder models, we have only one sample per model.

Surprisingly, the encoder-decoder models do really well compared to the GPT-3 models on all six automatic metrics; in fact, all six metrics fail to distinguish significant differences among the OpenAI models. The reason behind the success of the encoder-decoder models on automatic metrics is that these models are less explorative and try to maximize their score via argmax probabilities during decoding. For example, "There is a bug in the code" is a very common and generic sentence that can be part of almost any root cause; the models maximize their score just by copying that particular segment, and the automatic metrics fail to penalize this. We tried three semantic metrics to resolve that issue, but the encoder-decoder models still benefit on the automatic metrics. Table III presents the number of unique samples generated by the models (for the OpenAI models, we only consider the first candidate to make a fair comparison). We observe that the unique candidate counts for RoBERTa and CodeBERT are 6.10% and 16.67% of the total, whereas for all the OpenAI GPT-3.x models the percentages are above 97%. Recall that we deduplicated the dataset, so repeatedly generating the same samples should not help here. In Section V, we interviewed the incident owners, and the majority of them complained about the generic nature of the encoder-decoder models' recommendations, and these models underperform on the correctness criteria.
TABLE I: Effectiveness of fine-tuned GPT-3.x models at finding root causes of the incidents

Model                   BLEU-4         ROUGE-L        METEOR         BERTScore       BLEURT          NUBIA
                        Top1    Top5   Top1    Top5   Top1    Top5   Top1    Top5    Top1    Top5    Top1    Top5
RoBERTa                 4.21    NA     12.83   NA     9.89    NA     85.38   NA      35.66   NA      33.94   NA
CodeBERT                3.38    NA     10.17   NA     6.58    NA     84.88   NA      33.19   NA      39.05   NA
Curie                   3.40    6.29   9.04    15.44  7.21    13.65  84.90   86.36   32.62   40.08   33.52   49.76
Codex                   3.44    6.25   8.98    15.51  7.33    13.82  84.85   86.33   32.50   40.11   33.64   49.77
Davinci                 3.34    5.94   8.53    15.10  6.67    12.95  83.13   84.41   31.06   38.61   35.28   50.79
Davinci-002             4.24    7.15   11.43   17.2   10.42   16.8   85.42   86.78   36.77   42.87   32.3    51.34
%gain for Davinci-002   23.26   13.67  26.44   10.90  42.16   21.56  0.61    0.49    12.72   6.88    -8.45   1.08
Among the OpenAI models, the GPT-3.5 model (i.e., Code-davinci-002) significantly outperforms all the GPT-3 models, as well as the other baselines, on all six automated metrics.

Though the automatic metrics fail to detect the weaknesses of the encoder-decoder models, these metrics are still widely used. Human evaluation is hard to perform in every scenario, and these metrics can be useful for finding the models' relative performance. Therefore, even though we achieve low scores on these metrics, they remain useful for capturing the relative performance of the models in different settings. Also, getting a lower score with lexical metrics is not surprising, because lexical metrics only consider token overlaps, while root causes and mitigations are open-ended and the same root cause or mitigation can be written in different ways. In Section V, from the interviews with OCEs, we found that suggestions with lower BLEU-4 or other metric scores are still helpful.
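The Top-1/Top-5 aggregation used in Tables I and II can be summarized with the following sketch, where `score` stands for any of the six metrics and `samples` for the test set:

```python
# Sketch: Top-1 averages each sample's first candidate; Top-k averages the
# best score among the first k candidates of each sample.
from statistics import mean

def top_k(samples, score, k):
    # samples: list of (reference, [candidate_1, ..., candidate_n]) pairs
    return mean(max(score(ref, c) for c in cands[:k])
                for ref, cands in samples)

# top_1 = top_k(test_samples, bleu4_fn, 1)
# top_5 = top_k(test_samples, bleu4_fn, 5)
```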
B. How effective are fine-tuned GPT-3.x models in recommending mitigation plans for an incident? (RQ2)

Table II shows that we achieved slightly higher scores for mitigation (4.44-6.76 BLEU-4) than for root cause recommendation (3.38-4.24 BLEU-4). We observed a similar and consistent pattern (Table III) in the output as with root causes. The encoder-decoder models generate generic comments (e.g., "the issue is self-mitigated", "fix deployed to all regions") as before, and those recommendations are mostly useless to the OCEs. For both RQ1 and RQ2, the fine-tuned Davinci model (even with 175 billion parameters) significantly underperforms the other baseline methods according to the automatic metrics. However, the Davinci and Code-davinci-002 models are the best-performing models according to the incident owners (see Section V).
C. How much does fine-tuning improve over the zero-shot learning performance of GPT-3.x models? (RQ3)

As discussed in Section II-D, we investigate the performance of the OpenAI models in the zero-shot setting. Table IV presents the performance of the OpenAI models for root cause and mitigation. As expected, the models did not perform well in this setting, since they were not trained on confidential data from the incident management space. The models achieve 0.80-2.18 BLEU-4 for the top candidate when recommending mitigation steps, which is much lower (by 210%) than what we achieved by fine-tuning the models (5.47-6.76). Though we achieved a higher score for mitigation than root cause with fine-tuning, in the zero-shot setting the numbers for root cause are slightly higher (1.18-2.83 for the top candidates). The model tries to complete the sequence depending on the given input, and copying a few tokens from the input may help: the root cause is usually longer than the mitigation and tends to share more tokens with the input. Because of unigram overlaps, METEOR does better than the other metrics (BLEU-4 and ROUGE-L), since it measures unigram precision and recall, making it more lenient than BLEU-4 and ROUGE-L. We observe another interesting phenomenon here: though the Davinci model underperformed in RQ1 and RQ2, it significantly outperforms the other OpenAI models in the zero-shot setting for both root cause and mitigation. This is because the model has more parameters and is trained on more data, enabling it to infer better without explicit training.
D. Does multi-task learning improve the performance of GPT-3.x models at finding root causes and mitigation plans? (RQ4)

To evaluate the results of multi-task training on the root cause recommendation and mitigation planning tasks, we combine the training sets of the two tasks for the GPT-3.x models. The models are then individually tested using the corresponding test sets. Table V shows the results for root cause and mitigation with multi-task training. Overall, we observe that multi-task training does not significantly outperform training on a single task. The performance of the Curie and Codex models falls by an average of 2.8% for BLEU-4, 2.0% for ROUGE-L, and 7.2% for METEOR. Only the Davinci model is marginally (6.2%) better than single-task training in terms of the BLEU-4 metric, and the performance of Code-davinci-002 is almost always lower across all lexical metrics in the multi-task setting. Similarly, the results for mitigation generation reveal a 4.1% performance decline on average across all four models. The lack of connection between the root cause and the mitigation is what mostly contributes to the decline in performance: it is challenging to transfer knowledge from one task to the other because of the distinct distributions of their answer spaces, such as the variations in root cause and mitigation length and concreteness.
E. Do GPT-3.x models get better at proposing mitigation plans if the root cause is given? (RQ5)

We assess the performance of mitigation generation when the root cause is also provided. Our training set for mitigation is reduced from 5,455 to 2,973 samples as a result of missing root causes in the incidents, and we have 166 test samples to evaluate the model.
TABLE II: Effectiveness of fine-tuned GPT-3.x models at finding mitigation plans of the incidents

Model                   BLEU-4         ROUGE-L        METEOR         BERTScore       BLEURT          NUBIA
                        Top1    Top5   Top1    Top5   Top1    Top5   Top1    Top5    Top1    Top5    Top1    Top5
RoBERTa                 4.44    NA     7.10    NA     4.52    NA     86.33   NA      26.80   NA      14.90   NA
CodeBERT                6.02    NA     4.40    NA     3.37    NA     86.83   NA      28.44   NA      27.89   NA
Curie                   5.47    10.62  8.03    16.31  6.22    12.75  85.65   87.13   27.20   37.23   15.30   25.46
Codex                   5.53    10.62  8.15    16.23  6.19    13.15  85.68   87.35   28.43   37.92   15.77   26.33
Davinci                 5.54    10.66  8.10    15.96  6.08    12.49  85.72   87.19   27.15   37.00   15.71   25.61
Davinci-002             6.76    11.66  10.22   18.14  8.23    15.13  86.17   87.65   30.19   38.96   17.58   28.81
%gain for Davinci-002   22.02   9.38   25.40   11.22  32.32   15.06  0.52    0.34    6.19    2.74    11.48   9.42
TABLE III: Uniqueness of the models' suggestions

Model         Root cause                        Mitigation
              # of unique        In % of        # of unique        In % of
              recommendations    total          recommendations    total
RoBERTa       160                6.10           4                  0.22
CodeBERT      437                16.67          2                  0.1
Curie         2612               99.65          1669               93.76
Codex         2614               99.73          1743               97.92
Davinci       2587               98.70          1731               97.24
Davinci-002   2614               99.73          1696               95.28
TABLE IV: Effectiveness of OpenAI models for recommending root causes and mitigation steps at zero-shot setting

Objective | Model | BLEU-4 Top1 | BLEU-4 Top5 | ROUGE-L Top1 | ROUGE-L Top5 | METEOR Top1 | METEOR Top5
Root cause | Curie | 1.26 | 2.01 | 4.75 | 7.80 | 7.94 | 13.30
Root cause | Codex | 1.18 | 1.94 | 3.80 | 7.07 | 6.58 | 12.20
Root cause | Davinci | 2.83 | 4.37 | 6.11 | 11.55 | 6.04 | 11.87
Root cause | Davinci-002 | 1.35 | 2.5 | 4.89 | 8.58 | 7.65 | 13.55
Root cause | Finetuned-Davinci-002 | 4.24 | 7.15 | 11.43 | 17.2 | 10.42 | 16.8
Root cause | % gain for Finetuning | 49.82 | 63.62 | 87.07 | 48.92 | 31.23 | 23.99
Mitigation | Curie | 0.81 | 1.50 | 2.45 | 4.59 | 5.33 | 9.40
Mitigation | Codex | 0.80 | 1.57 | 1.97 | 4.05 | 4.56 | 8.55
Mitigation | Davinci | 2.18 | 3.67 | 3.84 | 7.84 | 4.99 | 10.44
Mitigation | Davinci-002 | 0.92 | 1.89 | 2.31 | 4.52 | 4.92 | 9.2
Mitigation | Finetuned-Davinci-002 | 6.76 | 11.66 | 10.22 | 18.14 | 8.23 | 15.13
Mitigation | % gain for Finetuning | 210.1 | 217.7 | 166.2 | 131.4 | 54.4 | 44.9
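To make the Top1/Top5 columns in these tables concrete, the following is a minimal sketch of how such numbers could be computed, assuming (as the tables suggest) that Top-k denotes the best metric score among the model's first k sampled candidates. This is an illustration, not the paper's evaluation code; the reference and sample strings are invented.

```python
# Sketch of Top-k metric aggregation: score every generated candidate against
# the reference and keep the best of the first k. Assumes Top-k = max metric
# over k sampled candidates; the sample data below is invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def score_bleu4(reference, candidate):
    """BLEU-4 for one candidate against one reference (smoothed)."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], candidate.split(),
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=smooth)

def top_k(reference, candidates, k):
    """Best BLEU-4 among the first k candidates."""
    return max(score_bleu4(reference, c) for c in candidates[:k])

# Hypothetical incident: one reference mitigation, five model samples.
ref = "restart the frontend service and flush the cache"
samples = ["restart the frontend service", "flush the cache",
           "rollback the deployment", "restart frontend and flush cache",
           "increase the memory limit"]
print("Top1:", top_k(ref, samples, 1), "Top5:", top_k(ref, samples, 5))
```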
samples to evaluate the model. Despite the sample reduction in the training set, Table VI reveals a considerable performance gain from the additional root cause information: the average over all three metrics improves by 9.8% for the Curie model, 8.3% for the Codex model, 5.4% for the Davinci model, and 26% for Code-davinci-002. Nevertheless, we observe that the gain in the Code-davinci-002 model's Top-5 recommendations is modest compared to the improvement in its Top-1 results. Even so, the overall promising results highlight the significance of root cause information for generating mitigation plans.
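As a rough illustration of how such relative gains are computed, here is a sketch using the Top-1 values for Curie from Table VI. Note that the exact averaging convention (Top-1 only vs. Top-1 and Top-5 together) is not spelled out here, so the result is illustrative rather than a reproduction of the reported 9.8%.

```python
# Sketch: percentage gain of "root cause given" over the baseline, averaged
# over BLEU-4, ROUGE-L and METEOR (Top-1 values for Curie from Table VI).
baseline = {"BLEU-4": 5.92, "ROUGE-L": 9.46, "METEOR": 7.34}   # root cause not given
with_rc  = {"BLEU-4": 6.59, "ROUGE-L": 10.25, "METEOR": 8.24}  # root cause given

gains = {m: 100.0 * (with_rc[m] - baseline[m]) / baseline[m] for m in baseline}
avg_gain = sum(gains.values()) / len(gains)
print(gains)               # per-metric gain in percent
print(f"{avg_gain:.1f}%")  # average gain across the three metrics
```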
F. Do the models better propose mitigation plans for machine-detected incidents than human-detected ones? (RQ6)
We analyze the mitigation generation performance of the GPT-3.x models for both machine- and human-detected incidents in Table VII. We employ the same training set but split the test samples into human-detected and machine-detected incidents. The test set consists of 592 incidents recognized by machines and 1188 incidents detected by humans.
TABLE V: Effectiveness of multi-task learning

Objective | Model | Multi-tasking? | BLEU-4 Top1 | BLEU-4 Top5 | ROUGE-L Top1 | ROUGE-L Top5 | METEOR Top1 | METEOR Top5
Root cause | Curie | No | 3.40 | 6.29 | 9.04 | 15.44 | 7.21 | 13.65
Root cause | Curie | Yes | 3.30 | 6.13 | 8.66 | 15.51 | 6.60 | 12.97
Root cause | Codex | No | 3.44 | 6.25 | 8.98 | 15.51 | 7.33 | 13.82
Root cause | Codex | Yes | 3.42 | 6.11 | 8.64 | 15.24 | 6.53 | 12.81
Root cause | Davinci | No | 3.34 | 5.94 | 8.53 | 15.10 | 6.67 | 12.95
Root cause | Davinci | Yes | 3.60 | 6.27 | 9.11 | 15.66 | 7.31 | 13.64
Root cause | Davinci-002 | No | 4.24 | 7.15 | 11.43 | 17.2 | 10.42 | 16.8
Root cause | Davinci-002 | Yes | 4.24 | 7.09 | 11.32 | 17.14 | 10.32 | 16.34
Mitigation | Curie | No | 5.47 | 10.62 | 8.03 | 16.31 | 6.22 | 12.75
Mitigation | Curie | Yes | 5.49 | 10.89 | 7.98 | 16.14 | 5.92 | 12.54
Mitigation | Codex | No | 5.53 | 10.62 | 8.15 | 16.23 | 6.19 | 13.15
Mitigation | Codex | Yes | 5.15 | 10.88 | 7.49 | 15.87 | 5.55 | 11.85
Mitigation | Davinci | No | 5.54 | 10.66 | 8.10 | 15.96 | 6.18 | 12.49
Mitigation | Davinci | Yes | 5.64 | 10.74 | 7.88 | 15.97 | 6.13 | 12.99
Mitigation | Davinci-002 | No | 6.76 | 11.66 | 10.22 | 18.14 | 8.23 | 15.13
Mitigation | Davinci-002 | Yes | 6.58 | 11.36 | 10.04 | 17.76 | 7.91 | 14.36
TABLE VI: Effectiveness of GPT-3 models at proposing mitigation plans given root causes

Model | Root-cause given? | BLEU-4 Top1 | BLEU-4 Top5 | ROUGE-L Top1 | ROUGE-L Top5 | METEOR Top1 | METEOR Top5
Curie | No | 5.92 | 11.29 | 9.46 | 17.76 | 7.34 | 13.35
Curie | Yes | 6.59 | 12.40 | 10.25 | 18.61 | 8.24 | 16.00
Codex | No | 6.25 | 11.23 | 8.94 | 17.62 | 6.46 | 13.00
Codex | Yes | 6.23 | 12.03 | 9.32 | 18.48 | 7.73 | 15.96
Davinci | No | 6.35 | 12.05 | 8.75 | 18.21 | 7.28 | 15.07
Davinci | Yes | 7.02 | 11.47 | 9.49 | 18.20 | 8.40 | 16.17
Davinci-002 | No | 6.8 | 12 | 9.48 | 17.37 | 8.15 | 15.53
Davinci-002 | Yes | 8.6 | 13.28 | 11.56 | 19.46 | 10.9 | 18.08
%gain | | 26.47 | 10.21 | 21.94 | 6.86 | 33.74 | 16.42
Table VII shows that, for the Top-1 recommendations of the Code-davinci-002 model, machine-detected incidents outperform human-detected ones by 9.5% on BLEU-4, 20% on ROUGE-L, and 23% on METEOR. This is likely because machine-detected incidents usually adhere to certain patterns, which are easier for machine learning models to recognize.
V. LOOKING THROUGH THE INCIDENT OWNERS' EYES
A. Methodology
From our test sets for root causes and mitigation plans, we selected the incidents that have both a root cause and a mitigation, so that each incident owner could evaluate both models in the same interview. Incident resolution is a complex task requiring significant context and domain knowledge about the service and about the specific incident. Hence, we conducted this human evaluation with the actual owners who root-caused and mitigated the incidents. We chose 50 recent incidents, which occurred in the last two months, to evaluate the models' performance, so that the incident owners could still precisely remember what happened while managing those incidents.
+
TABLE VII: Models’ performance on machine vs human
|
| 1376 |
+
detected incidents
|
| 1377 |
+
Model
|
| 1378 |
+
Machine
|
| 1379 |
+
detected?
|
| 1380 |
+
BLEU-4
|
| 1381 |
+
ROUGE-L
|
| 1382 |
+
METEOR
|
| 1383 |
+
Top1
|
| 1384 |
+
Top5
|
| 1385 |
+
Top1
|
| 1386 |
+
Top5
|
| 1387 |
+
Top1
|
| 1388 |
+
Top5
|
| 1389 |
+
Curie
|
| 1390 |
+
Yes
|
| 1391 |
+
5.49
|
| 1392 |
+
10.54
|
| 1393 |
+
8.54
|
| 1394 |
+
16.63
|
| 1395 |
+
6.45
|
| 1396 |
+
13.13
|
| 1397 |
+
No
|
| 1398 |
+
5.45
|
| 1399 |
+
10.65
|
| 1400 |
+
7.78
|
| 1401 |
+
16.15
|
| 1402 |
+
6.10
|
| 1403 |
+
12.56
|
| 1404 |
+
Codex
|
| 1405 |
+
Yes
|
| 1406 |
+
5.76
|
| 1407 |
+
10.54
|
| 1408 |
+
9.10
|
| 1409 |
+
16.84
|
| 1410 |
+
6.80
|
| 1411 |
+
13.88
|
| 1412 |
+
No
|
| 1413 |
+
5.41
|
| 1414 |
+
10.67
|
| 1415 |
+
7.68
|
| 1416 |
+
15.93
|
| 1417 |
+
5.88
|
| 1418 |
+
12.78
|
| 1419 |
+
Davinci
|
| 1420 |
+
Yes
|
| 1421 |
+
5.56
|
| 1422 |
+
10.51
|
| 1423 |
+
8.49
|
| 1424 |
+
16.17
|
| 1425 |
+
6.34
|
| 1426 |
+
12.59
|
| 1427 |
+
No
|
| 1428 |
+
5.52
|
| 1429 |
+
10.74
|
| 1430 |
+
7.91
|
| 1431 |
+
15.86
|
| 1432 |
+
5.95
|
| 1433 |
+
12.44
|
| 1434 |
+
Davinci-002
|
| 1435 |
+
Yes
|
| 1436 |
+
7.18
|
| 1437 |
+
11.83
|
| 1438 |
+
11.5
|
| 1439 |
+
18.59
|
| 1440 |
+
9.41
|
| 1441 |
+
15.66
|
| 1442 |
+
No
|
| 1443 |
+
6.56
|
| 1444 |
+
11.57
|
| 1445 |
+
9.58
|
| 1446 |
+
17.92
|
| 1447 |
+
7.65
|
| 1448 |
+
14.87
|
| 1449 |
+
%gain
|
| 1450 |
+
9.45
|
| 1451 |
+
2.25
|
| 1452 |
+
20.04
|
| 1453 |
+
3.74
|
| 1454 |
+
23.01
|
| 1455 |
+
5.31
|
| 1456 |
+
could precisely remember what happened during managing
|
| 1457 |
+
particular incidents. We reached out to all the incident owners
|
| 1458 |
+
and 25 incident owners responded and each interview took
|
| 1459 |
+
around 20-30 minutes.
|
| 1460 |
+
We presented the outputs from all the models under con-
|
| 1461 |
+
sideration. For both root causes and mitigation plans, we have
|
| 1462 |
+
six pools of candidates. The first four pools are for OpenAI
|
| 1463 |
+
models, each with six options (including “none”), and the last
|
| 1464 |
+
two are for RoBERTa and CodeBERT, which has only one
|
| 1465 |
+
candidate. For the OpenAI models, we ask the OCEs to select
|
| 1466 |
+
the best option that might be relevant to the incident. After
|
| 1467 |
+
that, we ask the OCEs to assign correctness and readability for
|
| 1468 |
+
the chosen candidate on a scale of 1-5, with 5 being the best
|
| 1469 |
+
score. Please note that for RoBERTa and CodeBERT, we only
|
| 1470 |
+
have one option. Hence, we only ask to assign correctness and
|
| 1471 |
+
readability scores to those candidates. We define correctness
|
| 1472 |
+
and readability as follows:
|
| 1473 |
+
Correctness: For this metric, we ask the incident owner to
|
| 1474 |
+
check whether the model provides a helpful and relevant
|
| 1475 |
+
suggestion compared to the actual root cause/mitigation.
|
| 1476 |
+
Readability: Readability is the ease with which a reader
|
| 1477 |
+
can understand a generated text. A text is readable if it is
|
| 1478 |
+
grammatically correct, meaningful and easy to understand.
|
| 1479 |
+
Note that a readable text does not need to be correct.
|
| 1480 |
+
At the end, we asked the incident owners to assign an overall
|
| 1481 |
+
score (1-5) indicating their perception about the usefulness of
|
| 1482 |
+
LLMs for incident resolution and, also, asked them to share
|
| 1483 |
+
their thoughts and comments regarding this.
|
| 1484 |
+
B. Results
|
| 1485 |
+
Table VIII presents the correctness and readability scores
|
| 1486 |
+
assigned by the incident owners. We can see that candidates
|
| 1487 |
+
from the Davinci and Code-davinci-002 pools have achieved
|
| 1488 |
+
higher mean correctness scores than those selected from Curie
|
| 1489 |
+
and Codex models for both root causes (2.88 and 2.56) and
|
| 1490 |
+
mitigation plans (3.04 and 3.16). The mean readability score
|
| 1491 |
+
ranges from 2.52 to 4.08 for all the models. The incident
|
| 1492 |
+
owners expressed positive opinions about the readability of
|
| 1493 |
+
the outputs, and all the models achieved higher readability
|
| 1494 |
+
than correctness scores. We received a few recommendations
|
| 1495 |
+
on how to improve the readability in the future (e.g., avoiding
|
| 1496 |
+
use of acronyms and generating more specific or informative
|
| 1497 |
+
comments).
|
| 1498 |
+
As discussed before, the baseline encoder-decoder models
|
| 1499 |
+
generate very generic comments, and the automatic metrics
|
| 1500 |
+
fail to detect that. We can see the incident owners assign a
|
| 1501 |
+
lower correctness score to RoBERTa and CodeBERT model,
|
| 1502 |
+
and several OCEs pointed out the generic nature of the
|
| 1503 |
+
recommendations generated by the encoder-decoder models.
|
| 1504 |
+
Though the correctness score of the OpenAI models ranges
|
| 1505 |
+
from 2.28 to 3.16, several OCEs pointed out that the models
|
| 1506 |
+
recommend beneficial root causes and mitigation plans. For
|
| 1507 |
+
example, the models succeeded in pinpointing some hard to
|
| 1508 |
+
detect root causes:
|
| 1509 |
+
“I am very impressed because one model found the right
|
| 1510 |
+
root cause, which was very hard to detect. We found it in the
|
| 1511 |
+
postmortem phase. However, I am a little worried that there
|
| 1512 |
+
would not be enough information on the incident website.
|
| 1513 |
+
Overall, I am impressed with the efficacy of the models.”
|
| 1514 |
+
“Even if not always correct, these suggestions can guide
|
| 1515 |
+
the OCE towards actual root cause. ML model can give
|
| 1516 |
+
directions and can be valuable suggestions.”
|
| 1517 |
+
We also took the maximum score assigned by the OpenAI
|
| 1518 |
+
models and reported the average correctness and readability
|
| 1519 |
+
score. The mean correctness and readability score ranges from
|
| 1520 |
+
3.52 to 4.64 (median score 3-5), presenting the overall strength
|
| 1521 |
+
of the models. We asked for the overall scores (1-5), and
|
| 1522 |
+
Table IX shows that the incident owners found the overall
|
| 1523 |
+
contribution promising and useful. More than 70% of incident
|
| 1524 |
+
owners gave three or above for the recommendations of the
|
| 1525 |
+
models. We found that at least one model is effective for most
|
| 1526 |
+
incidents. We also found out why the automatic metrics fail
|
| 1527 |
+
to provide valuable insights.
|
| 1528 |
+
There is always another side to the coin, and we observe
|
| 1529 |
+
that the models’ outputs are not helpful for some incidents.
|
| 1530 |
+
The OCEs assigned lower scores to those incidents and here
|
| 1531 |
+
are some of the concerns they mentioned:
|
| 1532 |
+
“Based on just incident data it is difficult for the model to
|
| 1533 |
+
predict root-cause and mitigation because not all data are
|
| 1534 |
+
recorded in the database and some of them are classified.”
|
| 1535 |
+
“Major concern is if the suggestion is incorrect, on-call
|
| 1536 |
+
engineers may take longer time to investigate the problem.”
|
| 1537 |
+
We observed some negative samples for the model because
|
| 1538 |
+
a lack of discussion or other information results in the de-
|
| 1539 |
+
privation of valuable signals from the input. However, the
|
| 1540 |
+
model’s overall performance is quite promising, which can
|
| 1541 |
+
be considered a stepping stone toward the automation of root
|
| 1542 |
+
causes and mitigation plans in the future.
|
| 1543 |
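As a rough sketch of the "Max OpenAI" aggregation described above (per incident, take the best score across the four OpenAI model pools, then summarize), the following illustrates the computation; the per-incident scores are invented, not the study's data:

```python
# Sketch of the "Max OpenAI" aggregation: for each incident, take the maximum
# correctness score across the four OpenAI models, then report mean and median.
# The per-incident scores below are invented for illustration.
from statistics import mean, median

# correctness scores (1-5) per incident, one list per model, aligned by incident
scores = {
    "Curie":       [2, 3, 1, 4, 2],
    "Codex":       [3, 2, 2, 3, 1],
    "Davinci":     [4, 3, 2, 5, 3],
    "Davinci-002": [3, 4, 2, 4, 2],
}

# best OpenAI suggestion per incident
max_per_incident = [max(per_incident) for per_incident in zip(*scores.values())]

print("mean:", mean(max_per_incident), "median:", median(max_per_incident))
```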
+
VI. DISCUSSION & THREATS
|
| 1544 |
+
A. Do automatic metrics reflect human perception?
|
| 1545 |
+
Automatic evaluation metrics are known to be representative
|
| 1546 |
+
of human perception and are widely used in problems like nat-
|
| 1547 |
+
ural language translation [14], [20], [21]. Though some recent
|
| 1548 |
+
works looked into the effectiveness of these metrics in code
|
| 1549 |
+
summarization and reported many pitfalls and weaknesses
|
| 1550 |
+
of these metrics [44]–[47], researchers are still using them
|
| 1551 |
+
for benchmarking. The best possible alternative to automatic
|
| 1552 |
+
metrics is human validation or some form of automatic test
|
| 1553 |
+
9
|
| 1554 |
+
|
| 1555 |
+
TABLE VIII: Correctness and readability scores assigned by the incident owners
|
| 1556 |
+
Objective
|
| 1557 |
+
Criteria
|
| 1558 |
+
RoBERTA
|
| 1559 |
+
CodeBERT
|
| 1560 |
+
Curie
|
| 1561 |
+
Codex
|
| 1562 |
+
Davinci
|
| 1563 |
+
Davinci-002
|
| 1564 |
+
Max
|
| 1565 |
+
OpenAI
|
| 1566 |
+
Mean
|
| 1567 |
+
Median
|
| 1568 |
+
Mean
|
| 1569 |
+
Median
|
| 1570 |
+
Mean
|
| 1571 |
+
Median
|
| 1572 |
+
Mean
|
| 1573 |
+
Median
|
| 1574 |
+
Mean
|
| 1575 |
+
Median
|
| 1576 |
+
Mean
|
| 1577 |
+
Median
|
| 1578 |
+
Mean
|
| 1579 |
+
Median
|
| 1580 |
+
Root cause
|
| 1581 |
+
Correctness
|
| 1582 |
+
1.56
|
| 1583 |
+
1
|
| 1584 |
+
1.72
|
| 1585 |
+
1
|
| 1586 |
+
2.40
|
| 1587 |
+
2
|
| 1588 |
+
2.40
|
| 1589 |
+
2
|
| 1590 |
+
2.88
|
| 1591 |
+
3
|
| 1592 |
+
2.56
|
| 1593 |
+
2
|
| 1594 |
+
3.52
|
| 1595 |
+
3
|
| 1596 |
+
Readability
|
| 1597 |
+
3.56
|
| 1598 |
+
5
|
| 1599 |
+
3.68
|
| 1600 |
+
5
|
| 1601 |
+
3.08
|
| 1602 |
+
4
|
| 1603 |
+
3.52
|
| 1604 |
+
4
|
| 1605 |
+
3.56
|
| 1606 |
+
5
|
| 1607 |
+
3.8
|
| 1608 |
+
4
|
| 1609 |
+
4.52
|
| 1610 |
+
5
|
| 1611 |
+
Mitigation
|
| 1612 |
+
Correctness
|
| 1613 |
+
1.6
|
| 1614 |
+
1
|
| 1615 |
+
1.52
|
| 1616 |
+
1
|
| 1617 |
+
2.28
|
| 1618 |
+
2
|
| 1619 |
+
2.28
|
| 1620 |
+
1
|
| 1621 |
+
3.04
|
| 1622 |
+
3
|
| 1623 |
+
3.16
|
| 1624 |
+
3
|
| 1625 |
+
4.04
|
| 1626 |
+
4
|
| 1627 |
+
Readability
|
| 1628 |
+
2.88
|
| 1629 |
+
2
|
| 1630 |
+
3.04
|
| 1631 |
+
4
|
| 1632 |
+
2.52
|
| 1633 |
+
2
|
| 1634 |
+
2.8
|
| 1635 |
+
3
|
| 1636 |
+
3.52
|
| 1637 |
+
4
|
| 1638 |
+
4.08
|
| 1639 |
+
4
|
| 1640 |
+
4.64
|
| 1641 |
+
5
|
TABLE IX: Usefulness of LLMs for incident resolution

Score | # of incident owners | In percent (%) of total
5 | 2 | 7.41
4 | 9 | 33.33
3 | 8 | 29.63
2 | 6 | 22.22
1 | 2 | 7.41
The main challenge in incident management is that even experts face difficulties evaluating incidents they were not involved in resolving. In some cases, the OCEs could not clearly remember incidents that had happened two months earlier. Conducting a large-scale study is therefore quite challenging in this area. Nevertheless, we interviewed 25 incident owners and found that the models perform quite well despite achieving low scores on the automatic metrics. We calculated the Pearson correlation coefficient between each of the three lexical metrics (i.e., BLEU-4, ROUGE-L, and METEOR) and the correctness and readability scores assigned by the OCEs. The coefficient varies from -0.42 to +0.62, and we could not identify any consistent pattern. This indicates that these automatic metrics may not be coherent with human perception for resolving cloud incidents; however, more samples are needed to reach any concrete conclusion.
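A minimal sketch of the correlation computation described above, using scipy's pearsonr; the paired score lists are placeholders, not the study's data:

```python
# Sketch: Pearson correlation between an automatic metric (e.g., BLEU-4 of the
# candidate an OCE picked) and the OCE's 1-5 correctness score for the same
# incident. The paired lists below are placeholders.
from scipy.stats import pearsonr

bleu4_scores       = [5.2, 1.0, 12.4, 3.3, 8.8, 0.5]   # automatic metric per incident
correctness_scores = [3,   1,   4,    2,   3,   2  ]   # human rating per incident

r, p_value = pearsonr(bleu4_scores, correctness_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```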
B. Natural language or code? Which family of models is better for incident management?
When choosing the models, we selected both natural language models (i.e., RoBERTa, Curie, Davinci) and code models (i.e., CodeBERT, Codex-cushman, Code-davinci-002) to see which family is more beneficial for incident management. We did not find a clear winner between the two groups. The Davinci and Code-davinci-002 models produce more correct and readable suggestions than the other models; note that both have 175 billion parameters. We leave fine-tuning larger code models, or pre-training a model from scratch on incident data, for future research.
C. How can the models' performance be improved?
We received several recommendations from the incident owners. The main recommendation is to incorporate the discussions among the OCEs into the model, which would guide the model toward better suggestions. We also dropped many incidents whose summaries were written or updated at the time of incident resolution: to evaluate the model fairly and prevent possible data leakage (the root cause and mitigation can end up in the summary if it is updated later), we discarded them from our dataset. Incorporating them after guarding against such leakage may improve the models' performance. We also lost some critical information while cleaning the summaries (e.g., discarding images and tables); incorporating that information may also help.
D. Threats to Validity
There are several threats to our study. The semantic metrics use pre-trained models at their core, and we use the default natural language models for evaluation; a model pre-trained on incident management text might change the performance evaluation. Also, we train and evaluate the models on the services available within our organization. These models may show unexpected behaviors if evaluated on a different set of services from other organizations. Some incident owners expressed concerns about the models' efficacy on rare incidents, and rare incidents are frequently reported at Microsoft. Another threat is the sample size of our human subject study: it is difficult to achieve statistical significance on correctness and readability scores with such small samples. However, scaling up is challenging given the nature of the study.
VII. RELATED WORK
A. Incident management
Incident management in large cloud services has become a popular topic of research in the systems and software engineering communities. Prior work in this space has focused on two main directions. First, there have been several empirical studies analyzing incidents and outages in production systems, focusing on incidents caused by certain types of issues [48]-[51] or on issues from specific services and systems [52]-[54]. Second, and more related to our work, is the use of machine learning and data-driven techniques for automating different aspects of the incident life-cycle, such as triaging [55], [56], diagnosis [57]-[59], and mitigation [5]. Different from prior work, this is the first effort to leverage state-of-the-art language models for assisting OCEs with incident resolution. We hope that this work will also motivate future work that merges traditional task-specific discriminative models with LLMs for end-to-end automation of production incidents.
B. LLMs in Software Engineering
Even though this is the first work leveraging LLMs for AIOps, several works in software engineering have applied LLMs to other challenging problems. GitHub Copilot uses GPT-3 for automated code generation from natural language inputs [8]. Several researchers have addressed code generation [8], [36], docstring generation [8], [60], and code repair [61], [62] problems. Bareiß et al. [63] show how few-shot learning can be effective at (i) code mutation, (ii) test oracle generation from natural language documentation, and (iii) test case generation. Jain et al. propose augmenting large language models with post-processing steps based on program analysis and synthesis techniques, achieving better performance [64]. However, unlike code generation, where both lexical and structural information are available along with massive amounts of training data, we explore the problem of incident resolution using state-of-the-art LLMs, which has not been done before.
VIII. CONCLUSION
With this work, we show that state-of-the-art large language models such as GPT-3 and GPT-3.5 are effective in helping with incident management, specifically in identifying root causes and mitigation steps. To compare the effectiveness of the models, we conducted a rigorous, large-scale study at Microsoft on over 40,000 incidents. To assess the practical usefulness of the approach, we involved the actual owners of production incidents. We expect this paper to be the first of many studies that leverage LLMs to make incident management more effective. Our next steps are to deploy the models in production to assist the OCEs with incident resolution. We are also planning to explore other usage scenarios for LLMs, such as incident summarization.
IX. ACKNOWLEDGEMENTS
We would like to thank the engineers who participated in the validation of root causes and mitigation steps. We would also like to acknowledge the contributions of the following people across Microsoft: Oleg Losinets and Jim Kleewein.
REFERENCES
[1] S. Wolfe, "Amazon's one hour of downtime on Prime Day may have cost it up to $100 million in lost sales," 2018. [Online]. Available: https://www.businessinsider.com/amazon-prime-day-website-issues-cost-it-millions-in-lost-sales-2018-7
[2] J. Chen, S. Zhang, X. He, Q. Lin, H. Zhang, D. Hao, Y. Kang, F. Gao, Z. Xu, Y. Dang et al., "How incidental are the incidents? Characterizing and prioritizing incidents for large-scale online service systems," in Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 2020, pp. 373-384.
[3] A. Saha and S. C. Hoi, "Mining root cause knowledge from cloud service incident investigations for AIOps," arXiv preprint arXiv:2204.11598, 2022.
[4] J. Chen, X. He, Q. Lin, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, "Continuous incident triage for large-scale online service systems," in 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2019, pp. 364-375.
[5] J. Jiang, W. Lu, J. Chen, Q. Lin, P. Zhao, Y. Kang, H. Zhang, Y. Xiong, F. Gao, Z. Xu et al., "How to mitigate the incident? An effective troubleshooting guide recommendation technique for online service systems," in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1410-1420.
[6] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou, "Chain of thought prompting elicits reasoning in large language models," arXiv preprint arXiv:2201.11903, 2022.
[7] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.
[8] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., "Evaluating large language models trained on code," arXiv preprint arXiv:2107.03374, 2021.
[9] Z. Chen, Y. Kang, L. Li, X. Zhang, H. Zhang, H. Xu, Y. Zhou, L. Yang, J. Sun, Z. Xu et al., "Towards intelligent incident management: Why we need it and how we make it," in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1487-1497.
[10] "Common Crawl." [Online]. Available: https://commoncrawl.org/
[11] S. Kulkarni, A. Singh, G. Ramakrishnan, and S. Chakrabarti, "Collective annotation of wikipedia entities in web text," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2009, pp. 457-466.
[12] "Wikipedia." [Online]. Available: https://www.wikipedia.org/
[13] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, "CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," arXiv preprint arXiv:2109.00859, 2021.
[14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[15] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[17] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[18] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[19] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[20] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," arXiv preprint arXiv:1910.13461, 2019.
[21] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," arXiv preprint arXiv:1910.10683, 2019.
[22] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018.
[23] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
[24] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin et al., "OPT: Open pre-trained transformer language models," arXiv preprint arXiv:2205.01068, 2022.
[25] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "CodeBERT: A pre-trained model for programming and natural languages," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, 2020, pp. 1536-1547.
[26] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, L. Shujie, L. Zhou, N. Duan, A. Svyatkovskiy, S. Fu et al., "GraphCodeBERT: Pre-training code representations with data flow," in International Conference on Learning Representations, 2020.
[27] W. Ahmad, S. Chakraborty, B. Ray, and K.-W. Chang, "Unified pre-training for program understanding and generation," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Online: Association for Computational Linguistics, Jun. 2021, pp. 2655-2668. [Online]. Available: https://www.aclweb.org/anthology/2021.naacl-main.211
[28] S. Chakraborty, T. Ahmed, Y. Ding, P. T. Devanbu, and B. Ray, "NatGen: Generative pre-training by "naturalizing" source code," in Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2022, pp. 18-30.
[29] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu, "CodeXGLUE: A machine learning benchmark dataset for code understanding and generation," CoRR, vol. abs/2102.04664, 2021.
[30] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, "ELECTRA: Pre-training text encoders as discriminators rather than generators," arXiv preprint arXiv:2003.10555, 2020.
[31] H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt, "CodeSearchNet challenge: Evaluating the state of semantic code search," arXiv preprint arXiv:1909.09436, 2019.
[32] "OpenAI." [Online]. Available: https://openai.com/
[33] T. Ahmed and P. Devanbu, "Multilingual training for software engineering," in Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 1443-1455.
[34] "CodeXGLUE - Code-To-Text." [Online]. Available: https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text
[35] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "LoRA: Low-rank adaptation of large language models," arXiv preprint arXiv:2106.09685, 2021.
[36] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, "A systematic evaluation of large language models of code," in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 2022, pp. 1-10.
[37] C.-Y. Lin and F. J. Och, "ORANGE: A method for evaluating automatic evaluation metrics for machine translation," in COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, 2004, pp. 501-507.
[38] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out, 2004, pp. 74-81.
[39] D. S. Hirschberg, "Algorithms for the longest common subsequence problem," Journal of the ACM (JACM), vol. 24, no. 4, pp. 664-675, 1977.
[40] S. Banerjee and A. Lavie, "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments," in Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005, pp. 65-72.
[41] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "BERTScore: Evaluating text generation with BERT," arXiv preprint arXiv:1904.09675, 2019.
[42] T. Sellam, D. Das, and A. P. Parikh, "BLEURT: Learning robust metrics for text generation," arXiv preprint arXiv:2004.04696, 2020.
[43] H. Kane, M. Y. Kocyigit, A. Abdalla, P. Ajanoh, and M. Coulibali, "NUBIA: Neural based interchangeability assessor for text generation," 2020.
[44] E. Shi, Y. Wang, L. Du, J. Chen, S. Han, H. Zhang, D. Zhang, and H. Sun, "On the evaluation of neural code summarization," in Proceedings of the 44th International Conference on Software Engineering (ICSE), 2022.
[45] D. Roy, S. Fakhoury, and V. Arnaoudova, "Reassessing automatic evaluation metrics for code summarization tasks," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021, pp. 1105-1116.
[46] D. Gros, H. Sezhiyan, P. Devanbu, and Z. Yu, "Code to comment "translation": Data, metrics, baselining & evaluation," in 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2020, pp. 746-757.
[47] S. Haque, Z. Eberhart, A. Bansal, and C. McMillan, "Semantic similarity metrics for evaluating source code summarization," arXiv preprint arXiv:2204.01632, 2022.
[48] T. Leesatapornwongsa, J. F. Lukman, S. Lu, and H. S. Gunawi, "TaxDC: A taxonomy of non-deterministic concurrency bugs in datacenter distributed systems," in Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems, 2016, pp. 517-530.
[49] A. Alquraan, H. Takruri, M. Alfatafta, and S. Al-Kiswany, "An analysis of network-partitioning failures in cloud systems," in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018, pp. 51-68.
[50] Y. Gao, W. Dou, F. Qin, C. Gao, D. Wang, J. Wei, R. Huang, L. Zhou, and Y. Wu, "An empirical study on crash recovery bugs in large-scale distributed systems," in Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2018, pp. 539-550.
[51] Y. Zhang, J. Yang, Z. Jin, U. Sethi, K. Rodrigues, S. Lu, and D. Yuan, "Understanding and detecting software upgrade failures in distributed systems," in Proceedings of the ACM SIGOPS 28th Symposium on Operating Systems Principles, 2021, pp. 116-131.
[52] S. Ghosh, M. Shetty, C. Bansal, and S. Nath, "How to fight production incidents? An empirical study on a large-scale cloud service," in Proceedings of the 13th Symposium on Cloud Computing, 2022, pp. 126-141.
[53] H. Liu, S. Lu, M. Musuvathi, and S. Nath, "What bugs cause production cloud incidents?" in Proceedings of the Workshop on Hot Topics in Operating Systems, 2019, pp. 155-162.
[54] D. Yuan, Y. Luo, X. Zhuang, G. R. Rodrigues, X. Zhao, Y. Zhang, P. U. Jain, and M. Stumm, "Simple testing can prevent most critical failures: An analysis of production failures in distributed data-intensive systems," in 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 2014, pp. 249-265.
[55] J. Chen, X. He, Q. Lin, Y. Xu, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, "An empirical investigation of incident triage for online service systems," in 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2019, pp. 111-120.
[56] J. Chen, X. He, Q. Lin, H. Zhang, D. Hao, F. Gao, Z. Xu, Y. Dang, and D. Zhang, "Continuous incident triage for large-scale online service systems," in 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2019, pp. 364-375.
[57] V. Nair, A. Raul, S. Khanduja, V. Bahirwani, Q. Shao, S. Sellamanickam, S. Keerthi, S. Herbert, and S. Dhulipalla, "Learning a hierarchical monitoring system for detecting and diagnosing service issues," in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 2029-2038.
[58] C. Bansal, S. Renganathan, A. Asudani, O. Midy, and M. Janakiraman, "DeCaf: Diagnosing and triaging performance issues in large-scale cloud services," in 2020 IEEE/ACM 42nd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2020.
[59] C. Luo, J.-G. Lou, Q. Lin, Q. Fu, R. Ding, D. Zhang, and Z. Wang, "Correlating events with time series for incident diagnosis," in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014, pp. 1583-1592.
[60] T. Ahmed and P. Devanbu, "Few-shot training LLMs for project-specific code-summarization," arXiv preprint arXiv:2207.04237, 2022.
[61] Z. Fan, X. Gao, A. Roychoudhury, and S. H. Tan, "Improving automatically generated code from Codex via automated program repair," arXiv preprint arXiv:2205.10583, 2022.
[62] H. Joshi, J. Cambronero, S. Gulwani, V. Le, I. Radicek, and G. Verbruggen, "Repair is nearly generation: Multilingual program repair with LLMs," arXiv preprint arXiv:2208.11640, 2022.
[63] P. Bareiß, B. Souza, M. d'Amorim, and M. Pradel, "Code generation tools (almost) for free? A study of few-shot, pre-trained language models on code," arXiv preprint arXiv:2206.01335, 2022.
[64] N. Jain, S. Vaidyanath, A. Iyer, N. Natarajan, S. Parthasarathy, S. Rajamani, and R. Sharma, "Jigsaw: Large language models meet program synthesis," in Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 1219-1231.
FNE2T4oBgHgl3EQfSwf4/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

GdFIT4oBgHgl3EQfWysJ/content/tmp_files/2301.11240v1.pdf.txt ADDED
@@ -0,0 +1,1565 @@
Draft version January 27, 2023
Typeset using LaTeX twocolumn style in AASTeX63

Hubble Constant Measurement from Three Large-Separation Quasars Strongly Lensed by Galaxy Clusters

Kate Napier,1 Keren Sharon,1 Håkon Dahle,2 Matthew Bayliss,3 Michael D. Gladders,4 Guillaume Mahler,5,6 Jane R. Rigby,7 and Michael Florian8

1 Department of Astronomy, University of Michigan, 1085 S University Ave, Ann Arbor, MI 48109, USA
2 Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo, Norway
3 Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA
4 Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
5 Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK
6 Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK
7 Observational Cosmology Lab, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
8 Steward Observatory, University of Arizona, 933 North Cherry Ave., Tucson, AZ 85721, USA

(Received ; Revised ; Accepted )
Submitted to ApJ

ABSTRACT
Tension between cosmic microwave background-based and distance ladder-based determinations of the Hubble constant H0 motivates pursuit of independent methods that are not subject to the same systematic effects. A promising alternative, proposed by Refsdal in 1964, relies on the inverse scaling of H0 with the delay between the arrival times of at least two images of a strongly-lensed variable source such as a quasar. To date, Refsdal's method has mostly been applied to quasars lensed by individual galaxies rather than by galaxy clusters. Using the three quasars strongly lensed by galaxy clusters (SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745) that have both multiband Hubble Space Telescope data and published time delay measurements, we derive H0, accounting for the systematic and statistical sources of uncertainty. While a single time delay measurement does not yield a well-constrained H0 value, analyzing the systems together tightens the constraint. Combining the six time delays measured in the three cluster-lensed quasars gives H0 = 71.5 ± 6.1 km s−1 Mpc−1. To reach 1% uncertainty in H0, we estimate that a sample of order 500 time delay measurements of similar quality as those from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 would be needed. Improving the lens modeling uncertainties by a factor of two may reduce the needed sample size to 120 time delays, potentially reachable in the next decade.

Keywords: galaxy clusters; quasars; time delay; Hubble constant
1. INTRODUCTION
The Hubble parameter H0, which describes the current expansion rate of the Universe, has been sought since the discovery in the 1920s that the Universe is expanding (Lemaître 1927; Hubble 1929). At the turn of the last century, measurements of H0 started converging around H0 = 70 km s−1 Mpc−1.

Corresponding author: Kate Napier
However, as H0 measurements have become increasingly precise, the so-called 'Hubble Tension' has arisen between the estimates from early- and late-Universe probes. The Planck Collaboration reported H0 = 67.4 ± 0.5 km s−1 Mpc−1 (Planck Collaboration et al. 2020). They used density fluctuations encoded in the Cosmic Microwave Background (CMB) at the surface of last scattering to determine H at that epoch, then used a spatially flat cosmological model to extrapolate to H0. By contrast, the "Supernovae, H0, for the Equation of State of Dark Energy" (SH0ES) collaboration combined Gaia parallaxes and multi-band HST photometry of Milky Way Cepheids to calibrate the extragalactic distance scale and derive H0 = 73.2 ± 1.3 km s−1 Mpc−1 (Riess et al. 2021).
+
Planck and SH0ES values, which respectively capture
|
| 63 |
+
the early and late-time physics of the Universe, differ
|
| 64 |
+
by 4.2σ. Freedman (2021) applied an updated Tip of
|
| 65 |
+
the Red Giant Branch (TRGB) calibration to a distant
|
| 66 |
+
sample of Type Ia supernovae from the Carnegie Su-
|
| 67 |
+
pernova Project and obtained H0 = 69.8 ± 0.6 (stat)
|
| 68 |
+
± 1.6 (sys) km s−1 Mpc−1, consistent with the CMB
|
| 69 |
+
value, and within 2σ of the SH0ES value, owing to the
|
| 70 |
+
larger uncertainties. The discrepancy between different
|
| 71 |
+
H0 methods may indicate a deviation from the standard
|
| 72 |
+
Λ Cold Dark Matter (ΛCDM) model, and therefore new
|
| 73 |
+
physics, or the presence of unknown or underestimated
|
| 74 |
+
systematics.
|
| 75 |
+
Either way, this tension remotivates the
|
| 76 |
+
pursuit of other H0 determination methods that are not
|
| 77 |
+
prone to the same systematics.
|
| 78 |
+
An alternative H0 determination method, proposed by Refsdal (1964), uses the fact that H0 scales inversely with the delay between the arrival times of at least two images of a strongly-lensed variable source, such as a quasar or a supernova. Due to the rarity of galaxy clusters lensing quasars or supernovae, the Refsdal H0 technique has primarily been pursued with galaxy-scale lenses (see e.g., the recent reviews by Moresco et al. 2022; Birrer et al. 2022).

Of the >300 known lensed quasars, the vast majority are lensed by individual galaxies (Lemon et al. 2019, 2022). Quasars lensed by individual galaxies have been used to obtain H0. For example, the H0 Lenses in COSMOGRAIL's Wellspring (H0LiCOW) collaboration performed a joint analysis of six galaxy-lensed quasars to obtain H0 = 73.3 (+1.7/−1.8) km s−1 Mpc−1 (Wong et al. 2020). This value is consistent with the Cepheid-calibrated measurement from the SH0ES collaboration. Birrer et al. (2020) found a smaller H0 value with a larger uncertainty, H0 = 67.4 (+4.1/−3.2) km s−1 Mpc−1, statistically consistent with the CMB and TRGB measurements. The smaller H0 value was driven by the assumption that the lens galaxy population is drawn from a parent population with the same statistical properties as the Sloan Lens ACS lenses.

Kochanek (2020) argued that although the uncertainties of H0 values from galaxy-lensed quasars are typically reported as 4-8% for individual gravitational lenses, it is likely that any current estimate of H0 from time delays has an uncertainty of at least 10%. As discussed in Kochanek (2020, 2021), the main uncertainty with galaxy lenses is the mean surface mass density of the lens within the Einstein radius, where most lensed images are found. The distribution of baryonic matter in the lens galaxy significantly contributes to this mass. Most galaxy-scale lenses are early-type galaxies, and local measurements show that these galaxies exhibit color gradients. Color gradients indicate spatial variation in age and metallicity, and thus produce corresponding gradients in the mass-to-light ratio of the baryonic mass. A galaxy's evolutionary history and growth through mergers will complexly affect these gradients. Resolved JWST and Extremely Large Telescope observations of the stellar kinematics in the lens galaxies may significantly reduce these sources of systematic errors (Birrer & Treu 2021).
What has remained largely unexplored until now is determining H0 using quasars that are strongly lensed by galaxy clusters. For several reasons, cluster-lensed quasars can potentially overcome some of the difficulties faced by individual galaxy lenses. First, since galaxy clusters have deeper potential wells than galaxies, cluster lenses produce longer time delays, of order months to years, compared to typically a month in galaxy lenses. Consequently, the observationally measured time delay values will have smaller fractional uncertainty, which then propagates to reduced uncertainty in H0 due to the inverse scaling of H0 with time delays. Second, the light curves of cluster-lensed quasars are less likely to be affected by microlensing from stars in the lens plane, because the mass distribution is dominated by dark matter at the projected radius at which the images appear. Third, galaxy cluster mass distributions are less affected by complex baryonic physics than those of galaxy lenses; the complex baryonic surface density of galaxy-scale lenses may be a significant source of systematic uncertainty. A challenge that must be contended with, however, is the complexity of cluster lenses.

Two inputs are necessary to use cluster-lensed quasars to determine H0. The first is an observational measurement of the time delay between the multiple quasar images, and the second is an accurate mapping of the projected density of the dark and luminous mass at the cluster core. High accuracy lens models require space-based resolution and spectroscopic follow-up. Of the six published cluster-lensed quasars to date (Inada et al. 2003, 2006; Dahle et al. 2013; Shu et al. 2018, 2019; Martinez et al. 2022), only three have the necessary data available to determine H0: SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. In this paper, we use the available archival HST data and the latest measurements of time delays and spectroscopic redshifts of background sources from the literature to obtain an independent measurement of H0 from these three systems.

This paper is organized as follows: In Section 2, we outline the theory of observational gravitational lensing time delay and its dependence on H0. In Section 3 we detail the lens modeling procedure. In Sections 4, 5, and 6, we give an overview of the three cluster-lensed quasar systems used in this H0 analysis and provide details about their HST and spectroscopic data, time delays, and lens models. In Section 7, we present our constraints on H0. We conclude in Section 8 with a discussion of our H0 result and the future prospects of the time delay H0 method.

Throughout the paper, we adopt the standard ΛCDM flat cosmological model with Ωm = 0.3 and ΩΛ = 0.7.
2. TIME DELAY ANALYSIS

The Refsdal H0 method is possible due to the measurable delay between the arrival time of two or more images of a variable source such as a quasar. Under the thin lens approximation, a packet of light that travels from the source to the observer will be delayed by time t given by the arrival time surface (Schneider 1985):

t(\vec{\theta}, \vec{\beta}) = \frac{1+z_l}{c} \, \frac{d_l d_s}{d_{ls}} \left[ \frac{1}{2}(\vec{\theta}-\vec{\beta})^2 - \psi(\vec{\theta}) \right],    (1)
where zl is the redshift of the lens; dl, ds, and dls are angular diameter distances from the observer to the lens, to the source, and between the lens and the source, respectively; θ⃗ is the image position in the image plane; β⃗ is the unobserved source position; and ψ(θ⃗) is the gravitational lensing potential. The arrival time t is a combination of the path length and the gravitational time delay (t = t_geometric + t_grav). The last term, \tau(\vec{\theta};\vec{\beta}) = \frac{1}{2}(\vec{\theta}-\vec{\beta})^2 - \psi(\vec{\theta}), is also known as the Fermat potential. The multiple images of a strongly-lensed source appear at the stationary points of the arrival time surface, that is, at the minima, maxima, and saddle points. H0 is incorporated in Eq. 1 because of its inverse scaling with the angular diameter distances:

d_A(z_1, z_2) = \frac{1}{1+z_2} \, \frac{c}{H_0} \int_{z_1}^{z_2} \frac{dz}{E(z; \Omega_m, \Omega_\Lambda)},    (2)

where E(z; \Omega_m, \Omega_\Lambda) = \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda + (1 - \Omega_m - \Omega_\Lambda)(1+z)^2} is a dimensionless function. The matter density and vacuum energy density parameters are Ωm and ΩΛ, respectively. Conveniently, H0 is disentangled from the other cosmological parameters in the angular diameter distance equation (Eq. 2).
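To make this scaling concrete, the following minimal Python sketch numerically evaluates Eq. 2 and the distance combination dl·ds/dls that enters Eq. 1. It is not part of the published analysis; the function names, and the use of SDSS J1004+4112's lens and source redshifts as the example, are our own illustrative choices.

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, om=0.3, ol=0.7):
    """Dimensionless Hubble parameter E(z; Om, OL), allowing for curvature."""
    return np.sqrt(om * (1 + z)**3 + ol + (1 - om - ol) * (1 + z)**2)

def d_A(z1, z2, h0=70.0, om=0.3, ol=0.7):
    """Angular diameter distance between z1 and z2 [Mpc], following Eq. 2."""
    integral, _ = quad(lambda z: 1.0 / E(z, om, ol), z1, z2)
    return (1.0 / (1.0 + z2)) * (C_KM_S / h0) * integral

# Distance combination d_l * d_s / d_ls, which carries the 1/H0 scaling
zl, zs = 0.68, 1.734  # SDSS J1004+4112 lens and quasar redshifts
for h0 in (60.0, 70.0, 80.0):
    ratio = d_A(0, zl, h0) * d_A(0, zs, h0) / d_A(zl, zs, h0)
    print(f"H0={h0:5.1f}: d_l*d_s/d_ls = {ratio:9.1f} Mpc")  # scales as 1/H0

Because each distance carries a factor c/H0, the combination dl·ds/dls, and hence the predicted time delay, scales as 1/H0 at fixed Ωm and ΩΛ.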
After substituting Eq. 2 into dl·ds/dls in Eq. 1, the time delay is determined by solving Eq. 1 for two image positions corresponding to the same source position and taking the difference. The time delay between the images thus becomes:
becomes:
|
| 237 |
+
∆t =
|
| 238 |
+
� 1
|
| 239 |
+
H0
|
| 240 |
+
� � 1 + zl
|
| 241 |
+
1 + zs
|
| 242 |
+
�
|
| 243 |
+
�
|
| 244 |
+
�
|
| 245 |
+
�
|
| 246 |
+
�
|
| 247 |
+
�
|
| 248 |
+
zl�
|
| 249 |
+
0
|
| 250 |
+
dz
|
| 251 |
+
E(z)
|
| 252 |
+
zs�
|
| 253 |
+
0
|
| 254 |
+
dz
|
| 255 |
+
E(z)
|
| 256 |
+
zs�
|
| 257 |
+
zl
|
| 258 |
+
dz
|
| 259 |
+
E(z)
|
| 260 |
+
�
|
| 261 |
+
�
|
| 262 |
+
�
|
| 263 |
+
�
|
| 264 |
+
� ×
|
| 265 |
+
�1
|
| 266 |
+
2[(⃗θ1 − ⃗β)2 − (⃗θ2 − ⃗β)2] − [ψ(⃗θ1) − ψ(⃗θ2)]
|
| 267 |
+
�
|
| 268 |
+
(3)
|
| 269 |
+
The first term on the right-hand side of the time delay equation gives the Hubble parameter; the second term is a direct observable; the third term contains the dependence on cosmological parameters other than H0; and the last term is solved by the strong gravitational lens model. We neglect the higher order effects of the cosmological parameters and take the third term in Eq. 3 to be constant. The left-hand side of the equation is the measurement of the time delay, e.g., from monitoring and comparing the observed light curves of two images of the variable source.

Once we compute a model of the lensing mass distribution (see Section 3), we determine the model-predicted excess arrival time surface (Eq. 3) with respect to one of the quasar images. Assuming our lens model is a correct description of the matter distribution, we then leverage the fact that the time delay scales inversely with H0. We compare the model-predicted time delays between images to the observational measurements of the time delays to obtain H0 via:

H_0 = H_{0,\mathrm{model}} \times \frac{\Delta t_\mathrm{model}}{\Delta t_\mathrm{measured}},    (4)

where H0,model is the H0 value used to generate the Fermat potential from the lensing analysis, ∆t_model is the model-predicted time delay between the quasar images, and ∆t_measured is the observational measurement of the time delay between the pair of quasar images.
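As a minimal numerical illustration of Eq. 4 (the delay values below are hypothetical and are not measurements from this work):

def h0_from_delay(h0_model, dt_model, dt_measured):
    """Eq. 4: rescale the fiducial H0 by the ratio of the model-predicted
    time delay to the measured time delay."""
    return h0_model * dt_model / dt_measured

# If a model computed with H0,model = 70 km/s/Mpc predicts an 800-day delay
# for an image pair whose measured delay is 744 days (hypothetical values):
print(h0_from_delay(70.0, 800.0, 744.0))  # ~75.3 km/s/Mpc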
3. LENS MODELING

We computed the lens models with the publicly available software Lenstool (Jullo et al. 2007). Lenstool is a 'parametric' modeling algorithm which describes the lensing mass distribution as a linear combination of galaxy-scale, group-scale, and cluster-scale halos, each of which is parameterized as a pseudo-isothermal ellipsoidal mass distribution (PIEMD, also called dPIE; Elíasdóttir et al. 2007). A PIEMD halo has seven parameters whose values can either be fixed or varied: position (x, y); ellipticity e = (a²−b²)/(a²+b²), where a and b are the semi-major and semi-minor axes, respectively; position angle θ; core radius rcore; truncation radius rcut; and effective velocity dispersion σ0. The parameters of the group-scale and cluster-scale halos are typically allowed to vary. The exception is rcut for the cluster-scale halos, as this radius usually occurs outside the region where strong lensing evidence is found and thus cannot be constrained.

Lenstool uses a Markov Chain Monte Carlo (MCMC) sampling of parameter space. The best-fit model is identified as the one that minimizes the scatter between the model-predicted and observed image locations in the image plane ("image plane minimization") or minimizes the scatter between the predicted source locations of multiple images in the source plane ("source plane minimization"). The lens models employ the strong lensing evidence of multiply-imaged galaxies (arcs), whose positions and redshifts are used as model constraints. The availability of lensing constraints strongly affects the accuracy of lens models, as they are used as local solutions of the lensing equations and constrain the projected mass density distribution at the cluster's core. The mass distribution and magnification are sensitive to the accurate identifications and positions of multiple images and to the redshifts of the lensed galaxies. It is necessary to include a few spectroscopic redshifts in the lens model in order to avoid incorrect results (Johnson & Sharon 2016).

To select cluster-member galaxies, we followed the procedure of Gladders & Yee (2000), by selecting galaxies that fall on the cluster red sequence in a color-magnitude diagram. For SDSS J1029+2623 we also incorporated spectroscopic redshift information (see Section 5). The galaxy-scale halos' positional parameters (x, y, e, θ) are measured with Source Extractor (Bertin & Arnouts 1996) and fixed. The rcore, rcut, and σ0 of the galaxy-scale halos are scaled to their observed luminosity following the scaling relations in Limousin et al. (2005).
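For reference, the sketch below encodes the PIEMD parameter set and the ellipticity definition given above. It is a bookkeeping illustration in Python only, not Lenstool's actual input format.

from dataclasses import dataclass

@dataclass
class PIEMDHalo:
    """The seven dPIE/PIEMD parameters described above (illustrative only)."""
    x: float       # position [arcsec]
    y: float       # position [arcsec]
    e: float       # ellipticity, e = (a^2 - b^2) / (a^2 + b^2)
    theta: float   # position angle [deg]
    r_core: float  # core radius [kpc]
    r_cut: float   # truncation radius [kpc]
    sigma0: float  # effective velocity dispersion [km/s]

def ellipticity(a, b):
    """Ellipticity from the semi-major (a) and semi-minor (b) axes."""
    return (a**2 - b**2) / (a**2 + b**2)

print(ellipticity(1.0, 0.5))  # -> 0.6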
4. SDSS J1004+4112

SDSS J1004+4112 was the first discovered galaxy cluster strongly lensing a quasar (Inada et al. 2003). The cluster at z = 0.68 strongly lenses a quasar at z = 1.734 into five images, with a maximum image separation of 14.″6 (Table 1). The cluster also strongly lenses several background sources at z = 2.74 (Sharon et al. 2005), z = 3.288 (Sharon 2008; Oguri 2010), and z = 3.332 (Sharon et al. 2005) (Table 2).

We used archival HST multi-color imaging from the Advanced Camera for Surveys (ACS). The SDSS J1004+4112 imaging data include GO-10509 (PI: Kochanek) ACS/F814W, F555W, F435W (10 orbits); GO-9744 (PI: Kochanek) ACS/F814W, F555W (2 orbits); and GO-10793 (PI: Gal-Yam) ACS/F814W (1 orbit). These data were originally proposed to identify multiply-imaged galaxies to construct a mass model (Sharon et al. 2005), search for the fifth quasar image (Inada et al. 2005), derive ΩΛ, perform a weak lensing analysis, and search for supernovae in massive high-redshift clusters (Sharon et al. 2010). These data also enabled studies of the spectral energy distribution of the quasar host galaxy (Ross et al. 2009), the ultraviolet upturn in red sequence galaxies (Ali et al. 2018), and active galactic nuclei (AGN) in massive clusters (Klesman & Sarajedini 2012).

We modeled SDSS J1004+4112 using one cluster-scale halo, one brightest cluster galaxy (BCG)-scale halo, and a galaxy-scale halo for each of the cluster member galaxies, four of which have their parameters optimized instead of adopting the scaling relations from Limousin et al. (2005).

We modeled the cluster using both source-plane minimization and image-plane minimization, and evaluated the quality of the models obtained by each approach. While formally the image-plane minimization resulted in a better image-plane scatter, these models produced additional quasar images that are not observed. Therefore, we proceeded with the source-plane minimization for SDSS J1004+4112 for the remainder of the analysis. We note that the best-fit lens model produced large scatter between the observed and model-predicted positions in the image plane for quasar image C. In our results, we checked what happens when image C is removed from the H0 measurement.

The model consists of 27 free parameters and 78 constraints. The HST data and the lens model for SDSS J1004+4112 are shown in Figure 1. The redshifts of the arcs in our lens model are the same as those used by Forés-Toribio et al. (2022). The strong lensing mass model parameters are reported in Table 3.

The measured time delay between images A and B (∆tAB = −38.4 ± 2.0 days) was first published in Fohlmeister et al. (2007). In this notation, a positive value of the time delay means image A leads the other image. In addition to reporting a refined value of ∆tAB = −40.6 ± 1.8 days, Fohlmeister et al. (2008) measured the time delay between images A and C (∆tAC = −821.6 ± 2.1 days) and set a lower limit of ∆tAD > 1250 days. After the completion of a 14.5 year monitoring campaign at the 1.2m Fred Lawrence Whipple Observatory (FLWO), Muñoz et al. (2022) recently presented new light curves for the four brightest images in SDSS J1004+4112, resulting in updated time delay values of ∆tAB = −43.01 ± 0.27 days, ∆tAC = −825.23 ± 0.46 days, and ∆tAD = 1633.23 ± 0.97 days (Table 4).
Figure 1. Hubble Space Telescope imaging of the three cluster-lensed quasars used to derive H0 (panels: SDSS J1004+4112, SDSS J1029+2623, SDSS J2222+2745). We computed the lens models for SDSS J1004+4112 and SDSS J1029+2623. SDSS J2222+2745 is reproduced from Sharon et al. (2017). The positions of the quasar images are denoted with the cyan letters. The critical curves, the loci of maximum magnification at a specified source redshift, are generated at the quasar redshifts (z = 1.734, z = 2.1992, and z = 2.805 for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, respectively) and are plotted in red.
5. SDSS J1029+2623

SDSS J1029+2623 is a cluster at z = 0.588 that is strongly lensing a quasar at z = 2.1992 into three images (Inada et al. 2006; Oguri et al. 2008). The quasar images are in a naked cusp configuration with a maximum image separation of 22.″5 (Table 1).

Acebron et al. (2022) reported spectroscopic redshifts of several galaxies in the field, based on Multi Unit Spectroscopic Explorer (MUSE) spectroscopy from the Very Large Telescope. They refined the redshift measurement of the quasar to z = 2.1992 (formerly reported as z = 2.197; Inada et al. 2006). The other spectroscopically confirmed objects from MUSE include a doubly-imaged galaxy at z = 2.1812, a septuply-imaged galaxy at z = 3.0275, a quadruply-imaged galaxy at z = 3.0278, a doubly-imaged galaxy at z = 1.0232, and a quadruply-imaged galaxy at z = 5.0622 (Acebron et al. 2022) (Table 2).

We used archival HST multi-color imaging from GO-12195 (PI: Oguri): WFC3/F160W (2 orbits), ACS/F814W (3 orbits), and ACS/F475W (2 orbits). These data were originally proposed to identify multiply-imaged galaxies to construct a mass model that could be used to better understand the anomalous flux ratios between two of the quasar images and the dynamical state of the cluster (Oguri et al. 2013). These HST data also enabled a weak lensing analysis and a morphology study of the quasar host galaxy (Oguri et al. 2013).

Our lens model, which builds on the results from Acebron et al. (2022) and Oguri et al. (2013), contains 48 constraints and 33 free parameters. All of the model constraints are taken from Acebron et al. (2022). The model includes two cluster-scale dark matter halos that were allowed to vary in position around the two BCGs, as well as two galaxy-scale halos that were fixed to the BCGs' positions. Additionally, a foreground galaxy (z = 0.5111 from MUSE) and a background galaxy (z = 0.6735 from MUSE) along the line of sight are both modeled at the cluster redshift, since Lenstool does not yet implement a multi-plane lensing framework. This approach improves the accuracy of the lensing analysis outputs compared to omitting these interlopers from the model (Raney et al. 2020).

Our lens model differs from Acebron et al. (2022) in the following ways. Whereas Acebron et al. (2022) include a model (Model 1) with an external shear component, we opted not to include this component, as its physical effect on the measured time delay is not well understood. Additionally, for consistency with the other clusters modeled in this paper, our galaxy-scale halos have ellipticities, whereas Acebron et al. (2022) use spherical halos. We constructed our galaxy catalog as described in Section 3, taking into account the MUSE spectroscopy to determine the red sequence (see Sharon et al. 2022). We used the ACS F814W vs. F475W color-magnitude space for selection. We identified the red sequence by fitting a line to the spectroscopic members in this phase space, with four iterations of sigma clipping; a minimal sketch of this selection is given below.
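The following sketch assumes NumPy arrays of F814W magnitudes, F475W−F814W colors, and a boolean flag for spectroscopic members; the two-sigma clipping width is an illustrative choice, with only the four clipping iterations taken from the text.

import numpy as np

def red_sequence_members(mag, color, is_spec_member, n_iter=4, n_sigma=2.0):
    """Fit a line to the cluster red sequence in color-magnitude space using
    spectroscopic members, with iterative sigma clipping, then select all
    galaxies that scatter about the fitted sequence within the final width."""
    m, c = mag[is_spec_member], color[is_spec_member]
    keep = np.ones_like(m, dtype=bool)
    for _ in range(n_iter):
        slope, intercept = np.polyfit(m[keep], c[keep], 1)
        resid = c - (slope * m + intercept)
        sigma = np.std(resid[keep])
        keep = np.abs(resid) < n_sigma * sigma
    # Apply the fitted sequence to the full photometric catalog
    resid_all = color - (slope * mag + intercept)
    return np.abs(resid_all) < n_sigma * sigma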
We found that the source-plane minimization did a better job at predicting the quasar image positions in this cluster than the image-plane minimization, possibly due to the close proximity of quasar images B and C. Once a best-fit model was obtained, we examined the posterior distribution of image predictions and rejected from the MCMC sampling steps that did not produce this lensing configuration, i.e., that did not produce two separate images for A and B on either side of the critical curve. Since these two images lie very close to the critical curve, some parameter combinations produce solutions in which these two images merge and only image A of the quasar remains, in contrast to the observed lensing evidence.

The HST data and the lens model for SDSS J1029+2623 are shown in Figure 1. The strong lensing mass model parameters are reported in Table 5.

Fohlmeister et al. (2013) published the time delay measurement between images A and B (∆tAB = 744 ± 10 days) based on a photometric monitoring campaign at the FLWO 1.2m.
6. SDSS J2222+2745

SDSS J2222+2745, discovered by Dahle et al. (2013), is a cluster at z = 0.49 that strongly lenses a quasar at z = 2.805. The quasar is imaged six times (Sharon et al. 2017) with a maximum image separation of 15.″1 (Table 1).

Spectroscopy of other lensed galaxies was obtained by the Gemini North Telescope. These data include triply-imaged and doubly-imaged knots from a galaxy at z = 4.562 and a doubly-imaged galaxy at z = 2.2963 (Sharon et al. 2017).

We used archival HST multi-color imaging from GO-13337 (PI: Sharon): WFC3/F160W, F110W (1 orbit) and ACS/F814W, F606W, F435W (6 orbits). These data were originally proposed to detect any additional quasar images and to compute a mass model (Sharon et al. 2017). Additionally, these HST data have enabled a spatially resolved study of the Lyman-alpha emission in the quasar host galaxy (Bayliss et al. 2017).

We adopted the lens model from Sharon et al. (2017), with 32 constraints and 31 free parameters. SDSS J2222+2745 is modeled with one cluster-scale halo and 167 galaxy-scale halos. Sharon et al. (2017) included as constraints triply-imaged and doubly-imaged knots at the quasar's redshift of z = 2.805, and triply-imaged and doubly-imaged knots from a galaxy at z = 4.562. Two separate triply-imaged galaxies had their redshifts left as free parameters, with priors of 2.0 ≤ z ≤ 4.0 and 3.8 ≤ z ≤ 5.0, respectively, based on photometric redshift analysis. The HST data and the lens model for SDSS J2222+2745 are shown in Figure 1. Table 5 of Sharon et al. (2017) lists the strong lensing mass model parameters.

Dahle et al. (2015) first published the time delay measurements between images A and B (∆tAB = 47.7 ± 6.0 days) and A and C (∆tAC = −722 ± 24 days). Then Dyrland (2019) reported updated values for the time delays between images A and B (∆tAB = 42.44 (+1.36/−1.44) days) and images A and C (∆tAC = −696.65 (+2.00/−2.10) days). These measurements were based on data from a monitoring campaign at the 2.5m Nordic Optical Telescope.

In the analysis that follows, we used the most up-to-date time delay values for SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745, which are listed in Table 4.
Figure 2. Constraints on H0 from three cluster-lensed quasars, SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. The histograms are created from 100 random models sampled from the MCMC. Overplotted are Gaussian fits to the distributions. Whereas individual time delay measurements produce H0 values with an average of 32% error, the error is decreased to 8.8% when the systems are analyzed together. The inverse-variance weighted mean of H0 is 71.5 km s−1 Mpc−1 (solid gray line) and the standard error of the weighted mean is 6.1 km s−1 Mpc−1. [Panel legend: 1004 AB, 1004 AC, 1004 AD, 1029 AB, 2222 AB, 2222 AC; x-axis: H0 (km s−1 Mpc−1).]
7. RESULTS

Using the outputs of the lens models described in the previous sections, we computed the model-predicted time delay values for each of the quasar images in each cluster field with respect to image A of the quasar (Equation 3 and Table 6).

The time delay is a sensitive function of the positions of the source (β⃗) and its multiple images (θ⃗1, θ⃗2). The unobservable source position and the locations of its multiple images are strongly coupled to the time delay, since stationary points in the arrival time surface determine the image-plane positions of multiple images of any given source-plane position (see Section 2). It is therefore important to measure time delays self-consistently, by obtaining the time delay at the image positions predicted by the same lensing potential. Lens models are never perfect, and small scatter between observed and predicted positions is expected. To maintain this self-consistency, we calculated the source position β⃗ by ray-tracing the observed position of image A (θ⃗A) through the lens equation, and used the same lens model to predict the image-plane positions of its counter images (θ⃗2, θ⃗3, ...). The time delay was then calculated from these predicted positions, rather than the observed positions, which may be slightly shifted from the stationary points in the Fermat potential. The scatter in the image or source plane contributes to the error budget through the MCMC exploration of the parameter space. An alternative approach to determining the source position would be averaging the predicted source locations from all the quasar images, and calculating the predicted image locations of the average source.
+
Using Equation 4, we computed the H0 value cor-
|
| 663 |
+
responding to each independent published time delay
|
| 664 |
+
value and corresponding predicted time delays. To gen-
|
| 665 |
+
erate the 1σ uncertainties in H0, we used 100 random
|
| 666 |
+
models from the MCMC sampling of the parameter
|
| 667 |
+
space for each cluster.
|
| 668 |
+
The number of measured time delays in each field de-
|
| 669 |
+
termines the number of H0 measurements derived from
|
| 670 |
+
each cluster: three from SDSS J1004+4112, one from
|
| 671 |
+
SDSS J1029+2623, and two from SDSS J2222+2745, for
|
| 672 |
+
a total of six H0 measurements. Table 7 lists the derived
|
| 673 |
+
H0 values and uncertainties, obtained for the “best” lens
|
| 674 |
+
model, i.e., the one producing the smallest scatter, and
|
| 675 |
+
for the full posterior distribution.
|
| 676 |
+
The resulting H0 measurement from each quasar pair
|
| 677 |
+
has large uncertainties due to the complexity of the lens
|
| 678 |
+
and systematic uncertainties in the lens modeling pro-
|
| 679 |
+
cess. However, given that all three of these systems re-
|
| 680 |
+
side in the same universe, they all must have the same
|
| 681 |
+
H0; we can leverage these three independent lines of
|
| 682 |
+
sight, with six time delays, to obtain a tighter constraint
|
| 683 |
+
than what is possible from a single time delay. We com-
|
| 684 |
+
bine the results from the six time delays by taking the
|
| 685 |
+
inverse-variance weighted mean of the six H0 measure-
|
| 686 |
+
ments, sampled from their posterior distributions, mak-
|
| 687 |
+
ing sure to account for the correlation between measure-
|
| 688 |
+
ments made in the same line of sight. We note that the
|
| 689 |
+
observational time delay measurement uncertainties are
|
| 690 |
+
negligible compared to the lens modeling uncertainties.
|
| 691 |
+
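Schematically, the combination step looks as follows. This sketch treats the six measurements as independent for brevity, whereas the full analysis accounts for same-line-of-sight correlations, and the input numbers are placeholders rather than the Table 7 values.

import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    w = 1.0 / np.asarray(sigmas)**2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err

# Placeholder per-pair H0 values and 1-sigma widths [km/s/Mpc]
h0_vals  = [74.0, 68.0, 71.0, 80.0, 65.0, 72.0]
h0_sigma = [20.0, 25.0, 18.0, 30.0, 22.0, 24.0]
print(weighted_mean(h0_vals, h0_sigma))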
The inverse-variance weighted mean and the standard error of the weighted mean of H0 is 71.5 ± 6.1 km s−1 Mpc−1 (Fig. 2). Combining the H0 values derived from multiple time delay values improves the constraints on H0, decreasing the uncertainty from ∼32% for an individual H0 measurement to 8.8% for the sample. If SDSS J1004+4112's quasar image C is excluded from the analysis (see Section 4), we obtain H0 = 73.7 ± 7.5 km s−1 Mpc−1.
8. DISCUSSION

Our analysis provides an independent H0 measurement that is not sensitive to the same systematics as other methods. Albeit with a larger fractional uncertainty, our H0 measurement (71.5 ± 6.1 km s−1 Mpc−1) falls between the lower H0 values from the CMB (67.4 ± 0.5 km s−1 Mpc−1; Planck Collaboration et al. 2020) and TRGB (69.8 ± 0.6 (stat) ± 1.6 (sys); Freedman 2021) and the higher H0 value from Cepheids (73.2 ± 1.3 km s−1 Mpc−1; Riess et al. 2021), and is consistent with all three.

Increasing the number of systems used for a combined time-delay measurement of H0 will improve this method's competitiveness with CMB-based and distance ladder-based methods. Although three other cluster-lensed quasars are published in the literature, none has all of the time delay measurements, imaging at space-based resolution, and spectroscopic redshifts of secondary arcs necessary for a measurement of H0. All three of the other published cluster-lensed quasars have ongoing photometric monitoring campaigns to measure their time delays. Additionally, one of the other three systems, COOL J0542-2125 (Martinez et al. 2022), will be observed by HST in Cycle 30 (GO-17243; PI: Napier).
To estimate the improvement in the H0 constraint from a sample of twice as many time delay measurements, we simulated H0 distributions guided by the existing sample, as follows (a sketch of this forecast is given after this paragraph). We randomly selected six integer H0 values between 50 and 150 km s−1 Mpc−1, as this is the range spanned by the peaks of the six H0 distributions from SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. We then randomly assigned to each of these six H0 values the standard deviation of one of the six measured H0 distributions (Table 7), and produced the corresponding Gaussian distributions. We repeated this simulation process 100 times. Incorporating these six new H0 distributions, for a total of 12 constraints, and averaging the 100 iterations gave a standard error of the weighted mean of 4.5 km s−1 Mpc−1. Therefore, doubling the number of systems results in a ∼30% improvement in the constraint on H0, reducing the uncertainty on H0 from 8.8% to 6.3%.
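A sketch of this forecast is below; the width list is a placeholder standing in for the six measured standard deviations in Table 7, and, as the comment notes, the standard error of the weighted mean depends only on the widths, not on where the simulated peaks fall within 50-150 km s−1 Mpc−1.

import numpy as np

rng = np.random.default_rng(0)
sigmas_measured = np.array([20.0, 25.0, 18.0, 30.0, 22.0, 24.0])  # placeholders

def forecast_error(n_new, n_trials=100):
    """Average standard error of the weighted mean after adding n_new
    simulated time delay measurements to the existing six."""
    errs = []
    for _ in range(n_trials):
        sig = np.concatenate([sigmas_measured,
                              rng.choice(sigmas_measured, size=n_new)])
        # The centers drawn uniformly in 50-150 do not affect the standard
        # error, which depends only on the widths:
        errs.append(np.sqrt(1.0 / np.sum(1.0 / sig**2)))
    return np.mean(errs)

print(forecast_error(6))    # doubling the sample
print(forecast_error(494))  # ~500 total time delays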
A 1% uncertainty measurement of H0 from cluster-lensed quasars would be competitive with the current precision level of CMB and distance ladder methods. Extending the simulation described above to a larger number of systems, we estimated that ∼500 time delay measurements from cluster-lensed quasars would achieve a 1% uncertainty level on H0. Based on SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745 each having an average of two time delay measurements, a sample size of 250 cluster-lensed quasars would be needed to produce 500 time delay measurements. Future surveys are expected to detect of order ∼50 such systems in the next decade (Robertson et al. 2020). Therefore, this increase in sample size alone will not achieve 1% uncertainty in H0; to reach 1% with of order 50 systems (100 time delays) will require a decrease in the lens modeling uncertainties by about a factor of two, on average. Future work will explore whether this decrease in the uncertainties is feasible.
ACKNOWLEDGMENTS

Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Multimission Archive at the Space Telescope Science Institute (MAST), which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS 5-26555. These archival observations are associated with programs GO-10509, GO-9744, GO-10793, GO-12195, and GO-13337. Support for HST program AR-16150, which enabled this work, was provided through grants from the STScI under NASA contract NAS5-26555. Co-author GM acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. MARACHAS - DLV-896778. We thank Ana Acebron for her useful discussions about SDSS J1029+2623.

Facilities: HST(ACS); HST(WFC3); HST(MAST)

Software: Lenstool (Jullo et al. 2007); Source Extractor (Bertin & Arnouts 1996)
REFERENCES

Acebron, A., Grillo, C., Bergamini, P., et al. 2022, ApJ, 926, 86, doi: 10.3847/1538-4357/ac3d35
Ali, S. S., Bremer, M. N., Phillipps, S., & De Propris, R. 2018, MNRAS, 480, 2236, doi: 10.1093/mnras/sty1988
Bayliss, M. B., Sharon, K., Acharyya, A., et al. 2017, ApJL, 845, L14, doi: 10.3847/2041-8213/aa831a
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Birrer, S., Millon, M., Sluse, D., et al. 2022, arXiv e-prints, arXiv:2210.10833, doi: 10.48550/arXiv.2210.10833
Birrer, S., & Treu, T. 2021, A&A, 649, A61, doi: 10.1051/0004-6361/202039179
Birrer, S., Shajib, A. J., Galan, A., et al. 2020, A&A, 643, A165, doi: 10.1051/0004-6361/202038861
Dahle, H., Gladders, M. D., Sharon, K., Bayliss, M. B., & Rigby, J. R. 2015, ApJ, 813, 67, doi: 10.1088/0004-637X/813/1/67
Dahle, H., Gladders, M. D., Sharon, K., et al. 2013, ApJ, 773, 146, doi: 10.1088/0004-637X/773/2/146
Dyrland, K. 2019, Master's thesis, University of Oslo
Elíasdóttir, Á., Limousin, M., Richard, J., et al. 2007, arXiv e-prints, arXiv:0710.5636. https://arxiv.org/abs/0710.5636
Fohlmeister, J., Kochanek, C. S., Falco, E. E., Morgan, C. W., & Wambsganss, J. 2008, ApJ, 676, 761, doi: 10.1086/528789
Fohlmeister, J., Kochanek, C. S., Falco, E. E., et al. 2013, ApJ, 764, 186, doi: 10.1088/0004-637X/764/2/186
—. 2007, ApJ, 662, 62, doi: 10.1086/518018
Forés-Toribio, R., Muñoz, J. A., Kochanek, C. S., & Mediavilla, E. 2022, ApJ, 937, 35, doi: 10.3847/1538-4357/ac8c40
Freedman, W. L. 2021, ApJ, 919, 16, doi: 10.3847/1538-4357/ac0e95
Gladders, M. D., & Yee, H. K. C. 2000, AJ, 120, 2148, doi: 10.1086/301557
Hubble, E. 1929, Proceedings of the National Academy of Science, 15, 168, doi: 10.1073/pnas.15.3.168
Inada, N., Oguri, M., Pindor, B., et al. 2003, Nature, 426, 810, doi: 10.1038/nature02153
Inada, N., Oguri, M., Keeton, C. R., et al. 2005, PASJ, 57, L7, doi: 10.1093/pasj/57.3.L7
Inada, N., Oguri, M., Morokuma, T., et al. 2006, ApJL, 653, L97, doi: 10.1086/510671
Johnson, T. L., & Sharon, K. 2016, ApJ, 832, 82, doi: 10.3847/0004-637X/832/1/82
Jullo, E., Kneib, J. P., Limousin, M., et al. 2007, New Journal of Physics, 9, 447, doi: 10.1088/1367-2630/9/12/447
Klesman, A. J., & Sarajedini, V. L. 2012, MNRAS, 425, 1215, doi: 10.1111/j.1365-2966.2012.21508.x
Kochanek, C. S. 2020, MNRAS, 493, 1725, doi: 10.1093/mnras/staa344
—. 2021, MNRAS, 501, 5021, doi: 10.1093/mnras/staa4033
Lemaître, G. 1927, Annales de la Société Scientifique de Bruxelles, 47, 49
Lemon, C., Anguita, T., Auger, M., et al. 2022, arXiv e-prints, arXiv:2206.07714. https://arxiv.org/abs/2206.07714
Lemon, C. A., Auger, M. W., & McMahon, R. G. 2019, MNRAS, 483, 4242, doi: 10.1093/mnras/sty3366
Limousin, M., Kneib, J.-P., & Natarajan, P. 2005, MNRAS, 356, 309, doi: 10.1111/j.1365-2966.2004.08449.x
Martinez, M. N., Napier, K. A., Cloonan, A. P., et al. 2022, arXiv e-prints, arXiv:2209.03972. https://arxiv.org/abs/2209.03972
Moresco, M., Amati, L., Amendola, L., et al. 2022, arXiv e-prints, arXiv:2201.07241. https://arxiv.org/abs/2201.07241
Muñoz, J. A., Kochanek, C. S., Fohlmeister, J., et al. 2022, arXiv e-prints, arXiv:2206.08597. https://arxiv.org/abs/2206.08597
Oguri, M. 2010, PASJ, 62, 1017, doi: 10.1093/pasj/62.4.1017
Oguri, M., Ofek, E. O., Inada, N., et al. 2008, ApJL, 676, L1, doi: 10.1086/586897
Oguri, M., Schrabback, T., Jullo, E., et al. 2013, MNRAS, 429, 482, doi: 10.1093/mnras/sts351
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6, doi: 10.1051/0004-6361/201833910
Raney, C. A., Keeton, C. R., & Brennan, S. 2020, MNRAS, 492, 503, doi: 10.1093/mnras/stz3116
Refsdal, S. 1964, MNRAS, 128, 307, doi: 10.1093/mnras/128.4.307
Riess, A. G., Casertano, S., Yuan, W., et al. 2021, ApJL, 908, L6, doi: 10.3847/2041-8213/abdbaf
Robertson, A., Smith, G. P., Massey, R., et al. 2020, MNRAS, 495, 3727, doi: 10.1093/mnras/staa1429
Ross, N. R., Assef, R. J., Kochanek, C. S., Falco, E., & Poindexter, S. D. 2009, ApJ, 702, 472, doi: 10.1088/0004-637X/702/1/472
Schneider, P. 1985, A&A, 143, 413
Sharon, K. 2008, PhD thesis, Tel Aviv University, Israel
Sharon, K., Chen, M. C., Mahler, G., Coe, D., & the RELICS Collaboration. 2022, arXiv e-prints, arXiv:2208.08483. https://arxiv.org/abs/2208.08483
Sharon, K., Ofek, E. O., Smith, G. P., et al. 2005, ApJL, 629, L73, doi: 10.1086/452633
Sharon, K., Gal-Yam, A., Maoz, D., et al. 2010, ApJ, 718, 876, doi: 10.1088/0004-637X/718/2/876
Sharon, K., Bayliss, M. B., Dahle, H., et al. 2017, ApJ, 835, 5, doi: 10.3847/1538-4357/835/1/5
Shu, Y., Koposov, S. E., Evans, N. W., et al. 2019, MNRAS, 489, 4741, doi: 10.1093/mnras/stz2487
Shu, Y., Marques-Chaves, R., Evans, N. W., & Pérez-Fournon, I. 2018, MNRAS, 481, L136, doi: 10.1093/mnrasl/sly174
Wong, K. C., Suyu, S. H., Chen, G. C. F., et al. 2020, MNRAS, 498, 1420, doi: 10.1093/mnras/stz3094
Target          | QSO Image | QSO z  | RA [J2000]  | Decl. [J2000] | µ
SDSS J1004+4112 | A         | 1.734  | 151.1450074 | 41.2109193    | 26.0±5.4
                | B         | 1.734  | 151.1454888 | 41.2119003    | 9.2±1.0
                | C         | 1.734  | 151.1409266 | 41.2096668    | 6.0±0.5
                | D         | 1.734  | 151.1419060 | 41.2136092    | 9.2±1.9
                | E         | 1.734  | 151.1423413 | 41.2122017    | 0.3±0.05
SDSS J1029+2623 | A         | 2.1992 | 157.3081009 | 26.3883044    | 6.1±0.4
                | B         | 2.1992 | 157.3093619 | 26.39446237   | 24.7±4.2
                | C         | 2.1992 | 157.3095755 | 26.3939894    | 3.7±8.0
SDSS J2222+2745 | A         | 2.805  | 335.537707  | 27.760543     | 15.4±5.7
                | B         | 2.805  | 335.53669   | 27.761119     | 8.0±4.3
                | C         | 2.805  | 335.53296   | 27.760505     | 7.1±2.3
                | D         | 2.805  | 335.536205  | 27.758901     | 1.3±0.4
                | E         | 2.805  | 335.536007  | 27.758248     | 0.8±0.2
                | F         | 2.805  | 335.535874  | 27.759723     | 1.0±0.4

Table 1. The quasar image positions and redshifts. Also included are the magnifications at the observed positions of the quasar images.
System          | ID    | R.A. [J2000] | Decl. [J2000] | z
SDSS J1004+4112 | QSO-A | 151.1450074  | 41.2109193    | 1.734
                | QSO-B | 151.1454888  | 41.2119003    | 1.734
                | QSO-C | 151.1409266  | 41.2096668    | 1.734
                | QSO-D | 151.1419060  | 41.2136092    | 1.734
                | QSO-E | 151.1423413  | 41.2122017    | 1.734
                | 2.1   | 151.1418821  | 41.2102917    | 2.74
                | 2.2   | 151.1468800  | 41.2153908    | 2.74
                | 21.1  | 151.1417325  | 41.2103272    | 2.74
                | 21.2  | 151.1470383  | 41.2153011    | 2.74
                | 21.3  | 151.1419526  | 41.2116044    | 2.74
                | 22.1  | 151.1416225  | 41.2103033    | 2.74
                | 22.2  | 151.1471250  | 41.2152436    | 2.74
                | 3.1   | 151.1414121  | 41.2099250    | 3.288
                | 3.2   | 151.1476847  | 41.2152121    | 3.288
                | 31.1  | 151.1413250  | 41.2099825    | 3.288
                | 31.2  | 151.1477393  | 41.2151976    | 3.288
                | 32.1  | 151.1412104  | 41.2100544    | 3.288
                | 32.2  | 151.1478065  | 41.2151979    | 3.288
                | 33.1  | 151.1411279  | 41.2101547    | 3.288
                | 33.2  | 151.1478809  | 41.2151884    | 3.288
                | 33.3  | 151.1418864  | 41.2116948    | 3.288
                | 4.1   | 151.1439081  | 41.2165866    | 3.332
                | 4.2   | 151.1382517  | 41.2153846    | 3.332
                | 4.3   | 151.1379048  | 41.2149959    | 3.332
                | 4.4   | 151.1434099  | 41.2103752    | 3.332
                | 41.1  | 151.1441118  | 41.2165193    | 3.332
                | 41.2  | 151.1383309  | 41.2153297    | 3.332
                | 41.3  | 151.1378932  | 41.2148820    | 3.332
                | 41.4  | 151.1434562  | 41.2102573    | 3.332
                | 42.1  | 151.1444522  | 41.2163884    | 3.332
                | 42.2  | 151.1383940  | 41.2153469    | 3.332
                | 42.3  | 151.1378407  | 41.2148091    | 3.332
                | 42.4  | 151.1434818  | 41.2101761    | 3.332
                | 43.1  | 151.1445319  | 41.2162919    | 3.332
                | 43.2  | 151.1384506  | 41.2154232    | 3.332
                | 43.3  | 151.1376594  | 41.2145747    | 3.332
                | 43.4  | 151.1435603  | 41.2101349    | 3.332
                | 43.5  | 151.1424833  | 41.2118271    | 3.332
SDSS J1029+2623 | QSO-A | 157.3081009  | 26.38830445   | 2.1992
                | QSO-B | 157.3093619  | 26.39446237   | 2.1992
                | QSO-C | 157.3095755  | 26.3939894    | 2.1992
                | 1.1   | 157.2980611  | 26.3907404    | ···
                | 1.2   | 157.2978817  | 26.3924467    | ···
                | 1.3   | 157.3008758  | 26.3974054    | ···
                | 2.1   | 157.2981743  | 26.3915325    | 2.1812
                | 2.3   | 157.3014749  | 26.3977063    | 2.1812
                | 3.1   | 157.2990642  | 26.3923892    | 3.0275
                | 3.2   | 157.3074114  | 26.3913469    | 3.0275
                | 3.3   | 157.3041512  | 26.3982630    | 3.0275
                | 3.4   | 157.3015481  | 26.3880193    | 3.0275
                | 3.5   | 157.3017377  | 26.3879213    | 3.0275
                | 3.6   | 157.3018385  | 26.3878900    | 3.0275
                | 3.7   | 157.3032208  | 26.3919632    | 3.0275
                | 4.1   | 157.2992278  | 26.3925219    | 3.0278
                | 4.2   | 157.3076382  | 26.3913247    | 3.0278
                | 4.3   | 157.3043869  | 26.3981437    | 3.0278
                | 4.4   | 157.3023985  | 26.3877048    | 3.0278
                | 4.5   | 157.3035100  | 26.3920169    | 3.0278
                | 5.1   | 157.3019777  | 26.3946563    | 1.0232
                | 5.3   | 157.3008781  | 26.3917377    | 1.0232
                | 7.1   | 157.3075794  | 26.3951262    | 5.0622
                | 7.2   | 157.3064130  | 26.3960500    | 5.0622
                | 7.3   | 157.3014210  | 26.3936610    | 5.0622
                | 7.4   | 157.3012420  | 26.3938020    | 5.0622

Table 2. Positions and spectroscopic redshifts of the multiply-imaged background sources used as constraints in the strong lens models for SDSS J1004+4112 and SDSS J1029+2623. See Table 1 from Sharon et al. (2017) for the lensing constraints for SDSS J2222+2745.
Component No. | ∆ R.A. [″] | ∆ Decl. [″] | e | θ [deg] | σ0 [km s−1] | rcut [kpc] | rcore [kpc]
1 | -0.085 (+2.56/−0.53) | 3.07 (+5.83/−1.30) | 0.17 (+0.022/−0.030) | 66.39 (+3.70/−3.22) | 987 (+245/−84) | [1500] | 126.27 (+112.43/−33.97)
2 | [0] | [0] | [0.40] | 63.98 (+4.34/−5.31) | 461 (+48/−52) | 181.42 (+13.77/−28.04) | 5.65 (+0.99/−1.62)
3 | [1.963] | [-1.832] | 0.42 (+0.25/−0.19) | [349.480] | 235 (+10/−14) | 30.30 (+7.045/−12.29) | 2.68 (+0.99/−0.68)
4 | [7.659] | [-9.821] | 0.43 (+0.22/−0.29) | [131.13] | 127 (+33/−29) | 20.13 (+6.64/−8.33) | 1.62 (+1.48/−1.06)
5 | [-8.463] | [-3.877] | 0.44 (+0.24/−0.27) | [133.89] | 114 (+31/−28) | 13.28 (+2.97/−2.97) | 2.26 (+0.92/−1.20)
6 | [11.220] | [11.401] | 0.42 (+0.29/−0.29) | 150.24 (+22.22/−34.44) | 76 (+9/−7) | 22.46 (+5.79/−6.85) | 3.18 (+0.85/−0.85)

Table 3. Strong lensing mass model parameters for SDSS J1004+4112. Median values and the 1σ confidence level from the MCMC are reported. The coordinates ∆ R.A. and ∆ Decl. are listed in arcseconds measured east and north from the core of Component No. 2 at [RA, Dec] = [151.142381, 41.212131]. The other parameters are the ellipticity e, the position angle θ, the velocity dispersion σ0, the cut radius rcut, and the core radius rcore. The parameters listed in square brackets were not optimized.
Target Name
|
| 1360 |
+
z clus-
|
| 1361 |
+
ter
|
| 1362 |
+
z QSO
|
| 1363 |
+
no.
|
| 1364 |
+
QSO
|
| 1365 |
+
im
|
| 1366 |
+
widest
|
| 1367 |
+
sepa-
|
| 1368 |
+
ration
|
| 1369 |
+
[′′]
|
| 1370 |
+
no.
|
| 1371 |
+
of
|
| 1372 |
+
lensed
|
| 1373 |
+
sources
|
| 1374 |
+
no.
|
| 1375 |
+
of
|
| 1376 |
+
spec-
|
| 1377 |
+
zs
|
| 1378 |
+
time delay (days)
|
| 1379 |
+
Reference
|
| 1380 |
+
SDSS J1004+4112
|
| 1381 |
+
0.68
|
| 1382 |
+
1.734
|
| 1383 |
+
5
|
| 1384 |
+
14.6
|
| 1385 |
+
4
|
| 1386 |
+
4
|
| 1387 |
+
∆tAB = −43.01 ± 0.27
|
| 1388 |
+
Mu˜noz+(2022)
|
| 1389 |
+
∆tAC = −825.23 ± 0.46
|
| 1390 |
+
∆tAD = 1633.23 ± 0.97
|
| 1391 |
+
SDSS J1029+2623
|
| 1392 |
+
0.58
|
| 1393 |
+
2.1992
|
| 1394 |
+
3
|
| 1395 |
+
22.5
|
| 1396 |
+
7
|
| 1397 |
+
6
|
| 1398 |
+
∆tAB = 744 ± 10
|
| 1399 |
+
Fohlmeister+(2013)
|
| 1400 |
+
SDSS J2222+2745
|
| 1401 |
+
0.49
|
| 1402 |
+
2.805
|
| 1403 |
+
6
|
| 1404 |
+
15.1
|
| 1405 |
+
5
|
| 1406 |
+
3
|
| 1407 |
+
∆tAB = 42.44+1.36
|
| 1408 |
+
−1.44
|
| 1409 |
+
Dyrland (2019)
|
| 1410 |
+
∆tAC = −696.65+2.00
|
| 1411 |
+
−2.10
|
| 1412 |
+
Table 4. The three large separation lensed QSOs in the HST archive. The listed time delays are the most up-to-date values
|
| 1413 |
+
from the literature. See Fohlmeister et al. (2008) and Dahle et al. (2015) for previous measurements for SDSS J1004+4112 and
|
| 1414 |
+
SDSS J2222+2745, respectively.
|
Component No.   ΔR.A. [″]                ΔDecl. [″]               e                        θ [deg]                    σ0 [km s⁻¹]       rcut [kpc]                rcore [kpc]
1               −10.01^{+0.53}_{−0.62}   0.71^{+0.25}_{−0.23}     0.53^{+0.031}_{−0.034}   172.80^{+2.24}_{−2.27}     650^{+21}_{−20}   [1500]                    31.39^{+4.37}_{−3.78}
2               3.04^{+1.16}_{−1.38}     3.62^{+0.46}_{−0.58}     0.55^{+0.052}_{−0.055}   17.25^{+4.87}_{−5.10}      528^{+30}_{−20}   [1500]                    37.95^{+6.42}_{−6.62}
3               2.48^{+1.35}_{−1.25}     −0.11^{+1.83}_{−2.35}    0.61^{+0.10}_{−0.062}    45.57^{+7.24}_{−9.24}      385^{+43}_{−52}   [1500]                    57.82^{+9.47}_{−11.86}
4               [−3.808]                 [−1.354]                 0.51^{+0.19}_{−0.21}     69.07^{+19.26}_{−15.61}    202^{+20}_{−19}   33.64^{+7.88}_{−6.82}     1.92^{+0.52}_{−0.86}
5               [−19.7]                  [−8.8]                   [0.0]                    [0.0]                      169^{+30}_{−24}   89.94^{+19.27}_{−19.47}   [0.0]
6               −23.87^{+0.13}_{−0.11}   6.50^{+0.14}_{−0.12}     0.30^{+0.29}_{−0.20}     52.06^{+26.58}_{−38.88}    64^{+7}_{−5}      32.65^{+11.13}_{−16.82}   0.51^{+0.30}_{−0.31}

Table 5. Strong lensing mass model parameters for SDSS J1029+2623. Median values and the 1σ confidence level from the MCMC are reported. The coordinates ΔR.A. and ΔDecl. are listed in arcseconds measured east and north from [RA, Dec] = [157.302047, 26.392209]. The other parameters are the ellipticity e, the position angle θ, the velocity dispersion σ0, the cut radius rcut, and the core radius rcore. The parameters listed in square brackets were not optimized.
System             ΔtAB    ΔtAC    ΔtAD    ΔtAE    ΔtAF
SDSS J1004+4112    −11     −783    1294    1776    N/A
SDSS J1029+2623    1060    1054    N/A     N/A     N/A
SDSS J2222+2745    54      −693    485     564     431

Table 6. Predicted time delays (in days) from the ‘best’ lens model for each cluster. The values are measured at the model-predicted locations of the quasar images, assuming H0 = 70 km s⁻¹ Mpc⁻¹.
System             Image pair   H0 from best model   H0 (mean ± 1σ)
                                [km s⁻¹ Mpc⁻¹]       [km s⁻¹ Mpc⁻¹]
SDSS J1004+4112    AB           17.4                 56.4 ± 35.0
                   AC           66.4                 55.8 ± 17.9
                   AD           55.5                 69.3 ± 8.2
SDSS J1029+2623    AB           99.7                 93.6 ± 37.8
SDSS J2222+2745    AB           89.1                 109.0 ± 24.1
                   AC           69.6                 74.8 ± 15.8

Table 7. H0 constraints from the time delay measurements in SDSS J1004+4112, SDSS J1029+2623, and SDSS J2222+2745. The ‘from best model’ column is the H0 value from the ‘best’ lens model for each cluster. The ‘mean ± 1σ’ column lists the mean and 1σ from the Gaussian distribution fit to the H0 values determined from 100 random models drawn from the MCMC.
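Since a lensing time delay scales inversely with H0, each measured delay in Table 4 converts the fiducial (H0 = 70) prediction of Table 6 into an H0 estimate. A minimal sketch of that rescaling, not the authors' pipeline; it reproduces the 'from best model' column of Table 7 to within the rounding of the whole-day predictions:

```python
# Predicted delays (days, Table 6, computed at H0 = 70 km/s/Mpc)
# paired with measured delays (days, Table 4).
H0_FID = 70.0  # km/s/Mpc, the fiducial value assumed for Table 6

pairs = {
    ("SDSS J1004+4112", "AB"): (-11.0, -43.01),
    ("SDSS J1004+4112", "AC"): (-783.0, -825.23),
    ("SDSS J1004+4112", "AD"): (1294.0, 1633.23),
    ("SDSS J1029+2623", "AB"): (1060.0, 744.0),
    ("SDSS J2222+2745", "AB"): (54.0, 42.44),
    ("SDSS J2222+2745", "AC"): (-693.0, -696.65),
}

for (system, pair), (dt_pred, dt_meas) in pairs.items():
    h0 = H0_FID * dt_pred / dt_meas  # time delay scales as 1/H0
    print(f"{system} {pair}: H0 ~ {h0:.1f} km/s/Mpc")

# e.g. J1004 AD: 70 * 1294 / 1633.23 = 55.5, matching Table 7; small
# offsets elsewhere reflect Table 6's predictions being rounded to days.
```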
GdFIT4oBgHgl3EQfWysJ/content/tmp_files/load_file.txt ADDED
    The diff for this file is too large to render. See raw diff.

H9E3T4oBgHgl3EQfuQuc/content/2301.04683v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7561dd5c2cee88d8893ed27d5821e2756c35a61e41acc893b1f89f7ac38ac0db
+size 665242

H9E3T4oBgHgl3EQfuQuc/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:622aa1ff94403eb9953f47fda612427887cf64fe6e8f5ed3652a6df55ce9f2c2
+size 2687021

H9E3T4oBgHgl3EQfuQuc/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:caac45291d0d87a25256eca2b3df4011a56a4bc99699aab3669273b930a9b6d2
+size 112749
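The three ADDED entries above are Git LFS pointer stubs: three key-value lines (version, oid, size) standing in for the binary payload. A minimal sketch of reading such a stub; `parse_lfs_pointer` is a hypothetical helper, not part of this repository:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key-value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7561dd5c2cee88d8893ed27d5821e2756c35a61e41acc893b1f89f7ac38ac0db
size 665242"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))  # sha256:7561... 665242
```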