Commit 6a24cda · Parent: ba6dc6d

Update README

Changed files:
- README.md +10 -2
- datasets/README.md +0 -33
- docs/CONTRIBUTING.md +26 -0
- docs/large_files.md +0 -31
README.md CHANGED

````diff
@@ -4,7 +4,16 @@ license: mit
 
 # WannierDatasets
 
-Datasets
+Datasets of input files for Wannier functions.
+
+## List of datasets
+
+- `Si2_valence`: Silicon valence band only
+- `Si2`: Silicon valence and conduction bands
+- `Cu`: copper, metal
+- `CrI3`: chromium triiodide, magnetic calculation
+
+## Why this repo?
 
 Specifically, this repo
 
@@ -30,7 +39,6 @@ to manage the datasets. This allows us to
 - [`datasets/`](./datasets/) each subfolder contains a dataset for a specific system
 - [`pseudo/`](./pseudo/) pseudopotentials used when generating the datasets
 - [`src/`](./src/) a fake folder just to make `Project.toml` happy
-- [`util/`](./util/) Several small scripts that help with running the examples
 
 ## Contributing
 
````
datasets/README.md DELETED

````diff
@@ -1,33 +0,0 @@
-# Datasets
-
-## List of datasets
-
-- `Si2_valence`: Silicon valence band only
-- `Si2`: Silicon valence and conduction bands
-- `Cu`: copper, metal
-- `CrI3`: chromium triiodide, magnetic calculation
-
-## Dataset generation
-
-These files are generated by the respective `creator/run.sh` script in each subdirectory.
-The `creator` subdirectory contains all the scripts and input files for DFT codes
-to generate the Wannier input `amn/mmn/eig/...` files.
-
-The Fortran binary (also called unformatted) files are written by QE binaries
-which are compiled with
-
-```bash
-GNU Fortran (Ubuntu 11.2.0-19ubuntu1) 11.2.0
-```
-
-To add a new dataset:
-
-- Create a new subdirectory, e.g. `Si2`
-- Put all the input files for the DFT code in the subdirectory `Si2/creator/`
-- Create a `run.sh` script in `creator/` which runs the DFT code and generates the `amn/mmn/eig/...` files
-- Move the `amn/mmn/eig/...` files to `Si2/`
-- (Optional) Create a `README.md` file in `Si2/` which describes the dataset
-- (Optional) Add reference results in a subdirectory `Si2/reference/`
-
-Our goal is that the `run.sh` script should be able to reproduce the `amn/mmn/eig/...` files
-on any machine, so we can easily regenerate the dataset if we need to.
````
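For concreteness, the generation workflow this deleted file describes could look roughly like the following `creator/run.sh`, assuming Quantum ESPRESSO and Wannier90; the input file names (`scf.in`, `nscf.in`, `pw2wan.in`) and the `Si2` seedname are illustrative, not taken from the repository.

```bash
#!/bin/bash
# Minimal sketch of a dataset-generation script (file names are hypothetical).
# Assumes pw.x, wannier90.x, and pw2wannier90.x are on PATH.
set -euo pipefail

# 1. Self-consistent ground-state calculation
pw.x -in scf.in > scf.out

# 2. Non-self-consistent run on the uniform k-point grid Wannier90 needs
pw.x -in nscf.in > nscf.out

# 3. Pre-processing run: writes Si2.nnkp listing the required overlaps
wannier90.x -pp Si2

# 4. Compute the overlap/projection matrices: Si2.amn, Si2.mmn, Si2.eig
pw2wannier90.x -in pw2wan.in > pw2wan.out

# 5. Move the generated files up into the dataset folder
mv Si2.amn Si2.mmn Si2.eig ..
```

The exact steps depend on the DFT code, but keeping the whole sequence in one script is what makes the dataset reproducible on any machine.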
docs/CONTRIBUTING.md CHANGED

````diff
@@ -11,6 +11,32 @@ In general, we would like to
 to ensure reproducibility. This also allows us to regenerate the datasets
 if needed.
 
+## Dataset generation
+
+Each folder in [`../datasets`](../datasets) is a standalone dataset for one
+material; its files are generated by the respective `inputs/run.sh` script.
+The `inputs` subdirectory contains all the scripts and input files for the
+DFT codes to generate the Wannier input `amn/mmn/eig/...` files.
+
+The Fortran binary (also called unformatted) files are written by QE binaries
+which are compiled with
+
+```bash
+GNU Fortran (Ubuntu 11.2.0-19ubuntu1) 11.2.0
+```
+
+To add a new dataset:
+
+- Create a new subdirectory, e.g. `Si2`
+- Put all the input files for the DFT code in the subdirectory `Si2/inputs/`
+- Create a `run.sh` script in `inputs/` which runs the DFT code and generates the `amn/mmn/eig/...` files
+- Move the `amn/mmn/eig/...` files to `Si2/`
+- (Optional) Create a `README.md` file in `Si2/` which describes the dataset
+- (Optional) Add reference results in a subdirectory `Si2/outputs/`
+
+Our goal is that the `run.sh` script should be able to reproduce the `amn/mmn/eig/...` files
+on any machine, so we can easily regenerate the dataset if needed.
+
 ## All Code Changes Happen Through Pull Requests
 
 Pull requests are the best way to propose changes to the codebase.
````
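As a rough sketch of the checklist added above, creating a hypothetical `MoS2` dataset could look like this; the material, file names, and `run.sh` contents are examples only:

```bash
# Sketch of adding a new dataset following the steps above (MoS2 is hypothetical).
mkdir -p datasets/MoS2/inputs

# Put the DFT input files and the generation script into inputs/
cp scf.in nscf.in pw2wan.in MoS2.win run.sh datasets/MoS2/inputs/

# Run the script; it should produce the amn/mmn/eig/... files
(cd datasets/MoS2/inputs && bash run.sh)

# Move the generated Wannier input files up into the dataset folder
mv datasets/MoS2/inputs/MoS2.{amn,mmn,eig} datasets/MoS2/

# Optional: a short description and reference results
touch datasets/MoS2/README.md
mkdir -p datasets/MoS2/outputs
```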
docs/large_files.md DELETED

````diff
@@ -1,31 +0,0 @@
-# Notes on large files
-
-## `util/GitHub-ForceLargeFiles`
-
-GitHub has a limit of 100MB per file. To bypass this limit, there is a
-Python script `util/GitHub-ForceLargeFiles/src/main.py` that will
-automatically compress large files and split them into chunks.
-
-To use it:
-
-```shell
-python util/GitHub-ForceLargeFiles/src/main.py DIR_TO_CHECK
-```
-
-where `DIR_TO_CHECK` is the directory to check for large files.
-
-To decompress the files, use
-
-```shell
-python util/GitHub-ForceLargeFiles/src/reverse.py DIR_TO_CHECK
-```
-
-For more information, see the two scripts.
-
-## Adding large dataset files
-
-Therefore, in general we should avoid adding large files. However, if e.g.
-without enough k-point sampling the band structure is really poor, then we
-can use the `main.py` script to compress the files and git commit the 7z files.
-In the GitHub workflow, the `reverse.py` script will be run to decompress
-the files and pack them into artifact tarballs.
````
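Conceptually, the compress-and-split workflow these deleted notes describe corresponds to creating split 7z volumes that stay under GitHub's 100MB limit; a minimal equivalent with plain `7z`, using a hypothetical file name and volume size (not the scripts' actual implementation), would be:

```bash
# Compress a large file into 90 MB volumes (big_file.dat is hypothetical);
# this produces big_file.7z.001, big_file.7z.002, ... which can be committed.
7z a -v90m big_file.7z big_file.dat

# Decompress: pointing 7z at the first volume picks up the rest automatically.
7z x big_file.7z.001
```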