Spaces: Runtime error
dbouget committed · Commit 3ca8ae1 · Parent(s): dea81ad
Initial full commit [skip ci]
- .dockerignore +13 -0
- .github/ISSUE_TEMPLATE/bug_report.md +32 -0
- .github/ISSUE_TEMPLATE/feature_request.md +20 -0
- .github/workflows/deploy.yml +20 -0
- .github/workflows/filesize.yml +16 -0
- .github/workflows/linting.yml +26 -0
- .gitignore +13 -0
- Dockerfile +74 -0
- app.py +41 -0
- requirements.txt +2 -0
- setup.cfg +14 -0
- shell/format.sh +4 -0
- shell/lint.sh +23 -0
- src/__init__.py +1 -0
- src/gui.py +184 -0
- src/inference.py +97 -0
- src/utils.py +68 -0
.dockerignore
ADDED
@@ -0,0 +1,13 @@
+venv/
+*__pycache__/
+resources/
+*.DS_Store
+*.nii
+*.nii.gz
+*.nrrd
+*.obj
+*.zip
+*log.csv
+*.ini
+gradio_cached_examples/
+.idea/
.github/ISSUE_TEMPLATE/bug_report.md
ADDED
@@ -0,0 +1,32 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Click on '....'
+3. Scroll down to '....'
+4. See error
+
+**Expected behavior**
+A clear and concise description of what you expected to happen.
+
+**Screenshots**
+If applicable, add screenshots to help explain your problem.
+
+**Desktop (please complete the following information):**
+- OS: [e.g. Windows]
+- Version: [e.g. 10]
+- Python: [e.g. 3.8.10]
+
+**Additional context**
+Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature_request.md
ADDED
@@ -0,0 +1,20 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Is your feature request related to a problem? Please describe.**
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.
.github/workflows/deploy.yml
ADDED
@@ -0,0 +1,20 @@
+name: Deploy
+on:
+  push:
+    branches: [ main ]
+
+  # to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+jobs:
+  sync-to-hub:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+          lfs: true
+      - name: Push to hub
+        env:
+          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+        run: git push https://dbouget:[email protected]/spaces/dbouget/Raidionics-HF main
.github/workflows/filesize.yml
ADDED
@@ -0,0 +1,16 @@
+name: Check file size
+on: # or directly `on: [push]` to run the action on every push on any branch
+  pull_request:
+    branches: [ main ]
+
+  # to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+jobs:
+  check-filesize:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check large files
+        uses: ActionsDesk/[email protected]
+        with:
+          filesizelimit: 10485760 # this is 10MB so we can sync to HF Spaces
.github/workflows/linting.yml
ADDED
@@ -0,0 +1,26 @@
+name: Linting
+
+on:
+  push:
+    branches:
+      - '*'
+  pull_request:
+    branches:
+      - '*'
+  workflow_dispatch:
+
+jobs:
+  build:
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v1
+      - name: Set up Python 3.7
+        uses: actions/setup-python@v2
+        with:
+          python-version: 3.7
+
+      - name: Install lint dependencies
+        run: pip install wheel setuptools black==22.3.0 isort==5.10.1 flake8==4.0.1
+
+      - name: Lint the code
+        run: sh shell/lint.sh
.gitignore
ADDED
@@ -0,0 +1,13 @@
+venv/
+*__pycache__/
+resources/
+*.DS_Store
+*.nii
+*.nii.gz
+*.nrrd
+*.obj
+*.zip
+*log.csv
+*.ini
+gradio_cached_examples/
+.idea/
Dockerfile
ADDED
@@ -0,0 +1,74 @@
+# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
+# you will also find guides on how best to write your Dockerfile
+FROM python:3.8-slim
+
+# set language, format and stuff
+ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
+
+WORKDIR /code
+
+RUN apt-get update -y
+#RUN apt-get install -y python3 python3-pip
+RUN apt install git --fix-missing -y
+RUN apt install wget -y
+
+# installing other libraries
+RUN apt-get install python3-pip -y && \
+    apt-get -y install sudo
+RUN apt-get install curl -y
+RUN apt-get install nano -y
+RUN apt-get update && apt-get install -y git
+RUN apt-get install libblas-dev -y && apt-get install liblapack-dev -y
+RUN apt-get install gfortran -y
+RUN apt-get install libpng-dev -y
+RUN apt-get install python3-dev -y
+
+WORKDIR /code
+
+# install dependencies
+COPY ./requirements.txt /code/requirements.txt
+RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
+
+# resolve issue with tf==2.4 and gradio dependency collision issue
+RUN pip install --force-reinstall typing_extensions==4.7.1
+
+# Install wget and unzip (-y keeps the unzip install non-interactive)
+RUN apt install wget -y && \
+    apt install unzip -y
+
+# Set up a new user named "user" with user ID 1000
+RUN useradd -m -u 1000 user
+
+# Switch to the "user" user
+USER user
+
+# Set home to the user's home directory
+ENV HOME=/home/user \
+    PATH=/home/user/.local/bin:$PATH
+
+# Set the working directory to the user's home directory
+WORKDIR $HOME
+
+# Copy the current directory contents into the container at $HOME/app setting the owner to the user
+COPY --chown=user . $HOME
+
+# Download pretrained models
+RUN mkdir -p resources/models/
+RUN wget "https://github.com/raidionics/Raidionics-models/releases/download/1.2.0/Raidionics-MRI_Brain-ONNX-v12.zip" && \
+    unzip "Raidionics-MRI_Brain-ONNX-v12.zip" && mv MRI_Brain/ resources/models/MRI_Brain/
+RUN wget "https://github.com/raidionics/Raidionics-models/releases/download/1.2.0/Raidionics-MRI_GBM-ONNX-v12.zip" && \
+    unzip "Raidionics-MRI_GBM-ONNX-v12.zip" && mv MRI_GBM/ resources/models/MRI_GBM/
+RUN wget "https://github.com/raidionics/Raidionics-models/releases/download/1.2.0/Raidionics-MRI_LGGlioma-ONNX-v12.zip" && \
+    unzip "Raidionics-MRI_LGGlioma-ONNX-v12.zip" && mv MRI_LGGlioma/ resources/models/MRI_LGGlioma/
+RUN wget "https://github.com/raidionics/Raidionics-models/releases/download/1.2.0/Raidionics-MRI_Meningioma-ONNX-v12.zip" && \
+    unzip "Raidionics-MRI_Meningioma-ONNX-v12.zip" && mv MRI_Meningioma/ resources/models/MRI_Meningioma/
+RUN wget "https://github.com/raidionics/Raidionics-models/releases/download/1.2.0/Raidionics-MRI_Metastasis-ONNX-v12.zip" && \
+    unzip "Raidionics-MRI_Metastasis-ONNX-v12.zip" && mv MRI_Metastasis/ resources/models/MRI_Metastasis/
+
+RUN rm -r *.zip
+
+# Download test sample
+RUN wget "https://github.com/raidionics/Raidionics-HF/releases/download/v1.0.0/t1gd.nii.gz"
+
+# CMD ["/bin/bash"]
+CMD ["python3", "app.py"]
app.py
ADDED
@@ -0,0 +1,41 @@
+import os
+from argparse import ArgumentParser
+
+from src.gui import WebUI
+
+
+def main():
+    parser = ArgumentParser()
+    parser.add_argument(
+        "--cwd",
+        type=str,
+        default="/home/user/app/",
+        help="Set current working directory (path to app.py).",
+    )
+    parser.add_argument(
+        "--share",
+        type=int,
+        default=1,
+        help="Whether to enable the app to be accessible online "
+        "-> sets up a public link which requires internet access.",
+    )
+    args = parser.parse_args()
+
+    print("Current working directory:", args.cwd)
+
+    if not os.path.exists(args.cwd):
+        raise ValueError("Chosen 'cwd' is not a valid path!")
+    if args.share not in [0, 1]:
+        raise ValueError(
+            "The 'share' argument can only be set to 0 or 1, but was:",
+            args.share,
+        )
+
+    # initialize and run app
+    print("Launching demo...")
+    app = WebUI(cwd=args.cwd, share=args.share)
+    app.run()
+
+
+if __name__ == "__main__":
+    main()
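The `--share` flag above is an integer restricted to {0, 1} rather than a boolean. A minimal sketch of that validation pattern in isolation (the `parse_share` helper is hypothetical, not part of the commit):

```python
from argparse import ArgumentParser


def parse_share(argv):
    # Hypothetical helper mirroring the app.py pattern:
    # an int flag explicitly restricted to 0 or 1.
    parser = ArgumentParser()
    parser.add_argument("--share", type=int, default=1)
    args = parser.parse_args(argv)
    if args.share not in [0, 1]:
        raise ValueError("'share' can only be 0 or 1, but was:", args.share)
    return args.share


print(parse_share(["--share", "0"]))  # -> 0
```

Using an int here keeps the CLI unambiguous (`--share 0` / `--share 1`), since `type=bool` in argparse would treat any non-empty string as true.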
requirements.txt
ADDED
@@ -0,0 +1,2 @@
+raidionicsrads@git+https://github.com/dbouget/raidionics_rads_lib
+gradio==3.44.4
setup.cfg
ADDED
@@ -0,0 +1,14 @@
+[metadata]
+description-file = README.md
+
+[isort]
+force_single_line=True
+known_first_party=aeropath
+line_length=160
+profile=black
+
+[flake8]
+# imported but unused in __init__.py, that's ok.
+per-file-ignores=*__init__.py:F401
+ignore=E203,W503,W605,F632,E266,E731,E712,E741
+max-line-length=160
shell/format.sh
ADDED
@@ -0,0 +1,4 @@
+#!/bin/bash
+isort --sl demo/src/ demo/app.py
+black --line-length 80 demo/src/ demo/app.py
+flake8 demo/src/ demo/app.py
shell/lint.sh
ADDED
@@ -0,0 +1,23 @@
+#!/bin/bash
+isort --check --sl -c demo/src/ demo/app.py
+if ! [ $? -eq 0 ]
+then
+    echo "Please run \"sh shell/format.sh\" to format the code."
+    exit 1
+fi
+echo "no issues with isort"
+flake8 demo/src/ demo/app.py
+if ! [ $? -eq 0 ]
+then
+    echo "Please fix the code style issue."
+    exit 1
+fi
+echo "no issues with flake8"
+black --check --line-length 80 demo/src/ demo/app.py
+if ! [ $? -eq 0 ]
+then
+    echo "Please run \"sh shell/format.sh\" to format the code."
+    exit 1
+fi
+echo "no issues with black"
+echo "linting success!"
src/__init__.py
ADDED
@@ -0,0 +1 @@
+
src/gui.py
ADDED
@@ -0,0 +1,184 @@
+import os
+
+import gradio as gr
+
+from .inference import run_model
+from .utils import load_to_numpy
+from .utils import load_pred_volume_to_numpy
+from .utils import nifti_to_glb
+
+
+class WebUI:
+    def __init__(
+        self,
+        model_name: str = None,
+        cwd: str = "/home/user/app/",
+        share: int = 1,
+    ):
+        # global states
+        self.images = []
+        self.pred_images = []
+
+        # @TODO: This should be dynamically set based on chosen volume size
+        self.nb_slider_items = 512
+
+        self.model_name = model_name
+        self.cwd = cwd
+        self.share = share
+
+        self.class_name = "meningioma"  # default
+        self.class_names = {
+            "meningioma": "MRI_Meningioma",
+            "lower-grade-glioma": "MRI_LGGlioma",
+            "metastasis": "MRI_Metastasis",
+            "glioblastoma": "MRI_GBM",
+            "brain": "MRI_Brain",
+        }
+
+        self.result_names = {
+            "meningioma": "Tumor",
+            "lower-grade-glioma": "Tumor",
+            "metastasis": "Tumor",
+            "glioblastoma": "Tumor",
+            "brain": "Brain",
+        }
+
+        # define widgets not to be rendered immediately, but later on
+        self.slider = gr.Slider(
+            minimum=1,
+            maximum=self.nb_slider_items,
+            value=1,
+            step=1,
+            label="Which 2D slice to show",
+            interactive=True,
+        )
+
+        self.volume_renderer = gr.Model3D(
+            clear_color=[0.0, 0.0, 0.0, 0.0],
+            label="3D Model",
+            visible=True,
+            elem_id="model-3d",
+        ).style(height=512)
+
+    def set_class_name(self, value):
+        print("Changed task to:", value)
+        self.class_name = value
+
+    def combine_ct_and_seg(self, img, pred):
+        return (img, [(pred, self.class_name)])
+
+    def upload_file(self, file):
+        return file.name
+
+    def process(self, mesh_file_name):
+        path = mesh_file_name.name
+        run_model(
+            path,
+            model_path=os.path.join(self.cwd, "resources/models/"),
+            task=self.class_names[self.class_name],
+            name=self.result_names[self.class_name],
+        )
+        nifti_to_glb("prediction.nii.gz")
+
+        self.images = load_to_numpy(path)
+        # @TODO. Dynamic update of the slider does not seem to work like this
+        # self.nb_slider_items = len(self.images)
+        # self.slider.update(value=int(self.nb_slider_items/2), maximum=self.nb_slider_items)
+
+        self.pred_images = load_pred_volume_to_numpy("./prediction.nii.gz")
+        return "./prediction.obj"
+
+    def get_img_pred_pair(self, k):
+        k = int(k) - 1
+        # @TODO. Will duplicate the last slice to fill up, since the slider is not adjustable right now
+        if k >= len(self.images):
+            k = len(self.images) - 1
+        out = [gr.AnnotatedImage.update(visible=False)] * self.nb_slider_items
+        out[k] = gr.AnnotatedImage.update(
+            self.combine_ct_and_seg(self.images[k], self.pred_images[k]),
+            visible=True,
+        )
+        return out
+
+    def run(self):
+        css = """
+        #model-3d {
+        height: 512px;
+        }
+        #model-2d {
+        height: 512px;
+        margin: auto;
+        }
+        #upload {
+        height: 120px;
+        }
+        """
+        with gr.Blocks(css=css) as demo:
+            with gr.Row():
+                file_output = gr.File(file_count="single", elem_id="upload")
+                file_output.upload(self.upload_file, file_output, file_output)
+
+                model_selector = gr.Dropdown(
+                    list(self.class_names.keys()),
+                    label="Segmentation task",
+                    info="Select the preoperative segmentation model to run",
+                    multiselect=False,
+                    size="sm",
+                )
+                model_selector.input(
+                    fn=lambda x: self.set_class_name(x),
+                    inputs=model_selector,
+                    outputs=None,
+                )
+
+                run_btn = gr.Button("Run segmentation").style(
+                    full_width=False, size="lg"
+                )
+                run_btn.click(
+                    fn=lambda x: self.process(x),
+                    inputs=file_output,
+                    outputs=self.volume_renderer,
+                )
+
+            with gr.Row():
+                gr.Examples(
+                    examples=[
+                        os.path.join(self.cwd, "t1gd.nii.gz"),
+                    ],
+                    inputs=file_output,
+                    outputs=file_output,
+                    fn=self.upload_file,
+                    cache_examples=True,
+                )
+
+            with gr.Row():
+                with gr.Box():
+                    with gr.Column():
+                        image_boxes = []
+                        for i in range(self.nb_slider_items):
+                            visibility = True if i == 1 else False
+                            t = gr.AnnotatedImage(
+                                visible=visibility, elem_id="model-2d"
+                            ).style(
+                                color_map={self.class_name: "#ffae00"},
+                                height=512,
+                                width=512,
+                            )
+                            image_boxes.append(t)
+
+                        self.slider.input(
+                            self.get_img_pred_pair, self.slider, image_boxes
+                        )
+
+                    self.slider.render()
+
+                with gr.Box():
+                    self.volume_renderer.render()
+
+        # sharing app publicly -> share=True:
+        # https://gradio.app/sharing-your-app/
+        # inference times > 60 seconds -> need queue():
+        # https://github.com/tloen/alpaca-lora/issues/60#issuecomment-1510006062
+        demo.queue().launch(
+            server_name="0.0.0.0", server_port=7860, share=self.share
+        )
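The `get_img_pred_pair` callback above works around the non-adjustable slider by allocating `nb_slider_items` image widgets and toggling visibility so that only the selected slice is shown. The same one-visible-slot logic, sketched without gradio (plain dicts stand in for `gr.AnnotatedImage.update`; `one_visible_slot` is an illustrative helper, not part of the commit):

```python
def one_visible_slot(k, n_items, n_slices):
    # Slider value k is 1-indexed; clamp to the last available slice,
    # since the slider maximum (n_items) may exceed the volume depth.
    k = min(int(k) - 1, n_slices - 1)
    out = [{"visible": False} for _ in range(n_items)]
    out[k] = {"visible": True}
    return out


updates = one_visible_slot(3, 5, 4)
print([u["visible"] for u in updates])  # -> [False, False, True, False, False]
```

Returning one update per widget is what lets a single slider event drive all `nb_slider_items` outputs at once.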
src/inference.py
ADDED
@@ -0,0 +1,99 @@
+import configparser
+import logging
+import os
+import shutil
+
+
+def run_model(
+    input_path: str,
+    model_path: str,
+    verbose: str = "info",
+    task: str = "MRI_Meningioma",
+    name: str = "Tumor",
+):
+    logging.basicConfig()
+    logging.getLogger().setLevel(logging.WARNING)
+
+    if verbose == "debug":
+        logging.getLogger().setLevel(logging.DEBUG)
+    elif verbose == "info":
+        logging.getLogger().setLevel(logging.INFO)
+    elif verbose == "error":
+        logging.getLogger().setLevel(logging.ERROR)
+    else:
+        raise ValueError("Unsupported verbose value provided:", verbose)
+
+    # delete patient/result folders if they exist; the paths are defined up
+    # front so the final clean-up cannot fail if an error occurs early on
+    patient_directory = "./patient/"
+    output_path = "./result/"
+    if os.path.exists(patient_directory):
+        shutil.rmtree(patient_directory)
+    if os.path.exists(output_path):
+        shutil.rmtree(output_path)
+
+    try:
+        # setup temporary patient directory
+        filename = input_path.split("/")[-1]
+        splits = filename.split(".")
+        extension = ".".join(splits[1:])
+        os.makedirs(patient_directory + "T0/", exist_ok=True)
+        shutil.copy(
+            input_path,
+            patient_directory + "T0/" + splits[0] + "-t1gd." + extension,
+        )
+
+        # define output directory to save results
+        output_path = "./result/prediction-" + splits[0] + "/"
+        os.makedirs(output_path, exist_ok=True)
+
+        # Setting up the configuration file
+        rads_config = configparser.ConfigParser()
+        rads_config.add_section("Default")
+        rads_config.set("Default", "task", "neuro_diagnosis")
+        rads_config.set("Default", "caller", "")
+        rads_config.add_section("System")
+        rads_config.set("System", "gpu_id", "-1")
+        rads_config.set("System", "input_folder", patient_directory)
+        rads_config.set("System", "output_folder", output_path)
+        rads_config.set("System", "model_folder", model_path)
+        rads_config.set(
+            "System",
+            "pipeline_filename",
+            os.path.join(model_path, task, "pipeline.json"),
+        )
+        rads_config.add_section("Runtime")
+        rads_config.set(
+            "Runtime", "reconstruction_method", "thresholding"
+        )  # thresholding, probabilities
+        rads_config.set("Runtime", "reconstruction_order", "resample_first")
+        rads_config.set("Runtime", "use_preprocessed_data", "False")
+
+        with open("rads_config.ini", "w") as f:
+            rads_config.write(f)
+
+        # finally, run inference
+        from raidionicsrads.compute import run_rads
+
+        run_rads(config_filename="rads_config.ini")
+
+        # rename and move final result
+        os.rename(
+            "./result/prediction-"
+            + splits[0]
+            + "/T0/"
+            + splits[0]
+            + "-t1gd_annotation-"
+            + name
+            + ".nii.gz",
+            "./prediction.nii.gz",
+        )
+
+    except Exception as e:
+        print(e)
+
+    # Clean-up
+    if os.path.exists(patient_directory):
+        shutil.rmtree(patient_directory)
+    if os.path.exists(output_path):
+        shutil.rmtree(output_path)
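`run_model` drives the `raidionicsrads` backend entirely through a generated `rads_config.ini`. A reduced sketch of that write/read round-trip with the standard-library `configparser` (only a subset of the sections above; values illustrative):

```python
import configparser

# write a cut-down version of the config produced by run_model
cfg = configparser.ConfigParser()
cfg.add_section("Default")
cfg.set("Default", "task", "neuro_diagnosis")
cfg.add_section("System")
cfg.set("System", "gpu_id", "-1")  # -1 -> CPU-only inference
cfg.add_section("Runtime")
cfg.set("Runtime", "reconstruction_method", "thresholding")

with open("rads_config.ini", "w") as f:
    cfg.write(f)

# read it back, as the backend would
back = configparser.ConfigParser()
back.read("rads_config.ini")
print(back.get("Runtime", "reconstruction_method"))  # -> thresholding
```

Note that `configparser` stores everything as strings, which is why values like `gpu_id` are written as `"-1"` rather than `-1`.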
src/utils.py
ADDED
@@ -0,0 +1,68 @@
+import nibabel as nib
+import numpy as np
+from nibabel.processing import resample_to_output
+from skimage.measure import marching_cubes
+
+
+def load_to_numpy(data_path):
+    if type(data_path) != str:
+        data_path = data_path.name
+
+    image = nib.load(data_path)
+    resampled = resample_to_output(image, None, order=0)
+    data = resampled.get_fdata()
+
+    data = np.rot90(data, k=1, axes=(0, 1))
+
+    # @TODO. Contrast-operation to do based on MRI/CT and target to segment
+    # data[data < -150] = -150
+    # data[data > 250] = 250
+
+    data = data - np.amin(data)
+    data = data / np.amax(data) * 255
+    data = data.astype("uint8")
+
+    print(data.shape)
+    return [data[..., i] for i in range(data.shape[-1])]
+
+
+def load_pred_volume_to_numpy(data_path):
+    if type(data_path) != str:
+        data_path = data_path.name
+
+    image = nib.load(data_path)
+    resampled = resample_to_output(image, None, order=0)
+    data = resampled.get_fdata()
+
+    data = np.rot90(data, k=1, axes=(0, 1))
+
+    data[data > 0] = 1
+    data = data.astype("uint8")
+
+    print(data.shape)
+    return [data[..., i] for i in range(data.shape[-1])]
+
+
+def nifti_to_glb(path, output="prediction.obj"):
+    # load NIFTI into numpy array
+    image = nib.load(path)
+    resampled = resample_to_output(image, [1, 1, 1], order=1)
+    data = resampled.get_fdata().astype("uint8")
+
+    # extract surface
+    verts, faces, normals, values = marching_cubes(data, 0)
+    faces += 1
+
+    with open(output, "w") as thefile:
+        for item in verts:
+            thefile.write("v {0} {1} {2}\n".format(item[0], item[1], item[2]))
+
+        for item in normals:
+            thefile.write("vn {0} {1} {2}\n".format(item[0], item[1], item[2]))
+
+        for item in faces:
+            thefile.write(
+                "f {0}//{0} {1}//{1} {2}//{2}\n".format(
+                    item[0], item[1], item[2]
+                )
+            )
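`load_to_numpy` min-max rescales each volume to `uint8` before slicing it for display. That normalization step on its own, applied to a toy array (hypothetical `to_uint8` helper, extracted for illustration):

```python
import numpy as np


def to_uint8(data):
    # min-max rescale to [0, 255], as in load_to_numpy
    data = data - np.amin(data)
    data = data / np.amax(data) * 255
    return data.astype("uint8")


x = to_uint8(np.array([[-100.0, 0.0], [50.0, 100.0]]))
print(x.min(), x.max())  # -> 0 255
```

The shift by `np.amin` makes the result well-defined for MRI volumes containing negative intensities, and the final cast discards the fractional part.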