Yuning You committed · Commit b2d16e9 · Parent(s): 882651e
update
- README.md +1 -1
- test.ipynb +6 -5
README.md
CHANGED
@@ -16,7 +16,7 @@ tags:
 This is the PyTorch implementation of the CI-FM model -- an AI model that can simulate the biological activities within a living tissue (AI virtual tissue).
 The current version of CI-FM has 138M parameters and is trained on around 23M cells of spatial genomics. The signature functions of CI-FM are:
 - **Embedding** of cellular microenvironments via ```embeddings = model.embed(adata)``` (Figure below panel D top);
-- **Inference** of cellular gene expressions within a certain microenvironment via ```expressions = model.predict_cells_at_locations(adata,
+- **Inference** of cellular gene expressions within a certain microenvironment via ```expressions = model.predict_cells_at_locations(adata, target_loc)``` (Figure below panel D bottom).
 
 The detailed usage of the model can be found in the [tutorial](https://huggingface.co/ynyou/CIFM/blob/main/test.ipynb).
 Before running the tutorial, please set up an environment following the [environment instruction](https://huggingface.co/ynyou/CIFM#environment).
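For context, a minimal end-to-end sketch of the two signature functions named in the README. Only `model.embed(adata)` and `model.predict_cells_at_locations(adata, target_loc)` come from the README itself; the loading call (`AutoModel.from_pretrained` with `trust_remote_code=True`), the input file name, and the choice of target locations are assumptions for illustration -- the tutorial notebook is the authoritative reference.

```python
# Hypothetical usage sketch; loading call and input file are assumptions,
# see the tutorial notebook for the authoritative setup.
import anndata as ad
import torch
from transformers import AutoModel

# Assumption: the model is loaded from the Hugging Face hub with remote code enabled.
model = AutoModel.from_pretrained("ynyou/CIFM", trust_remote_code=True)
model.eval()

# Assumption: `adata` is an AnnData object with spatial coordinates in
# .obsm['spatial'], prepared as in the tutorial.
adata = ad.read_h5ad("tissue.h5ad")  # hypothetical input file

with torch.no_grad():
    # Embed cellular microenvironments (Figure panel D top).
    embeddings = model.embed(adata)
    # Infer gene expressions at chosen locations (Figure panel D bottom);
    # here we reuse ten existing coordinates as example targets.
    target_loc = adata.obsm["spatial"][:10]
    expressions = model.predict_cells_at_locations(adata, target_loc)
```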
test.ipynb
CHANGED
@@ -221,7 +221,7 @@
 },
 {
 "cell_type": "code",
-"execution_count":
+"execution_count": null,
 "metadata": {},
 "outputs": [
 {
@@ -243,14 +243,15 @@
 }
 ],
 "source": [
-"
+"# we here randomly generate the locations for the cells just for demonstration\n",
+"target_loc = np.random.rand(10, 2)\n",
 "x_min, x_max = adata.obsm['spatial'][:, 0].min(), adata.obsm['spatial'][:, 0].max()\n",
 "y_min, y_max = adata.obsm['spatial'][:, 1].min(), adata.obsm['spatial'][:, 1].max()\n",
-"
-"
+"target_loc[:, 0] = target_loc[:, 0] * (x_max - x_min) + x_min\n",
+"target_loc[:, 1] = target_loc[:, 1] * (y_max - y_min) + y_min\n",
 "\n",
 "with torch.no_grad():\n",
-"    expressions = model.predict_cells_at_locations(adata,
+"    expressions = model.predict_cells_at_locations(adata, target_loc)\n",
 "expressions, expressions.shape"
 ]
 }
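Decoded from the JSON strings above, the updated notebook cell reads as follows. This is a sketch that assumes `np`, `torch`, `adata`, and `model` are already defined by the earlier tutorial cells; the code itself is taken from the diff.

```python
# Reconstructed from the notebook JSON above; assumes np, torch, adata, and
# model are already set up by earlier tutorial cells.

# we here randomly generate the locations for the cells just for demonstration
target_loc = np.random.rand(10, 2)
x_min, x_max = adata.obsm['spatial'][:, 0].min(), adata.obsm['spatial'][:, 0].max()
y_min, y_max = adata.obsm['spatial'][:, 1].min(), adata.obsm['spatial'][:, 1].max()
# scale the unit-square samples into the tissue's spatial bounding box
target_loc[:, 0] = target_loc[:, 0] * (x_max - x_min) + x_min
target_loc[:, 1] = target_loc[:, 1] * (y_max - y_min) + y_min

with torch.no_grad():
    expressions = model.predict_cells_at_locations(adata, target_loc)
expressions, expressions.shape  # notebook-style inspection of the result
```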