You need to agree to share your contact information to access this dataset
This repository is publicly accessible, but you have to accept the conditions to access its files and content.
- This dataset and associated code are released under the CC-BY-NC-ND 4.0 license and may only be used for non-commercial, academic research purposes with proper attribution.
- Any commercial use, sale, or other monetization of the dataset and its derivatives, which include models trained on outputs from the datasets, is prohibited and requires prior approval.
- By downloading the dataset, you attest that all information (affiliation, research use) is correct and up to date.

Downloading the dataset requires prior registration on Hugging Face and agreement to the terms of use. By downloading this dataset, you agree not to distribute, publish, or reproduce a copy of the dataset. If another user within your organization wishes to use the dataset, they must register as an individual user and agree to comply with the terms of use. Users may not attempt to re-identify the de-identified data used to develop the underlying dataset.
- This dataset is provided “as-is” without warranties of any kind, express or implied. This dataset has not been reviewed, certified, or approved by any regulatory body, including but not limited to the FDA (U.S.), EMA (Europe), MHRA (UK), or other medical device authorities. Any application of this dataset in healthcare or biomedical settings must comply with relevant regulatory requirements and undergo independent validation. Users assume full responsibility for how they use this dataset and any resulting consequences. The authors, contributors, and distributors disclaim any liability for damages, direct or indirect, resulting from dataset use. Users are responsible for ensuring compliance with data protection regulations (e.g., GDPR, HIPAA) when using it in research that involves patient data.
Dataset Card for SpatialRefinery Xenium
What is Xenium von 10x?
- A collection of 31 spatial transcriptomic profiles, each linked and aligned to a Whole Slide Image (with pixel size < 0.3 µm/px) and metadata.
- Xenium von 10x was assembled from the 10x Genomics website, encompassing:
- multiple tissues
- 1 species (Homo sapiens)
- 31 cancer samples
Note: this is a temporary dataset and may contain bugs.
Instructions for Setting Up HuggingFace Account and Token
1. Create an Account on HuggingFace
Follow the instructions provided on the HuggingFace sign-up page.
2. Accept the terms of use of SpatialRefinery
- On this page, click "Request access".
- At this stage, you can already inspect the data manually by navigating to the "Files and versions" tab.
3. Create a Hugging Face Token
- Go to Settings: navigate to your profile settings by clicking on your profile picture in the top right corner and selecting "Settings" from the dropdown menu.
- Access Tokens: in the settings menu, find and click on "Access Tokens".
- Create New Token:
  - Click on "New token".
  - Set the token name (e.g., `spatial-refinery`).
  - Set the access level to "Write".
  - Click on "Create".
- Copy Token: after the token is created, copy it to your clipboard. You will need this token for authentication.
4. Log in

Important! Run the following:

```shell
pip install datasets==2.16.0
pip install huggingface-hub==0.20.0
```

```python
from huggingface_hub import login

login(token="YOUR HUGGINGFACE TOKEN")
```
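To avoid pasting the secret directly into source files, one option (a sketch of my own, not part of the official instructions) is to export the token as an environment variable and read it at runtime; the variable name `HF_TOKEN` here is a convention, not a requirement:

```python
import os

# Hypothetical: export HF_TOKEN="hf_..." in your shell beforehand,
# then read it here instead of hardcoding the secret in code.
token = os.environ.get("HF_TOKEN", "")

if token:
    from huggingface_hub import login  # same login call as above
    login(token=token)
else:
    print("HF_TOKEN is not set; run `export HF_TOKEN=...` first")
```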
Download the entire Xenium dataset:

```python
import datasets

local_dir = 'xenium_von_10x'  # dataset will be downloaded to this folder

# Note that the full dataset is around 1 TB of data
dataset = datasets.load_dataset(
    'rushin682/xenium_von_10x',
    cache_dir=local_dir,
    patterns='*'
)
```
Download a subset of the Xenium dataset:

```python
import datasets

local_dir = 'xenium_von_10x'  # dataset will be downloaded to this folder

ids_to_query = ['HTA12_246_3', 'HTA12_254_8']  # list of ids to query
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
dataset = datasets.load_dataset(
    'rushin682/xenium_von_10x',
    cache_dir=local_dir,
    patterns=list_patterns
)
```
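The `[_.]` character class in these patterns requires a `_` or `.` immediately after the sample id, which prevents an id from matching files of another sample that merely shares it as a prefix. A rough offline sketch of this matching behavior using Python's `fnmatch` (the actual Hugging Face loader uses fsspec-style glob matching, so treat this as an approximation; the file names below are made up):

```python
from fnmatch import fnmatch

# Hypothetical file names, mirroring this dataset's folder layout
files = [
    "wsis/HTA12_246_3.tif",
    "st/HTA12_246_3.h5ad",
    "wsis/HTA12_246_30.tif",  # different sample whose id shares the prefix
    "st/HTA12_254_8.h5ad",
]

ids_to_query = ["HTA12_246_3"]
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]

# The [_.] class demands '_' or '.' right after the id, so the
# prefix-sharing sample 'HTA12_246_30' is correctly excluded.
selected = [f for f in files if any(fnmatch(f, p) for p in list_patterns)]
print(selected)  # ['wsis/HTA12_246_3.tif', 'st/HTA12_246_3.h5ad']
```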
Loading the data with the Python library `hest`

Once downloaded, you can easily iterate through the dataset:

```python
from hest import iter_hest

for st in iter_hest('../xenium_von_10x', id_list=['HTA12_246_3']):
    print(st)
```
Data organization
For each sample:
- `wsis/`: H&E-stained Whole Slide Images in pyramidal Generic TIFF (or pyramidal Generic BigTIFF if >4.1 GB)
- `st/`: spatial transcriptomics expressions in a `scanpy` .h5ad object
- `metadata/`: metadata
- `biospecimen_figures/`: overlay of the WSI with the st spots
- `thumbnails/`: downscaled version of the WSI
- `tissue_seg/`: tissue segmentation masks:
  - `{id}_vis.jpg`: downscaled or full-resolution greyscale tissue mask
  - `{id}_contours.geojson`: tissue segmentation contours to load in QuPath
- `pixel_size_vis/`: visualization of the pixel size
- `cellvit_seg/`: CellViT nuclei segmentation
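The `{id}_contours.geojson` files are standard GeoJSON, so they can also be inspected outside QuPath. A minimal sketch of reading one with the standard library (the FeatureCollection/Polygon layout below follows the GeoJSON specification and is an assumption about this dataset's exact schema, not a guarantee; the inline dictionary is a made-up stand-in for a real file):

```python
import json

# Hypothetical miniature stand-in for a {id}_contours.geojson file
contours = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Polygon",
                # one outer ring, in pixel coordinates
                "coordinates": [[[0, 0], [100, 0], [100, 50], [0, 50], [0, 0]]],
            },
            "properties": {},
        }
    ],
}

# In practice you would read the real file, e.g.:
#   with open("tissue_seg/HTA12_246_3_contours.geojson") as f:
#       geojson = json.load(f)
geojson = json.loads(json.dumps(contours))  # round-trip as if read from disk
for feature in geojson["features"]:
    ring = feature["geometry"]["coordinates"][0]
    print(feature["geometry"]["type"], len(ring), "vertices")
# prints: Polygon 5 vertices
```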
Contact:
- Rushin Gindra, Helmholtz Munich, Munich ([email protected])
The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed)