---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- SNN
license: apache-2.0
datasets:
- thinkonward/reflection-connection
---
# Model Card for ThinkOnward's SectionSeeker
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.
## Model Details
### Model Description
The `section-seeker-large-16` model is designed to address the challenge of few-shot learning, with a particular focus on one-shot learning scenarios. The model employs a Siamese Neural Network architecture, leveraging a pre-trained Vision Transformer (ViT) backbone for feature extraction. This architecture allows the model to compare reference images and query images to find matching pairs.
This larger version (`section-seeker-large-16`) is one of a twin set of models; it trades additional computation for greater representational capacity while remaining efficient enough for applications where rapid and accurate predictions are required.
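The core idea of a Siamese architecture is a single encoder with shared weights applied to both the reference and the query image, so that matching pairs land close together in the latent space. A minimal sketch of that weight-sharing pattern (the `TinySiamese` class and its linear encoder are illustrative stand-ins, not the repository's ViT-backed `SiameseNetwork`):

```python
import torch
import torch.nn as nn

class TinySiamese(nn.Module):
    """Minimal Siamese network: one shared encoder applied to both inputs."""

    def __init__(self, in_features=16, latent_dim=4):
        super().__init__()
        # A single module instance means both branches share the same weights.
        self.encoder = nn.Linear(in_features, latent_dim)

    def forward(self, reference, query):
        # Encode both inputs with the identical (shared) encoder.
        return self.encoder(reference), self.encoder(query)

net = TinySiamese()
ref = torch.randn(2, 16)
# Because the weights are shared, identical inputs yield identical embeddings.
emb_ref, emb_same = net(ref, ref)
```

In the actual model, the linear layer is replaced by the pre-trained ViT backbone, but the weight-sharing structure is the same.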
- **Developed by:** Jakub Mizera, Mike McIntire, Ognjen Tanovic and Jesse Pisel of ThinkOnward
- **Model type:** ViT
- **License:** Apache 2.0
- **Based on:** facebook/vit-msn-large
### Model Sources
The `section-seeker-large-16` model is built upon several open-source components:
1. **Siamese Neural Network Architecture**: Based on the architectures described in [FaceNet](https://arxiv.org/pdf/1503.03832) and [Masked Siamese Networks](https://arxiv.org/pdf/2204.07141).
2. **Pre-trained ViT Backbone**: Utilizing a `ViT_L_16` backbone initialized from [facebook/vit-msn-large](https://huggingface.co/facebook/vit-msn-large).
3. **Contrastive Loss**: The loss function used for training is implemented using PyTorch, following the methodology described in [this paper](https://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf).
The complete source code and training scripts are available on our [GitHub repository](https://github.com/thinkonward/section-seeker). Contributions to improve and extend this model are always welcome!
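The contrastive loss referenced above pulls matching pairs together and pushes non-matching pairs apart up to a margin. A minimal sketch of that formulation (the `contrastive_loss` helper and its label convention, `1` for matching pairs, are this sketch's own, not necessarily the repository's exact implementation):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    """Contrastive loss in the style of Hadsell, Chopra & LeCun (2006).

    label == 1 marks a matching pair, label == 0 a non-matching pair.
    """
    dist = F.pairwise_distance(emb_a, emb_b)
    # Matching pairs are penalized by their squared distance...
    pos = label * dist.pow(2)
    # ...non-matching pairs only if they fall inside the margin.
    neg = (1 - label) * torch.clamp(margin - dist, min=0).pow(2)
    return 0.5 * (pos + neg).mean()

# Toy check: an identical pair labeled "matching" incurs near-zero loss.
a = torch.ones(1, 8)
loss_match = contrastive_loss(a, a, torch.tensor([1.0]))
```

The margin keeps the gradient from acting on non-matching pairs that are already far apart, which is what makes the embedding space discriminative for one-shot matching.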
## Uses
The `section-seeker-large-16` model is designed for seismic reflection data analysis and has several practical applications in geophysics:
1. **Seismic Reflection Data Analysis**: The model can quickly match new seismic images with known categories, facilitating faster and more accurate interpretations.
2. **Geological Fault Detection**: It can detect and classify fault lines within seismic reflection data, aiding in understanding subsurface structural complexity.
3. **Reservoir Characterization**: By classifying different rock types based on their seismic reflection characteristics, the model provides valuable insights for reservoir characterization.
4. **Multi-Modal Data Integration**: The model facilitates multi-modal data integration by matching seismic reflection patterns with other datasets, offering a more holistic understanding of subsurface structures.
### Out of Scope
- **Non-Seismic Data**: The model has been specifically trained on and optimized for seismic reflection data.
- **Real-Time Applications**: While it can process images quickly, real-time processing capabilities are beyond its design scope.
- **Very Complex Geological Structures**: Extremely complex geological formations may still require manual analysis.
## How to Get Started with the Model
After downloading the model architecture from the [SectionSeeker Repository](https://github.com/thinkonward/section-seeker), you can load the model using:
```python
import torch
from huggingface_hub import snapshot_download

L16_MODEL_REPO_ID = "thinkonward/section-seeker-large-16"
snapshot_download(
    repo_id=L16_MODEL_REPO_ID,
    repo_type="model",
    local_dir="./L_16_checkpoint",
    allow_patterns="*.pth",
)

# ModelConfig and SiameseNetwork are defined in the SectionSeeker GitHub repository
vitLConfigPretrained = ModelConfig(
    BACKBONE_MODEL="ViT_L_16",
    BACKBONE_MODEL_WEIGHTS="./L_16_checkpoint/ViT_L_16_SEISMIC31K.pth",
    LATENT_SPACE_DIM=16,
    FC_IN_FEATURES=1024,
)
model = SiameseNetwork(vitLConfigPretrained)
```
Check out the [tutorial on GitHub](https://github.com/thinkonward/section-seeker/blob/main/SectionSeeker_Quickstart_notebook.ipynb) for more help getting started.
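Once the model produces embeddings, one-shot matching reduces to finding the reference embedding nearest to the query. A minimal sketch of that lookup (the `best_match` helper, the cosine-similarity choice, and the toy embeddings are illustrative assumptions; real embeddings come from the trained `SiameseNetwork` as shown in the repository tutorial):

```python
import torch
import torch.nn.functional as F

def best_match(query_emb, reference_embs):
    """Return the index of the reference embedding most similar to the query,
    measured by cosine similarity."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), reference_embs, dim=1)
    return int(sims.argmax())

# Toy stand-in embeddings; in practice these come from the trained model.
references = torch.eye(3)                  # three reference "classes"
query = torch.tensor([0.0, 1.0, 0.1])      # closest to the second reference
idx = best_match(query, references)
```

For a gallery of labeled reference patches, `idx` selects the category assigned to the query patch.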
## Training Details
### Training Data
The SectionSeeker Large 16 model was trained on image patches extracted from real seismic volumes, drawn from the following sources:
- Australia Source: National Offshore Petroleum Information Management System. Available at https://www.ga.gov.au/nopims by Geoscience Australia which is © Commonwealth of Australia and is provided under a Creative Commons Attribution 4.0 International License and is subject to the disclaimer of warranties in section 5 of that license.
- Netherlands Source: NAM (2020). Petrel geological model of the Groningen gas field, the Netherlands. Open access through EPOS-NL. Yoda data publication platform Utrecht University. https://doi.org/10.24416/UU01-1QH0MW
**Training Dataset Card:** [reflection-connection](https://huggingface.co/datasets/thinkonward/reflection-connection)
## Citations
**BibTex:**
```bibtex
@misc{thinkonward_2024,
  author    = {{ThinkOnward}},
  title     = {section-seeker-large-16 (Revision bb371d2)},
  year      = 2024,
  url       = {https://huggingface.co/thinkonward/section-seeker-large-16},
  doi       = {10.57967/hf/3736},
  publisher = {Hugging Face}
}
```
## Model Card Contact
Please contact `[email protected]` for questions, comments, or concerns about this model.