
MedSAM2 - Custom Inference Endpoint

This repository contains a custom handler to deploy the MedSAM2 model using Hugging Face Inference Endpoints.

MedSAM2 is a state-of-the-art medical image segmentation model designed for zero-shot and fine-tuned applications across ultrasound, CT, and MRI modalities.

🔧 Files

  • handler.py: Defines a custom inference handler for processing inputs and generating predictions.
  • model.py: Loads the pretrained MedSAM2_latest.pt weights directly from the Hugging Face Hub.
  • requirements.txt: Python dependencies required for running the model.
  • MedSAM2_latest.pt: (optional) A local copy of the weights, used if you prefer not to download them from the Hub.
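To illustrate the shape of handler.py, here is a minimal sketch of the `EndpointHandler` interface that Hugging Face Inference Endpoints expects. The model loading and inference are stubbed out (only the base64 decoding is real); a working handler would load the MedSAM2 weights and run the actual model:

```python
import base64


class EndpointHandler:
    """Skeleton of the custom-handler interface for Inference Endpoints.

    A real handler would load the MedSAM2 weights in __init__
    (e.g. via torch.load("MedSAM2_latest.pt")) and run the model
    in __call__; both steps are stubbed here.
    """

    def __init__(self, path: str = ""):
        # `path` is the directory of the repository on the endpoint.
        self.model_path = path  # stub: no weights are loaded

    def __call__(self, data: dict) -> dict:
        # The endpoint passes the parsed JSON payload as a dict.
        image_bytes = base64.b64decode(data.get("image", ""))
        # Stubbed "inference": a real handler would preprocess the
        # image and run MedSAM2 to produce a segmentation mask.
        return {"output": [[len(image_bytes)]]}
```

The stub returns the byte count in place of a mask so the request/response plumbing can be tested before the model logic is filled in.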

📥 Input Format

The model expects a JSON payload with a base64-encoded image:

Example input: { "image": "<base64-encoded image string>" }

Encode an image in Python like this:

```python
import base64

with open("your_image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")
```
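Putting the pieces together, a small helper can wrap raw image bytes into the expected JSON payload. The endpoint URL and token in the commented-out request are placeholders you would replace with your own deployment's values:

```python
import base64
import json


def build_payload(image_bytes: bytes) -> str:
    """Wrap raw image bytes in the JSON body the handler expects."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return json.dumps({"image": encoded})


# Sending the payload (hypothetical endpoint URL and token):
# import requests
# response = requests.post(
#     "https://<your-endpoint>.endpoints.huggingface.cloud",
#     headers={
#         "Authorization": "Bearer hf_xxx",
#         "Content-Type": "application/json",
#     },
#     data=build_payload(open("your_image.png", "rb").read()),
# )
# print(response.json())
```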

📤 Output Format

The model returns predictions in a JSON object:

Example output: { "output": [[...]] }

Note: The current handler.py uses placeholder preprocessing. You should update it with the actual image preprocessing and model inference logic according to the MedSAM2 architecture.
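On the client side, the nested lists in `output` can be converted back into an array, for example a segmentation mask via NumPy. The response below is a made-up example; the actual mask shape depends on your handler's logic:

```python
import numpy as np

# Made-up example response; real shapes depend on the handler.
response_json = {"output": [[0, 1, 1], [0, 0, 1]]}

mask = np.asarray(response_json["output"], dtype=np.uint8)
print(mask.shape)  # (2, 3)
print(mask.sum())  # 3 foreground pixels
```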

🚀 Deployment Instructions

  1. Push this repo to the Hugging Face Hub
  2. Go to: https://huggingface.co/inference-endpoints
  3. Create a new endpoint
  4. Select:
    • Your model repo (e.g. username/my-medsam2-endpoint)
    • "Custom handler"
    • Hardware: Choose GPU if required
  5. Deploy


🤝 License

Please review the license of the original model here: https://huggingface.co/wanglab/MedSAM2

This repository is for research and educational purposes.
