---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- BEN
- background-remove
- mask-generation
- Dichotomous image segmentation
- background remove
- foreground
- background
- remove background
- pytorch
---

# BEN: Background Erase Network

[![arXiv](https://img.shields.io/badge/arXiv-2501.06230-b31b1b.svg)](https://arxiv.org/abs/2501.06230)
[![GitHub](https://img.shields.io/badge/GitHub-BEN-black.svg)](https://github.com/PramaLLC/BEN/)
[![Website](https://img.shields.io/badge/Website-backgrounderase.net-104233)](https://backgrounderase.net)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ben-using-confidence-guided-matting-for/dichotomous-image-segmentation-on-dis-vd)](https://paperswithcode.com/sota/dichotomous-image-segmentation-on-dis-vd?p=ben-using-confidence-guided-matting-for)

## Overview

BEN (Background Erase Network) introduces a novel approach to foreground segmentation through its Confidence Guided Matting (CGM) pipeline. The architecture employs a refiner network that targets and reprocesses the pixels where the base model exhibits lower confidence, resulting in more precise and reliable matting results. This repository provides the official code for our model, as detailed in our research paper: [BEN: Background Erase Network](https://arxiv.org/abs/2501.06230).

## BEN2 Access

BEN2 is now publicly available, trained on DIS5K and our 22K-image proprietary segmentation dataset. The enhanced model delivers superior performance in hair matting, 4K processing, object segmentation, and edge refinement. Access the base model on Hugging Face, try the full model through our free web demo, or integrate BEN2 into your project with our API:
- 🤗 [PramaLLC/BEN2](https://huggingface.co/PramaLLC/BEN2)
- 🌐 [backgrounderase.net](https://backgrounderase.net)

## Model Access

The base model is publicly available on Hugging Face and free for commercial use:
- 🤗 [PramaLLC/BEN](https://huggingface.co/PramaLLC/BEN)

## Contact Us

- For access to our commercial model, email us at sales@pramadevelopment.com
- Our website: https://pramadevelopment.com/
- Follow us on X: https://x.com/PramaResearch/

## Quick Start Code (inside the cloned repo)

```python
import model
from PIL import Image
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

file = "./image.png"  # input image

ben = model.BEN_Base().to(device).eval()  # initialize the base model
ben.loadcheckpoints("./BEN_Base.pth")     # load the pretrained weights

image = Image.open(file)
mask, foreground = ben.inference(image)   # returns the predicted mask and the extracted foreground

mask.save("./mask.png")
foreground.save("./foreground.png")
```

# BEN SOTA Benchmarks on DIS5K Eval

![Demo Results](demo.jpg)

### BEN_Base + BEN_Refiner (commercial model, please contact us for more information):
- MAE: 0.0270
- DICE: 0.8989
- IOU: 0.8506
- BER: 0.0496
- ACC: 0.9740

### BEN_Base (94 million parameters):
- MAE: 0.0309
- DICE: 0.8806
- IOU: 0.8371
- BER: 0.0516
- ACC: 0.9718

### MVANet (previous SOTA):
- MAE: 0.0353
- DICE: 0.8676
- IOU: 0.8104
- BER: 0.0639
- ACC: 0.9660

### BiRefNet (not tested in-house):
- MAE: 0.038

### InSPyReNet (not tested in-house):
- MAE: 0.042

## Features

- Background removal from images
- Generates both a binary mask and a foreground image
- CUDA support for GPU acceleration
- Simple API for easy integration

## Installation

1. Clone the [BEN repository](https://github.com/PramaLLC/BEN/)
2. Install the dependencies: `pip install -r requirements.txt`
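
The quick start saves the mask and foreground to disk, but the outputs can also be composited directly onto a new background with PIL. The following is a minimal sketch, assuming (as the PNG outputs above suggest) that the saved mask is a single-channel image and the saved foreground is RGBA; the file names are placeholders from the quick start.

```python
from PIL import Image

# Assumes mask.png / foreground.png were produced by the quick start above.
foreground = Image.open("./foreground.png").convert("RGBA")
mask = Image.open("./mask.png").convert("L")

# Paste the extracted subject onto a new background, using the mask as alpha.
background = Image.new("RGBA", foreground.size, (255, 255, 255, 255))  # plain white backdrop
composite = Image.composite(foreground, background, mask)
composite.convert("RGB").save("./composite.jpg")
```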
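
For context on the benchmark numbers, MAE, DICE, IOU, BER, and ACC are standard binary-segmentation metrics. The official evaluation follows the DIS benchmark tooling; the sketch below only illustrates the usual per-image definitions on a pair of predicted and ground-truth mattes (NumPy arrays in [0, 1]; names and thresholding are illustrative, not the exact protocol).

```python
import numpy as np

def segmentation_metrics(pred, gt, threshold=0.5):
    """Illustrative MAE / Dice / IoU / BER / accuracy for one image.

    pred, gt: float arrays in [0, 1] with the same shape. This mirrors the
    common definitions; the official DIS evaluation code may differ in
    details such as thresholding and dataset-level averaging.
    """
    mae = np.abs(pred - gt).mean()

    p = pred >= threshold
    g = gt >= threshold
    tp = np.logical_and(p, g).sum()
    tn = np.logical_and(~p, ~g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()

    eps = 1e-8
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    ber = 1 - 0.5 * (tp / (tp + fn + eps) + tn / (tn + fp + eps))  # balanced error rate
    acc = (tp + tn) / (tp + tn + fp + fn)

    return {"MAE": mae, "DICE": dice, "IOU": iou, "BER": ber, "ACC": acc}
```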
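
Finally, the overview describes Confidence Guided Matting as a refiner that revisits pixels where the base model is unsure. The snippet below is only a conceptual sketch of that idea under assumed interfaces (a base model and refiner callable on tensors, confidence derived from how close the soft matte is to 0.5); it is not the paper's actual implementation, which is provided in the repository and detailed in the paper.

```python
import torch

def confidence_guided_matting(base_model, refiner, image, conf_threshold=0.5):
    """Conceptual sketch of the CGM idea described in the overview.

    The base model predicts a soft matte; pixels whose prediction is
    uncertain (close to 0.5) are re-estimated by the refiner and blended
    back in. The real BEN pipeline may differ substantially.
    """
    coarse = base_model(image)                     # soft matte in [0, 1], shape (B, 1, H, W)
    confidence = torch.abs(coarse - 0.5) * 2       # 0 = maximally uncertain, 1 = fully confident
    low_conf = confidence < conf_threshold         # pixels the refiner should revisit

    refined = refiner(image, coarse)               # refiner re-predicts the matte
    return torch.where(low_conf, refined, coarse)  # keep the base output where it was confident
```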