Rembg is a tool to remove image backgrounds. It can be used as a CLI, Python library, HTTP server, or Docker container.
If this project has helped you, please consider making a donation.
Sponsor: [PhotoRoom Remove Background API](https://photoroom.com/api), a fast and accurate background remover API.
Requires Python >=3.11, <3.14.
Choose one of the following backends based on your hardware:
pip install "rembg[cpu]" # for library
pip install "rembg[cpu,cli]" # for library + cliFirst, check if your system supports onnxruntime-gpu by visiting onnxruntime.ai and reviewing the installation matrix.
If your system is compatible, run:
pip install "rembg[gpu]" # for library
pip install "rembg[gpu,cli]" # for library + cliNote: NVIDIA GPUs may require
onnxruntime-gpu, CUDA, andcudnn-devel. See #668 for details. Ifrembg[gpu]doesn't work and you can't install CUDA orcudnn-devel, userembg[cpu]withonnxruntimeinstead.
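To check whether onnxruntime can actually see your GPU, you can list its available execution providers. This is plain onnxruntime, not a rembg API:

```python
import onnxruntime as ort

# If the GPU build is installed and CUDA/cuDNN are set up correctly,
# 'CUDAExecutionProvider' appears in this list; otherwise you will
# typically see only 'CPUExecutionProvider'.
print(ort.get_available_providers())
```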
ROCm support requires the onnxruntime-rocm package. Install it by following AMD's documentation.
Once onnxruntime-rocm is installed and working, install rembg with ROCm support:
pip install "rembg[rocm]" # for library
pip install "rembg[rocm,cli]" # for library + cliAfter installation, you can use rembg by typing rembg in your terminal.
The rembg command has 4 subcommands, one for each input type:
- `i` - single files
- `p` - folders (batch processing)
- `s` - HTTP server
- `b` - RGB24 pixel binary stream
You can get help about the main command using:
```bash
rembg --help
```

You can also get help for any subcommand:
```bash
rembg <COMMAND> --help
```

The `i` command is used for processing single files.
Remove background from a remote image:
```bash
curl -s http://input.png | rembg i > output.png
```

Remove background from a local file:
```bash
rembg i path/to/input.png path/to/output.png
```

Specify a model:
```bash
rembg i -m u2netp path/to/input.png path/to/output.png
```

Return only the mask:
```bash
rembg i -om path/to/input.png path/to/output.png
```

Apply alpha matting:
```bash
rembg i -a path/to/input.png path/to/output.png
```

Pass extra parameters (SAM example):
```bash
rembg i -m sam -x '{ "sam_prompt": [{"type": "point", "data": [724, 740], "label": 1}] }' examples/plants-1.jpg examples/plants-1.out.png
```

Pass extra parameters (custom model):
```bash
rembg i -m u2net_custom -x '{"model_path": "~/.u2net/u2net.onnx"}' path/to/input.png path/to/output.png
```
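The same extra parameters can be supplied from Python. A minimal sketch, assuming keyword arguments to `new_session` are forwarded to the session just as the CLI's `-x` JSON is (verify against your installed version):

```python
from rembg import remove, new_session

# Assumption: new_session forwards keyword arguments to the session,
# mirroring the JSON passed via the CLI's -x flag.
session = new_session("u2net_custom", model_path="~/.u2net/u2net.onnx")

with open('path/to/input.png', 'rb') as i:
    with open('path/to/output.png', 'wb') as o:
        o.write(remove(i.read(), session=session))
```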
The `p` command is used for batch processing entire folders.

Process all images in a folder:
```bash
rembg p path/to/input path/to/output
```

Watch mode (process new/changed files automatically):
```bash
rembg p -w path/to/input path/to/output
```

The `s` command is used to start an HTTP server.
```bash
rembg s --host 0.0.0.0 --port 7000 --log_level info
```

For complete API documentation, visit: http://localhost:7000/api
Remove background from an image URL:
curl -s "http://localhost:7000/api/remove?url=http://input.png" -o output.pngRemove background from an uploaded image:
```bash
curl -s -F file=@/path/to/input.jpg "http://localhost:7000/api/remove" -o output.png
```
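The server can also be called from Python. A minimal sketch using the third-party `requests` package, with the endpoint and form field taken from the curl examples above (file names are placeholders):

```python
import requests

# Assumes a server started with: rembg s --host 0.0.0.0 --port 7000
with open('input.jpg', 'rb') as f:
    resp = requests.post(
        'http://localhost:7000/api/remove',
        files={'file': f},
    )
resp.raise_for_status()

with open('output.png', 'wb') as out:
    out.write(resp.content)
```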
The `b` command processes a sequence of RGB24 images from stdin. It is intended to be used with programs like FFmpeg that output RGB24 pixel data to stdout.

```bash
rembg b <width> <height> -o <output_specifier>
```

Arguments:
| Argument | Description |
|---|---|
| `width` | Width of input image(s) |
| `height` | Height of input image(s) |
| `output_specifier` | Printf-style specifier for output filenames (e.g., `output-%03u.png` produces `output-000.png`, `output-001.png`, etc.). Omit to write to stdout. |
Example with FFmpeg:
```bash
ffmpeg -i input.mp4 -ss 10 -an -f rawvideo -pix_fmt rgb24 pipe:1 | rembg b 1280 720 -o folder/output-%03u.png
```

Note: The width and height must match FFmpeg's output dimensions. The flags `-an -f rawvideo -pix_fmt rgb24 pipe:1` are required for FFmpeg compatibility.
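The same pipeline can be driven from Python using only the standard library. A sketch with `subprocess`, reusing the dimensions and output pattern from the example above:

```python
import subprocess

# Decode frames with FFmpeg and stream raw RGB24 bytes into `rembg b`.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-i", "input.mp4", "-ss", "10",
     "-an", "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"],
    stdout=subprocess.PIPE,
)

# Width and height must match FFmpeg's output dimensions.
subprocess.run(
    ["rembg", "b", "1280", "720", "-o", "folder/output-%03u.png"],
    stdin=ffmpeg.stdout,
    check=True,
)
ffmpeg.stdout.close()
ffmpeg.wait()
```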
Input and output as bytes:
```python
from rembg import remove

with open('input.png', 'rb') as i:
    with open('output.png', 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)
```

Input and output as a PIL image:
```python
from rembg import remove
from PIL import Image

input = Image.open('input.png')
output = remove(input)
output.save('output.png')
```

Input and output as a NumPy array:
```python
from rembg import remove
import cv2

input = cv2.imread('input.png')
output = remove(input)
cv2.imwrite('output.png', output)
```

Force output as bytes:
```python
from rembg import remove

with open('input.png', 'rb') as i:
    with open('output.png', 'wb') as o:
        input = i.read()
        output = remove(input, force_return_bytes=True)
        o.write(output)
```

Batch processing with session reuse (recommended for performance):
```python
from pathlib import Path

from rembg import remove, new_session

session = new_session()

for file in Path('path/to/folder').glob('*.png'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))

    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)
```

For more examples, see the examples page.
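The CLI flags shown earlier map onto keyword arguments of `remove` and the model argument of `new_session`. A minimal sketch (argument names as in recent rembg releases; verify against your installed version):

```python
from rembg import remove, new_session

# `-m u2netp` on the CLI corresponds to the model name passed here.
session = new_session("u2netp")

with open('input.png', 'rb') as i:
    data = i.read()

# `-om` corresponds to only_mask, `-a` to alpha_matting.
mask = remove(data, session=session, only_mask=True)
matted = remove(data, session=session, alpha_matting=True)

with open('mask.png', 'wb') as o:
    o.write(mask)
with open('matted.png', 'wb') as o:
    o.write(matted)
```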
Replace the `rembg` command with `docker run danielgatis/rembg`:
```bash
docker run -v path/to/input:/rembg danielgatis/rembg i input.png path/to/output/output.png
```

Requirements: Your host must have the NVIDIA Container Toolkit installed.
CUDA acceleration requires `cudnn-devel`, so you need to build the Docker image yourself. See #668 for details.
Build the image:
```bash
docker build -t rembg-nvidia-cuda-cudnn-gpu -f Dockerfile_nvidia_cuda_cudnn_gpu .
```

Note: This image requires ~11GB of disk space (the CPU version is ~1.6GB). Models are not included.
Run the container:
```bash
sudo docker run --rm -it --gpus all -v /dev/dri:/dev/dri -v $PWD:/rembg rembg-nvidia-cuda-cudnn-gpu i -m birefnet-general input.png output.png
```

Tips:

- You can create your own NVIDIA CUDA image and install `rembg[gpu,cli]` in it.
- Use `-v /path/to/models/:/root/.u2net` to store model files outside the container, avoiding re-downloads.
All models are automatically downloaded and saved to `~/.u2net/` on first use.
- u2net (download, source): A pre-trained model for general use cases.
- u2netp (download, source): A lightweight version of u2net model.
- u2net_human_seg (download, source): A pre-trained model for human segmentation.
- u2net_cloth_seg (download, source): A pre-trained model for clothes parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body, and full body.
- silueta (download, source): Same as u2net, but the size is reduced to 43 MB.
- isnet-general-use (download, source): A new pre-trained model for general use cases.
- isnet-anime (download, source): A high-accuracy segmentation model for anime characters.
- sam (download encoder, download decoder, source): A pre-trained model for any use case.
- birefnet-general (download, source): A pre-trained model for general use cases.
- birefnet-general-lite (download, source): A light pre-trained model for general use cases.
- birefnet-portrait (download, source): A pre-trained model for human portraits.
- birefnet-dis (download, source): A pre-trained model for dichotomous image segmentation (DIS).
- birefnet-hrsod (download, source): A pre-trained model for high-resolution salient object detection (HRSOD).
- birefnet-cod (download, source): A pre-trained model for concealed object detection (COD).
- birefnet-massive (download, source): A pre-trained model trained on a massive dataset.
- bria-rmbg (download, source): A state-of-the-art background removal model by BRIA AI.
This library depends on onnxruntime. Python version support is determined by onnxruntime's compatibility.
If you find this project useful, consider buying me a coffee (or a beer):
Copyright (c) 2020-present Daniel Gatis
Licensed under the MIT License.


