
BRIA 2.3 FAST: Text-to-Image Model for Commercial Licensing

Introducing BRIA 2.3 FAST, the LCM-distilled version of BRIA 2.3. This model offers the best combination of quality and latency in the 2.X family.

Get Access

BRIA 2.3 FAST is available everywhere you build, whether as source code and weights, ComfyUI nodes, or API endpoints.

  • API Endpoint: Bria.ai, fal.ai
  • ComfyUI: Use it in workflows
  • Interested in BRIA 2.3 FAST weights? Purchase is required to license and access BRIA 2.3 FAST, ensuring royalty management with our data partners and full liability coverage for commercial use.
    • Are you a startup or a student? We encourage you to apply for our Startup Program to request access. This program is designed to support emerging businesses and academic pursuits with our cutting-edge technology.
    • Contact us today to unlock the potential of BRIA 2.3 FAST! By submitting the form above, you agree to BRIA’s Privacy policy and Terms & conditions.

For more information, please visit our website.

Join our Discord community for more information, tutorials, tools, and to connect with other users!

What's New

BRIA 2.3 FAST is a speedy version of BRIA 2.3 that provides an optimal balance between speed and accuracy. Engineered for efficiency, it generates an image in only 1.64 seconds on a standard NVIDIA A10 GPU, achieving excellent image quality with an 80% reduction in inference time.
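
The 1.64-second figure is BRIA's own measurement on an A10. As a rough sketch (not BRIA's benchmark script) of how such a number can be reproduced, assuming the pipe object from the Diffusers example further down:

import time
import torch

prompt = "A portrait of a singer, golden designs, highly detailed"

# Warm-up run so one-time CUDA initialization is not counted in the timing.
_ = pipe(prompt, num_inference_steps=8, guidance_scale=1.0)

torch.cuda.synchronize()   # wait for all queued GPU work to finish
start = time.perf_counter()
_ = pipe(prompt, num_inference_steps=8, guidance_scale=1.0)
torch.cuda.synchronize()   # ensure generation is complete before stopping the timer
print(f"Generation time: {time.perf_counter() - start:.2f} s")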

The model was distilled using the LCM technique and supports multiple aspect ratios, with the default resolution being 1024x1024. Similar to Bria AI 2.3, it presents improved realism and aesthetics.

Our evaluations show that our model achieves image quality comparable to its teacher, BRIA 2.3, and outperforms the SDXL LCM. While SDXL Turbo is faster, our model produces significantly better human faces as it supports higher resolution. These assessments were conducted by measuring human preferences.

CLICK HERE FOR A DEMO

Key Features

  • Legally Compliant: Offers full legal liability coverage for copyright and privacy infringements. Thanks to training on 100% licensed data from leading data partners, we ensure the ethical use of content.

  • Patented Attribution Engine: Our attribution engine compensates our data partners, powered by our proprietary and patented algorithms.

  • Enterprise-Ready: Specifically designed for business applications, Bria AI 2.3 delivers high-quality, compliant imagery for a variety of commercial needs.

  • Customizable Technology: Provides access to source code and weights for extensive customization, catering to specific business requirements.

Model Description

  • Developed by: BRIA AI
  • Model type: Text-to-Image model
  • License: BRIA 2.3 FAST Licensing terms & conditions.
  • Purchase is required to license and access the model.
  • Model Description: BRIA 2.3 Fast is an efficient text-to-image model trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage.
  • Resources for more information: BRIA AI

Code example using Diffusers

pip install diffusers

from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler
import torch

# Load the distilled FAST UNet and plug it into the base BRIA pipeline
unet = UNet2DConditionModel.from_pretrained("briaai/BRIA-2.3-FAST", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("briaai/BRIA-2.3-BETA", unet=unet, torch_dtype=torch.float16)

# Required for BRIA models (see tips below)
pipe.force_zeros_for_empty_prompt = False

# Use the LCM scheduler to match the LCM distillation
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background"

# 8 steps and guidance_scale 1.0 are the recommended settings for the FAST model
image = pipe(prompt, num_inference_steps=8, guidance_scale=1.0).images[0]

Some tips for using our text-to-image model at inference:

  1. You must set pipe.force_zeros_for_empty_prompt = False
  2. Using a negative prompt is recommended.
  3. We support multiple aspect ratios, but the overall resolution should be approximately 1024*1024 = 1M pixels, for example: (1024, 1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960, 1088).
  4. The FAST model works well with just 8 inference steps.
  5. For the FAST model, use guidance_scale 1.0 or 0.0; note that in this configuration the negative prompt is not relevant (see the sketch after this list).
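
As a minimal illustration of tips 3-5 (a sketch, not BRIA's official example), the pipe object from the Diffusers code above can generate a non-square image like this:

# Assumes `pipe` is already configured as in the Diffusers example above.
prompt = "A wide landscape photo of a coastal village at sunset, highly detailed"

image = pipe(
    prompt,
    width=1280, height=768,   # one of the supported ~1M-pixel resolutions
    num_inference_steps=8,    # the FAST model works well with just 8 steps
    guidance_scale=1.0,       # use 1.0 or 0.0; a negative prompt is not relevant here
).images[0]
image.save("coastal_village.png")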