---
license: other
pipeline_tag: conversational
---

This is a copy of the original [BLOOMChat weights](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1/tree/main) that is more efficient to use with [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). In this repo, the original tensors are split into 8 shards targeting 8 GPUs, which allows the model to be run with DeepSpeed-Inference tensor parallelism.

For specific details about the BLOOMChat model itself, please see the [original BLOOMChat model card](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1).
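
Below is a minimal sketch of how the pre-sharded weights might be loaded with DeepSpeed-Inference 8-way tensor parallelism. The checkpoint index filename (`ds_inference_config.json`), the script name in the launch command, and the example prompt are assumptions for illustration; check this repo for the actual shard index file, and see the original BLOOMChat card for the recommended prompt format and generation settings.

```python
# Minimal sketch: load the 8 pre-split shards with DeepSpeed-Inference tensor parallelism.
# Launch across 8 GPUs with, e.g.:  deepspeed --num_gpus 8 run_bloomchat_ds_inference.py
# NOTE: "ds_inference_config.json" below is an assumed name for the shard index file
# shipped in this repo; replace it with the actual file name.
import os

import torch
import deepspeed
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "sambanovasystems/BLOOMChat-176B-v1"  # tokenizer/config from the original model
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "8"))

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the model skeleton on the meta device so no full-size weights are allocated;
# DeepSpeed then loads one shard per GPU from the checkpoint index file.
with deepspeed.OnDevice(dtype=torch.float16, device="meta"):
    model = AutoModelForCausalLM.from_config(
        AutoConfig.from_pretrained(model_name), torch_dtype=torch.float16
    )
model = model.eval()

model = deepspeed.init_inference(
    model,
    mp_size=world_size,                     # 8-way tensor parallelism, matching the 8 shards
    dtype=torch.float16,
    replace_with_kernel_inject=True,
    checkpoint="ds_inference_config.json",  # assumed shard index file in this repo
)

prompt = "<human>: What is tensor parallelism?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(f"cuda:{local_rank}")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```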