If you are using AWS, give this a read: it is a living document showing how to deploy and fine-tune DeepSeek R1 models with Hugging Face on AWS.
We're working hard to enable all the scenarios, whether you want to deploy to Inference Endpoints, SageMaker, or EC2; with GPUs or with Trainium & Inferentia.
We have full support for the distilled models; support for DeepSeek-R1 itself is coming soon! I'll keep you posted.
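One deployment path for the distilled models is SageMaker with the Hugging Face LLM (TGI) container. A minimal sketch, assuming the `sagemaker` SDK is installed and you have a SageMaker execution role; the model ID, instance type, and token limits below are illustrative, not a tested configuration:

```python
def tgi_env(model_id: str, num_gpus: int = 1) -> dict:
    """Build the TGI container environment for a Hub model (values illustrative)."""
    return {
        "HF_MODEL_ID": model_id,          # model to pull from the Hugging Face Hub
        "SM_NUM_GPUS": str(num_gpus),     # tensor-parallel degree
        "MAX_INPUT_LENGTH": "4096",
        "MAX_TOTAL_TOKENS": "8192",
    }

def deploy_to_sagemaker(model_id: str, role_arn: str):
    # Imported lazily so the helper above works without the SDK installed.
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

    model = HuggingFaceModel(
        env=tgi_env(model_id),
        role=role_arn,  # IAM role with SageMaker permissions
        image_uri=get_huggingface_llm_image_uri("huggingface"),
    )
    # Spins up a real, billed endpoint; pick an instance that fits the model.
    return model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

if __name__ == "__main__":
    deploy_to_sagemaker("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", role_arn="<your-role-arn>")
```

The distilled 7B variant is shown here; larger distills need bigger instances and a higher `SM_NUM_GPUS`.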
Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance Physical AI development. The release includes the tokenizers collection nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6
It's the 2nd of December, here's your Cyber Monday present!
We're cutting prices on Hugging Face Inference Endpoints and Spaces!
Our friends at Google Cloud are treating us to a 40% price cut on GCP NVIDIA A100 GPUs for the next 3 months. We have further reductions on all instances, ranging from 20% to 50%.
If you use Google Kubernetes Engine to host your ML workloads, I think this series of videos is a great way to kickstart your journey of deploying LLMs, in less than 10 minutes! Thank you @wietse-venema-demo!
I'd like to share a bit more about the Deep Learning Containers (DLCs) we built with Google Cloud to transform the way you build AI with open models on the platform!
With pre-configured, optimized environments for PyTorch Training (GPU) and Inference (CPU/GPU), Text Generation Inference (GPU), and Text Embeddings Inference (CPU/GPU), the Hugging Face DLCs offer:
- Optimized performance on Google Cloud's infrastructure, with TGI, TEI, and PyTorch acceleration.
- Hassle-free environment setup: no more dependency issues.
- Seamless updates to the latest stable versions.
- Streamlined workflow, reducing dev and maintenance overheads.
- Robust security features of Google Cloud.
- Fine-tuned for optimal performance, integrated with GKE and Vertex AI.
- Community examples for easy experimentation and implementation.
- TPU support for PyTorch Training/Inference and Text Generation Inference is coming soon!
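Once a TGI container (such as the Text Generation Inference DLC) is serving, you can query it over its `/generate` REST endpoint. A minimal sketch using only the standard library, assuming a server at http://localhost:8080; the prompt and sampling parameters are illustrative:

```python
import json
from urllib import request

def generate_payload(prompt: str, max_new_tokens: int = 128, temperature: float = 0.7) -> dict:
    # TGI's /generate endpoint expects {"inputs": ..., "parameters": {...}}.
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def generate(base_url: str, prompt: str) -> str:
    req = request.Request(
        f"{base_url.rstrip('/')}/generate",
        data=json.dumps(generate_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Performs a real HTTP call; TGI responds with {"generated_text": ...}.
    with request.urlopen(req) as resp:
        return json.load(resp)["generated_text"]

if __name__ == "__main__":
    print(generate("http://localhost:8080", "What is Text Generation Inference?"))
```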
Pro tip: if you're a Firefox user, you can set up Hugging Chat as an integrated AI assistant, with contextual links to summarize or simplify any text - handy!
These 15 open models are available for serverless inference on Cloudflare Workers AI, powered by GPUs distributed across 150 datacenters globally. @rita3ko @mchenco @jtkipp @nkothariCF @philschmid
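Workers AI models can be called over Cloudflare's REST API with an account ID and an API token. A minimal standard-library sketch; the account ID, token, and model name below are illustrative placeholders:

```python
import json
from urllib import request

API_BASE = "https://api.cloudflare.com/client/v4"

def run_url(account_id: str, model: str) -> str:
    # Workers AI inference route: /accounts/{account_id}/ai/run/{model}
    return f"{API_BASE}/accounts/{account_id}/ai/run/{model}"

def run_model(account_id: str, api_token: str, model: str, prompt: str) -> dict:
    req = request.Request(
        run_url(account_id, model),
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    # Performs a real, authenticated HTTP call to Cloudflare.
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    out = run_model("<account-id>", "<api-token>", "@cf/meta/llama-3-8b-instruct", "Hi!")
    print(out)
```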