Dataset schema (column name, type, and value-length statistics):

| Column | Type | Lengths / values |
|---|---|---|
| id | string | length 36 |
| status | string | 1 class |
| inserted_at | timestamp[us] | |
| updated_at | timestamp[us] | |
| _server_id | string | length 36 |
| title | string | length 11 to 142 |
| authors | string | length 3 to 297 |
| filename | string | length 5 to 62 |
| content | string | length 2 to 64.1k |
| content_class.responses | sequence | length 1 |
| content_class.responses.users | sequence | length 1 |
| content_class.responses.status | sequence | length 1 |
| content_class.suggestion | sequence | length 1 to 4 |
| content_class.suggestion.agent | null | |
| content_class.suggestion.score | null | |
d1b1592c-5735-4f1a-86be-fb87048043cf
completed
2025-01-16T03:09:54.388936
2025-01-19T19:09:43.456535
7ebd1a76-7100-460b-a03a-16737ee0e571
Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers
sanchit-gandhi
fine-tune-whisper.md
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> In this blog, we present a step-by-step guide on fine-tuning Whisper for any multilingual ASR dataset using Hugging Face 🤗 Transformers. This blog provides in-depth explanations of the Whisper model, the Common Voice dataset and the theory behind fine-tuning, with accompanying code cells to execute the data preparation and fine-tuning steps. For a more streamlined version of the notebook with fewer explanations but all the code, see the accompanying [Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb). ## Table of Contents 1. [Introduction](#introduction) 2. [Fine-tuning Whisper in a Google Colab](#fine-tuning-whisper-in-a-google-colab) 1. [Prepare Environment](#prepare-environment) 2. [Load Dataset](#load-dataset) 3. [Prepare Feature Extractor, Tokenizer and Data](#prepare-feature-extractor-tokenizer-and-data) 4. [Training and Evaluation](#training-and-evaluation) 5. [Building a Demo](#building-a-demo) 3. [Closing Remarks](#closing-remarks) ## Introduction Whisper is a pre-trained model for automatic speech recognition (ASR) published in [September 2022](https://openai.com/blog/whisper/) by the authors Alec Radford et al. from OpenAI. Unlike many of its predecessors, such as [Wav2Vec 2.0](https://arxiv.org/abs/2006.11477), which are pre-trained on un-labelled audio data, Whisper is pre-trained on a vast quantity of **labelled** audio-transcription data, 680,000 hours to be precise. This is an order of magnitude more data than the un-labelled audio data used to train Wav2Vec 2.0 (60,000 hours). What is more, 117,000 hours of this pre-training data is multilingual ASR data. This results in checkpoints that can be applied to over 96 languages, many of which are considered _low-resource_. This quantity of labelled data enables Whisper to be pre-trained directly on the _supervised_ task of speech recognition, learning a speech-to-text mapping from the labelled audio-transcription pre-training data \\({}^1\\). As a consequence, Whisper requires little additional fine-tuning to yield a performant ASR model. This is in contrast to Wav2Vec 2.0, which is pre-trained on the _unsupervised_ task of masked prediction. Here, the model is trained to learn an intermediate mapping from speech to hidden states from un-labelled audio only data. While unsupervised pre-training yields high-quality representations of speech, it does **not** learn a speech-to-text mapping. This mapping is only learned during fine-tuning, thus requiring more fine-tuning to yield competitive performance. When scaled to 680,000 hours of labelled pre-training data, Whisper models demonstrate a strong ability to generalise to many datasets and domains. The pre-trained checkpoints achieve competitive results to state-of-the-art ASR systems, with near 3% word error rate (WER) on the test-clean subset of LibriSpeech ASR and a new state-of-the-art on TED-LIUM with 4.7% WER (_c.f._ Table 8 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)). The extensive multilingual ASR knowledge acquired by Whisper during pre-training can be leveraged for other low-resource languages; through fine-tuning, the pre-trained checkpoints can be adapted for specific datasets and languages to further improve upon these results. 
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It maps a _sequence_ of audio spectrogram features to a _sequence_ of text tokens. First, the raw audio inputs are converted to a log-Mel spectrogram by action of the feature extractor. The Transformer encoder then encodes the spectrogram to form a sequence of encoder hidden states. Finally, the decoder autoregressively predicts text tokens, conditional on both the previous tokens and the encoder hidden states. Figure 1 summarises the Whisper model. <figure> <img src="assets/111_fine_tune_whisper/whisper_architecture.svg" alt="Trulli" style="width:100%"> <figcaption align = "center"><b>Figure 1:</b> Whisper model. The architecture follows the standard Transformer-based encoder-decoder model. A log-Mel spectrogram is input to the encoder. The last encoder hidden states are input to the decoder via cross-attention mechanisms. The decoder autoregressively predicts text tokens, jointly conditional on the encoder hidden states and previously predicted tokens. Figure source: <a href="https://openai.com/blog/whisper/">OpenAI Whisper Blog</a>.</figcaption> </figure> In a sequence-to-sequence model, the encoder transforms the audio inputs into a set of hidden state representations, extracting important features from the spoken speech. The decoder plays the role of a language model, processing the hidden state representations and generating the corresponding text transcriptions. Incorporating a language model **internally** in the system architecture is termed _deep fusion_. This is in contrast to _shallow fusion_, where a language model is combined **externally** with an encoder, such as with CTC + \\(n\\)-gram (_c.f._ [Internal Language Model Estimation](https://arxiv.org/pdf/2011.01991.pdf)). With deep fusion, the entire system can be trained end-to-end with the same training data and loss function, giving greater flexibility and generally superior performance (_c.f._ [ESB Benchmark](https://arxiv.org/abs/2210.13352)). Whisper is pre-trained and fine-tuned using the cross-entropy objective function, a standard objective function for training sequence-to-sequence systems on classification tasks. Here, the system is trained to correctly classify the target text token from a pre-defined vocabulary of text tokens. The Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All 11 of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Layers | Width | Heads | Parameters | English-only | Multilingual | |
[ [ "audio", "transformers", "tutorial", "fine_tuning" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "audio", "transformers", "fine_tuning", "tutorial" ]
null
null
c06c3541-5fc0-4295-aa16-3c640563bc34
completed
2025-01-16T03:09:54.388949
2025-01-16T03:18:04.941313
3ecd6c85-a0bb-483d-be50-21ba96cc717f
From Files to Chunks: Improving HF Storage Efficiency
jsulz, erinys
from-files-to-chunks.md
Hugging Face stores over [30 PB of models, datasets, and spaces](https://huggingface.co/spaces/xet-team/lfs-analysis) in [Git LFS repositories](https://huggingface.co/docs/hub/en/repositories-getting-started#requirements). Because Git stores and versions at the file level, any change to a file requires re-uploading the full asset – expensive operations when average Parquet and CSV files on the Hub range between 200-300 MB, average Safetensor files around 1 GB, and GGUF files can exceed 8 GB. Imagine modifying just a single line of metadata in a GGUF file and waiting for the multi-gigabyte file to upload; in addition to user time and transfer costs, Git LFS also then needs to save full versions of both files, bloating storage costs. The plot below illustrates the growth of LFS storage in model, dataset, and space repositories on the Hub between March 2022 and September 2024: <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/from-files-to-chunks/lfs-analysis-min.png" alt="Parquet Layout" width=90%> </p> Hugging Face's Xet team is taking a different approach to storage by storing files as chunks. By only transferring modified chunks, we can dramatically improve both storage efficiency and iteration speed while ensuring reliable access to evolving datasets and models. Here’s how it works. ## Content-Defined Chunking Foundations The method that we use to chunk files is called content-defined chunking (CDC). Instead of treating a file as an indivisible unit, CDC breaks files down into variable-sized chunks, using the data to define boundaries. To compute the chunks, we apply a [rolling hash algorithm](https://en.wikipedia.org/wiki/Rolling_hash) that scans the file’s byte sequence. Consider a file with the contents:
```bash
transformerstransformerstransformers
```
We’re using text for illustration, but this could be any sequence of bytes. A rolling hash algorithm computes a hash over a sliding window of data. In this case, with a window of length 4 the hash would be computed first over `tran`, then `rans`, then `ansf` and so on until the end of the file. Chunk boundaries are determined when a hash satisfies a predefined condition, such as:
```
hash(data) % 2^12 == 0
```
If the sequence `mers` produces a hash that meets this condition, the file will be split into three chunks:
```bash
transformers | transformers | transformers
```
The content of these chunks is hashed to create a mapping between chunk hash and bytes, and the chunks will eventually be stored in a content-addressed store (CAS). Since all three chunks are identical, we only store one chunk in the CAS for built-in deduplication. 🪄 ## Insertions and Deletions When the contents of a file change, CDC allows for fine-grained updates, making it robust to insertions and deletions. Let’s modify the file by inserting `super`, making the new file contents:
```bash
transformerstransformerssupertransformers
```
After applying the rolling hash again with the same boundary condition, the new chunks look like this:
```bash
transformers | transformers | supertransformers
```
We do not need to save chunks we have seen before; they are already stored. However, `supertransformers` is a new chunk. Thus, the only cost of saving the updated version of this file is uploading and storing one new chunk.
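To make the mechanics concrete, here is a toy, illustrative sketch of content-defined chunking in Python. It is not the Xet team's actual implementation (which uses a true rolling hash over the byte stream); the window size, hash function and modulus below are arbitrary choices, and the modulus is smaller than the `2^12` in the example above so that boundaries are likely even on tiny inputs:

```python
def window_hash(window: bytes) -> int:
    # Simple polynomial hash over the window; a real CDC implementation would
    # update this incrementally as the window slides (e.g. a Rabin fingerprint).
    h = 0
    for b in window:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

def chunk(data: bytes, window: int = 4, modulus: int = 2 ** 6) -> list[bytes]:
    chunks, start = [], 0
    for i in range(window, len(data) + 1):
        # Declare a chunk boundary whenever the hash of the current window
        # satisfies the predefined condition.
        if window_hash(data[i - window:i]) % modulus == 0:
            chunks.append(data[start:i])
            start = i
    if start < len(data):
        chunks.append(data[start:])  # final chunk
    return chunks

content = b"transformers" * 3
chunks = chunk(content)
# Identical chunks map to the same key, so a content-addressed store keeps a single copy.
store = {window_hash(c): c for c in chunks}
```

Re-running the same function on the `supertransformers` variant of the file would re-chunk it under the same boundary condition, and only chunks whose hash is not already in `store` would need to be uploaded.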
To validate this optimization in the real world, we benchmarked our previous implementation of CDC-backed storage at XetHub against Git LFS and found a consistent 50% improvement in storage and transfer performance across three iterative development use cases. One example was the [CORD-19 dataset](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html), a collection of COVID-19 research papers curated between 2020 and 2022 with 50 incremental updates. The comparison between Xet-backed and Git LFS-backed repositories is summarized below: | Metric | Git LFS-backed Repository | Xet-backed Repository | |
[ [ "data", "mlops", "optimization", "efficient_computing" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "data", "mlops", "optimization", "efficient_computing" ]
null
null
8b9f23f0-1d76-4393-ab1d-56aa30cf2453
completed
2025-01-16T03:09:54.388955
2025-01-19T19:14:53.085817
08f6a02f-41d3-4e96-8c05-44dbf39c71d9
Llama can now see and run on your device - welcome Llama 3.2
merve, philschmid, osanseviero, reach-vb, lewtun, ariG23498, pcuenq
llama32.md
Llama 3.2 is out! Today, we welcome the next iteration of the [Llama collection](https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf) to Hugging Face. This time, we’re excited to collaborate with Meta on the release of multimodal and small models. Ten open-weight models (5 multimodal models and 5 text-only ones) are available on the Hub. Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-size GPU, and 90B for large-scale applications. Both versions come in base and instruction-tuned variants. In addition to the four multimodal models, Meta released a new version of Llama Guard with vision support. Llama Guard 3 is a safeguard model that can classify model inputs and generations, including detecting harmful multimodal prompts or assistant responses. Llama 3.2 also includes small text-only language models that can run on-device. They come in two new sizes (1B and 3B) with base and instruct variants, and they have strong capabilities for their sizes. There’s also a small 1B version of Llama Guard that can be deployed alongside these or the larger text models in production use cases. Among the features and integrations being released, we have: - [Model checkpoints on the Hub](https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf) - [Hugging Face Transformers](https://huggingface.co/docs/transformers/v4.45.1/en/model_doc/mllama) and TGI integration for the Vision models - Inference & Deployment Integration with Inference Endpoints, Google Cloud, Amazon SageMaker & DELL Enterprise Hub - Fine-tuning Llama 3.2 11B Vision on a single GPU with [transformers🤗](https://github.com/huggingface/huggingface-llama-recipes/blob/main/fine_tune/Llama-Vision%20FT.ipynb) and [TRL](https://github.com/huggingface/huggingface-llama-recipes/blob/main/fine_tune/sft_vlm.py) ## Table of contents - [What is Llama 3.2 Vision?](#what-is-llama-32-vision) - [Llama 3.2 license changes. Sorry, EU :(](#llama-32-license-changes-sorry-eu-) - [What is special about Llama 3.2 1B and 3B?](#what-is-special-about-llama-32-1b-and-3b) - [Demo](#demo) - [Using Hugging Face Transformers](#using-hugging-face-transformers) - [Llama 3.2 1B & 3B Language Models](#llama-32-1b--3b-language-models) - [Llama 3.2 Vision](#llama-32-vision) - [On-device](#on-device) - [Llama.cpp & Llama-cpp-python](#llamacpp--llama-cpp-python) - [Transformers.js](#transformersjs) - [Fine-tuning Llama 3.2](#fine-tuning-llama-32) - [Hugging Face Partner Integrations](#hugging-face-partner-integrations) - [Additional Resources](#additional-resources) - [Acknowledgements](#acknowledgements) ## What is Llama 3.2 Vision? Llama 3.2 Vision is the most powerful open multimodal model released by Meta. It has great visual understanding and reasoning capabilities and can be used to accomplish a variety of tasks, including visual reasoning and grounding, document question answering, and image-text retrieval. Chain of Thought (CoT) answers are often very good, which makes visual reasoning particularly powerful. Llama 3.2 Vision can be used to process text and an image as well as only text. With image-text prompting, the model can take English inputs, while for text-only prompting, the model can handle multiple languages. The full list of languages in text-only mode is: - English - German - French - Italian - Portuguese - Hindi - Spanish - Thai The architecture of these models is based on the combination of Llama 3.1 LLMs combined with a vision tower and an image adapter. 
The text models used are Llama 3.1 8B for the Llama 3.2 11B Vision model, and Llama 3.1 70B for the 3.2 90B Vision model. To the best of our understanding, the text models were frozen during the training of the vision models to preserve text-only performance. Below you can find some inference examples from the 11B instruction-tuned model that showcase real world knowledge, document reasoning and infographics understanding capabilities. <table><thead><tr><th>Input Image</th><th>Input Prompt</th><th>Answer</th></tr></thead><tbody><tr> <tr> <td style="vertical-align: middle;"><img src="https://huggingface.co/datasets/huggingface/release-assets/resolve/main/wat_arun.jpg" width="300"/></td> <td style="vertical-align: middle;"> <strong>Where is this located in? What are other attractions nearby?</strong> </td> <td style="vertical-align: middle;"> The image depicts the Grand Palace in Bangkok, Thailand. The Grand Palace is a complex of buildings that served as the official residence of the King of Siam and the administrative seat of government for 150 years. It is one of the most popular tourist attractions in Bangkok and a must-see for anyone visiting the city.<br><br> Some other attractions near the Grand Palace include:<br> * Wat Phra Kaew: This temple is located within the Grand Palace complex and is home to the famous Emerald Buddha statue.<br> * Wat Arun: This temple is located on the west bank of the Chao Phraya River and is known for its stunning architecture and beautiful views of the city.<br> * Wat Pho: This temple is located near the Grand Palace and is home to a large reclining Buddha statue.<br> </td> </tr> <tr> <td style="vertical-align: middle;"><img src="https://huggingface.co/datasets/huggingface/release-assets/resolve/main/weather_events.png" width="300"/></td> <td style="vertical-align: middle;"> <strong>Where do the droughts happen according to this diagram?</strong> </td> <td style="vertical-align: middle;"> <strong>Drought Regions in 2016</strong><br>According to the diagram, droughts occurred in two distinct regions in 2016:<br> * <strong>Eastern and Southern Africa:</strong> This region experienced severe droughts.<br> * <strong>Chile:</strong> Chile was the driest in January in over 50 years. These areas faced significant water scarcity issues during that period. </td> </tr> <tr> <td style="vertical-align: middle;"><img src="https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png" width="300"/></td> <td style="vertical-align: middle;"> <strong>How long does it take from invoice date to due date? Be short and concise.</strong> </td> <td style="vertical-align: middle;"> <strong>Invoice Date:</strong> 11/02/2019<br> <strong>Due Date:</strong> 26/02/2019<br> <strong>Time Difference Calculation:</strong><br> 26/02/2019 - 11/02/2019 = 15 days<br> The time difference between the invoice date and the due date is <strong>15 days.</strong> </td> </tr> </table> Vision models have a context length of 128k tokens, which allows for multiple-turn conversations that may contain images. However, the model works best when attending to a single image, so the `transformers` implementation only attends to the last image provided in the input. This preserves quality and saves memory. The 11B base model supports tile sizes of 448, while the instruct version and the 90B models all use tile sizes of 560. The models were trained on a massive dataset of 6 billion image-text pairs with a diverse data mixture. 
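The excerpt above stops before the post's usage section, so here is a rough sketch of what an inference call with the 🤗 Transformers integration looks like. Treat the class name, arguments and prompt as illustrative rather than the post's official snippet; the image URL is the Wat Arun example from the table above:

```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/wat_arun.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Chat-style prompt with one image placeholder followed by the question
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Where is this located in? What are other attractions nearby?"}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```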
This scale and diversity of training data make the models excellent candidates for fine-tuning on downstream tasks. For reference, you can see below how the 11B, 90B and their instruction fine-tuned versions compare on some benchmarks, as reported by Meta. Please refer to the model cards for additional benchmarks and details. | | 11B | 11B (instruction-tuned) | 90B | 90B (instruction-tuned) | Metric | |
[ [ "llm", "deployment", "multi_modal", "efficient_computing" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "llm", "multi_modal", "deployment", "efficient_computing" ]
null
null
19496f50-cd74-4f8a-af12-74548ebe452f
completed
2025-01-16T03:09:54.388963
2025-01-16T13:36:08.845745
d6b646eb-2e28-4287-bc9b-7e6a8f445806
Parameter-Efficient Fine-Tuning using 🤗 PEFT
smangrul, sayakpaul
peft.md
## Motivation Large Language Models (LLMs) based on the transformer architecture, like GPT, T5, and BERT, have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. They have also started foraying into other domains, such as Computer Vision (CV) (ViT, Stable Diffusion, LayoutLM) and Audio (Whisper, XLS-R). The conventional paradigm is large-scale pretraining on generic web-scale data, followed by fine-tuning on downstream tasks. Fine-tuning these pretrained LLMs on downstream datasets results in huge performance gains when compared to using the pretrained LLMs out-of-the-box (zero-shot inference, for example). However, as models get larger and larger, full fine-tuning becomes infeasible on consumer hardware. In addition, storing and deploying fine-tuned models independently for each downstream task becomes very expensive, because fine-tuned models are the same size as the original pretrained model. Parameter-Efficient Fine-tuning (PEFT) approaches are meant to address both problems! PEFT approaches only fine-tune a small number of (extra) model parameters while freezing most parameters of the pretrained LLMs, thereby greatly decreasing the computational and storage costs. This also overcomes the issue of [catastrophic forgetting](https://arxiv.org/abs/1312.6211), a behaviour observed during full fine-tuning of LLMs. PEFT approaches have also been shown to outperform full fine-tuning in low-data regimes and to generalize better to out-of-domain scenarios. PEFT can be applied to various modalities, e.g., [image classification](https://github.com/huggingface/peft/tree/main/examples/image_classification) and [stable diffusion dreambooth](https://github.com/huggingface/peft/tree/main/examples/lora_dreambooth). PEFT also improves portability: tuning a model with PEFT methods yields tiny checkpoints of only a few MBs, compared to the large checkpoints of full fine-tuning. For example, `bigscience/mt0-xxl` takes up 40GB of storage, so full fine-tuning produces a 40GB checkpoint for each downstream dataset, whereas PEFT methods require just a few MBs per dataset while achieving performance comparable to full fine-tuning. The small trained weights from PEFT approaches are added on top of the pretrained LLM, so the same LLM can be used for multiple tasks by adding small weights without having to replace the entire model. **In short, PEFT approaches enable you to get performance comparable to full fine-tuning while only having a small number of trainable parameters.** Today, we are excited to introduce the [🤗 PEFT](https://github.com/huggingface/peft) library, which provides the latest Parameter-Efficient Fine-tuning techniques seamlessly integrated with 🤗 Transformers and 🤗 Accelerate. This enables using the most popular and performant models from Transformers coupled with the simplicity and scalability of Accelerate. Below are the currently supported PEFT methods, with more coming soon: 1. LoRA: [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/pdf/2106.09685.pdf) 2. Prefix Tuning: [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf) 3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf) 4. P-Tuning: [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf) ## Use Cases We explore many interesting use cases [here](https://github.com/huggingface/peft#use-cases).
These are a few of the most interesting ones: 1. Using 🤗 PEFT LoRA for tuning the `bigscience/T0_3B` model (3 billion parameters) on consumer hardware with 11GB of RAM, such as Nvidia GeForce RTX 2080 Ti, Nvidia GeForce RTX 3080, etc., using 🤗 Accelerate's DeepSpeed integration: [peft_lora_seq2seq_accelerate_ds_zero3_offload.py](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). This means you can tune such large LLMs in Google Colab. 2. Taking the previous example a notch up by enabling INT8 tuning of the `OPT-6.7b` model (6.7 billion parameters) in Google Colab using 🤗 PEFT LoRA and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes): [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing) 3. Stable Diffusion Dreambooth training using 🤗 PEFT on consumer hardware with 11GB of RAM, such as Nvidia GeForce RTX 2080 Ti, Nvidia GeForce RTX 3080, etc. Try out the Space demo, which should run seamlessly on a T4 instance (16GB GPU): [smangrul/peft-lora-sd-dreambooth](https://huggingface.co/spaces/smangrul/peft-lora-sd-dreambooth). <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_lora_dreambooth_gradio_space.png" alt="peft lora dreambooth gradio space"><br> <em>PEFT LoRA Dreambooth Gradio Space</em> </p> ## Training your model using 🤗 PEFT Let's consider the case of fine-tuning [`bigscience/mt0-large`](https://huggingface.co/bigscience/mt0-large) using LoRA. 1. Let's get the necessary imports
```diff
  from transformers import AutoModelForSeq2SeqLM
+ from peft import get_peft_model, LoraConfig, TaskType

  model_name_or_path = "bigscience/mt0-large"
  tokenizer_name_or_path = "bigscience/mt0-large"
```
2. Create a config corresponding to the PEFT method
```py
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
)
```
3. Wrap the base 🤗 Transformers model by calling `get_peft_model`
```diff
  model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ model = get_peft_model(model, peft_config)
+ model.print_trainable_parameters()
# output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282
```
That's it! The rest of the training loop remains the same. Please refer to [peft_lora_seq2seq.ipynb](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq.ipynb) for an end-to-end example. 4. When you are ready to save the model for inference, just do the following.
```py
model.save_pretrained("output_dir")
# model.push_to_hub("my_awesome_peft_model") also works
```
This will only save the incremental PEFT weights that were trained. For example, you can find the `bigscience/T0_3B` model tuned using LoRA on the `twitter_complaints` RAFT dataset here: [smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM](https://huggingface.co/smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM). Notice that it only contains 2 files: `adapter_config.json` and `adapter_model.bin`, with the latter being just 19MB.
5. To load it for inference, follow the snippet below:
```diff
  from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ from peft import PeftModel, PeftConfig
  import torch

  peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM"
  config = PeftConfig.from_pretrained(peft_model_id)
  model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
+ model = PeftModel.from_pretrained(model, peft_model_id)
  tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

  device = "cuda" if torch.cuda.is_available() else "cpu"
  model = model.to(device)
  model.eval()
  inputs = tokenizer("Tweet text : @HondaCustSvc Your customer service has been horrible during the recall process. I will never purchase a Honda again. Label :", return_tensors="pt")

  with torch.no_grad():
      outputs = model.generate(input_ids=inputs["input_ids"].to(device), max_new_tokens=10)
      print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
# 'complaint'
```
## Next steps We've released PEFT as an efficient way of tuning large LLMs on downstream tasks and domains, saving a lot of compute and storage while achieving performance comparable to full fine-tuning. In the coming months, we'll be exploring more PEFT methods, such as (IA)3 and bottleneck adapters. Also, we'll focus on new use cases, such as INT8 training of the [`whisper-large`](https://huggingface.co/openai/whisper-large) model in Google Colab and tuning of RLHF components such as the policy and ranker using PEFT approaches. In the meantime, we're excited to see how industry practitioners apply PEFT to their use cases - if you have any questions or feedback, open an issue on our [GitHub repo](https://github.com/huggingface/peft) 🤗. Happy Parameter-Efficient Fine-Tuning!
[ [ "llm", "optimization", "fine_tuning", "efficient_computing" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "llm", "fine_tuning", "efficient_computing", "optimization" ]
null
null
0babcd3d-ac07-409a-bd5c-a78c9e0a604c
completed
2025-01-16T03:09:54.388971
2025-01-19T18:55:51.826592
e20f2833-d114-4c26-bcf3-f6fad87a7604
Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers
Narsil
asr-chunking.md
```
Tl;dr: This post explains how to use the specificities of the Connectionist Temporal Classification (CTC) architecture in order to achieve very good quality automatic speech recognition (ASR) even on arbitrarily long files or during live inference.
```
**Wav2Vec2** is a popular pre-trained model for speech recognition. Released in [September 2020](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, *e.g.* [*G. Ng et al.*, 2021](https://arxiv.org/pdf/2104.03416.pdf), [*Chen et al*, 2021](https://arxiv.org/abs/2110.13900), [*Hsu et al.*, 2021](https://arxiv.org/abs/2106.07447) and [*Babu et al.*, 2021](https://arxiv.org/abs/2111.09296). On the Hugging Face Hub, Wav2Vec2's most popular pre-trained checkpoint currently amounts to over [**250,000** monthly downloads](https://huggingface.co/facebook/wav2vec2-base-960h). **Wav2Vec2** is at its core a **transformer** model, and one caveat of **transformers** is that they can usually only handle a finite sequence length. Either because the model uses **position encodings** (not the case here), or simply because the cost of attention in transformers is O(n²) in `sequence_length`, meaning that very large values of `sequence_length` explode in complexity/memory. So with finite hardware (even a very large GPU like an A100), you simply cannot run Wav2Vec2 on an hour-long file. Your program will crash. Let's try it!
```bash
pip install transformers
```
```python
from transformers import pipeline

# This will work on any of the thousands of models at
# https://huggingface.co/models?pipeline_tag=automatic-speech-recognition
pipe = pipeline(model="facebook/wav2vec2-base-960h")

# The Public Domain LibriVox file used for the test
#!wget https://ia902600.us.archive.org/8/items/thecantervilleghostversion_2_1501_librivox/thecantervilleghostversion2_01_wilde_128kb.mp3 -O very_long_file.mp3

pipe("very_long_file.mp3")
# Crashes out of memory!

pipe("very_long_file.mp3", chunk_length_s=10)
# This works and prints a very long string!
# This whole blog post will explain how to make things work
```
## Simple Chunking
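As a preview of the chunking approach developed in the rest of the post, here is a hedged sketch of chunked inference with overlapping strides; the `chunk_length_s` and `stride_length_s` pipeline arguments are what the post goes on to explain, and the specific values below are illustrative:

```python
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-base-960h")

# Split the audio into 10s chunks, with 4s of left context and 2s of right context
# around each chunk; the overlapping logits are reconciled so that words cut in half
# at a chunk boundary can still be decoded correctly.
output = pipe(
    "very_long_file.mp3",
    chunk_length_s=10,
    stride_length_s=(4, 2),
)
print(output["text"][:200])
```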
[ [ "audio", "transformers", "implementation", "tutorial" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "audio", "transformers", "implementation", "tutorial" ]
null
null
9e199cb4-f3bd-4a06-853c-c610d664fa68
completed
2025-01-16T03:09:54.388978
2025-01-16T13:35:49.872581
9f196dab-85ee-4723-90e9-840552bc8a0c
Faster Training and Inference: Habana Gaudi®2 vs Nvidia A100 80GB
regisss
habana-gaudi-2-benchmark.md
In this article, you will learn how to use [Habana® Gaudi®2](https://habana.ai/training/gaudi2/) to accelerate model training and inference, and train bigger models with 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/index). Then, we present several benchmarks, including BERT pre-training, Stable Diffusion inference and T5-3B fine-tuning, to assess the performance differences between first-generation Gaudi, Gaudi2 and Nvidia A100 80GB. Spoiler alert: Gaudi2 is about twice as fast as Nvidia A100 80GB for both training and inference! [Gaudi2](https://habana.ai/training/gaudi2/) is the second-generation AI hardware accelerator designed by Habana Labs. A single server contains 8 accelerator devices with 96GB of memory each (versus 32GB on first-generation Gaudi and 80GB on A100 80GB). The Habana SDK, [SynapseAI](https://developer.habana.ai/), is common to both first-gen Gaudi and Gaudi2. That means that 🤗 Optimum Habana, which offers a very user-friendly interface between the 🤗 Transformers and 🤗 Diffusers libraries and SynapseAI, **works the exact same way on Gaudi2 as on first-gen Gaudi!** So if you already have ready-to-use training or inference workflows for first-gen Gaudi, we encourage you to try them on Gaudi2, as they will work without a single change. ## How to Get Access to Gaudi2? One of the easy, cost-efficient ways that Intel and Habana have made Gaudi2 available is on the Intel Developer Cloud. To start using Gaudi2 there, follow these steps: 1. Go to the [Intel Developer Cloud landing page](https://www.intel.com/content/www/us/en/developer/tools/devcloud/services.html) and sign in to your account, or register if you do not have one. 2. Go to the [Intel Developer Cloud management console](https://scheduler.cloud.intel.com/#/systems). 3. Select *Habana Gaudi2 Deep Learning Server featuring eight Gaudi2 HL-225H mezzanine cards and latest Intel® Xeon® Processors* and click on *Launch Instance* in the lower right corner as shown below. <figure class="image table text-center m-0 w-full"> <img src="assets/habana-gaudi-2-benchmark/launch_instance.png" alt="Cloud Architecture"/> </figure> 4. You can then request an instance: <figure class="image table text-center m-0 w-full"> <img src="assets/habana-gaudi-2-benchmark/request_instance.png" alt="Cloud Architecture"/> </figure> 5. Once your request is validated, re-do step 3 and click on *Add OpenSSH Publickey* to add a payment method (credit card or promotion code) and an SSH public key that you can generate with `ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa`. You may be redirected to step 3 each time you add a payment method or an SSH public key. 6. Re-do step 3 and then click on *Launch Instance*. You will have to accept the proposed general conditions to actually launch the instance. 7. Go to the [Intel Developer Cloud management console](https://scheduler.cloud.intel.com/#/systems) and click on the tab called *View Instances*. 8. You can copy the SSH command to access your Gaudi2 instance remotely! > If you terminate the instance and want to use Gaudi2 again, you will have to re-do the whole process. You can find more information about this process [here](https://scheduler.cloud.intel.com/public/Intel_Developer_Cloud_Getting_Started.html). ## Benchmarks Several benchmarks were performed to assess the abilities of first-gen Gaudi, Gaudi2 and A100 80GB for both training and inference, and for models of various sizes.
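To give a sense of what such a training workflow looks like in code, here is a minimal, illustrative sketch using 🤗 Optimum Habana's `GaudiTrainer`. It is not one of the benchmark scripts discussed below, and the model, dataset, and Gaudi configuration names are placeholder choices:

```python
from datasets import load_dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small text-classification subset, tokenized to fixed length for simplicity
dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# The Gaudi-specific pieces are the training arguments below;
# everything else mirrors a vanilla Transformers Trainer script.
training_args = GaudiTrainingArguments(
    output_dir="./bert-sst2-gaudi",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
    per_device_train_batch_size=32,
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```

Because the SDK is shared across generations, the same script is meant to run unchanged on first-gen Gaudi and Gaudi2.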
### Pre-Training BERT A few months ago, [Philipp Schmid](https://huggingface.co/philschmid), technical lead at Hugging Face, presented [how to pre-train BERT on Gaudi with 🤗 Optimum Habana](https://huggingface.co/blog/pretraining-bert). 65k training steps were performed with a batch size of 32 samples per device (so 8*32=256 in total) for a total training time of 8 hours and 53 minutes (you can see the TensorBoard logs of this run [here](https://huggingface.co/philschmid/bert-base-uncased-2022-habana-test-6/tensorboard?scroll=1#scalars)). We re-ran the same script with the same hyperparameters on Gaudi2 and got a total training time of 2 hours and 55 minutes (see the logs [here](https://huggingface.co/regisss/bert-pretraining-gaudi-2-batch-size-32/tensorboard?scroll=1#scalars)). **That makes a 3.04x speedup on Gaudi2 without changing anything.** Since Gaudi2 has roughly 3 times more memory per device compared to first-gen Gaudi, it is possible to leverage this greater capacity to use bigger batches. This will give HPUs more work to do and will also enable developers to try a range of hyperparameter values that was not reachable with first-gen Gaudi. With a batch size of 64 samples per device (512 in total), 20k steps were enough to reach a loss convergence similar to the 65k steps of the previous runs. That makes a total training time of 1 hour and 33 minutes (see the logs [here](https://huggingface.co/regisss/bert-pretraining-gaudi-2-batch-size-64/tensorboard?scroll=1#scalars)). The throughput is 1.16x higher with this configuration, and the larger batch size strongly accelerates convergence. **Overall, with Gaudi2, the total training time is reduced by a factor of 5.75 and the throughput is 3.53x higher compared to first-gen Gaudi**. **Gaudi2 also offers a speedup over A100**: 1580.2 samples/s versus 981.6 for a batch size of 32 and 1835.8 samples/s versus 1082.6 for a batch size of 64, which is consistent with the 1.8x speedup [announced by Habana](https://habana.ai/training/gaudi2/) on phase 1 of BERT pre-training with a batch size of 64. The following table displays the throughputs we got for first-gen Gaudi, Gaudi2 and Nvidia A100 80GB GPUs: <center> | | First-gen Gaudi (BS=32) | Gaudi2 (BS=32) | Gaudi2 (BS=64) | A100 (BS=32) | A100 (BS=64) | |:-:|:
[ [ "implementation", "benchmarks", "optimization", "efficient_computing" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "benchmarks", "optimization", "efficient_computing", "implementation" ]
null
null
464e154f-2329-41af-b2f0-95f02b06bc97
completed
2025-01-16T03:09:54.388985
2025-01-16T14:20:56.634159
88029ba2-2664-4fb8-adc4-8a39bcf1b865
Welcome PaddlePaddle to the Hugging Face Hub
PaddlePaddle
paddlepaddle.md
We are happy to share an open source collaboration between Hugging Face and [PaddlePaddle](https://www.paddlepaddle.org.cn/en) on a shared mission to advance and democratize AI through open source! First open-sourced by Baidu in 2016, PaddlePaddle enables developers of all skill levels to adopt and implement Deep Learning at scale. As of Q4 2022, PaddlePaddle is being used by more than 5.35 million developers and 200,000 enterprises, ranking first in terms of market share among Deep Learning platforms in China. PaddlePaddle features popular open source repositories such as the [Paddle](https://github.com/PaddlePaddle/Paddle) Deep Learning Framework, model libraries across different modalities (e.g. [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP), [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech)), [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) for model compression, [FastDeploy](https://github.com/PaddlePaddle/FastDeploy) for model deployment and many more. ![thumbnail](assets/126_paddlepaddle/thumbnail.jpg) **With [PaddleNLP](https://huggingface.co/docs/hub/paddlenlp) leading the way, PaddlePaddle will gradually integrate its libraries with the Hugging Face Hub.** You will soon be able to play with the full suite of awesome pre-trained PaddlePaddle models across text, image, audio, video and multi-modalities on the Hub! ## Find PaddlePaddle Models You can find all PaddlePaddle models on the Model Hub by filtering with the [PaddlePaddle library tag](https://huggingface.co/models?library=paddlepaddle). <p align="center"> <img src="assets/126_paddlepaddle/paddle_tag.png" alt="PaddlePaddle Tag"/> </p> There are already over 75 PaddlePaddle models on the Hub. As an example, you can find our multi-task Information Extraction model series [UIE](https://huggingface.co/PaddlePaddle/uie-base), State-of-the-Art Chinese Language Model [ERNIE 3.0 model series](https://huggingface.co/PaddlePaddle/ernie-3.0-nano-zh), novel document pre-training model [Ernie-Layout](https://huggingface.co/PaddlePaddle/ernie-layoutx-base-uncased) with layout knowledge enhancement in the whole workflow, and so on. You are also welcome to check out the [PaddlePaddle](https://huggingface.co/PaddlePaddle) org on the HuggingFace Hub. In addition to the above-mentioned models, you can also explore our Spaces, including our text-to-image [Ernie-ViLG](https://huggingface.co/spaces/PaddlePaddle/ERNIE-ViLG), cross-modal Information Extraction engine [UIE-X](https://huggingface.co/spaces/PaddlePaddle/UIE-X) and awesome multilingual OCR toolkit [PaddleOCR](https://huggingface.co/spaces/PaddlePaddle/PaddleOCR). ## Inference API and Widgets PaddlePaddle models are available through the [Inference API](https://huggingface.co/docs/hub/models-inference), which you can access through HTTP with cURL, Python’s requests library, or your preferred method for making network requests. ![inference_api](assets/126_paddlepaddle/inference_api.png) Models that support a [task](https://huggingface.co/tasks) are equipped with an interactive widget that allows you to play with the model directly in the browser. ![widget](assets/126_paddlepaddle/widget.png) ## Use Existing Models If you want to see how to load a specific model, you can click `Use in paddlenlp` (or other PaddlePaddle libraries in the future) and you will be given a working snippet to load it!
![snippet](assets/126_paddlepaddle/snippet.png) ## Share Models Depending on the PaddlePaddle library, you may be able to share your models by pushing to the Hub. For example, you can share PaddleNLP models by using the `save_to_hf_hub` method.
```python
from paddlenlp.transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("PaddlePaddle/ernie-3.0-base-zh", from_hf_hub=True)
model = AutoModelForMaskedLM.from_pretrained("PaddlePaddle/ernie-3.0-base-zh", from_hf_hub=True)

tokenizer.save_to_hf_hub(repo_id="<my_org_name>/<my_repo_name>")
model.save_to_hf_hub(repo_id="<my_org_name>/<my_repo_name>")
```
## Conclusion PaddlePaddle is an open source Deep Learning platform that originated from industrial practice and has been open-sourcing innovative and industry-grade projects since 2016. We are excited to join the Hub to share our work with the HuggingFace community and you can expect more fun and State-of-the-Art projects from us soon! To stay up to date with the latest news, you can follow us on Twitter at [@PaddlePaddle](https://twitter.com/PaddlePaddle).
[ [ "implementation", "community", "tools", "integration" ] ]
[ "2629e041-8c70-4026-8651-8bb91fd9749a" ]
[ "submitted" ]
[ "community", "integration", "tools", "implementation" ]
null
null