jonatasgrosman/wav2vec2-large-xlsr-53-english
Fine-tuned XLSR-53 large model for speech recognition in English

Fine-tuned facebook/wav2vec2-large-xlsr-53 on English using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16 kHz.
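If your recordings use a different sampling rate, resample them before inference. As a minimal illustration (not part of the original card), librosa can resample while loading; the file path below is a placeholder.

import librosa

# librosa resamples to the requested rate while loading;
# the model expects mono 16 kHz input.
speech_array, sampling_rate = librosa.load("/path/to/file.wav", sr=16_000)
assert sampling_rate == 16_000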
This model was fine-tuned thanks to GPU credits generously provided by OVHcloud.
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint

Usage

The model can be used directly (without a language model) as follows.

Using the HuggingSound library:

from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# Transcribe a batch of audio files in one call
transcriptions = model.transcribe(audio_paths)

Writing your own inference script:
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy CTC decoding: take the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
Reference | Prediction
"SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT
SIX | SIX
"ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL
DO YOU MEAN IT? | DO YOU MEAN IT
THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION
HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q
"I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY
NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING
SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER
GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD
Evaluation

To evaluate on mozilla-foundation/common_voice_6_0 with split test:

python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test
To evaluate on speech-recognition-community-v2/dev_data:

python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
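As an illustration of the metric such an evaluation reports, the following minimal sketch (not part of the original card) computes word error rate with the jiwer library; the reference/prediction pairs below are placeholders rather than real model output.

from jiwer import wer

# Hypothetical reference transcripts and model predictions;
# in practice these come from the test split and the model's decoder.
references = ["SHE'LL BE ALL RIGHT", "DO YOU MEAN IT"]
predictions = ["SHE'LL BE ALL RIGHT", "DO YOU MEAN IT"]

# jiwer aggregates the word error rate over all pairs.
print(f"WER: {wer(references, predictions):.2%}")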
Citation

If you want to cite this model you can use this:

@misc{grosman2021xlsr53-large-english,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {E}nglish},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
  year={2021}
}
bert-base-uncased
BERT base model (uncased)

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in the BERT paper (Devlin et al., 2018) and first released in the google-research/bert repository. This model is uncased: it does not make a difference between english and English.
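As a quick illustration (not part of the original card), the uncased tokenizer lowercases text during normalization, so both spellings map to the same token.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# The uncased tokenizer lowercases (and strips accents) before tokenizing,
# so "English" and "english" produce identical tokens.
print(tokenizer.tokenize("English"))  # ['english']
print(tokenizer.tokenize("english"))  # ['english']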
Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence (a minimal fill-mask sketch follows this list).

Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.
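To make the MLM objective concrete, here is a minimal fill-mask sketch (not part of the original card) using the transformers pipeline with this checkpoint; the example sentence is arbitrary.

from transformers import pipeline

# The pipeline does not mask anything itself; you supply the [MASK] token
# and the model predicts the most likely replacements.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    # Each prediction carries the filled-in token and its score.
    print(prediction["token_str"], round(prediction["score"], 3))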
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
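For instance, here is a minimal sketch (not part of the original card) of extracting such features with transformers; the input sentence is arbitrary.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual vector per token;
# these vectors can be fed to a downstream classifier.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)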
Model variations

BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified preprocessing with whole word masking replaced subpiece masking in follow-up work, with the release of two models. Another 24 smaller models were released afterward. The detailed release history can be found on the google-research/bert readme on GitHub.
Model | #params | Language
bert-base-uncased | 110M | English
bert-large-uncased | 340M | English
bert-base-cased | 110M | English
bert-large-cased | 340M | English
bert-base-chinese | 110M | Chinese
bert-base-multilingual-cased | 110M | Multiple
bert-large-uncased-whole-word-masking | 340M | English