Dataset columns (as shown in the dataset viewer):
- title: string (length 15–185)
- link: string (length 53–219)
- replies: int64 (0–43)
- views: int64 (18–25.9k)
- initial_post: string (length 4–20.5k)
- initial_post_date: string (length 20)
- responses: list (length 0–20)
Confidence Scores / Self-Training for Wav2Vec2 / CTC models
https://discuss.huggingface.co/t/confidence-scores-self-training-for-wav2vec2-ctc-models/17050
1
3,612
I started looking a bit into Confidence Scores / Self-Training for Speech Recognition for models like Wav2Vec2. The most reasonable way of doing so is on a per-word level. With the new `output_word_offsets=True` it's quite easy to retrieve the logit scores of the predicted words. E.g. one could do the following:

Import all necessary libraries and load model and processor:

```python
from transformers import AutoModelForCTC, AutoProcessor
from datasets import load_dataset
import datasets
import torch
import sys

model_id = "TODO: fill in"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
```

Load Librispeech dummy data:

```python
num_samples = 4
dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
samples = dataset[:num_samples]
audio_samples = [s["array"] for s in samples["audio"]]
sampling_rate = set([s["sampling_rate"] for s in samples["audio"]]).pop()
text_samples = samples["text"]
```

Predict the transcription with the model:

```python
# process to input_values
inputs = processor(audio_samples, return_tensors="pt", sampling_rate=sampling_rate, padding=True)

# forward inputs to model
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
```

Compute probabilities (log softmax here) of the predicted (argmax) ids:

```python
pred_ids = torch.argmax(logits, dim=-1)
scores = torch.nn.functional.log_softmax(logits, dim=-1)
# gather the score of the predicted token at each time step (gather over the vocab dim)
pred_scores = scores.gather(-1, pred_ids.unsqueeze(-1))[:, :, 0]
```

Retrieve the per-word probability, normalized over word length:

```python
output = processor.batch_decode(pred_ids, output_word_offsets=True)

# add confidence
def confidence_score(word_dict, index):
    probs = pred_scores[index, word_dict["start_offset"]: word_dict["end_offset"]]
    return round(torch.sum(probs).item() / len(probs), 4)

confidence_scores = []
for i in range(num_samples):
    confidence_scores.append({d["word"]: confidence_score(d, i) for d in output.word_offsets[i]})
```

Define the confidence score of a transcription as the minimum word probability:

```python
for i in range(num_samples):
    print(20 * "=" + f"Output {i}" + 20 * "=")
    print(text_samples[i])
    print(f"{' '.join(confidence_scores[i].keys())}: {min(confidence_scores[i].values())}")
    print("\n")
```

Cool, let's run this on the new data2vec audio models:

- facebook/data2vec-audio-base-10m · Hugging Face
- facebook/data2vec-audio-base-100h · Hugging Face
- facebook/data2vec-audio-base-960h · Hugging Face

It should be clear that the 960h model should have "more" confidence than the 100h model.
However, the outputs are as follows:

```text
facebook/data2vec-audio-base-10m
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APPOSELE OF MIDL CLASES AND WHE ER GLAD TO WELCOME HIS GASPLE: -0.5873
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTERE QUILTR'S MANER LES INTRESTING THAN HIS MATER: -0.4173
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELES IS THAT AT THIS FESTIVE CESON OF THE YEAR WITH CRISMIIS AND ROST BEF LOOMING BEFOR SEIMILIYS DRAWN FROM EATING ITS RESALTS OCARE MOST REDHILY TO MIND: -0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GREAVED DOUBTS WETHER SIR FREDRICK LATEN'S WORK IS RELY GRE AFTER ALL AND CAN DESCOVER IN IT BUT LITTLE OFE ROCKY ETHICA: -0.0006

facebook/data2vec-audio-base-100h
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -0.7656
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER: -0.5057
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE SIMILES DRAWN FROM EATING ITS RESULTS OCCUR MOST READILY TO MINE: -0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LAYTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHICA EH: -0.0

facebook/data2vec-audio-base-960h
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -0.938
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER RR: -0.6415
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE SIMILES DRAWN FROM EATING ITS RESULTS OCCUR MOST READILY TO MIND: 0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERIC LEYHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA: -0.0
```

Now, as can be seen, this doesn't seem to be too useful. Incorrect text is predicted with very high confidence by the 10m model, there is hardly any difference between the 960h and the 10m model, and there is no real separation between correctly and incorrectly predicted sentences.

There are a couple of questions I'm not sure about:
- Is it even possible to do confidence scoring without a language model for ASR?
- Should the minimum (lowest prob) of all words be taken as the confidence of the transcription, or the average?
- Should the word probability correspond to a length-normalized log-sum or not be normalized?
2022-04-21T10:57:24Z
[ { "date": "2022-04-21T11:14:14Z", "reply": "Using an LM in addition to Wav2Vec2 definitely seems to be better here! See Confidence Scores / Self-Training for Wav2Vec2 / CTC models With LM (PyCTCDecode)" } ]
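A minimal, self-contained sketch of the aggregation question the thread leaves open — minimum word score vs. average — given the per-word confidence dictionaries produced by the snippet above. The numbers here are made up for illustration:

```python
# Hypothetical per-word log-prob confidences, as produced by the snippet above.
confidence_scores = [
    {"MISTER": -0.12, "QUILTER": -0.59, "IS": -0.03, "THE": -0.01, "APOSTLE": -0.88},
]

def utterance_confidence(word_scores, strategy="min"):
    """Aggregate per-word scores into one utterance-level confidence."""
    values = list(word_scores.values())
    if strategy == "min":        # worst word dominates -> conservative estimate
        return min(values)
    if strategy == "mean":       # average over words -> smoother estimate
        return sum(values) / len(values)
    raise ValueError(f"unknown strategy: {strategy}")

for scores in confidence_scores:
    print("min :", round(utterance_confidence(scores, "min"), 4))
    print("mean:", round(utterance_confidence(scores, "mean"), 4))
```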
Text to Speech Alignment with Transformers
https://discuss.huggingface.co/t/text-to-speech-alignment-with-transformers/16166
2
4,783
Hi there, I have a large dataset of transcripts (without timestamps) and corresponding audio files (average length of one hour). My goal is to temporally align the transcripts with the corresponding audio files. Can anyone point me to resources, e.g., tutorials or Hugging Face models, that may help with the task? Are there any best practices for how to do it (without building an entire system from scratch)? My initial naive idea was to use an STT model to transcribe the audio (while recording timestamps) and then perform some kind of similarity search against the transcript to align the two. However, I feel this approach might be quite error-prone. I am happy for any kind of help/pointer. Simon
2022-03-28T14:00:56Z
[ { "date": "2022-04-19T13:10:30Z", "reply": "This task is called Forced Alignment and there are reasonably mature tools to do it with classical approaches. I'd suggest perusing forced-alignment · GitHub Topics · GitHub. If the accuracy of the classical methods isn't good enough for you, you can browse research papers on, say, Speech | Papers With Code" }, { "date": "2022-04-20T07:17:19Z", "reply": "Thank you so much for the reply! Currently, I'm starting to experiment with aeneas; however, I realize that the quality of my sound files is indeed very poor. Is it generally worthwhile to try to improve the sound quality, or would it be more fruitful to directly train/fine-tune a model to work with poorer sound quality end-to-end?" } ]
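For completeness, a rough sketch of the naive STT-then-match idea from the original post (not one of the suggested forced-alignment tools): transcribe fixed windows of audio, then fuzzy-match each window's hypothesis against the reference transcript with Python's difflib to get approximate anchor points. The window size and the toy hypotheses are placeholder assumptions:

```python
from difflib import SequenceMatcher

def align_windows(window_hyps, reference_words, window_seconds=30.0):
    """window_hyps: hypothesis strings, one per fixed-size audio window.
    Returns (start_time, end_time, matched_reference_span) per window."""
    ref = [w.lower() for w in reference_words]
    spans = []
    for i, hyp in enumerate(window_hyps):
        hyp_words = hyp.lower().split()
        matcher = SequenceMatcher(None, ref, hyp_words, autojunk=False)
        match = matcher.find_longest_match(0, len(ref), 0, len(hyp_words))
        start_t, end_t = i * window_seconds, (i + 1) * window_seconds
        spans.append((start_t, end_t, " ".join(ref[match.a: match.a + match.size])))
    return spans

# toy usage with made-up ASR output and reference text
reference = "the quick brown fox jumps over the lazy dog near the river bank".split()
hypotheses = ["the quick brown fox jumps", "over the lazy dog near the river"]
for span in align_windows(hypotheses, reference):
    print(span)
```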
Projected gradient descent on autoregressive models
https://discuss.huggingface.co/t/projected-gradient-descent-on-autoregressive-models/16975
0
820
I am doing text summarization together with a trained classifier (that assigns a label to the generated summary), and I would like to find how far away certain classifier labels are from each other by using adversarial attacks and visualizing the result in the summarizer's encoder embedding space. Is there any part of the Hugging Face library that supports, e.g., projected gradient descent through the (autoregressive) decoder onto the encoder embeddings?
2022-04-19T14:22:44Z
[]
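There is no built-in PGD utility in transformers as far as I know; below is a generic, hedged sketch of projected gradient descent on a continuous embedding tensor against an arbitrary differentiable classifier. The classifier here is a stand-in linear module, not a real summarizer-plus-classifier pipeline:

```python
import torch
import torch.nn as nn

def pgd_on_embeddings(embeddings, classifier, target_label, steps=10, step_size=0.01, epsilon=0.1):
    """Perturb `embeddings` within an L-inf ball of radius epsilon,
    pushing the classifier toward `target_label` (flip the sign for an untargeted attack)."""
    original = embeddings.detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(classifier(adv), target_label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step_size * grad.sign()                             # targeted step
            adv = original + (adv - original).clamp(-epsilon, epsilon)      # project back into the ball
    return adv.detach()

# toy usage: batch of 2 pooled "encoder embeddings", 3-class classifier
classifier = nn.Linear(16, 3)
embeddings = torch.randn(2, 16)
target = torch.tensor([1, 2])
adv_embeddings = pgd_on_embeddings(embeddings, classifier, target)
print((adv_embeddings - embeddings).abs().max())
```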
Compute metric on Dev
https://discuss.huggingface.co/t/compute-metric-on-dev/16836
1
797
Hello, I was wondering why the notebooks compute BLEU or ROUGE on the dev data and not on the test data, like in this notebook?
2022-04-14T14:52:20Z
[ { "date": "2022-04-15T10:15:03Z", "reply": "It is common in deep learning to train on a train set, and monitor the loss of a validation set every x epochs or steps, as is done here. This way, you get an intuition of the model’s performance, particularly whether it is overfitting. If the training loss is very low but the validation is high, your model is overfitting. So the dev data here does not give you official test results, since the model is a bit biased towards that dev data: you keep training as long as the training loss and validation loss decreases.To probe the final performance of your model, you still test it on a held-out set that has never been used before (the test set). In Tensorflow, AFAIK, you can then evaluate on this unseen test set withmodel.evaluate." } ]
Text similarity not by cosine similarity
https://discuss.huggingface.co/t/text-similarity-not-by-cosine-similarity/8766
3
4,463
Hi all, I have a question. I have a dataset containing questions and answers from a specific domain. My goal is to find the X most similar questions to a query. For example:

user: "What is python?"
dataset questions: ["What is python?", "What does python means?", "Is it python?", "Is it a python snake?", "Is it a python?"]

I tried encoding the questions into embeddings and calculating the cosine similarity, but the problem is that it gives a high similarity score for "Is it python?" against the query "What is python?", which clearly does not have the same meaning, while "What does python means?" gets a very low score compared to "Is it python?". Any suggestions on how I can overcome this problem? Maybe new approaches…
2021-07-28T13:06:30Z
[ { "date": "2021-07-29T01:55:59Z", "reply": "if cosine similarity is not giving you the results you want, you could try a different metric like euclidean / manhattan / minkowski distance or jaccard similarity.alternatively you could try changing the embedding model to see if that improves the comparisons" }, { "date": "2021-10-29T14:29:27Z", "reply": "What you are trying to do is clearly one of theGLUE tasks:3.2 SIMILARITY AND PARAPHRASE TASKSMRPC The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whetherthe sentences in the pair are semantically equivalent. Because the classes are imbalanced (68%positive), we follow common practice and report both accuracy and F1 score.QQP The Quora Question Pairs2 dataset is a collection of question pairs from the communityquestion-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. As in MRPC, the class distribution in QQP is unbalanced (63% negative), so wereport both accuracy and F1 score. We use the standard test set, for which we obtained private labelsfrom the authors. We observe that the test set has a different label distribution than the training set.STS-B The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentencepairs drawn from news headlines, video and image captions, and natural language inference data.Each pair is human-annotated with a similarity score from 1 to 5; the task is to predict these scores.Follow common practice, we evaluate using Pearson and Spearman correlation coefficients.What I suggest you to do, is to follow thefollowing tutorialto pre-train your model on the dataset that is the most similar to what you are trying to do (ex: GLUE, QQP instead of GLUE MRCP in the tutorial)There is even available Leaderboard where you can find which model perform the best on QQP." }, { "date": "2022-04-12T05:52:17Z", "reply": "These are not definitive solutions but experiments I’ve tried with vectorized representations and I’ve had some success:Definitely try Dot product. In my limited experience dot product has always given superior results to other metrics. There are reasons why metrics like euclidean might fail, things get freaky and weird when we’re extending our 3-dimensional intuition to 100 dimensions. However, experimentation is going to make you wiser.Refer to the first WordVectors paper where they do experiments like adding and subtracting vectors like concepts. For example, v(king) - v(man) + v(woman) is close to v(queen). These experiments are not perfect and I remember reading a paper stating a proposition that this kind of adding and subtracting is flawed which might have some merit. However, they’ve worked in a limited capacity for me. So, experiments like:v(What is python?) - v(What) + v(How) might lead you near places where python questions with How.v(x) refers to the vector of x" } ]
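A small sketch of the kind of comparison suggested in the replies: score the candidate questions with cosine similarity and dot product side by side, here using the sentence-transformers library with an example model name (the model choice is an assumption, not a recommendation from the thread):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, swap in your own

query = "What is python?"
candidates = ["What is python?", "What does python means?", "Is it python?",
              "Is it a python snake?", "Is it a python?"]

q_emb = model.encode(query, convert_to_tensor=True)
c_emb = model.encode(candidates, convert_to_tensor=True)

cos = util.cos_sim(q_emb, c_emb)[0]    # length-normalized cosine similarity
dot = util.dot_score(q_emb, c_emb)[0]  # unnormalized dot product

for text, c, d in zip(candidates, cos.tolist(), dot.tolist()):
    print(f"{text!r:35} cosine={c:.3f} dot={d:.3f}")
```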
Aggregate encoder states in encoder-decoder models for long sequences?
https://discuss.huggingface.co/t/aggregate-encoder-states-in-encoder-decoder-models-for-long-sequences/16625
0
723
Hi. I would like to train a text-to-text QA model for long documents. I was wondering if anyone has seen success in aggregating the encoder states of a long document in any way (e.g. pooling) before passing them to the decoder, similar to the sliding-window technique used for, e.g., classification with BERT. I'm well aware of models like the Longformer, etc., but I'm just wondering if this approach has any utility, and if not, why not?
2022-04-08T18:39:37Z
[]
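A hedged, pure-PyTorch sketch of the idea being asked about: run the encoder over fixed-size chunks of a long input, mean-pool the hidden states, and hand the compressed states to the decoder as its cross-attention memory. The shapes, the toy encoder, and the choice of mean pooling are illustrative assumptions, not an endorsed recipe:

```python
import torch
from types import SimpleNamespace

def pooled_encoder_states(encoder, input_ids, chunk_len=512, pool_every=4):
    """Encode a long sequence chunk by chunk, then mean-pool every `pool_every`
    encoder states so the decoder cross-attends over a much shorter memory."""
    chunks = input_ids.split(chunk_len, dim=1)
    states = [encoder(c).last_hidden_state for c in chunks]   # [(B, <=chunk_len, H), ...]
    hidden = torch.cat(states, dim=1)                         # (B, T, H)
    B, T, H = hidden.shape
    T_trim = (T // pool_every) * pool_every
    pooled = hidden[:, :T_trim].reshape(B, T_trim // pool_every, pool_every, H).mean(dim=2)
    return pooled                                             # (B, T // pool_every, H)

# toy stand-in encoder so the sketch runs without downloading a model
class ToyEncoder(torch.nn.Module):
    def __init__(self, vocab=100, hidden=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, hidden)
    def forward(self, ids):
        return SimpleNamespace(last_hidden_state=self.emb(ids))

enc = ToyEncoder()
ids = torch.randint(0, 100, (2, 2048))
print(pooled_encoder_states(enc, ids).shape)  # torch.Size([2, 512, 32])
```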
Why do the commit histories of Hugging Face's datasets and models appear recent? Weren't these datasets and models uploaded a while ago?
https://discuss.huggingface.co/t/why-do-the-commit-histories-of-hugging-faces-datasets-and-models-appear-recent-werent-these-datasets-and-models-uploaded-a-while-ago/16595
2
913
I've been going over Hugging Face for research purposes; we've been looking at several datasets and models. We started doing this some time ago, last year during September and October. However, I recently checked again, and it seems like these commit histories have changed. For instance, we looked over gem back in October of last year, but it now shows that its commit history started on Jan 25. I am using Hugging Face as a use case for research that I want to publish eventually, and people will ask questions about this, so I was wondering if someone could offer an explanation.
2022-04-07T18:32:42Z
[ { "date": "2022-04-08T07:25:52Z", "reply": "Many times, people do changes in the repositories metadata to help with discoverability or consistency across models/datasets. Other times, the model card or dataset sheet are extended/improved. For example, the last change of GEM -Update files from the datasets library (from 1.17.0) · gem at d5a0674- is just fixing a typo in the dataset metadata" }, { "date": "2022-04-08T15:50:37Z", "reply": "I see, but what does that imply for the commit histories? Do these updates mean that the commit histories are replaced?For the record, I just want to know in order to log this information as a justification for the procedures we’re taking. We’ve been observing the commit histories and want to be able to explain why they may change over time." } ]
Incorporating structural information in a Transformer?
https://discuss.huggingface.co/t/incorporating-structural-information-in-a-transformer/16554
0
717
For a Neural Machine Translation (NMT) task, my input data has relational information. This relation could be modelled using a graph structure. Some researchers have tried to adapt Transformers to graph data; for example, here is one paper. I want to use a Transformer, but the challenge is how to embed the structural information. Is there any open-source artefact for a Relational Transformer that I can use out of the box?
2022-04-06T19:50:25Z
[]
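As an illustration only (not a pointer to an existing library), one common way to inject relation information is to add a learned bias per relation label to the attention scores, in the spirit of relation-aware self-attention; this is a minimal, assumed single-head sketch in plain PyTorch:

```python
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    """Single-head attention where a learned per-relation bias is added to the scores."""
    def __init__(self, hidden, num_relations):
        super().__init__()
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, hidden)
        self.rel_bias = nn.Embedding(num_relations, 1)  # one scalar bias per relation type

    def forward(self, x, relations):
        # x: (B, T, H); relations: (B, T, T) integer relation label between token pairs
        scores = self.q(x) @ self.k(x).transpose(-1, -2) / x.size(-1) ** 0.5  # (B, T, T)
        scores = scores + self.rel_bias(relations).squeeze(-1)                # add relation bias
        return torch.softmax(scores, dim=-1) @ self.v(x)

attn = RelationAwareAttention(hidden=32, num_relations=5)
x = torch.randn(2, 10, 32)
relations = torch.randint(0, 5, (2, 10, 10))
print(attn(x, relations).shape)  # torch.Size([2, 10, 32])
```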
Can you use both copy mechanism and BPE for a NMT task?
https://discuss.huggingface.co/t/can-you-use-both-copy-mechanism-and-bpe-for-a-nmt-task/16531
0
711
I read that to alleviate the problem of Out of Vocabulary (OOV) tokens, there are two techniques:

1. BPE
2. Copy mechanism

It appears to me that they are two orthogonal approaches. Can we combine the two, i.e., use both the copy mechanism and BPE? Is there any work out there that combines the two? I can't find any.
2022-04-06T11:44:44Z
[]
Is there an easy way to apply layer-wise decaying learning rate in huggingface trainer for RobertaMaskedForLM?
https://discuss.huggingface.co/t/is-there-an-easy-way-to-apply-layer-wise-decaying-learning-rate-in-huggingface-trainer-for-robertamaskedforlm/1599
3
2,839
I am pre-training RobertaMaskedForLM on my own custom dataset. I want to implement the layer-wise learning rate decay given in https://github.com/aws-health-ai/multi_domain_lm#learning-rate-control, corresponding to the paper "An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training". Is there an easy way to make the learning rate decay with layer depth towards the input using transformers.Trainer?
2020-10-17T09:31:45Z
[ { "date": "2020-11-14T04:14:27Z", "reply": "I have the same question" }, { "date": "2020-11-16T13:57:46Z", "reply": "There is nothing in the lib for this, but you can pass your own optimizer and scheduler." }, { "date": "2022-04-05T09:01:27Z", "reply": "Hello, I have the same question. I'm fine-tuning RoBERTa large for a RE (Relation Extraction) task, and the paper I referenced used layer decay. It seems like I have to write my own optimizer and scheduler for layer-wise learning rate decay. Could you tell me how you implemented your own scheduler?" } ]
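Following the suggestion in the replies to pass a custom optimizer/scheduler, here is a hedged sketch of building per-layer parameter groups whose learning rate decays geometrically from the top encoder layer down to the embeddings, handed to Trainer via its `optimizers` argument. The attribute paths assume a standard RoBERTa masked-LM checkpoint; adapt them to your model:

```python
import torch
from transformers import (AutoModelForMaskedLM, Trainer, TrainingArguments,
                          get_linear_schedule_with_warmup)

model = AutoModelForMaskedLM.from_pretrained("roberta-base")

def layerwise_param_groups(model, base_lr=5e-5, decay=0.9):
    """Top encoder layer gets base_lr; each layer below gets base_lr * decay**depth."""
    groups = [{"params": model.lm_head.parameters(), "lr": base_lr}]
    layers = list(model.roberta.encoder.layer)
    for depth, layer in enumerate(reversed(layers)):
        groups.append({"params": layer.parameters(), "lr": base_lr * decay ** (depth + 1)})
    groups.append({"params": model.roberta.embeddings.parameters(),
                   "lr": base_lr * decay ** (len(layers) + 1)})
    return groups

num_training_steps = 10_000  # placeholder value
optimizer = torch.optim.AdamW(layerwise_param_groups(model))
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=500,
                                            num_training_steps=num_training_steps)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    # train_dataset=...,  # supply your dataset
    optimizers=(optimizer, scheduler),  # Trainer uses these instead of its defaults
)
```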
The discussion is about entity recognition and coreference resolution
https://discuss.huggingface.co/t/the-discussion-is-about-entity-recognition-and-corefrence-resolution/16068
0
715
Input = "I need 3 chairs in each class and there are 10 classes, so I need 30 chairs"
Output = "30 chairs, 10 classes"

I have used coreference resolution and entity recognition, but I am unable to simplify the statement enough for the input to become "I need 30 chairs and 10 classrooms", or to find a similar linguistic approach that would help me use entity recognition to solve this problem statement.
2022-03-25T10:54:24Z
[]
GPT2 for QA Pair Generation
https://discuss.huggingface.co/t/gpt2-for-qa-pair-generation/759
9
8,560
I was wondering if it were possible to somehow train GPT2 to generate question-answer pairs in a particular domain?
2020-08-18T21:59:56Z
[ { "date": "2020-08-19T09:08:31Z", "reply": "I’ve tried this with seq2seq models. I have worked on qa pair generation (separately) using T5 with descent results. You can find ithere.One way we can do this with GPT-2 is prepare our input like thisOur context is42 is the answer to life, the universe and everything, answer is42and target question isWhat is the answer to life, universe and everything ?Theninput text:context: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42and prepare the attention mask such that, there will be no attention fromquestion: ...part, so model won’t look into future tokens and calculate loss only on thequestion: ...part. And it inference time we will feed only the context part and ask the model to generate the question.This just one one way I can think of the of my mind. Feel free to correct me if this is wrong." }, { "date": "2020-08-19T18:37:53Z", "reply": "@valhallaThanks for your response. That’s an interesting approach! Does that still require humans to create training “context” strings for gpt2?" }, { "date": "2020-10-12T19:51:20Z", "reply": "@valhallaIf I understand this correctly:The input text will look likecontext: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42Mask out thequestionpart so the new text will look likecontext: 42 is the answer to life, the universe and everything. <BIG MASK> answer: 42That is what gets fed as input text into the GPT2 modelDoes this mean I define thelabelsinto the model as the text that is masked?" }, { "date": "2020-10-13T07:25:40Z", "reply": "By mask, I meantattention_mask, theattention_maskshould be zero on the text you want to predict, so the model won’t peek into future.So if you want to generate question and answer, then the question and answer tokens should have 0in attention mask." }, { "date": "2020-10-13T11:19:41Z", "reply": "Ah yes, sorry for my misunderstanding. So we mask out the parts we want to predict by setting theattention_maskof those tokens to 0.With these tokens masked inattention_mask, do we then pass it and the input string to GPT2 and train it with the language model head with no labels?" }, { "date": "2020-10-13T15:06:14Z", "reply": "You’ll still need to passlabelsfor training.Training will be same as training any GPT-2 model, only difference is theattention_mask" }, { "date": "2020-10-13T15:53:54Z", "reply": "If I only wanted to generate questions, would I set theattention_maskfor those tokens to 0 and use their text as thelabels? Something like:from transformers import GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\ndef my_data_collator(text_str):\n encoded_results = tokenizer(text_str, padding=True, truncation=True, return_tensors='pt',\n return_attention_mask=True)\n enncoded_results['attention_mask'] = set_my_attention_mask(encoded_results) #function to set attention mask to 0 on tokens in the question:... part of text_str\n label_ids = get_my_label_str(encoded_results['input_ids']) #function to return list of token ids for question:... 
part of text_str\n\n batch = {}\n batch['input_ids'] = encoded_results['input_ids']\n batch['past'] = None\n batch['attention_mask'] = encoded_results['attention_mask']\n batch['position_ids'] = None\n batch['head_mask'] = None\n batch['inputs_embeds'] = None\n batch['labels'] = label_ids\n batch['use_cache'] = True\n return batch\n\ntext_str = 'context: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42'Andbatchwould get passed to aGPT2LMHeadModel?" }, { "date": "2020-10-13T16:09:11Z", "reply": "This seems correct. One more thing to add, you can calculate loss only on thequestion: ...part.To do this setlabelsto -100 for tokens before thequestion:part, so cross entropy will ignore it.Also you won’t need to explicitly set some arguments (position_ids,head_masketc) toNone.They are by defaultNoneso it’s okay if don’t pass them. Will make the code more cleaner." }, { "date": "2022-03-23T17:27:21Z", "reply": "@valhallaif we set the context labels to -100, this will make the model ignore the context while training. In other words, the generation of the questions won’t be based context-based. Am I right?" } ]
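To make the recipe discussed above concrete, here is a hedged sketch (not the exact code from the thread) of preparing a single GPT-2 training example where the loss is computed only on the `question: ... answer: ...` part, by setting the context labels to -100; the example string and the two-part split are illustrative:

```python
import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

context = "context: 42 is the answer to life, the universe and everything."
question = " question: What is the answer to life, universe and everything ? answer: 42"

ctx_ids = tokenizer(context, return_tensors="pt")["input_ids"]
q_ids = tokenizer(question, return_tensors="pt")["input_ids"]

input_ids = torch.cat([ctx_ids, q_ids], dim=1)
labels = input_ids.clone()
labels[:, : ctx_ids.size(1)] = -100          # loss is ignored on the context part
attention_mask = torch.ones_like(input_ids)  # the model still attends to the context

batch = {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
print(batch["labels"])
```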
Converting Test Case Description into Test case Steps
https://discuss.huggingface.co/t/converting-test-case-description-into-test-case-steps/15332
0
776
Hello everyone, I am looking for a model or an approach that can help us convert a test case scenario description into test case steps for web functional testing. This will help a user follow the instructions in the test steps in order to execute a test case for the web.

For example:

Test Case: Verify that Feature creation is available within Portfolio Items on Rally

Test Description (Input):
1. I log in to www.rally.com, enter username and password, and click on the submit button so that it takes me to the Dashboard page
2. I click on Epic Delivery and select Skynet
3. I click on Portfolio and select Portfolio Items so that the Features table is displayed

Output:

| No | Test Steps | Validation Steps |
|----|------------|------------------|
| 1 | Login to www.rally.com | |
| 2 | Enter Username | |
| 3 | Enter Password | |
| 4 | Click on submit button | Dashboard Page is displayed |
| 5 | Click on Epic Delivery | |
| 6 | Select Skynet | |
| 7 | Click on Portfolio | |
| 8 | Select Portfolio Items | Features table is displayed |

Please point me in some direction where I'll be able to achieve some results. Thanks
2022-03-04T02:43:46Z
[]
Best Pre-training Strategy
https://discuss.huggingface.co/t/best-pre-training-strategy/15307
0
744
Hey community, I hope your models are converging fast! I'm trying to pre-train a BERT model on short query sentences/words, and I'm wondering what's the best pre-training strategy to adopt in this situation? Thanks in advance.
2022-03-03T12:09:18Z
[]
Relative Position Representation/Encoding for Transformer
https://discuss.huggingface.co/t/relative-position-representation-encoding-for-transformer/15018
0
1,904
In the GPT-NeoX-20B: An Open-Source Autoregressive Language Model paper, why did the authors state that rotary embeddings are a form of static relative positional embeddings?

In "How Self-Attention with Relative Position Representations works | by ___ | Medium", could anyone explain the rationale behind the values of the lookup indices after the 3rd element all being 6?

What is the actual purpose of the skewing mechanism?
2022-02-22T08:45:31Z
[]
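On the second question, the usual explanation is that relative distances are clipped to a maximum value k, so every token further than k positions away shares the same embedding index; a small assumed demonstration (k = 3 here, so indices saturate at 2k = 6 on one side and at 0 on the other):

```python
def relative_position_index(i, j, k=3):
    """Clipped relative position j - i, shifted to be non-negative: range [0, 2k]."""
    return max(-k, min(k, j - i)) + k

# lookup indices for query position i = 0 over key positions j = 0..9
print([relative_position_index(0, j) for j in range(10)])
# [3, 4, 5, 6, 6, 6, 6, 6, 6, 6]  -> everything beyond distance k is clipped to 6
```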
How to find an idea for an academic thesis?
https://discuss.huggingface.co/t/how-find-idea-for-academic-thesis/14933
2
873
How can I find ideas for NLP tasks for a graduate thesis?
2022-02-19T21:57:09Z
[ { "date": "2022-02-19T22:23:57Z", "reply": "HelloThis is a great question!I think the questions you should ask yourself, in order of precedence, are:What interests you in NLP? Is there a question that interests you that you couldn’t find a decent answer for in the literature? (Semantic Scholar is great if you’re looking to browse through papers)Does your adviser have any interesting ideas?If you are part of an NLP lab, what are your lab-mates working on? Is there a part of their research you can expand?If you know a language other than English, can you create a resource and model for a task in that specific language?Can you continue someone else’s work?Given the above, I think the most important thing is: your research should be interesting and fun. You want a subject that you’ll get up in the morning and say “I can’t wait to solve this already!” Yes, it will have its frustrating moments, when things don’t quite work, but it’s part of the journey. If your thesis subject bores you, it’s time to change the subject.Hope this helps a bit, and good luck with your thesis!" }, { "date": "2022-02-19T22:36:00Z", "reply": "Yes, and those are very good questions. I like this:Given the above, I think the most important thing is: your research should be interesting and fun. You want a subject that you’ll get up in the morning and say “I can’t wait to solve this already!” Yes, it will have its frustrating moments, when things don’t quite work, but it’s part of the journey. If your thesis subject bores you, it’s time to change the subject.The thing is that I am looking for trends to follow them. As in our lab, I am the only one interested in NLP.Now, I am studying some review papers to understand the area or maybe trends but needed to find the pioneer labs to follow them." } ]
Extractive oracle
https://discuss.huggingface.co/t/extractive-oracle/14548
0
810
Is there any official script for an extractive oracle using Hugging Face's implementation of ROUGE? An extractive oracle extracts from the source the N sentences that maximize ROUGE-2 (typically). For example, this script computes such an extractive oracle. However, since it uses a different implementation of ROUGE, it might not be completely in line with my other experiments (which use the HF implementation). Thanks!
2022-02-09T12:37:40Z
[]
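I'm not aware of an official script, but a greedy extractive oracle is short to write; this hedged sketch uses the rouge-score package (the backend the HF ROUGE metric wraps) to greedily pick the N source sentences that most improve ROUGE-2 against the reference:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)

def greedy_oracle(source_sentences, reference, n=3):
    """Greedily select up to n sentences maximizing ROUGE-2 F1 against the reference."""
    selected, best_score = [], 0.0
    remaining = list(source_sentences)
    for _ in range(n):
        best_sent, best_gain = None, 0.0
        for sent in remaining:
            candidate = " ".join(selected + [sent])
            score = scorer.score(reference, candidate)["rouge2"].fmeasure
            if score - best_score > best_gain:
                best_sent, best_gain = sent, score - best_score
        if best_sent is None:          # no sentence improves the score any further
            break
        selected.append(best_sent)
        remaining.remove(best_sent)
        best_score += best_gain
    return selected, best_score

source = ["The cat sat on the mat.", "Dogs bark loudly at night.", "The mat was red and soft."]
reference = "The cat sat on a soft red mat."
print(greedy_oracle(source, reference, n=2))
```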
A Survey to Understand Challenges of Deploying Text Classification
https://discuss.huggingface.co/t/a-survey-to-understand-challenges-of-deploying-text-classification/14345
2
942
Hello everyone,As more and more machine learning libraries are developed, it becomes much easier to build a text classifier. However, there are still a lot of challenges, ranging from collecting the training data, achieving high accuracy, making the classification fair for different groups of users, defending against malicious input, etc. As a group of researchers from MIT, we are curious about what are the challenges for industrial practitioners currently having to deploy text classifiers. If you have experience in deploying text classifiers, I wish you can spend 15-20 minutes filling out this survey to help us understand the challenges. You will also enter a lottery for a $25 gift card.Link to the SurveyWhy should you participate?Have you ever encountered a problem when deploying a text classifier, and could not find a good solution? We believe that there are common problems in the deployment process, while some of them could have a better solution. Our research is to understand the actual challenges in the deployment of text classifiers, and to establish connections between these challenges and future academic research. We will summarize the results into a position paper to call on researchers’ attention to solving problems in the practical deployment.
2022-02-02T18:36:18Z
[ { "date": "2022-02-02T21:12:23Z", "reply": "Hey, nice survey. I just filled it out.What I was missing, though, is more information about the researchers behind this survey, likeMIT, AI Lab XYZ and the names of a couple of people. I think adding this information to the survey would make it much more trustworthy and likely that people fill it out.Also, when will you share the results of the survey?" }, { "date": "2022-02-08T19:51:00Z", "reply": "Hi,Thank you for your quick response. I’m Lei Xu fromMIT Data to AI Lab. This survey is part of my PhD research on deployable and robust text classification.About the timeline, it highly depends on how many responses we can collect. We are targeting at summarizing the results into a research paper in a three-month timeline. I’ll keep you updated." } ]
Question Answering model on mathematical domain for the greek language
https://discuss.huggingface.co/t/question-answering-model-on-mathematical-domain-for-the-greek-language/14300
0
809
Hello everyone. I want to build a chatbot for my students to answer mathematical questions in the Greek language. What I want to use is a question answering BERT model or sentence-pair similarity. After trying various multilingual models pretrained on closed-domain question answering, I didn't have any luck, mainly because the text has specific mathematical terminology (complementary angles, supplementary angles, etc.). I have found a Greek-language BERT model, nlpaueb/bert-base-greek-uncased-v1 · Hugging Face, which was trained on the Greek part of Wikipedia. Should I use this model, fine-tune it on Greek Wikipedia articles containing mathematical text, and then train it for the question answering task? And if so, does anyone know a question answering dataset for the Greek language like the SQuAD dataset? If my understanding is correct, auto-translating the SQuAD dataset won't give good results, since after translation the starting position of the answer may have changed. I would appreciate it if someone could give me some guidelines to follow.
2022-02-01T13:33:40Z
[]
Finetuning German BERT for QA on biomedical domain
https://discuss.huggingface.co/t/finetuning-german-bert-for-qa-on-biomedical-domain/500
2
1,002
Hello there and thank you very much for this wonderful work. I am relatively new to this field, so please bear with my amateur question. I want to perform question-answering on a German Biomedical text. From what I understand up to now, I need to fine-tune German BERT on biomedical QA datasets. Is there any script/pipeline that I should be using for this?Thank you very much in advance.
2020-07-28T09:01:21Z
[ { "date": "2020-07-28T13:07:49Z", "reply": "There is an example script for fine-tuning a model on question answering here, hope it can help!" }, { "date": "2022-01-30T06:36:45Z", "reply": "Here's the updated link for the QA examples" } ]
[Suggestions and Guidance]Finetuning Bert models for Next word Prediction
https://discuss.huggingface.co/t/suggestions-and-guidance-finetuning-bert-models-for-next-word-prediction/14043
4
4,695
Problem statement: produce a next-word prediction model for legal text. The aim is to build an autocomplete model which will make use of the existing typed text as well as, possibly, a concatenation of vectors from prior clauses/paragraphs.

Current approach: Because BERT-based models are trained with masked language modeling, pretrained models such as LegalBert did not produce good accuracy for next-word prediction when the word to be predicted was marked as [MASK]. Here is an example sentence, "use of [MASK]", where "marked" is the next word to be predicted in place of the "[MASK]" token. (Note that there would be no words present after the mask token, only before it.) I am currently approaching the problem as a SequenceClassification problem where the labels are the token ids of the words to be predicted next. I will also attempt to fine-tune GPT-2 on the legal text using run_clm.py from the Hugging Face examples directory.

Is there a better way to approach this next-word prediction problem? Any suggestions and guidance would be welcome. Thank you in advance.
2022-01-24T11:15:47Z
[ { "date": "2022-01-24T13:02:21Z", "reply": "Hi Sumanth! I believe you are already on the right track by fine-tuning GPT-2. The difference is that GPT was trained using causal/autoregressive attention, which means GPT is specifically trained to predict the next word without having access to the words to the right of the token being predicted (unlike BERT). The different models and their architectures are depicted in a chart (image attachment in the original reply). Long story short - you should see better results with GPT-2. Let us know how it goes. Cheers, Heiko" }, { "date": "2022-01-25T15:51:15Z", "reply": "Hey, thanks for the prompt reply. Will focus my attempts more on autoregressive models." }, { "date": "2022-01-26T13:44:33Z", "reply": "@marshmellow77 a question. Is there a way to fine-tune and use T5 or BigBird for this next word prediction task? Unable to find tutorials for using these models for next word prediction." }, { "date": "2022-01-26T15:11:48Z", "reply": "Yes, and it is actually pretty easy thanks to a script provided by Hugging Face: transformers/run_clm.py at master · huggingface/transformers · GitHub. You can use this script to finetune models for causal language modeling (i.e. next word prediction) on a text file or a dataset." } ]
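For completeness, a hedged sketch of using a (possibly fine-tuned) causal LM for the autocomplete use case itself — ranking next-token suggestions from the model's output distribution. The checkpoint name is a placeholder for whatever run_clm.py produces:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "gpt2"  # placeholder: point this at your fine-tuned legal-text model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

prompt = "use of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]                    # distribution over the next token
    top = torch.topk(torch.softmax(logits, dim=-1), k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>15}  p={prob:.3f}")
```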
Suggestions for an open source tagging tool to build custom LayoutLMv2 datasets
https://discuss.huggingface.co/t/suggestions-for-an-open-source-tagging-tool-to-build-custom-layoutlmv2-datasets/14103
0
904
Any suggestions on an open-source tagging tool to get data into the format expected by the LayoutLMv2 model? I take it the standard format is similar to the FUNSD dataset.
2022-01-25T16:36:55Z
[]
Paper Notes: Deepspeed Mixture of Experts
https://discuss.huggingface.co/t/paper-notes-deepspeed-mixture-of-experts/13908
2
2,162
Summary

The legends over at DeepSpeed released a paper on scaling Mixture of Experts with a bunch of cool ideas. Since they will probably release some PyTorch code soon, I wanted to summarize/discuss the findings so that I learn them better.

I provide zero background on Mixture of Experts and assume knowledge of Top1 vs Top2 gating, for selfish/lazy reasons. Read the DeepSpeed blog post for background. I use the term "acc" to cover all types of metrics: validation perplexity, zero-shot accuracy, etc. I used @srush's trick of trying to read critically (to get your brain to think harder about other people's results), but I don't want to come off as too negative. I really enjoyed this paper and am excited to read the code!

The DeepSpeed team proposes:
(a) (sec 4.1) architectural modifications that reduce the number of experts without hurting acc.
(b) (sec 4.1) MoE-to-MoE distillation (instead of MoE-to-dense distillation like the FAIR paper (appendix Table 9) and the Switch paper).
(c) (sec 5) systems optimizations to make inference fast: improved communication collectives for MoE inference (hierarchical all2all), tutel-style single-device kernels to make routing tokens to experts fast, and 4D parallelism!?

I now cover architecture and distillation, and save systems optimizations for later because I don't fully understand them yet.

Architecture: Pyramid Residual MoE

This section is really well written. It contains two very nice ablations that motivated the changes.

Phenomenon 1: "Pyramid"

"We compare the performance of two different half-MoE architectures. More specifically, we put MoE layers in the first half of the model and leave the second half's layers identical to the dense model. We switch the MoE layers to the second half and use dense at the first half. The results show that deeper layers benefit more from large number of experts."

This also saves a ton of parameters: a 40% reduction at 1.3B dense-equivalent size, which will be useful at inference time.

Phenomenon 2: "Residual"

"we can achieve the benefit of using two experts per layer but still use one communication."

They frame this as trying to get the benefits of top2 routing without the costs. But basically, MoE layers become only half sparse: a dense FFN processes the input, as does 1 expert, and the results are added. Compared to top2, where 2 different sparse experts process the input, this is cheaper because there is less communication (you only need to send the input to 1 place instead of 2?). Note this does not improve acc compared to top2, just speed.

Putting it all together:

FAIR arch (see Table 1) (52B params):
- layers: top2 gating (each token gets routed to 2 experts)
- 512 experts at each MoE layer

DeepSpeed arch (31B params):
- layers: each token is processed by a dense FFN and 1 expert (same FLOPs as top2 gating with the same number of experts, I believe)
- pyramid: somewhere between 32 and 128 experts at each MoE layer - way fewer params!

In terms of acc (PIQA is the only overlapping evaluation), the 31B DeepSpeed model performs between the FAIR 52B and the FAIR 207B and was probably lower training cost than the 52B, even before all the systems optimizations in section 5. Nice! With the systems optimizations they say training is 5x faster than dense (to the same acc). The FAIR paper says "4x faster than dense", but measures TFLOPS, which makes the extra communication required for MoE appear to be free. So all in all this definitely seems like a better architecture. It would have been cool if Tables 2 and 4 had training cost and inference cost next to the few-shot performances (or 1 big joined table somewhere!).

Staged Knowledge Distillation: Mixture of Students (MoS)

Caveat before you read this section: in most distillation results, the student model is MUCH smaller than the teacher model, like half as large or so. Here, the student model is only 12.5% smaller than the teacher model (3 fewer layers, 4B fewer params (31B vs 27B)). They are able to lose very little performance, which is nice, but they also didn't really lose that much weight, and it would be interesting to try to replicate what they did with smaller students.

Caveat 2: the name is deeply misleading. It's normal KD, but they switch to cross-entropy loss halfway through - that's it!

Anyway, these are the first published MoE-to-MoE distillation results. The Switch paper and FAIR paper both distill MoE to dense models (since those are much easier to serve than MoE models, a gap DeepSpeed claims to eliminate in section 5 - the one I don't understand yet :( ). They use the same KD loss as the other papers, but they turn it off halfway through training. They say this improves acc, but I am most interested in the speed implications. I tried MoE-to-MoE distillation but it was extremely slow (like 10x slower than dense-to-dense) because of teacher inference at every step. If we could only run the teacher forward pass for part of the student training, that would be sweet!

Next

Let me know any inaccuracies, important omissions, what you ate for lunch, follow-up ideas! Next week I will try to tackle Section 5 (systems optimizations), and if I don't, I will burn a 20 dollar bill and record it!
2022-01-19T21:19:55Z
[ { "date": "2022-01-20T13:57:06Z", "reply": "What is 4D parallelism?" }, { "date": "2022-01-20T16:42:26Z", "reply": "sshleifer: 'Next week I will try to tackle Section 5 (Systems optimizations) and if I don't I will burn a 20 dollar bill and record it!' - I'll hold you to it @sshleifer =)" } ]
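To make the "Residual MoE" idea from the notes above concrete, here is a hedged, single-device PyTorch sketch as I read it: every token goes through a shared dense FFN and one top-1 routed expert, and the two outputs are summed. This is an illustration of the concept, with no claim that it matches the DeepSpeed implementation:

```python
import torch
import torch.nn as nn

class ResidualMoELayer(nn.Module):
    """Dense FFN + one top-1 routed expert; outputs are added (the 'residual MoE' idea)."""
    def __init__(self, hidden, ffn_dim, num_experts):
        super().__init__()
        self.dense_ffn = nn.Sequential(nn.Linear(hidden, ffn_dim), nn.GELU(), nn.Linear(ffn_dim, hidden))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, ffn_dim), nn.GELU(), nn.Linear(ffn_dim, hidden))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(hidden, num_experts)

    def forward(self, x):                      # x: (tokens, hidden)
        gates = torch.softmax(self.router(x), dim=-1)
        top_gate, top_idx = gates.max(dim=-1)  # top-1 routing: one expert per token
        expert_out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                expert_out[mask] = expert(x[mask])
        return self.dense_ffn(x) + top_gate.unsqueeze(-1) * expert_out

layer = ResidualMoELayer(hidden=64, ffn_dim=256, num_experts=4)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```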
How does the vocabulary size count towards total parameter size of a model?
https://discuss.huggingface.co/t/how-does-the-vocabulary-size-count-towards-total-parameter-size-of-a-model/13833
0
2,246
TL;DR The vocabulary size changes the number of parameters of the model. If we were to compare models with different vocabulary sizes, what would be the most fair strategy, fixing the total number of parameters or having the same architecture with same number of layers, attention heads, etc.?We have a set of mini models which are pretrained from scratch using the Roberta architecture. The number of layers, hidden sizes, and number of attention heads correspond to that of the mini models in the BERT paper. We wanted to experiment with the effect of different tokenization algorithms on the downstream performance and to this end, fit BPE, WordPiece, and WordLevel tokenizers with 50K, 50K, 100K vocabulary sizes respectively in addition to character-based tokenization. The reason for the increased vocabulary size for the WordLevel tokenization is to decrease the number of OOV tokens.Later did we notice that the difference between vocabulary sizes cause a huge difference between the number of parameters. The model sizes are 20.4M, 20.4M, 33.2M, and 8.1M for BPE, WordPiece, WordLevel, and char tokenizer-based models respectively. This means that the percentages of the number of parameters coming from the vocabulary of the model are 63%, 63%, 77%, and 1% for BPE, WordPiece, WordLevel, and char tokenizer-based models respectively.My question is, is it unfair to compare the downstream performance of these models on the same task with the same dataset just because the number of parameters are different. I would assume that in a given forward pass through an input, only a very small part of the vocabulary is updated. Because, a parameter in a layer of the transformer blocks of the model is updated every step, whereas a parameter in the vocabulary is updated whenever it appears in the input text. Therefore, to say that, for example, 100K parameters from the vocabulary of WordLevel tokenizer-based model contribute to the computation of an input is not true. This means that for as long as the number of parameters in the transformer blocks of the models are comparable, it is fair to compare the performance of the models. If this assumption is incorrect, I would be happy to be corrected.Thanks for your time.
2022-01-18T07:28:48Z
[]
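A quick back-of-the-envelope helper for the comparison described above: the token embedding (and tied output) matrix contributes roughly vocab_size × hidden_size parameters, so for small hidden sizes it can dominate the total. The hidden size of 256 and the ~7.5M non-embedding parameters below are assumptions chosen to roughly mirror the mini models in the post:

```python
def embedding_fraction(vocab_size, hidden_size, non_embedding_params):
    emb = vocab_size * hidden_size          # token embedding matrix (tied with the LM head)
    total = emb + non_embedding_params
    return emb, total, emb / total

# assumed mini model: hidden=256, ~7.5M transformer (non-embedding) parameters
for name, vocab in [("char", 512), ("BPE/WordPiece", 50_000), ("WordLevel", 100_000)]:
    emb, total, frac = embedding_fraction(vocab, 256, 7_500_000)
    print(f"{name:15} vocab={vocab:>7,} emb={emb/1e6:5.1f}M total={total/1e6:5.1f}M ({frac:.0%} embeddings)")
```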
Guide: The best way to calculate the perplexity of fixed-length models
https://discuss.huggingface.co/t/guide-the-best-way-to-calculate-the-perplexity-of-fixed-length-models/193
9
8,994
Hey all. Just thought you might be interested in a page I just added to the research docs on the perplexity of fixed-length models.

Perplexity (PPL) is defined as the exponentiated average negative log-likelihood of a sequence. For a t-length sequence X, this is defined as

\text{PPL}(X) = \exp \left\{ -\frac{1}{t} \sum_i^t \log p_\theta (x_i|x_{<i}) \right\}

But with fixed-length models (like most transformers), we can't always condition on the entire preceding subsequence when predicting each token.

The initial instinct for many in dealing with this problem is to break the whole sequence into segments equal to the model's max input size and calculate the likelihoods of each segment independently. This is not the best approach, however, since it gives the model very little context to use for prediction at the beginning of each segment. I'll illustrate this with a gif where we imagine a model with a max input size of 6 adding up the log-likelihoods for the sentence "Hugging Face is a startup based in New York City and Paris" (animation: chunked evaluation). When the model starts the second segment, it has to try to predict the word "in" without any context, even though we have 5 words before it that the model could be using (since we said the max input size is 6).

A better approach is to instead employ a sliding-window strategy, where you continually move the context across the sequence, allowing the model to take advantage of the available context (animation: sliding-window evaluation). This is slower to compute, but will typically yield better scores and is actually much closer to the way the sequence probabilities are formally decomposed (e.g. see the equation above).

In the guide, we show how to do this in a strided way with GPT-2. When using the first, naive approach, GPT-2 gets a PPL of 19.64 on WikiText-2. In contrast, when we use a strided sliding window, this score improves dramatically, down to 16.53.
2020-07-10T17:07:49Z
[ { "date": "2020-10-20T20:37:42Z", "reply": "Hi, I have a question about the perplexity calculation from theguide.Why do we divide byiin the example, seeppl = torch.exp(torch.stack(lls).sum() / i)?If you have a codebase or paper that exemplifies this behaviour could you please share it?Thanks!" }, { "date": "2020-10-20T22:01:25Z", "reply": "Hmm yes, you should actually divide byencodings.input_ids.size(1)sinceidoesn’t account for the length of the last stride.I also just spotted another bug. When the length of the last segment is less thanstride, thelog_likelihoodcalculation is slightly off. The difference in scores won’t be significant, but I’ve update the guide on master. This should be right:max_length = model.config.n_positions\nstride = 512\n\nlls = []\nfor i in tqdm(range(0, encodings.input_ids.size(1), stride)):\n begin_loc = max(i + stride - max_length, 0)\n end_loc = min(i + stride, encodings.input_ids.size(1))\n trg_len = end_loc - i # may be different from stride on last loop\n input_ids = encodings.input_ids[:,begin_loc:end_loc].to(device)\n target_ids = input_ids.clone()\n target_ids[:,:-trg_len] = -100\n\n with torch.no_grad():\n outputs = model(input_ids, labels=target_ids)\n log_likelihood = outputs[0] * trg_len\n\n lls.append(log_likelihood)\n\nppl = torch.exp(torch.stack(lls).sum() / end_loc)Does that answer your question?" }, { "date": "2020-10-21T14:02:18Z", "reply": "yep thanks Joe!I was thinking something similar but wanted to check in case I was missing something" }, { "date": "2021-03-01T22:57:39Z", "reply": "Hi@joeddav- the input_ids and target_ids are the same. Shouldn’t target_ids be shifted by one?" }, { "date": "2021-03-01T23:09:39Z", "reply": "Nevermind - just found out that labels are shifted inside the model and the loss for last one gets ignored.huggingface.coOpenAI GPT2We’re on a journey to advance and democratize artificial intelligence through open source and open science.labels(torch.LongTensorof shape(batch_size, sequence_length), optional) – Labels for language modeling. Note that the labelsare shiftedinside the model, i.e. you can setlabels = input_idsIndices are selected in[-100, 0, ..., config.vocab_size]All labels set to-100are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size]" }, { "date": "2021-07-15T19:44:29Z", "reply": "@joeddavI read and read the page several times. Thank you!What would be the simplest way of accessing a perplexity score for a sentence and its parts? I’m building an application in NodeJS and hoping to access a perplexity score via an API - paid is fine for now. I think I could set up the Python model somewhere and expose it via an API but this hopefully will come later after some MVP testing.Thank you again!" }, { "date": "2021-10-16T20:01:19Z", "reply": "I am wondering whether this is still correct. So what you do is, for all input sequences:neg_log_likelihood = outputs[0] * trg_lenYet the first output of causal LMs isCrossEntropyLoss, not NLLL. So from that you can just get the mean CE loss from all sequences and get the exponential.EDIT: that is also how it is implemented in the Trainer and run_clm.py script. 
First gather all losses for all batches in the whole validation set and take the mean.github.comhuggingface/transformers/blob/11c69b80452fae4b13c6d8bc22bdc19f3a752199/src/transformers/trainer.py#L2353-L2354if all_losses is not None:metrics[f\"{metric_key_prefix}_loss\"] = all_losses.mean().item()Then take the exponential.github.comhuggingface/transformers/blob/11c69b80452fae4b13c6d8bc22bdc19f3a752199/examples/pytorch/language-modeling/run_clm.py#L495# Evaluationif training_args.do_eval:logger.info(\"*** Evaluate ***\")metrics = trainer.evaluate()max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))try:perplexity = math.exp(metrics[\"eval_loss\"])except OverflowError:perplexity = float(\"inf\")metrics[\"perplexity\"] = perplexitytrainer.log_metrics(\"eval\", metrics)trainer.save_metrics(\"eval\", metrics)kwargs = {\"finetuned_from\": model_args.model_name_or_path, \"tasks\": \"text-generation\"}if data_args.dataset_name is not None:kwargs[\"dataset_tags\"] = data_args.dataset_name" }, { "date": "2021-11-26T15:26:25Z", "reply": "I’d agree with@BramVanroy, any thoughts@joeddavon the above post?BramVanroy:Yet the first output of causal LMs isCrossEntropyLoss, not NLLL. So from that you can just get the mean CE loss from all sequences and get the exponential.I don’t understand the multiplication bytrg_lenin this example. Also on my dataset it explodes the perplexity by orders of magnitude above a uniform upper bound oflog(|Vocab Size|)" }, { "date": "2021-12-16T03:36:34Z", "reply": "I think it is correct forPerplexity of fixed-length modelssince batch size is 1.B.T.W. most libraries like simpletransformers implement perplexity calculation by taking exp(sum_of_loss_in_all_batches / num_of_batch) likesimpletransformers/language_modeling_model.py at 254aaaa218635ef68f80ad1917403e7b7e24d710 · ThilinaRajapakse/simpletransformers · GitHub" } ]
Few shot automatic moderation
https://discuss.huggingface.co/t/few-shot-automatic-moderation/12102
0
666
The Facebook Files show that there's a lack of human-based moderation on social networks. What about automatic moderation, and how does it cope with limited dataset availability? I'm wondering what the current research status on this subject is.
2021-11-20T14:59:40Z
[]
Let's Make an Ethics Chat Bot that's Not Racist!
https://discuss.huggingface.co/t/lets-make-an-ethics-chat-bot-thats-not-racist/11905
0
725
I am a philosopher and I have studied Ethics for over 20 years (check me here j.mp/joshtedx). I am disheartened to see a few recent attempts to make Ethics AIs have not turned out well (racist ethics AIs - Google Search)This should not happen. I am quite certain I can make an Ethics AI Few shot, Q and A example or cloze or knowledge base that is not racist for you, if you can make the chat bot part. I can also easily correct the answers from any large NLP model.Let’s show the world not all Ethics AIs will end up racist!I suggest doing this as a not-for profit project that others could even then use in their chat bots to correct for unethical answers as an out-of-the-box solution!If anyone is interested please LMK! Let’s make something good for humanity!
2021-11-16T19:32:49Z
[]
New Paper: Masked Autoencoders Are Scalable Vision Learners
https://discuss.huggingface.co/t/new-paper-masked-autoencoders-are-scalable-vision-learners/11673
0
1,362
(Meta-comment: I'm actually not sure which forum this would best fit into - it seems like it would be useful to have a place where we can discuss new papers.)

This new work by Kaiming He et al. seems pretty interesting - they use a very simple masking setup when pre-training a ViT, and it looks like they get very good results across a variety of tasks. So far, I see an implementation by lucidrains.

arXiv.org - Masked Autoencoders Are Scalable Vision Learners: "This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs..."
2021-11-14T01:55:46Z
[]
Improving performance of Wav2Vec2 fine tuning with word piece vocabulary
https://discuss.huggingface.co/t/improving-performance-of-wav2vec2-fine-tuning-with-word-piece-vocabulary/6292
5
2,917
Hello, I'm fine-tuning XLSR-Wav2Vec2 on 200+ hours of speech in a language not included in the original pretraining. The training progresses nicely; however, when it reaches about 40 WER it starts to overfit (WER doesn't improve much and train loss decreases while eval loss goes up). I've tried increasing some SpecAugment parameters, but it only helped a bit. I've noticed that with the SpeechBrain implementation I get slightly better results (at the expense of training stability) and was wondering if that is due to the larger vocabulary they use there. Has anyone tried to use a tokenizer with a vocabulary that contains subwords and words in addition to characters? I couldn't find any experiment that uses it with Hugging Face Transformers W2V2. I see in the Wav2Vec 2 paper they say: "We expect performance gains by switching to a seq2seq architecture and a word piece vocabulary." (https://arxiv.org/pdf/2006.11477.pdf) Any suggestions on how to do that with Hugging Face Transformers? P.S. My dataset is noisy and not super clean. Any help or suggestion will be very helpful. Samuel
2021-05-21T13:31:58Z
[ { "date": "2021-05-26T07:07:21Z", "reply": "Not sure how I'd switch to a seq2seq architecture, but for word piece, I think you just need to change the vocab passed to the Wav2Vec2CTCTokenizer. Instead of the individual alphabet characters used for the vocab in the XLSR example, you'd need to use the wordpiece/BPE algorithm on your language text data and pass that through." }, { "date": "2021-05-28T13:49:02Z", "reply": "Thanks for the answer! Any code examples or ideas on how to use a word piece tokenizer easily? I understand I'll basically need to override most of the functions in transformers/models/wav2vec2/tokenization_wav2vec2.py" }, { "date": "2021-06-03T07:00:39Z", "reply": "You can look into sentencepiece. Hope that helps!" }, { "date": "2021-07-30T17:19:24Z", "reply": "This can be accomplished by using the BertTokenizer and setting vocab_size to 30522. Keep in mind that you don't want to use the existing lm_head weights in the Wav2Vec2ForCTC checkpoint though. I did this with the TensorFlow version, but I don't think there is a vocab limit on the PyTorch ctc loss either." }, { "date": "2021-10-27T03:02:25Z", "reply": "Thanks for the answer! I am also trying to implement this. Can I get any code examples for this? Thank you." } ]
[Help needed] Extending Trainer for Meta learning
https://discuss.huggingface.co/t/help-needed-extending-trainer-for-meta-learning/635
3
1,558
I want to implement MAML on the GLUE dataset with transformers. In my case, the query and support sets will come from the same dataset. I've read some work on meta-learning from the HF team (Wolf et al., '18). Although I've implemented my training loop (with higher; open to other methods as well), I am still looking for a correct reference implementation of MAML or Reptile to check against. Currently my code inherits from Trainer. If anyone could share a sample snippet that performs the MAML gradient updates, that'd be really helpful.
2020-08-08T11:31:51Z
[ { "date": "2020-08-17T03:27:18Z", "reply": "So theMetaDatasetwraps anyGlueDatasetto give a list containing all classes whenmeta_dataset[0]is called. So this will become,num_of_classes (N)way K shot example.I’ve written this, which extendsTrainerfor MAML.def train(self):\n\n self.create_optimizer_and_scheduler(\n int(\n len(self.train_dataloader)\n // self.args.gradient_accumulation_steps\n * self.args.num_train_epochs\n )\n )\n\n logger.info(\"***** Running training *****\")\n\n self.global_step = 0\n self.epoch = 0\n\n eval_step = [2 ** i for i in range(1, 20)]\n inner_optimizer = torch.optim.SGD(\n self.model.parameters(), lr=self.args.step_size\n )\n self.model.train()\n\n tqdm_iterator = tqdm(self.train_dataloader, desc=\"Batch Index\")\n\n # n_inner_iter = 5\n self.optimizer.zero_grad()\n query_dataloader = iter(self.train_dataloader)\n\n for batch_idx, meta_batch in enumerate(tqdm_iterator):\n target_batch = next(query_dataloader)\n outer_loss = 0.0\n # Loop through all classes\n for inputs, target_inputs in zip(meta_batch, target_batch):\n\n for k, v in inputs.items():\n inputs[k] = v.to(self.args.device)\n target_inputs[k] = v.to(self.args.device)\n\n with higher.innerloop_ctx(\n self.model, inner_optimizer, copy_initial_weights=False\n ) as (fmodel, diffopt):\n\n inner_loss = fmodel(**inputs)[0]\n diffopt.step(inner_loss)\n outer_loss += fmodel(**target_inputs)[0]\n\n self.global_step += 1\n self.optimizer.step()\n\n outer_loss.backward()\n\n if (batch_idx + 1) % self.args.gradient_accumulation_steps == 0:\n torch.nn.utils.clip_grad_norm_(\n self.model.parameters(), self.args.max_grad_norm\n )\n\n # Run evaluation on task list\n if self.global_step in eval_step:\n output = self.prediction_loop(self.eval_dataloader, description = \"Evaluation\")\n self.log(output.metrics)\n\n output_dir = os.path.join(\n self.args.output_dir, f\"{PREFIX_CHECKPOINT_DIR}-{self.global_step}\",\n )\n self.save_model(output_dir)" }, { "date": "2020-08-19T13:44:28Z", "reply": "I’m not completely sure howhigherworks. If someone can provide a minimal example with bare Pytorch, that’d be helpful." }, { "date": "2021-10-19T15:23:49Z", "reply": "Hey,@prajjwal1did you implemented this?" } ]
Detection Transformer (DETR) for text detection in documents
https://discuss.huggingface.co/t/detection-transformer-detr-for-text-detection-in-documents/10396
0
1,999
Hi,i do currently some experiments on text detection with a transformer based model.Do anyone have experience at this or recommendations ?My idea is to train the DetrForObjectDetection on the COCOText-v2 datasetCOCOText-v2i have tested some setups:pretrained facebook/resnet-50 with num_queries=2000 (a good value for a A4 document page)from scratch with efficentNet_b0 backbone from timm with backbone lr: 0.001 and lr: 0.01but in all cases the loss and train loss stuck at ~1.7 after ~35 epochs with 2 val steps per epochanother problem i have faiced is the COCOevaluator there seems to be a problem with numpy has no append at validation step:in COCOeval:problem:self.eval_imgs[iou_type].append(eval_imgs)one sample from my train dataloader looks like this:# pixel_values 1 example torch.Size([3, 640, 640]) # target for this example {'boxes': tensor([[0.0810, 0.8323, 0.1621, 0.1356], [0.3031, 0.3070, 0.0367, 0.0088], [0.5304, 0.3418, 0.0349, 0.0102]]), 'class_labels': tensor([0, 0, 0]), 'image_id': tensor([367969]), 'area': tensor([5295.0200, 103.8200, 105.6000]), 'iscrowd': tensor([0, 0, 0]), 'orig_size': tensor([640, 556]), 'size': tensor([640, 556])}so the data after Dataloader seems to be oksome more code:COCO_stuff:adapted from:COCOTextPytorch COCODataloaderdef collate_fn(batch): """ process on every sample in batch """ feature_extractor = DetrFeatureExtractor() pixel_values = [item[0] for item in batch] encoding = feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt") labels = [item[1] for item in batch] batch = dict() batch['pixel_values'] = encoding['pixel_values'] batch['pixel_mask'] = encoding['pixel_mask'] batch['labels'] = labels return batch class CocoTextDataset(Dataset): """MSCOCO Text V2 Dataset """ def __init__(self, path, ann_file_name, image_folder_name, feature_extractor, is_train=True, data_limit=None): self.path = path self.annotation_path = os.path.join(path, ann_file_name) self.image_folder_path = os.path.join(path, image_folder_name) self.feature_extractor = feature_extractor self.data_limit = data_limit self.dataset_length = 0 self.coco_text = COCO_Text(annotation_file=self.annotation_path) if is_train: print('Load Training Data') self.set_part = self.coco_text.train else: print('Load Validation Data') self.set_part = self.coco_text.val # create sets for train and validation self.cleaned_img_to_ann_ids = {k:v for k,v in self.coco_text.imgToAnns.items() if v and k in self.set_part} # sort out images and annotations, which are not readable or have uncorrect bound boxes self.ann_ids = list() self.image_ids = list() for entry_id in self.cleaned_img_to_ann_ids.values(): annotations = self.coco_text.loadAnns(entry_id) allowed_ann_ids = list() allowed_image_ids = list() for annotation in annotations: if annotation['legibility'] == 'legible' and len(annotation['bbox']) == 4: allowed_ann_ids.append(annotation['id']) if annotation['image_id'] not in allowed_image_ids: allowed_image_ids.append(annotation['image_id']) # if image has no annotations, skip it if allowed_image_ids and allowed_ann_ids: self.image_ids.append(allowed_image_ids) self.ann_ids.append(allowed_ann_ids) if self.data_limit: self.image_ids = self.image_ids[0:data_limit] self.ann_ids = self.ann_ids[0:data_limit] self.image_info = list() self.ann_info = list() for id in self.image_ids: info = self.coco_text.loadImgs(id) self.image_info.append(info) for id in self.ann_ids: info = self.coco_text.loadAnns(id) self.ann_info.append(info) if len(self.image_info) == len(self.ann_info): print('Dataset 
created sucessfully') self.dataset_length = len(self.image_info) else: print(f'Error: Number of images and annotations do not match. {len(self.image_info)} images and {len(self.ann_info)} annotations') sys.exit(0) def __len__(self): return self.dataset_length def __getitem__(self, index): image_id = self.image_ids[index] image_file = self.image_info[index] annotations = self.ann_info[index] image_path = os.path.join(self.image_folder_path, image_file[0]['file_name']) image = Image.open(image_path).convert("RGB") target = {'image_id': image_id[0], 'annotations': annotations} encoding = self.feature_extractor(images=image, annotations=target, return_tensors="pt") pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension target = encoding["labels"][0] # remove batch dimension return pixel_values, target class COCODatasetLoader(pl.LightningDataModule): def __init__(self, path, ann_file_name, image_folder_name, feature_extractor, batch_size, worker, collator, data_limit=None): super().__init__() self.path = path self.ann_file_name = ann_file_name self.image_folder_name = image_folder_name self.feature_extractor = feature_extractor self.batch_size = batch_size self.worker = worker self.collator = collator self.data_limit = data_limit print(f'Data Limit is set to : {self.data_limit}') def setup(self, stage=None): self.train_dataset = CocoTextDataset(self.path, self.ann_file_name, self.image_folder_name, self.feature_extractor, is_train=True, data_limit=self.data_limit) print(f'# of training samples: {self.train_dataset.dataset_length}') self.val_dataset = CocoTextDataset(self.path, self.ann_file_name, self.image_folder_name, self.feature_extractor, is_train=False, data_limit=self.data_limit) print(f'# of validation samples: {self.val_dataset.dataset_length}') def visualize_example(self, index): print(f'Visualize Example: {index}') file_name = self.train_dataset.coco_text.loadImgs(self.train_dataset.image_ids[index])[0]['file_name'] path = os.path.join(self.train_dataset.image_folder_path, file_name) annotations = self.train_dataset.coco_text.loadAnns(self.train_dataset.ann_ids[index]) print(f'{len(annotations)} boxes in image detected') image = Image.open(path).convert("RGB") draw = ImageDraw.Draw(image, "RGBA") for annotation in annotations: box = annotation['bbox'] x,y,w,h = tuple(box) draw.rectangle((x,y,x+w,y+h), outline='red', width=1) image.show() def get_val_coco_text_dataset(self): return self.val_dataset.coco_text def train_dataloader(self): return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=self.worker, pin_memory=True, collate_fn=self.collator) def val_dataloader(self): return DataLoader(self.val_dataset, batch_size=self.batch_size, shuffle=False, num_workers=self.worker, pin_memory=True, collate_fn=self.collator)Model:class TextDetectionModel(pl.LightningModule): def __init__(self, lr, id2label, feature_extractor, coco_evaluator, sync): super().__init__() self.save_hyperparameters() self.sync_dist = sync self.lr = lr self.id2label = id2label self.feature_extractor = feature_extractor self.coco_evaluator = coco_evaluator self.num_classes = len(id2label) self.model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", num_queries=2000, encoder_layerdrop=0.2, decoder_layerdrop=0.2, num_labels=self.num_classes, ignore_mismatched_sizes=True, return_dict=True) def forward(self, pixel_values, pixel_mask=None, labels=None): outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels, 
return_dict=True) return outputs.loss, outputs.loss_dict, outputs.logits, outputs.pred_boxes def training_step(self, batch, batch_idx): pixel_values = batch["pixel_values"] pixel_mask = batch["pixel_mask"] labels = [{k: v.to(self.device) for k, v in t.items()} for t in batch["labels"]] outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels) loss = outputs[0] loss_dict = outputs[1] self.log("train_loss", loss.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist) for k,v in loss_dict.items(): self.log("train_" + k, v.item()) return loss def validation_step(self, batch, batch_idx): pixel_values = batch["pixel_values"] pixel_mask = batch["pixel_mask"] labels = [{k: v.to(self.device) for k, v in t.items()} for t in batch["labels"]] bboxes = [entry['boxes'] for entry in labels] outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels) loss = outputs[0] loss_dict = outputs[1] logits = outputs[2] # pred_boxes = outputs[3] # compute averaged probability of each bbox proba = torch.stack([x for x in logits.softmax(-1)[0, :, :-1]]).mean() # compute COCO Output for each image # orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0) # results = self.feature_extractor.post_process(outputs, orig_target_sizes) # convert outputs of model to COCO api # res = {target['image_id'].item(): output for target, output in zip(labels, results)} # Coco Eval is broken currently # self.coco_evaluator.update(res) self.log("val_loss", loss.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist) self.log("val_bbox_proba", proba.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist) for k,v in loss_dict.items(): self.log("val_" + k, v.item()) return loss #def validation_epoch_end(self, outputs): # self.coco_evaluator.synchronize_between_processes() # self.coco_evaluator.accumulate() # self.coco_evaluator.summarize() def predict_step(self, batch, batch_idx): pixel_values = batch["pixel_values"] outputs = self.model(pixel_values=pixel_values) logits = outputs[2] pred_boxes = outputs[3] probas = logits.softmax(-1)[0, :, :-1] return {'probas': probas, 'pred_boxes': pred_boxes} def configure_optimizers(self): param_dicts = [ {"params": [p for n, p in self.named_parameters() if "backbone" not in n and p.requires_grad]}, { "params": [p for n, p in self.named_parameters() if "backbone" in n and p.requires_grad], "lr": 1e-5, # this lr is used for backbone parameters }, ] optimizer = AdamW(param_dicts, lr=self.lr, weight_decay=1e-4) scheduler = ReduceLROnPlateau(optimizer, patience=2, verbose=True) return {'optimizer': optimizer, 'lr_scheduler': scheduler, 'monitor': 'val_loss'} def optimizer_zero_grad(self, epoch, batch_idx, optimizer, optimizer_idx): optimizer.zero_grad(set_to_none=True)Trainerimport argparse import os import warnings import time import numpy as np import onnx import pytorch_lightning as pl import torch from onnxruntime.quantization import quantize_qat from pytorch_lightning.callbacks import (EarlyStopping, LearningRateMonitor, ModelCheckpoint) from pytorch_lightning.loggers import TensorBoardLogger from transformers import DetrFeatureExtractor from coco_tools.coco_torch_evaluator import CocoEvaluator from dataloader import COCODatasetLoader, collate_fn from model import TextDetectionModel def __check_for_boolean_value(val): """argparse helper function """ if val.lower() == "true": return True else: return False if __name__ == '__main__': 
warnings.filterwarnings("ignore") pl.seed_everything(42, workers=True) print('annotations file and image folder have to be in the same parent folder') parser = argparse.ArgumentParser(description='Text Detection Trainer') parser.add_argument("--path", help='path to generated images', type=str, required=False, default='/COCOText-v2') #set to true parser.add_argument("--ann_file_name", help='name of annotations file', type=str, required=False, default='cocotext.v2.json') parser.add_argument("--image_folder_name", help='name of image folder', type=str, required=False, default='train2014') parser.add_argument("--epochs", help='how many epochs to train the model',type=int, required=False, default=250) parser.add_argument("--batch_size", help='how big are a batch',type=int, required=False, default=8) parser.add_argument("--data_limit", help='set a fixed data limit',type=int, required=False, default=0) parser.add_argument("--worker", help='how many threads for the Dataloader',type=int, required=False, default=0) parser.add_argument("--learning_rate", help='the learning rate for the optimizer',type=float, required=False, default=1e-4) parser.add_argument("--gradient_clip", help='float for gradient clipping',type=float, required=False, default=0.1) parser.add_argument("--visualize_random_example", help='if true show an example from train set',type=__check_for_boolean_value, required=False, default=False) args = parser.parse_args() path = args.path ann_file_name = args.ann_file_name image_folder_name = args.image_folder_name epochs = args.epochs batch_size = args.batch_size data_limit = args.data_limit worker = args.worker learning_rate = args.learning_rate gradient_clip = args.gradient_clip visualize_random_example = args.visualize_random_example if data_limit == 0: data_limit = None # resource handling if torch.cuda.device_count() >= 1: batch_size = int(batch_size / torch.cuda.device_count()) accelerator = 'ddp' sync = True else: accelerator = None sync = False ### Data Part os.makedirs('text_detection_model_files', exist_ok=True) feature_extractor = DetrFeatureExtractor(format="coco_detection", do_resize=False, do_normalize=True, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225]) feat_extractor_to_save = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50", do_resize=True, size=600) feat_extractor_to_save.save_pretrained('text_detection_model_files/transformer_model/') print('feature extractor saved succesful') data_module = COCODatasetLoader(path=path, ann_file_name=ann_file_name, image_folder_name=image_folder_name, feature_extractor=feature_extractor, batch_size=batch_size, worker=worker, collator=collate_fn, data_limit=data_limit) data_module.setup() if visualize_random_example: index = np.random.choice(len(data_module.train_dataset)) data_module.visualize_example(index) train = data_module.train_dataloader() val = data_module.val_dataloader() coco_val_dataset = data_module.get_val_coco_text_dataset() coco_evaluator = CocoEvaluator(coco_val_dataset, ['bbox']) print('Coco Evaluator created') ### Model Part id2label = {0: 'Text'} # we have only one class to detect: Text text_detection_model = TextDetectionModel(lr=learning_rate, id2label=id2label, feature_extractor=feature_extractor, coco_evaluator=coco_evaluator, sync=sync) ### Callback Part checkpoint_callback = ModelCheckpoint( dirpath="text_detection_model_files/checkpoints", filename="best-checkpoint", save_top_k=1, verbose=True, monitor="val_loss", mode="min" ) logger = 
TensorBoardLogger(save_dir="text_detection_model_files/Lightning_logs", name="Text_Detection") early_stopping_callback = EarlyStopping( monitor="val_loss", min_delta=0.001, patience=15, check_finite=True, verbose=True ) lr_monitor = LearningRateMonitor(logging_interval='epoch') ### Training Part trainer = pl.Trainer(logger=logger, weights_summary="full", # only if gpu mem is overheaded -> needs much more train time benchmark=True, move_metrics_to_cpu=False, val_check_interval=0.5, gradient_clip_val=gradient_clip, # set to 0.5 to avoid exploding gradients stochastic_weight_avg=True, callbacks=[ checkpoint_callback, early_stopping_callback, lr_monitor ], max_epochs=epochs, gpus=torch.cuda.device_count(), accelerator=accelerator, precision=32, # dont change for model accumulate_grad_batches=1, # optimizer step after every n batches -> better gpu mem usage / model specific progress_bar_refresh_rate=20, # profiler='pytorch', # only for debug ) trainer.fit(text_detection_model, train, val) time.sleep(2) # short delay trained_model = text_detection_model.load_from_checkpoint(trainer.checkpoint_callback.best_model_path) trained_model.eval() trained_model.freeze() ### Saving Part # ---------------------------------- # PyTorch Model - full # ---------------------------------- try: torch.save(trained_model, "text_detection_model_files/torch_text_detection_model.pt") print('Torch model saved successful') except Exception as e: print('Cannot export as PyTorch Format -- Error : ' + str(e)) # ---------------------------------- # PyTorch Model - state dict # ---------------------------------- try: torch.save(trained_model.state_dict(), "text_detection_model_files/torch_text_detection_model_state_dict.pt") print('Torch model state dict saved successful') except Exception as e: print('Cannot export as PyTorch Format with state dict -- Error : ' + str(e)) # ---------------------------------- # onnx # ---------------------------------- try: input_batch = next(iter(val)) input_sample = { "pixel_values": input_batch["pixel_values"][0].unsqueeze(0), } values = input_sample['pixel_values'] file_path = "text_detection_model_files/torch_text_detection_model.onnx" torch.onnx.export(trained_model, values, file_path, input_names=['pixel_values'], output_names=['logits', 'pred_boxes'], dynamic_axes={'pixel_values': {0: 'batch_size', 1: 'channels', 2: 'width', 3: 'height'}, 'logits': {0: 'batch_size'}, 'pred_boxes': {0: 'batch_size'}}, export_params=True, opset_version=11, enable_onnx_checker=True, verbose=False) print('Onnx model saved successful') print('Start model quantization') model_quant = "text_detection_model_files/torch_text_detection_model.quant.onnx" quantized_model = quantize_qat(file_path, model_quant) print('Quantization succesfull') except Exception as e: print('Cannot export as ONNX Format -- Error : ' + str(e)) # Predictions model = text_detection_model.load_from_checkpoint(checkpoint_path=trainer.checkpoint_callback.best_model_path) preds = trainer.predict(model, val, return_predictions=True) print(preds)@nielsrdo you have any idea or recommendations ? ^^
2021-09-29T14:51:09Z
[]
Summarization for downstream task
https://discuss.huggingface.co/t/summarization-for-downstream-task/10011
0
656
Hi! I was wondering if anyone could point me to any work about summarization for a downstream task. For example, given an NLP pipeline, one might want to first summarize the input and then perform some tasks (e.g. keyword extraction, classification, etc.). For very long input, a first summarization step makes the text more tractable. I know of groups/companies that do proceed in this way, in some cases. However, one might want to directly summarize the text with the downstream task in mind: for keyword extraction, this might mean keeping as many keywords as possible; for classification, keeping interesting features, etc. Is anyone aware of any research work in this direction? I have looked a bit and did not find anything, but I would be surprised if no previous work exists, so I am probably searching using the wrong keywords. Any idea in this direction would also be highly appreciated.
2021-09-15T08:56:59Z
[]
[Call for participation] Interactive Grounded Language Understanding in a Collaborative Environment (IGLU) Competition@NeurIPS2021
https://discuss.huggingface.co/t/call-for-participation-interactive-grounded-language-understanding-in-a-collaborative-environment-iglu-competition-neurips2021/9851
0
721
Human intelligence has the remarkable ability to quickly adapt to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose the NeurIPS IGLU competition: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the IGLU competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants. This research challenge is naturally related, but not limited, to two fields of study: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring two communities together to approach one of the important challenges in AI. Another important aspect of the challenge is the dedication to perform a human-in-the-loop evaluation as a final evaluation for the agents developed by contestants.
The goal of our competition is to approach the following scientific challenge: How to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment? By an interactive agent we mean that the agent is able to follow the instructions correctly, is able to ask for clarification when needed, and is able to quickly adapt newly acquired skills, just like humans are able to do while collaboratively interacting with each other.
Tasks and Application Scenarios: Given the current state of the field, our main research challenge might be too complex to suggest a reasonable end-to-end solution. Therefore, we split the problem into the following concrete research tasks:
Architect Task: Given a target structure, generate step instructions for the Builder. Submission system: CodaLab - Competition
Builder Task: Given an Architect-Builder conversation, build the target structure. Submission system: https://competitions.codalab.org/competitions/33828
Prizes:
Architect Task: 1st place - 5K $, 2nd place - 1.5k $, 3rd place - 500 $
Builder Task: 1st place - 5K $, 2nd place - 1.5k $, 3rd place - 500 $
Timeline:
July 26 – Stage 1 begins;
(Tentative) October 15 – Stage 1 ends;
October 22 – Stage 2 begins by deploying the top-3 performing agents for human evaluation;
November 26 – The results of Stage 2 are posted, and the list of winning teams per task is released;
December 6 – NeurIPS 2021 begins.
Upcoming workshops: To make it even easier for you to onboard, we will arrange workshops per task:
Architect Task on Sep 9, at 9 am PST: the link to sign up: IGLU - Workshop (iglu-contest.net)
Builder Task on Sep 10, at 10 am PST: the link to sign up: IGLU - Workshop (iglu-contest.net)
During the workshops, our team will walk you through the setup, available baselines, and the training environment (for the Builder task). You will have a great opportunity to ask any questions, which we probably can answer.
Guest Lectures: If you have missed our guest lectures, here are the links to recordings: IGLU - Guest Lecture by Marc (iglu-contest.net), IGLU - Guest Lecture by Jianwei (iglu-contest.net)
For more frequent updates: follow us on Twitter @IgluContest. The news section at our website: IGLU (iglu-contest.net). For questions to organizers and mentors, use the Slack channel: Join IGLU on Slack | Slack. Register for the competition at CodaLab: CodaLab - Competition https://competitions.codalab.org/competitions/33828
2021-09-09T14:41:39Z
[]
Implementing a custom Attention Transformer
https://discuss.huggingface.co/t/implementing-a-custom-attention-transformer/9702
5
3,075
Hello everyone, currently I am trying to implement a custom attention transformer, whose attention is given on page 4 of this link. They have used Hugging Face for the implementation, and I am not sure how to approach this problem or how to use Hugging Face to implement custom attention. Can anybody guide me on how to go about implementing this? Thanks,
2021-09-03T04:54:27Z
[ { "date": "2021-09-03T20:24:28Z", "reply": "Hey@iakarshumy best guess is that the authors implemented DocFormer from scratch, so as far as I can tell you can’t do some clever subclassing of an existing model to tweak the attention layers.Having said that, you could look at the implementation ofLayoutLMV2which seems to share a similar approach and you can usethis templateto get all the basic modeling files.Do you know if AWS open-sourced the pretrained weights of DocFormer? Without them, you might need a lot of compute to build a useful model.Hope that helps!" }, { "date": "2021-09-04T03:02:44Z", "reply": "Hey@lewtun, thanks a lot for sharing this, maybe then I would focus on implementing it from scratch, and learn from the implementation of LayoutLMV2, thanks a lot for that. And for the computation, I have some resources, which means NVIDIA DGX to work, and I am searching about the open-source Docformer code, but I am not getting it. I mailed the author and they refrained from sharing the code, so I don’t think that they have open-sourced it. Again, thanks a lot for replying." }, { "date": "2021-09-06T08:35:30Z", "reply": "Hey@nielsrisDocFormercurrently on your roadmap fortransformers?@iakarshuis thinking about having a go at implementing and pretraining it (because the authors didn’t release code or weights), so I thought it would be good to double-check that you don’t do the same work twice" }, { "date": "2021-09-06T09:48:59Z", "reply": "No it’s not on my list, seems interesting.However, if there are no pre-trained weights available (and even no code), then there’s a low chance for me to add it to the library." }, { "date": "2021-09-06T10:14:45Z", "reply": "@nielsr@lewtunthanks a lot, then I would do it, and would ask the community if i get stucked, thanks a lot, I shall begin my coding then" } ]
Collaborative Training Experiment Round 2 with Yandex and HuggingFace
https://discuss.huggingface.co/t/collaborative-training-experiment-round-2-with-yandex-and-huggingface/9674
0
563
Let’s train an even larger model together with Yandex, HuggingFace and Neuropark! A few months ago we assembled to train a large sahajBERT. So let’s make it even larger! Join Neuropark’s Discord community with this link - Neuropark. We are about to start the training from 2nd September.
There will be a few new things to play with beside the 4x scale:
sahajBERT 2.0 will start from sahajBERT 1.0 using Net2Net model expansion
we’ll try hybrid training with both GPU and TPU and see how they compare
and bring along local GPU devices (see below)
If you have a GPU desktop with ≥6GB memory and ≥30Mbit upload speed, we’d really appreciate it if you can bring it to the collaborative run (and we will help you with the setup). You can join and leave training at any time, even if it is only for a couple of hours. Also, we’d really appreciate your ideas on the training procedure:
fine-tuning benchmarks that we should run: anything beside Wikiann and Soham News Category Classification?
future training runs: we’ll be able to train the model in ~2 weeks. Is there any other task that you would like to pretrain a model for? What data should we use there?
Let me know if you face any issues regarding joining or anything. Check our previous models on neuropark (Neuropark). Read the blog post about our previous collaborative training - Deep Learning over the Internet: Training Language Models Collaboratively; paper link - [2106.10207] Distributed Deep Learning in Open Collaborations. Thanks to Yandex and HuggingFace for this initiative. Let's train 4x!
2021-09-01T16:17:44Z
[]
Tutorial / codebase for models interacting while training?
https://discuss.huggingface.co/t/tutorial-codebase-for-models-interacting-while-training/9554
0
494
I need guidance on how to get started on a research project. I want to train two models (the particular architectures aren’t important) in tandem, with the ability to have the two models pass input tokens and output token between one another during training. Is there a tutorial or codebase with this functionality for me to get started?
2021-08-29T00:44:06Z
[]
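There is no canonical tutorial for the two-model setup described in the thread above, but here is one heavily simplified pattern as a starting point, stated as an assumption rather than an established recipe: model A generates tokens (without gradient, since sampling is discrete), model B is trained on A's output appended to the prompt, and both are updated in the same loop. The checkpoint names, losses and toy data are all illustrative choices.

# Illustrative sketch (not an established recipe): two causal LMs where model A's
# generated tokens are fed to model B during training. Gradients do not flow through
# the discrete generation step; the two models are simply updated in the same loop.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
model_a = AutoModelForCausalLM.from_pretrained("gpt2")
model_b = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(list(model_a.parameters()) + list(model_b.parameters()), lr=5e-5)

texts = ["The quick brown fox", "Transformers are"]          # toy training data
for text in texts:
    enc = tok(text, return_tensors="pt")

    # Model A produces a continuation; no gradient through sampling.
    with torch.no_grad():
        generated = model_a.generate(**enc, max_new_tokens=8, pad_token_id=tok.eos_token_id)

    # Model B is trained to model the full sequence (prompt + A's continuation).
    out_b = model_b(input_ids=generated, labels=generated)

    # Model A gets its own LM loss on the original text (could be any other objective).
    out_a = model_a(**enc, labels=enc["input_ids"])

    (out_a.loss + out_b.loss).backward()
    opt.step()
    opt.zero_grad()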
10_000 samples & 10_000 labels
https://discuss.huggingface.co/t/10-000-samples-10-000-labels/8868
0
507
Hey Community, I have a data set in which each sample has its own label. For instance: I have 10,000 samples where each sample has one word as its label, and each label is unique to that sentence; this makes 10,000 training samples with 10,000 labels. Does anyone here have an idea about how to do this, or a toy code example? Thank you so much for your help.
2021-07-31T09:57:12Z
[]
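Because every sample in the thread above has its own unique label, a 10,000-way classifier with one example per class is unlikely to work; one common reframing is text-to-text generation, where the model learns to produce the label word from the sentence. A minimal sketch with an assumed T5 checkpoint and made-up data follows; it is illustrative, not a prescribed solution.

# Illustrative sketch: treat "one unique word label per sentence" as a text-to-text task.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")        # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Toy stand-ins for the 10k (sentence, label-word) pairs.
pairs = [("the cat sat on the mat", "cat"), ("stocks fell sharply today", "finance")]

for sentence, label_word in pairs:
    inputs = tok("label: " + sentence, return_tensors="pt")
    targets = tok(label_word, return_tensors="pt").input_ids
    loss = model(**inputs, labels=targets).loss   # teacher-forced generation loss
    loss.backward()
    opt.step()
    opt.zero_grad()

# At inference time, generate the label word for a new sentence.
print(tok.decode(model.generate(**tok("label: dogs bark loudly", return_tensors="pt"),
                                max_new_tokens=5)[0], skip_special_tokens=True))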
Best way to infer continuously with Transformer?
https://discuss.huggingface.co/t/best-way-to-infer-continuously-with-transformer/8690
0
556
Hi!I’m looking for ways to infer w/ a Transformer model in a continuous manner — basically, I want it to retain some information about the previous sample in case it was part of the same text segment.One approach I’m trying out now is inferring with intersecting windows (stride < length), and aggregating encoder embeddings of the overlapping part of the sequence (i.e. use information from window N to infer N+1). I use summing to aggregate instead of mean/dot product, as it gives the closest result to inferring as usual, but the result still doesn’t account for earlier context, meaning the approach doesn’t work.Has this problem been addressed already? Is the typical solution to just increase input length bound? (What if I don’t have enough compute to train a model with large input lengths?)
2021-07-26T12:11:00Z
[]
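For concreteness, here is a small sketch of the overlapping-window approach described in the post above, using the fast tokenizer's built-in stride/overflow support and then averaging hidden states where windows overlap. The window length, stride, simple averaging rule and the handling of special tokens are assumptions made for illustration.

# Sketch of overlapping-window encoding: chunk a long text with a stride, encode each
# window, then average hidden states where windows overlap (window/stride values assumed).
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

long_text = "some very long document " * 200
enc = tok(long_text, max_length=128, stride=64, truncation=True,
          return_overflowing_tokens=True, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(input_ids=enc["input_ids"],
                   attention_mask=enc["attention_mask"]).last_hidden_state

# Stitch windows back together: positions covered by two windows get the mean of both.
# (This ignores the [CLS]/[SEP] offsets for simplicity.)
num_windows, window_len, dim = hidden.shape
step = 128 - 64                      # approximate step between window starts
total_len = step * (num_windows - 1) + window_len
summed = torch.zeros(total_len, dim)
counts = torch.zeros(total_len, 1)
for i in range(num_windows):
    summed[i * step: i * step + window_len] += hidden[i]
    counts[i * step: i * step + window_len] += 1
stitched = summed / counts
print(stitched.shape)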
The (hidden) meaning behind the embedding of the padding token?
https://discuss.huggingface.co/t/the-hidden-meaning-behind-the-embedding-of-the-padding-token/3212
2
6,035
So I noticed that the transformers contain different embeddings for PAD tokens, and I know pad tokens typically are simply ignored for the most part (if present at all). However, as a forward pass over a batch typically contains dozens of padding tokens, it would be interesting to see if these in fact hold any meaningful information (as padding tokens do attend to the sequence). Does anyone know of any research which has been conducted on what information might be present here? One might legitimately ask why this is relevant: aren't padding tokens simply a convenience for efficient processing because we need the same tensor shape? This is naturally correct, but quite a few studies have clustered the sentence embeddings, and it seems relevant to ask what influence the padding embeddings have on this. For a short demonstration that they indeed have different embeddings:
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model = transformers.BertModel.from_pretrained("bert-base-uncased")

input_ = tokenizer(["this is a sample sentence"],
                   return_tensors="pt",
                   # add some padding
                   padding="max_length", max_length=128, truncation=True)
output = model(**input_)

# extract padding token embedding
pad_tok_id = [i for i, t in enumerate(input_["input_ids"][0]) if t == 0]
embedding_pad1 = output[0][0][pad_tok_id[0]]
embedding_pad2 = output[0][0][pad_tok_id[1]]
embedding_pad1.shape  # embedding size
embedding_pad1[0:10]
embedding_pad2[0:10]

tensor([-0.5072, -0.4916, -0.1021, -0.1485, -0.4096, 0.0536, -0.1111, 0.0525, -0.0748, -0.4794], grad_fn=<SliceBackward>)
tensor([-0.6447, -0.5780, -0.1062, -0.1869, -0.3671, 0.0763, -0.0486, 0.0202, -0.1334, -0.5716], grad_fn=<SliceBackward>)
2021-01-15T09:22:01Z
[ { "date": "2021-04-29T01:21:21Z", "reply": "@KennethEnevoldsenI have been thinking about the same a while ago.You have a point with different embeddings for pad tokens. But, to my understanding these never interfere with any part of model’s computation (like, self attention), since the pad tokens are always masked using the attention masks.Would you have an example of where the pad token embeddings could make a difference, given the attention mask?" }, { "date": "2021-07-14T11:13:48Z", "reply": "Hello,This discussion sounds interesting to me because I was thinking the same.Why there are different embedding vectors for PAD tokens.My use-case is a multi-label text classification where I am using a pretrained model in MaskedLanguageModeling as an “embedding layer”. More specific, I feed the input text [b,t] padded to the “embedding layer” and it outputs [b,t,f], where b is the batch_size, t is the length of the max sequence in the batch, f is the feature_number.After this I am using Attention to [b,t,f] and take a vector [b,1,f] which, after pass it from two linear layers and a sigmoid, gives the predictions.I check cosine similarity between embedding vectors of PAD tokens and it is almost between all over 0.7. Additionally, cosine similarity between words’ embedding vectors and PAD tokens’ vectors is almost between all under 0.3.Atenttion mechanism seems to assign negligible weights to PAD tokens embeddings vectors.In general it seems that these vectors are kind of ignored from the model. Furthermore, my results are pretty ok with respect to accuracy." } ]
Language model to search an answer in a huge collection of (unrelated) paragraphs
https://discuss.huggingface.co/t/language-model-to-search-an-answer-in-a-huge-collection-of-unrelated-paragraphs/2210
4
1,491
I want to build a question/answer language model to search a large collection of paragraphs.Say 10k paragraphs. And find relevant answers in them.There are 2 issues I don’t know how to solve.existing solutions often identify an answer from a short paragraph. I don’t know how to deal with a lot of paragraphs. A naive approach would be going through each paragraph and identify an answer in each of them.existing solutions will generate an answer even when fed with an unrelated paragraph. they don’t give a confidence number. If I have 10k paragraphs to search an answer from, and only 3 paragraphs have an answer, using existing solutions won’t let me to rule out unrelated paragraphs.Is there a way to generate a document embedding first (using both a question and a paragraph ), and I can use the embedding to find candidate paragraphs first and then do the actual answer search. And when there is no answer, I’d like to get a confidence number that 's below my answer threshold.Are there any papers dealing with this problem?
2020-11-25T18:59:48Z
[ { "date": "2020-11-27T23:50:08Z", "reply": "DPR & RAG may be the references you want.Regarding your questions and my answers with DPRhuggingface.coDPR — transformers 3.5.0 documentationDPR (retriever module) select top-k paragraphs from 20 million of possible wikipedia paragraphs (not just 10k, and you can also make your own corpus) using very fast MIPS (maximum inner product search) implemented by FAISSDPR (reader module) produce a relevance score for each of the top-k passages so this is a confidence number that you mentionedFinally, RAG is an improvement of DPR where (1) you can combine different passages directly (both relevance and irrelevance) to produce the final answer by “marginalization” and (2) Final answer is generated in free-form, not necessarily contained in any of the passage .(Please see the paper for detailshttps://huggingface.co/transformers/model_doc/rag.html)" }, { "date": "2021-07-02T06:32:54Z", "reply": "Hi Jung & HF Community.I am implementing a RAG process,… with a daily update.I can easily merge the dataset objects using datasets.concatenate_datasets()but I have two questions:I cannot merge the indices… even if i .load_faiss_index() to each part the concat object has no indexIs this the best way to search a large corpus or would it be best to load each dataset into a seperate node and scan across a cluster?I am followingtransformers/use_own_knowledge_dataset.py at master · huggingface/transformers · GitHub, creating a new folder for each daily dataset." }, { "date": "2021-07-03T09:24:36Z", "reply": "Hi@Berowne, it’s very interesting question.Daily updated datasets should be an important use case.Unfortunately, I have no answer. Maybe@lhoestqcould help us here?" }, { "date": "2021-07-06T12:40:00Z", "reply": "Hi ! If you concatenate two datasets, you will need to build a new FAISS index for the new dataset.Depending on the number of documents you have and the type of index you use, you can either:rebuild a new index from scratch (easy, but slow for big datasets and advanced index types)or update one of the existing index with new vectors (useful if you need to add a few new documents for example into an already existing big dataset)or merge the two index together (possible only for certain index types,hereis an example for IVF)Regarding your second question, it is definitely a reasonable way to search a large corpus. Though it may also depend on your needs in terms of speed and accuracy, and on the size of your dataset." } ]
Address extraction and formated using Places API (Google Maps API)
https://discuss.huggingface.co/t/address-extraction-and-formated-using-places-api-google-maps-api/7998
0
1,691
I am currently playing around with the Places API from Google. I am just curious about the technique they are using to make this happen (is NER alone enough for this?). I think they first detect where the address is in my input, then parse it into sub-levels (like what I am going to describe below). When I input the text "I wanna deliver this package to #K A B C", it gave me a result that was super impressive, with 3 administrative_area_levels; it even reformatted my input text (and corrected spelling/grammar mistakes) into a proper address. In detail, it looks something like: street_name: {#K}, administrative_area_level_1: {A}, administrative_area_level_2: {B}, ..., formatted_address: #K, A, B, C. (Google Developers, Overview | Places API: provides type-ahead predictions for text-based geographic searches, by returning places such as businesses, addresses and points of interest as a user types.)
2021-07-04T14:25:56Z
[]
Finetuning for fp16 compatibility
https://discuss.huggingface.co/t/finetuning-for-fp16-compatibility/977
2
1,645
T5 and pegasus don’t really work in fp16 because they create activations that overflow fp16 bits (they were trained in bfloat16, which has a larger range). Has anyone read/seen/heard anything about finetuning/scaling models so that their activations can fit in fp16 (or, more generally, to encourage smaller-magnitude activations)? I tried one experiment on google/pegasus-xsum where I finetune with the summarization LM loss and add some additional losses based on the magnitude of the hidden states, but I haven’t weighted them properly yet (the model instantly forgets how to summarize), so I’m looking around.
2020-09-03T17:26:08Z
[ { "date": "2021-06-07T10:05:31Z", "reply": "It’s been a long time since this post, but maybe you remember if the problem with fp16 will appear when training the models from scratch (pretraining)?I’ve seen some NaNs already while training with fp16 on, but after lowering the learning rate, beginning of training looks reasonable." }, { "date": "2021-06-17T10:33:53Z", "reply": "After 3 days of training with fp16 on NaN loss happened. Created issuePegasus pretraining in fp16 results in NaN loss · Issue #12225 · huggingface/transformers · GitHub, maybe someone knows how it can be fixed." } ]
What can transformers learn without position encoding?
https://discuss.huggingface.co/t/what-can-transformers-learn-without-position-encoding/6554
1
2,994
So it obviously makes sense that attention mechanisms don’t have any inherent sense of position without encoding it explicitly, and for sequence prediction this seems critical. But, for example, word2vec via CBOW or skip gram is able to learn word embeddings without explicit position encoding. So my question is basically if we train a BERT model without the position encoding on the Masked LM task (something very similar to word2vec it seems to me), what is BERT capable of learning if anything? Would it be better than word2vec for creating word embeddings?
2021-06-03T15:52:11Z
[ { "date": "2021-06-10T08:18:11Z", "reply": "My intuition would be that the transformers would still have a notion of context. It would still know this word appear in context with those other words, but would lose the notion of order loosely associated with position embeddings. Also, it would still allow word embeddings to change depending on the other words in context. So it would still be better than word2vec, which only has one embedding by word (learned as a combination of several contexts)." } ]
Project Description
https://discuss.huggingface.co/t/project-description/6444
1
366
Hi @Mads, your project looks very interesting, would you mind adding a description? huggingface.co: Mads/wav2vec2-xlsr-large-53-kor-financial-engineering · Hugging Face
2021-05-28T09:33:44Z
[ { "date": "2021-05-29T04:25:31Z", "reply": "Hi Snow, thank you for your interest!I will update in the coming week as soon as possible!" } ]
Does it make sense to generate sentences with Transformer's encoder?
https://discuss.huggingface.co/t/does-it-make-sense-to-generate-sentences-with-transofmrers-encoder/6311
0
379
Quite a few vision+language papers pretrain BERT-based models with image-text data and finetune for the image captioning task. But there is no decoder involved to generate sentences. Does that make sense? And what's the main difference between using a Transformer's encoder to do the sentence generation and doing it with a Transformer decoder?
2021-05-22T15:26:15Z
[]
PEGASUS model overfitting
https://discuss.huggingface.co/t/pegasus-model-overfitting/6246
2
463
Hey everyone, I would like to see any scientific evidence regarding model overfitting available for the PEGASUS (Pre-training with Extracted Gap-sentences for Abstractive Summarization) model. If anyone can point me to some resources or provide an answer, I’d greatly appreciate it. Thanks and stay safe.
2021-05-19T07:05:14Z
[ { "date": "2021-05-19T12:01:41Z", "reply": "het@theprincedripi don’t know the answer off the top of my head, but one place to start would be to check out the citations of the pegasus paper, e.g. viaGoogle Scholar" }, { "date": "2021-05-19T12:12:11Z", "reply": "Thanks a lot, I’ll check it out" } ]
Classification Heads in BERT and DistilBERT for Sequence Classification
https://discuss.huggingface.co/t/classification-heads-in-bert-and-distilbert-for-sequence-classification/6146
2
1,128
Hi,I have been using BertForSequenceClassification and DistilBertForSequenceClassification recently and I have noticed that they have different classification heads.BertForSequenceClassification has a dropout layer and a linear layer, whereas DistilBertForSequenceClassification has two linear layers and a dropout layer.Is there a particular reason for this?Thanks in advance!
2021-05-12T16:16:52Z
[ { "date": "2021-05-12T18:22:47Z", "reply": "All in all, they have the same head: BertForSequenceClassification has a dropout layer and a linear layer but uses the pooler output, which went through a linear layer inside the BertModel.DistilBertModel has no pooler output however, so the first linear layer is there to replicate that." }, { "date": "2021-05-13T09:28:34Z", "reply": "Thank you that makes sense!" } ]
Collaborative Training Experiment of an Albert Model for Bengali
https://discuss.huggingface.co/t/collaborative-training-experiment-of-an-albert-model-for-bengali/5991
1
1,301
Huggingface is launching a collaborative training experiment of an Albert Model for Bengali language with our community. We are actively looking for participants who will help us to train the model.So what do you need in order to participate-A Google Colab accountThat’s everything you need.[Although if you want to use the power of your own GPUs, Huggingface will also provide a script for that.]How you can contribute?If you are a native Bengali speaker, that would be a great help, we are looking for participants who will check the performance of the tokenizer, sentence splitter, etc.You might want to help us preprocessing the dataset. We are using the Wikidump and OSCAR Bengali dataset to train the model, if you have some suggestions on preprocessing these feel free to contribute in that part.Now the main part, distributive training. You have been provided a google colab script in order to start the training and if your kernel crashes just restart the training script. (Non native speakers can participate)Join our discord community link-https://discord.gg/GD9G4j8fJU[A separate slack channel from Huggingface will be provided where you will get to know more about the distributive training framework and other related things.]We are aiming to start this collaborative training experiment from -May 7thPlease do participate in this first Huggingface collaborative training experiment specifically the native bengali speakers.
2021-05-05T06:15:19Z
[ { "date": "2021-05-06T09:43:18Z", "reply": "Also I forgot to mention the main thing. Thanks to Yandex for creating this collaborative distributive training strategy. Without them this huge community training event would not be possible." } ]
Task-specific fine-tuning of GPT2
https://discuss.huggingface.co/t/task-specific-fine-tuning-of-gpt2/5700
0
1,041
Hi there, in the Seq2Seq examples (transformers/examples/legacy/seq2seq at master · huggingface/transformers · GitHub) why is there no mention of GPT-x? It seems to me that it shouldn’t be difficult to fine-tune this model using GPT2LMHeadModel for particular text-to-text tasks. Wondering if anyone has any thoughts on this. Thanks!
2021-04-22T19:37:46Z
[]
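One common way to adapt a decoder-only model like GPT-2 to a text-to-text task is to concatenate source and target with a separator and mask the loss on the source part. A small sketch of that data preparation follows; the separator string and masking convention are choices for illustration, not an official recipe from the examples folder.

# Sketch: prepare a (source, target) pair for GPT2LMHeadModel so that the LM loss
# is only computed on the target tokens (-100 masks positions out of the loss).
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

source, target = "translate to French: Hello, how are you?", "Bonjour, comment ça va ?"
sep = " TL;DR "                                   # assumed separator; any consistent marker works

src_ids = tok(source + sep, return_tensors="pt").input_ids
tgt_ids = tok(target + tok.eos_token, return_tensors="pt").input_ids

input_ids = torch.cat([src_ids, tgt_ids], dim=1)
labels = input_ids.clone()
labels[:, : src_ids.shape[1]] = -100              # ignore the source part in the loss

loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
print(loss.item())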
Is causal language modeling (CLM) vs masked language modeling (MLM) a common distinction in NLP research?
https://discuss.huggingface.co/t/is-causal-language-modeling-clm-vs-masked-language-modeling-mlm-a-common-distinction-in-nlp-research/5665
0
2,158
The huggingface documentation states: GPT and GPT-2 are fine-tuned using a causal language modeling (CLM) loss while BERT and RoBERTa are fine-tuned using a masked language modeling (MLM) loss. I have two questions regarding this statement: (1) Is this a common distinction you’d find in the NLP literature (any literature on this distinction)? (2) Is it a sensible distinction in your opinion? While I totally agree with CLM, I don’t understand why you would call BERT & co. “masked language models”, since causal language models do the actual masking in next-token prediction.
2021-04-21T14:30:19Z
[]
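The practical difference between the two objectives is easy to see in how the training labels are built; here is a compact sketch using the library's data collator for both cases (the example sentence and masking probability are arbitrary):

# CLM vs MLM in terms of label construction, via DataCollatorForLanguageModeling.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
features = [tok("the cat sat on the mat")]

# MLM (BERT-style): random tokens are replaced by [MASK]; labels hold the original ids
# at masked positions and -100 everywhere else.
mlm_collator = DataCollatorForLanguageModeling(tok, mlm=True, mlm_probability=0.15)
print(mlm_collator(features)["labels"])

# CLM (GPT-style): labels are just the input ids; the model shifts them internally
# so every position predicts the *next* token, with no [MASK] symbol involved.
clm_collator = DataCollatorForLanguageModeling(tok, mlm=False)
print(clm_collator(features)["labels"])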
Any ways to visualize attention of the LXMERT?
https://discuss.huggingface.co/t/any-ways-to-visualize-attention-of-the-lxmert/5579
0
497
I would like to observe the attention between an input RoI and each word in an input sentence for LXMERT. If a framework that facilitates what I want to do exists, please let me know. If not, could you tell me which of the tensors from LXMERT I should watch?
2021-04-17T17:06:53Z
[]
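Absent a dedicated visualization tool, one option is to run LxmertModel with output_attentions=True and inspect cross_encoder_attentions, which holds the cross-modality attention maps between text tokens and RoI features. In the sketch below, the random visual features and positions are placeholders for the real RoI features a detector would produce; the shapes are assumptions to be verified by printing.

# Sketch: inspect LXMERT's cross-modal attention weights (random RoI features as placeholders).
import torch
from transformers import LxmertTokenizer, LxmertModel

tok = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tok("a man riding a horse", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)   # normally the Faster R-CNN RoI features
visual_pos = torch.rand(1, 36, 4)         # normalized box coordinates

out = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos,
            output_attentions=True)

# One tensor per cross-modality layer; print the shape to see (batch, heads, query_len, key_len).
print(len(out.cross_encoder_attentions), out.cross_encoder_attentions[0].shape)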
Human Evaluation and Statistical significance
https://discuss.huggingface.co/t/human-evaluation-and-statistical-significance/5374
0
415
Hello,I have recently conducted a human evaluation of a chatbot via a survey. I wonder how I can prove that the results are statistically significant.More specifically, I compared the generated responses of two chatbots and calculated each one’s win rate. Moreover, participants were asked to rate each model according to “relevance” and “fluency” using a scale ranging from 1 to 5.According to some references (e.g.DodecaDialogue paper), they prove that the results are statistically significant using binomial testing.How can I apply binomial testing in the aforementioned case?@patrickvonplaten
2021-04-08T20:21:45Z
[]
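For the win-rate comparison described above, the binomial test mentioned in the referenced papers can be applied directly to the counts of pairwise wins. A small sketch with made-up numbers follows; recent SciPy exposes this as scipy.stats.binomtest, while older versions use binom_test.

# Binomial significance test on pairwise win counts (numbers are made up).
from scipy.stats import binomtest

wins_model_a = 132          # how often annotators preferred model A
total_comparisons = 200     # ties excluded (or split) beforehand

# Null hypothesis: both chatbots are equally good, i.e. P(win) = 0.5.
result = binomtest(wins_model_a, total_comparisons, p=0.5, alternative="greater")
print(result.pvalue)        # p < 0.05 suggests A's win rate is above chance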
How to instill auxiliary information coupled with words into transformers?
https://discuss.huggingface.co/t/how-to-instill-auxiliary-information-coupled-with-words-into-transformers/4655
0
338
Assume that auxiliary information is attached to some words. Our goal is to use it when finetuning for some tasks. Specifically, we want to finetune BERT or GPT-2 on texts with named entities. For instance, we want to feed “Jim (Person) bought 300 shares of Acme Corp. (Organization) in 2006 (Time)”, i.e., a text with named entities, to transformers instead of “Jim bought 300 shares of Acme Corp. in 2006”. Note that such auxiliary information, e.g., named entities, is coupled with specific words in most cases. If we feed the above “annotated” sentence, a pretrained tokenizer breaks the words into pieces. Hence, the model would not notice that an annotation, e.g., Organization, refers to its corresponding word, e.g., Acme Corp. What would be the standard practice to instill auxiliary information coupled with words in a sentence into transformers?
2021-03-19T02:06:10Z
[]
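One widely used practice (not the only one) for the question above is to wrap entities in dedicated marker tokens that are added to the tokenizer's vocabulary, so the annotation survives subword splitting; a small sketch, with marker names chosen for illustration:

# Sketch: entity-marker tokens added to the vocabulary so annotations survive subword splitting.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

markers = ["[PER]", "[/PER]", "[ORG]", "[/ORG]", "[TIME]", "[/TIME]"]
tok.add_tokens(markers, special_tokens=True)
model.resize_token_embeddings(len(tok))   # new embedding rows for the markers

text = "[PER] Jim [/PER] bought 300 shares of [ORG] Acme Corp. [/ORG] in [TIME] 2006 [/TIME]"
print(tok.tokenize(text))  # the markers stay as single tokens next to the entity pieces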
Zero-shot and distillation - Improved distilled model over teacher model
https://discuss.huggingface.co/t/zero-shot-and-distillation-improved-distilled-model-over-teacher-model/4621
0
1,094
A colleague and I each ran an experiment following the example found attransformers/examples/research_projects/zero-shot-distillation at master · huggingface/transformers · GitHub. Even though it was a zero-shot experiment we used data for which we had labels to evaluate how well the zero-shot prediction performed. When we ran the distillation part of our experiments we both were surprised to discover that the accuracy of the distilled student model was significantly higher than the zero-shot teacher model (experiment 1: accuracy of the distilled model 48.12% > accuracy of the zero-shot model 42.91%, experiment 2: accuracy of the distilled model 79.82% > accuracy of the zero-shot model 77.36%). In the second experiment there is a small possibility that this performance increase could be explained by chance (5000 examples), but not for the first experiment which has 86651 examples. I wonder if other people got similar improvement and if it’s a known phenomenon what would explain it.
2021-03-18T17:53:41Z
[]
XLSR-53: To group tokens or not to group tokens
https://discuss.huggingface.co/t/xlsr-53-to-group-tokens-or-not-to-group-tokens/4522
1
536
In @patrickvonplaten's Fine-Tuning XLSR-53 notebook, he mentions how tokens shall not be grouped when computing metrics, in the case of that notebook the WER metric. And that does make sense. However, later on in the notebook, he goes on to use the processor to decode the predictions and doesn’t pass the group_tokens=False argument to the method. Shouldn’t the way we decode to compute metrics and to output predictions be the same? Which way would be the correct one? This is probably a minor issue for languages that don’t duplicate graphemes that often, but I’m curious as it could impact the perceived performance one way or another. Could someone clarify this for me?
2021-03-17T19:59:43Z
[ { "date": "2021-03-18T06:57:08Z", "reply": "Hey@jjdv,Could you check whether this issue answers your question:wav2vec2: `convert_tokens_to_string` contracts legitimately repeated characters · Issue #10619 · huggingface/transformers · GitHub?" } ]
NER for 2D text
https://discuss.huggingface.co/t/ner-for-2d-text/4451
0
424
I’m looking for a method for NER on semi-structured text (i.e. text with bounding boxes). The challenge with NER on semi-structured text is that, because of the 2D nature of the text, we cannot rely on the usual IOB tagging schema to retrieve entities. Here’s an example where we want to extract the 2 addresses as LOC entities. In this setup, we have those labels (disregarding B-/I-, since it no longer makes sense). Now, if we were to treat this as plain text by sequentially looking line by line, we are mixing entities because each entity spreads across multiple lines, so retrieving entities from entity labels is not trivial. The only solution I’ve seen is to add a subtask to group tokens into entities (treating it essentially as relation extraction).
2021-03-16T17:21:53Z
[]
Dealing with Imbalanced Datasets?
https://discuss.huggingface.co/t/dealing-with-imbalanced-datasets/4328
1
5,162
Hi everyone,I am dealing with a binary classification task (non-English language) of relatively long documents (~4k words on average). I have tested a Logistic Regression trained on simplistic BoW features, yielding reasonable performance.I am now testing the multilingual BERT, with two linear layers on top of it and using the Cross-Entropy loss; however, its performance is quite low. The “annoying” part is that on a given test set, BERT always predicts the majority class. It is worth saying that the dataset (both train and test) is rather imbalanced (80/20).I have tried the following without any luck:a) Play around with the learning rate, class weighting, num of linear layers & associated configurations.b) Select different parts of the document as input to BERT.c) Generate balanced samples (incl. oversampling the minority class).I have also tried generating a synthetic toy dataset of 1K examples from one document belonging to one class and another 1K examples from one document belonging belonging to the other class - the performance was perfect, as expected.Is there something obvious that I am missing in terms of debugging my model? Is the problem the imbalanced nature of the dataset I am working with? Could a Focal loss (or anything else) help on this end?
2021-03-11T21:18:10Z
[ { "date": "2021-03-11T22:05:54Z", "reply": "Hi@aguarius, my naive guess is that the length of your documents is the source of the low performance since BERT has a maximum context size of 512 tokens which is only a handful of paragraphs.One somewhat hacky approach to this could be to chunk your document into smaller passages, extract the hidden states per passage and then average them as features for your linear layers.What language(s) are in your corpus? That might be another source of difficulty since mBERT is not great on all of its languages and perhaps you can work with a better model like XLM-RoBERTa (or even a monolingual one if that’s possible)" } ]
How does BERT actually answer questions?
https://discuss.huggingface.co/t/how-does-bert-actually-answer-questions/4287
1
747
I have been trying to understand how the BERT model works. Specifically, I am trying to understand how it picks up answers to questions on a given passage. I have tried following this blog post and whilst it has given me a nice intuition, I would like to better understand what is happening under the hood. From my understanding, the question and paragraph are tokenised separately and then go through the transformer model. Then, the dot product between the ‘transformed’ tokens and a START/END vector is taken, with the highest result giving you the start and finish of the answer. What I would like to understand is what happens to the tokens in this “transformation” (i.e. the feedforward pass through the model) that makes it possible to take a dot product and therefore indicate whether a word is a START/END.
2021-03-10T15:30:01Z
[ { "date": "2021-03-11T20:30:32Z", "reply": "Hi@theudster, you can find a detailed tutorial on question-answering withtransformershere:https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb" } ]
Hugging Face Reads - 01/2021 - Sparsity and Pruning
https://discuss.huggingface.co/t/hugging-face-reads-01-2021-sparsity-and-pruning/3144
13
7,439
Hugging Face Reads, January 2021 - Sparsity and Pruning
By Victor Sanh, François Lagunas, and Yacine Jernite
Introduction to the Hugging Face Reads (HFR) series
New year, new Hugging Face reading group! We are launching the Hugging Face Reads (HFR) series: each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, questions we had for follow-up work after reading them, and how recent advances in the area have affected our work at HF. The first topic for January 2021 is Sparsity and Pruning, and we are planning to address Long-Range Attention in Transformers next month. Enjoy, and come join the conversation here!
Introduction
While large-scale pre-trained language models help solve an ever-growing set of natural language processing tasks, the progressive increase in their sizes raises concerns about their wide-scale applicability in practical settings, especially on devices with limited memory and computing power. Sparse neural network models which only use a fraction of the large parameter sets of their dense equivalents offer a promising avenue to reduce these computational demands. Recent works have proposed various methods to achieve impressive levels of sparsity, whether by gradually choosing which parameters to retain during training or by “pruning” the parameter set after the fact. This post presents an overview of four papers proposing or analyzing such methods. We review the following works: the (Chen et al., NeurIPS 2020) paper investigating the applicability of the Lottery Ticket Hypothesis to BERT-style models, the (Frankle et al., 2020) analysis of currently available methods to find sparsity patterns at initialization before doing any training, the (Li et al., 2020) work on the computational and performance trade-offs between training a large model to prune later vs. training smaller models right away, and the (Hooker et al., 2020) study of the biases introduced by current methods used to compress models (including pruning).
Paper summaries
For each paper, we identify some of the claims and contributions, as well as some follow-up questions.
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
By Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
The Lottery Ticket Hypothesis (LTH) was initially developed and tested on computer vision systems. It states that given an initialization of a model, it is possible to find a subset of sufficient parameters during training: i.e., such that training only those parameters while setting the others to zero allows the model to reach the same performance as training the full model. Unfortunately, this subset can only be found after some amount of computation, and the method requires several iterations of re-training (either from scratch or from an earlier checkpoint, a method known as rewinding) and pruning for full effect. However, the approach can still end up improving training time and outputs a ready-to-use sparse model. This paper sets out to validate the LTH in NLP (and in particular in BERT-style models). Specifically, it asks whether sparse subnetworks of a model pre-trained with Masked Language Modeling (MLM) are sufficient to solve down-stream tasks.
The answer is broadly positive.FindingsUsing a pre-trained initialization, BERT contains sparse subnetworks at non-trivial sparsities that can be fine-tuned in isolation to full performance on a range of downstream tasks.As opposed to previous work, these subnetworks are found at pre-trained initialization and not at random initialization (which was the case with the original LTH work). Rewinding does not significantly improve accuracy on downstream tasks.There are universal subnetworks that transfer to all studied downstream tasks. By further fine-tuning on the same task that was used for pre-training (Masked Language Modeling), the method finds a 70% sparse sub-network that can yield good results on all downstream applications.Follow-up questionsIn practice, the computational cost of fine-tuning is already much less than that of pre-training. How would “fine-pruning” (pruning while fine-tuning with methods such as movement pruning) a model on a downstream task compare to using the LTH-sparse model obtained with MLM (or with the downstream task)?The lack of impact of rewinding is in stark contrast with previous work on networks initialized from scratch and bears closer examination. For example, does this finding hold across fine-tuning learning rates? How much does the value of the selected parameters change over time?Pruning Neural Networks at Initialization: Why are We Missing the Mark?By Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael CarbinThis paper analyzes the performance of several methods to prune networks at initialization, so even before training starts, to save on training time as the network is smaller or sparse (SNIP, GraSP, SynFlow, Magnitude pruning). The methods are allowed to sample the dataset to perform the pruning: this sampling can be considered negligible compared to the computation required for training. They compare the methods to two “upper bounds” representing the performance we can hope to achieve when given access to information that is available after training: Magnitude Pruning and Lottery Ticket Rewinding.FindingsAll proposed methods are better than random pruning, but they are not sensitive to the individual selection weights, only to pruning proportions on each layer. Even worse, selecting the weights with the lowest instead of the highest value of the utility criteria improves performance on some methods (GraSP), which appears to invalidate some of the original works’ claims.The methods are far from competitive with post-training approaches. Moreover, none of the methods is SOTA in all settings: some methods are better at some sparsity levels than others, but this depends on sparsity.The methods yield better results if they are applied after a few training steps rather than right away at initialization, but they need a significant amount of training to approach the proposed “upper bounds”.Follow-up questionsThe problem of finding a “good” subnetwork right at initialization seems somewhat under-defined and possibly overly difficult: which task or set of tasks is used to measure success? Is it even possible to find an ideal sub-networks that works on any possible task a priori? Consequently, it is hard to tell whether the mixed results stem from flaws in the methods or from the task’s inherent difficulty. More insights here would be particularly enlightening.The authors note that the studied methods prune “layers, not weights”, which may explain the surprising results they obtain by inverting the weight selection. 
In that case, would a dense model with adaptive layer sizes following the same patterns work as well?An interesting follow-up direction could be something along the lines of “pruning as soon as possible”. Recent“Bertology” workhas shown that pre-trained models learn different levels of skill in sequence: we are particularly eager to see follow up work that explores the relationship between the emergence of these skills and the optimal pruning strategy.Train Large then Compress, Rethinking Model Size for Efficient Training and Inference of TransformersBy Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. GonzalezThis paper explores the landscape of the computational tradeoffs between transformer models’ sizes and the required numbers of hyper-parameter settings and training steps to achieve a good performance. It finds larger sizes can allow for fewer hyper-parameter settings and training steps and offers some practical advice about choosing a larger initial number of parameters that can later be pruned to, counter-intuitively, reduce the overall computational cost of training a mode when compared to just training a smaller model from scratch.FindingsLarge models are faster to train: they reach a given precision faster when measuring optimizing steps/wall clock time/ flops, even when they are stopped before convergence. Absolute size is more important than depth or width alone, but depth can be more important than width in some cases. The faster convergence usually makes up for the faster execution of smaller models.Large models can be compressed to smaller networks. Training large networks might speed up training but would lead to problems at inference time, as their resource cost is much higher. This work finds that pruning them to networks that end up containing fewer parameters than the original smaller alternatives still yields higher performance. They can be quantized too with less quantization error.Batch size has an influence on training speed. In practice, this means that gradient accumulation should be used for larger models.Follow-up questionsThe results are impressive, but it can still be difficult to get some intuition for why the larger models converge to a better state faster and are easier to prune. The authors mention previous work hinting that deeper networks“promote movement along directions already taken”as a possible explanation, but we are definitely looking forward to reading further analysis.The connection to Lottery Ticket Hypothesis is mentioned only in passing. Further work exploring whether the sub-networks selected by the two approaches are similar in any fashion (such as by considering the Jaccard distance between the sets).Characterizing Bias in Compressed ModelsBy Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily DentonThis paper sheds light on the impact of pruning on neural models for vision and shows that reported top-line accuracies often hide the disproportionate negative impact on certain classes of inputs. 
The paper connects this phenomenon to bias and fairness considerations.FindingsWhile the overall error is largely unchanged when a model is compressed (by pruning and quantization), there is a set of data that bears a disproportionately high portion of the error, with their accuracy falling by up to 50% while the overall performance only decreases by 1%, regardless of what the original accuracy was on the group.These examples (or at least some of them) can be consistently identified by comparing the predictions from a population of compressed models with the predictions from a population of non-compressed models on the same inputs: the examples where the predictions distributions diverge are called Compressed Identified Examples (CIE).CIE often correspond to low-frequency patterns in the inputs. Compression cannibalizes performance on low-frequency patterns in order to optimize the performance on higher-frequency patterns and preserve the overall accuracy.Compression thus amplifies biases of models (amplifying certain errors on certain types of inputs). The authors suggest using CIE as an auditing tool for compressed models: surfacing a tractable subset of the data for further inspection by domain experts to assess this issue.Follow-up questionsThis paper studies are pruning and quantization techniques that are run after training. One question that remains open is whether the models are facing an issue of modeling capacity (i.e., less-biased predictions require more representation power) or whether it is tied to the training procedure. Analyzing methods that reduce model size in the course of training or approaches such asgradual pruningwould shed some light on this question.Would up-weighting the CIE examples in training lead to models that are more robust to compression? Or would we expect to find different CIE groups?The authors suggest using CIE as a diagnostic tool. What can be done with the diagnostic? Are there other calls to action from these insights? For instance, how could we change existing benchmarks on compression to include robustness metrics (i.e., adding another component to the tradeoff size vs. accuracy on CIE groups)?Reading Group DiscussionThe quantitative results obtained on many of the common benchmark tasks by pruning are impressive. At the same time, they also remind us how much we still have to learn about the training dynamics of neural networks. Common wisdom states that “overparameterization helps with optimization”, but we have little theory available to help us understand the phenomenon further, especially in the deep attention-based models that perform so well in NLP.Each of the four papers above offers a different view of this question of modeling capacity vs. optimization vs. generalization.The Lottery Ticket Hypothesis relies on the quality of the initial state of the parameters at least as much as on the evolution of the weight values during optimization. 
As such, the main purpose of increasing the number of parameters would be to exponentially increase the chances of hitting a good sub-network at initialization. Other approaches focus more on how and whether the gradient flowing through the possibly redundant parameters helps optimize the value of the ones we want to keep in the final pruned network: whether they try to evaluate that impact a priori as in the SynFlow algorithm, or are content to simply keep them around for optimization based on their empirically proven efficiency and to prune them at the end of the training.
All of the works outlined above, however, assume that the neural networks are indeed over-parameterized and that they can be pruned without changing their qualitative behavior. The CIE work questions that assumption and finds that pruning does change the behavior of the model in non-trivial ways. This assessment also agrees with some experiments Victor Sanh has run on the task of natural language inference, gradually pruning a model trained on multiNLI and testing it on the HANS dataset. As the sparsity increases, the generalization as measured by the accuracy on the HANS test set decreases and gradually drops to 0, while the performance on the multiNLI test set stays mostly constant. Another experiment along those lines would be to see how much factual knowledge pre-trained language models lose as they are pruned (for example by monitoring closed-book QA accuracy for a model like T5). The question remains whether this loss of generalization and increased bias is a result of the model losing “expressive capacity” as its number of parameters decreases, or whether the fault lies in compression strategies that aren’t quite flexible enough; but the results certainly suggest that a large number of parameters serves as more than a crutch for optimization.
Another question that is somewhat orthogonal to the one above is that of when to optimally prune weights from the model. Pruning early saves computation, but does not benefit from any signal from the target task. Pruning after training can take advantage of additional information, but does not save any computation at training time or allow the parameters to adapt to the new sparsity pattern. Gradually pruning during training seems to provide the best of both worlds, but introduces a new set of hyper-parameters which may make optimization more costly. One should also keep in mind that actual computational gains will depend on the capabilities of current hardware and their ability to take full advantage of shifting sparsity patterns.
We’re very much looking forward to the progress on all of these questions that 2021 is sure to bring!
@HuggingFace: Sparsity and Pruning
We first started investigating ways to make existing models more computationally efficient with DistilBERT, a method which was used to train one of our most popular models. The follow-up on sequence-to-sequence models yielded DistilBart, which also reaches similar performance to its larger counterparts at a lesser cost. Recently, we have also investigated approaches which focus on sparsity more specifically.
Movement Pruning
Most of the works referenced above use magnitude pruning, a widely used pruning strategy which thresholds weight values and simply sets the smallest ones to zero (a short illustrative sketch of this baseline follows this post). In our work on Movement Pruning, led by Victor Sanh, we argue that this approach is less effective in the context of transfer learning and highlight the importance of considering the changes of weights during fine-tuning, as opposed to relying (mostly) on the pre-trained values. Code & hyper-parameters are available here.
Block Movement Pruning
The main drawback of unstructured pruning from a practical point of view is that current hardware can make it quite difficult to take full advantage of the sparsity pattern to accelerate the computation of the network. A compromise that can help alleviate this issue is the use of “semi-structured” sparsity patterns: selecting blocks (typically 32x32) of weights instead of single weights, and running the same kind of optimization methods. Accelerating block-sparse linear algebra is easier, and the pytorch_block_sparse library developed at Hugging Face is our attempt to show that. We are pretty confident more and more solutions for block-sparsity computation will emerge, and we will be working with major actors to enable it. We are already providing some sample networks that take advantage of block sparsity, so you can judge for yourself! Finally, we also work to combine block sparsity with other accelerated sparsity patterns such as NVidia Ampere, to further decrease the memory, the compute and the energy used by the neural networks that will be everywhere in the near future.
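To make the contrast concrete, here is a minimal, illustrative sketch of one-shot, layer-wise magnitude pruning in PyTorch. It is not the code used in any of the papers above: movement pruning instead learns importance scores during fine-tuning and keeps the weights that move away from zero, and practical pipelines usually prune gradually rather than in one shot.

import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.7):
    # Zero out the `sparsity` fraction of weights with the smallest magnitude,
    # independently in each linear layer (biases are left untouched).
    for module in model.modules():
        if isinstance(module, nn.Linear):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            threshold = weight.abs().flatten().kthvalue(k).values
            weight.mul_((weight.abs() > threshold).float())

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
magnitude_prune(model, sparsity=0.7)
nonzero = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"density after pruning: {nonzero / total:.2%}")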
2021-01-12T14:36:43Z
[ { "date": "2021-01-22T19:32:59Z", "reply": "Hi@VictorSanhI noticed that your implementation of movement pruning involves some masked versions of BERT likeMaskedBertForSequenceClassification. Do you know whether these classes will become part of the main library at some point in the future?" }, { "date": "2021-01-22T20:17:13Z", "reply": "Hi! I just wanted to add a wrap-up about our article in this context.Sparsifying Transformer Models with Differentiable Representation PoolingBy Michał Pietruszka, Łukasz BorchmannThe problem of having quadratic complexity w.r.t. the length of the attention mechanism in transformers is approached using pooling operations for reducing the number of word-vectors in between layers. The paper finds that even a hard selection of word-vectors outperforms Linformer and Reformer-based baselines on a long-documents summarization task both in speed and ROUGE scores. However, this hard selection remains suboptimal, as gradients are not propagated to each element in the sequence. This drawback was eliminated by introducing the novel pooling operation, namely The Successive Halving Differentiable Topk. It allows scoring each element in the sequence and selecting a predetermined number of word-vectors that achieved the highest score.FindingsWord-vector pooling allows achieving sub-linear complexity. Keeping the lower sequence length after the pooling is beneficial for the complexity of the next layers, FFNs, and even the decoder’s cross attention. More vectors can be eliminated in subsequent layers, further decreasing complexity as in the Pyramidion model.Massive saving on memory and time (16x and 3.5x respectively) are achieved while outperforming dense baselines at the same time. The time overhead for scoring and pooling is minimal, but the elimination of some information redundancy improves the training significantly.The best models were reusing a part of the saved computations for deepening the network.Follow-up questionsThe proposed Successive Halving Top-k operator is universally applicable. How do you want to use that in other fields? What are specific examples of tasks and model architectures?How can other methods (e.g., Linformer) benefit from keeping the lower number of word-vectors?I hope you will find it interesting!" }, { "date": "2021-01-23T17:40:32Z", "reply": "Hi@lewtun, thanks for the question!Indeed all the linear layers (torch.nn.Linear) are replaced with custom modules that add scores matrices to accumulate the momentum for pruning.As of now, we have no plan to include it more broadly in the transformers library even though it is fairly straight-forward to do it: replace all the torch.nn.Linear and change the forward call. I believe@madlaghas some code to automatically do that on the fly, maybe he would be open to share about that?Victor" }, { "date": "2021-01-23T19:31:38Z", "reply": "Thanks for the answer@VictorSanh! I was able to adapt your implementation to work with a customTrainerand the latest version oftransformersusing the approach you suggested. Nevertheless, I would certainly be interested in seeing how the mapping of BERT → MaskedBERT can be done on the fly" }, { "date": "2021-01-24T10:43:55Z", "reply": "If I may ask a follow-up question: what is the heuristic for picking the number of warmup steps? Is it the first 6% of steps that one sees in the literature (e.g. 
the RoBERTa paper)? The reason I ask is that I want to run some experiments on a subset of SQuAD and am wondering how I should scale down the warmup_steps argument accordingly" }, { "date": "2021-01-24T10:45:39Z", "reply": "Hi @lewtun! I have almost finished my work on an extension of @VictorSanh's work. I tried to make it as generic as possible, to be able to patch any network with only minimal additional work, and to include it in your own training infrastructure. As @VictorSanh mentioned, it won't probably be part of transformers, but a standalone tool. I will be releasing it in the following weeks (hopefully before the end of the month), so I hope you can wait until that point! To be able to patch a network "on the fly" you can use the approach I used in pytorch_block_sparse, using inspection of pytorch modules. (You can see the first results of my work from a few weeks ago here, for example.) François" }, { "date": "2021-01-24T10:52:12Z", "reply": "Thanks a lot for the pointers @madlag! I didn't know about your pytorch_block_sparse repo - this is exactly what I'm looking for. I'll keep an eye out for the release of your tool - do I understand correctly that this will enable people to incorporate e.g. movement pruning in their workflow, or is it more focused on patching networks? Cheers, Lewis" }, { "date": "2021-01-25T13:41:54Z", "reply": "That's a good question! In my experience, having between 5% and 10% of warmup_steps is a good enough target. If your question is specifically about movement pruning, the most important thing (especially if you have a smaller dataset) is not to prune too fast (i.e. having a total number of steps that is sufficiently high). @madlag had some experiments where he just doubled the number of epochs (pruning more slowly) and improved the results I reported in the movement pruning paper. In the general case, some recent papers also echo this experimental trick ([2006.04884] On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines, [2006.05987] Revisiting Few-sample BERT Fine-tuning). Having enough epochs helps stabilize the training, especially for very small datasets. Victor" }, { "date": "2021-01-25T13:50:39Z", "reply": "Thanks a lot for the tips and pointers to the literature @VictorSanh - they're really useful!" }, { "date": "2021-03-08T11:53:09Z", "reply": "madlag: 'I have almost finished my work on an extension of @VictorSanh's work, I tried to make it as generic as possible, to be able to patch any network with only minimal additional work, and to include it in your own training infrastructure. As @VictorSanh mentioned, it won't probably be part of transformers, but a standalone tool. I will be releasing it in the following weeks (hopefully before the end of the month), so I hope you can wait until that point! To be able to patch a network "on the fly" you can use the approach I used in pytorch_block_sparse, using inspection of pytorch modules.' Hi @madlag, is this extension referring to the method of quantizing the layers other than nn.Linear and nn.LSTM, which are not supported by the quantization API of PyTorch by default? I have been experiencing issues while quantizing a GPT2-based model, where most of the layers contain nn.Conv1D, using the Movement Pruning notebook in examples. Thanks, Mrigank" }, { "date": "2021-03-08T15:41:13Z", "reply": "Hi @mriganktiwari, Francois is referring to his brand-new nn_pruning library that extends the work done by Victor et al. on movement pruning and provides an inference speed-up without quantization. If you're having trouble quantizing GPT-2, one idea could be to convert it to ONNX format and then optimize the graph with ONNX Runtime as done here: onnxruntime/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb at master · microsoft/onnxruntime · GitHub. I've generally found the ONNX Runtime supports more operators for quantization, and the notebook I linked to shows you how to do it" }, { "date": "2021-03-08T16:37:43Z", "reply": "Thanks @lewtun for the quick response. I have already quantized my model via ONNX; now I was trying to use pruning as a way of further reducing the size and inference time for my DistilGPT2 model - and thought the Movement Pruning notebook from Hugging Face might be helpful. But I get it that, as of now, the PyTorch quantization API does not support quantizing of the Conv1D module. So I'll look for other ways to prune using the TensorFlow version of the model. Thanks again!" }, { "date": "2021-03-08T17:47:37Z", "reply": "Just an idea: what if you prune the model first with nn_pruning, convert to ONNX and then quantize / optimize with ORT? I'm not sure whether the optimized models produced by nn_pruning (i.e. after the heads/rows/columns with zeroes are removed) can be exported to ONNX format, but this might be a way to get the best of both worlds (a minimal dynamic-quantization sketch follows this thread)" } ]
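For reference on the quantization side of this discussion, here is a minimal sketch of PyTorch dynamic quantization as it is typically applied to BERT-style models (the checkpoint name is only a placeholder). Dynamic quantization covers a fixed set of module types such as nn.Linear and nn.LSTM, which is exactly why GPT-2's Conv1D layers are left untouched and the ONNX Runtime route is suggested above.

import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Only nn.Linear modules are replaced by int8 dynamically quantized versions;
# anything else (e.g. GPT-2's Conv1D) stays in fp32.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp_weights.pt"):
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / (1024 ** 2)

print(f"fp32: {size_mb(model):.1f} MB / int8: {size_mb(quantized):.1f} MB")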
FDA Label Document Embedding
https://discuss.huggingface.co/t/fda-label-document-embedding/3654
9
1,455
Hi everyone, I am looking for any ideas or advice that you guys may have obtained in similar situations. I have been working on an NLP task to cluster medical documents for some time, and whilst I am eager to use transformers to get the best results, through all my efforts it seems that TF-IDF has worked best. I am working with the SIDER side effect dataset, which provides annotated FDA medication labels; an example is here: http://sideeffects.embl.de/media/pdf/fda/17106s032lbl/annotated.html#C0026961_0 I have tried TF-IDF and SciBERT through sentence-transformers, selecting the most relevant passages, but no amazing results yet. Does anyone have any ideas or previous experience? Many Thanks, Chris
2021-02-15T22:01:34Z
[ { "date": "2021-02-15T22:21:43Z", "reply": "Hi@FL33TW00D, I ran into a similar problem last year with TF-IDF and found the following approach gave better results:Encode the documents, either with your favourite Transformer or Universal Sentence Encoder (the latter works really well!)RunUMAPon the embeddings to perform dimensionality reductionCluster withHDBSCANHTH!" }, { "date": "2021-02-15T22:46:37Z", "reply": "Hi@lewtun,Thanks for the response.How did you manage to encode the entire document? Did you perform summarization or did you split it up into chunks and average?I’ve already included steps 2 and 3 in my pipeline, I feel its the representations that are holding me back! Do you think I should make an attempt to somehow include the annotations provided by the dataset into the representations?Many Thanks,Chris" }, { "date": "2021-02-15T23:11:40Z", "reply": "In my case the documents were short emails, most of which could fit in the 512 token limit of USE - I did not try fancy things like summarization / chunking, but the latter would be my first thing to try for a long documentRegarding the annotations, theymighthelp, but you’d have to think carefully about how you plan to combine them with the embeddings before applying UMAP.Perhaps a “quick and dirty” approach would be to experiment with is concatenating the hidden states from multiple layers to see if that improves your document representation (assuming you’re just taking the last hidden state).Alternatively, you could try composing different UMAP models for different embeddings (see e.g.herefor a discussion), but I’ve never tried that so cannot vouch for its utility." }, { "date": "2021-02-15T23:26:10Z", "reply": "@lewtun,This is great, thanks for the insight. Really pleased to see that the version of UMAP you linked supports semi-supervised, which is perfect!Will attempt the quick and dirty approach and report back.Many thanks,Chris" }, { "date": "2021-02-16T22:29:58Z", "reply": "Hi@lewtun,Wanted to report back, did a lot of reading starting with the Universal Sentence Encoder (which I’d foolishly neglected in my previous passes over the literature). It looked like a great starting point but I was really looking for something like SciBERT that had the vocab needed to capture some of the more detailed parts of the data.Landed upon DeCLUTR (gitandpaper) and it looks like we are onto a winner!Many thanks for the input,Chris" }, { "date": "2021-02-17T08:32:10Z", "reply": "Thanks for the pointer to DeCLUTR - I hadn’t heard of it and it looks like a really interesting and simple approach!" }, { "date": "2021-02-17T21:49:45Z", "reply": "Hi@lewtun,Sorry to bother you on this again, just wanted to pick your brain on the optimal distance metric you found for UMAP? On their documentation they use Hellinger but this doesn’t work for negative values:Document embedding using UMAP — umap 0.5 documentationAlso wondered if you’d found a way to select the optimal dimensionality of the UMAP reduction in order to provide HDBSCAN with maximal info.Any insight or papers in this area would be much appreciated.Many thanks,ChrisEdit: On a second search of their documentation I found a much more helpful entry:Using UMAP for Clustering — umap 0.5 documentation, but would still love to hear your findings." 
}, { "date": "2021-02-19T21:26:06Z", "reply": "Hi@FL33TW00Din my use case (emails), I was able to get good results with cosine similarity and 5 dimensions for the embedding space.Although not strictly a metric, cosine similarity is nice because it doesn’t care about the size of the documents - if you need a proper metric then you could try using the L2 normalised Euclidean distance (Cosine similarity - Wikipedia). I wish I could say that I got the dim=5 value through some deep intuition of topology, but it was mostly a form of trial and errorThe other UMAP parameters were left on their default values, which incidentally is similar to those used in the top2vec paper:https://arxiv.org/pdf/2008.09470.pdfI’m not aware of a principled way for deciding the optimal embedding dimension - perhaps you can try a simple gridsearch to see which one works best?" }, { "date": "2021-02-19T22:30:29Z", "reply": "Hi@lewtun,Thanks for coming back to me, this confirms all my own preliminary findings, but will set up a grid search for concrete proof.Many Thanks,Chris" } ]
Likelihood input sequence came from training set
https://discuss.huggingface.co/t/likelyhood-input-sequence-came-from-training-set/3684
0
336
I’m wondering if there’s a way of using a transformer to generate some sort of metric which scores an input sequence based on how similar it is to the training data. My motivation is that I’ve created my own tokeniser and trained a RoBERTa model using a moderately large corpus of IoT device descriptions. The descriptions contain lots of abbreviations, unusual ways of delimiting the text, etc. When I pre-train and then fine-tune a classifier, the performance is good on some datasets and poor on others. I assume the variation is because some datasets aren’t similar enough to the training data. So ideally I’d like to compute P(x1, …, xn), where x1, …, xn is the input sequence; i.e., assuming this sequence is similar to data seen in training, P(x1, …, xn) should be higher than if not. Given that the encoder produces a contextual embedding rather than probabilities, I’m not sure if this is possible though?
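One heuristic that is often used for exactly this (not mentioned in the original post) is pseudo-log-likelihood scoring with the masked-LM head: mask each token in turn and sum the log-probability the model assigns to the original token. Higher length-normalised scores loosely indicate text closer to what the model saw during pre-training; it is a proxy, not a calibrated density P(x1, …, xn). A minimal sketch, with "roberta-base" standing in for the domain-specific checkpoint:

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "roberta-base"  # placeholder: substitute your own pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_log_likelihood(text):
    # Average log P(x_i | rest) obtained by masking one position at a time.
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):              # skip the special tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (len(ids) - 2)

print(pseudo_log_likelihood("temp sensor battery low"))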
2021-02-17T10:06:50Z
[]
Why are embedding / pooler layers excluded from pruning comparisons?
https://discuss.huggingface.co/t/why-are-embedding-pooler-layers-excluded-from-pruning-comparisons/3580
7
781
Hi @VictorSanh, In your Saving PruneBERT notebook I noticed that you only save the encoder and head when comparing the effects of pruning / quantisation. For example, here you save the original dense model as follows:

# Saving the original (encoder + classifier) in the standard torch.save format
dense_st = {name: param for name, param in model.state_dict().items()
            if "embedding" not in name and "pooler" not in name}
torch.save(dense_st, 'dbg/dense_squad.pt')
dense_mb_size = os.path.getsize("dbg/dense_squad.pt")

My question is: why are the embedding and pooler layers excluded from the size comparison between the BERT-base model and its pruned / quantised counterpart? Naively, I would have thought that if I care about the amount of storage my model requires, then I would include all layers in the size calculation. Thanks!
2021-02-10T12:27:21Z
[ { "date": "2021-02-10T22:41:49Z", "reply": "Hey!The QA model actually only needs the qa-head, the pooler is just decorative (it’s not even trained). Start and end of spans are predicted directly from the sequence of hidden state. This explains why I am not saving the pooler.As for the embedding, I’m just fine-pruning the encoder, and the embedding modules stay fixed at their pre-trained values. So I am mostly interested in comparing the compression ratio of the encoder (since the rest is fixed).Hope that makes sense." }, { "date": "2021-02-11T08:44:49Z", "reply": "Thanks for the answer@VictorSanh- this makes perfect sense!" }, { "date": "2021-02-13T21:38:46Z", "reply": "Hi@VictorSanh, I have a follow up question about the Saving PruneBERT notebook.As far as I can tell, you rely on weight quantization in order to be able to use the CSR format on integer-valued weights - is this correct?My question is whether it is possible to show the memory compression benefits of fine-pruningwithoutquantizing the model first?What I’d like to do is quantify the memory reduction of BERT-base vs your PruneBERT model, so that one can clearly see that X% comes from pruning, Y% from quantization and so on.Thanks!" }, { "date": "2021-02-14T14:53:11Z", "reply": "The notebook you are playing with isonlyapplying the weight quantization. It is taking as input the fine-pruned (pruned during fine-tuning) model, so to see the impact of the pruning (compression), simply count the number of non-zero values (in the encoder). That should give you the compression rate of pruning!Victor" }, { "date": "2021-02-15T10:05:44Z", "reply": "Thanks for the clarification!Counting the number of non-zero values is a good idea to get the compression rate, but what I’d usually do to quantify the size on disk (e.g. in MB) is save the encoder’sstate_dictand get the size as follows:state_dict = {name: param for name, param in model.state_dict().items() if \"embedding\" not in name and \"pooler\" not in name}\n tmp_path = Path(\"model.pt\")\n torch.save(state_dict, tmp_path)\n # Calculate size in megabytes\n size_mb = Path(tmp_path).stat().st_size / (1024 * 1024)Now, my understanding is that if I load a fine-pruned model as followsmodel = BertForQuestionAnswering.from_pretrained(\"huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad\")then the model is dense, so I don’t see any compression gains on disk when I save thestate_dict- is this correct?If yes, then do you know if there’s a way to save thestate_dictof a fine-pruned model to disk in a way that reflects the compression gains from a sparse encoder?Thanks!" }, { "date": "2021-02-16T16:38:59Z", "reply": "Ooooh yeah sorry for the confusion.As far as I know (I think I tried), you can use the torch.sparse tensors representations which will decompose a sparse tensor into its CSR format (location of non-zero values + these non-zero values). It should give you a MB compression gain.The reason why I encoded the CSR format “by hand” is that sparse quantized tensors don’t exist yet in PyTorch so I had to do the quantization and the CSR format on top." }, { "date": "2021-02-16T21:10:23Z", "reply": "Thanks for the tip abouttorch.sparse: from thedocsit seems to use the COO format which should also work wellAnd thanks for clarifying the reason for encoding the CSR format by hand - when I find a solution to the torch > 1.5 issue, I’ll expand the text accordingly!" } ]
Debugging the RAG question encoder
https://discuss.huggingface.co/t/debugging-the-rag-question-encoder/3550
2
563
Hi - thank you again for the awesome library & work. I have been trying to repurpose the RAG code to train on the KILT dataset. As I understand it, during the training phase the document encoder (and the index) is fixed; only the query encoder and the generator are fine-tuned. As I train multiple epochs, something curious happens where the question encoder ‘collapses’ into emitting identical predictions regardless of the input. Specifically, out1 and out2 below are identical, even though the input embeddings are different.

emb2 = torch.randn([1, 512, 768])
emb3 = torch.zeros([1, 512, 768])
# encoder
out1 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb2)
out2 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb3)

The way this behavior manifests itself is that the question encoder starts pulling the same wiki entries regardless of the question. In fact, the last hidden states are identical for each token in the sequence. I am curious if this type of behavior rings any bells? One hunch I have is whether mixed-precision training might be the cause. Any direction / feedback will be greatly appreciated, before I take the plunge and dig any further. Thank you! Deniz
2021-02-09T05:21:54Z
[ { "date": "2021-02-09T10:53:51Z", "reply": "Hi ! There’s some discussion about that atRetrieval Collapse when fine-tuning RAG · Issue #9405 · huggingface/transformers · GitHubApparently it can happen in some setups" }, { "date": "2021-02-10T05:55:09Z", "reply": "This is it! Thank you." } ]
Question about maximum number of tokens
https://discuss.huggingface.co/t/question-about-maximum-number-of-tokens/3544
1
5,803
Hi, It is my understanding that all the pretrained models have a fixed maximum number of tokens (512 for bert-base-uncased). Suppose I have texts that, when tokenized, exceed that number (like fictional text running through many paragraphs). I feel that there could be a better way than just using the first 512 tokens of the text. I could increase that limit, but my understanding is that to do that I would have to train the model from scratch and not be able to use the pretrained model. I would like to use the pretrained model. In order to achieve this I have an idea and need some feedback on it: (1) split the text into a list of sentences using a Sentence Boundary Disambiguation tool; (2) tokenize each sentence using the model's corresponding tokenizer; (3) create the new text by keeping the first and last n sentences from the list and then taking a random subset of the rest of the sentences, such that all the tokens add up to 512. This will not restrict the input to only the first 512 tokens and will include random sentences from the middle of the text. Any thoughts on this approach? (A small sketch of this idea follows below.)
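A minimal sketch of the sampling strategy described above, with NLTK's sentence splitter standing in for the sentence-boundary-disambiguation tool and a BERT tokenizer as the placeholder tokenizer:

import random
import nltk  # nltk.download("punkt") is needed once for sent_tokenize
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def sample_sentences(text, n_keep=2, max_tokens=512):
    # Keep the first/last n_keep sentences, then add random middle sentences
    # (restored to their original order) until the token budget is exhausted.
    sents = nltk.sent_tokenize(text)
    if len(sents) <= 2 * n_keep:
        return text
    lengths = [len(tokenizer.tokenize(s)) for s in sents]
    keep = set(range(n_keep)) | set(range(len(sents) - n_keep, len(sents)))
    budget = max_tokens - 2 - sum(lengths[i] for i in keep)   # room for [CLS]/[SEP]
    middle = list(range(n_keep, len(sents) - n_keep))
    random.shuffle(middle)
    for i in middle:
        if lengths[i] <= budget:
            keep.add(i)
            budget -= lengths[i]
    return " ".join(sents[i] for i in sorted(keep))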
2021-02-08T23:47:17Z
[ { "date": "2021-02-09T09:01:55Z", "reply": "Sure, that is an option. You can also first run the text through a summarizer model and use the output as the input for your classifying model. There is no one “right” approach. You can try different things and see what works best for you." } ]
Science Tuesday: MARGE
https://discuss.huggingface.co/t/science-tuesday-marge/685
7
3,730
For this Science Tuesday, I read MARGE and wrote up a brief summary, as well as some interesting questions to discuss. @joeddav @srush @VictorSanh @thomwolf @clem @julien-c @teven @patrickvonplaten @yjernite (only allowed 10 tags)
Pre-training via Paraphrasing (MARGE)
Paper: published June 26, 2020. Authors are from Facebook AI Research: Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer.
Summary
Huge models trained with a masked-LM pretraining objective, or similar, memorize lots of facts in their parameters and don't use an external storage to look up facts they are missing. Human brains have separate systems (it seems) for memorizing facts and generating language, and often google things. In this spirit, the goal of many transformer+retriever models is to decompose memorization of facts and language understanding. MARGE stands for a Multilingual Autoencoder that Retrieves and GEnerates.
The pretraining setup:
- reconstruct the original document by retrieving related documents (from wiki) and trying to regenerate the original
- maximize the likelihood of the original doc conditional on the retrieved docs and relevance scores. This implicitly forces the retriever to learn how to generate good relevance scores.
- There are some tricks related to not scoring all of wikipedia for every example while keeping relevant articles in each batch. Every 10k training steps, they remake their batches by computing the cosine similarity of every pair of docs, and then greedily adding source and target docs to batches such that the pairwise sum of cosine similarities increases the most. This obviously seems hacky, but allows them to get away without approximate NN or some other expensive way to find related docs. This, and the fact that a randomly initialized encoder will give docs with lexical overlap higher-than-random cosine similarity, allows the model to train from random.
The retrieval model, ideally, can focus on getting the transformer all the facts that it needs while the transformer learns to paraphrase, which requires generating fluent language. For finetuning/inference, you don't need to use the retrieval part.
MARGE performs:
- comparably to XLM-RoBERTa, with 20% of the pretraining compute
- comparably to mBART on de-en, en-zh translation
- SOTA on MLSum, a cross-lingual summarization task
Key contributions: (1) Most of the related work is not multilingual. (2) Most of the related work does not zero-shot well? (3) This pretraining objective unifies learning to retrieve and learning to generate. Previous work requires two pretraining stages.
Related Work
REALM: "At a high level, the method goes like this: find the most similar text passages in BERT space, add those passages to the input as additional context, and then make a prediction." - Joe, a few weeks ago. Different because the retriever has to be pretrained separately. REALM also seems to use mostly open-domain QA benchmarks.
RAG (Retrieval-Augmented Generation): Different because mostly focused on knowledge-intensive benchmarks. MARGE can also do well on translation. Starts with bart-large + DPR, whereas MARGE pretrains end-to-end.
Questions somebody could answer:
- Does MARGE outperform BART on English-only benchmarks like GLUE / XSum summarization? Why did they only show multilingual benchmarks?
- When will there be code?
- How long does a forward pass take?
- What are the consequences of not using retrieval during inference? Does the model not "know" anything?
Higher level:
- Is translation "knowledge intensive"?
- How could we measure hallucinations?
- The authors suggest that we should use a pre-training objective that is as close as possible to the downstream task. The Pegasus paper also suggests this. Where else could this idea be applied?
Also these two talks are good:
https://slideslive.com/38929793/beyond-bert (Mike Lewis at ACL)
https://www.youtube.com/watch?v=KTQPWoQ7Ol8 (Luke Zettlemoyer at AKCD)
2020-08-11T22:51:57Z
[ { "date": "2020-08-13T16:55:53Z", "reply": "From Mike Lewis, the 1st author:We didn’t try very hard, but from what I saw MARGE lags a little behind BART on monolingual English tasks. It’s not too surprising, because I think having to be a good multilingual model just dilutes the capacity a bit. Similarly, XLM-R isn’t quite at RoBERTa level.code coming soonthey also retrieve from CC-News, not just wikipedia.“We’re going to look at retrieval during inference, but haven’t run that yet. Qualitatively, I think it’s a bit less prone to hallucination than BART because it (somewhat) knows that it doesn’t know anything. That means we get surprisingly literal zero-shot translations, because it tends not to make too much stuff up.”" }, { "date": "2020-08-13T18:45:25Z", "reply": "Hadn’t read about this. Cool stuff!Every 10k training steps, they remake their batches by computing the cosine similarity of every pair of docs, and then greedily adding source and target docs to batches such that the pairwise sum of cosine similarities increases the most.You seem to imply that this is not an expensive operation, but it sounds very expensive: calculate vector for doc, cos sim betweenalldata pointsgreedily. Isn’t that super computationally expensive?" }, { "date": "2020-08-14T02:34:41Z", "reply": "In the paper, they separated the dataset into many shards, each of which consists of similar documents, so that they can compute cosine similarity between the documents within the same shards. More generally, instead of shards you can use faiss to cluster the embeddings and compute kNN efficiently.Also, the forward pass of the embedding costs a fraction of each iteration of training in terms of the computes, so computing the embeddings isn’t expensive, either." }, { "date": "2020-08-14T07:07:36Z", "reply": "Thanks, I am aware of faiss. We use it in our work, too as an alternative (and addition) to edit distance. It is very fast, but considering the size of the data set this will still take quite some time. If you want to compareall inputs to all other inputsat every x steps, that is still an expensive look up. But if I understand your comment correctly, documents are only compared within the same shards and the shards are created based on some preprocessing that clusters similar documents together. So all inputs are not compared with all the others, but only with those in their own shard." }, { "date": "2020-08-14T07:30:24Z", "reply": "Right. But using faiss for every documents without using shards is actually still fast enough.Say, the training dataset contains 128 billion tokens. If your batch size is 2M tokens and you update every 10k iters, then you update the knn every 20B tokens. Since the embedder forward pass is about 6x (2x from using half as many layers and 3x from using forward only vs. forward+backward) faster than each iteration per document, the cost of getting embeddings costs as much as the training for 10k iters (128B/6 ~ 20B).Since the training dataset contains 128 billion tokens, and each document consists of 128 tokens (512 in the paper, so even fewer). Then, you have 1 billion vectors, and as in knn-lm you can use a subset of them for computing the centroids and then search (with GPUs) after quantization as usual. 
If you take a look at the original paper of faiss, you can see that the compute required for constructing the kNN graph of 1 billion vectors is not much … actually no more than about 10 GPU-hours with a single V100, much smaller than what it takes to train the SOTA LM on 20 billion tokens, so it's still fast enough relative to the training. Depending on your perspective, you may argue that this still costs too much or, for example, that the batch size is too large in this case. My point is that the frequency of updating the kNN is merely a hyperparameter that can be adjusted so as to make the kNN part reasonably small. Since it's not expensive in the case I suggested (which I believe is a reasonable one), MARGE isn't inherently expensive. You can just make the cost reasonable by investigating the trade-off and finding a reasonable compromise (a tiny faiss example follows this thread)." }, { "date": "2020-08-14T08:13:23Z", "reply": "Interesting! Thanks for the elaborate explanation. I can only encourage and be happy about more efficient models." }, { "date": "2021-02-08T22:52:24Z", "reply": "@BramVanroy @AranKomatsuzaki I wonder if we can use the same strategy to fine-tune the RAG retriever in an end-to-end manner, since currently we only fine-tune the doc encoder." } ]
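For readers who want to try the faiss route mentioned above, here is a tiny example of exact cosine-similarity kNN over document embeddings (random vectors as stand-ins). At the billion-vector scale discussed in the thread you would switch to a quantized / inverted-file index instead of the exact flat index.

import faiss
import numpy as np

d = 768                                              # embedding dimension
docs = np.random.rand(10_000, d).astype("float32")   # placeholder document embeddings

faiss.normalize_L2(docs)        # L2-normalise so inner product equals cosine similarity
index = faiss.IndexFlatIP(d)    # exact search; use e.g. an IVF/PQ index at much larger scale
index.add(docs)

scores, neighbours = index.search(docs[:5], 4)  # 4 nearest neighbours for 5 query docs
print(neighbours)               # the first hit of each query is the query itself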
RAG for FEVER Dataset
https://discuss.huggingface.co/t/rag-for-fever-dataset/3541
0
407
I had a few queries on running RAG for FEVER. (1) In the fine-tuning step, as I understand from the paper, for FEVER they would first regenerate the claim. So in the seq2seq format, would our "train.source" and "train.target" just be the same claim? (2) And then how do we actually make the model classify / give a FEVER label? (3) In addition, after fine-tuning, how do we run the code in inference mode, where we can either generate answers for question answering or a label for FEVER? Any help would be much appreciated. Cheers, Shraey
2021-02-08T16:22:52Z
[]
Transfer learning to explore tasks' information requirements?
https://discuss.huggingface.co/t/transfer-learning-to-explore-tasks-information-requirements/3506
0
387
Continuing the discussion fromACL 2020 highlights – Joe:ACL 2020 highlights – Joewhat kinds of datasets are useful for intermediate training and what downstream tasks they have a positive (or negative) effect on.This kind of question fascinates me. If intermediate training on Task A allows you to train target Task B more successfully, or if A and B as target tasks are affected in similar ways by each of several intermediate tasks, I’d strongly suspect that some of the same information is relevant to both A and B, and that the link between their respective successes (or failures) is the later layers of the encoder learning (or not learning) to fish that information out of the many other combinations of input features in the middle layers of the model.I see it as complementary to probing experiments. When you determine that a word’s encoding predicts some linguistic or psycholinguistic object – its lexical semantics, its position in a parse tree, its reading time, the probability that a human reader will notice that its agreement morphology is wrong – you’re giving an exact description of one kind of information that can be found in a text when you know its language. Transfer learning experiments are (at least initially) working with far more opaque descriptions: “Information sufficient to (mostly) recreate human-like readings of anaphors,whatever that might be.” But I’m fascinated by the potential for the two approaches to meet in the middle: Using the probes as tasks that (theoretically) isolate one particular kind of knowledge, to dissect the “whatever that might be” and find that humanlike anaphora resolution depends heavily on X kind of information, lightly on Y kind, moderately on Z, and there’s this residue we haven’t explained yet, but we can see what other tasks it’s relevant to and take a guess.A Transformer is, of course, not “wetware in disguise.” Not even structurally, let alone experientially. Finding the particular information that lets it imitate humans in some task is no guarantee that humans rely on the same information. If you want to uncover the cognitive particulars, how we do the task on an algorithmic level, BERT won’t tell you. But it can show us the shadow that the human algorithm casts onto the computational level, educate our guesses, help us prioritize our hypotheses. We’ll have to figure out whether X helps to predict human performance because we use X, or because X reflects a quirk of our processing that also affects our task performance, or what. But studying how this “hyperintelligent octopus” of ours gets around the atoll could at least indicate some of the currents that we too swim in.(Sincere apologies to Bender and Koller for abusing their metaphor.)On the techniques for studying transfer learning, I’ve had some discussions lately about the possibility of adversarial/amnesic intermediate tasks – using the training process to burn certain informationoutof the representations. Thinking about how to make sure that that happens, as opposed to just building a defiantly contrary task head, or a clueless one, or making the encoder all-around worse by flattening the representations. I have a bit of discussion about some of that in a feature request over on Github, and if you’ve read this far you’ll probably have some good ideas about it, so consider yourself invited! 
It’s at github.com/huggingface/transformers: "Adversarial/amnesic heads", opened 02:34 AM, 04 Feb 21 UTC, by eritain (feature request):
# 🚀 Feature request Task heads that backpropagate deliberately reversed gradients to the encoder. A flag requesting this behavior when constructing a task head. ## Motivation Transfer learning experiments lend themselves to questions about the extent to which two tasks rely on the same information about a word/sentence, and to experiments probing whether and how word encodings contain/correspond to syntax trees, lemmas, frequencies, and other objects of linguistic/psycholinguistic study. A difficulty is that a pretrained model, without fine-tuning, may already encode certain information too thoroughly and accessibly for intermediate training to make much of a difference. For example, BERT's masked language modeling objective produces word encodings in which syntax information is readily accessible. Intermediate training on a syntax task requires training a task head to extract this information, of course, but it will result in very little reorganization of the encoder itself. Adversarial training, such as the amnesic probing of Elazar et al. 2020, can avoid this pitfall. Intermediate training can aim to burn particular information *out* of the encodings, and measure how much this impairs trainability of the target task. Strictly reversing the sense of the training data won't do it though; getting all the answers exactly wrong requires just as much domain knowledge as getting them all right does. And randomizing the labels on training data may just result in a feckless task head, one that discards useful information passed to it from the encoder, rather than affecting the encoder itself. Ideally, then, the task head would be trained toward correctly reproducing gold-standard labels, but would flip all its gradients before backpropagating them to the shared encoder, thus training it not to produce precisely the signals that the task head found most informative. The following work by Cory Shain illustrates flipping gradients in this way (although it's not applied to shared-encoder transfer learning, but rather to development of encoders that disentangle semantics from syntax): https://docs.google.com/presentation/d/1E89yZ8jXXeSARDLmlksOCJo83QZdNbd7phBrR_dRogg/edit#slide=id.g79452223cd_0_19 https://github.com/coryshain/synsemnet ## Your contribution I am deeply unfamiliar with pytorch, unfortunately, and utterly ignorant of tensorflow. I can't offer much.
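The gradient flipping described in the feature request above is usually implemented with a small autograd function, the classic gradient reversal layer from domain-adversarial training. A minimal PyTorch sketch (not tied to any particular transformers API; the encoder output and head below are dummies):

import torch
from torch.autograd import Function

class GradReverse(Function):
    # Identity in the forward pass; multiplies the gradient by -lambda on the way back.
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# An adversarial/amnesic head sees the shared encoder output through grad_reverse,
# so training the head to predict property X pushes the encoder to discard X.
encoder_out = torch.randn(8, 768, requires_grad=True)
head = torch.nn.Linear(768, 3)
loss = head(grad_reverse(encoder_out)).sum()
loss.backward()   # encoder_out.grad now carries the reversed signal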
2021-02-05T00:19:30Z
[]
Model or Dataset available for classifying a grammatical sentence?
https://discuss.huggingface.co/t/model-or-dataset-available-for-classifying-a-grammatical-sentence/3423
1
1,622
I want to be able to classify whether an input text is a complete sentence or not. The closest accurate definition of ‘being complete’ is whether the text is a grammatical sentence. Whether a sentence is ‘complete’ can also depend on its context, but I want to focus on sentence-like text as input for now. Examples of a complete sentence: “You can write using one of the following styles”, “You can write”, “He writes code”. Examples of an incomplete sentence: “You can write using”, “You can write using one”, “He writes code for”. I found this package for grammar checking which I am going to try: language-tool-python (PyPI) - checks grammar using LanguageTool. I am wondering if there is an ML/DL solution for this problem. Is there a dataset or available model for this that you know of?
2021-01-28T19:38:13Z
[ { "date": "2021-02-03T05:21:03Z", "reply": "Hi@emadg,I don’t think language tools is best way to go here, because language tool will just check grammer and grammer does not predict wheather this sentence is complete or not. Here are the rules that are implemented in Language tool, you can check if there are any rules which will help you to classify a sentence as complete or not.https://community.languagetool.org/rule/list?sort=category&order=ascFor ML Approach, I think you can try using a Language model, You can trying looking for things which end a sentence and find probaibily of those words like (Punctuations, conjuction words) in the end. Lesser probability means very less chances that sentence will end there.PS: I will add If I found a concreate method to solve this." } ]
Generating coherent related text with generative model (GPT2 etc.)
https://discuss.huggingface.co/t/generating-coherent-related-text-with-generative-model-gpt2-etc/3417
0
521
I am trying to generate sentences based on some context from one of the datasets in the datasets library. I've tried fine-tuning on some portion (50% train) of the dataset, but I am still not able to generate coherent, related text. For instance, if some noun is present, the generated sentence should most likely be related to that noun or perform some form of coreference resolution; it needs to know the verb from the context and talk about it in the generated sentence. I do not see anything like that happening; the generated sentences are in fact very divergent from the context. I'd appreciate it if you can suggest some methods or papers (with code) for tackling this problem. Thanks.
2021-01-28T15:32:41Z
[]
RoBERTa trained on NSP
https://discuss.huggingface.co/t/roberta-trained-on-nsp/3133
0
629
I want to perform experiments with a RoBERTa model that has been trained on the MLM+NSP task. In the paper, NSP was discarded because of lower performance, and such a model wasn't made publicly available by the authors. Does anyone have good suggestions about whether it is available in some form, or an implementation that can replicate it in the same manner (with pre-training)? I know transformers provides support, but there's not much room for error due to restricted GPU access time, so if both the model weights and an implementation aren't available, I'd really appreciate it if someone could provide a working training routine with transformers.
2021-01-12T03:58:54Z
[]
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
https://discuss.huggingface.co/t/switch-transformers-scaling-to-trillion-parameter-models-with-simple-and-efficient-sparsity/3137
1
1,589
Interesting new paper from Google improving upon T5: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" (arXiv.org). From the abstract: "In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers..."
2021-01-12T08:13:46Z
[ { "date": "2021-01-20T15:37:20Z", "reply": "Just to add to the previous post… Google Brain recently unveiled a language model of 1.6 trillion (1.6E+12) parameters with performance equal to or better than the SOTA on several NLP tasks. It surpasses the 175 billion (1.75E+11) parameters of GPT-3. The mastodon was made possible by the development of a new attention-based architecture (switch transform) that divides training data and parameters between a multitude of sub-models or mix of experts connected by trainable gating. Despite its gigantic size, this text-to-text model would have been 7 times faster to train on the C4 (Colossal Clean Crawled Corpus, 750 GB) using the same amount of computation. The original article:https://bit.ly/2LQzsmJ, the source code:http://bit.ly/390j0ZY" } ]
Multilingual token, phrase and sentence representations for text similarity
https://discuss.huggingface.co/t/multilingual-token-phrase-and-sentence-representations-for-text-similarity/3167
0
488
Hello all, For some research of mine, I am looking for the best way to get sentence representations, as well as phrase and word representations, to be used for text similarity. Specifically, I want to compare the representations of translated sentences, as well as their aligned individual words and word groups (phrases). I could just use something like mT5 or XLM-R and use the final hidden states of the subword units and pool them to create these representations; however, my fear is that they are not well-suited for a text similarity task. This issue was also raised by the people over at SentenceTransformers in their paper, who propose to finetune LMs on STS and other tasks to get sentence representations that are actually meaningful in a text similarity context. I could try these models, but as far as I know they never do any token similarity tests - only sentence similarity. So if you have any ideas, perhaps some previous research that you read, or a new model that was actually evaluated on segment and token similarity, then I'd love to hear it! Thanks in advance, Bram
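For completeness, the pooling baseline described above (mean-pooling subword hidden states from XLM-R) looks roughly like the sketch below; the same attention-mask trick can be applied to the token span of an aligned word or phrase to get sub-sentence representations. As the post notes, such raw representations may not be well calibrated for similarity without task-specific fine-tuning.

import torch
from transformers import AutoModel, AutoTokenizer

name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (batch, seq, dim)
    mask = batch.attention_mask.unsqueeze(-1)              # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)            # mean over real tokens

en, de = embed(["The cat sits on the mat.", "Die Katze sitzt auf der Matte."])
print(torch.nn.functional.cosine_similarity(en, de, dim=0).item())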
2021-01-13T08:17:58Z
[]
Classification problem difficulty when going from 3 classes to 5 classes?
https://discuss.huggingface.co/t/classification-problem-difficulty-when-going-from-3-classes-to-5-classes/3037
1
358
This question is conceptual in nature.Suppose I’m working on a text classification problem where I have 3 labels. To make the problem more concrete, let’s say I’m working on sentiment analysis with ground-truth labelspositive,neutral, andnegative. I am measuring accuracy and macro-F1.Now I’d like to make another data set with 5 ground-truth labels:very positive,positive,neutral,negative, andvery negative. Intuitively, I would think that the 5-label classification problem is more difficult than the 3-label problem, but the only “proof” I can think of is that a random guess is correct only 1/5 of the time with 5 labels but a random guess is correct 1/3 of the time with 3 labels.Is there a more formal machine learning argument for why a 5-label problem is more difficult than 3-label? How about an N-label problem to an M-label problem where M > N?I’m willing to brush up on Vapnik–Chervonenkis theory if that’s needed (hopefully not).
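For concreteness, here is a tiny simulation (not from the thread) of the random-guess baseline argument above: uniform guessing over N classes lands at roughly 1/N accuracy and macro-F1, which is one way to quantify why the 5-label task starts from a lower floor.

# Small illustrative simulation of the random-guess baseline for N-class problems.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
for n_classes in (3, 5):
    y_true = rng.integers(0, n_classes, size=100_000)
    y_pred = rng.integers(0, n_classes, size=100_000)  # uniform random guesses
    acc = accuracy_score(y_true, y_pred)
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    print(f"{n_classes} classes: accuracy ~{acc:.3f}, macro-F1 ~{macro_f1:.3f}")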
2021-01-03T23:55:42Z
[ { "date": "2021-01-11T20:22:17Z", "reply": "Any help, intuition, hints, pointers, or references would be appreciated." } ]
Text to Text Transformer - T5
https://discuss.huggingface.co/t/text-to-text-transformer-t5/3008
2
1,090
Hello, I am trying to understand how T5's SentencePiece tokenization impacts a custom dataset. I know T5 does not use lossless training (mT5 does), but I'm unsure what impact it may have on any custom tokens in my dataset. Can someone please chime in if you have some insight? Thanks
2020-12-31T13:53:55Z
[ { "date": "2021-01-03T21:53:58Z", "reply": "What do you mean by “lossless” training?" }, { "date": "2021-01-04T20:06:14Z", "reply": "Sorry I meant lossless Tokenization. Please refer to section 3.1 in link belowarxiv-vanity.comSentencePiece: A simple and language independent subword tokenizer and...This paper describes SentencePiece, a language-independent subword\ntokenizer and detokenizer designed for Neural-based text processing,\nincluding Neural Machine Translation. It provides open-source C++ and\nPython implementations for subword units....from the paper:We call this design lossless tokenization, in which all the information to reproduce the normalized text is preserved in the encoder’s output. The basic idea of lossless tokenization is to treat the input text just as a sequence of Unicode characters. Even whitespace is handled as a normal symbol. For the sake of clarity, SentencePiece first escapes the whitespace with a meta symbol _ (U+2581), and tokenizes the input into an arbitrary subword sequence, for example:" } ]
Shortformer: Better Language Modeling using Shorter Inputs
https://discuss.huggingface.co/t/shortformer-better-language-modeling-using-shorter-inputs/3007
0
467
Interesting paper focusing on shorter context windows and improving training speed! (shortformer.pdf, 349.75 KB, from ofir.io)
2020-12-31T10:02:40Z
[]
Don't Stop Pretraining BART
https://discuss.huggingface.co/t/dont-stop-pretraining-bart/2986
1
898
Hi, I would like to try the approach suggested in "Don't Stop Pretraining: Adapt Language Models to Domains and Tasks" (link) for BART. I have my own dataset, but there are 2 things that are still unclear to me. (1) I believe I should start with BartForConditionalGeneration, as that is the LM model. Is that right? (2) Can anyone provide more details on the noising algorithm that was used to train the model? The paper is pretty vague about it; these are the only details I found: a number of text spans are sampled, with span lengths drawn from a Poisson distribution (λ = 3); 30% of the tokens in each document are masked, and all sentences are permuted.
2020-12-28T20:25:08Z
[ { "date": "2020-12-29T06:35:38Z", "reply": "Hi@ErpaYes,BartForConditionalGenerationis the LM model.Currently seq2seq pre-training examples are not available in transformers.FairSeqhas the implementation of Bart denoising dataset, so that might help, You can find ithere" } ]
Pre-training with Lamb optimizer
https://discuss.huggingface.co/t/pre-training-with-lamb-optimizer/1647
7
4,200
Hello everyone, has anyone experimented with LAMB optimizers in HF? I tried using https://github.com/cybertronai/pytorch-lamb but I was only marginally able to increase the batch size, and the training loss curve was rather flat. If you've used LAMB, would you please share some tips? How did you initialize it? I am not sure what to use in the optimizer_grouped_parameters list of dictionaries that wraps the model parameters. Also, I've seen some other people use a different LR scheduler with LAMB. Thanks in advance.
2020-10-20T09:12:51Z
[ { "date": "2020-10-26T05:53:19Z", "reply": "Hi vblagoje,I am new to transformer. I have been playing the hugging face model for several month and I think I am thinking to made a some small changes on the Bert Model and pretrain it from scratch. I saw you discussing on another post several days ago about the pretraining process. I was wondering if you know the pretraining repository made by Nvidia?github.comDeepLearningExamples/PyTorch/LanguageModeling/BERT at master ·...master/PyTorch/LanguageModeling/BERTDeep Learning Examples. Contribute to NVIDIA/DeepLearningExamples development by creating an account on GitHub.I think they implemented the lamb optimizer, NSP objective and wrote code to better utilized multiple gpu during distributed training. I still haven’t use it yet because I have some trouble with installing docker on the remote machine I am working on. I was just wondering if you already seen this repository or tried it, or if you have any advice on pretraining bert from scratch?" }, { "date": "2020-10-26T10:31:42Z", "reply": "Hey@zeyuyun1,Yes, I am aware of the NVidia repo, however, I haven’t used their scripts. I would like to use the HF library to train BERT from scratch using HF Trainer class, HF datasets project, and helper classes likeDataCollatorForNextSentencePrediction. NVidia scripts are excellent but noisy, with lots of engineering details explicitly mixed with the BERT specifics. These engineering details should be hidden; using the above classes and projects is a step in the right direction to minimize the engineering details.And yes you are right; they use FusedLamb from apex optimizers package. I was able to integrate FusedLamb as well. I am currently tuning the multi-node multi-GPU distributed training and once I am done, I’ll share the script. But yes, so far on a single instance I can train BERT tiny or BERT mini without any major issues.Hope this answers some of your questions. I’ll share the scripts I am working on once I have them training BERT base on multi-node multi-GPU distributed training setup.Cheers,Vladimir." }, { "date": "2020-10-27T00:52:15Z", "reply": "Thank you so much! I’ll look into the training process using HF Trainer too." }, { "date": "2020-11-18T10:16:40Z", "reply": "vblagoje:training loss curve was rather flatI have tried the same repo and the same case happened. the loss curve went flat after a few iterations. were you able to lay your hand on any other implementations?" }, { "date": "2020-11-18T14:05:03Z", "reply": "Hey guys, I am using apex.optimizers FusedLamb and it’s working well. I’ll publish my work in about a week or two. I can now train bert-mini on lambdalabs 8x Tesla V100 single machine in about 3 hours and 40 min. The above-mentioned NVidia training trains the same model in about 2 hours and 30 min. My goal right now is to match the performance of equivalent Google/NVidia baked models on various LM tests (Glue etc) and then I’ll focus on closing the training speed performance.Best,Vladimir" }, { "date": "2020-12-24T01:06:19Z", "reply": "Hi Vladimir,Would you mind sharing your training code? I still didn’t figure out how to implement FusedLamb." }, { "date": "2020-12-28T17:24:09Z", "reply": "Hey there, I’ll share all the details in a week or so. Until I really wrap this up note that I used thisscriptto create sharded datasets for bert training. After dataset preparation, I used thisscriptto train BERT. There are still a few small bugs to iron out but it works quite well. 
I can train bert base in about 8-9 hours on 8gpu machine using Pytorch distributed training.HTH,Vladimir" } ]
About the encoder and generator used in the RAG model
https://discuss.huggingface.co/t/about-the-encoder-and-generator-used-in-the-rag-model/2959
2
827
Hi, I have questions about the RAG model. In the paper, the query encoder is DPR and the generator is BART. My questions are: (1) Is the generator the full BART model or just the decoder part of BART? (2) If I implement RAG with the encoder part of BART as the query encoder and the decoder part of BART as the generator, does that make sense w.r.t. the RAG concept? That setup is more intuitive to me, so why do they use a 'heterogeneous' setting? Thanks.
2020-12-25T09:25:26Z
[ { "date": "2020-12-25T14:15:03Z", "reply": "Hi,generator is Bart encoder-decoder. If you have a rag model, you can access it bymodel.generatorRAG’s question-encoder is not the same as RAG’s generator’s encoder … This really may be confusing, so let me try to explainquestion encoder is for encoding “question” to retrieve “documents” (or so-called “contexts”) from retriever.Then, retriever will concatenate “contexts” with “question” ; this concatenated texts are the new input.This new input will be encoded by Bart’s encoder to generate answer via Bart’s decoderHope this helps!" }, { "date": "2020-12-25T18:08:49Z", "reply": "Hi, thanks for the reply! I get it better." } ]
MRPC Reproducibility with transformers-4.1.0
https://discuss.huggingface.co/t/mrpc-reproducibility-with-transformers-4-1-0/2884
0
498
Hi, I always get lower precision following the MRPC example from text-classification in transformers, what’s the reason?python run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/$TASK_NAME/and get precision like the followings, while the document says it’s 0.88 averaged.12/18/2020 17:16:38 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 17:16:38 - INFO - __main__ - eval_loss = 0.5318707227706909 12/18/2020 17:16:38 - INFO - __main__ - eval_accuracy = 0.7622549019607843 12/18/2020 17:16:38 - INFO - __main__ - eval_f1 = 0.8417618270799347 12/18/2020 17:16:38 - INFO - __main__ - eval_combined_score = 0.8020083645203595 12/18/2020 17:16:38 - INFO - __main__ - epoch = 3.0 12/18/2020 16:45:29 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 16:45:29 - INFO - __main__ - eval_loss = 0.47723284363746643 12/18/2020 16:45:29 - INFO - __main__ - eval_accuracy = 0.8063725490196079 12/18/2020 16:45:29 - INFO - __main__ - eval_f1 = 0.868988391376451 12/18/2020 16:45:29 - INFO - __main__ - eval_combined_score = 0.8376804701980294 12/18/2020 16:45:29 - INFO - __main__ - epoch = 3.0 12/18/2020 16:34:37 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 16:34:37 - INFO - __main__ - eval_loss = 0.571368932723999 12/18/2020 16:34:37 - INFO - __main__ - eval_accuracy = 0.6838235294117647 12/18/2020 16:34:37 - INFO - __main__ - eval_f1 = 0.8122270742358079 12/18/2020 16:34:37 - INFO - __main__ - eval_combined_score = 0.7480253018237863 12/18/2020 16:34:37 - INFO - __main__ - epoch = 3.0GPU: GTX 1080transformers: 4.1.0Torch: 1.6.0python: 3.8Server: Ubuntu 18.04
2020-12-19T06:24:38Z
[]
Using transformers (BERT, RoBERTa) without embedding layer
https://discuss.huggingface.co/t/using-transformers-bert-roberta-without-embedding-layer/2807
8
4,024
I’m looking to train a RoBERTa model on protein sequences, which is in many ways similar to normal nlp training, but in others quite different.In the language of proteins, I have 20 characters instead of the normal 26 characters used in english (it is 26 right? :D), so that is rather similar. The big difference is that you don’t really combine the characters in proteins to actual words, but rather just keep each character as a distinct token or class.Hence essentially my input to the transformer model could just be a list of numbers ranging from 0-19. However that would mean that my input would only have 1 feature if I did that, and I’m not sure a transformer could work with that?I’m thinking of just doing a onehot encoding of these characters, which would give me 20 input features. However this is of course still very low in comparison to how normal transformers are trained, where d_model is somewhere in the range of 128-512 if I understand correctly.Does anyone have any experience with anything like this? any good advice for how it is most likely to work?
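As a possible starting point, here is a minimal sketch (my own illustration, with an assumed vocabulary of 20 amino acids plus a handful of special tokens) showing that no one-hot encoding is needed: the model's embedding layer maps each integer token id to a d_model-sized vector, so the effective input width is hidden_size regardless of how small the vocabulary is.

# Minimal character-level RoBERTa config sketch (sizes are illustrative assumptions).
import torch
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=25,            # 20 amino acids + <s>, </s>, <pad>, <mask>, <unk>
    hidden_size=256,          # d_model; independent of the tiny vocab size
    num_hidden_layers=6,
    num_attention_heads=8,
    intermediate_size=1024,
    max_position_embeddings=514,
)
model = RobertaForMaskedLM(config)

dummy_ids = torch.randint(5, 25, (2, 128))   # batch of integer token-id sequences
out = model(input_ids=dummy_ids, labels=dummy_ids)
print(out.loss, out.logits.shape)            # logits: (2, 128, 25)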
2020-12-13T18:16:24Z
[ { "date": "2020-12-13T21:34:43Z", "reply": "Hey,I’d recommend taking a look at this repo:GitHub - agemagician/CodeTrans: Pretrained Language Models for Source codeby@agemagician. This repo uses transformer models for protein sequences if I understand it correctly.Also, taking a look at those models:huggingface.coRostlab (Rostlab)We’re on a journey to advance and democratize artificial intelligence through open source and open science.might help. Not sure if there is a notebook on doing protein sequence LM, maybe@agemagicianhas a good pointer by chance" }, { "date": "2020-12-13T22:00:24Z", "reply": "Hi@tueboesen,Yes, it will work. It can give you a very close results compared to MSA methods, sometimes even better results. If you combine it with MSA, it will even give you a better results compared to MSA methods alone.We have trained (Transformer XL, XLNet, Bert, Albert, Electra and T5) for Uniref100 and BFD dataset. I would recommend to simply use on of these models, because it requires tremendous amount of computing power to reach good results.You can find them here:GitHubGitHub - agemagician/ProtTrans: ProtTrans is providing state of the art...ProtTrans is providing state of the art pretrained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using Transformers Models. - GitH...huggingface.coRostlab (Rostlab)We’re on a journey to advance and democratize artificial intelligence through open source and open science.You can find more details on our paper:bioRxiv – 21 Jul 20ProtTrans: Towards Cracking the Language of Life’s Code Through...Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we...Facebook also trained Roberta using Unrief50 dataset:GitHubGitHub - facebookresearch/esm: Evolutionary Scale Modeling (esm): Pretrained...Evolutionary Scale Modeling (esm): Pretrained language models for proteins - GitHub - facebookresearch/esm: Evolutionary Scale Modeling (esm): Pretrained language models for proteinsUnfortunately, we don’t have a notebook for training from scratch, but you can find more details to replicate our results here:github.com/agemagician/ProtTransSource code of the modelsopened11:49PM - 10 Oct 20 UTCclosed08:01PM - 11 Oct 20 UTChadimquestionDo you have the source code of the various pre-trained models?@patrickvonplaten:You meant :GitHubGitHub - agemagician/ProtTrans: ProtTrans is providing state of the art...ProtTrans is providing state of the art pretrained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using Transformers Models. - GitH...Not :GitHubGitHub - agemagician/CodeTrans: Pretrained Language Models for Source codePretrained Language Models for Source code. Contribute to agemagician/CodeTrans development by creating an account on GitHub.ProtTrans: Provides the SOT pre-trained models for protein sequences.CodeTrans: Provides the SOTpre-trained models for computer source code." }, { "date": "2020-12-16T15:35:15Z", "reply": "agemagician:gy and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontieWow this is an amazing response, thank you so much for this. 
I will need some time to digest it all, but this is exactly what I need!" }, { "date": "2020-12-16T16:21:56Z", "reply": "Is there a way for me to use any of the models to return probability distributions?More specifically I would like to see how exactly the model has learned and test it out a bit. To this effect I would love to be able to feed it a protein sequence where I have masked out some of the amino acids, and then have it return a probability distribution for the full returned protein.I’m sure this is possible, after all this is how the model was trained in the first place, but I’m just a bit overwhelmed by all the models, so I haven’t managed to figure out how to do this." }, { "date": "2020-12-16T17:37:43Z", "reply": "You can find an answer to your question here:https://github.com/agemagician/ProtTrans/issues/5" }, { "date": "2020-12-16T19:19:29Z", "reply": "Hmm that still doesn’t quite do it unless I’m missing something.This does allow masking of a sequence, but you can only mask 1 amino acid in the sequence, and it doesn’t give the actual probabilities on output, but only the top5 probabilities for that single masked amino acid." }, { "date": "2020-12-16T19:55:38Z", "reply": "You can send “top_k” parameter to “fill-mask” method, to return more/all tokens.Check here:github.comhuggingface/transformers/blob/1c1a2ffbff2052100053cddb3a87d45fb9d210ca/src/transformers/pipelines.py#L1184\"\"\"def __init__(self,model: Union[\"PreTrainedModel\", \"TFPreTrainedModel\"],tokenizer: PreTrainedTokenizer,modelcard: Optional[ModelCard] = None,framework: Optional[str] = None,args_parser: ArgumentHandler = None,device: int = -1,top_k=5,task: str = \"\",):super().__init__(model=model,tokenizer=tokenizer,modelcard=modelcard,framework=framework,args_parser=args_parser,device=device,binary_output=True,If it is still doesn’t fit your use-case, then you have to implement it your self." }, { "date": "2020-12-16T21:00:52Z", "reply": "Something like that could be a good starting point for you:colab.research.google.comGoogle Colaboratory" } ]
What are some recommended pretrained models for extracting semantic feature on single sentence?
https://discuss.huggingface.co/t/what-are-some-recommended-pretrained-models-for-extracting-semantic-feature-on-single-sentence/2698
4
1,445
Hi, I am more of a CV person and recently got interested in doing an NLP project. One part of this project might involve extracting sentence-level semantic representations from a pretrained model. In computer vision, a standard way to extract features of an image or a video snippet is to use a ResNet pretrained on ImageNet or an I3D pretrained on Kinetics, respectively. I want to do a similar thing in the NLP domain. I wonder if there are some recommended models, pretrained on specific datasets, for me to try? As far as my limited understanding goes, models trained on datasets which aim to tell whether two sentences are semantically equal could be a direction (e.g. QQP, STS-B), but those need a pair of sentences, and my case is just feeding one sentence (or one block of sentences), not a pair. Any suggestion? Thanks!
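One hedged illustration of the baseline idea suggested in the replies below: mean-pool a BERT-style model's last hidden states over non-padding tokens to get a single sentence vector, roughly the NLP analogue of globally pooled ResNet features. The checkpoint name here is just an example; for similarity tasks, models tuned for sentence embeddings (e.g. via sentence-transformers) usually work better.

# Mean-pooled sentence embeddings from a generic BERT checkpoint (illustrative sketch).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["A man is eating food.", "Someone is having a meal."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (batch, seq, hidden)

mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over real tokens
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(sim.item())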
2020-12-08T14:32:32Z
[ { "date": "2020-12-12T04:59:22Z", "reply": "Hi! IMO, Bert could be comparable to ResNet as the baseline. (you can uselast_hidden_statevariable ofBertModeljust like the global-pooled features of ResNet) Then, newer models like Roberta and many more could be comparable to EfficientNet etc." }, { "date": "2020-12-12T08:29:45Z", "reply": "Seems like you are looking for the Sentence Transformers library which trains Siamese BERT (etc.) networks on NLI data. That means that you can indeed pass one sentence to get a sentence embedding. They also have a few finetuned models that use cross-encoders instead. Those are obviously slower but lead to better performance on downstream tasks such as STSb.github.comUKPLab/sentence-transformersSentence Embeddings with BERT &amp; XLNet. Contribute to UKPLab/sentence-transformers development by creating an account on GitHub." }, { "date": "2020-12-12T16:04:53Z", "reply": "Thanks for reply. And it seems sentence-BERT , LaBSE, and Universal Sentence Encoder are other some choices for sentence embeddings." }, { "date": "2020-12-14T03:57:06Z", "reply": "Benchmark-wise speaking, I have some new idea : sinceSuperGLUEis one of the most difficult (multi-)task on language understanding. And since T5 is the current SOTA on this benchmark so we can also try embedding vectors from T5.Previously, this may not be straightforward to extract (since T5 is encoder-decoder), but the latest master version of Huggingface now containsT5 encoder’s only modelwhich we can directly extract the vector of the pretrained model. (Thanks to@agemagician) … So this is interesting choice IMO" } ]
BORT: Optimal Subarchitecture Extraction for BERT
https://discuss.huggingface.co/t/bort-optimal-subarchitecture-extraction-for-bert/2562
1
539
Hi guys, wondering if anyone has read the new paper from the Alexa team regarding BERT size reduction: "Optimal Subarchitecture Extraction For BERT" (arXiv.org). From the abstract: "We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as 'Bort', is..." Code: GitHub - alexa/bort (repository for the paper). If anyone has any thoughts on it or would like to discuss, please comment here. Thanks
2020-12-04T18:34:00Z
[ { "date": "2020-12-05T04:16:15Z", "reply": "Super interesting, thanks for sharing!! Perhaps@VictorSanhcan give us the best commentsWondering if the same technique can be efficiently used for the giant models like T5-11B and GPT-3" } ]
Training generative models based on "rewards"
https://discuss.huggingface.co/t/training-generative-models-based-on-rewards/2576
0
288
Suppose we want to train BART/T5. Typically these models are trained assuming that we have direct access to gold outputs. I am interested in a slightly different setting: suppose you don't have the gold output, but you have access to a black box (a reward function) that tells you how "correct" the current generation is. Does anyone have thoughts on how this could be done?
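One possible direction (a rough sketch, not a definitive recipe) is REINFORCE-style training: sample an output from the current model, score it with the black-box reward, and scale the negative log-likelihood of the sampled sequence by that reward. The reward_fn below is a hypothetical stand-in for the black-box scorer; in practice a baseline (e.g. self-critical training) and some KL regularization toward the original model are usually needed to keep this stable.

# Rough REINFORCE sketch for reward-driven fine-tuning of BART (illustrative only).
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def reward_fn(texts):
    # hypothetical placeholder for the black-box "correctness" scorer
    return torch.tensor([float(len(t)) / 100.0 for t in texts])

inputs = tokenizer(["some source text"], return_tensors="pt")

# 1) sample a candidate output from the current policy
sampled = model.generate(**inputs, do_sample=True, max_length=32)
sampled_labels = sampled[:, 1:]              # drop the decoder start token

# 2) score it with the black-box reward
reward = reward_fn(tokenizer.batch_decode(sampled, skip_special_tokens=True))

# 3) log-probability of the sampled sequence under the model
outputs = model(**inputs, labels=sampled_labels)
log_probs = torch.log_softmax(outputs.logits, dim=-1)
token_logp = log_probs.gather(-1, sampled_labels.unsqueeze(-1)).squeeze(-1)
mask = (sampled_labels != tokenizer.pad_token_id).float()
seq_logp = (token_logp * mask).sum(dim=-1)

# 4) REINFORCE loss: push up the log-prob of high-reward samples
loss = -(reward * seq_logp).mean()
loss.backward()
optimizer.step()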
2020-12-04T22:23:52Z
[]
EMNLP Picks from the Hugging Face Science Team
https://discuss.huggingface.co/t/emnlp-picks-from-the-hugging-face-science-team/2424
1
4,056
The Hugging Faceteam had a great time attending EMNLP the other week. Virtual conferences are tricky, but I personally have come to enjoy some aspects of it like the pre-recorded presentations and gather.town mingling. And not having to travel is a plus, tooLast week a few of us on the science team tried to each select 4-5 presentations we’d recommend others on the team to check out. I’ve compiled our suggestions and included them here for those of you that are interested in our picks & very brief comments. Included are suggestions from myself,@VictorSanh,@yjernite, and@canwenxu(including a couple repeats).There was an incredible amount of high-caliber work and we couldn’t share all but a few that we thought our team might be interested in, so free to respond with any suggestions (or comments) of your own!Victor’s picks (@VictorSanh)BLEU might be Guilty but References are not InnocentPaper:https://arxiv.org/abs/2004.06063Presentation:https://slideslive.com/38938647Discuss a new reference generation method for calculating more reliable automatic scores (including BLEU) that correlate better with human judgement. + a dataset of references (included in sacrebleu i believe)Learning from Task DescriptionsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.105.pdfPresentation:https://slideslive.com/38939344Introduce a new dataset for structured task-oriented evaluation on unseen tasks (0-shot settings) conditioned on a description of the task in natural language. (nice discussion, less convinced by the dataset itself)Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)Paper:https://www.aclweb.org/anthology/2020.emnlp-main.16/Presentation:https://slideslive.com/38939219Model can learn to represent linguistic features with little pretraining data, but require orders of magniutde more data to learn to prefer linguistic generalization over surface ones (but it is slow…)Reformulating Unsupervised Style Transfer as Paraphrase GenerationPaper:https://www.aclweb.org/anthology/2020.emnlp-main.55/Presentation:https://slideslive.com/38938942Propose simple method based on fine-tuning pretrained language models on automatially generated paraphrase data + discusses weaknesses in automatic metrics of style transfer + release of 15M dataset of style transferthe 5th one: I found the talk of Emmanuel Dupoux at Conll very informativeYacine’s picks (@yjernite)ETC: Encoding Long and Structured Inputs in TransformersPaper:https://www.aclweb.org/anthology/2020.emnlp-main.19Presentation:https://slideslive.com/38938951/etc-encoding-long-and-structured-inputs-in-transformersHas local attention and a one global attention token per sentence which is trained with a contrastive loss similar to ICT.A* Beam SearchPresentation:https://slideslive.com/38939414/bestfirst-beam-searchA* algorithm is not quite as easy to batch as regular beam search, but leads to better and more diverse n-best.F2-Softmax: Diversifying Neural Text Generation via Frequency Factorized SoftmaxPaper:https://www.aclweb.org/anthology/2020.emnlp-main.737/Presentation:https://slideslive.com/38938686Pretty simple idea: groups tokens into bins of equal probability mass for a hierarchical softmax so the model can focus on choosing between candidates with the same prior. 
Leads to a nice improvement on human evaluation and generation diversity metrics.Towards Reasonably-Sized Character-Level Transformer NMT by Finetuning Subword SystemsComments:https://www.aclweb.org/anthology/2020.emnlp-main.203Presentation:https://slideslive.com/38938871Pre-trains on BPE and fine-tunes on full character decomposition to get the model to train faster.Towards Debiasing NLU Models from Unknown BiasesPaper:https://www.aclweb.org/anthology/2020.emnlp-main.613Presentation:https://slideslive.com/38938901Related to@VictorSanh’s recent paper: the “biases” tend to show up in easy-to-learn examples, so the model down-weight examples that are classified correctly early in training.Canwen’s picks (@canwenxu)Experience Grounds LanguagePaper:https://www.aclweb.org/anthology/2020.emnlp-main.703.pdfPresentation:https://slideslive.com/38938907This may be the paper that defines the future direction of NLP. What should a model learn and what ability should a model have? You can find a good guess from this paper.Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less ForgettingPaper:https://www.aclweb.org/anthology/2020.emnlp-main.634.pdfPresentation:https://slideslive.com/38938976Yes we know that fine-tuning a pretrained language model can bring the problem of forgetting.Mixoutis a valid solution but this EMNLP paper proposes an easy-to-use optimizer to resolve the problem.Do sequence-to-sequence VAEs learn global features of sentences?Paper:https://www.aclweb.org/anthology/2020.emnlp-main.350.pdfPresentation:https://slideslive.com/38939119It’s a little surprising to see this title cuz we all thought of course VAEs do. However, through well-designed experiments, the authors reveal the other side of this claim.Pre-Training Transformers as Energy-Based Cloze ModelsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.20.pdfPresentation:https://slideslive.com/38939095It’s a really cool idea and it makes sense mathematically. Though the results are modest, there’re definitely more to explore.BERT-of-Theseus: Compressing BERT by Progressive Module ReplacingPaper:https://www.aclweb.org/anthology/2020.emnlp-main.633.pdfPresentation:https://slideslive.com/38938938Self-promoting. It’s a really neat idea that you can compress a model by simply replacing their components. No additional loss function needed.My picksLearning from Task DescriptionsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.105.pdfPresentation:https://slideslive.com/38939344@VictorSanhmentioned this one but I want to include it as well. They create a new dataset trying to generalize from one set of tasks to another using only task descriptions w/o training data. It’s an ambitious idea to try to formalize and evaluate but I appreciated the work. I’m actually taking a break from adding their dataset “zest” toDatasets to compile this post, so it should be up very soon.Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a StartPaper:https://www.aclweb.org/anthology/2020.emnlp-main.660Presentation:https://slideslive.com/38939094Another approach to “universal” NLP w/ cross-task generalization. The idea here is to pose various tasks as one task (natural language inference) enabling transferability between tasks. 
Incidentally, the first author is the same who introduced theNLI-based zero-shotclassification approach which is roughly the same as the one we now use in ourzero-shot pipeline & API.Text Classification Using Label Names Only: A Language Model Self-Training ApproachPaper:https://www.aclweb.org/anthology/2020.emnlp-main.724Presentation:https://slideslive.com/38938946Similar to the “zero-shot” setup ofSchick et al.'s PET andYin et al.'s entailment-based approach (though they refer to it as “weak supervision” here). A nice difference from previous work is that they create groups of synonyms to a class label which can be used as a class representation instead of the class name alone. Another demonstration of self-training with unlabeled data only working well for classification.Experience Grounds LanguagePaper:https://www.aclweb.org/anthology/2020.emnlp-main.703.pdfPresentation:https://slideslive.com/38938907Really nice kinda philosophical paper about computational understanding of language. They lay out different “world scopes” to help think about different levels of understanding/experience. Reminiscent in some ways of Bender & Koller’s ACL paper this year,“Climbing towards NLU”and their superintelligent octopus.
2020-12-02T15:01:10Z
[ { "date": "2020-12-02T16:37:14Z", "reply": "Especially like the linguistic shout-outs in there like Warstad et al. It’s always nice to see authors go back and see what (generativist) linguist theory has been saying for perhaps over sixty years and find ways to link that with how LMs “learn” grammar. I’ll be having some time off soon, can’t wait to catch up with all these latest developments! Thanks for the distillation, (pardon the pun)!" } ]
Meta Persona an abstract adaptive neural construct
https://discuss.huggingface.co/t/meta-persona-an-abstract-adaptive-neural-construct/2208
0
710
TODO: Add description of the dataset here_DESCRIPTION = “Meta-Persona Dataset Object”\This new dataset is designed to solve this great NLP task and is crafted with a lot of care.[image|575x321, 75%](up load://nfoAskmJEO25xRecIlzlrXyeLh2.png)This is an “abstract-adaptive” card like dataset object I call ‘meta-persona’A meta-persona is an abstract neural construct like a fine tuned set of configurations including but not limited to:-dataset+metrics,-cache or full reset-scripts for split, separate, concatenate, or coalescence-and last but not least – a neural net search engine…The key here is how easily identifiable these configurations will be considering the neural construct would be a personified interactive adaptive UI card or webapp sized social network profile with a few quick change configurations that can be made on the fly without hard-coding.For instance; one quick change could be a drop down menu or toggle switch for:-use cache?or-clear cache full reset?The image below would be a good reference guide for the design of the adaptive card UIimage1021×507 126 KBEvaluation Metrics could be listed as a sequence of icons/emojis just to save space…Each 'Meta Persona is an abstract neural construct of datasets/virtual corpora/custom script/configurations.Fine tuned configurations like this:[image|690x381, 75%](up load://jMLQxZqyFRCklEv8vBo3YuTjIqd.jpeg)Keep in mind 1 persona is not enough to reach the goal. Multiple personas working in tandem will be needed. In essence each persona is single member in a team/group/deck or role playing game party that when taken as a whole represents;NLUideal use case scenariodesirable relationship simulationhold up…[image|598x375, 75%](up load://AcEhBtO1YZnJ9U5E5PenEjxzihr.jpeg)Lets just call the end goal – the front end of all this NLP pipeline[image|690x117](up load://fAN37rUtoyNnFIgDxG0PTP1uN44.jpeg)Lets call itan “Avatar” of and for the user.Anyways…It is important to mathematically or arbitrarily impose compatibility issues between personas.Some personas may have an attitude and are (strict) in the ‘magic’ role which prevents the user from choosing certain ‘support’ personas. I don’t mean that literally just used that as an example to explain the incompatibility of personas – like a give and take or a balance to strike. If the user does select incompatible personas it will be counter-productive as in diminishing returns either as a direct consequence or an arbitrary imposed consequence.On the flipside some personas are fundamental or happy go lucky personified as someone or something that is always ‘happy’ and gets along with everything…so open not strict. Does that make sense?If the designer cards are too small or too stylish something like this image below would suffice I think?. Including playful descriptions of the neural construct or just a succinct on the nose descript like this…[image|690x433, 75%](up load://ktAC5DyMttnRJQEbgnVnFTLUbHW.jpeg)For my avatar I would need;-Sherlock Holmes-Einstein or hawkings-Groot-Alan Watts-Jackie Chan----- probably should set a limit to 5 maybe? ------5 meta personas to generate 1 avatar or ideal use case. Each of the personas hand crafted meticulously built as abstract adaptive neural constructs all backed by a growing library of datasets which in turn are backed by .map() or Apache Arrow table data…FYI I am this…[image|514x441, 75%](up load://yZTlKWViA4XXcnZq8eT89o5fCqT.jpeg)Cute Baby GrootWhat better way to build a neural construct out of fictional character?It is going to be a lot of hard work. 
Indeed the whole huggingface team is grinding on adding datasets to the library right now as we speak.However…instead of fine tuning the fine tunes and potentially overfitting the whole construct what if building neural constructs was more about an artistic expression then some standard deviation.I suspect It may require a combination of figurative and declarative expressions to generate puzzles for users to assemble.Thus completing the loop.In essence the act of giving users a puzzle to put together puts the emphasis on the user to create the avatar by selecting their own preferred combination of personas into something the users can name and save/load like a VM state.I suggest that building meta-persona as abstract personifications of neural constructs is more valuable right now. I say get creative fine tune a construct for the sake of creative expression first and foremost and continue until there is a huge diverse library of personified neural constructs. I am uncertain if this s a technological breakthrough or perhaps a killer app like PR and marketing breakthru…This new user limitation threw off my whole pitch though…
2020-11-25T18:27:03Z
[]
Adding learnable coefficients for multi-objective losses?
https://discuss.huggingface.co/t/adding-learnable-coefficients-for-multi-objective-losses/2191
2
740
I am running a multi-objective problem where I compute three losses and then sum them up. For each loss, I want to have a learnable coefficient (alpha, beta, and gamma, respectively) that will be optimized.

optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)
for batch in dl:
    optimizer.zero_grad()
    result = model(batch)
    loss1 = loss_fn_1(result)
    loss2 = loss_fn_2(result)
    loss3 = loss_fn_3(result)
    # How to optimize alpha, beta, and gamma?
    loss = alpha*loss1 + beta*loss2 + gamma*loss3
    loss.backward()
    optimizer.step()

Specific questions: (1) Should I even have the coefficients alpha, beta, and gamma? The optimizer will minimize, so they'll all go to 0.0, right? (2) If having those coefficients is a good idea, how can I prevent them from going to 0.0? Someone told me to use regularization, but what does that mean in this case? (3) How do I declare alpha, beta, and gamma to be learnable by AdamW?
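One common answer (not from this thread) is the homoscedastic-uncertainty weighting of Kendall et al. (2018): give each loss a learnable log-variance s_i, weight the loss by exp(-s_i), and add s_i back as a regularizer so the weights cannot collapse to zero. A minimal sketch reusing the names (model, dl, loss_fn_*) from the snippet above:

# Learnable per-task weights via uncertainty weighting: loss = sum_i exp(-s_i)*L_i + s_i
import torch

log_vars = torch.nn.Parameter(torch.zeros(3))          # one log-variance per loss term
optimizer = torch.optim.AdamW(
    list(model.parameters()) + [log_vars], lr=2e-5, eps=1e-8
)

for batch in dl:
    optimizer.zero_grad()
    result = model(batch)
    losses = torch.stack([loss_fn_1(result), loss_fn_2(result), loss_fn_3(result)])
    # the +log_vars term keeps exp(-log_vars) from driving every weight to zero
    loss = (torch.exp(-log_vars) * losses + log_vars).sum()
    loss.backward()
    optimizer.step()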
2020-11-24T20:11:55Z
[ { "date": "2020-11-25T07:21:30Z", "reply": "YesTheoretically, we have to make a constraint like alpha+beta+gamma = 1. To change this to unconstrained optimization, we have to use Lagrange multiplier to the constraint equation, and that will be the regularization formula your friend talked about e.g. you putlambda1*alpha, lambda2*beta and lambda3*gammainto loss function. I believe it complicates the problem even more since finding optimum values of lambdas are difficult even theoretically.2.5 Sorry not answer you Q3, but I think the practical way is to treat alpha, beta and gamma as hyperparameters and simply optimize them via grid search.In this case, simply split some of your training set to validation set, and define the metric on it. The “validation metric” has to be specified to be suitable to your problem (e.g. error, f1, spearman or any others) — you can get some ideas on metrics by finding some Kaggle competitions that is similar to your problem and see their metrics.Select hyperparaeters that optimize your validation metric." }, { "date": "2020-11-25T18:27:02Z", "reply": "Theoretically, we have to make a constraint like alpha+beta+gamma = 1Thank you.Last night I was thinking of doingloss = alpha*loss1 + beta*loss2 + (1.0 - alpha - beta)*loss3which seems to be equivalent to what you wrote above." } ]
Inference on constrained devices
https://discuss.huggingface.co/t/inference-on-constrained-devices/2157
0
292
Hi there, I am looking for any resources or previous work on getting Hugging Face models to run inference on constrained devices. Since I read on your DistilGPT2 page that it "Runs smoothly on an iPhone 7", I've been curious. Has anyone managed to get inference working on something like a Raspberry Pi? Many thanks, Chris
2020-11-21T13:48:48Z
[]
What are some popular datasets for domain adaptation in NLP
https://discuss.huggingface.co/t/what-are-some-popular-datasets-for-domain-adaptation-in-nlp/1931
1
470
Hello, I have some experience with domain adaptation in CV but none in NLP. Can someone recommend some popular datasets in NLP for DA? Even better for me if there are any in the Hugging Face datasets library. Thanks!
2020-11-07T17:30:18Z
[ { "date": "2020-11-12T13:13:37Z", "reply": "cc@yjernitemaybe here (and Angie which should be also on the forum by the way!)" } ]
Adding features to a pretrained language model
https://discuss.huggingface.co/t/adding-features-to-a-pretrained-language-model/770
3
3,863
I've often thought about use cases where you think of word or sentence features that you know must be helpful to the system - features that you would typically use in an SVM or a shallow network. I would like to know whether those features can still add to the performance of a pretrained language model. So rather than just fine-tuning the language model, what are good ways to integrate custom features into an LM without pretraining from scratch? My guess is that you can take the output of the LM and add a custom head on top that also takes in these other features, so the output of the LM basically serves as another set of features. This does not seem ideal, though, since the final connections might be too shallow. I imagine that a better approach is possible that still fine-tunes the LM alongside training the network that the custom features are part of. Any thoughts or "tried and true" methods out there?
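As a hedged sketch of the concatenation approach described in the reply below (the names and dimensions here are made up for illustration): take the [CLS] representation from BERT, concatenate the hand-crafted feature vector, and feed the result to a classification layer, fine-tuning everything jointly.

# Joint fine-tuning of BERT plus hand-crafted features (illustrative sketch).
import torch
from transformers import AutoModel

class BertWithExtraFeatures(torch.nn.Module):
    """Concatenate hand-crafted features (e.g. a TF-IDF vector) with the [CLS] output."""
    def __init__(self, model_name="bert-base-uncased", n_extra=1000, n_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # n_extra: dimensionality of the hand-crafted feature vector (assumed size)
        self.classifier = torch.nn.Linear(hidden + n_extra, n_labels)

    def forward(self, input_ids, attention_mask, extra_features):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] token representation
        combined = torch.cat([cls, extra_features], dim=-1)
        return self.classifier(combined)                  # BERT and head train jointly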
2020-08-19T15:58:02Z
[ { "date": "2020-08-20T10:37:15Z", "reply": "Hi Bram,One of my students studied exactly this phenomenon in a recent submission to SemEval: “UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information.” (https://arxiv.org/abs/2008.08547)Excerpts from the paper:We hypothesise that deep learning models, especially those that use pre-trained embeddings and so are trained on a small number of epochs, can benefit from corpus level count information. We test this on Sub-Task A using an ensemble of BERT and TF-IDF which outperforms both the individual models.For sub-task B, we hypothesise that these sentence representations can benefit from having POS information to help identify the presence of a target. To test this hypothesis, we integrate the count of part-of-speech (POS) tags with BERT. While this combination did outperform BERT, we found that a simpler modification to BERT (i.e. cost weighting, Section 3.5) outperforms this combination.And in terms of how the model was built:This ensemble model is created by concatenating the sentence representation of BERT to the features generated by the TF-IDF model before then using this combined vector for classification. In practice, this translates into calculating the TF-IDF vector for each sentence and concatenating it to the corresponding BERT output. This vector is then fed to a fully connected classification layer. Both BERT and the TF-IDF weights are updated during training." }, { "date": "2020-10-27T17:01:16Z", "reply": "Have you solved the question ? I have similar demands." }, { "date": "2020-10-28T20:24:49Z", "reply": "Hi@guoziyuanWe’ve since built on the previous work in the paper “Incorporating Count-Based Features into Pre-Trained Models for Improved Stance Detection” (https://arxiv.org/pdf/2010.09078.pdf). The code for this work is available athttps://github.com/Anushka-Prakash/RumourEval-2019-Stance-Detection/This work outperforms a RoBERTa baseline and achieved state-of-the-art results in stance detection by solving these problems (from paper):Pre-trained models, such as BERT, are often trained for between 2 and 5 epochs during fine-tuning whereas simpler feature based models need to be trained for much longer. Our experiments show that a simple ensemble of these models results in over-fittingThere are likely to be too many features to directly ensemble the raw features with pre-trained models (resulting in too much noise), a loss of important - task specific - information when using dimensionality reduction methods, and too few output classes to use only the outputs of a feature based model in an ensemble (lack of information)." } ]
Bart-base rouge scores
https://discuss.huggingface.co/t/bart-base-rouge-scores/683
11
1,706
Has anyone finetuned bart-base on the xsum or cnn summarization task and is willing to report the ROUGE score they got? I just got 15.5 for xsum, which feels low, since bart-large can get to 22-ish. @astariul @valhalla @VictorSanh?
2020-08-11T19:00:38Z
[ { "date": "2020-08-12T05:44:41Z", "reply": "@sshleifer, could it be due to theadjust_logitsissue ? Just a guess but as I posted there, after modifying theadjust_logits_during_generationBLUE-4 score for my model went from 13.09 to 19.14 forbart-base" }, { "date": "2020-08-14T14:34:55Z", "reply": "@sshleifercould you also try usingbosasdecoder_start_token_idand modifyingadjust_logits_during_generationto return logits as is instead of forcingbos? If you also get bump in ROUGE score we can confirm the issue. Thanks !" }, { "date": "2020-08-15T18:00:41Z", "reply": "Possible suggestion that saves on the re-training could be to check the perplexity values and compare to paper" }, { "date": "2020-08-25T15:46:01Z", "reply": "I got 16.6 ROUGE 2 on XSUM, in 3 epochs/ 6hrs" }, { "date": "2020-08-25T16:24:38Z", "reply": "bart-base doesn’t seem to be good then, in my other seq2seq experiment t5-small performed similar/better to bart-base" }, { "date": "2020-09-05T18:22:20Z", "reply": "Made a google doc to aggregate experiment results. Please add any interesting results!docs.google.comSeq2Seq Finetuning LeaderboardPlease state what you did and what you got Pegasus-large on xsum: \tReleased model: 46.87/24.46/39.15 (Rouge1/Rouge2/RougeL) finetuned: {\"rouge1\": 46.8248, \"rouge2\": 23.9987, \"rougeL\": 38.6751, \"n_obs\": 11333, \"runtime\": 4228.170863628387,..." }, { "date": "2020-10-27T12:08:43Z", "reply": "How can I change the adjust_logits_during_generation ? thanks" }, { "date": "2020-10-27T14:33:43Z", "reply": "By editing the code!" }, { "date": "2020-10-27T15:26:26Z", "reply": "Can you provide a example ? I saw the source code ofadjust_logits_during_generationand it directly returns the logits." }, { "date": "2020-10-27T16:32:19Z", "reply": "github.comhuggingface/transformers/blob/master/src/transformers/modeling_bart.py#L1100):return {\"input_ids\": None, # encoder_outputs is defined. input_ids not needed\"encoder_outputs\": encoder_outputs,\"past_key_values\": past,\"decoder_input_ids\": decoder_input_ids,\"attention_mask\": attention_mask,\"use_cache\": use_cache, # change this to avoid caching (presumably for debugging)}def adjust_logits_during_generation(self, logits, cur_len, max_length):if cur_len == 1 and self.config.force_bos_token_to_be_generated:self._force_token_id_to_be_generated(logits, self.config.bos_token_id)elif cur_len == max_length - 1 and self.config.eos_token_id is not None:self._force_token_id_to_be_generated(logits, self.config.eos_token_id)return logits@staticmethoddef _force_token_id_to_be_generated(scores, token_id) -> None:\"\"\"force one of token_ids to be generated by setting prob of all other tokens to 0 (logprob=-float(\"inf\"))\"\"\"scores[:, [x for x in range(scores.shape[1]) if x != token_id]] = -float(\"inf\")in the futuregit grep adjust_logits_during_generation" }, { "date": "2020-10-27T16:56:12Z", "reply": "thanks" } ]
Load/save HF block sparse model
https://discuss.huggingface.co/t/load-save-hf-block-sparse-model/1646
1
394
Hey everyone, I am exploring the https://github.com/huggingface/pytorch_block_sparse project. One of the issues that popped up almost immediately is loading a saved "sparsified" model. Let's say you sparsified RoBERTa using the example provided. Now that the model has been sparsified (its linear layers replaced with BlockSparseLinear modules), how can I load the previously saved model using the HF ecosystem? All I can think of is that I again need to create a RoBERTa model with uninitialized weights, sparsify it, and then load the weights with model.load_state_dict(torch.load(PATH)). Am I overlooking something obvious?
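A rough sketch of that reload workflow, under the assumption that the same sparsification settings are re-applied before loading; sparsify() and the checkpoint path are placeholders for whatever pytorch_block_sparse patching routine and filename were used originally.

# Recreate the sparse module structure first, then load the saved weights (sketch).
import torch
from transformers import RobertaForMaskedLM

def sparsify(model):
    # placeholder: re-apply the same BlockSparseLinear replacement used at training time
    return model

model = RobertaForMaskedLM.from_pretrained("roberta-base")
model = sparsify(model)                                    # recreate the sparse structure
model.load_state_dict(torch.load("sparse_roberta.pt"))     # hypothetical checkpoint path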
2020-10-20T09:07:15Z
[ { "date": "2020-10-21T13:47:50Z", "reply": "No mechanism in place for loading as of now, which is ok. I sparsed the model again and loaded the weights manually via model.load_state_dict(torch.load(PATH))." } ]
Resume Training / Finetune a language model and further finetune a classifier
https://discuss.huggingface.co/t/resume-training-finetune-a-language-model-and-further-finetune-a-classifier/1616
1
1,226
Hi, I would like to fine-tune a powerful classifier based on a pre-trained language model. As we know, the typical approach is to fine-tune a classifier directly from a pre-trained model. What I am wondering is this: if I first fine-tune the pre-trained model with a language-modeling objective on DS1 (a typical text dataset), or resume training from the last checkpoint, and then further fine-tune this newly adapted model on another dataset DS2 for classification, would this be a redundant effort compared to a pipeline that just fine-tunes the pre-trained model on DS2? I would like to hear your thoughts. Thank you.
2020-10-19T00:51:45Z
[ { "date": "2020-10-19T15:15:28Z", "reply": "Hi, there are papers indeed indicate that “multi-steps” finetuning is helpful. Seethis paperfor one example ." } ]
Hyperparameter for distil bert
https://discuss.huggingface.co/t/hyperparameter-for-distil-bert/1624
0
663
I'm reproducing the GLUE results from the paper "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", and I currently get about 80 accuracy on the MNLI-m dev set, while the score in the paper is 82. Here are the hyperparameters I'm using: epochs = 3, lr = 2e-5, batch size = 32 × 4 cards. Can anybody share the hyperparameters for the experiments in this paper?
2020-10-19T11:30:27Z
[]
Transformer for Abstractive Summarization for Chats Based on Performance
https://discuss.huggingface.co/t/transformer-for-abstractive-summarization-for-chats-based-on-performance/731
3
1,935
Hi, I have some general questions about transfer learning with pretrained models for a summarization problem. I've been trying to build a Seq2Seq model for summarizing chats between two agents. I've tried the T5 model (pretrained and with transfer learning), but the results were not satisfactory: the summarized text missed the context entirely after training on the custom dataset. Can someone please help me understand which model works better for summarizing chats, or any pre-processing steps that should precede this task? Thanks in advance.
2020-08-17T13:01:40Z
[ { "date": "2020-08-17T15:26:49Z", "reply": "Hi@anant0308! Happy to discuss possible approaches, but what works best (and whether you can expect good results at all) will depend on what your fine-tuning data looks like: for example, how long are the chats? do you have any gold summaries for your chats? do you have examples of summaries without corresponding chats? how many examples do you have? how are you representing speaker turns?Keep in mind that summarizing chats is quite a different task from summarizing news text: if the pre-training data lacks any kind of dialogue inputs, then the model will have to learn how to interpret multi-turn structure from scratch, which will probably be your main challenge." }, { "date": "2020-08-18T05:33:22Z", "reply": "Hey@yjernite, the primary challenge as you mentioned is to identify the speaker and hence interpret the structure. The dataset is somewhat similar to (SAMsum corpus -https://arxiv.org/src/1911.12237v2/anc/corpus.7z).The following are the key points that might help -The summaries are there.The chats are similar to normal texts exchanged between two users.There are around 15K-20K training examples.Currently, the speaker is represented as is. (Based on Name)Kindly suggest the improvements for better implementation of abstractive summarization. Following are my key queries -Is there any preferred model for chat summarization?What might be the pre-processing steps for improvement in performance?How should speakers be represented as it was found that the contexts might be changed because of a speaker name being present in a sentence (ambiguity increased) ?Any suggestion would be of great help !" }, { "date": "2020-10-09T15:06:22Z", "reply": "Did you ever find an improvement?I am trying to accomplish the sam thing with the SAMsum dataset" } ]
Obtaining BERT-base from BERT-large
https://discuss.huggingface.co/t/obtaining-bert-base-from-bert-large/1288
3
447
So I want to extract (prune) BERT-large such that I get BERT-base fairly. Initially I performed random pruning (near to 110M param count) on BERT-large but it didn’t seem to work well. L1 pruning seemed to work (nearly 131M param), but it doesn’t seem fair. Pre-training seems like a big hurdle given that there are some ambiguities on how to go about it. Please let me know if you’ve any suggestions on getting BERT-base fairly from BERT-large.
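Not an answer to the "fairness" question, but for reference transformers ships structured attention-head pruning via prune_heads, which gives more controlled removal than random or magnitude pruning; note that pruning heads alone will not bring bert-large down to bert-base's parameter count, since the FFN layers and layer count are untouched. The layer-to-heads mapping below is arbitrary and purely illustrative.

# Structured head pruning with the built-in prune_heads API (illustrative choice of heads).
from transformers import BertModel

model = BertModel.from_pretrained("bert-large-uncased")
heads_to_prune = {layer: [0, 1, 2, 3] for layer in range(24)}   # hypothetical selection
model.prune_heads(heads_to_prune)
print(sum(p.numel() for p in model.parameters()))               # new parameter count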
2020-09-29T03:31:47Z
[ { "date": "2020-09-29T15:22:13Z", "reply": "Have you tried Distilling it?https://medium.com/huggingface/distilbert-8cf3380435b5.Why would you expect pruning to work?(Why do you want to extract bert-base from bert-large?)" }, { "date": "2020-09-30T04:32:35Z", "reply": "Distillation is very different thing. What I want is to modify BERT-large such that it has the near same param count as BERT-base and the weight distribution matches that of BERT-base." }, { "date": "2020-10-02T10:53:52Z", "reply": "What do you mean by “fairly”? Clearly, in order for a pruned bert-large to be effective, you need to prune those heads that are least useful. There isn’t really anything “fair” about that.What do you mean by “the weight distribution matches that of bert-base”? I shouldn’t think that to be possible. To start with, I’m pretty sure you will need to keep at least one head per layer, so that the data can flow through the model, and bert-large has 24 layers to bert-base’s 12. Which weights are you hoping to match? Furthermore, there’s no reason to suppose that the way the weights develop in bert-large will be similar to the way the weights develop in bert-base.Are you investigating this purely for the interest of it, or because you want to use the result?" } ]
How I fine-tune BART for summarization using large texts?
https://discuss.huggingface.co/t/how-i-fine-tune-bart-for-summarization-using-large-texts/1266
3
3,852
Good night! I'm using a pre-trained BART for summarization and I have my own dataset for fine-tuning (which pairs each long text with its respective summary). However, my input texts are approximately 2500 characters long and the maximum BART accepts is 1024. Is there any technique I can use to use all of the text? I thought of splitting each example into smaller texts (max 1024) and assigning the same summary to each. Does that make sense?
Example:
Before:
ABC: summary1
DEF: summary2
After:
A: summary1
B: summary1
C: summary1
D: summary2
E: summary2
F: summary2
Thanks in advance!
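A small sketch of the splitting idea proposed above (keeping in mind that BART's limit is 1024 tokens, not characters): chunk by tokens and pair every chunk with the same reference summary. The replies point to extractive-then-abstractive pipelines as an alternative for long inputs.

# Token-based chunking, pairing every chunk with the same summary (illustrative sketch).
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

def chunk_example(long_text, summary, chunk_tokens=1022):   # leave room for <s> and </s>
    ids = tokenizer(long_text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_tokens] for i in range(0, len(ids), chunk_tokens)]
    return [(tokenizer.decode(chunk), summary) for chunk in chunks]

pairs = chunk_example("very long document ...", "its reference summary")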
2020-09-27T04:02:21Z
[ { "date": "2020-09-27T07:00:53Z", "reply": "Hi, there’e already thread for this, you might find it helpfulSummarization on long documents🤗TransformersYou can try extractive summarisation followed by abstractive. In the extractive step you choose top k sentences of which you choose top n allowed till model max length. \nAnother way is to use successive abstractive summarisation where you summarise in chunk of model max length and then again use it to summarise till the length you want. This method will be super expensive. \nYou can also combine first + second method." }, { "date": "2020-09-27T15:17:30Z", "reply": "Do you have any idea how I can do this extractive summarization before? I would have to cut my text in half to be the ideal size, but I don’t know how to get the most relevant sentences in this extractive step." }, { "date": "2020-09-28T06:15:41Z", "reply": "could you post this question in that thread, people there might have tried this, let’s keep the long summ discussion in one thread" } ]
New seq2seq tool: search hparam space with run_eval.py
https://discuss.huggingface.co/t/new-seq2seq-tool-search-hparam-space-with-run-eval-py/1166
5
347
FYI, there is a new tool available to you - you can now search the hparam space with run_eval.py. It's called run_eval_search.py. It uses the same arguments as run_eval.py, but allows you to parametrize the hparams, so in addition to the normal args you can pass:

--search="num_beams=8:11:15 length_penalty=0.9:1.0:1.1 early_stopping=true:false"

and it'll search all the possible combinations and at the end print a table of results sorted by the scores of the task, e.g.:

bleu  | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
41.35 | 11        | 1.1            | 0
41.33 | 11        | 1.0            | 0
41.33 | 11        | 1.1            | 1
41.32 | 15        | 1.1            | 0
41.29 | 15        | 1.1            | 1
41.28 | 15        | 1.0            | 0
41.25 | 8         | 1.1            | 0
41.24 | 11        | 1.0            | 1
41.23 | 11        | 0.9            | 0
41.20 | 15        | 1.0            | 1
41.18 | 8         | 1.0            | 0

You can have one or more params searched. Here is an example of a full command:

PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py \
    facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt \
    --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json \
    --bs $BS --task translation \
    --search="num_beams=1:5 length_penalty=0.9:1.1 early_stopping=true:false"

If you encounter any issues please let me know. It's documented here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#run_eval-tips-and-tricks. @sshleifer and I added some more goodies in run_eval.py - you will find them all documented at that url. Enjoy.

p.s. edited to remove things that are going to change based on Sam's comment below.
2020-09-16T20:21:53Z
[ { "date": "2020-09-16T22:07:28Z", "reply": "Great work!There are only two possible sets of keys to get fromrun_eval.pysincescore_fn = calculate_bleu_score if \"translation\" in args.task else calculate_rougeYou shouldn’t hard code the possible tasks any more than that IMO." }, { "date": "2020-09-16T22:25:01Z", "reply": "ah, thank you for clarifying that - I will adjust it to follow the same logic." }, { "date": "2020-09-17T06:04:14Z", "reply": "This is awesome ! Thanks@stas" }, { "date": "2020-09-17T11:31:32Z", "reply": "I haven’t checked the code, I’m on mobile now. But are there many scenarios where we actually need to do hyperparameters search on theevaluation/inference side? In addition, does this use the optuna implementation that is being worked on in the trainer by@sgugger, or is it a separate implementation?" }, { "date": "2020-09-17T12:11:00Z", "reply": "When you train a seq2seq model on new summ or translation dataset or other seq2seq task and want to decide how many beams to use, should use length penalty or not, what should be the max seq length, what should be theno_repeat_ngram_sizeetc, all of these parameter affect the metrics , so this tool helps to make those decisions,It does not useoptuna,it just usesitetools.productto enumerate the different combinations and evaluate on them" } ]
Not all BLEU scores were created equal
https://discuss.huggingface.co/t/not-all-bleu-scores-were-created-equal/1154
0
312
While porting the fairseq transformer and another model from allenai, I wasn't getting the same BLEU scores as reported in the papers. In the end I learned that some of that difference was due to the fact that I was measuring the BLEU score in a different way from theirs. So when you see a BLEU number in a report, it could mean many different things - e.g. apparently you get a higher score if you compute BLEU on tokenized outputs.

Please see this paper for many more nuances: "A Call for Clarity in Reporting BLEU Scores" (arXiv.org): "The field of machine translation faces an under-recognized problem because of inconsistency in the reporting of scores from its dominant metric. Although people refer to 'the' BLEU score, BLEU is in fact a parameterized metric whose values can vary..."

In your work and experiments, please try to use sacrebleu for measuring, as suggested in the paper. That's what our seq2seq run_eval.py uses.

Thank you.
2020-09-15T23:45:09Z
[]
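The post above recommends sacrebleu because it computes BLEU on detokenized text with a standardized tokenization and reference handling, which makes the number comparable across papers. A minimal sketch of its use is below; the example sentences are made up purely for illustration, and no particular score is claimed.

import sacrebleu

# Made-up example sentences, only to illustrate the API.
hypotheses = ["The cat sat on the mat."]
references = ["The cat is sitting on the mat."]

# Recommended: pass detokenized hypotheses/references and let sacrebleu apply its own
# standard tokenization, so the resulting score is comparable across reports.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"sacrebleu BLEU = {bleu.score:.2f}")

# Scoring already-tokenized text instead (tokenize="none") generally yields a different
# number - exactly the comparability problem the paper describes.
bleu_tok = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="none")
print(f"tokenized-input BLEU = {bleu_tok.score:.2f}")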
Bertology-like Analysis for BART, T5?
https://discuss.huggingface.co/t/bertology-like-analysis-for-bart-t5/941
0
665
In my current project, I am working on training encoder-decoder models (BART, T5, etc.) and the Transformers library has been absolutely invaluable! After seeing several Bertology analyses (i.e. looking at the information the model’s attention mechanism learns to attend to), I would like to know if a similar analysis is possible with the BART and T5 models in the Hugging Face library. Any recommendations are certainly appreciated!
2020-08-31T07:55:01Z
[]
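The post above asks whether a Bertology-style analysis of attention is possible for encoder-decoder models. In recent versions of Transformers, BART and T5 return their encoder, decoder, and cross-attention weights when run with output_attentions=True, which gives the raw material for such an analysis. A minimal sketch follows; the checkpoint and input sentence are only illustrative, and the availability of cross_attentions assumes a reasonably recent library version.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint; the same pattern works for BART checkpoints as well.
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
decoder_inputs = tokenizer("A fox jumps.", return_tensors="pt")

with torch.no_grad():
    outputs = model(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        decoder_input_ids=decoder_inputs.input_ids,
    )

# Each is a tuple with one tensor per layer, shaped (batch, num_heads, tgt_len, src_len).
encoder_attentions = outputs.encoder_attentions
decoder_attentions = outputs.decoder_attentions
cross_attentions = outputs.cross_attentions

# Example analysis: average the last layer's cross-attention over heads to see which
# source tokens each decoder position attends to.
last_cross = cross_attentions[-1].mean(dim=1)[0]  # (tgt_len, src_len)
print(last_cross.shape)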
BART question: it seems that pretraining does not work for a small model?
https://discuss.huggingface.co/t/bart-question-it-seems-that-pretraining-is-not-work-for-a-small-model/511
6
558
What is your question?

My task is to generate keywords from sentences. I pretrain a text-generation model: I mask the sentences' tokens and predict the whole sentences' tokens. Pretraining uses batch_size = 8 and 1,000,000 steps. I haven't observed any improvement from pretraining: the BLEU score is 10.5 without pretraining and 9.5 with pretraining.

Code

I take the Python code from github.com/google-research/pegasus/blob/master/pegasus/models/transformer.py#L38 (the TransformerEncoderDecoderModel class), with these settings:

hidden_size = 512
num_encoder_layers = 3
num_decoder_layers = 3

Discussion

The task is to generate keywords from sentences, and the keywords may not appear in the sentences. So masking the input sentences and predicting the whole sentences may not benefit the keyword-generation task; it does not have a direct relation to it. Am I right? Is that the reason pretraining does not improve the BLEU score?

Thank you very much.
2020-07-29T08:15:53Z
[ { "date": "2020-07-29T08:49:07Z", "reply": "With all due respect, you are asking a question on a forum dedicated to a specific librarytransformersby HuggingFace, but the question does not involve that library. In fact, you are using a completely different library. I am not sure if this is the right place for such questions.@sgugger" }, { "date": "2020-07-29T09:06:50Z", "reply": "I have changed the tag." }, { "date": "2020-07-29T13:25:33Z", "reply": "On the research part of the forum, we welcome any general questions, though of course we would prefer you to use our models@sshleifermight have some answer as he is the Bart person on the team." }, { "date": "2020-07-29T14:21:42Z", "reply": "guotong1988:So input masked sentences to predict whole sentences, it is not benefit the keywords generation task.Input masked sentences to predict whole sentences, it do not have relation to the keywords generation task.Am I right? Is it the reason that pretraining do not improve the BLEU score?Thank you very much.Definitely possible, there could also be a bug in your code. I don’t have enough familiarity with your task to know what results to expect." }, { "date": "2020-07-30T01:41:44Z", "reply": "Thank you. I am also using your models." }, { "date": "2020-08-03T03:10:01Z", "reply": "1, I pad some zeros in the input tokens for multi sentences. The output positions of output tokens should be exactly same to the input tokens, which means I should keep the padding zeros in the output tokens.2, The pretraining time should be longer." } ]