Commit 3ebdb48
Parent(s): 013152b

Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ It has rather poor paraphrasing performance, but can be fine tuned for this or o
 This model was created by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping 96% of its vocabulary which is unrelated to the Russian language or infrequent.
 
 * The original model has 300M parameters, with 256M of them being input and output embeddings.
-* After shrinking the `sentencepiece` vocabulary from 250K to 20K the number of model parameters reduced from 1.1GB to 246MB.
+* After shrinking the `sentencepiece` vocabulary from 250K to 20K, the number of model parameters dropped to 65M, and the model size shrank from 1.1GB to 246MB.
   * The first 5K tokens in the new vocabulary are taken from the original `mt5-small`.
   * The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the [Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian).
 
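The corrected 65M figure is consistent with the mT5-small architecture: with `d_model` = 512 and an ~250K-token vocabulary, the two untied embedding matrices account for roughly 2 × 250K × 512 ≈ 256M of the 300M parameters, so keeping 20K tokens leaves about 44M + 2 × 20K × 512 ≈ 65M. Below is a minimal sketch, under those assumptions, of the embedding surgery such a vocabulary shrink involves. It is not the author's actual script: `shrink_vocab`, `kept_ids`, and the output path are illustrative placeholders, and selecting the kept ids and rebuilding the matching `sentencepiece` model are separate steps not shown here.

```python
# Sketch of shrinking an mT5 model to a reduced vocabulary.
# Assumes kept_ids maps new token id i -> old token id kept_ids[i].
import torch
from transformers import MT5ForConditionalGeneration

def shrink_vocab(model, kept_ids):
    d_model = model.config.d_model

    # Slice the shared input embedding matrix down to the kept rows.
    old_emb = model.get_input_embeddings().weight.data
    new_emb = torch.nn.Embedding(len(kept_ids), d_model)
    new_emb.weight.data = old_emb[kept_ids].clone()
    model.set_input_embeddings(new_emb)

    # mT5 does not tie input and output embeddings, so the LM head
    # (the output embedding matrix) must be sliced separately.
    old_head = model.lm_head.weight.data
    new_head = torch.nn.Linear(d_model, len(kept_ids), bias=False)
    new_head.weight.data = old_head[kept_ids].clone()
    model.lm_head = new_head

    model.config.vocab_size = len(kept_ids)
    return model

model = MT5ForConditionalGeneration.from_pretrained(
    "alenusch/mt5small-ruparaphraser"
)
# Placeholder ids: in the real procedure these would be the 5K original
# mt5-small tokens plus the 15K most frequent tokens from the Russian
# corpus, selected by a separate frequency-counting pass.
kept_ids = torch.arange(20_000)
model = shrink_vocab(model, kept_ids)
model.save_pretrained("rut5-small-shrunk")  # hypothetical output path
```

For the result to be usable, the `sentencepiece` vocabulary would also have to be filtered to the same 20K pieces in the same order, so that tokenizer ids line up with the sliced embedding rows; that tokenizer-side step is omitted above.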