---
datasets:
- code_search_net
widget:
- text: "def <mask> ( a, b ) : if a > b : return a else return b</s>return the maximum value"
- text: "def <mask> ( a, b ) : if a > b : return a else return b"
---
# Model Architecture
This model follows the distilroberta-base architecture. Furthermore, it was initialized with the checkpoint of distilroberta-base.
# Pre-training phase
This model was pre-trained with the MLM objective (`mlm_probability=0.15`).
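As a rough sketch of what this objective does to a sequence, here is a simplified stand-in for the masking step (the real `DataCollatorForLanguageModeling` in `transformers` additionally leaves 10% of the selected tokens unchanged, replaces another 10% with random tokens, and never masks special tokens; this toy version only does the basic replacement):

```python
import random

def mask_tokens(token_ids, mask_token_id, mlm_probability=0.15, seed=0):
    """Replace each token with mask_token_id with probability mlm_probability.

    Simplified MLM masking: positions that are masked get their original id
    recorded in labels; all other label positions are -100, which the
    cross-entropy loss ignores.
    """
    rng = random.Random(seed)
    labels = [-100] * len(token_ids)
    masked = list(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mlm_probability:
            labels[i] = tok          # the model must predict this token
            masked[i] = mask_token_id
    return masked, labels
```

On average about 15% of positions end up masked, and only those positions contribute to the loss.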
During this phase, the inputs had the following format:
$$\left[[CLS], t_1, \dots, t_n, [SEP], w_1, \dots, w_m, [EOS]\right]$$
where $t_1, \dots, t_n$ are the code tokens and $w_1, \dots, w_m$ are the natural language description tokens. More concretely, this is the snippet that tokenizes the input:
```python
def tokenize_function_bimodal(examples, tokenizer, max_len):
    codes = [' '.join(example) for example in examples['func_code_tokens']]
    nls = [' '.join(example) for example in examples['func_documentation_tokens']]
    pairs = [[c, nl] for c, nl in zip(codes, nls)]
    return tokenizer(pairs, max_length=max_len, padding="max_length", truncation=True)
```
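To make the intermediate `pairs` structure concrete, here is the preprocessing step in isolation on a toy batch (the token lists are illustrative, not taken from CodeSearchNet):

```python
# Toy batch in the CodeSearchNet column layout used above.
examples = {
    "func_code_tokens": [
        ["def", "max", "(", "a", ",", "b", ")", ":", "return", "a"],
    ],
    "func_documentation_tokens": [
        ["return", "the", "maximum", "value"],
    ],
}

codes = [' '.join(example) for example in examples['func_code_tokens']]
nls = [' '.join(example) for example in examples['func_documentation_tokens']]
pairs = [[c, nl] for c, nl in zip(codes, nls)]

# Each pair is (code string, NL string); the tokenizer inserts the
# separator tokens between the two, yielding the layout shown above.
print(pairs[0])
# -> ['def max ( a , b ) : return a', 'return the maximum value']
```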
# Training details
- Max length: 512
- Effective batch size: 64
- Total steps: 60000
- Learning rate: 5e-4
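These hyperparameters map onto `transformers`' `TrainingArguments` roughly as follows. Note this is a sketch, not the exact training configuration: the per-device batch size / gradient accumulation split is an assumption (only their product, 64, is stated above), and the output directory name is illustrative.

```python
from transformers import TrainingArguments

# Assumption: 16 per device x 4 accumulation steps = 64 effective batch size.
args = TrainingArguments(
    output_dir="distilroberta-base-csn-python-bimodal",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    max_steps=60_000,
    learning_rate=5e-4,
)
```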
# Usage
```python
from pprint import pprint
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model = AutoModelForMaskedLM.from_pretrained('antolin/distilroberta-base-csn-python-bimodal')
tokenizer = AutoTokenizer.from_pretrained('antolin/distilroberta-base-csn-python-bimodal')
mask_filler = pipeline("fill-mask", model=model, tokenizer=tokenizer)
code_tokens = ["def", "<mask>", "(", "a", ",", "b", ")", ":", "if", "a", ">", "b", ":", "return", "a", "else", "return", "b"]
nl_tokens = ["return", "the", "maximum", "value"]
input_text = ' '.join(code_tokens) + tokenizer.sep_token + ' '.join(nl_tokens)
pprint(mask_filler(input_text, top_k=5))
```
```shell
[{'score': 0.4645618796348572,
'sequence': 'def max ( a, b ) : if a > b : return a else return b return '
'the maximum value',
'token': 19220,
'token_str': ' max'},
{'score': 0.40963634848594666,
'sequence': 'def maximum ( a, b ) : if a > b : return a else return b '
'return the maximum value',
'token': 4532,
'token_str': ' maximum'},
{'score': 0.02103462442755699,
'sequence': 'def min ( a, b ) : if a > b : return a else return b return '
'the maximum value',
'token': 5251,
'token_str': ' min'},
{'score': 0.014217409305274487,
'sequence': 'def value ( a, b ) : if a > b : return a else return b return '
'the maximum value',
'token': 923,
'token_str': ' value'},
{'score': 0.010762304067611694,
'sequence': 'def minimum ( a, b ) : if a > b : return a else return b '
'return the maximum value',
'token': 3527,
'token_str': ' minimum'}]
```