update architecture
architectures/incoder.txt
CHANGED
@@ -1,4 +1,4 @@
-[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer with a Causal Masking objective to train a left-to-right language model to fill in masked token segments.
+[InCoder](https://huggingface.co/facebook/incoder-6B) uses a decoder-only Transformer with a Causal Masking objective to train a left-to-right language model to fill in masked token segments, with a context length of 2048.

|Model | # parameters |
| - | - |
@@ -11,7 +11,7 @@ During the training of InCoder, spans of code were randomly masked and moved to

So in addition to program synthesis (via left-to-right generation), InCoder can also perform editing (via infilling). The model gives promising results on some zero-shot code infilling tasks such as type prediction, variable renaming, and comment generation.

-In the code generation demo we use InCoder 1.3B.
+In the code generation demo, at the end of the blog, we use InCoder 1.3B.

You can load the model and tokenizer directly from [`transformers`](https://huggingface.co/docs/transformers/index):
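The file's own loading snippet falls outside this hunk. As a minimal sketch of that step, assuming the standard `transformers` auto classes and the `facebook/incoder-1B` Hub checkpoint (the 1.3B model used in the demo; the prompt is an arbitrary example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# facebook/incoder-1B is the 1.3B-parameter checkpoint; facebook/incoder-6B is the larger one.
tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

# Plain left-to-right generation from a code prompt.
prompt = "def count_words(filename):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```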
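The infilling behaviour described in the diff (masked spans moved to the end of the file) can be sketched as below. The `<|mask:k|>` / `<|endofmask|>` sentinel tokens are from the InCoder paper; the exact prompt layout here is an approximation of the authors' reference code, not a verbatim copy of it:

```python
# Infilling sketch: mask one span in place, then ask the model to regenerate it at the end.
prefix = "def count_words(filename):\n    "
suffix = "\n    return counts\n"

# The masked span is replaced by <|mask:0|> in place, then requested again at the end.
prompt = prefix + "<|mask:0|>" + suffix + "<|mask:1|>" + "<|mask:0|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)

# Keep only the newly generated tokens; the infill ends at <|endofmask|>.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
infill = tokenizer.decode(new_tokens, skip_special_tokens=False).split("<|endofmask|>")[0]
print(prefix + infill + suffix)
```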