Update README.md
README.md CHANGED
@@ -3,6 +3,7 @@ license: apache-2.0
 datasets:
 - bigcode/the-stack
 - HuggingFaceFW/fineweb
+library_name: transformers
 ---
 
 
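The added `library_name: transformers` key tells the Hugging Face Hub which library the checkpoint targets, so the Hub can surface the right usage snippet and the standard auto-classes can load it. A minimal sketch of what that enables, assuming a placeholder repository id (the real TinyCodeLM repo id is not shown in this diff):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; substitute the actual TinyCodeLM repo name.
repo_id = "your-org/TinyCodeLM"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short code completion from a prompt.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```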
@@ -59,4 +60,4 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 ```
 
 # Safety
-This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
+This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
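The safety paragraph warns that generated code must not be executed without precautions. As one illustration (not the authors' method), here is a minimal sketch of a weak precaution: running generated code in a separate interpreter process with a hard timeout. This limits runaway loops but is not real sandboxing; untrusted code still calls for OS-level isolation such as containers or VMs.

```python
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: float = 5.0) -> subprocess.CompletedProcess:
    """Write model-generated code to a temp file and run it with a timeout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # -I puts Python in isolated mode: it ignores PYTHON* environment
    # variables and the user's site-packages, slightly reducing exposure.
    return subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired on overrun
    )

result = run_generated_code("print(sum(range(10)))")
print(result.stdout.strip())  # 45
```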