Update README.md

README.md CHANGED
@@ -6,8 +6,6 @@ datasets:
 ---


-
-
 # Model Details

 The TinyCodeLM family of tiny language models (LMs) is a collection of fully open-source pretrained and instruction tuned generative code models in 150M and 400M sizes. These models are pretrained on a mixture of open-source web text and Python code. The instruction tuned TinyCodeLM models are optimized for Python code synthesis, and are trained on [synthetic edit sequence data generated with the LintSeq algorithm](https://lintseq.github.io/).

@@ -61,4 +59,4 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 ```

 # Safety
-This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has
+This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
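
Below the diff, a minimal usage sketch of the Python code synthesis workflow that the Model Details paragraph describes, assuming the instruction-tuned TinyCodeLM checkpoints are hosted on the Hugging Face Hub and load with the standard Transformers causal-LM classes; the repository id, prompt, and generation settings are illustrative assumptions, not taken from the model card.

```python
# Minimal sketch: load an instruction-tuned TinyCodeLM checkpoint and prompt it
# for Python code synthesis. The repository id below is a placeholder assumption;
# substitute the actual Hub id listed on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/TinyCodeLM-400M-Instruct"  # placeholder Hub id, not confirmed by the README

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a completion; generation settings are illustrative, not tuned.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 150M variant would be used the same way by pointing `model_id` at the corresponding repository.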