Text Generation · Transformers · PyTorch · olmo
upiter committed · Commit bea5ada · verified · 1 Parent(s): f712ad6

Update README.md

Files changed (1)
README.md +3 -2
README.md CHANGED
@@ -8,6 +8,7 @@ base_model:
 ---
 
 
+
 # Model Details
 
 The TinyCodeLM family of tiny language models (LMs) is a collection of fully open-source pretrained and instruction tuned generative code models in 150M and 400M sizes. These models are pretrained on a mixture of open-source web text and Python code. The instruction tuned TinyCodeLM models are optimized for Python code synthesis, and are trained on [synthetic edit sequence data generated with the LintSeq algorithm](https://arxiv.org/abs/2410.02749).
@@ -43,7 +44,7 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 | :----------- | -----------------: | -----------------: |
 | HumanEval, pass@1 | 12.8 | 13.4 |
 | HumanEval, pass@10 | 20.6 | 20.9 |
-| MBPP(+), pass@1 | 13.6 | 24.4 |
+| MBPP(+), pass@1 | 13.6 | 19.4 |
 | MBPP(+), pass@10 | 24.4 | 29.9 |
 
 
@@ -61,4 +62,4 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 ```
 
 # Safety
-This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
+This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
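For reference, below is a minimal sketch of generating Python code with a TinyCodeLM checkpoint through the Transformers library, in line with the repo's Transformers/PyTorch tags. The repository id and prompt are illustrative assumptions, not specified by this commit; substitute the actual checkpoint you intend to use.

```python
# Minimal sketch: load a TinyCodeLM checkpoint with Hugging Face Transformers and
# sample a Python completion. The model id below is a placeholder assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upiter/TinyCodeLM-400M"  # hypothetical repo id; replace with the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model with a small Python synthesis task.
prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```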