Text Generation
Transformers
PyTorch
olmo
Inference Endpoints
upiter committed on
Commit 0da02a6 · verified · 1 parent: c9616a6

Update README.md

Files changed (1): README.md +2 -1
README.md CHANGED
@@ -7,9 +7,10 @@ datasets:
 
 
 
+
 # Model Details
 
-The TinyCodeLM family of tiny language models (LMs) is a collection of fully open-source pretrained and instruction tuned generative code models in 150M and 400M sizes. These models are pretrained on a mixture of open-source web text and Python code. The instruction tuned TinyCodeLM models are optimized for Python code synthesis, and are trained on [synthetic edit sequence data generated with the LintSeq algorithm](https://arxiv.org/abs/2410.02749).
+The TinyCodeLM family of tiny language models (LMs) is a collection of fully open-source pretrained and instruction tuned generative code models in 150M and 400M sizes. These models are pretrained on a mixture of open-source web text and Python code. The instruction tuned TinyCodeLM models are optimized for Python code synthesis, and are trained on [synthetic edit sequence data generated with the LintSeq algorithm](https://lintseq.github.io/).
 
 Despite being trained on only 72 billion tokens of text, the models outperform many of the available open source Python code synthesis models on HumanEval and MBPP. The TinyCodeLM-LintSeqInstruct models are state-of-the-art on Python synthesis for their size.
 
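Since the card is tagged for Transformers text generation with PyTorch weights, a loading sketch may be useful. Everything below is an assumption rather than something stated in this diff: the hub id `upiter/TinyCodeLM-400M` is inferred from the commit author's namespace and the 400M model size, and the prompt is purely illustrative.

```python
# Minimal sketch: generating Python code with a TinyCodeLM checkpoint via
# the Transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="upiter/TinyCodeLM-400M",  # hypothetical repo id; substitute the real one
    # Depending on your transformers version, OLMo-based checkpoints may
    # additionally require trust_remote_code=True.
)

prompt = "def fibonacci(n):"
outputs = generator(prompt, max_new_tokens=128, do_sample=False)
print(outputs[0]["generated_text"])
```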