---
language:
- zh
- en
tags:
- code
- autocomplete
- pytorch
- zh
license: "apache-2.0"
---

# GPT2 for Code AutoComplete Model
code-autocomplete, a code completion plugin for Python.

**code-autocomplete** implements line-level and block-level autocompletion for Python code.

## Usage

This model is released as part of the open-source project [code-autocomplete](https://github.com/shibing624/code-autocomplete), which supports GPT2 models. Call it as follows:

```python
from autocomplete.gpt2 import Infer

# Load the model; set use_cuda=True to run on a GPU if one is available.
m = Infer(model_name="gpt2", model_dir="shibing624/code-autocomplete-gpt2-base", use_cuda=True)
i = m.predict('import torch.nn as')
print(i)
```

Of course, you can also call the model with the official huggingface/transformers library:

*Please use the 'GPT2'-related classes to load this model!*

```python
import os

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Avoid a crash from duplicate OpenMP runtimes on some systems.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model.to(device)

prompts = [
    """from torch import nn
class LSTM(Module):
    def __init__(self, *,
                 n_tokens: int,
                 embedding_size: int,
                 hidden_size: int,
                 n_layers: int):""",
    """import numpy as np
import torch
import torch.nn as""",
    "import java.util.ArrayList",
    "def factorial(n):",
]
for prompt in prompts:
    input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt').to(device)
    # Sample one completion. Note: len(prompt) counts characters, not tokens,
    # so max_length is only a rough generation budget here.
    outputs = model.generate(input_ids=input_ids,
                             max_length=64 + len(prompt),
                             temperature=1.0,
                             top_k=50,
                             top_p=0.95,
                             repetition_penalty=1.0,
                             do_sample=True,
                             num_return_sequences=1,
                             length_penalty=2.0,
                             early_stopping=True)
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(decoded)
    print("=" * 20)
```

output:
```shell
from torch import nn
class LSTM(Module):
    def __init__(self, *,
                 n_tokens: int,
                 embedding_size: int,
                 hidden_size: int,
                 n_layers: int):
        self.embedding_size = embedding_size
====================
import numpy as np
import torch
import torch.nn as np
from onmt import nnumpy as np


class PredicterDNN(nn.Module):
    @classmethod
    @parameterized.expand([0.5, 2.5] + (10, 10))
    @classmethod
    @static
    def add(self, sample_rate, max_iters=self.max_iters, mask_fre
====================
import java.util.ArrayList[Tuple[Int]],

====================
def factorial(n): number of elements per dimension,
    assert len(n) > 1
    n.append(self.n_iters)
    n = n_iter(self.n_norm)

    def _score(
====================
```

Model files:
```
code-autocomplete-gpt2-base
├── config.json
├── merges.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.json
```
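
If you need a local copy of these files, one option is the huggingface_hub client; a minimal sketch (relying on the default cache location is our own choice here):

```python
# A minimal sketch (not part of the original card): fetch all model files
# with huggingface_hub and return the local directory they were saved to.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="shibing624/code-autocomplete-gpt2-base")
print(local_dir)  # contains config.json, pytorch_model.bin, vocab.json, ...
```

The returned path can be passed to `from_pretrained` in place of the repo id.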

### Training Dataset
#### All project code from pytorch_awesome

Download [code-autocomplete](https://github.com/shibing624/code-autocomplete) and build the dataset:
```shell
cd autocomplete
python create_dataset.py
```

To train code-autocomplete yourself, refer to [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2.py](https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2.py)
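
For orientation, here is a minimal causal-language-modeling fine-tuning sketch using the transformers Trainer. It is not the project's actual training script (that is gpt2.py, linked above), and the data file `train.txt` is a placeholder assumption:

```python
# Illustrative CLM fine-tuning sketch, not the project's gpt2.py.
# Assumes train.txt holds plain-text source code, one snippet per line.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2Tokenizer, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT2 defines no pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Tokenize the raw text corpus.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# mlm=False -> causal LM objective: labels are the inputs shifted by one.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="outputs", num_train_epochs=1,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("outputs")
```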
135
+
136
+
137
+ ### About GPT2
138
+
139
+ Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
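
As a quick local check, the transformers text-generation pipeline also works with this model; a minimal sketch (the prompt is just an example):

```python
# One-line generation check via the high-level pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="shibing624/code-autocomplete-gpt2-base")
print(generator("import torch.nn as", max_new_tokens=32)[0]["generated_text"])
```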
140
+
141
+ Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
142
+ [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
143
+ and first released at [this page](https://openai.com/blog/better-language-models/).
144
+
145
+ Disclaimer: The team releasing GPT-2 also wrote a
146
+ [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
147
+ has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.

## Citation

```latex
@misc{code-autocomplete,
  author = {Xu Ming},
  title = {code-autocomplete: Code AutoComplete with GPT model},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/shibing624/code-autocomplete},
}
```