dbernsohn committed on Commit 075f3cb · 1 Parent(s): ca1c9c3

Create README.md

# roberta-java
---
language: Java
datasets:
- CodeSearchNet
---

This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Java** masked language modeling (MLM) task.

To load the model (required packages: `pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java")
model = AutoModelForMaskedLM.from_pretrained("dbernsohn/roberta-java")

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

You can then use this model to fill masked tokens in Java code:

```python
code = """
String[] cars = {"Volvo", "BMW", "Ford", "Mazda"};
for (String i : cars) {
    System.out.<mask>(i);
}
""".lstrip()

pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('println', 0.32571351528167725),
#  ('get', 0.2897663116455078),
#  ('remove', 0.0637081190943718),
#  ('exit', 0.058875661343336105),
#  ('print', 0.034190207719802856)]
```
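The post-processing step above (stripping the `Ġ` marker that RoBERTa's byte-level BPE tokenizer uses to denote a leading space, then ranking candidates by score) can be sketched on its own with hard-coded sample data, no model download needed. The `sample_output` list below is illustrative, mimicking the shape of `fill-mask` pipeline output rather than coming from a real call:

```python
# Simulated fill-mask pipeline output: each candidate carries the predicted
# token string and its softmax score. RoBERTa's tokenizer prefixes tokens
# that follow a space with "Ġ", which we strip for readable display.
sample_output = [
    {"token_str": "Ġprintln", "score": 0.3257},
    {"token_str": "Ġget", "score": 0.2898},
    {"token_str": "Ġremove", "score": 0.0637},
]

# Map cleaned token -> score, then sort candidates from best to worst.
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in sample_output}
ranked = sorted(pred.items(), key=lambda kv: kv[1], reverse=True)

print(ranked[0][0])  # highest-scoring completion: "println"
```

The same dict-comprehension-plus-`sorted` pattern works unchanged on the real pipeline output, since `fill_mask(code)` returns a list of dicts with these keys.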

The full training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM).

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)