jenyag committed on
Commit 8eab76e · verified · 1 Parent(s): 191bb08

Update src/tasks_content.py

Files changed (1):
  1. src/tasks_content.py +13 -12
src/tasks_content.py CHANGED
@@ -44,20 +44,21 @@ TASKS_DESCRIPTIONS = {
     "library_usage": "cool description for Library Usage Examples Generation task",
 
     "project_code_completion": """# Project-Level Code Completion\n
-Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:\n
-\t * `small-context`: 144 data points,\n
-\t * `medium-context`: 224 data points,\n
-\t * `large-context`: 270 data points,\n
-\t * `huge-context`: 296 data points.\n
+
+Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:
+* `small-context`: 144 data points,
+* `medium-context`: 224 data points,
+* `large-context`: 270 data points,
+* `huge-context`: 296 data points.
 
     We use the standard Exact Match (EM) metric for one-line code completion.
-We evaluate Exact Match for different line categories:\n
-\t * *infile* – functions and classes are from the completion file;\n
-\t * *inproject* – functions and files are from the repository snapshot;\n
-\t * *committed* – functions and classes are from the files that were added on the completion file commit;\n
-\t * *common* – functions and classes with common names, e.g., `main`, `get`, etc.;\n
-\t * *non-informative* – short/long lines, import/print lines, or comment lines;\n
-\t * *random* – lines that doesn't fit to any of previous categories.\n
+We evaluate Exact Match separately for different line categories:
+* *infile* – functions and classes are from the completion file;
+* *inproject* – functions and files are from the repository snapshot;
+* *committed* – functions and classes are from the files that were added in the same commit as the completion file;
+* *common* – functions and classes with common names, e.g., `main`, `get`, etc.;
+* *non-informative* – short/long lines, import/print lines, or comment lines;
+* *random* – lines that don't fit into any of the previous categories.
 
     For further details on the dataset and the baselines from the 🏟️ Long Code Arena team, refer to the `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
     """,
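
The description above evaluates one-line completion with the Exact Match (EM) metric. As a minimal sketch of what that metric computes (not the benchmark's actual evaluation code, and the whitespace-stripping normalization here is an assumption):

```python
# Sketch of Exact Match (EM) for one-line code completion: a prediction
# counts as correct only if it equals the ground-truth line exactly.
# Stripping surrounding whitespace before comparing is an assumption.

def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of predicted lines that exactly match the reference lines."""
    assert len(predictions) == len(references)
    if not references:
        return 0.0
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["return x + y", "import os", "print(result)"]
refs = ["return x + y", "import sys", "print(result)"]
print(exact_match(preds, refs))  # 2 of 3 lines match exactly
```

In practice the metric would be computed per line category (*infile*, *inproject*, etc.) by filtering the prediction/reference pairs by category label before averaging.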