jenyag committed
Commit 191bb08 · verified · 1 Parent(s): 159c898

Update src/tasks_content.py

Files changed (1):
  1. src/tasks_content.py +12 -12
src/tasks_content.py CHANGED
@@ -44,20 +44,20 @@ TASKS_DESCRIPTIONS = {
  "library_usage": "cool description for Library Usage Examples Generation task",

  "project_code_completion": """# Project-Level Code Completion\n
- Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:
- - `small-context`: 144 data points,
- - `medium-context: 224 data points,
- - `large-context`: 270 data points,
- - `huge-context`: 296 data points.
+ Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:\n
+ \t * `small-context`: 144 data points,\n
+ \t * `medium-context`: 224 data points,\n
+ \t * `large-context`: 270 data points,\n
+ \t * `huge-context`: 296 data points.\n

  We use the standard Exact Match (EM) metric for one-line code completion.
- We evaluate Exact Match for different line categories:
- - *infile* – functions and classes are from the completion file;
- - *inproject – functions and files are from the repository snapshot;
- - *committed* – functions and classes are from the files that were added on the completion file commit;
- - *common* – functions and classes with common names, e.g., `main`, `get`, etc.;
- - *non-informative* – short/long lines, import/print lines, or comment lines;
- - *random* – lines that doesn't fit to any of previous categories.
+ We evaluate Exact Match for different line categories:\n
+ \t * *infile* – functions and classes are from the completion file;\n
+ \t * *inproject* – functions and files are from the repository snapshot;\n
+ \t * *committed* – functions and classes are from the files that were added in the same commit as the completion file;\n
+ \t * *common* – functions and classes with common names, e.g., `main`, `get`;\n
+ \t * *non-informative* – short/long lines, import/print lines, or comment lines;\n
+ \t * *random* – lines that do not fit into any of the previous categories.\n

  For further details on the dataset and the baselines from the 🏟️ Long Code Arena Team, refer to the `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
  """,