Mercury7353 committed
Commit 1176ecd · verified · 1 parent: 721a51e

Update README.md

Files changed (1): README.md (+5, -39)
README.md CHANGED
@@ -14,7 +14,7 @@
 This is the PyLlama3 model, fine-tuned for <a href="https://github.com/Mercury7353/PyBench">PyBench</a>.
 
 PyBench is a comprehensive benchmark evaluating LLMs on real-world coding tasks, including **chart analysis**, **text analysis**, **image/audio editing**, **complex math**, and **software/website development**.
- We collect files from Kaggle, arXiv, and other sources and automatically generate queries according to the type and content of each file.
+ We collect files from Kaggle, arXiv, and other sources and automatically generate queries according to the type and content of each file. For evaluation, we design unit tests for each task.
 
 ![Overview](images/main.png)
 
@@ -24,10 +24,10 @@
 ## Why PyBench?
 
 The LLM Agent, equipped with a code interpreter, is capable of automatically solving real-world coding tasks, such as data analysis and image processing.
- %
+ 
 However, existing benchmarks primarily focus either on simplistic tasks, such as completing a few lines of code, or on extremely complex and specific repository-level tasks; neither is representative of everyday coding tasks.
- %
- To address this gap, we introduce **PyBench**, a benchmark that encompasses 6 main categories of real-world tasks, covering more than 10 types of files.
+ 
+ To address this gap, we introduce **PyBench**, a benchmark that encompasses 5 main categories of real-world tasks, covering more than 10 types of files.
 ![How PyBench Works](images/generateTraj.png)
 
 ## 📝 PyInstruct
@@ -45,41 +45,7 @@
 ## 🚀 Model Evaluation with PyBench!
 <video src="https://github.com/Mercury7353/PyBench/assets/103104011/fef85310-55a3-4ee8-a441-612e7dbbaaab"></video>
 *Demonstration of the chat interface.*
- ### Environment Setup
- Begin by creating the required environment:
- 
- ```bash
- conda env create -f environment.yml
- ```
- 
- ### Model Configuration
- Initialize a local server using the vllm framework, which defaults to port 8001:
- 
- ```bash
- bash SetUpModel.sh
- ```
- 
- A Jinja chat template is necessary to launch a vllm server; commonly used templates can be found in the `./jinja/` directory.
- Before starting the vllm server, specify the model path and Jinja template path in `SetUpModel.sh`.
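The diff does not show `SetUpModel.sh` itself. As a rough sketch, a script of this kind could wrap vllm's OpenAI-compatible server; the checkpoint path, template filename, and flags below are assumptions, not the repository's actual contents:

```bash
#!/bin/bash
# Hypothetical SetUpModel.sh sketch -- paths and template name are placeholders.
MODEL_PATH=/path/to/PyLlama3          # your local model checkpoint
TEMPLATE_PATH=./jinja/llama3.jinja    # a chat template from ./jinja/

# Launch vllm's OpenAI-compatible server on the default port 8001.
python -m vllm.entrypoints.openai.api_server \
    --model "$MODEL_PATH" \
    --chat-template "$TEMPLATE_PATH" \
    --port 8001
```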
- ### Configuration Adjustments
- Specify your model's path and the server port in `./config/model.yaml`. This configuration file also lets you customize the system prompts.
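The schema of `./config/model.yaml` is likewise not reproduced in this diff; a minimal sketch might carry entries along these lines, with every field name here hypothetical:

```bash
# Hypothetical contents for ./config/model.yaml -- field names are illustrative only.
cat > ./config/model.yaml <<'EOF'
model_name: PyLlama3
model_path: /path/to/PyLlama3
port: 8001
system_prompt: |
  You are a coding assistant that solves tasks by writing and running Python code.
EOF
```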
- 
- ### Execution on PyBench
- Make sure to update the output trajectory file path before running:
- 
- ```bash
- python /data/zyl7353/codeinterpreterbenchmark/inference.py --config_path ./config/<your config>.yaml --task_path ./data/meta/task.json --output_path <your trajectory.jsonl path>
- ```
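For concreteness, a filled-in version of the command above might look like this, where the config name and output path are placeholders of our choosing:

```bash
# Example invocation; config and output filenames are hypothetical.
python /data/zyl7353/codeinterpreterbenchmark/inference.py \
    --config_path ./config/model.yaml \
    --task_path ./data/meta/task.json \
    --output_path ./output/pyllama3_trajectory.jsonl
```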
- 
- ### Unit Testing Procedure
- - **Step 1:** Store the output files in `./output`.
- - **Step 2:** Set the trajectory file path in `./data/unit_test/enter_point.py`.
- - **Step 3:** Run the unit test script:
- ```bash
- python data/unit_test/enter_point.py
- ```
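Taken together, the three steps reduce to something like the following shell session; the trajectory filename is a placeholder, and Step 2 remains a manual edit:

```bash
# Step 1: place the trajectory file under ./output (filename is hypothetical).
cp pyllama3_trajectory.jsonl ./output/
# Step 2: edit ./data/unit_test/enter_point.py so it points at that file.
# Step 3: run the unit tests.
python data/unit_test/enter_point.py
```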
+ - Detailed in <a href="https://github.com/Mercury7353/PyBench">🚗GitHub</a>
 
 ## 📊 LeaderBoard
 ![LLM Leaderboard](images/leaderboard.png)
 