| **Rank** | **Model Name** | **Token Speed (tokens/s)** | **Debugging Performance** | **Code Generation Performance** | **Notes** |
|----------|----------------|----------------------------|---------------------------|---------------------------------|-----------|
| 1 | codestral-22b-v0.1-IQ6_K.gguf (this repo) | 34.21 | Excellent at complex debugging, often surpasses GPT-4o and Claude-3.5 | Good, but may not be on par with GPT-4o | Best overall for debugging in my workflow; use Balanced Mode. |
| 2* | Claude-3.5-Sonnet | N/A | Poor in complex debugging compared to Codestral | Excellent, better and more creative than GPT-4o in code generation | Great for code generation, but weaker in debugging. |
| 2* | GPT-4o | N/A | Good at complex debugging, but can be outperformed by Codestral | Excellent, generally reliable for code generation, more knowledgeable | Balanced performance between code debugging and generation. |
| 4 | DeepSeekV2 Coder Instruct | N/A | Good, but outputs the same code in complex scenarios | Great at general code generation, rivals GPT-4o | Excellent at code generation, but has data privacy concerns as per its Privacy Policy. |
| 5* | Qwen2-7b-Instruct bf16 | 78.22 | Average, but can think of correct approaches | Sometimes helps generate new ideas | High speed, useful for generating ideas. |
| 5* | AutoCoder.IQ4_K.gguf (this repo) | 26.43 | Excellent at solutions that require one to a few lines of edits | Generates useful short code segments | Try Precise Mode or Balanced Mode. |
| 7 | GPT-4o-mini | N/A | Decent, but struggles with complex debugging tasks | Reliable for shorter or simpler code generation tasks | Suitable for less complex coding tasks. |
| 8 | Meta-Llama-3.1-70B-Instruct-IQ2_XS.gguf | 2.55 | Poor, occasionally helps generate ideas | --- | Speed is a significant limitation. |
| 9 | Trinity-2-Codestral-22B-Q6_K_L | N/A | Poor, outputs the same code, similar to DeepSeekV2 | --- | Similar problem to DeepSeekV2; not recommended for my complex tasks. |
| 10 | DeepSeekV2 Coder Lite Instruct Q_8L | N/A | Poor, repeats code similar to other models in its family | Not as effective in my context | Not recommended overall based on my criteria. |

<br>
```
pip install -U "huggingface_hub[cli]"
```

Commercial use:
```
huggingface-cli download FredZhang7/claudegpt-code-logic-debugger-v0.1 --include "AutoCoder.IQ4_K.gguf" --local-dir ./
```

Non-commercial (e.g. testing, research, personal, or evaluation purposes) use:
```
huggingface-cli download FredZhang7/claudegpt-code-logic-debugger-v0.1 --include "codestral-22b-v0.1-IQ6_K.gguf" --local-dir ./
```