FredZhang7 committed · Commit 30d0038 (verified) · Parent(s): 2c51c3a

add download of autocoder gguf

Files changed (1): README.md (+7 -3)
README.md CHANGED
@@ -44,12 +44,12 @@ Think step by step. Solve this problem without removing any existing functionali
 
  | **Rank** | **Model Name** | **Token Speed (tokens/s)** | **Debugging Performance** | **Code Generation Performance** | **Notes** |
  |----------|----------------------------------------------|----------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
- | 1 | codestral-22b-v0.1-IQ6_K.gguf (this model) | 34.21 | Excellent at complex debugging, often surpasses GPT-4o and Claude-3.5 | Good, but may not be on par with GPT-4o | Best overall for debugging in my workflow, use Balanced Mode. 100% private |
+ | 1 | codestral-22b-v0.1-IQ6_K.gguf (this repo) | 34.21 | Excellent at complex debugging, often surpasses GPT-4o and Claude-3.5 | Good, but may not be on par with GPT-4o | Best overall for debugging in my workflow, use Balanced Mode. 100% private |
  | 2 | Claude-3.5-Sonnet | N/A | Poor in complex debugging compared to Codestral | Excellent, better than GPT-4o in long code generation | Great for code generation, but weaker in debugging. |
  | 3 | GPT-4o | N/A | Good at complex debugging but can be outperformed by Codestral | Excellent, generally reliable for code generation | Balanced performance between code debugging and generation. |
  | 4 | DeepSeekV2 Coder Instruct | N/A | Poor, outputs the same code in complex scenarios | Great at general code generation, rivals GPT-4o | Excellent at code generation, but has data privacy concerns as per Privacy Policy. |
  | 5* | Qwen2-7b-Instruct bf16 | 78.22 | Average, can think of correct approaches | Sometimes helps generate new ideas | High speed, useful for generating ideas. |
- | 5* | AutoCoder.IQ4_K.gguf | 26.43 | Excellent at solutions that require one to a few lines of edits | Generates useful short code segments | Use Precise Mode for better results. |
+ | 5* | AutoCoder.IQ4_K.gguf (this repo) | 26.43 | Excellent at solutions that require one to a few lines of edits | Generates useful short code segments | Use Precise Mode for better results. |
  | 7 | GPT-4o-mini | N/A | Decent, but struggles with complex debugging tasks | Reliable for shorter or simpler code generation tasks | Suitable for less complex coding tasks. |
  | 8 | Meta-Llama-3.1-70B-Instruct-IQ2_XS.gguf | 2.55 | Poor, too slow to be practical in day-to-day workflows | Occasionally helps generate ideas | Speed is a significant limitation. |
  | 9 | Trinity-2-Codestral-22B-Q6_K_L | N/A | Poor, similar issues to DeepSeekV2 in outputting the same code | Decent, but often repeats code | Similar problem to DeepSeekV2, not recommended for my complex tasks. |
@@ -111,7 +111,7 @@ The following are tested in my workflow, but may not generalize well to other wo
 
  ## License
 
- A reminder that Codestral 22b should only be used for non-commercial projects.
+ A reminder that `codestral-22b-v0.1-IQ6_K.gguf` should only be used for non-commercial projects.
 
  Please use `Qwen2-7b-Instruct bf16` and `AutoCoder.IQ4_K.gguf` as alternatives for commercial activities.
 
@@ -125,4 +125,8 @@ pip install -U "huggingface_hub[cli]"
 
  ```
  huggingface-cli download FredZhang7/claudegpt-code-logic-debugger-v0.1 --include "codestral-22b-v0.1-IQ6_K.gguf" --local-dir ./
+ ```
+ 
+ ```
+ huggingface-cli download FredZhang7/claudegpt-code-logic-debugger-v0.1 --include "AutoCoder.IQ4_K.gguf" --local-dir ./
  ```
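
For anyone scripting the download rather than running the CLI by hand, the same two GGUF files can also be fetched through the `huggingface_hub` Python API. The sketch below is not part of the commit; it assumes `huggingface_hub` is installed (the `pip install` line visible in the last hunk header covers it) and simply mirrors the repo ID, filenames, and `--local-dir ./` shown in the diff:

```python
# Minimal sketch (not part of the commit): download the same two GGUF files with
# the huggingface_hub Python API instead of the huggingface-cli commands above.
# Assumes huggingface_hub is installed; local_dir "./" mirrors the CLI flags.
from huggingface_hub import hf_hub_download

repo_id = "FredZhang7/claudegpt-code-logic-debugger-v0.1"

# Codestral IQ6_K quant (non-commercial use only, per the License section)
codestral_path = hf_hub_download(
    repo_id=repo_id,
    filename="codestral-22b-v0.1-IQ6_K.gguf",
    local_dir="./",
)

# AutoCoder IQ4_K quant (the alternative suggested for commercial use)
autocoder_path = hf_hub_download(
    repo_id=repo_id,
    filename="AutoCoder.IQ4_K.gguf",
    local_dir="./",
)

print(codestral_path)
print(autocoder_path)
```

Either route produces the same files; the `huggingface-cli download` commands in the README remain the documented path.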