# Code Logic Debugger v0.1
Hardware requirements for ChatGPT GPT-4o-level inference speed with the models in this repo: >=24 GB VRAM.
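As a quick sanity check of the VRAM requirement above, a small helper can parse `nvidia-smi` output (this is only a sketch; the function names and the 24 GiB threshold default are mine, not part of this repo):

```python
import subprocess

def total_vram_mib(smi_output):
    # Parse the output of
    #   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
    # e.g. "24576\n", into a list of per-GPU totals in MiB.
    return [int(line) for line in smi_output.splitlines() if line.strip()]

def meets_requirement(smi_output, needed_gib=24):
    # True if any single GPU has at least `needed_gib` GiB of VRAM.
    return any(mib >= needed_gib * 1024 for mib in total_vram_mib(smi_output))

def check_gpu():
    # Query the local driver; requires nvidia-smi on PATH.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return meets_requirement(out)
```

On an RTX 3090, `nvidia-smi` reports 24576 MiB, so `check_gpu()` returns True.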
Note: The following results are based only on my day-to-day workflows on an RTX 3090. My goal was to run private models that could beat GPT-4o and Claude-3.5 at code debugging and generation, so I could 'load balance' between OpenAI/Anthropic's free plans and local models to avoid hitting rate limits, and upload as few lines of my code and ideas to their servers as possible.
An example of a complex debugging scenario is one where you build library A on top of library B, which requires library C as a dependency, yet the root cause is a variable in library C. In this case, the following workflow guided me to correctly identify the problem.