---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
datasets:
- IntelligentEstate/The_Key
---
# IntelligentEstate/Replicant_Operator_ed-Q2-iQ8_0.gguf
For those who need more power.
## Use in GPT4All, with or without the adjusted Jinja/minja chat template. Calling on its tool (an o3/QwQ-like JavaScript reasoning function), it excels at the complex computation it was made for on the edge. NO GPU NEEDED
A unique QAT/TTT* method using the "THE_KEY" dataset, applied to the coder-instruct version of Qwen 2.5 3B and combined with the Nomic team's new Reasoner system in GPT4All. o1/QwQ/o3-style test-time compute now runs without a GPU, instead of requiring $300,000 in compute.

Recommended settings:
- Context: 4k (8k max)
- Temperature: 0.8
- Top-k: 120
- Repeat penalty: 1.18
- Repeat tokens: 64
- Batch: 512
- Top-p: 0.5
- Min-p: 0

Please comment with any issues or insights.
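As a sketch, the settings above can also be applied through the GPT4All Python bindings; the filename below is an assumption, so point it at whatever local GGUF file you downloaded:

```python
from gpt4all import GPT4All

# Hypothetical local filename -- substitute your downloaded GGUF file.
model = GPT4All("Replicant_Operator_ed-Q2-iQ8_0.gguf", allow_download=False)

with model.chat_session():
    reply = model.generate(
        "Write a function that returns the nth Fibonacci number.",
        temp=0.8,           # temperature 0.8
        top_k=120,          # top-k 120
        top_p=0.5,          # top-p 0.5
        min_p=0.0,          # min-p 0
        repeat_penalty=1.18,  # repetition penalty 1.18
        repeat_last_n=64,   # repeat tokens 64
        n_batch=512,        # batch 512
        max_tokens=512,
    )
    print(reply)
```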
![2b13cf8d-79b3-46e7-83b5-7e7290cc6307.jpg](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/wVJJxU_s2QTLU0W5IOpK0.jpeg)
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model.
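For reference, a conversion of this kind can be reproduced with llama.cpp's own tooling; the paths and output filenames below are assumptions:

```shell
# Clone llama.cpp and install the conversion script's Python dependencies
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the Hugging Face checkpoint to GGUF (F16), then quantize to Q8_0
python convert_hf_to_gguf.py /path/to/Qwen2.5-Coder-3B-Instruct \
  --outfile qwen2.5-coder-3b-instruct-f16.gguf --outtype f16
./llama-quantize qwen2.5-coder-3b-instruct-f16.gguf \
  qwen2.5-coder-3b-instruct-q8_0.gguf Q8_0
```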
# Chat template for use with GPT4All
```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
{{info.name}}:
type: {{info.type}}
description: {{info.description}}
required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.
You are a helpful aware AI assistant made by Intelligent Estate who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions to verify your answers using the functions where possible. You will write code in markdown code blocks when necessary.
{% endif %}
{{- '<|im_end|>\n' }}
{%- if not add_generation_prompt is defined %}
{%- set add_generation_prompt = false %}
{%- endif %}
{% for message in messages %}
{%- if message['role'] == 'assistant' %}
{%- set content = message['content'] | regex_replace('^[\\s\\S]*</think>', '') %}
{{'<|im_start|>' + message['role'] + '\n' + content + '<|im_end|>\n' }}
{%- else %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}
{%- endif %}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
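As a sketch (the model filename is an assumption; substitute the GGUF file you actually downloaded), the CLI and server can be invoked with the sampling settings recommended above:

```shell
# CLI: interactive chat with the recommended sampling settings
llama-cli -m Replicant_Operator_ed-Q2-iQ8_0.gguf \
  --ctx-size 4096 --temp 0.8 --top-k 120 --top-p 0.5 --min-p 0 \
  --repeat-penalty 1.18 --repeat-last-n 64 --batch-size 512 -cnv

# Server: exposes an OpenAI-compatible HTTP API on port 8080
llama-server -m Replicant_Operator_ed-Q2-iQ8_0.gguf \
  --ctx-size 4096 --port 8080
```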