---
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
---
## Introduction
FLock Web3 Agent Model is a specialized LLM designed to address complex queries in the Web3 ecosystem, with a focus on DeFi, blockchain interoperability, and on-chain analytics. The model excels at function-calling reasoning, enabling it to break down intricate user requests into actionable steps, interact with external APIs, and provide data-driven insights for Web3 applications. It is tailored for users ranging from developers and researchers to investors navigating the decentralized landscape.
## Requirements
We advise you to use the latest version of `transformers`.
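For example, you can upgrade with pip (this assumes a standard pip-based environment; `accelerate` is required by `transformers` for the `device_map="auto"` loading used below):
```bash
pip install -U transformers torch accelerate
```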
## Quickstart
Given a query and a list of available tools, the model generates function calls that use the provided tools to answer the query.
**Example query and tools format**
```python
input_example = {
    "query": "Track crosschain message verification, implement timeout recovery procedures.",
    "tools": [
        {"type": "function", "function": {"name": "track_crosschain_message", "description": "Track the status of a crosschain message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}}}}},
        {"type": "function", "function": {"name": "schedule_timeout_check", "description": "Schedule a timeout check for a message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}, "timeout": {"type": "integer"}}}}},
    ],
}
```
**Function calling generation**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json

model_name = "flock-io/Flock_Web3_Agent_Model"

# Load the model and tokenizer; device_map="auto" places weights on available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Expose the available tools to the model through the system prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required -"
     + json.dumps(input_example["tools"], ensure_ascii=False)},
    {"role": "user", "content": input_example["query"]}
]

# Render the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=3000
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
The output is a string containing a JSON-formatted list of function calls:
```
[
{"name": "track_crosschain_message", "arguments": {"message_id": "msg12345"}},
{"name": "schedule_timeout_check", "arguments": {"message_id": "msg12345", "timeout": "30"}}
]
```
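Since `response` is plain text, you can parse it with `json.loads` and dispatch each call to your own backend. A minimal sketch, assuming the model returns valid JSON; the two handler functions here are hypothetical stand-ins, not part of the model or any Web3 SDK:
```python
import json

# Illustrative stand-in implementations for the two tools above.
def track_crosschain_message(message_id: str) -> dict:
    return {"message_id": message_id, "status": "pending"}

def schedule_timeout_check(message_id: str, timeout: int) -> dict:
    return {"message_id": message_id, "timeout_scheduled": int(timeout)}

HANDLERS = {
    "track_crosschain_message": track_crosschain_message,
    "schedule_timeout_check": schedule_timeout_check,
}

# Note: the model may emit numeric arguments as strings (e.g. "timeout": "30"),
# so coerce types where your schema expects integers.
calls = json.loads(response)
for call in calls:
    handler = HANDLERS[call["name"]]
    result = handler(**call["arguments"])
    print(call["name"], "->", result)
```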