Fan9494 committed on
Commit 4d77a8a · verified · 1 Parent(s): bb4d401

Create readme file

Files changed (1): README.md (+66 -0)
README.md ADDED
---
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
---

## Introduction

Flock Web3 Agent Model is designed to handle **function call** queries in the **web3 domain**.

## Requirements

We advise you to use the latest version of `transformers`.
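
If `transformers` is not installed yet, or an older version is present, it can be upgraded with:

```bash
pip install --upgrade transformers
```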

## Quickstart

Given a query and a list of available tools, the model generates function calls that use the provided tools to answer the query.

**Example query and tools format**

```python
input_example = {
    "query": "Track crosschain message verification, implement timeout recovery procedures.",
    "tools": [
        {"type": "function", "function": {"name": "track_crosschain_message", "description": "Track the status of a crosschain message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}}}}},
        {"type": "function", "function": {"name": "schedule_timeout_check", "description": "Schedule a timeout check for a message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}, "timeout": {"type": "integer"}}}}}
    ]
}
```
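
For orientation, a response to the example above might look like the snippet below. This is purely illustrative: the exact output format (and the `message_id` value) is an assumption, not something documented by this model card.

```python
# Illustrative only: one possible function call the model might emit
# for the example query. The JSON schema and "msg-001" are assumptions.
expected_response = '{"name": "track_crosschain_message", "arguments": {"message_id": "msg-001"}}'
```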

**Function calling generation**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json

model_name = "flock-io/Flock_Web3_Agent_Model"

# Load the model weights and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The available tools go into the system prompt; the query is the user message
messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required -"
        + json.dumps(input_example["tools"], ensure_ascii=False)},
    {"role": "user", "content": input_example["query"]}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens so only the completion remains
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=3000
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

`response` now holds the decoded text containing the generated function call(s) for the query.
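
The model card does not pin down the exact shape of `response`, but function-calling models commonly emit a JSON object naming the function and its arguments. As a minimal sketch, assuming the output parses as `{"name": ..., "arguments": {...}}`, it could be dispatched to local handlers like this (both handler functions here are hypothetical stand-ins, not part of this repository):

```python
import json

# Hypothetical local implementations of the two advertised tools
def track_crosschain_message(message_id: str) -> dict:
    return {"message_id": message_id, "status": "pending"}

def schedule_timeout_check(message_id: str, timeout: int) -> dict:
    return {"message_id": message_id, "timeout": timeout, "scheduled": True}

DISPATCH = {
    "track_crosschain_message": track_crosschain_message,
    "schedule_timeout_check": schedule_timeout_check,
}

# Assumes `response` is a single JSON object such as
# {"name": "track_crosschain_message", "arguments": {"message_id": "msg-001"}}
call = json.loads(response)
result = DISPATCH[call["name"]](**call["arguments"])
print(result)
```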