### GenericAgent-AgentTrek-1.0-32b

This agent is the GenericAgent implementation from AgentLab.

- **Base Model:**

  - Qwen/Qwen2.5-32B-Instruct
- **Architecture:**

  - Type: Causal Language Model
  - Training Stage: Pretraining & Post-training
  - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  - Number of Parameters: 32.5B
  - Number of Parameters (Non-Embedding): 31.0B
  - Number of Layers: 64
  - Number of Attention Heads (GQA): 40 for Q and 8 for KV
- **Input/Output Format:**

  - The agent is run with the following AgentLab prompt flags:
    ```python
    flags=GenericPromptFlags(
        obs=ObsFlags(
            use_html=True,
            use_ax_tree=True,
            use_tabs=False,
            use_focused_element=False,
            use_error_logs=True,
            use_history=True,
            use_past_error_logs=False,
            use_action_history=True,
            use_think_history=False,
            use_diff=False,
            html_type='pruned_html',
            use_screenshot=False,
            use_som=False,
            extract_visible_tag=False,
            extract_clickable_tag=False,
            extract_coords='False',
            filter_visible_elements_only=False,
            openai_vision_detail='auto',
            filter_with_bid_only=False,
            filter_som_only=False
        ),
        action=ActionFlags(
            action_set=HighLevelActionSetArgs(
                subsets=('miniwob_all',),
                multiaction=False,
                strict=False,
                retry_with_force=True,
                demo_mode='off'
            ),
            long_description=False,
            individual_examples=False,
            multi_actions=None,
            is_strict=None
        ),
        use_plan=False,
        use_criticise=False,
        use_thinking=True,
        use_memory=True,
        use_concrete_example=True,
        use_abstract_example=True,
        use_hints=False,
        enable_chat=False,
        max_prompt_tokens=40000,
        be_cautious=True,
        extra_instructions=None,
        add_missparsed_messages=True,
        max_trunc_itr=20,
        flag_group=None
    )
    ```
- **Training Details:**

  - Dataset used: [AgentTrek-6K](https://agenttrek.github.io)
  - Training duration: 3 epochs
- **Paper Link:**

  - https://arxiv.org/abs/2412.09605
- **Code Repository:**

  - https://agenttrek.github.io
- **License:**

  - Apache 2.0
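As a quick illustration of the grouped-query attention (GQA) layout listed above (40 query heads, 8 KV heads, 64 layers), the sketch below computes the query-to-KV head ratio and a rough per-token KV-cache footprint. The head dimension of 128 is an assumption consistent with the Qwen2.5-32B family, not a figure stated on this card.

```python
# GQA layout from the card: 40 query heads, 8 KV heads, 64 layers.
n_layers = 64
n_q_heads = 40
n_kv_heads = 8
head_dim = 128  # assumed head dimension; not stated on this card

# Each KV head is shared by a fixed-size group of query heads.
group_size = n_q_heads // n_kv_heads  # 5 query heads per KV head

# Per-token KV-cache values (keys + values) across all layers,
# stored in fp16 (2 bytes per value).
kv_values = 2 * n_layers * n_kv_heads * head_dim
kv_bytes_fp16 = 2 * kv_values

print(f"{group_size} query heads per KV head")
print(f"{kv_bytes_fp16 / 1024:.0f} KiB of KV cache per token (fp16)")
```

With full multi-head attention (40 KV heads) the cache would be 5x larger, which is the main serving-time benefit of the GQA configuration.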