---
license: apache-2.0
datasets:
- sujet-ai/Sujet-Finance-Instruct-177k
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- finance
- topic classification
- sentiment analysis
- qa
---

# Introducing Sujet Finance 8B v0.1 πŸš€

## A Specialized Financial Language Model Fine-Tuned on the Sujet-Finance-Instruct-177k Dataset


<img src="7b.jpg" width="400" height="200">

Welcome to the exciting world of **Sujet Finance 8B v0.1** – your go-to language model for all things finance! πŸ’° This state-of-the-art model is a fine-tuned version of the powerful LLAMA 3 model, meticulously trained on the comprehensive Sujet Finance Instruct-177k dataset. πŸ“ˆ

### 🎯 Fine-Tuning Focus

In this initial fine-tuning iteration, we've focused on three key financial tasks:

1. βœ…βŒ Yes/No Questions
   - Description: This task involves answering financial questions that require a simple "yes" or "no" response.
   - Class Distribution:
     - Train Set: 5,265 "yes" examples, 5,302 "no" examples
     - Eval Set: 1,340 "yes" examples, 1,303 "no" examples

2. πŸ“‚ Topic Classification
   - Description: The model classifies financial texts into specific finance-related categories such as company news, markets, earnings, and more.
   - Class Distribution:
     - Train Set: Roughly balanced across 20 classes, with 29-40 examples per class
     - Eval Set: Varies across classes, ranging from 4 to 15 examples per class

3. 😊😐😑 Sentiment Analysis
   - Description: This task involves analyzing financial texts to categorize sentiments as positive, negative, neutral, bearish, or bullish.
   - Class Distribution:
     - Train Set: 1,160 positive, 1,155 negative, 1,150 neutral, 1,133 bearish, and 1,185 bullish examples
     - Eval Set: 281 positive, 286 negative, 291 neutral, 308 bearish, and 256 bullish examples
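
To see exactly what these tasks look like, you can pull the training data directly. Here is a minimal sketch with the `datasets` library; the column names are not documented in this card, so inspect the schema before filtering by task:

```python
from datasets import load_dataset

# Load the instruction dataset used for fine-tuning (~177k rows).
dataset = load_dataset("sujet-ai/Sujet-Finance-Instruct-177k", split="train")

# Inspect the schema and a sample row; check the actual column names
# before filtering down to the three tasks described above.
print(dataset.column_names)
print(dataset[0])
```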



### Inference code

**This model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth). Please refer to their GitHub repository and make sure Unsloth is installed before running the code below.**

```python
from unsloth import FastLanguageModel

max_seq_length = 2048   # context length used for inference
dtype = None            # None = auto-detect (float16 or bfloat16 depending on GPU)
load_in_4bit = False    # set to True for 4-bit quantized loading

# Alpaca-style prompt template used during fine-tuning.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "sujet-ai/Sujet-Finance-8B-v0.1",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = "your hf token here",
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

example = {
    'system_prompt': 'You are a financial sentiment analysis expert. Your task is to analyze the sentiment expressed in the given financial text.Only reply with bearish, neutral, or bullish.',
    'user_prompt': "Expedia's Problems Run Deeper Than SEO Headwinds",
    'answer': 'bearish',
}

inputs = tokenizer(
    [alpaca_prompt.format(
        example['system_prompt'],  # instruction
        example['user_prompt'],    # input
        "",                        # output - leave blank for generation!
    )],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=2048, use_cache=True, pad_token_id=tokenizer.eos_token_id)
output = tokenizer.batch_decode(outputs)[0]
response = output.split("### Response:")[1].strip()
print(response)
```

**You can find more information about the dataset here: [Sujet-Finance-Instruct-177k Dataset](https://huggingface.co/datasets/sujet-ai/Sujet-Finance-Instruct-177k)**

Our model has been carefully trained to excel in these areas, providing accurate and insightful responses to your financial queries. πŸ’‘

### πŸŽ“ Training Methodology

To ensure optimal performance, we've employed a balanced training approach. Our dataset preparation process strategically selects an equal number of examples from each subclass within the three focus tasks. This results in a well-rounded model that can handle a diverse range of financial questions and topics. 🧠

The final balanced training dataset consists of 17,036 examples, while the evaluation dataset contains 4,259 examples.
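
For illustration, the balancing step described above amounts to per-class downsampling. A minimal sketch in pandas; the `label_col` value and the sentiment example in the comment are assumptions for the demo, not the dataset's documented schema:

```python
import pandas as pd

def balance_by_class(df: pd.DataFrame, label_col: str, seed: int = 42) -> pd.DataFrame:
    """Downsample every class to the size of the smallest class."""
    n = df[label_col].value_counts().min()
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=n, random_state=seed))
          .reset_index(drop=True)
    )

# e.g. balance the sentiment subset so that positive, negative, neutral,
# bearish, and bullish are (roughly) equally represented:
# sentiment_balanced = balance_by_class(sentiment_df, label_col="answer")
```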

### πŸ”§ Model Specifications

- Base Model: LLAMA 3 8B πŸ¦™
- Fine-Tuning Technique: LoRA (Low-Rank Adaptation)
  - r = 16
  - alpha = 32
- Learning Rate: 2e-4 πŸ“ˆ
- Weight Decay: 0.01 πŸ‹οΈβ€β™‚οΈ
- Epochs: 1 πŸ”„
- Quantization: float16 for vLLM 🗜️
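
For reference, these hyperparameters map onto Unsloth's LoRA setup roughly as follows. This is a sketch, not the exact training script: the target modules and the trainer settings mentioned in the comments are common defaults, not confirmed details of this run.

```python
from unsloth import FastLanguageModel

# Load the Llama 3 8B base model.
base, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Meta-Llama-3-8B",
    max_seq_length = 2048,
    dtype = None,
)

# Attach LoRA adapters with the r/alpha values listed above.
model = FastLanguageModel.get_peft_model(
    base,
    r = 16,
    lora_alpha = 32,
    lora_dropout = 0,
    bias = "none",
    # A common choice of projection layers to adapt; assumed, not confirmed.
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Training would then run for 1 epoch, e.g. with trl's SFTTrainer and
# learning_rate=2e-4, weight_decay=0.01.
```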

### πŸ“Š Evaluation Results

We've put our model to the test, comparing its performance against the base LLAMA 3 model on our evaluation dataset. The results are impressive! πŸ†

We consider a response correct if the true answer appears within the first 10 words generated by the model. This strict criterion ensures that our model not only provides accurate answers but also prioritizes the most relevant information. 🎯
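
Concretely, that criterion can be implemented in a few lines. A minimal sketch, assuming case-insensitive matching:

```python
def is_correct(response: str, true_answer: str, window: int = 10) -> bool:
    """Return True if the true answer appears within the first `window` words."""
    first_words = " ".join(response.lower().split()[:window])
    return true_answer.lower() in first_words

# e.g. is_correct("Bearish. The article points to deeper problems.", "bearish") -> True
```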

<img src="eval.jpg" width="400" height="200">