Update README.md

In this initial fine-tuning iteration, we've focused on three key financial tasks:
- Train Set: 1,160 positive, 1,155 negative, 1,150 neutral, 1,133 bearish, and 1,185 bullish examples
- Eval Set: 281 positive, 286 negative, 291 neutral, 308 bearish, and 256 bullish examples
**You can find more information about the dataset by clicking on this link: [Sujet-Finance-Instruct-177k Dataset](https://huggingface.co/datasets/sujet-ai/Sujet-Finance-Instruct-177k)**
Our model has been carefully trained to excel in these areas, providing accurate and insightful responses to your financial queries.
### Training Methodology
To ensure optimal performance, we've employed a balanced training approach. Our dataset preparation process strategically selects an equal number of examples from each subclass within the three focus tasks. This results in a well-rounded model that can handle a diverse range of financial questions and topics.
The final balanced training dataset consists of 17,036 examples, while the evaluation dataset contains 4,259 examples.
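The balancing step described above can be sketched as follows. This is only an illustration, not the repository's actual preparation script: the record layout (a `"subclass"` key per example) and the function name are assumptions.

```python
import random
from collections import defaultdict

def balance_by_subclass(examples, per_class, seed=0):
    """Select the same number of examples from every subclass.

    `examples` is a list of dicts with a "subclass" key -- a sketch of
    the balanced-sampling idea, not the actual pipeline code.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for ex in examples:
        by_class[ex["subclass"]].append(ex)
    balanced = []
    for label in sorted(by_class):  # deterministic class order
        items = list(by_class[label])
        rng.shuffle(items)
        balanced.extend(items[:per_class])
    rng.shuffle(balanced)  # mix classes before training
    return balanced
```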
### Model Specifications

- Base Model: Llama 3 8B
- Fine-Tuning Technique: LoRA (Low-Rank Adaptation)
  - r = 16
  - alpha = 32
- Learning Rate: 2e-4
- Weight Decay: 0.01
- Epochs: 1
- Quantization: float16 for vLLM
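The LoRA hyperparameters above map onto a `peft` `LoraConfig` roughly as shown below. Only `r` and `lora_alpha` come from the specifications; `target_modules`, `lora_dropout`, and `bias` are assumptions (the usual attention/MLP projections for Llama-style models), not values stated in this README.

```python
from peft import LoraConfig

# r and lora_alpha from the Model Specifications list; everything else
# is an assumed, conventional choice for Llama-style models.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```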
### Evaluation Results

We've put our model to the test, comparing its performance against the base Llama 3 model on our evaluation dataset. The results are impressive!
We consider a response correct if the true answer appears within the first 10 words generated by the model. This strict criterion ensures that our model not only provides accurate answers but also prioritizes the most relevant information.
<img src="eval.jpg" width="400" height="200">
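The first-10-words criterion can be expressed as a small check like the one below. This is a simplified sketch of the rule as described (case-insensitive substring match on the whitespace-split prefix); the actual evaluation script may handle punctuation and multi-word answers differently.

```python
def is_correct(response, true_answer, window=10):
    """Correct iff the true answer appears within the first `window`
    words of the generated response (the criterion described above)."""
    prefix = " ".join(response.split()[:window]).lower()
    return true_answer.lower() in prefix
```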
### Inference code
```python
# (model, tokenizer, and inputs are prepared earlier in the full snippet)
outputs = model.generate(**inputs, max_new_tokens=2048, use_cache=True,
                         pad_token_id=tokenizer.eos_token_id)  # pad_token_id value assumed; truncated in source
output = tokenizer.batch_decode(outputs)[0]
response = output.split("### Response:")[1].strip()
print(response)
```
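The `output.split("### Response:")` step only works if the prompt itself ends with that marker. A hypothetical Alpaca-style template that would satisfy it is sketched below; the template text and function name are assumptions, as the repository's actual prompt format is not shown in this excerpt.

```python
# Hypothetical prompt template -- the split on "### Response:" in the
# snippet above relies on the prompt ending with that marker.
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction, input_text=""):
    """Build a prompt whose tail is the "### Response:" marker."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text)
```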