Update README.md

pipeline_tag: text-generation
---

## Model Details

**Model Name:** UMA-IA/LLM_Engine_Finetuned_Aero
**Authors:**
- **Youri LALAIN**, Engineering student at French Engineering School ECE
- **Lilian RAGE**, Engineering student at French Engineering School ECE

# Mistral-7B Fine-tuned on Aerospace Engines

LLM_Engine_Finetuned_Aero is a specialized fine-tuned version of Mistral-7B designed to provide accurate and detailed answers to technical questions related to aerospace and aeronautical engines. The model leverages the SpaceYL/Aerospatial_Dataset to enhance its understanding of complex engineering principles, propulsion systems, and aerospace technologies.

## Capabilities
- Technical Q&A on aerospace and aeronautical engines
- Assisting in aerospace-related R&D projects

## Training Details
This model was fine-tuned on UMA-IA/UMA_Dataset_Engine_Aero_LLM, a curated dataset focusing on aerospace engines, propulsion systems, and general aeronautical engineering. The fine-tuning process was performed using supervised learning to adapt Mistral-7B to technical discussions.
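
As a starting point for reproducing or extending the fine-tuning, the dataset can be inspected with the `datasets` library. The snippet below is a minimal, illustrative sketch: it only loads the corpus and prints its splits and columns, since the exact schema is not documented in this card.

```python
from datasets import load_dataset

# Load the fine-tuning corpus from the Hugging Face Hub.
dataset = load_dataset("UMA-IA/UMA_Dataset_Engine_Aero_LLM")

# Inspect the available splits and their columns before wiring up a training loop;
# the exact schema is not documented in this model card.
print(dataset)
for split_name, split in dataset.items():
    print(split_name, split.features)
```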

## How to Use
You can load the model using Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "UMA-IA/LLM_Engine_Finetuned_Aero"

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
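
For a quick end-to-end check of the Q&A behavior described under Capabilities, generation could look like the following sketch. The prompt wording and generation settings are illustrative assumptions, not an officially recommended template for this model.

```python
# Illustrative usage: the prompt wording and generation settings below are assumptions,
# not an official prompt template for this model.
prompt = "Explain the main differences between a turbojet and a turbofan engine."

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```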