Youlln committed 346d601 (verified) · parent: 6c38756

Update README.md

Files changed (1): README.md (+6 −6)

README.md CHANGED
@@ -4,7 +4,7 @@ language:
 - fr
 license: mit
 datasets:
-- UMA-IA/UMA_Dataset_Engine_Aero_LLM
+- UMA-IA/VELA-Engine-v1
 base_model: mistralai/Mistral-7B-v0.1
 tags:
 - aerospace
@@ -17,20 +17,20 @@ pipeline_tag: text-generation
 
 ## Model Details
 
-**Model Name:** UMA-IA/LLM_Engine_Finetuned_Aero
+**Model Name:** UMA-IA/CENTAURUS-Engine-v1
 **Authors:**
 - **Youri LALAIN**, Engineering student at French Engineering School ECE
 - **Lilian RAGE**, Engineering student at French Engineering School ECE
 
 **Base Model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
-**Fine-tuned Dataset:** [UMA-IA/UMA_Dataset_Engine_Aero_LLM](https://huggingface.co/datasets/UMA-IA/UMA_Dataset_Engine_Aero_LLM)
+**Fine-tuned Dataset:** [UMA-IA/VELA-Engine-v1](https://huggingface.co/datasets/UMA-IA/UMA_Dataset_Engine_Aero_LLM)
 **License:** Apache 2.0
 
 ## Model Description
 
 # Mistral-7B Fine-tuned on Aerospace Engines
 
-LLM_Engine_Finetuned_Aero is a specialized fine-tuned version of Mistral-7B designed to provide accurate and detailed answers to technical questions related to aerospace and aeronautical engines. The model leverages the UMA-IA/UMA_Dataset_Engine_Aero_LLM dataset to enhance its understanding of complex engineering principles, propulsion systems, and aerospace technologies.
+UMA-IA/CENTAURUS-Engine-v1 is a specialized fine-tuned version of Mistral-7B designed to provide accurate and detailed answers to technical questions related to aerospace and aeronautical engines. The model leverages the UMA-IA/UMA_Dataset_Engine_Aero_LLM dataset to enhance its understanding of complex engineering principles, propulsion systems, and aerospace technologies.
 
 ## Capabilities
 - Technical Q&A on aerospace and aeronautical engines
@@ -43,7 +43,7 @@ LLM_Engine_Finetuned_Aero is a specialized fine-tuned version of Mistral-7B desi
 - Assisting in aerospace-related R&D projects
 
 ## Training Details
-This model was fine-tuned on UMA-IA/UMA_Dataset_Engine_Aero_LLM, a curated dataset focusing on aerospace engines, propulsion systems, and general aeronautical engineering. The fine-tuning process was performed using supervised learning to adapt Mistral-7B to technical discussions.
+This model was fine-tuned on UMA-IA/VELA-Engine-v1, a curated dataset focusing on aerospace engines, propulsion systems, and general aeronautical engineering. The fine-tuning process was performed using supervised learning to adapt Mistral-7B to technical discussions.
 
 
 ## How to Use
@@ -52,7 +52,7 @@ You can load the model using Hugging Face's `transformers` library:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "UMA-IA/LLM_Engine_Finetuned_Aero"
+model_name = "UMA-IA/CENTAURUS-Engine-v1"
 
 model = AutoModelForCausalLM.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
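
The card's snippet stops after loading the model and tokenizer. A minimal sketch of going from a question to generated text is below; the model name comes from the card, but the prompt format, the `build_prompt`/`ask` helper names, and the generation settings are illustrative assumptions — the card documents no prompt template or decoding parameters.

```python
# Sketch of end-to-end usage for the fine-tuned model (name from the card).
MODEL_NAME = "UMA-IA/CENTAURUS-Engine-v1"

def build_prompt(question: str) -> str:
    """Wrap a technical question in a plain instruction prompt.

    The base model is not chat-tuned and the card documents no template,
    so this Question/Answer format is an assumption.
    """
    return f"Question: {question.strip()}\nAnswer:"

def ask(question: str, max_new_tokens: int = 200) -> str:
    """Generate an answer with greedy decoding (illustrative settings only)."""
    # Imported here so build_prompt stays usable without torch/transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (downloads the ~7B-parameter weights on first use):
# print(ask("How does bypass ratio affect turbofan fuel efficiency?"))
```

Keeping the heavy `transformers` import inside `ask` means the prompt helper can be reused or tested without loading any weights.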