---
language:
  - en
  - fr
license: mit
datasets:
  - UMA-IA/UMA_Dataset_Engine_Aero_LLM
base_model: mistralai/Mistral-7B-v0.1
tags:
  - aerospace
  - aeronautics
  - engineering
  - technical-QA
pipeline_tag: text-generation
---

## Model Details

**Model Name:** UMA-IA/LLM_Engine_Finetuned_Aero

**Authors:**

- Youri LALAIN, engineering student at the French engineering school ECE
- Lilian RAGE, engineering student at the French engineering school ECE

**Base Model:** Mistral-7B-v0.1

**Fine-tuning Dataset:** UMA-IA/UMA_Dataset_Engine_Aero_LLM

**License:** Apache 2.0

## Model Description

**Mistral-7B fine-tuned for aerospace engines**

LLM_Engine_Finetuned_Aero is a specialized fine-tuned version of Mistral-7B designed to provide accurate and detailed answers to technical questions about aerospace and aeronautical engines. The model leverages the UMA-IA/UMA_Dataset_Engine_Aero_LLM dataset to enhance its understanding of complex engineering principles, propulsion systems, and aerospace technologies.

## Capabilities

- Technical Q&A on aerospace and aeronautical engines
- Analysis and explanation of propulsion system components
- Assistance in understanding aerospace engineering concepts

## Use Cases

- Aerospace research and engineering support
- Educational use by students and professionals
- Support for aerospace-related R&D projects

## Training Details

This model was fine-tuned on UMA-IA/UMA_Dataset_Engine_Aero_LLM, a curated dataset covering aerospace engines, propulsion systems, and general aeronautical engineering. Fine-tuning was performed with supervised learning to adapt Mistral-7B to technical discussions in this domain.
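The exact prompt template used during fine-tuning is not documented in this card. As an illustration only, a minimal sketch of how question-answer pairs from such a dataset might be serialized into training strings for supervised learning (the `format_example` helper and the `### Question:` / `### Answer:` markers are assumptions, not the authors' published template):

```python
# Hypothetical prompt template for supervised fine-tuning.
# The section markers below are assumptions; this card does not
# document the actual template used by the authors.
def format_example(question: str, answer: str) -> str:
    """Turn one Q&A pair from the dataset into a single training string."""
    return f"### Question:\n{question}\n\n### Answer:\n{answer}"

# Example Q&A pair of the kind found in an aerospace-engine dataset
sample = format_example(
    "What does the bypass ratio of a turbofan measure?",
    "The ratio of air mass flow around the core to the flow through it.",
)
print(sample)
```

Each formatted string would then be tokenized and used as a standard causal-language-modeling training example.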

## How to Use

You can load the model with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "UMA-IA/LLM_Engine_Finetuned_Aero"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Ask a technical question and generate an answer
input_text = "Explain the working principle of a turbofan engine."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```