Gaurav069 committed
Commit e9fbb0f · verified · 1 parent: 2460c03

Upload 15 files

Damage Propagation Modeling.pdf ADDED
Binary file (434 kB).
 
app.py ADDED
@@ -0,0 +1,222 @@
+ # import libraries
+ import streamlit as st
+ import joblib
+ import numpy as np
+ import pandas as pd
+
+ st.set_page_config(
+     page_title="NASA Turbofan Playground",
+     page_icon="🧊",
+     initial_sidebar_state="expanded",
+     menu_items={
+         'Get Help': 'https://www.extremelycoolapp.com/help',
+         'Report a bug': "https://www.extremelycoolapp.com/bug",
+         'About': "# This is a header. This is an *extremely* cool app!"
+     }
+ )
+
+ # Set the background image
+ background_image = """
+ <style>
+ [data-testid="stAppViewContainer"] > .main {
+     background-image: url("https://w.wallhaven.cc/full/q6/wallhaven-q6vpxr.png");
+     background-size: 100vw 100vh; /* cover 100% of the viewport width and height */
+     background-position: center;
+     background-repeat: no-repeat;
+ }
+ </style>
+ """
+
+ st.markdown(background_image, unsafe_allow_html=True)
+
+
+ # Title with rainbow transition effect and neon glow
+ html_code = """
+ <div class="title-container">
+     <h1 class="neon-text">
+         NASA Turbofan Playground
+     </h1>
+ </div>
+
+ <style>
+ @keyframes rainbow-text-animation {
+     0% { color: red; }
+     16.67% { color: orange; }
+     33.33% { color: yellow; }
+     50% { color: green; }
+     66.67% { color: blue; }
+     83.33% { color: indigo; }
+     100% { color: violet; }
+ }
+
+ .title-container {
+     text-align: center;
+     margin: 1em 0;
+     padding-bottom: 10px;
+     border-bottom: 4px solid #fcdee9; /* pale-pink underline */
+ }
+
+ .neon-text {
+     font-family: "Times New Roman", serif;
+     font-size: 4em;
+     margin: 0;
+     animation: rainbow-text-animation 5s infinite linear;
+     text-shadow: 0 0 5px rgba(255, 255, 255, 0.8),
+                  0 0 10px rgba(255, 255, 255, 0.7),
+                  0 0 20px rgba(255, 255, 255, 0.6),
+                  0 0 40px rgba(255, 0, 255, 0.6),
+                  0 0 80px rgba(255, 0, 255, 0.6),
+                  0 0 90px rgba(255, 0, 255, 0.6),
+                  0 0 100px rgba(255, 0, 255, 0.6),
+                  0 0 150px rgba(255, 0, 255, 0.6);
+ }
+ </style>
+ """
+
+ st.markdown(html_code, unsafe_allow_html=True)
+ st.divider()
+
+ st.markdown(
+     """
+     <style>
+     .success-message {
+         font-family: Arial, sans-serif;
+         font-size: 24px;
+         color: green;
+         text-align: left;
+     }
+     .wng_txt {
+         font-family: Arial, sans-serif;
+         font-size: 24px;
+         color: yellow;
+         text-align: left;
+     }
+     .unsuccess-message {
+         font-family: Arial, sans-serif;
+         font-size: 22px;
+         color: red;
+         text-align: left;
+     }
+     .prompt-message {
+         font-family: Arial, sans-serif;
+         font-size: 24px;
+         color: #333;
+         text-align: center;
+     }
+     .success-message2 {
+         font-family: Arial, sans-serif;
+         font-size: 18px;
+         color: white;
+         text-align: left;
+     }
+     .inf_txt {
+         font-family: Arial, sans-serif;
+         font-size: 28px;
+         color: white;
+         text-align: left;
+     }
+     .message-box {
+         text-align: center;
+         background-color: white;
+         padding: 5px;
+         border-radius: 10px;
+         box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
+         font-size: 24px;
+         color: #333;
+     }
+     </style>
+     """,
+     unsafe_allow_html=True
+ )
+
+
+ # Load the trained ML model
+ model = joblib.load('random_forest_model.joblib')
+
+ # Feature columns with their min/max values and descriptions (C-MAPSS sensor set)
+ columns = {
+     'cycles': {'min': 1, 'max': 362, 'type': 'int', 'desc': 'Number of operational cycles the turbofan engine has undergone.'},
+     'T24': {'min': 641.21, 'max': 644.53, 'type': 'float', 'desc': 'Total temperature at Low-Pressure Compressor (LPC) outlet (sensor measurement).'},
+     'T30': {'min': 1571.04, 'max': 1616.91, 'type': 'float', 'desc': 'Total temperature at High-Pressure Compressor (HPC) outlet (sensor measurement).'},
+     'T50': {'min': 1382.25, 'max': 1441.49, 'type': 'float', 'desc': 'Total temperature at Low-Pressure Turbine (LPT) outlet (sensor measurement).'},
+     'P30': {'min': 549.85, 'max': 556.06, 'type': 'float', 'desc': 'Pressure at HPC outlet (sensor measurement).'},
+     'Nf': {'min': 2387.90, 'max': 2388.56, 'type': 'float', 'desc': 'Physical fan speed (shaft rotational speed).'},
+     'Nc': {'min': 9021.73, 'max': 9244.59, 'type': 'float', 'desc': 'Physical core speed (shaft rotational speed).'},
+     'Ps30': {'min': 46.85, 'max': 48.53, 'type': 'float', 'desc': 'Static pressure at HPC outlet (sensor measurement).'},
+     'phi': {'min': 518.69, 'max': 523.38, 'type': 'float', 'desc': 'Ratio of fuel flow to Ps30 (calculated parameter).'},
+     'NRf': {'min': 2387.88, 'max': 2388.56, 'type': 'float', 'desc': 'Corrected fan speed (shaft rotational speed).'},
+     'NRc': {'min': 8099.94, 'max': 8293.72, 'type': 'float', 'desc': 'Corrected core speed (shaft rotational speed).'},
+     'BPR': {'min': 8.3249, 'max': 8.5848, 'type': 'float', 'desc': 'Bypass ratio (mass of air bypassing the engine core relative to the mass passing through it).'},
+     'htBleed': {'min': 388.00, 'max': 400.00, 'type': 'float', 'desc': 'Bleed enthalpy (energy lost to air bled off for use in other aircraft systems).'},
+     'W31': {'min': 38.14, 'max': 39.43, 'type': 'float', 'desc': 'High-Pressure Turbine (HPT) coolant bleed flow (sensor measurement).'},
+     'W32': {'min': 22.8942, 'max': 23.6184, 'type': 'float', 'desc': 'Low-Pressure Turbine (LPT) coolant bleed flow (sensor measurement).'}
+ }
+
+ # Sidebar inputs
+ st.sidebar.title("NASA Turbofan Playground")
+ inputs = {}
+ for col, limits in columns.items():
+     if limits['type'] == 'int':
+         inputs[col] = st.sidebar.number_input(col, min_value=limits['min'], max_value=limits['max'], value=(limits['min'] + limits['max']) // 2, step=1, help=limits['desc'])
+     else:
+         inputs[col] = st.sidebar.number_input(col, min_value=limits['min'], max_value=limits['max'], value=(limits['min'] + limits['max']) / 2, help=limits['desc'])
+
+ def find_anomalous_sensor():
+     """Return the input whose value deviates most from the nearer range bound."""
+     max_deviation = 0
+     anomaly_sensor = None
+     for col, limits in columns.items():
+         deviation = min(abs(inputs[col] - limits['min']), abs(inputs[col] - limits['max']))
+         if deviation > max_deviation:
+             max_deviation = deviation
+             anomaly_sensor = col
+     return anomaly_sensor
+
+ # Main page
+ st.markdown('<p class="message-box">Turbofan Engine RUL Prediction</p>', unsafe_allow_html=True)
+ st.markdown('<p class="success-message2">Provide the necessary inputs in the sidebar to predict the Remaining Useful Life (RUL) of the turbofan engine.</p>', unsafe_allow_html=True)
+
+ # Prepare the input for prediction (the model expects a 2D array)
+ input_data = [[inputs[col] for col in columns]]
+
+ # Predict the RUL
+ if st.sidebar.button("Predict RUL"):
+     prediction = model.predict(input_data)
+     predicted_rul = int(prediction[0])
+     input_cycles = inputs['cycles']
+
+     # Fraction of engine life already consumed (guard against a zero prediction)
+     percentage_rul = input_cycles / predicted_rul if predicted_rul > 0 else float('inf')
+     st.divider()
+     st.subheader(f"The predicted Remaining Useful Life (RUL) is: {predicted_rul} cycles")
+     st.divider()
+
+     # Determine engine health status
+     if percentage_rul >= 1:
+         st.markdown('<p class="unsuccess-message">The predicted RUL is less than the current cycle count; this may be due to a wrong prediction or sensor anomalies.</p>', unsafe_allow_html=True)
+         st.markdown(f'<p class="unsuccess-message">Potential anomaly detected in sensor: {find_anomalous_sensor()}.</p>', unsafe_allow_html=True)
+     elif percentage_rul < 0.4:
+         st.markdown('<p class="success-message">More than 60% of useful life remains; engine health is excellent.</p>', unsafe_allow_html=True)
+     elif percentage_rul < 0.6:
+         st.markdown('<p class="success-message">More than 40% of useful life remains; engine health is fine.</p>', unsafe_allow_html=True)
+     elif percentage_rul < 0.8:
+         st.markdown('<p class="wng_txt">Warning: 60% to 80% of useful life has been consumed; the engine needs to be checked.</p>', unsafe_allow_html=True)
+     else:
+         st.markdown('<p class="unsuccess-message">Warning: remaining useful life is critical (less than 20%); engine maintenance is required.</p>', unsafe_allow_html=True)
+         # Identify the sensor most likely responsible for the anomaly
+         st.markdown(f'<p class="unsuccess-message">Potential anomaly detected in sensor: {find_anomalous_sensor()}.</p>', unsafe_allow_html=True)
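
For a quick sanity check outside Streamlit, the prediction path can be exercised directly. This is a minimal sketch, assuming random_forest_model.joblib was fitted on the 15 features in the column order defined above; the sample values are arbitrary mid-range picks, not real data.

# Hedged sketch: load the committed model and predict one RUL value.
# Assumes the model was fitted on the 15 features in the order:
# cycles, T24, T30, T50, P30, Nf, Nc, Ps30, phi, NRf, NRc, BPR, htBleed, W31, W32
import joblib

model = joblib.load("random_forest_model.joblib")
sample = [[180, 642.5, 1590.0, 1410.0, 553.0, 2388.2, 9100.0, 47.5,
           521.0, 2388.2, 8200.0, 8.45, 394.0, 38.8, 23.2]]  # arbitrary mid-range values
print(model.predict(sample))  # prints the predicted RUL in cycles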
best_lstm_model.keras ADDED
Binary file (599 kB).
 
evaluationer.py ADDED
@@ -0,0 +1,151 @@
+ # importing libraries
+ import pandas as pd
+ import numpy as np
+ from sklearn.model_selection import train_test_split
+ from sklearn.metrics import root_mean_squared_error, r2_score, mean_squared_error, root_mean_squared_log_error, mean_absolute_error, mean_squared_log_error
+ from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score, average_precision_score
+
+ # running results table for regression models
+ reg_evaluation_df = pd.DataFrame({"evaluation_df_method": [],  # label for this evaluation run
+                                   "model": [],                 # regression model used
+                                   "method": [],                # evaluation metric used
+                                   "train_r2": [],              # train R2 score
+                                   "test_r2": [],               # test R2 score
+                                   "adjusted_r2_train": [],     # adjusted R2 score for train
+                                   "adjusted_r2_test": [],      # adjusted R2 score for test
+                                   "train_evaluation": [],      # train score under the chosen metric
+                                   "test_evaluation": []})      # test score under the chosen metric
+
+ # running results table for classification models
+ classification_evaluation_df = pd.DataFrame({"evaluation_df_method": [],
+                                              'model': [],
+                                              'train_f1': [],
+                                              'test_f1': [],
+                                              'train_acc': [],
+                                              'test_acc': [],
+                                              'precision_train': [],
+                                              'precision_test': [],
+                                              'recall_train': [],
+                                              'recall_test': []})
+
+ # lookup table of supported regression metrics
+ method_df = pd.DataFrame(data=[root_mean_squared_error, root_mean_squared_log_error, mean_absolute_error, mean_squared_error, mean_squared_log_error],
+                          index=["root_mean_squared_error", "root_mean_squared_log_error", "mean_absolute_error", "mean_squared_error", "mean_squared_log_error"])
+
+ # fit a model, score it, and append the results to the running evaluation table
+ def evaluation(evaluation_df_method, X_train, X_test, y_train, y_test, model, method, eva):
+     global y_pred_train, y_pred_test
+     model.fit(X_train, y_train)            # model fitting
+     y_pred_train = model.predict(X_train)  # model prediction for train
+     y_pred_test = model.predict(X_test)    # model prediction for test
+
+     if eva == "reg":
+         train_r2 = r2_score(y_train, y_pred_train)  # R2 score for train
+         test_r2 = r2_score(y_test, y_pred_test)     # R2 score for test
+
+         n_r_train, n_c_train = X_train.shape  # number of rows and columns of train data
+         n_r_test, n_c_test = X_test.shape     # number of rows and columns of test data
+
+         adj_r2_train = 1 - ((1 - train_r2) * (n_r_train - 1) / (n_r_train - n_c_train - 1))  # adjusted R2 for train
+         adj_r2_test = 1 - ((1 - test_r2) * (n_r_test - 1) / (n_r_test - n_c_test - 1))       # adjusted R2 for test
+
+         train_evaluation = method(y_train, y_pred_train)  # train error
+         test_evaluation = method(y_test, y_pred_test)     # test error
+
+         a = method.__name__  # metric name, e.g. "root_mean_squared_error"
+
+         # declaring global dataframes
+         global reg_evaluation_df, temp_df
+
+         # temporary dataframe to concat later into the main evaluation dataframe
+         temp_df = pd.DataFrame({"evaluation_df_method": [evaluation_df_method],
+                                 "model": [model],
+                                 "method": [a],
+                                 "train_r2": [train_r2],
+                                 "test_r2": [test_r2],
+                                 "adjusted_r2_train": [adj_r2_train],
+                                 "adjusted_r2_test": [adj_r2_test],
+                                 "train_evaluation": [train_evaluation],
+                                 "test_evaluation": [test_evaluation]})
+         reg_evaluation_df = pd.concat([reg_evaluation_df, temp_df]).reset_index(drop=True)
+
+         return reg_evaluation_df  # returning evaluation_df
+
+     elif eva == "class":
+         unique_classes = np.unique(y_train)
+
+         # Determine the averaging method for the classification metrics
+         if len(unique_classes) == 2:
+             # Binary classification
+             print("Using 'binary' average for binary classification.")
+             average_method = 'binary'
+         else:
+             # Determine the distribution of the target column
+             class_counts = np.bincount(y_train)
+
+             # Check if the dataset is imbalanced
+             imbalance_ratio = max(class_counts) / min(class_counts)
+
+             if imbalance_ratio > 1.5:
+                 print("Using 'weighted' average due to imbalanced dataset.")
+                 average_method = 'weighted'
+             else:
+                 print("Using 'macro' average due to balanced dataset.")
+                 average_method = 'macro'
+
+         # F1 scores
+         train_f1_scores = f1_score(y_train, y_pred_train, average=average_method)
+         test_f1_scores = f1_score(y_test, y_pred_test, average=average_method)
+
+         # Accuracies
+         train_accuracies = accuracy_score(y_train, y_pred_train)
+         test_accuracies = accuracy_score(y_test, y_pred_test)
+
+         # Precisions
+         train_precisions = precision_score(y_train, y_pred_train, average=average_method)
+         test_precisions = precision_score(y_test, y_pred_test, average=average_method)
+
+         # Recalls
+         train_recalls = recall_score(y_train, y_pred_train, average=average_method)
+         test_recalls = recall_score(y_test, y_pred_test, average=average_method)
+
+         # declaring global dataframes
+         global classification_evaluation_df, temp_df1
+
+         # temporary dataframe to concat later into the main evaluation dataframe
+         temp_df1 = pd.DataFrame({"evaluation_df_method": [evaluation_df_method],
+                                  'model': [model],
+                                  'train_f1': [train_f1_scores],
+                                  'test_f1': [test_f1_scores],
+                                  'train_acc': [train_accuracies],
+                                  'test_acc': [test_accuracies],
+                                  'precision_train': [train_precisions],
+                                  'precision_test': [test_precisions],
+                                  'recall_train': [train_recalls],
+                                  'recall_test': [test_recalls]})
+         classification_evaluation_df = pd.concat([classification_evaluation_df, temp_df1]).reset_index(drop=True)
+
+         return classification_evaluation_df  # returning evaluation_df
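
A possible usage sketch for evaluation(): fit one regressor and log its RMSE. The CSV path and the "RUL" target column are assumptions about the processed data, not confirmed by this commit.

# Hypothetical usage of evaluationer.evaluation (path and target column assumed).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import root_mean_squared_error
import evaluationer

df = pd.read_csv("processed_data/train_FD001.csv")
X, y = df.drop(columns=["RUL"]), df["RUL"]  # assumes an "RUL" target column
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

results = evaluationer.evaluation("baseline", X_train, X_test, y_train, y_test,
                                  RandomForestRegressor(), root_mean_squared_error, eva="reg")
print(results)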
images/image.png ADDED
main.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
models.py ADDED
@@ -0,0 +1,70 @@
+ import pandas as pd
+ import numpy as np
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+ # import algorithms for classification
+ from sklearn.linear_model import LogisticRegression, SGDClassifier, RidgeClassifier
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, HistGradientBoostingClassifier
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.tree import DecisionTreeClassifier
+ from sklearn.svm import SVC
+ from xgboost import XGBClassifier, XGBRFClassifier
+ from sklearn.neural_network import MLPClassifier
+ from lightgbm import LGBMClassifier
+ from sklearn.naive_bayes import MultinomialNB, CategoricalNB
+ # import algorithms for regression
+ from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge, Lasso, ElasticNet
+ from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor, HistGradientBoostingRegressor
+ from sklearn.neighbors import KNeighborsRegressor
+ from sklearn.tree import DecisionTreeRegressor
+ from sklearn.svm import SVR
+ from xgboost import XGBRegressor, XGBRFRegressor
+ from sklearn.neural_network import MLPRegressor
+ from lightgbm import LGBMRegressor
+ from sklearn.naive_bayes import GaussianNB
+
+
+ # dictionary mapping algorithm names to classifier instances
+ algos_class = {
+     "Logistic Regression": LogisticRegression(),
+     "SGD Classifier": SGDClassifier(),
+     "Ridge Classifier": RidgeClassifier(),
+     "Random Forest Classifier": RandomForestClassifier(),
+     "AdaBoost Classifier": AdaBoostClassifier(),
+     "Gradient Boosting Classifier": GradientBoostingClassifier(),
+     "Hist Gradient Boosting Classifier": HistGradientBoostingClassifier(),
+     "K Neighbors Classifier": KNeighborsClassifier(),
+     "Decision Tree Classifier": DecisionTreeClassifier(),
+     "SVC": SVC(),
+     "XGB Classifier": XGBClassifier(),
+     "XGBRF Classifier": XGBRFClassifier(),
+     "MLP Classifier": MLPClassifier(),
+     "LGBM Classifier": LGBMClassifier(),
+     "Multinomial Naive Bayes": MultinomialNB(),
+     "Categorical Naive Bayes": CategoricalNB()}
+
+ # dictionary mapping algorithm names to regressor instances
+ algos_reg = {
+     "Linear Regression": LinearRegression(),
+     "Ridge Regressor": Ridge(),
+     "Lasso Regressor": Lasso(),
+     "ElasticNet Regressor": ElasticNet(),
+     "Random Forest Regressor": RandomForestRegressor(),
+     "AdaBoost Regressor": AdaBoostRegressor(),
+     "Gradient Boosting Regressor": GradientBoostingRegressor(),
+     "Hist Gradient Boosting Regressor": HistGradientBoostingRegressor(),
+     "K Neighbors Regressor": KNeighborsRegressor(),
+     "Decision Tree Regressor": DecisionTreeRegressor(),
+     "XGB Regressor": XGBRegressor(),
+     "XGBRF Regressor": XGBRFRegressor(),
+     "MLP Regressor": MLPRegressor(),
+     "LGBM Regressor": LGBMRegressor(),
+     "Gaussian Naive Bayes": GaussianNB()}
+
+ # dataframes indexed by algorithm name, with the instance in an "algorithm" column
+
+ Classification_models = pd.DataFrame(data=list(algos_class.values()), index=list(algos_class.keys()), columns=["algorithm"])
+
+ Regression_models = pd.DataFrame(data=list(algos_reg.values()), index=list(algos_reg.keys()), columns=["algorithm"])
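
These lookup tables pair naturally with evaluationer.evaluation; a hedged sketch of sweeping every regressor over one split (X_train/X_test/y_train/y_test as produced in the earlier sketch — this loop is an assumption about intended use, not code from this commit):

# Hypothetical sweep: score every regressor in models.algos_reg.
from sklearn.metrics import root_mean_squared_error
import evaluationer, models

for name, algo in models.algos_reg.items():
    # each call appends one row to the shared reg_evaluation_df and returns it
    results = evaluationer.evaluation(name, X_train, X_test, y_train, y_test,
                                      algo, root_mean_squared_error, eva="reg")
print(results.sort_values("test_evaluation"))  # lowest test RMSE first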
processed_data/test_FD001.csv ADDED
The diff for this file is too large to render. See raw diff
 
processed_data/train_FD001.csv ADDED
The diff for this file is too large to render. See raw diff
 
random_forest_model.joblib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d0771ed913e1d71c04a998ad8010a96ee823e25e834f47c938e6ed5eff624bb
+ size 28841249
raw_data/RUL_FD001.txt ADDED
@@ -0,0 +1,100 @@
+ 112
+ 98
+ 69
+ 82
+ 91
+ 93
+ 91
+ 95
+ 111
+ 96
+ 97
+ 124
+ 95
+ 107
+ 83
+ 84
+ 50
+ 28
+ 87
+ 16
+ 57
+ 111
+ 113
+ 20
+ 145
+ 119
+ 66
+ 97
+ 90
+ 115
+ 8
+ 48
+ 106
+ 7
+ 11
+ 19
+ 21
+ 50
+ 142
+ 28
+ 18
+ 10
+ 59
+ 109
+ 114
+ 47
+ 135
+ 92
+ 21
+ 79
+ 114
+ 29
+ 26
+ 97
+ 137
+ 15
+ 103
+ 37
+ 114
+ 100
+ 21
+ 54
+ 72
+ 28
+ 128
+ 14
+ 77
+ 8
+ 121
+ 94
+ 118
+ 50
+ 131
+ 126
+ 113
+ 10
+ 34
+ 107
+ 63
+ 90
+ 8
+ 9
+ 137
+ 58
+ 118
+ 89
+ 116
+ 115
+ 136
+ 28
+ 38
+ 20
+ 85
+ 55
+ 128
+ 137
+ 82
+ 59
+ 117
+ 20
raw_data/test_FD001.txt ADDED
The diff for this file is too large to render. See raw diff
 
raw_data/train_FD001.txt ADDED
The diff for this file is too large to render. See raw diff
 
readme.txt ADDED
@@ -0,0 +1,45 @@
+ Data Set: FD001
+ Train trajectories: 100
+ Test trajectories: 100
+ Conditions: ONE (Sea Level)
+ Fault Modes: ONE (HPC Degradation)
+
+ Data Set: FD002
+ Train trajectories: 260
+ Test trajectories: 259
+ Conditions: SIX
+ Fault Modes: ONE (HPC Degradation)
+
+ Data Set: FD003
+ Train trajectories: 100
+ Test trajectories: 100
+ Conditions: ONE (Sea Level)
+ Fault Modes: TWO (HPC Degradation, Fan Degradation)
+
+ Data Set: FD004
+ Train trajectories: 248
+ Test trajectories: 249
+ Conditions: SIX
+ Fault Modes: TWO (HPC Degradation, Fan Degradation)
+
+
+
+ Experimental Scenario
+
+ The data set consists of multiple multivariate time series, each divided into training and test subsets. Each time series is from a different engine, i.e., the data can be considered to be from a fleet of engines of the same type. Each engine starts with a different degree of initial wear and manufacturing variation that is unknown to the user. This wear and variation is considered normal, i.e., it is not a fault condition. Three operational settings that have a substantial effect on engine performance are also included in the data. The data is contaminated with sensor noise.
+
+ The engine operates normally at the start of each time series and develops a fault at some point during the series. In the training set, the fault grows in magnitude until system failure. In the test set, the time series ends some time prior to system failure. The objective of the competition is to predict the number of remaining operational cycles before failure in the test set, i.e., the number of operational cycles after the last cycle that the engine will continue to operate. A vector of true Remaining Useful Life (RUL) values is also provided for the test data.
+
+ The data are provided as a zip-compressed text file with 26 columns of numbers, separated by spaces. Each row is a snapshot of data taken during a single operational cycle; each column is a different variable. The columns correspond to:
+ 1) unit number
+ 2) time, in cycles
+ 3) operational setting 1
+ 4) operational setting 2
+ 5) operational setting 3
+ 6) sensor measurement 1
+ 7) sensor measurement 2
+ ...
+ 26) sensor measurement 26
+
+
+ Reference: A. Saxena, K. Goebel, D. Simon, and N. Eklund, "Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation", in the Proceedings of the 1st International Conference on Prognostics and Health Management (PHM08), Denver CO, Oct 2008.
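
Given the column layout above, the raw text files can be loaded with pandas. A minimal sketch, assuming the usual C-MAPSS reading that columns 6 onward hold 21 sensor channels; the column names and the per-row RUL labeling are illustrative conventions, not part of this commit.

# Hedged sketch: load raw_data/train_FD001.txt per the documented layout and
# derive per-row RUL labels (assumed column names).
import pandas as pd

cols = ["unit", "time"] + [f"op_setting_{i}" for i in range(1, 4)] \
       + [f"sensor_{i}" for i in range(1, 22)]  # 26 columns in total
train = pd.read_csv("raw_data/train_FD001.txt", sep=r"\s+", header=None, names=cols)

# RUL at each cycle = (last cycle observed for that unit) - (current cycle)
train["RUL"] = train.groupby("unit")["time"].transform("max") - train["time"]
print(train.head())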
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ streamlit==1.36.0
+ joblib==1.4.2
+ numpy==1.26.4
+ pandas==2.2.2
+ scikit-learn==1.4.2
+ tensorflow==2.16.1
+ matplotlib==3.9.0
+ seaborn==0.13.2
+ # imported by models.py (versions unpinned in this commit)
+ xgboost
+ lightgbm