principal subspace (PS) or signal subspace, while the subspace covered by the MCs is called the minor subspace (MS) or noise subspace. The PCs and MCs result from converging matrix differential equations derived from the principal and minor component analyzers of the symmetric matrix (Kong et al. 2017). Since the PC is the direction associated with the eigenvector of the largest eigenvalue of the autocorrelation matrix of the data vectors, and the MC is the direction associated with the smallest eigenvalue of that matrix, batch eigenvalue decomposition (ED) of the sample correlation matrix or singular value decomposition (SVD) of the data matrix can be used for PCA and MCA.

The concept of PCA can be described mathematically. The goal of PCA is to reduce the dimensionality of the matrix $X \in \mathbb{R}^{N \times D}$ to a matrix $Z \in \mathbb{R}^{N \times M}$ by projecting $X$ into a lower-dimensional space $M$, where $D \gg M$:

$$X = [x_1 \; x_2 \; x_3 \ldots x_N]^T, \quad X \in \mathbb{R}^{N \times D} \tag{3.7}$$

$$Z = [z_1 \; z_2 \; z_3 \ldots z_N]^T, \quad Z \in \mathbb{R}^{N \times M} \tag{3.8}$$

Using a transformation matrix $U$, the dimensionality of $X$ is reduced from $D$ to $M$:

$$Z = XU, \quad U \in \mathbb{R}^{D \times M} \tag{3.9}$$

The covariance matrix of $Z$ is

$$S_Z = \frac{1}{N} Z^T Z, \quad S_Z \in \mathbb{R}^{M \times M} \tag{3.10}$$

The optimization maximizes the covariance matrix $S_Z$, and an upper boundary condition is added to avoid an infinite number of solutions:

$$\max_u \; S_Z \tag{3.11}$$

$$\max_u \; \frac{1}{N} Z^T Z \quad \text{where } Z = XU \tag{3.12}$$

$$\max_u \; \frac{1}{N} (XU)^T (XU) \tag{3.13}$$

$$\max_u \; \frac{1}{N} U^T X^T X U \quad \text{where } X^T X = S_x \tag{3.14}$$

$$\max_u \; \frac{1}{N} U^T S_x U \tag{3.15}$$
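As an illustration of Equations 3.7 through 3.15, the minimal sketch below (not part of the thesis code; the synthetic array shapes are assumptions) computes the projection Z = XU from the eigenvectors of the scaled covariance matrix using NumPy.

```python
import numpy as np

def pca_project(X, M):
    """Project an (N x D) data matrix onto its first M principal components.

    A minimal sketch of Equations 3.7-3.15: the columns of U are the
    eigenvectors of (1/N) X^T X associated with the M largest eigenvalues,
    and Z = X U is the reduced representation.
    """
    Xc = X - X.mean(axis=0)               # center the data
    S = Xc.T @ Xc / Xc.shape[0]           # S_x, the scaled covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)  # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]     # sort eigenvalues, largest first
    U = eigvecs[:, order[:M]]             # U in R^{D x M}
    Z = Xc @ U                            # Z in R^{N x M} (Equation 3.9)
    return Z, U, eigvals[order]

# Example with synthetic data: reduce D = 7 columns to M = 4 components
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7))
Z, U, eigvals = pca_project(X, M=4)
print(Z.shape)  # (1000, 4)
```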
$$\mathrm{Cov}(X) = \frac{1}{N} \sum_{n=1}^{N} x_n x_n^T \tag{3.35}$$

The equation above shows that $X^T X$ is the scaled version of the covariance matrix of the design matrix $X$. The matrix $V$ contains the eigenvectors of the design matrix, and the matrix $D$ contains its eigenvalues. By applying the same process as eigenvalue decomposition, the information represented by each eigenvalue can be calculated.

3.3.2 The Results of PCA

The fundamental concept of principal component analysis is explained with mathematical expressions in the previous section. As mentioned before, the main objective of PCA is to measure the variance within the data set and provide insight into how the variance contribution changes with each principal component. The measurement of the variance contribution of each principal component is called explained variance analysis. PCA can also be used to indicate the variance contribution of each variable by measuring the amount of information transferred to each principal component. This process is called feature importance analysis, and it can be used to indicate the influence of each variable on the principal components. In this study, both explained variance analysis and feature importance analysis are applied to the complete data set to indicate the variance explained by each PC and how these PCs are influenced by each variable in the data set. The implementation of PCA helped to gain insight into how the variance within the data set changes with each variable, and that insight eventually led to changes in this study. PCA is applied only to the training data set, as the objective is to study the variance contribution of each variable. These variables are MSE, ROP, Torque, N-FPI, WOB, RPM, and Depth.

3.3.2.1 Explained Variance Analysis

Explained variance can be defined as the variance retained by each eigenvalue and eigenvector described by the covariance matrix of the principal components. The values retained by each principal component are calculated using the scikit-learn library (Pedregosa et al. 2011). The explained variance of each component is indicated in Figure 3.5. As can be observed in Figure 3.5, PC1 reflects the highest variance within the dataset, since the eigenvalues and eigenvectors are sorted after applying eigenvalue decomposition. The observation from this graph is the uniform decrease in explained variance across the principal components.
3.3.2.2 Feature Importance Analysis

A feature matrix indicates the importance of each feature, reflected by the magnitude of the corresponding values in the eigenvectors. The numbers shown in the feature matrix range from -1 to 1 and indicate the influence of each feature on each principal component; a higher magnitude indicates a more significant influence. The feature matrix of the training data set is presented in Figure 3.7. The coefficient values reflect the influence of each variable on the principal components. A calculated coefficient can be negative or positive, since these values indicate a distance (i.e., a magnitude) from the orthogonal axes set for each principal component. The observation from the feature matrix is the significant influence of the variables on PC7. Commonly, the degree of influence decreases with each added principal component, but, similar to the anomaly observed in the explained variance percentages, the influence of the variables on the principal components does not decrease. In particular, the influence of MSE and Torque on PC7 is high, which indicates that valuable information from the original data set will be lost if fewer than seven PCs are used to train the algorithm.

The explained variance and feature importance analyses indicated that a possible implementation of PCA to reduce dimensionality would result in the loss of valuable information. This observation is simple yet essential, as it impacted this study and led to changes. A detailed discussion on the possible loss of information is given in Chapter 4.
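For reference, the two analyses described above are typically produced with scikit-learn (Pedregosa et al. 2011) as in the sketch below; the synthetic DataFrame stands in for the actual training set, so the column list is the only piece taken from the text.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the seven drilling parameters
features = ["MSE", "ROP", "Torque", "N-FPI", "WOB", "RPM", "Depth"]
rng = np.random.default_rng(42)
train_df = pd.DataFrame(rng.normal(size=(5000, len(features))), columns=features)

# Standardize, then fit PCA with as many components as features
X = StandardScaler().fit_transform(train_df)
pca = PCA(n_components=len(features)).fit(X)

# Explained variance analysis: fraction of variance retained by each PC
explained = pd.Series(pca.explained_variance_ratio_,
                      index=[f"PC{i + 1}" for i in range(len(features))])
print(explained)

# Feature importance analysis: loading (feature) matrix, values in [-1, 1]
feature_matrix = pd.DataFrame(pca.components_.T,
                              index=features,
                              columns=explained.index)
print(feature_matrix.round(2))
```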
CHAPTER 4
MODELING AND TESTING

Once the negative values are removed and the data set is re-indexed successfully, the drilling parameters are fed into the Random Forest regression algorithm as a training data set. The objective is to develop a regression model that estimates unconfined compressive strength from drilling parameters. This chapter covers the machine learning methods used and the model performance evaluation methods applied while building the regression model. The final architecture of the Random Forest regression algorithm is also explained.

4.1 Machine Learning Methods

Machine learning is stated as the science of programming computers so they can learn from data (Geron 2019). Machine learning can be a valuable asset for solving extremely complicated problems that cannot be explained using traditional physics-based models. Due to the complex and dynamic nature of drilling operations, machine learning methods that predict geomechanical and operational parameters perform better than traditional models, which commonly rely on several assumptions. Machine learning methods can be grouped into two main categories: supervised learning and unsupervised learning. The main distinction between supervised and unsupervised learning methods is the data set used to train the models. Supervised learning methods use labeled data sets for classification and regression problems, whereas unsupervised learning methods use unlabeled data sets for clustering, association, and dimensionality reduction problems.

This study uses a supervised learning method, the Random Forest regression algorithm, to estimate UCS instantaneously from drilling parameters. The algorithm built includes two main sections. The first section consists of the main Random Forest regression algorithm and hyper-parameter tuning tools for the RF model to ensure the best fit while avoiding overfitting. These hyper-parameter tuning tools are called Randomized-Search and Grid-Search, and they have a built-in K-fold cross-validation function. The K-fold cross-validation function is essential while tuning hyper-parameters to ensure the best bias-variance trade-off. The second section includes common model performance evaluation techniques such as root-mean-squared-error and mean-absolute-error.
In this chapter, the most common regression models and performance evaluation techniques are evaluated, the algorithm's final architecture is provided, and the potential implementation of the model for drilling operations is discussed.

4.1.1 Regression Models

Regression problems can be defined as problems that require a quantitative approach to solve. A quantitative relation between one or more dependent or independent features can be measured using regression models. Regression models can be used to solve simple or complex relations between one or multiple features. The complexity of the regression model increases with the increasing complexity of the relations between the features and the target values. This relation can be as simple as linear, which can be solved with a first-degree polynomial equation, or incredibly complex, requiring more sophisticated methods such as decision trees. More information regarding these sophisticated methods is given as four different regression models are evaluated in this section.

4.1.1.1 Linear Regression

Linear regression is the most common and simplest regression method; it can be defined as the simplest supervised machine learning algorithm for regression problems. The lack of complexity of linear regression can be an advantage as long as the relation between the dependent and independent features can be explained with a first-degree polynomial equation. Linear regression is still a robust and practical approach to building simple quantitative models. The main linear regression types can be divided into simple and multiple linear regression. Simple linear regression is a powerful regression method when a parameter is estimated using only one predictor or feature, aka a single independent variable. Even though simple and multiple linear regression methods follow similar techniques, multiple linear regression provides the complexity needed to estimate a parameter from more than one feature. The multiple linear regression model is required to estimate UCS from drilling parameters, as more than one feature is present in the data set. A mathematical description of the multiple regression method is given below in Equation 4.1:

$$Y = \beta_0 + (\beta_1 X_1) + (\beta_2 X_2) + \beta_3 (X_1 X_2) + \beta_4 (X_1 X_2 X_3) + \ldots + \epsilon \tag{4.1}$$

where:

Y: Output Parameter or Predictor
X_i: Input Parameters or Features
β_i: Regression Coefficients
ε: Error

Here, β_i can be defined as the average impact of X_i or X_i · X_{i+1} on Y.

The goal of implementing a multiple regression model is to estimate the regression coefficients (β_i) for each input parameter or feature while satisfying the requirement of having the minimum sum of squared residuals. This simple approach performs well while building a robust multiple regression model, but it is essential to measure the feature importance to recognize the relation between features and predictors. By recognizing the relation between features and the predictor, features with a small impact on the predictor can be eliminated. The most common method to estimate feature importance is the null hypothesis, which essentially replaces each regression coefficient with zero to identify the importance of each feature. Mathematically, the null hypothesis for a model with n features can be described as (James et al. 2013)

$$H_0: \beta_0 = \beta_1 = \beta_2 = \ldots = \beta_n = 0 \quad \text{or} \quad H_a: \beta_i \neq 0 \tag{4.2}$$

The hypothesis is tested by performing the F-statistic. The F-statistic is defined as (James et al. 2013)

$$F = \frac{(TSS - RSS)/p}{RSS/(n - p - 1)} \quad \text{where} \quad TSS = \sum (y_i - \bar{y})^2 \quad \text{and} \quad RSS = \sum (y_i - \hat{y}_i)^2 \tag{4.3}$$

In Equation 4.3, $\hat{y}_i$ refers to the predicted Y at the i-th value of X, $\bar{y}$ refers to the mean of the y values, $y_i$ refers to the actual i-th value of y, and n and p refer to statistical factors describing the F-statistic distribution. TSS is the measurement of the total variance in the response Y, and RSS is the amount of variation left unexplained after applying the regression (James et al. 2013). Here, the p-value indicates the importance of a feature. If the p-value is low after implementing the null hypothesis for a feature, the feature has a significant impact on the output, whereas the feature has little or no impact if the p-value is high. A process called backward elimination can be applied to implement the null hypothesis for each feature. The process of backward elimination is described below in Figure 4.1.
Figure 4.1: The backward elimination flowchart

Even though the multiple linear regression method is robust and can be used for complex regression problems, it is not complex enough to cover the patterns in drilling data due to the dynamic and complex nature of drilling.

4.1.1.2 Polynomial Regression

Polynomial regression is defined as adapting a linear regression model to fit a non-linear relation using power features. Polynomial regression can be described as:

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \epsilon_i \tag{4.4}$$

Polynomial regression can be a robust method to fit a non-linear model. The process of fitting a non-linear model is the same as for a linear regression model, but it should be noted that polynomial regression models tend to overfit, which is essentially memorizing patterns in the training data set and replicating the same results. Overfitting causes the model to be non-generic and results in a model that can only work with the same training data set. This can be avoided by implementing cross-validation methods, which are explained later in this chapter. Overfitting can be observed clearly for the 300th-degree polynomial regression case in Figure 4.2. A linear regression, a 2nd-degree polynomial regression, and a 300th-degree polynomial regression model fit the same training data set to observe possible overfitting. For the 300th-degree polynomial regression case, the model recognizes even insignificant patterns in the data set, which results in a model that will perform poorly if another data set is used to estimate the target values.
Figure 4.2: Linear regression, 2nd-degree, and 300th-degree polynomial regression models on the same training dataset (Geron 2019). ©2019 Kiwisoft S.A.S. Published by O'Reilly Media, Inc. Used with permission.

4.1.1.3 Support Vector Regression

Support Vector Machines (SVM) are a type of supervised ML algorithm that sorts the data into two categories based on classification differences and uses these two categories to identify differences within the dataset. Even though SVM is commonly used for classification problems, such as image classification and recognition of handwritten characters, it can be a robust and efficient algorithm for regression problems. Support Vector Regression (SVR) is an extension of SVM that is used to bring a solution to regression problems. Similar to SVM, SVR creates (n-1) linear or non-linear hyper-planes based on a predefined function or kernel, with bounds a distance ε away from these hyper-planes. SVR estimates target values based on the data points lying outside of the margins set at a distance ε from the hyper-plane, and feeding the model with more data points placed within the margin boundaries does not necessarily impact the estimations (Geron 2019). The mathematical approach to the SVR formulation can best be described from a geometrical perspective by approximating a continuous-valued function for one-dimensional data (Awad and Khanna 2015). An example model of one-dimensional linear SVR is given in Figure 4.3. As can be observed in Figure 4.3, the possible support vectors are determined by the data points outside of the margin set at a distance ε from the hyper-plane.
$$y = f(x) = \langle w, x \rangle + b = \sum_{j=1}^{M} w_j x_j + b \quad \text{where } y, b \in \mathbb{R} \text{ and } x, w \in \mathbb{R}^M \tag{4.5}$$

Figure 4.3: One-dimensional linear SVR example (Awad and Khanna 2015)

4.1.1.4 Random Forest Regression

Random forest is a tree-based model that can be a useful method for building efficient regression models. Decision tree and random forest models follow the same splitting rules based on the information gained by splitting the training data set. Detailed information about decision tree methods and random forest is provided in Sections 2.3 and 2.3.1, respectively. Decision tree models can be a robust solution to complex regression problems. Regression tree models split the training data set into branches (leaves) as far as the data set allows. While training decision tree regression models, the data are split into small batches until reaching the terminal node. The predicted value is estimated based on the mean of the terminal node that the input vector activates (Joshi 2021). Figure 4.4 clearly shows how the data are split among leaves. The estimation process of ROP based on WOB and RPM can be observed in the decision tree below. In Figure 4.4, the relation between ROP, WOB, and RPM is shown as the data set is split into small batches by applying various split rules (Joshi 2021).
By using the decision tree below, ROP = 160 ft/hour can be predicted for WOB = 5 klb and RPM = 150.

Figure 4.4: (Left) The relation between WOB, ROP, and RPM based on split rules; (Right) The decision tree to predict ROP. Modified from Joshi 2021.

Even though decision tree regression models are powerful tools for building regression models around a vast amount of data, there are several issues regarding their low generalization and high variance. In other words, small changes in the data set can drastically impact predictions. A group of trees can be built to reduce this impact, and the collective decisions made by each tree can be combined to provide a final prediction. This method is called bagging or bootstrapping, and it is explained in detail in Section 2.3. Bagging, or bootstrapping, uses the same batch of the data set to train different predictors to build a group of weak predictors instead of a single strong predictor. Bagging can significantly reduce the variance in predictions, as each prediction delivered by the group of predictors is taken into account and averaged to estimate the final output. Random forest implements bagging as part of its function and splits each predictor's data set based on randomly sampled input features.

4.2 Possible Problems and Solutions

This section presents possible problems regarding the implementation of machine learning models for regression problems. Generally, complex regression problems can be solved using machine learning models, but these models require hyper-parameter tuning to prevent common problems such as low accuracy, underfitting, and overfitting due to bias and variance errors. The difference between the predicted and actual output value is called error, and any supervised machine learning algorithm is compromised by noise, bias, and variance error.
Even though the noise error, aka irreducible error, cannot be eliminated, bias and variance errors can be minimized to a certain degree.

4.2.1 Bias and Variance

Bias is defined as a generalization error caused by incorrect assumptions, such as modeling a 4th-degree polynomial relation as a linear regression (Geron 2019). Bias can also be defined as the difference between predicted and actual values. The model does not learn enough to make accurate predictions if it has a high bias error, and eventually this causes underfitting of the training data. Contrary to bias, variance error is caused by learning the training data too well. Variance error is defined as excess sensitivity of the model to small patterns in the training data set (Geron 2019). A high variance error is generally caused by giving a high degree of freedom to a model, since a model with the freedom to choose its degree of fit has a tendency to fit a higher-order polynomial regression. A high variance error eventually causes overfitting of the training data.

4.2.2 Underfitting and Overfitting

As mentioned in the previous section, underfitting and overfitting are the results of high bias and high variance.

Overfitting is a phenomenon in which a model performs well on the training data set but cannot generalize and performs poorly on different data sets (Geron 2019). A real-life example of overfitting is generalizing the attitude of one bad taxi driver and assuming that every taxi driver will act in the same manner (Geron 2019). Overfitting is caused by high variance error, and it can be described as insignificant variations or patterns in the training data set having a significant impact on the overall learning of the model. An example of overfitting can be observed in Figure 4.5. The model assumed that the training data were more complex than they are due to small variations in the data set, which led to building a high-degree polynomial regression model instead of a simple linear regression model. Some ways to avoid overfitting are re-processing the training data to fix data errors and remove outliers, using a bigger training data set, and simplifying the model by dropping the features that have a small or no impact on the learning process (Geron 2019).
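To make the underfitting/overfitting contrast concrete, the short sketch below (an illustrative example with synthetic data, not the thesis workflow) fits 1st-, 2nd-, and 30th-degree polynomial models to the same noisy sample and compares train and test errors; the thesis figure uses a 300th-degree fit, but the same pattern appears at lower degrees.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Noisy quadratic data as a stand-in for a simple non-linear relation
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * x[:, 0] ** 2 + x[:, 0] + rng.normal(scale=1.0, size=200)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

for degree in (1, 2, 30):  # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree={degree:2d}  train MSE={train_err:.2f}  test MSE={test_err:.2f}")
```

The high-degree model typically shows a much lower train error than test error, which is the variance-driven overfitting pattern described above.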
Figure 4.5: An example of overfitting the training data (Geron 2019). ©2019 Kiwisoft S.A.S. Published by O'Reilly Media, Inc. Used with permission.

As mentioned, underfitting is the opposite of overfitting, and it is caused by high bias error. Underfitting can be defined as the model's lack of complexity due to incorrect assumptions (Geron 2019). Commonly, linear regression models tend to underfit the training data. An example of underfitting can be observed in Figure 4.2. The linear regression shown as the red line in Figure 4.2 indicates that the simple regression model does not cover the variations in the training data well enough to make accurate predictions. Some ways to avoid underfitting are choosing a more complex and robust model with more features, reducing the model's limitations by tuning hyper-parameters, and using better features to train the model (Geron 2019).

4.2.3 Bias and Variance Trade-off

As explained in the previous sections, bias and variance introduce opposite types of error to a regression model. High variance causes overfitting, whereas high bias causes underfitting. A trade-off is required to develop the best approach to limit both bias and variance, as reducing bias requires a more complex model, which in turn increases variance. The relation between bias and variance is visualized in Figure 4.6. There are several methods to achieve optimum model complexity, including cross-validation, bagging, removing outlier data, and hyper-parameter tuning. While building the Random Forest regression model for this study, bagging, cross-validation, and hyper-parameter tuning have been utilized to optimize the model. As mentioned in Section 4.1.1.4, bagging is a built-in function of the Random Forest regression model, and it reduces variance error, which is a common problem of decision-tree regression models.
While applying K-fold cross-validation, the training data set is sub-sampled into k mutually exclusive data batches, and the model is tested with one of the k data batches on each iteration, whereas the remaining (k-1) batches are used to fit the model. K-fold cross-validation implements an iterative approach to test the accuracy of predictions, so every data batch is used as both a training and a test subset. An example implementation of K-fold cross-validation is presented in Figure 4.7 to clarify the concept of cross-validation.

Figure 4.7: K-fold Cross-Validation Example

Both Randomized-Search and Grid-Search are optimization tools developed as built-in functions of the scikit-learn library (Pedregosa et al. 2011). Both methods use an iterative approach to find the best-fitting parameters, as they continuously fit the model based on a preset list of parameters. The advantage of using these methods is the iterative approach of trying out combinations of the listed parameters to achieve optimum accuracy in predictions. The fundamental difference between Randomized-Search and Grid-Search is how the combinations of parameters are selected. With Randomized-Search, the parameters are selected randomly from a range of numbers to fit each individual instance, whereas with Grid-Search the parameters are selected from a list of numbers preset by the author. These methods are powerful tools to improve the model's generalization, as K-fold cross-validation is used for each fit. The implementation of these methods is similar and is presented in Figure 4.8 as a flowchart.
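As a reference for how this search is commonly wired together in scikit-learn, the sketch below tunes a RandomForestRegressor with RandomizedSearchCV followed by GridSearchCV, each using 5-fold cross-validation; the parameter ranges, scoring choice, and synthetic data are assumptions for illustration, not the exact settings used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, RandomizedSearchCV

# Synthetic stand-in for the drilling features and UCS target
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=2000)

cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Broad randomized search first: parameters drawn at random from the lists
random_search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={
        "n_estimators": [25, 35, 50, 100],
        "max_features": ["sqrt", 1.0],
        "max_depth": [10, 20, 40, None],
        "min_samples_split": [2, 5, 9],
        "min_samples_leaf": [1, 2, 4],
    },
    n_iter=20,
    cv=cv,
    scoring="neg_mean_absolute_error",
    random_state=0,
)
random_search.fit(X, y)
print("randomized best:", random_search.best_params_)

# Narrower grid search over an explicit, preset list of combinations
grid_search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={
        "n_estimators": [25, 35, 50],
        "max_depth": [20, 40],
        "min_samples_split": [5, 9],
    },
    cv=cv,
    scoring="neg_mean_absolute_error",
)
grid_search.fit(X, y)
print("grid best:", grid_search.best_params_, grid_search.best_score_)
```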
Figure 4.8: RandomizedSearchCV and GridSearchCV steps of implementation

4.4 Evaluating Performance of the Models

The performance parameters of a regression model can be calculated using several methods. The process of validating a model is simply estimating the performance of the trained model based on the error margin between actual and predicted values. There are several statistical approaches to measure the accuracy of a model. For regression models, the most common methods to evaluate the accuracy are Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) (Geron 2019). This section evaluates three main methods, including RMSE and MAE (Swalin 2018).

Root Mean Squared Error (RMSE): Root mean squared error is the most common method to estimate the accuracy of a regression model. For a regression model, the equation below is used to calculate RMSE. In Equation 4.6, $y_i$ refers to the actual value at the i-th observation, $\hat{y}_i$ refers to the predicted value at the i-th observation, and n refers to the number of predicted values. The RMSE value will be small if the predicted values are close to the actual values.

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} \tag{4.6}$$

Mean Absolute Error (MAE): Mean absolute error is fairly similar to RMSE, and it can be calculated using Equation 4.7. The fundamental difference between RMSE and MAE is that MAE treats all errors equally, whereas RMSE penalizes large errors by squaring them.
$$MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \tag{4.7}$$

R² and Adjusted R²: R² can be defined as an estimate of the variation within the predictions explained by the features fed into the model. The range of R² is between 0 and 1. A higher R² value represents accurate predictions and less variance within the predictions compared to the actual variance within the training dataset. R² can be calculated using Equation 4.8. Adjusted R², on the other hand, accounts for the number of features fed to the model by considering how useful each feature is. If a feature with a significant impact is fed to the model, adjusted R² will increase, whereas it will decrease if a feature with no impact is fed to the model (Swalin 2018).

$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \tag{4.8}$$

The last step of evaluating these methods was choosing the most suitable one. R² and adjusted R² are powerful methods to estimate accuracy, but Ford (2015) argues against using them based on the initial arguments proposed by Shalizi (2015). Ford (2015) argues that R² can report a model's accuracy as close to 1, even though the fitted model is completely wrong, if the variance in the training data set is high. Considering the high variance and noise in the drilling data, R² is considered the least preferred method to measure the model's accuracy. RMSE and MAE were suitable options to measure the accuracy of the UCS estimations. Hence, both RMSE and MAE are used as the performance evaluation methods in this study.

4.5 Final Architecture of the Model

As mentioned before, the final architecture of the regression algorithm trained to estimate UCS from drilling parameters is explained in this section. The process of training the model is described and shown in Figure 4.9. The overall objective of the algorithm is to train a robust Random Forest regression model while optimizing the accuracy of estimation without overfitting the drilling data. The training process is completed in three main steps, and at each step a different function is used. These steps prepare the training data set, optimize the model, and evaluate the final model's performance.
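For the evaluation step, the metrics defined in Section 4.4 (RMSE, MAE, R², and the MAPE-based accuracy used later in Chapter 5) can be computed as in the sketch below; the arrays are placeholders, not results from the thesis models.

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

# Placeholder actual and predicted UCS values
y_true = np.array([52.0, 48.5, 61.2, 55.0, 47.8])
y_pred = np.array([50.9, 49.7, 60.1, 56.3, 46.5])

mae = mean_absolute_error(y_true, y_pred)              # Equation 4.7
rmse = np.sqrt(mean_squared_error(y_true, y_pred))     # Equation 4.6
r2 = r2_score(y_true, y_pred)                          # Equation 4.8
mape = mean_absolute_percentage_error(y_true, y_pred)  # basis of the "overall accuracy"

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}  accuracy={(1 - mape) * 100:.1f}%")
```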
Figure 4.9: The final architecture of the prediction model.

• Preparation of the training dataset: The overall goal of this step is to implement Principal Component Analysis to prepare the training data set, gain insights into the correlation between features, and estimate the variance in the data set explained by each principal component. The data set is also split into three subsets. The first subset is the testing, aka blind test, data set. The blind test data set is 10% of the original data set, and it is never shown to the model until the final model is fit. The main reason to split off the testing data set is to ensure that the model never learns from this subset of the original data; hence, the testing data set can be used to check whether the final model is overfitted. The remaining data set is split into two subsets, training and validation data, which consist of 80% and 20% of the remaining data set, respectively.

• Optimization of the Model: In this step, the best parameters are searched for using Randomized-Search and Grid-Search. These methods continuously fit the model numerous times using a preset list of parameters. Each fit uses the K-fold cross-validation method to avoid overfitting. At the end of each iterative method, the combination of best parameters is chosen. The model is fit one more time based on these best parameters, and the accuracy of predictions is evaluated before initiating the next step.
• Training and Evaluating the Accuracy of the Model: This is the final step of the model. After implementing Randomized-Search and Grid-Search, the best parameters for the final model are selected. Based on these parameters, the final model is fit, and the accuracy of predictions is evaluated. The testing subset split from the original data set is also used to evaluate the performance of the final model. This testing subset is called the blind testing data set, and it is an essential step that helps identify overfitting of the training data.

4.6 Discussion on Challenges and Changes

The path of this study changed with time due to a particular problem: the suspiciously high accuracy of the predictions of the Random Forest regression model trained on the principal components of the drilling parameters. The accuracy of the UCS predictions was as high as 97% on the blind testing data set, which indicated that the model was overfitted. These results were highly unexpected, as bagging and bootstrapping were used while training each decision tree of the Random Forest regression algorithm to reduce variance error. In addition, K-fold cross-validation was used on each model fit while tuning hyper-parameters to reduce both bias and variance error. The initial exploratory data analysis on the drilling data had indicated a lack of variability in UCS, as only seven different UCS values were present; however, this issue was never assumed to cause an overfitting problem while building a regression model. After realizing the overfitting problem, the model was fitted numerous times with different lists of parameters fed into the hyper-parameter tuning tools Randomized-Search and Grid-Search, but the model performed poorly, and the generalization of the model was low. To achieve the study's overall objective, the following hypotheses were tested by conducting comprehensive research to indicate the root cause of the overfitting problem.

• A possible linear correlation between MSE and UCS can lead to perfect predictions.
• A loss of valuable information due to the implementation of PCA can cause overfitting.
• A lack of variability in UCS can reduce the variability of predictions, which can cause overfitting.

The Pearson correlation coefficient between MSE and UCS is calculated to test the first hypothesis. The Pearson correlation coefficient is defined as a measurement of the linear correlation between two variables.
The coefficient value ranges from -1 to 1; a perfect positive correlation is indicated by 1, while a perfect negative correlation is indicated by -1. The Pearson coefficient between MSE and UCS is calculated as -0.189, which indicates no meaningful linear correlation. The first hypothesis is thus tested and eliminated.

The second hypothesis was tested by going through the implementation of PCA on each data subset. These data subsets are data files in feather format. Initially, the data set collected by Joshi (2021) was provided in four different feather files; the data were divided because it was computationally expensive to load the complete data set. The initial feature importance and explained variance analyses were conducted on these four subsets separately. The explained variance retained by the first four principal components was over the threshold value of 85% for all subsets, as observed in Figure 4.10. This value was set as the target explained variance to be retained in this study in order to use principal components of the drilling parameters as a training data set.

Figure 4.10: The Explained Variance Retained by PCs for Datasets 1 to 4.

After analyzing the retained explained variance percentages, a feature importance analysis is conducted to indicate the influence of the variables on each principal component. The feature matrix for each subset is visualized to conduct the feature importance analysis. As a result of the observations from
Figure 4.11 and Figure 4.12, the feature importance analysis indicated that there was small or no influence of the variables on principal components 5, 6, and 7. The initial observations from the explained variance and feature importance analyses indicated that using the principal components of the drilling parameters did not lead to information loss. With these results, theoretically, a similar result should have been observed on the complete data set. However, the observations from the feature importance and explained variance results were significantly different. The observations from Figure 3.5 and Figure 3.7 indicated that the explained variance ratio of four principal components was lower than the threshold value of 85% set for this study. Also, the feature importance analysis indicated a significant influence of the variables on Principal Component 7. These results indicated an information loss introduced by implementing PCA, leading to an overfitting problem. As a result, PCA was taken out of the algorithm to observe changes in the accuracy of predictions. Unfortunately, removing PCA from the algorithm did not solve the problem of high accuracy, as the accuracy of the fitted models was approximately 98%. With this result, the second hypothesis is tested and eliminated. However, PCA is not added back to the algorithm, as information loss from removing Principal Component 7 is observed in the analyses.

Figure 4.11: The Feature Matrix of Datasets 1 - 2.

Testing the last hypothesis was not as straightforward as the other hypotheses, since variability had to be introduced to the target outputs and collecting a new data set was impossible. In addition, the raw data set was not accessible, since the algorithm built by Joshi (2021) was working as a whole
system that takes collected drilling data as an input and predicts Torque values after classifying the raw data as drilling and non-drilling data.

Figure 4.12: The Feature Matrix of Datasets 3 - 4.

Also, high accuracy in UCS predictions was observed by Joshi (2021), who states that the UCS regression model predicted UCS values for analog and cryogenic samples with less than a 2.5% error margin. However, the issue of overfitting UCS did not introduce a significant generalization problem there, as the purpose of the final algorithm built by Joshi (2021) was to indicate drilling dysfunctions while drilling, which is essentially a classification problem. The findings from the previous study conducted with this data set show that another approach should be developed to indicate whether the regression algorithm built for this study is working or not. Potential performance changes in a Multi-Output Random Forest algorithm are studied with the idea of increasing variability by introducing another target value to the model. The study conducted by Linusson (2013) on multi-output Random Forests indicates that the performance of multi-output random forest models should be similar to or the same as that of single-output models. Then, one of the variables should be moved to the targets to increase the variability within the output values. MSE is selected as the variable to move to the target values. As a result of moving MSE to the target values, approximately a 5-10% decrease in average accuracy was observed on every fitted model. With this, the third hypothesis is tested, and the performance change indicated that the lack of variability in the UCS data was leading to perfect predictions and causing the model to overfit the training data.
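A minimal sketch of this multi-output setup: scikit-learn's RandomForestRegressor accepts a two-column target, so UCS and MSE can be predicted jointly by a single forest. The synthetic data, accuracy definition (1 - MAPE), and parameter choices below are assumptions for illustration only, not the thesis configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the six features and two targets
rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 6))                               # Depth, RPM, WOB, FPI, Torque, ROP
Y = X @ rng.normal(size=(6, 2)) + 50.0 \
    + rng.normal(scale=0.3, size=(5000, 2))                  # columns: UCS, MSE

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, random_state=7)

# A single forest fitted on both targets at once
model = RandomForestRegressor(n_estimators=100, random_state=7)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)                               # shape (n_samples, 2)

for col, name in enumerate(["UCS", "MSE"]):
    mape = mean_absolute_percentage_error(Y_test[:, col], Y_pred[:, col])
    print(f"{name}: accuracy ~ {(1 - mape) * 100:.1f}%")
```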
4.7 Potential Implementation of the Model for Field Applications

Throughout this study, the improvements that can be introduced by implementing data-driven solutions in field applications are mentioned and discussed based on previous studies conducted with similar intent. This section discusses the potential implementation of the regression model step by step. As mentioned in previous sections, the regression model aims to estimate UCS from the drilling parameters instantaneously and to avoid potential wellbore stability problems and drilling accidents by providing changes in UCS while drilling. The model should be re-trained with the necessary data collected from the field where it will be used as a UCS prediction tool, since data-driven solutions are as unique as the data fed into the model. The potential development of a UCS prediction tool based on this model for field applications is studied, and the necessary steps are provided in Figure 4.13.

Figure 4.13: Potential Implementation on Field Application Workflow

The initial implementation step will be to use available drilling and geomechanical data from previously drilled wells to train the model and test the model's performance. Then, the model can be tested on different scenarios to indicate the model's reaction to real-life drilling operations (i.e., drilling break, high dogleg severity). After running these tests, a demo product can be produced using the model if satisfactory test results are obtained. The demo product can be tested in a field with development potential by training the model after collecting the necessary drilling and geomechanical data from the 1st exploration well. The model's ability to predict UCS while drilling can be tested for every appraisal well drilled, and the additional data collected can be fed to the model to improve its ability to indicate problems previously faced while drilling.
CHAPTER 5
RESULTS AND TECHNICAL EVALUATION

The analysis conducted on the data set and its findings were previously discussed in Chapter 4. The final architecture of the algorithm was given, and possible problems that might occur while training the RF regression algorithm were explained in detail in Chapter 4. The final and third step of the algorithm is described in this chapter. In addition, the overall results of the models fitted using different approaches are provided. The changes made to the algorithm as a consequence of the findings comprehensively discussed in the previous chapter are briefly summarized. The problem of overfitting caused by a lack of variability within the UCS data is evaluated for each model by visualizing the predicted and actual UCS values. Finally, the final model is fit using UCS and MSE as target values, and the results are visualized and evaluated in this section.

5.1 The Development Phase of the Algorithm

As mentioned before, the RF algorithm built to predict UCS was fit numerous times, and hyper-parameter tuning tools were implemented at each fit. Initially, the model was fitted numerous times with only 1% of the complete data set to gain more knowledge about the RF regression model and achieve the overall objective of building a robust algorithm. These subsets are sampled randomly based on the distribution within the data set. A five-fold cross-validation method is applied to reduce variance error at each fit. These initial tests on the algorithm helped to eliminate minor errors in the code.

5.1.1 Initial Results

While conducting this study, the original plan was to use principal components of the drilling parameters to train the RF regression model. As mentioned earlier, 85% explained variance retained by the principal components was set as a threshold value. Initially, only 5% of the data sampled from each file is fed into the PCA algorithm to be transformed into principal components. The results acquired by implementing PCA before feeding the training data into the algorithm were promising, and using only a fraction of the complete data set helped avoid minor errors. In addition to hyper-parameter tuning, the percentage of the validation data split and the implementation of K-fold cross-validation are tested in these initial trials.
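As an illustration of this kind of initial smoke test, the sketch below draws a small random fraction of a placeholder data set and scores a Random Forest with 5-fold cross-validation; the column names, fraction, and model settings are assumptions for illustration, not the thesis pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Placeholder frame standing in for the complete drilling data set
rng = np.random.default_rng(3)
full_df = pd.DataFrame(rng.normal(size=(100_000, 7)),
                       columns=["Depth", "RPM", "WOB", "FPI", "Torque", "ROP", "UCS"])

# Draw a 1% random sample for quick iteration on the pipeline
sample = full_df.sample(frac=0.01, random_state=3)
X, y = sample.drop(columns="UCS"), sample["UCS"]

scores = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=3),
                         X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=3),
                         scoring="neg_mean_absolute_error")
print(scores.mean())
```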
The performance of each fit is evaluated with MAE, MSE, RMSE, and MAPE. The overall accuracy is calculated from MAPE. The results are given in Table 5.1. After running through the performance evaluation algorithm, it is clearly indicated that K-fold cross-validation reduced variance error by preventing overfitting.

Table 5.1: Initial models fitted with sub-datasets after implementing PCA

Number of PCs | K-Fold CV | Validation Dataset Size | MAE   | MSE    | RMSE  | MAPE  | Overall Accuracy
3             | No CV     | 20%                     | 0.17  | 1.15   | 1.07  | 0.23  | 99.77%
2             | No CV     | 20%                     | 3.61  | 103.31 | 10.16 | 4.29  | 95.71%
2             | No CV     | 30%                     | 3.61  | 103.46 | 10.17 | 4.28  | 95.72%
3             | No CV     | 30%                     | 0.17  | 1.2    | 1.09  | 0.25  | 99.75%
5             | No CV     | 30%                     | 0.046 | 0.018  | 0.13  | 0.03  | 99.97%
3             | No CV     | 30%                     | 0.18  | 1.24   | 1.11  | 0.25  | 99.75%
5             | 3 K-Fold  | 20%                     | 20.33 | 1042   | 32.38 | 28.34 | 71.66%
5             | 4 K-Fold  | 30%                     | 16.43 | 682.9  | 26.13 | 21.9  | 78.10%

Later, PCA was implemented on each data file provided, and the exploratory analysis on these data sets was promising, since the explained variance ratio retained by four principal components was higher than 85% and the influence of the variables on the principal components was low, as discussed in detail in Chapter 4. After implementing PCA, the seven drilling parameters are replaced with four principal components, as four PCs explained most of the data variance present in the complete data set. After these steps, the PCs are fed to the algorithm as a training dataset. The hyper-parameters are searched using Randomized-Search and Grid-Search while fitting the model numerous times to indicate the best possible fit. The final parameters used while fitting the last model are given in Table 5.2. The following definitions refer to the terms indicated in Table 5.2.

• Number of estimators: Number of trees created using the training data; the final prediction is made by the contribution of each of these estimators.
• Maximum features: Decision criterion for the maximum number of features considered while splitting a node.
• Maximum depth: Maximum number of levels created while building each decision tree (estimator).
• Minimum samples split: Minimum number of data points placed in a node before it splits.
• Minimum samples leaf: Minimum number of data points to be kept in each leaf node.
• Bootstrap: Method used to sample the data points.

After fitting the model, MAPE is used to indicate the model performance, and 98% accuracy in the UCS predictions was observed. Such extremely accurate predictions were a clear indication of overfitting.

Table 5.2: Hyper-parameters of the model fitted with four PCs

Hyper-Parameter       | Selected Parameter
Number of estimators  | 35
Maximum features      | Square root
Maximum depth         | 40
Minimum samples split | 9
Minimum samples leaf  | 2
Bootstrap             | True

The UCS predictions are visualized to observe possible problems introduced by the lack of variability in the UCS data. Initially, the results from a model trained with one of the subsets are visualized to observe how the lack of variability within the UCS data affects the final visuals. As expected, the raw predictions collected from the model could not be visualized properly, since the predictions were too noisy. The moving average method is used to smooth the results. A moving average can be defined as a statistical method for smoothing noisy predictions by continuously taking the average of the values over a defined window. The advantage of the moving average is that the impact of each value is still taken into consideration while continuously calculating the mean of a given range of numbers. For example, the moving average of the integers from 1 through 5 with a window of 2 is 1.5, 2.5, 3.5, and 4.5. A closer match between the actual and predicted values was observed after applying the moving average. The actual and predicted UCS values from the model trained with a subset are plotted after applying moving averages with windows of 20, 50, and 100, as given in Figure 5.1.
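For reference, a rolling mean of this kind can be computed with pandas as in the sketch below; the prediction series is a placeholder, and only the window sizes and the 1-through-5 worked example come from the text.

```python
import numpy as np
import pandas as pd

# The worked example from the text: integers 1..5 with a window of 2
print(pd.Series([1, 2, 3, 4, 5]).rolling(window=2).mean().dropna().tolist())
# -> [1.5, 2.5, 3.5, 4.5]

# Placeholder noisy UCS predictions, smoothed with 20-, 50-, and 100-sample windows
rng = np.random.default_rng(5)
predictions = pd.Series(55.0 + rng.normal(scale=5.0, size=1_000))
smoothed = {w: predictions.rolling(window=w).mean() for w in (20, 50, 100)}
print({w: round(s.dropna().iloc[-1], 2) for w, s in smoothed.items()})
```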
Figure 5.1: Comparison between predicted and actual UCS with 20-, 50-, and 100-sample moving averages.

Significant noise was also present in the predictions of the model trained with the four PCs of the complete data set, and a larger moving average window had to be used to observe a proper correlation between the predictions and the testing data. A 10,000-sample moving average was therefore applied to the 5+ million predictions, and the results indicated an almost perfect match between the actual and predicted UCS values, as observed in Figure 5.2. These results show a high variance error and indicate overfitting. As mentioned in Chapter 4, PCA has been removed from the final algorithm.

5.1.2 Models Fitted without PCA

To test the 2nd hypothesis, discussed in detail in Chapter 4, PCA is removed from the main algorithm. Later, PCA is used only to conduct the explained variance and feature importance analyses. After removing PCA, the number of variables fed to the algorithm is changed to seven, and the overall objective is to predict a single target value, UCS. The variables fed to the algorithm are Depth, RPM, WOB, FPI, Torque, ROP, and MSE. Different cases are considered by varying the training data set size and the number of iterations.
Figure 5.2: Predicted and actual UCS data with a 10,000-sample moving average - Predicted Values (Blue), Actual Values (Black).

Initially, RMSE was planned to be used to evaluate the model's performance, but MAPE replaced it, since the RMSE values were incredibly low and it was impossible to indicate any difference between the models. The MAPE values indicated a performance change between fitted models in the third decimal place. The motivation for removing PCA was to study a possible information loss from using only four principal components.

The second hypothesis was tested by removing PCA to indicate whether the initial information loss caused the overfitting observed in the previous results. Graphs similar to the previous ones, describing predicted and tested UCS versus depth, were created for the model fitted without PCA. Two different cases are evaluated. For the 1st case, only 50% of the complete data set is fitted through 280 iterations with four-fold cross-validation. For the 2nd case, the entire data set is fitted with five-fold cross-validation and 250 iterations. Unfortunately, the overall results did not change, as the accuracy of the UCS predictions was approximately 98%. This outcome proved that information loss through the implementation of PCA was not causing the overfitting problem. For the 2nd case, the comparison between the UCS values from the testing data and the predicted UCS values can be observed in Figure 5.3.
Figure 5.3: Comparison between predicted and test UCS data with a 10,000-sample moving average.

5.1.3 Multi-output Random Forest Model

To test the 3rd hypothesis, one of the variables is moved to the target values. This hypothesis tests the theory of how introducing variation to the target values would change the prediction accuracy. Among the variables, MSE is moved to the target values. The model is then fitted with six variables (Depth, RPM, WOB, PFI, Torque, ROP) and two target values (UCS, MSE). To test the hypothesis, the model is fit four times based on different cases. A similar performance was expected, as a multi-output random forest should perform similarly to a single-output model (Linusson 2013). In these cases, the number of iterations and the percentage of data used to fit the model are varied. These cases are given in Table 5.3. Based on these findings, it was decided to fit the model with only 100 iterations, as the number of iterations was not affecting the prediction accuracy significantly. The hyper-parameter tuning tools are used to search for the best model by using K-fold cross-validation 100 times. The final parameters used while fitting the final model are presented in Table 5.4. After fitting the model for each case, the model prediction accuracy is calculated using MAPE. The prediction accuracy was not sensitive to the number of iterations, but it increased with the increasing number of data points used to train the model. This finding indicated a lack of variability within the data set, even though two target values were used to increase the variability. The prediction accuracy of each case, evaluated with the train, validation, and test sets, is given in Table 5.5.
The predicted UCS values are compared with the actual values in Figure 5.4. This comparison was completed after applying a 10,000-sample moving average to the actual and predicted UCS, as the combination of the lack of variability within UCS and the 5+ million data points made it impossible to visualize these values on a scatter plot. The close match between the predicted and actual UCS was expected, as the average prediction accuracy of MSE and UCS is 94%. Since a moving average had to be applied to see a clear match between the actual and predicted UCS, the prediction accuracy is taken as the indication of model performance. A 10,000-sample moving average is applied to the UCS and MSE predictions from the final model. The comparison of the predicted and actual MSE is plotted and presented in Figure 5.5. The actual versus predicted values are also visualized using a scatter plot after applying a 10,000-sample moving average and presented in Figure 5.6.

Figure 5.4: Target UCS (Black) vs Predicted UCS (Blue).

Figure 5.5: Actual and Predicted MSE with the Final Model.
CHAPTER 6
SUMMARY AND CONCLUSIONS

The objective of this work was to develop a machine learning algorithm using the Random Forest regression method to process high-frequency drilling data and train a model that can estimate UCS instantaneously from drilling parameters. Gaining knowledge about subsurface conditions and geomechanical parameters is one of the key criteria for efficient drilling operations. Current methods of estimating geomechanical parameters are time-consuming and expensive. These constraints motivate the oil and gas industry to seek more efficient methods to indicate subsurface conditions and geomechanical parameters. A decrease in the cost of computational power and the robust implementation of data-driven solutions in other industries encourage the oil and gas industry to search for more efficient solutions to common problems through data collection. By predicting UCS while drilling, the model will show changes in UCS in different zones and give a key input for indicating potential wellbore stability problems.

Initially, this study was proposed to be conducted with a different data set and core samples collected using a coring rig located at the Edgar Mine. The coring rig is equipped with after-market sensors to collect RPM, ROP, WOB, and Torque data. The UCS and Young's Modulus of the formation drilled through with the coring rig were planned to be measured by conducting laboratory experiments at the Colorado School of Mines campus using the MTS Rock Mechanics System. Even though the unique drilling data were collected, the UCS data could not be collected, as the project milestones did not match this study. A similar data set, collected by Joshi (2021), is utilized for this study, as it was intended to build a similar stack of machine learning and deep learning algorithms. The data set collected by Joshi (2021) was re-purposed to achieve the main objective of building a regression algorithm to predict UCS from drilling parameters. The main objectives of this thesis are:
• Provide changes in Unconfined Compressive Strength based on drilling parameters instanta- neously to avoid possible wellbore failures and drilling accidents. • Utilize principal component analysis to analyze feature importance using available drilling data. • StudytheimplementationofRandomForestregressionalgorithmtobuildarobustregression model to estimate certain geomechanical parameters (i.e., UCS, and Mechanical Specific Energy). The initial path of this study was to implement PCA to the complete data set and use only four principal components of drilling parameters instead of seven variables (Depth, RPM, WOB, PFI, Torque,ROP,MSE)totrainRandomForestregressionmodeltopredictUCS.Theearlyexploratory analysisconductedonthedatasetindicatedalackofvariabilityinUCSdataasonlysevendifferent values were present even though the complete data set had 55+ million UCS observations. This lack of variability is assumed to be a unique part of the data set similar to predicted Torque values. However, as the study developed further by training the model with complete data set, a high variance error is indicated as the accuracy of UCS predictions was 98%. The high variance error was not resolved as the model was fitted numerous times with a different list of hyper-parameters. To bring a solution to this problem, a root cause of high variance error is tested through three different hypotheses. These hypotheses are as followings: • A possible linear correlation between MSE and UCS can lead to perfect predictions. • A loss of valuable information due to the implementation of PCA can cause overfitting. • A lack of variability on UCS can reduce the variability of predictions, which can cause over- fitting. These hypotheses are tested to indicate the root cause of high variance error. As mentioned in Chapter 4, 1st and 2nd hypotheses are tested by changing the algorithm by removing PCA and measuring the correlation between MSE and UCS. Both 1st and 2nd hypotheses are found wrong. The 3rd hypothesis is tested by moving MSE from variables to target values as MSE would cover the variation within the data set most. The accuracy of predictions decreased by 5-10%, and the 67
These observations indicated that the model was robust, but the lack of variability within UCS was causing a high variance error, since a multi-output Random Forest performs similarly to a single-output RF (Linusson 2013). The final model is fitted with six variables (Depth, RPM, WOB, PFI, Torque, ROP) and two target values (UCS, MSE). The following conclusions are drawn from this study:

• Implementation of PCA on this data set will cause a loss of valuable information.
• A small explained variance contribution by a variable does not reflect its importance in regression models, as the variable might carry an important piece of information that will contribute to the predictions of the model.
• A lack of variability within the target values while training a regression model will cause high variance error, aka overfitting, as the number of data points fed to the algorithm increases.
• This data set does not reflect the real performance of the algorithm, as a high variance error is observed when the model is fitted with a single target value, even though the most common tools are used to avoid overfitting.
• The model is robust enough to indicate variation change in the target values, as the UCS prediction accuracy decreased when another target value was added to introduce more variation.
• The model can be trained to estimate UCS from drilling parameters accurately if a data set with higher variation among the variables is used.

The main objectives of this thesis are fulfilled, as the machine learning model built using the Random Forest regression algorithm successfully estimates UCS and MSE from drilling parameters instantaneously. Also, PCA is utilized to indicate the feature importance and the explained variance retained by each principal component in order to observe the possible loss of valuable information. As a result of the implementation of PCA, it was decided to use the six drilling parameters directly instead of training the model with only four principal components.
CHAPTER 7
FUTURE WORK

The work completed in this study can be developed further to implement the model as an efficient data-driven solution for field applications. The following approaches and methods can be part of the further development of this study:

• The model built for this study can be trained with field data collected at the drilling site, and a comparison can be made between the UCS predictions from the model and UCS measurements from laboratory experiments.
• Different variables affecting UCS can be introduced to the model as part of the training data set. These variables can be porosity and elemental spectroscopy (Negara et al. 2017).
• The empirical equations derived to estimate UCS can be added to the model as a lower boundary condition (Chang et al. 2006), if the empirical equation is derived for the same formation where the data set is collected.
• The core samples retrieved from the Edgar Mine using the Apache coring rig can be tested to measure UCS and Young's Modulus, and these measurements can be integrated into the drilling data collected. The integrated drilling parameters and UCS can then be used to train the model.
ABSTRACT Drilling technology has advanced rapidly over the past few decades. The petroleum industry is motivated to automate directional drilling operations, making them safer, more cost- effective, and efficient. The main input parameters fed to directional drilling tools (e.g., rotary steerable systems) are inclination, azimuth, and distance. Inclination and azimuth are measured downhole; however, the distance drilled is computed using a pipe tally system installed on the surface. Having continuous downhole measurements of distance, inclination, and azimuth allow the directional drilling tool to determine the drilling bit's location. Moreover, if most calculations were made downhole, then the bit could follow a preprogrammed trajectory and drill autonomously. Many researchers have proposed different methods for measuring the distance drilled in the bottomhole as a solution for directional drilling automation. However, the suggested approaches have experienced challenges during field applications. For this reason, this dissertation focuses on solving one of the main limiting factors, i.e., real-time downhole measurements of the distance drilled. This work proposes a method that senses the distance drilled at the bottom of the well based on automated image analysis. The dissertation involves the development of a tool prototype with multiple identical imaging sensors spaced at known distances. The sensors acquire images of the formation at synchronized times, with the imaging sensors capturing an image of the same formation location at different times. Based on this concept, an algorithm was developed to identify the "fingerprints" of the images captured and register similar images according to those fingerprints. Under wellbore conditions, each fingerprint reveals unique iii
marks left by the bit, natural geological features of the rocks drilled, and the topology of the borehole wall, all of which contribute to the image registration process. Therefore, when two images captured by two sensors match, the timestamps of each image and the distance between the sensors can be used to calculate the average rate of penetration in that time interval. Subsequently, integrating the rate of penetration results in an estimate of the interval distance drilled. Adding all intervals drilled is then equivalent to the total distance of the well, and combining this information with azimuth and inclination can provide an estimate of the depth drilled. This dissertation aims to demonstrate that an image-matching process is an attractive candidate for downhole measurements that compute the distance drilled. In this work, an image-matching algorithm is developed with the sole intention of estimating the distance drilled. Tests were designed to analyze the effect of different variables on the accuracy of the distance calculation. Experiments were devised to investigate how sampling criteria, the distance between the sensors, velocity, matches missed, and noise affect the method's efficacy. The method was tested in homogeneous and heterogeneous scenarios (i.e., images with distinct and repetitive features) to ensure that the algorithm can match images when exposed to analog scenarios that model wellbore conditions. The results show that the image-matching algorithm can accurately match images even when wellbore features are considerably repetitive. Additionally, having an optimized velocity-dependent combination of sampling criterion and distance between the sensors can minimize errors, especially those associated with missed matches, and therefore, increase the accuracy of the distance calculation. The main contribution of this Ph.D. dissertation is that it demonstrates that having identical sensors capture images showing topological features can be a robust method to measure the distance traveled. Additionally, this work provides initial research on how to optimize iv
ACKNOWLEDGMENTS I thank Allah (God) for giving me the strength and power to complete this dissertation and granting me the opportunity to be a part of Colorado School of Mines and learn from outstanding professors. I also want to take this opportunity to thank my advisor, Dr. Luis Zerpa, for all of his unconditional support and guidance. I wouldn't have been able to complete this dissertation without him. I also owe a great amount of gratitude to my co-advisor, Dr. Jorge Sampaio. I want to thank him for his guidance, unconditional support, and giving me the chance to be a part of a great project. He has helped me in every step of this dissertation, and it has been an honor working by his side. I would also like to express my sincere appreciation to my committee members, Dr. Alfred Eustes, Dr. Jeffrey Shragge, and Dr. Manika Prasad. It is a privilege to be working with them, and I want to thank them for all the help and guidance they gave me. A very special thanks goes to Dr. Heba Mahgoub, my mum, and Dr. Khaled Mansour, my dad, for all their emotional support and unconditional love. They have always been by my side and have been the source of my inspiration and motivation. They have raised me to be the man I am today, and without them, I would have never achieved this goal. In addition to my parents, I would also like to thank my brother, Mohamed Mansour, for always encouraging me to work hard and supporting me. He has been a blessing in my life and has always been there for me whenever I needed him. xxii
CHAPTER 1 INTRODUCTION This chapter introduces the problem statement for this doctoral research dissertation. It explains the different kinds of depth measurements and discusses their importance during drilling operations. Additionally, it introduces and describes the differences between surface and current bottomhole depth measurement techniques along with their associated errors and limitations. 1.1 Definition and Importance of Well Depth In drilling operations, the well(cid:182)s depth acts as a standard reference for most data observed and measured on the well. For example, formation evaluation, a technique where a measuring tool is deployed down the borehole to measure physical rock properties of subsurface geologic formations, is highly dependent on well depth for data interpretation. Other data such as drilling parameters, cuttings description, deviation surveys, and cores also rely on well depth. However, well depth is a distinct measurement that needs to be made during drilling operations (Storey 2013). Well depth is defined as a parameter that characterizes the position of an object within the wellbore concerning a reference point at the surface. Five attributes are required to specify the depth of a point along the wellbore (Storey 2013): 1. Wellbore name from which depth will be measured. 2. Point of reference, which represents the depth measurement starting point. 3. Depth unit (e.g., in feet). 4. The measurement process used (e.g., driller(cid:182)s or logger(cid:182)s depth). 1
5. Type of path measured (e.g., Measured Depth (MD), also known as Along the Hole Depth (AHD), or True Vertical Depth (TVD), as seen in Figure 1-1). The type of path referred to will depend on the type of calculations needed. For example, the total MD refers to the total distance drilled by the bit and is, therefore, the total length of the hole, while TVD refers to the vertical depth of the hole. MD is used for directional drilling surveys and volumetric calculations, while TVD can be used for pressure and mud weight calculations. It is important to note that the method in this dissertation is estimating MD and not TVD. Figure 1-1 Different paths are measured, where “Q(cid:180) is a point on a vertical well, and “P(cid:180) is a point on a directional well. The depth of P and Q is the same, but the length is different due to the difference in inclination. AHD refers to the length, and TVD refers to the true vertical depth (Storey 2013). 2
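To make the distinction in Figure 1-1 concrete, the short sketch below converts a set of survey stations (measured depth and inclination) into TVD using the average-angle approximation; the station values are invented for illustration, and rigorous directional surveys would use the minimum-curvature method.

```python
import math

# Hypothetical survey stations: (measured depth in ft, inclination in degrees).
stations = [(0.0, 0.0), (1000.0, 0.0), (2000.0, 30.0), (3000.0, 60.0), (4000.0, 90.0)]

tvd = 0.0
for (md1, inc1), (md2, inc2) in zip(stations[:-1], stations[1:]):
    avg_inc = math.radians((inc1 + inc2) / 2.0)   # average-angle approximation
    tvd += (md2 - md1) * math.cos(avg_inc)        # vertical component of the interval

print(f"Total MD = {stations[-1][0]:.0f} ft, TVD = {tvd:.0f} ft")
# MD and TVD agree while the well is vertical and diverge as inclination builds.
```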
1.2 Problem Statement The oil and gas industry has been motivated to automate directional drilling operations, and many researchers have investigated ways to make such automation possible. As of 2021, directional drilling tools, such as Rotary Steerable Systems (RSS), depend on MD measurements, inclination, and azimuth. The RSS collects the target formation's coordinates sent from the surface through the Measurement While Drilling (MWD) tool. Finally, it adjusts the azimuth, tool-face, and inclination based on its sensors until the target formation is reached (Yiyong et al. 2009). The limitation encountered with this directional drilling system is that for it to be fully automated, it requires a downhole MD measurement method that is relatively accurate to the measurements made on the surface, which is currently not the case (Starkey 2018; Stoner 2018). If a directional drilling tool could measure inclination, azimuth, and the MD downhole, it could determine the location of the drilling bit. The engineer could then pick the best possible trajectory that minimizes torque and drag (Sampaio and Mansour 2019). Programming this pre- defined trajectory alongside the data gathered downhole would allow the autonomous drilling tool to know where it is and where it needs to go, thus achieving directional drilling automation. 1.3 Depth Measurement Techniques There are different approaches and methods that engineers have used to measure the MD of the well. This section discusses different measurement techniques and inventions used today during drilling operations. 1.3.1 Surface Depth Measurements Two conventional surface-related MD measurement techniques are used in the field today, and these are the driller(cid:182)s and the logger(cid:182)s depth. During drilling operations, the driller(cid:182)s 3
depth is computed by summing up individual lengths of the drill bit, drill collar, and drill pipe joints below the kelly bushing or the rig floor (Chia et al. 2006). Most of the lengths are measured on the surface using a steel measuring tape before lowering the pipe in the hole. This measurement technique implies that the pipe(cid:182)s length is measured before getting attached to the drill string and placed downhole (Reistle and Sikes 1938). The way these measurements are made is a problem because they do not account for pipe length changes that occur downhole due to forces acting on the pipe, which can alter its length. Therefore, the pipe length measured on the surface is different from the actual pipe length in the bottomhole (Chia et al. 2006). The two most significant factors that contribute to measurement errors are thermal expansion and mechanical stretch. Temperature changes contribute to thermal expansion, while well profile, frictional forces, and pipe weight contribute to the mechanical stretch. Axial pressure effects, ballooning, buckling, and drill pipe twists can also contribute to the pipe's mechanical deformation. Complex algorithms may identify these errors, and a torque and drag model can then adjust for the mechanical stretch and thermal expansion (Chia et al. 2006). The automated version of the driller's depth is referred to as the pipe tally measurement system. A typical system includes a drawwork's encoder or a geolograph that records depth continuously on a log by tracking the block's movement (Figure 1-2). An installed software monitors the hookload sensor and analyzes the bit's in and out of slips motion. The movement of the bit can only be updated when the pipe is in "out of slips" motion. (Chia et al. 2006). 4
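To illustrate why the pipe length measured on the surface differs from its downhole length, the sketch below applies first-order corrections for mechanical stretch (Hooke's law) and thermal expansion, as mentioned above; the pipe properties, tension, and temperature change are assumed round numbers rather than field data, and full torque-and-drag models are considerably more detailed.

```python
# First-order length corrections for a steel drill pipe (assumed values).
L = 10000.0        # length measured on surface, ft
F = 200000.0       # average axial tension along the pipe, lbf
E = 30.0e6         # Young's modulus of steel, psi
A = 5.27           # pipe cross-sectional area, in^2 (roughly 5-in drill pipe)
alpha = 6.9e-6     # thermal expansion coefficient of steel, 1/degF
dT = 80.0          # average downhole temperature increase, degF

stretch = F * L / (E * A)        # Hooke's law: strain F/(E*A) times length, ft
thermal = alpha * L * dT         # linear thermal expansion, ft

print(f"Mechanical stretch = {stretch:.1f} ft, thermal expansion = {thermal:.1f} ft")
print(f"Approximate downhole length = {L + stretch + thermal:.1f} ft")
```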
Figure 1-2 Pipe tally measurement system consisting of a drawworks encoder that continuously records depth on a log and a hook load sensor that analyzes in and out of slips motion. (Chia et al. 2006). Lai et al. (2016) analyzed the errors associated with the pipe tally measurement system. They studied 29 onshore wells with an average MD of 16,800 ft located in Canada and the U.S. The objective of their study was to determine the frequency of depth corrections and the absolute error magnitude resulting from the pipe tally. The results show that the frequency of error corrections ranged between 40 to 143 corrections. Additionally, the pipe tally's absolute average measurement error for all 29 wells was 1965 ft, where 779 ft was due to hookload threshold error, and 1186 ft attributed to human errors. Hookload threshold errors occurred due to failure in detecting in-slip motion and incorrect tracking. Human errors occurred due to incorrect settings in the pipe tally's system, typing mistakes, and unfamiliarity with the tally's software user interface. The second type of MD surface-related measurements is the logger(cid:182)s depth, and it is made using a wireline system that consists of an electric logging cable, guide rollers, and a measuring wheel. The electric cable acts as a measuring tape and is attached to a calibrated measuring wheel with a known circumference. As the electric cable moves along the hole, 5
friction occurs between the cable and the wheel, causing it to rotate. Each measuring wheel revolution is counted and correlated to MD (Epstein 2005). This type of measurement is accurate if the wellbore is vertical. However, as the wellbore shape increases in complexity, the wireline measurement system starts accumulating errors (Chia et al. 2006). Additionally, errors may result from different suspended loads, incorrect counting of measuring wheel revolutions, and cable stretch resulting from acceleration forces (Reistle and Sikes 1938; Bolt 2017) In summary, there are two different ways that MD can be measured on the surface. The standard measurement method for drilling operations is the driller's depth that uses the automated pipe tally measurement system. However, as shown by Lai et al. (2016), the system has errors and requires constant corrections. Thus, the advantage of developing a bottomhole MD measurement system is not to enhance the system's accuracy but to automate directional drilling and facilitate a continuous decision-making process made by a downhole computer instead of a discrete process made by an engineer on the surface. An ongoing decision-making process will lead to a smooth directionally drilled trajectory, decreasing torque, drag, and drilling costs. Nevertheless, while the bottomhole measurement system will have errors, it should not continuously disrupt drilling operations due to low accuracies. The bottomhole measurement system should work effectively under normal drilling conditions. 1.3.2 Bottomhole Depth Measurements Due to the growing interest in automated directional drilling systems, researchers have aimed to decrease dependence on surface measurements by finding new ways to obtain downhole real-time MD measurements. Patton (1994) invented a new system for controlling the direction of the bit and measuring MD at the bottomhole. The system consisted of a standard bottomhole assembly 6
accompanied by a magnetic marker system. This system generated pulses via magnetic markers spaced out at a known distance and magnetized the formation. The system reads the markers and computes the distance drilled. This invention faced challenges because the formation, most often, could not be magnetized. According to Jain (2015), field applications showed that failure to magnetize the formation led to significant errors associated with the MD measurements. Hassan and Kurkoski (2007) proposed acquiring formation characteristics through logging tools to measure the MD at the bottom. The method consisted of similar sensors placed at different depths, gathering specific petrophysical information about the rock, such as porosity and neutron density. Since two or more sensors will pass the same location in the formation, and the distance between the sensors is known, a downhole computer can correlate the similar data gathered. According to Jain (2015), this geological correlation method faced a strong challenge. Horizontal wellbores usually extend within the same formation and are characterized by the same lithology, meaning that rock characteristics will not significantly change due to homogeneity. Field results show that using a sensor such as a gamma-ray in a homogeneous section caused MD errors to be approximately 95%. Estes et al. (2012) proposed measuring MD in the bottomhole using an inertial navigation system involving an accelerator installed on the drillstring that transmits the recorded data to a programmed memory module. Then, a downhole processor installed between the accelerometer and the memory module gathers data from both objects to calculate the acceleration of the drillstring. Finally, the computed acceleration data is integrated to calculate the total MD of the well. According to Jain (2015), this method goes through an integration drift process and needs a secondary system that constantly corrects these errors. 7
Jain (2015) proposed an MD measurement method similar to the geological correlation method proposed by Hassan and Kurkoski (2007) that uses the Normalized Cross-Correlation (NCC) algorithm or the Hybrid Matching (HM) algorithm. The NCC algorithm can identify a time lag between two identical signals. This algorithm generates all possible matches and calculates a correlation coefficient for each match, with the highest value considered the greatest possible match. The HM algorithm uses brute-force to search for potential matches that depend on tolerance criteria and fit. If several outputs fit the input criteria, then the best match is defined using the minimum square distance method (Jain 2015). The simulation results using both NCC and HM algorithms showed that even if sensor drift caused an offset in data, the correlation algorithms were not affected. Additionally, when data input counts are higher, depth measurements become more accurate. However, MD measurement errors occurred due to noise and ROP averaging. Additionally, based on the gamma-ray tests performed, errors were still high in homogeneous sections due to a lack of change in rock properties (Jain 2015). Sugiura (2015) proposed another geological correlation technique for estimating MD. Firstly, two sensors are installed to measure the same borehole parameter and generate two time- based logs. Secondly, a downhole computer processor measures the time shift by comparing both logs. A correlation factor is then generated by time-shifting the logs using a simplified version of Pearson(cid:182)s correlation method, measuring the degree of the linear relationship between two data points. Finally, the drilling speed is then calculated and integrated to calculate the distance drilled. Researchers have proposed many techniques to measure the MD at the bottom of the well; however, every solution proposed was accompanied by challenges that made the approach 8
difficult for routine field implementation. Geological correlation methods typically fail in homogeneous formations due to a lack of change in rock properties. Since homogeneous formations are common (especially in horizontal drilling), it is appropriate to have a measurement method that works during such conditions, which is the focus of this dissertation. The method proposed here is similar to the geological correlation methods but focuses on the topology of the borehole wall rather than the petrophysical rock properties mentioned above. 1.4 Dissertation Structure This dissertation is divided into six chapters: Chapter 1 explains the importance of depth measurements in drilling operations, the different methods used to measure depth, and the limitations of existing methods. Chapter 2 explains the proposed distance measurement methodology and the goal and objectives of this dissertation. Chapter 3 consists of the background and literature review covering important topics such as the rate of penetration, drilling automation, and logging while drilling. Chapter 4 explains the operating principle of the proposed methodology and the design of the image-matching algorithm and experiments. Chapter 5 presents and discusses the results obtained from all the experiments performed in this dissertation. Chapter 6 discusses the potential of the proposed methodology and its limitations. Chapter 7 concludes the dissertation and provides recommendations for future work. 9
CHAPTER 2 RESEARCH PROPOSAL AND OBJECTIVES This chapter discusses the proposed downhole distance measurement method, explains the method(cid:182)s theory, and demonstrates the objectives of this dissertation. 2.1 Proposed Bottomhole Measurement Technique As discussed in the previous chapter, current bottomhole MD measurement techniques still face challenges. These complications create a barrier to fully automating a directional drilling system. At present, a reliable bottomhole distance measurement technique does not exist. Therefore, this work proposes a new approach to sense the distance drilled at the bottom of the hole. The proposed method includes identical sensors at known positions that synchronously capture images of the wellbore. The images captured by the sensors show unique marks left by the bit, the topology of the borehole wall, and the natural geological features of the rocks drilled. Therefore, each image has an identified “fingerprint.(cid:180) An image-matching algorithm will then correlate the fingerprints and obtain the times each sensor records the same formation image. The algorithm uses time and distance information to estimate the local average rate of penetration. The algorithm then integrates the rate of penetration to give the interval distance drilled, which is then summed to estimate the MD. The methodology is based on the theory that comparing images that show the topology of the wellbore is more accurate than comparing data from rock characteristics. This is because the images will contain important features such as fractures, marks left by the bit, and the borehole geometry, which are constantly changing and are independent of the rock(cid:182)s characteristics. This 10
independence allows the methodology to work in homogeneous sections where petrophysical data collected from the Logging While Drilling (LWD) tool rarely change. As a result, two crucial questions need to be answered: 1. Which image-matching algorithms are capable of efficiently and accurately matching wellbore images? 2. Which type of sensors or methods should be used to capture wellbore feature images? 2.2 Image and Pattern Matching Pattern matching is a method used to find a specific pattern within an image. The process uses a concept called “masking,(cid:180) where a predefined mask consisting of an image pattern is placed over all possible pixels positioned in the picture. A factor-matching score is then generated for the masking pattern and the image, allowing comparisons with a preset matching score. If the score is higher than a predefined value, the pattern matches the image (Walia and Suneja 2010). According to Balletti and Guerra (2009), there are three different patterns and image-matching methods. These methods are feature-based, area-based, and symbolic. Each method has its way of generating a matching process for the images (Balletti and Guerra 2009). This research focuses on pattern matching techniques that use feature-based methods which extract specific geometrical features at various scales. The comparison depends on the size and shape of these features (Balletti and Guerra 2009). 2.2.1 Feature and Key-Points Identification Techniques Various programs, simulators, and open-source codes for image feature identification exist today. Lowe (2004) developed a model for distinctive image-matching and named it Scale Invariant Feature Transform (SIFT). This innovative method identifies key features from an existing image that are not affected by image rotation or scaling and act like a “fingerprint(cid:180) for a 11
where L_xx(x, y, σ) = (∂²G(x, y, σ)/∂x²) ∗ I(x, y). This method speeds up the image-matching technique and replaces the use of difference-of-Gaussian functions (Bay et al. 2008).
Calonder et al. (2010) developed a feature point detection and image-matching model called Binary Robust Independent Elementary Features (BRIEF). The authors aimed to build an image-matching technique that works faster than SIFT and SURF. The computational speed was increased by comparing the intensity of point pairs using short binary descriptors. However, one of the main challenges this technique faces is that it is affected by rotation and may not be used in many applications.
Rublee et al. (2011) developed a feature-point detection and image-matching model called Oriented FAST and Rotated BRIEF (ORB). For this model, the authors combine an accurate feature point detection method, FAST, proposed by Rosten and Drummond (2006), with an updated version of BRIEF that is not affected by image rotation. ORB works very similarly to SIFT and SURF but uses a different mathematical approach. ORB is known to be fast and accurate for many applications.
Karami et al. (2017) measured the effect of image deformations on three different image and pattern matching techniques. These effects included scale, rotation, noise, illumination, and translation. The methods compared were SIFT, SURF, and ORB. It was determined that ORB was the fastest algorithm when compared to the other techniques. However, SIFT provided the highest matching rates and accuracy when compared to the other methods.
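As a point of reference for the identification techniques reviewed above, the following minimal sketch extracts SIFT and ORB key-points with OpenCV; it assumes the opencv-python package (version 4.4 or later, where SIFT_create is available) and a placeholder image file name.

```python
import cv2

# Load a grayscale borehole image (placeholder file name).
img = cv2.imread("wellbore_image.png", cv2.IMREAD_GRAYSCALE)

# SIFT: scale- and rotation-invariant key-points with 128-element float descriptors.
sift = cv2.SIFT_create()
kp_sift, des_sift = sift.detectAndCompute(img, None)

# ORB: FAST key-points with rotation-aware BRIEF binary descriptors (faster).
orb = cv2.ORB_create(nfeatures=2000)
kp_orb, des_orb = orb.detectAndCompute(img, None)

print(f"SIFT found {len(kp_sift)} key-points, ORB found {len(kp_orb)}")
```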
2.2.2 Image and Pattern Matching Techniques
Key-points identification algorithms provide a unique "fingerprint" for each image taken. However, they do not by themselves provide any matching method between similar "fingerprints." Therefore, pattern-matching algorithms are needed to compare the images and evaluate them based on their similarities.
The brute-force technique is one of the simplest pattern-matching methods used today. This technique matches the key-points in the images using the Euclidean distance, d(P1, P2), which is calculated as:
d(P1, P2) = √((x1 − x2)² + (y1 − y2)²) (2.9)
where P1 and P2 are key-points on the first and second image, respectively. This technique requires many trials because each key-point from an image will be compared with all the other key-points from another image. A match is produced based on the closest points found (Mordvintsev and Abid 2013).
Another commonly used pattern-matching algorithm is the Fast Library for Approximate Nearest Neighbors (FLANN). This algorithm works very similarly to the brute-force technique. However, FLANN is more efficient and allows large datasets to be matched much faster than brute-force because it builds optimized index structures (e.g., randomized k-d trees) for approximate nearest-neighbor searching. Therefore, FLANN decreases the computational time for large datasets while sacrificing only a small amount of the brute-force technique's accuracy (Mordvintsev and Abid 2013; Muja and Lowe 2009).
Other pattern-matching methods that have been used in many applications are the Boyer-Moore technique and the Karp-Rabin algorithm. The Boyer-Moore technique is a string-matching method that only works when the elements being matched come from a finite, fixed-size alphabet. This method uses information from previous unsuccessful comparisons, stored in precomputed shift tables, to decide how far the pattern can be shifted at the next attempt (Korber 2006). Alternatively, the Karp-Rabin algorithm generates hash-codes for each key-point. The algorithm will then process a
matching technique that compares similar hash-codes together from key-points located on different images. However, once the matches are obtained, they should be checked to ensure that there are no collisions and that the hash-codes matched correctly (Korber 2006). All of the algorithms mentioned above provide matching techniques between images. However, the main challenge is knowing how good a match is. Therefore, researchers specified different methods to identify the accuracy of the matching process and the ability to find successful matches. Out of all the research made, the Good-Ratio Test (GRT) proposed by Lowe (2004) and the Random Sample Consensus (RANSAC) algorithm proposed by Fischler and Bolles (1981) are sufficient techniques for the method proposed. The GRT is a test that filters all "bad matches" out to ensure the accuracy of the pattern- matching algorithm. Instead of only comparing the distance between the key-points and their closest neighbors, the test also examines the gap between the nearest and second nearest neighbors. Therefore, the second-closest match provides an estimate of "bad matches" occurring within the pattern-matching algorithm. Generally, if the distance ratio is set to 0.8, approximately 90% of bad matches will be eliminated (Lowe 2004). Figure 2-1 shows an example of two photographs of the same book yet from a different camera perspective. The SIFT algorithm was applied to these images to identify the key-points of both pictures. Next, the brute-force matching algorithm was combined with the GRT to match both images together. The result illustrated in Figure 2-2 shows the similarity between both images. The lines connecting both photos are the "good matches" defined by the GRT test. 17
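A minimal sketch of the matching step adopted here (FLANN followed by the GRT with a 0.8 ratio) is shown below, assuming that SIFT descriptors des1 and des2 have already been computed as in the earlier sketch; RANSAC-based filtering, for example with cv2.findHomography, could be applied to the surviving matches as an additional step.

```python
import cv2

def good_matches(des1, des2, ratio=0.8):
    """FLANN k-nearest-neighbor matching followed by the Good-Ratio Test."""
    # SIFT descriptors from OpenCV are float32, as required by the FLANN KD-tree index.
    index_params = dict(algorithm=1, trees=5)   # algorithm=1 selects the KD-tree index
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)

    # For each descriptor in image 1, retrieve its two nearest neighbors in image 2.
    knn = flann.knnMatch(des1, des2, k=2)

    # GRT: keep a match only if it is clearly better than the second-best candidate.
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# Example usage (des1, des2 from sift.detectAndCompute on the two images):
# good = good_matches(des1, des2)
# print(f"{len(good)} matches survived the ratio test")
```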
The same photographs presented in Figure 2-1 are used to show an example using RANSAC. For this example, the SIFT algorithm was applied to identify key-points in these images. Afterward, the brute-force matching method and the RANSAC algorithm are used to match both photos together. Figure 2-3 shows the result where the green lines represent "good matches" between the image, and the square indicates the region where the two photos are similar. Figure 2-3 RANSAC pattern-matching algorithm. The orange square is generated by the RANSAC algorithm and represents the exact area where both images are similar. It can be seen from the examples mentioned above that GRT and RANSAC algorithms can provide reliable matches. Both algorithms can accurately identify similarities between two images. Based on preliminary testing, it was determined that SIFT, FLANN, and GRT are the most suitable algorithms for the work proposed in this dissertation. 2.3 Ultrasonic Imaging My research determined that ultrasonic imaging is the most suitable type of imaging for the proposed method. Ultrasonic imaging is highly reactive to fractures and bit marks and, when implemented properly, can show the whole surface topology (Yao 2018). It is important to note 19
that other sensors can be used, especially those that clearly show topological features. Nevertheless, based on the current technology available in the oil and gas industry, ultrasonic sensors are preferred over other sensors because of their strong capability of identifying topological features. Ultrasonic imaging is used in many engineering and medical applications. The imaging process uses a piezoelectric crystal that is placed in a transducer. As the electric current passes through the crystal, the current transforms to mechanical pressure waves, i.e., sound waves. The thickness of the crystals controls the frequency of the waves released, and they are usually higher than 20 kHz (Wang 2020; Boston Piezo Optics 2020; The Green Age 2020). Once the waves are generated, they travel through a medium, and the crystal switches from the “sending(cid:180) phase to the “listening(cid:180) phase. The waves travel through the medium until they get reflected. The reflected wave travels back to the crystal, and the whole process is repeated. The time it takes for the waves to come back depends on the velocity of the sound waves in the medium through which they travel and the distance to the reflector. An image is then created using the time it took for the waves to be reflected and the relationship between the amplitude of the waves sent and received. It is also possible to have more than one crystal to do this process (Wang 2020) An ultrasonic Imaging While Drilling (IWD) tool works using the same concept mentioned above. The transducer sends sound waves through the drilling fluid, and then they are reflected from the borehole wall and sent back to the transducer or another receiver. All data acquired are assigned a toolface position and azimuthal orientation. When the data are received, environmental corrections for sound velocity and denoising are made before generating the final image. The environmental modifications are made on parameters that affect sound velocity. 20
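As a small numerical illustration of the pulse-echo principle described above, the sketch below converts a two-way travel time into a transducer-to-wall standoff distance; the sound speed and travel time are assumed values, not measurements.

```python
# Pulse-echo standoff calculation (assumed values).
v_mud = 1500.0          # speed of sound in the drilling fluid, m/s (roughly water-based mud)
t_two_way = 40.0e-6     # time between sending the pulse and receiving the echo, s

# The pulse travels to the borehole wall and back, so the one-way distance is half.
standoff = v_mud * t_two_way / 2.0
print(f"Transducer-to-wall standoff = {standoff * 100:.1f} cm")   # about 3.0 cm
```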
The parameters that affect sound velocity include mud weight, mud type, temperature, pressure, and cuttings (Hassan et al. 2004; Steinseik et al. 2010).
Hassan et al. (2004) created a device that measures ultrasound velocity in drilling fluids. Such a device could be applied to the proposed methodology because it avoids the need for environmental corrections made on the surface. The device consists of an acoustic transmitter that generates ultrasound waves in the drilling fluid and two acoustic receivers that detect the ultrasound waves over two paths. The ultrasound velocity is calculated from the arrival time of the wave and the known path distance.
Ultrasonic imaging tools are mainly used to detect fractures and understand rock properties. However, the proposed method does not focus on petrophysical properties but rather on the topology of the borehole wall. Yao (2018) designed a MATLAB code that reconstructs a wrench surface from data gathered by a focused acoustic transmitter (Figure 2-4). The wrench was submerged in water, and the objective of the experiment was to reconstruct the word "DUTY" written on the wrench. The reconstruction is a function of the reflection coefficient, R_z, which is calculated as:
R_z = (z1 − z2) / (z1 + z2) (2.10)
where z1 is the acoustic impedance (i.e., opposition to wave propagation) of the surface and z2 is the acoustic impedance of water. The focused acoustic transmitter can scan a small area one at a time, allowing many reflection coefficients to be calculated over the whole surface area of the wrench. As a result, the authors were able to reconstruct the word accurately.
Figure 2-4 Reconstruction of the word “DUTY(cid:180) embedded on a wrench (right) using a focused acoustic transmitter (left) (Yao 2018). 2.4 Research Goal and Objectives This research aims to assist in the automation of directional drilling by estimating the distance drilled in the bottomhole and minimizing dependence on surface information. The method depends on capturing images of the formation while drilling and comparing them until a successful match is reached. Therefore, the main goal of this dissertation is to demonstrate that an image-matching algorithm can be used to calculate the distance traveled/drilled. The objectives of this dissertation are as follows: 1. Explain the concept and theory of the bottomhole MD measurement method. 2. Develop a computer algorithm for automatic image processing that identifies key- points (i.e., image fingerprints), compares time-lapse images, identifies matches, and calculates the distance drilled. The algorithm should have the following features: a. Flexible and accurately differentiates between good and bad matches. b. Designed to work in real-time applications. c. Work in areas where rock features may be homogeneous. 22
CHAPTER 3 BACKGROUND This chapter explains the theory behind this research. The method proposed in this work includes an image-matching algorithm that estimates the rate of penetration to calculate the distance drilled. Therefore, this chapter explains the meaning of the rate of penetration and the factors that affect it. Additionally, this chapter discusses drilling automation and rotatory steerable systems. The proposed method aims to automate directional drilling by adding a necessary input parameter (i.e., distance drilled) to the directional drilling tools. Finally, this chapter discusses imaging while drilling methods because the proposed method depends on capturing images of the formation. 3.1 Rate of Penetration The proposed method calculates the rate of penetration (ROP) in real-time while drilling and uses a mathematical integration procedure to get the interval distance drilled. Therefore, this section discusses ROP and the factors that affect it. 3.1.1 Definition and Importance of Rate of Penetration A drilling bit is usually used to break the rock. Simultaneously, drilling fluid is circulated within the hole to cool down and lubricate the bit and carry the cuttings to the surface. To minimize costs, the well needs to be drilled in a fast and efficient manner. For the drilling engineer to know how fast a well is drilled, ROP needs to be calculated. ROP is defined as the advancement of the drilling bit per unit time while drilling a formation and is calculated as: 24
ROP = Distance Drilled / Time (3.1)
ROP calculations can be divided into two types: (1) instantaneous ROP, measured while the drilling operation is still occurring, and (2) average ROP, measured over the total distance interval after the formation has been drilled (Mensa-Wilmot et al. 2010).
ROP is considered to be one of the main factors that influence drilling efficiency and operational costs. Most performance-quantifying metrics such as Mechanical Specific Energy (MSE), Cost per Foot (CPF), and feet per day are strongly affected by ROP. An increase in ROP generally leads to an increase in drilling efficiency and a decrease in costs (Mensa-Wilmot et al. 2010). Researchers have identified the parameters that affect ROP as well as examined ways to remediate and optimize ROP to enhance drilling efficiency.
The main motivation for undertaking ROP research is to decrease drilling costs. For example, in BP operations, onshore drilling time accounted for up to 30% of the well's cost (Fear 1999). The costs associated with offshore drilling are even higher (Hemphill et al. 2001; Mansour 2017). Even though serious financial costs may occur due to drilling time, ROP is still a complex problem that is not well understood. Therefore, to minimize costs, drilling engineers need to find the best method to quickly drill a well while maintaining wellbore integrity.
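The distinction between instantaneous and average ROP in Eq. 3.1 can be illustrated with a short sketch; the depth-versus-time record below is invented for demonstration.

```python
# Invented depth-vs-time record: (elapsed time in hours, measured depth in ft).
record = [(0.0, 5000.0), (1.0, 5080.0), (2.0, 5135.0), (3.0, 5215.0)]

# Instantaneous ROP approximated over each short interval (Eq. 3.1 applied locally).
for (t1, d1), (t2, d2) in zip(record[:-1], record[1:]):
    print(f"{t1:.0f}-{t2:.0f} h: ROP = {(d2 - d1) / (t2 - t1):.1f} ft/h")

# Average ROP over the whole interval, computed after it has been drilled.
t0, d0 = record[0]
tn, dn = record[-1]
print(f"Average ROP = {(dn - d0) / (tn - t0):.1f} ft/h")
```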
other as well as ROP (Elkatatny et al. 2017). Many factors such as lithology types, bit design, drilling fluids, drillstring rotation speed (RPM), and weight on bit (WOB) can affect ROP (Mensa-Wilmot et al. 2010). Brantly and Clayton (1939) studied various factors affecting ROP while drilling: RPM, circulating fluid volume and pump rate, WOB, and equipment and personnel. The authors used the data from tour reports, bit records, questionnaires, and formation records in many different fields over seven U.S. states. The authors conclude that all factors studied affected ROP (Table 3-1). Table 3-1 ROP study results (Brantly and Clayton 1939). Factors Studied Effect When the rotating speed increases, ROP increases. However, it will not increase ROP by much when the Drillstring rotation speed rotating speed has reached a critical limit. To enhance ROP, the engineer will have to alter the other factors. The higher the fluid volume and pump rate, the higher the Fluid volume and pump rate ROP. However, there is also a critical limit for fluid volume and pump rate. WOB will only increase ROP if the bit is penetrating a hard formation. For soft formations, rotating speeds and fluid Weight on bit volumes are more effective. For brittle formations, there is a certain limit that WOB can be increased to alter ROP positively. Above that limit will cause the ROP to decrease. The better the design of the drilling bit, drilling pipes, and Drilling Equipment drilling fluid, the higher the ROP. Hellums (1952) studied the effect of pump horsepower (PH) on ROP. The authors analyzed data obtained from drilling records in eight different southern U.S. states. The results showed that ROP is almost proportional to PH; in other words, when PH is increased, ROP increases. 26
Bielstrein and Cannon (1950) investigated the effect of bit design and hydraulic factors on ROP. All tests were made in the field via various wells. The authors chose formations that were classified as either soft or medium hard and nonabrasive. The authors studied the effect of nozzle fluid velocity, nozzle efficiency and power factor, and fluid distribution around the bit on ROP (Table 3-2). Table 3-2 ROP Study results (Bielstrein and Cannon 1950). Factors Studied Effect When nozzle fluid velocity increases, ROP increases. Nozzle fluid velocity The effect of nozzle efficiency and power factor on Nozzle efficiency and power factor ROP varies and does not show a constant relationship. The more optimized the fluid distribution is around the bit, the higher the ROP due to hole cleaning. Fluid distribution around the bit Fluid forced through the teeth of the bit will cause better distribution, ensure circulation of the cuttings, and therefore, clean the hole. Tables 3-1 and 3-2 show that many variables affect ROP. This implies that ROP can vary highly from well to well. Therefore, the proposed method should work effectively for a wide range of velocities. 3.2 Directional Drilling Automation Drilling automation is a complex problem that has attracted many researchers. Over the years, various studies have examined ways to automate drilling operations. Many factors need to be analyzed to have a fully automated drilling system. Drilling automation can be defined and classified through three different terms (Annaiyappa 2013): 27
1. Machine Automation: this process is achieved when mechanical power replaces human labor. Equipment from various vendors should be integrated to achieve the same outcome that humans can do. 2. Drilling Process Automation: this process is achieved when equipment from the rig and service companies can work together without manual instructions to the driller. 3. Drilling Advisory Automation: this process is achieved when drilling parameters(cid:182) computations are integrated with the rig control system instead of being handed to the driller manually. In the earliest times of drilling operations, very simple equipment was required to start the drilling job. A big part of the drilling procedure depended on heavy human labor. Human safety was a significant concern because of the unguarded equipment and hazardous operations occurring during drilling. Safety, as well as the intention of decreasing labor costs, motivated automating drilling operations (Eustes 2007). The main objective of drilling automation is to increase rig safety, maintain operations during harsh weather conditions, reduce the overall costs, reduce people on the rig and control the rig remotely, improve drilling efficiency and ROP, and reduce environmental footprint (Eustes 2007). A typical onshore drilling rig has seven different systems working together. These are the rig power system, the hoisting system, the circulating system, the rotary system, the well-control system, the trajectory control system, and the well-monitoring system (Bourgoyne et al. 1986). The rig power system provides power to the rig through internal combustion diesel engines. The hoisting system is mainly used to lower or raise casing strings, drillstrings, or other subsurface equipment. The circulating system ensures the removal of the formation(cid:182)s cuttings by circulating 28
fluid around the wellbore. The rotary system applies torque to the drilling bit and allows rotation for cutting. The well-control system maintains uncontrolled formation fluids within the wellbore and prevents unwanted fluid flow to the surface. The trajectory control system (e.g., a rotary steerable system) allows drilling of a specific well path (e.g., a deviated well). Finally, the well- monitoring system allows the driller to continually monitor information such as depth, rotary speed, rate of penetration, and directional drilling data (Bourgoyne et al. 1986; Schaaf et al. 2000a). A fully automatic drilling system can only be achieved when all these systems are working together automatically in an efficient and unified manner. The focus of this dissertation is the trajectory control system and its progress towards directional drilling automation. Therefore, only rotary steerable systems are discussed. 3.2.1 Rotary Steerable Systems (RSS) RSS is a type of trajectory control system used to automate directional drilling operations. RSS was initially developed for Extended Reach Drilling (ERD) to prevent problems caused by drag (Schaaf et al. 2000a). Today, RSS is used for performance drilling, geosteering, and improved hole cleaning (Schaaf et al. 2000b; May et al. 2015). RSS is classified under two categories: (1) point-the-bit systems and (2) push-the-bit systems (Schaaf et al. 2000a). Point-the-bit systems have a rotating shaft within a non-rotating stabilizer that can be deflected at an angle away from the wellbore axis. This type of system works by placing a bend along the trajectory. The system is divided into three components: a power generation section, electronics and sensor section, and steerability enhancement section (Figure 3-1) (Schaaf et al. 2000a; Schaaf et al. 2000b). 29
Figure 3-1 A schematic showing the different components of point-the-bit rotary steerable system (Schaaf et al. 2000a). The power generation section provides power for the tool to work. The steering section uses the weight and torque on the bit. Additionally, it allows the system to move at an angle that offsets the tool(cid:182)s axis. The electronics and sensor section allows control of the system(cid:182)s direction. Sensors, such as magnetometers and gyroscopes, analyze the coordinates of the current trajectory and compare it with the target(cid:182)s coordinates. A signal is then sent to the steering section via the electronics to allow the drilling trajectory to reach the target(cid:182)s coordinates. In summary, the system allows continuous data feedback and control of the trajectory until the desired coordinates are reached. (Schaaf et al. 2000a; Schaaf et al. 2000b) Push-the-bit systems include steering pads that perform a side-force on the formation, which facilitates directional drilling. These steering pads can be positioned on either a rotating or a non-rotating housing (Schaaf et al. 2000a; Bryant 2019). Push-the-bit systems are usually divided into four sections: head, power, control, and MWD system. The RSS head contains annular ports, oil fill ports, and hydraulic steering pads. The annular ports prevent excessive pressure buildup in case the bit gets plugged. A centralizer channels the mud to the hydraulic oil fill ports. The oil is pressurized and forces pistons to activate a specific number of pads (Bryant 2019). 30
Secondly, the RSS power contains batteries or a turbine that provides electrical power for the system to work. Thirdly, the control system contains sensors that analyze the coordinates of the drilled trajectory and control the number and location of pads activated based on the predefined trajectory. Finally, the MWD system provides two-way communication between the surface and the bottomhole to allow the RSS to drill across the preset drilling path (Bryant 2019). This research intends to provide the RSS with a bottomhole MD measurement to prevent the necessity of communication between the surface and the bottomhole. The distance calculated by the proposed method should be sent to the RSS, where the drilled and predefined trajectories are compared. Then, the RSS should be able to continue drilling the preset path without waiting for and depending on new surface distance measurements. 3.3 Logging While Drilling The proposed method includes an IWD tool used to estimate the distance drilled at the bottom of the well and provide real-time ROP data. Since the IWD tool is a logging tool, logging while drilling techniques are discussed in this section. This section focuses on how information from the LWD tool is collected, processed, and sent to the surface. Finally, it discusses how imaging while drilling tools operate. LWD is a method of gathering data while the well is being drilled. A data-gathering tool, such as a resistivity tool, is attached above the drilling bit in the bottomhole assembly (Figure 3- 2). This design makes the tool follow the drilling bit (Allen et al. 1989). 31
Figure 3-2 A schematic showing LWD and MWD tools combined. Here, the type of LWD tool is a Compensated Dual Resistivity (CDR) tool that measures the resistivity of the formation (ODP Legacy 2000). The design of the LWD tool allows it to survive the harsh conditions occurring downhole while drilling. It can withstand problems such as a bent or a vibrating bottomhole assembly and still transmit accurate data (Allen et al. 1989). The LWD is almost always accompanied by another tool and method called MWD that provides information regarding the azimuth and inclination of the well. Any information gathered from the tool is sampled with respect to time, for example, 0.1 Hz (Fagin 1994). The sampling rate can only be set on the surface and cannot be changed unless the tool returns to the surface. Dividing the sampling rate by the ROP results in the number of samples generated for each interval distance drilled. Since ROP values change throughout drilling, the sampling rate must be altered to account for these changes. Therefore, when ROP increases, samples per foot must increase. Typically, the sampling criteria can range from 32
capturing one image every 10.0 s to capturing one image every 120.0 s, depending on the ROP and the desired amount of data (Allen et al. 1989). There are two ways that the engineer can get the information captured by the LWD: (1) mud-pulse telemetry (MPT) system and (2) memory chip. If data are wanted in real-time, then the MPT system is used to send them to the surface. In contrast, nonessential data are stored in the LWD's memory chip and can be read when the tool is brought to the surface. Figure 3-3 shows an example of LWD data sent to the surface for real-time analysis (Allen et al. 1989; Hutin 2017). Figure 3-3 Example of logging data where the plots show from left to right: (1) gamma ray, (2) resistivity, (3) travel time, (4) bulk density, and (5) total organic carbon (Mahmoud et al. 2017). The MPT system allows for two-way data transmission between the surface and the bottomhole. Firstly, the drilling fluid is forced through a valve, where the flow is restricted, and pressure pulses are generated. These pressure pulses represent the data measured in the bottomhole, which are then sent to the surface and read by a pressure transducer that decodes the 33
frequency (𝜔), cut-off frequency (𝜔 ), number of poles (n), and the maximum passband 𝑐 gain (𝜀). After removing the high frequencies representing the noise, the signal can then be reconstructed in its denoised form. There are also other types of filters similar to the Butterworth filter that can denoise the signal. 3.3.2 Imaging While Drilling As mentioned before, the proposed method relies on taking images of the formation, and therefore, an IWD tool is needed. IWD is a process of measuring real-time images of the wellbore while it is being drilled. These images can be taken from different formation evaluation measurements and can be in the form of ultrasonic, resistivity, density, electromagnetic, or gamma-ray images. As of 2021, various service companies have provided different measurement tools and methods for imaging the wellbore. The main reason for performing IWD is to have an image that reveals the formation and its geological features. Once this image has been generated, various engineering analysis techniques can be performed to understand the characteristics of the formation that is being drilled. Schlumberger researcher Montaron (1996) was one of the first researchers to develop a real-time wellbore imaging tool and method using MWD telemetry. Following this breakthrough invention, other service companies started manufacturing IWD tools. The IWD tool (Figure 3-5) works by measuring real-time formation characteristics, such as gamma-ray measurements, using logging sensors. These measurements are made around the whole wellbore (i.e., 360 degrees) and are then compressed using a downhole processor and transmitted to the surface via MWD telemetry (Montaron 1996). 35
Figure 3-5 A schematic showing the different parts of an IWD tool. The azimuthal gamma ray is capturing information for the full circumference (360 degrees) of the wellbore (Columbia University 2020; Schlumberger 2000). Once the compressed data have reached the surface, each data point is combined with its toolface position and azimuthal orientation. Each datum is assigned a translational distance value where the data points are correlated to their depth position. Finally, the captured data are then decompressed and converted to pixels. The pixel arrangement generates a high-resolution resistivity image of the formation (Montaron 1996; Fulda et al. 2007). IWD tools have many different field applications. Bazara et al. (2013a) used the IWD tool to identify fractures causing excessive lost circulation. The tool provided engineers with ultra-high-resolution images of the borehole. Based on these images, the engineers could characterize the geometry, morphology, and conductivity of each fracture, and calculate the structural dip. The analysis of the fractures assisted them in finding solutions to remediate lost circulation. Bazara et al. (2013b) used the IWD tool for geological steering, often known as geosteering, which is the art of drilling the well based on well logging data instead of three- dimensional targets in space. Since the IWD tool provides a method formation evaluation and 36
structural dip calculation, the engineers could geosteer the well more accurately than when using standard LWD tools (Bazara et al. 2013b) For this research(cid:182)s purpose, the IWD tool is a source of measuring the distance drilled. Therefore, instead of correlating the images to depth like other typical applications, the images will be correlated to time. Additionally, the type of sensors used should be ultrasonic to show the topology of the borehole wall. A good example of an ultrasonic IWD is the TerraSphere tool, a registered trademark of Schlumberger. The tool has four focused pulse-echo transducers and generates high-resolution images of the wellbore(cid:182)s topological features. 3.4 Summary This chapter discussed ROP, drilling automation, RSS, and IWD. The proposed method is designed to capture images using an IWD, calculate the ROP, and then integrate it to estimate the distance drilled. Combining this distance-calculation process with an RSS that measures the inclination and azimuth, the bit(cid:182)s location can be found. Continuously locating the bit at the bottom of the hole facilitates directional drilling automation. 37
CHAPTER 4 RESEARCH PLAN This chapter discusses the operating principle of the proposed method, the design of the image-matching algorithm, and the different experimental procedures performed. 4.1 Operating Principle The method proposed in this dissertation calculates the distance drilled in the bottomhole using identical imaging sensors. The concept proposed will be explained using an example of a theoretical imaging tool (Figure 4-1). The sensors on this tool are imaging sensors that capture formation images and are placed within a known distance of each other (e.g., D1). Additionally, while the figure shows a 2D drawing of the tool, a 3D illustration would show the locations of four different sensors located around the tool to capture 360° images. In addition to the sensors, the tool contains a Programmable Memory Chip (PMC) with a key-point identification algorithm, such as SIFT, to locate key-points for all images taken. The PMC also contains a pattern-matching algorithm, such as FLANN, which compares images taken from one sensor with images taken from the other sensor until a good match is reached. 38
where C_D is the depth of the casing shoe, D1 is the distance between Sensor 1 and Sensor 2, and L_b is the total length of the drilling bit, drill collar, and PMC.
Additionally, because the timestamps of each image are known, the PMC can identify the difference between the times that both sensors took an image of Location “A” and use the distance between the sensors to calculate the average ROP for that time interval using the following equation:

ROP = D1 / (T_s2 − T_s1)    (4.2)

where T_s2 is the time that Sensor 2 captured an image of Location “A” and T_s1 is the time that Sensor 1 took an image of Location “A”.
The method described above (Method 1) is straightforward, and the process can be repeated many times until the drilling operation is terminated. However, there are two main challenges associated with this method:
1. The total MD consists of a summation of the discrete calculations representing the distance drilled. Each discrete calculation is equal to the distance between the sensors, and therefore, drilled distances that are shorter than the distance between the sensors cannot be computed.
2. If Sensor 2 does not capture the image needed to match the image from Sensor 1, a “match miss” is introduced into the system. This miss will jeopardize the whole MD measurement because the tool does not know its location. A match miss can also be introduced in the system from a noisy environment that alters one of the image's features, so that the algorithm cannot recognize the pair as the optimum match.
Based on the challenges mentioned above, the algorithm should use a different method that calculates the distance drilled to prevent disruptions. Therefore, both Sensors 1 and 2 will
capture images at a specific rate, allowing them to constantly find optimum image pairs and calculate many rates of penetration (Method 2). Subsequently, the algorithm will compute the following equation,

MD = C_D + L_b + ∫ ROP dt    (4.3)

and constantly integrate the ROP, continuously providing an estimate of the distance drilled. The accuracy of Method 2 depends on the sampling rate and sensor location but is less affected by match misses than Method 1.
In some cases, more than two sensors may be needed, especially during high drilling rates, to ensure image-matching. This is because one sensor may miss capturing an essential image due to the high speed of drilling. Therefore, adding multiple sensors can assist in mitigating missed matches because it increases the likelihood of at least one sensor capturing an image similar to the first sensor's image.
The proposed method will decrease the RSS dependence on the measured depth calculated from the surface. The RSS will frequently obtain the distance drilled, process all required data to determine the current coordinates, and take the appropriate actions to guide the trajectory to the target with minimum dependence on surface measurements. This method is not designed to entirely cut off communications with the surface. Information will always be sent to the surface, and if unexpected events occur, such as the tool being incapable of finding a good match, then surface corrections will be requested by the tool.
To summarize, the sensors are continuously imaging, registering, analyzing (obtaining the “fingerprints”), and comparing the fingerprints and times as a moving window. This procedure allows a continuous measurement of the ROP. Continuously integrating the ROP produces an estimation of the total distance drilled.
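To make the two bookkeeping schemes concrete, the sketch below is a minimal illustration with hypothetical names, not the dissertation's implementation. It contrasts Method 1, which advances MD in DBS-sized increments and reports the interval ROP of Equation 4.2, with Method 2, which trapezoidally integrates a stream of ROP samples as in Equation 4.3. Any consistent units may be used.

```python
# Minimal sketches of Method 1 and Method 2 (illustrative only; names are hypothetical).

def method1_md(c_d, l_b, d1, matched_times):
    """matched_times: list of (t_s1, t_s2) pairs for the same borehole location.
    MD grows by one DBS (D1) per confirmed match; Eq. 4.2 gives the interval ROP."""
    md = c_d + l_b
    rops = []
    for t_s1, t_s2 in matched_times:
        md += d1                              # one discrete DBS-sized increment
        rops.append(d1 / (t_s2 - t_s1))       # Eq. 4.2
    return md, rops

def method2_md(c_d, l_b, rop_samples):
    """rop_samples: list of (time, rop) pairs from the continuous matching loop.
    Eq. 4.3: MD = C_D + L_b + integral of ROP dt (trapezoid rule)."""
    md = c_d + l_b
    for (t0, r0), (t1, r1) in zip(rop_samples, rop_samples[1:]):
        md += 0.5 * (r0 + r1) * (t1 - t0)
    return md
```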
4.2 Experimental Design and Procedure This section explains and describes the experimental design and procedures made to evaluate the concept of the method proposed in this dissertation. 4.2.1 General Design of Experiments An IWD tool with multiple similar sensors is not currently designed or readily available. Therefore, the theoretical tool illustrated in Figure 4-1 is redesigned using cost-effective laboratory equipment that tests this concept. For this simple design, optical cameras will be used to capture images of natural or artificial “sceneries(cid:180) representing the borehole wall instead of ultrasonic imaging sensors capturing images of a drilled well. It was mentioned previously in Section 2.3 that the type of imaging sensor required should be able to identify the borehole wall topology. Based on this, it was proposed to use an ultrasonic IWD because ultrasonic images could give information about fractures and rock geometry, among others. Similarly, when captured in clear fluids, optical images are highly accurate and show the geometric features required for the image-matching process. It is understood that the use of optical cameras during drilling operations is challenging because most drilling fluids are not transparent to light. Nevertheless, using optical cameras to prove the concept is justified because both ultrasonic and optical images provide similar features required for the key-points identification (Figures 4-4 and 4-5). 44
Figure 4-9 Scenery 4: three actual ultrasonic image well logs (CT-GW 22 (left), US-53-21-703 (middle), and G-2996 (right)) combined (USG 2021). 4.2.2 Image-Matching Algorithm The image-matching algorithm finds the fingerprint of each image taken from the different sensors and uses the equations described in Chapter 2 to find a match. The algorithm processes the data until a match occurs. Then, it extracts the images' timestamps and uses the distance between the sensors to calculate the velocities. Finally, it integrates the velocities to compute the distance traveled. The algorithm is designed to work in real-time or non-real-time, depending on the type of experiments made. The steps of the algorithm are: Step 1: All the experiments use optical cameras. In some experiments, videos are recorded, and in others, the algorithm captures images at a defined sampling criterion (e.g., one image every one second). Step 2: If images are captured and not videos, then this step is skipped. The videos taken by the cameras have a constant frame rate of 30 frames per second. This step in the algorithm 49
applies a preset sampling criterion to extract the images from the videos and assign a timestamp to each image. For example, if the videos taken by Sensors 1 and 2 are of 60.0 s duration, then the videos can be subsampled at a rate of one image every 2.0 s. Therefore, a total of 30 images is taken by each camera, where Image 1 from both sensors will have a timestamp of 2.0 s.
Step 3: The SIFT algorithm combined with FLANN and a GRT of 0.45 are used to match the images from the different sensors. The algorithm is based on the theory that the match rate will increase to the optimum match and then decrease. Therefore, one image from Sensor 1 is matched with images from Sensor 2 until the match rate starts declining. The code waits and matches a predefined number of images after the decrease to ensure that this is the maximum point. The algorithm then moves on to the next image pair, and the process keeps repeating until the experiment/drilling operation is terminated.
SIFT and a GRT value of 0.45 were chosen based on preliminary testing that used multiple sceneries and images. The SIFT method provided the highest accuracy in terms of match ratings and had considerably low computational times. The GRT of 0.45 ensures that only similar images match, while everything else gives a match rate very close to 0%. The following equation is used to calculate the GRT match rate:

GRT Match Rate (%) = GRT Good Matches / [(kps_1 + kps_2) / 2] × 100    (4.4)

where GRT Good Matches is the number of good matches calculated by the Good Ratio Test, kps_1 is the number of key-points computed using the image from the first sensor, and kps_2 is the number of key-points computed using the image from the second sensor. It is important to note that this equation only holds if images have key-points. If the SIFT algorithm cannot find key-points for at least one of the images, then both images are disregarded, and the algorithm analyzes the next pair of images.
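For readers who want to see how Step 3 maps onto a standard library, the following sketch uses OpenCV's SIFT detector and FLANN matcher with a good-ratio threshold of 0.45 to compute the match rate of Equation 4.4. It is an illustrative reconstruction under stated assumptions (grayscale inputs, a KD-tree FLANN index), not the dissertation's exact code.

```python
# Illustrative sketch of Step 3 (SIFT + FLANN + a 0.45 good-ratio test).
import cv2

GRT = 0.45
sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),   # KD-tree index for SIFT's float descriptors
                              dict(checks=50))

def grt_match_rate(img1, img2):
    """Return the GRT match rate (%) of Eq. 4.4, or None if either image has no key-points."""
    kps1, des1 = sift.detectAndCompute(img1, None)
    kps2, des2 = sift.detectAndCompute(img2, None)
    if not kps1 or not kps2:
        return None                        # pair is disregarded, as described above
    good = 0
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < GRT * pair[1].distance:
            good += 1                      # good-ratio (Lowe-style) test at 0.45
    return 100.0 * good / ((len(kps1) + len(kps2)) / 2.0)    # Eq. 4.4
```

In the full procedure, one Sensor 1 image would be scored against successive Sensor 2 images with a function like this until the rate peaks and then declines.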
After the interpolation, the algorithm finds the timestamp that gave the highest match rate. In this case, the interpolation shows that the match happened at 23.56 s and not 24.0 s. This interpolation estimates the time the match occurred since the sampling criterion may have missed it.
Step 5: The algorithm checks for any times that the cameras were stationary. For example, suppose that two images from Sensor 1 at times 8.0 s and 6.0 s matched with the same image from Sensor 2 at time 10.0 s. This means that from time 6.0 to 8.0 s, Sensors 1 and 2 were stationary and not moving because they were capturing the same images for this period. In this case, Sensor 1 at time 6.0 s matches with Sensor 2 at 10.0 s, and the match that occurred at time 8.0 s is disregarded because the object was stationary. Another way to test for the stationary position is to match Sensor 2 images together and ensure that the match rate changes. If the match rate is constant, then the object is stationary. Only the technique mentioned in the example is used for this research, not the Sensor 2 image-matching method.
Step 6: The following equation is used by the algorithm to calculate the velocity:

v(t) = Distance Between Sensors / (T_s2 − T_s1)    (4.5)

For example, suppose the algorithm found the matched timestamps presented in Table 4-1. The velocity for the row with timestamps 18.00 s and 23.56 s is therefore v(t) = 5 cm / (23.56 s − 18.00 s) = 0.9 cm/s. Here, the distance between the sensors is assumed to be 5 cm.
Table 4-1 An example of matched data gathered from an experiment.
Timestamp of Sensor 1 (s)    Timestamp of Sensor 2 (s)
0.00                         2.28
4.00                         9.62
6.00                         12.91
14.00                        20.24
18.00                        23.56
22.00                        28.12
28.00                        35.49
40.00                        43.12
Step 7: The algorithm uses the following equation to calculate the distance traveled (trapezoidal integration):

D_i = D_(i−1) + (1/2) [v_i + v_(i−1)] [t_s2^i − t_s2^(i−1)]    (4.6)

where D_i is the current distance traveled, D_(i−1) is the previous distance calculated, v_i is the velocity calculated at the current match, v_(i−1) is the velocity calculated at the previous match, t_s2^i is the timestamp of the image taken by Sensor 2 for the current match, and t_s2^(i−1) is the timestamp of the image taken by Sensor 2 for the previous match. Finally, the algorithm assumes a linear interpolation to connect the distance data points together.
4.2.3 Experiment 1 - Image-Matching Code Accuracy and Key Variables
This section explains the procedures performed to evaluate the effect of different variables on the image-matching algorithm and the distance calculation. The main objective of this experiment is to test the effectiveness of the algorithm. This experiment aims to determine if the algorithm can accurately differentiate between the images captured from similar sensors and correctly find the optimal match for each image. Additionally, it seeks to identify the factors and variables that affect the distance calculation. Those factors can be sampling criteria, the distance between sensors, noise, and match misses. This experiment is not meant to generate accurate
Step 4: Hold the frame above the scenery and ensure that the camera lens on the first cellphone (i.e., Sensor 1) is vertically aligned above the start marker (Figure 4-15). At this point, start recording a video using both cameras at the same time. Figure 4-15 Performing Experiment 1 by holding the frame above the started marker and moving towards the end marker. Step 5: Once the cameras are recording, start moving towards the second marker. Identify the times that the lens of the second cellphone (i.e., Sensor 2) is vertically aligned above the start and end markers. This step requires a second person constantly monitoring the position of the camera relative to the marked points. Keep moving until the end marker and then stop recording both videos simultaneously. The movement in this experiment is not controlled or automated and is done randomly, i.e., a person holding the cameras attached to the gate and walking. The purpose is to see which variables cause the accuracy to change and not to increase the accuracy of the distance calculation. Therefore, random movement is not a problem for this experiment. 56
Step 6: Upload the videos recorded to the image-matching code for analysis and distance calculation. 4.2.3.2 Variables Tested The variables tested in this experiment are: 1) Sampling Criteria: this represents the time interval for which the algorithm captures one image. The sampling criteria tested for the granite rock test was capturing one image every 2.0 s, 4.0 s, 6.0 s, and 8.0 s. The sampling criteria tested for the white-painted wall test were similar but with an additional criterion of 10.0 s. 2) Missing Matches: the algorithm forces images not to match based on preset commands that control a miss rate. For example, suppose the algorithm found an optimal match between two images. If the miss rate is set to missing every other match, the algorithm is forced not to match the second pair of images. The different miss rates tested were missing one match every other match, two matches, five matches, and ten matches. 3) Distance Between Sensors: the distance that the sensors are placed away from each other is fixed. However, this test aims to know what happens if the sensors are placed further away or closer to each other. The distances between the sensors tested for the granite rock test were 8.3 cm, 11.0 cm, and 13.3 cm. The distances between the sensors tested for the white- painted wall were 11.0 cm, 15.0 cm, and 22.0 cm. 4) Random Noise: this test evaluates the effect that random noise has on the accuracy of the distance calculation. It aims to find if noise will prevent the algorithm from finding accurate matches. Additionally, it aims to see if an image denoiser can retrieve the image's key-points and maintain the accuracy of the distance calculation. Therefore, this test contains three parts: 57
4a) A normal image-matching process where no noise is added, and this is the control experiment. 4b) Salt and pepper noise is randomly added at a 0.55 ratio to all images from all sensors and compared to the results obtained from the control experiment (Figure 4-16). This is a type of noise added in an image that presents itself as white or black pixels, hence the name. Figure 4-16 Example of salt and pepper noise added to the experiment. 4c) The noisy images are denoised and compared to the control experiment. Figure 4-17 shows a normal, noisy, and denoised image. The denoising process is made by applying a 2D median filter to restore the image's features and then a bilateral filter. The average time needed to match two images captured from the granite rock is 1.30 s without a denoising algorithm and 0.75 s with a denoising algorithm. The average time needed to match two images captured from the white-painted wall is 0.80 s and 1.20 s with and without a denoising algorithm, respectively. The technical specifications of the laptop used to run the 58
sensors. The distance between the sensors is only controllable before the design phase; however, it cannot be changed once the sensors are fixed. Contrarily, the algorithm can continuously adjust the sampling criterion based on preset commands. Because the distances between the sensors are fixed, it is crucial to ensure that they will be compatible with different velocities. For example, suppose the sensors are very close to each other, and the tool is traveling at a relatively high velocity. In that case, there is a high chance that one of the sensors will miss the image that gives a good match unless an optimal sampling is chosen. Conversely, if the sensors are separated by long distance and the velocity is very low, it will take a considerable amount of computer power to process and match the optimal images unless the optimal sampling is chosen again. Additionally, a combination of low velocity and a long DBS may cause errors because false positives can occur due to the increased number of images used in the image-matching process, or misses can occur due to changes in wellbore conditions. As a result, the distance between the sensors and the sampling needs to be optimized to minimize errors. To design an optimized system, at least three sensors are needed to provide three different distances between the sensors. One of the distances will be short, one will be medium, and one will be long. Additionally, each distance between the sensors will require a specific sampling criterion or a range of optimum sampling criteria that mitigates misses and minimizes errors. The combination of optimum sampling criteria and optimum distance between the sensors will change when the velocity changes. Therefore, this experiment aims to find the optimum combination of sampling criterion and distance between the sensors for various velocities. 60
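To put rough numbers on this trade-off, the toy helper below is my own illustration, not part of the dissertation's algorithm. It estimates how many Sensor 2 images are captured while a feature travels from Sensor 1 to Sensor 2 for an assumed velocity, DBS, and sampling criterion; very small values make a match miss likely, and very large values inflate the matching workload.

```python
# Back-of-envelope helper for the sampling/DBS/velocity trade-off (illustrative only).
def images_per_dbs(dbs_cm, velocity_cm_per_s, sampling_s):
    """Number of images captured while a feature travels one DBS
    (travel time = DBS / velocity)."""
    travel_time_s = dbs_cm / velocity_cm_per_s
    return travel_time_s / sampling_s

# e.g., a 15 cm DBS at 0.5 cm/s sampled every 2 s -> 15 images to search per match,
# while the same DBS at 5 cm/s sampled every 2 s -> only 1.5 images, so a miss is likely.
print(images_per_dbs(15.0, 0.5, 2.0), images_per_dbs(15.0, 5.0, 2.0))
```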
4.2.4.1 Experimental Procedure Experiment 2 uses Scenery 3 mentioned in Section 4.2.1. The procedure for this experiment is: Step 1: Build a motorized dolly cart with an adjustable speed controller (Figure 4-18). The dolly imitates the theoretical IWD tool illustrated previously in Figure 4-1. Two different motors with different rotations per minute (6 RPM and 20 RPM) were used interchangeably in this experiment to allow a wide range of velocities to be tested. The speed controller enables the velocity to be adjusted accordingly. Figure 4-18 A motorized dolly cart with an adjustable speed controller. Step 2: Mount Polyvinyl Chloride (PVC) pipes to the ground using a mounting putty. The dolly is designed to travel on PVC pipes to ensure a smooth straight trajectory so that the cart can run reliably with minimum interruptions when set to constant speeds. Step 3: Attach the three cellphone cameras at predetermined locations to account for the various distances between the sensors and place the dolly correctly on the pipes (Figure 4-19). The distances between the sensors tested in this experiment are 15.0 cm, 30.0 cm, and 45.0 cm. 61
Step 5: Subsample the videos using different sampling criteria because the experiment is not performed in real-time. The sampling criteria tested in this experiment are 2.0 s, 5.0 s, 12.0 s, 20.0 s, and 30.0 s. Step 6: Compare the average velocity calculated by the algorithm over the total 5-minute run with the average velocity of the cart measured using the stopwatch. Step 7: Calculate the average velocity and the percentage error for each combination of sampling criterion and distance between the sensors. For example, suppose that the average velocity of the cart is 63 ft/hour. Then, suppose that for the first and second runs, the algorithm calculated many velocities using various timestamps from matched images (Table 4-2). Those velocities (for each run) are then integrated with respect to time to calculate the total distance traveled over the whole run. Therefore, dividing the total distance traveled by the total time results in the average velocity of the whole run. Since two runs were made, the average velocity calculated by the algorithm when the cart is moving at 63 ft/hour will be the mean of both average velocities calculated from the two runs. Table 4-2 Velocities calculated for two runs. Velocity - first run (ft/hour) Velocity- second run (ft/hour) 58.20 59.38 58.39 59.45 58.62 59.74 58.32 60.70 58.43 60.92 63
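As a sketch of the Step 7 arithmetic, the helper below integrates a run's velocities over time, divides by the run duration to get that run's average velocity, and then averages the two runs. The times and velocities are placeholders in the spirit of Table 4-2, not the actual experimental data.

```python
# Illustrative Step 7 helper; the sample times and velocities below are placeholders.
import numpy as np

def run_average_velocity(times_s, velocities_ft_per_hr):
    t = np.asarray(times_s, dtype=float)
    v = np.asarray(velocities_ft_per_hr, dtype=float)
    distance = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))  # trapezoidal integration
    return distance / (t[-1] - t[0])                        # run-average velocity

run1 = run_average_velocity([0, 60, 120, 180, 240], [58.20, 58.39, 58.62, 58.32, 58.43])
run2 = run_average_velocity([0, 60, 120, 180, 240], [59.38, 59.45, 59.74, 60.70, 60.92])
average_velocity = 0.5 * (run1 + run2)   # mean of the two runs, compared against the stopwatch value
```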
4.2.5 Experiment 3 - Real-time Distance Calculation on Actual Well Logs Experiment 2 provided an optimum combination of sampling criterion and distance between the sensors for each average velocity. For this experiment, the image-matching algorithm is modified by including the optimal combinations from Experiment 2. Afterward, various random velocities are tested during the same run to see if the algorithm will choose the correct combination and correctly estimate the distance. This experiment was performed in real- time on Scenery 4 described in Section 4.2.1. Additionally, three tests were made in this experiment: 1. Test 1 is made at a constant velocity different from the velocities tested in Experiment 2. Figure 4-20 shows the constant velocity of the cart at an average velocity of 89 ft/hour. Note that the constant velocity test is not always maintained due to rough velocity changes occurring at the pipe connections. This is a noteworthy limitation of the experimental design and is demonstrated at time 280.0 s in the figure. 2. Test 2 is made at random velocities by constantly adjusting the speed controller. Figure 4-21 shows the random velocity of the cart. 3. Test 3 is made at random velocities but is different from the second test. Figure 4-22 shows the random velocity of the cart. 64
for low velocities. For example, suppose sampling at one image every 2.0 s and separating the sensors by 15.0 cm provides the lowest errors for velocities below 60 ft/hour. In that case, the algorithm will initially only match the images using this combination and calculate the velocity. Therefore, even though the webcams capture one image every one second, the algorithm will only match images at a two-second interval using the sensors separated by 15.0 cm. The algorithm will keep all the other captured images stored but will not attempt to match them. Once the velocity is calculated, the algorithm compares the velocity with the results obtained from Experiment 2 and sees if the initial guess of the optimum combination of sampling criterion and distance between the sensors is correct. If the initial guess is correct, then the algorithm calculates the distance using Step 7 in Section 4.2.2. Conversely, if the initial guess of the optimum combination is incorrect, the algorithm repeats the process using the correct combination and the images stored. It then calculates the distance using Step 7 in Section 4.2.2. This whole process is repeated until the cart comes to a complete stop. To summarize, Experiment 1 helps in understanding the key variables that affect the image-matching code. Once the key variables are known, Experiment 2 analyzes the controllable variables to minimize errors and computational time. Finally, Experiment 3 tests the model developed in Experiment 2 in real-time on real ultrasonic imaging logs to prove that the distance can be calculated using an image-matching process and that misses can be minimized when variables are optimized. 70
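The selection loop described above can be pictured with the following sketch. The function names and the velocity break-points are placeholders of my own; the calibrated combinations come from Experiment 2, not from this code.

```python
# Hedged sketch of the real-time combination-selection logic (illustrative only).
LOOKUP = [  # (max_velocity_ft_per_hr, sampling_s, dbs_cm) - placeholder values
    (60.0, 2.0, 15.0),
    (120.0, 5.0, 30.0),
    (float("inf"), 12.0, 45.0),
]

def best_combination(velocity):
    for v_max, sampling_s, dbs_cm in LOOKUP:
        if velocity <= v_max:
            return sampling_s, dbs_cm

def update(stored_images, current_combo, match_and_velocity):
    """match_and_velocity(images, combo) -> velocity is the image-matching step."""
    velocity = match_and_velocity(stored_images, current_combo)
    combo = best_combination(velocity)
    if combo != current_combo:                                  # initial guess was wrong:
        velocity = match_and_velocity(stored_images, combo)     # re-match the stored images
    return velocity, combo
```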
CHAPTER 5 RESULTS This chapter presents the results obtained from the three experiments described in Chapter 4, and demonstrates the effects of different variables such as velocity, the distance between the sensors (DBS), and sampling criteria on the accuracy of the distance calculations. Additionally, it presents the error analysis performed on the relationship between sampling, DBS, and velocity and shows the final model developed to optimize the image-matching algorithm and minimize errors. Finally, the method's efficacy is demonstrated here by presenting the results obtained from the real-time tests performed on real well logs obtained by acoustic televiewers. 5.1 Experiment 1 – Image-Matching Code Accuracy and Key Variables This section discusses the results obtained from Experiment 1. This experiment evaluates the effects of noise, sampling, the distance between the sensors, and multiple sceneries on the accuracy of the image-matching algorithm and distance calculation. The results presented from Experiment 1 are demonstrated in terms of percentage errors on a box and whisker plot. The symbol "X" represents the average data point, and "O" represents internal points and outliers. The median is represented as a line and, for this experiment, is always equal to the average because only two data points are used for each box. Finally, when reading the graphs, a negative percent error means that the distance is undercalculated, and a positive percent error means that the distance is overcalculated. The following equation shows the percentage error calculation: 71
Percentage Error = (D_c − D_O) / D_O × 100    (5.1)

where D_c is the distance calculated by the algorithm, and D_O is the actual distance traveled, checked by a measuring tape. Figure A-4 shows the general labels related to box and whisker plots.
5.1.1 The Effect of Sampling and the Distance Between the Sensors
As presented in Section 4.2.3.2, the distance between the sensors was varied to observe its effect on the distance calculation. The experiment was performed on two sceneries: the granite rock and the white-painted wall.
5.1.1.1 Granite Rock
The objectives of this test are to see: (1) if the image-matching algorithm can correctly match images captured from Scenery 1 (i.e., granite rock described in Section 4.2.1), and (2) how varying the sampling criterion and the distances between the sensors affect the accuracy of the distance calculation. The test was repeated once to give a rough estimate of the average percentage error and the range of percentage errors for each combination of DBS and sampling criterion. It is understood that two data points do not represent the true average percentage error. However, this test only aims to understand how the DBS and sampling affect the algorithm. Performing statistical analyses or acquiring accurate distance calculations are not the objectives of this experiment.
To complete the first objective, all the images matched by the algorithm in the first run were visually inspected and verified that they indeed matched. This inspection was made for all sampling criteria and all the distances between the sensors. As a result, it was demonstrated that the algorithm matched the granite images correctly.
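For completeness, a small helper reflecting Equation 5.1 and the two-run summaries reported in Tables 5-1 and 5-2 is shown below; the numbers in the example are placeholders, not measured data.

```python
# Eq. 5.1 and a two-run summary (illustrative helper with placeholder values).
import statistics

def percentage_error(d_calculated, d_actual):
    return (d_calculated - d_actual) / d_actual * 100.0

def two_run_summary(d_run1, d_run2, d_actual):
    errors = [percentage_error(d_run1, d_actual), percentage_error(d_run2, d_actual)]
    return statistics.mean(errors), statistics.stdev(errors)

# e.g., two runs of 32.0 cm and 34.5 cm against a 33.5 cm tape measurement
print(two_run_summary(32.0, 34.5, 33.5))
```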
Table 5-1 shows the average percentage error from the two runs for each DBS and sampling criterion combination. The table shows that the average error significantly varies when the sampling criterion or the DBS changes. For example, for a DBS of 8.3 cm and a sampling criterion of 6.0 s, the distance is overcalculated by 2.3%. However, if the DBS is kept the same but the sampling changes to 8.0 s, the distance is undercalculated, and the error increases to an absolute value of 11.9%. Table 5-1 Granite rock DBS test - average percentage error for each combination of sampling criterion and distance between the sensors. Sampling Criteria 8.3 cm 13.3 cm 15.0 cm 2.0 s -4.4% -2.7% 12.5% 4.0 s -5.0% -5.7% 7.1% 6.0 s 2.3% -5.3% 6.2% 8.0 s -11.9% 2.9% -4.2% Table 5-2 shows the standard deviation of the percentage errors calculated from the two runs for each combination of DBS and sampling criterion. The table demonstrates that the range of errors can also significantly vary when the sampling criterion or the DBS changes. A low standard deviation suggests that the errors have a narrow range and are controlled. Conversely, a high standard deviation indicates a wide range of errors that vary unexpectedly. 73
Table 5-2 Granite rock DBS test – standard deviation of percentage error for each combination of sampling criterion and distance between the sensors.
Sampling Criteria    8.3 cm     13.3 cm    15.0 cm
2.0 s                ± 3.7%     ± 8.7%     ± 10.4%
4.0 s                ± 3.2%     ± 7.4%     ± 7.6%
6.0 s                ± 2.4%     ± 5.4%     ± 9.1%
8.0 s                ± 2.3%     ± 0.5%     ± 4.9%
To visualize the results and qualitatively analyze the range of percentage errors, Figures 5-1 to 5-4 show the effect of the DBS on the distance calculation when sampled at one image every 2.0 s, 4.0 s, 6.0 s, and 8.0 s, respectively. When sampling at one image every 2.0 s, Figure 5-1 shows that the average percentage error increases and shifts from negative to positive as the DBS increases. It can also be seen that the range of errors increases when the DBS increases. Here, a DBS of 13.3 cm has the lowest average percentage error when calculating the distance; however, the range of errors is wide. Therefore, a DBS of 13.3 cm may look good, but it is a bad choice for this specific sampling criterion.
Figure 5-1 Granite rock DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled every 2.0 s.
When sampling at one image every 4.0 s, Figure 5-2 shows that the range of errors increases when the DBS increases. When comparing the results to Figure 5-1, the average percentage error for a DBS of 8.3 cm and a DBS of 13.3 cm increases and becomes more negative. Additionally, when the sampling changes from 2.0 s to 4.0 s, the average percentage error decreases for a DBS of 15.0 cm and becomes less positive. For this sampling criterion, a DBS of 8.3 cm provides the lowest range of errors.
Figure 5-2 Granite rock DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled every 4.0 s.
When sampling at one image every 6.0 s, Figure 5-3 shows a very different trend than Figures 5-1 and 5-2. Firstly, a DBS of 8.3 cm or 15.0 cm provides a positive average percentage error, while a DBS of 13.3 cm provides a negative average percentage error. Secondly, the average percentage error for a DBS of 8.3 cm shifts to a positive value when the sampling changes from 4.0 s to 6.0 s. Thirdly, the average percentage error for a DBS of 15.0 cm decreases when the sampling changes from 4.0 s to 6.0 s. Finally, the average percentage error for a DBS of 13.3 cm stays approximately the same when the sampling changes from 4.0 s to 6.0 s. For this sampling criterion, a DBS of 8.3 cm provides the most accurate distance calculation.
Figure 5-3 Granite rock DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 6.0 s.
When sampling at one image every 8.0 s and comparing Figure 5-4 with Figure 5-3, the average percentage error became negative instead of positive for both a DBS of 8.3 cm and 15.0 cm. Additionally, for a DBS of 13.3 cm, as the sampling changes from 6.0 s to 8.0 s, the average percentage error becomes positive instead of negative. For this sampling criterion, a DBS of 13.3 cm provides the most accurate distance calculation.
Figure 5-4 Granite rock DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 8.0 s.
5.1.1.2 White-painted Wall This test has the same objectives as the granite rock test shown above. All the images matched by the algorithm in the first run were visually inspected and verified that they indeed matched. This inspection was made for all sampling criteria and all the distances between the sensors. As a result, it was demonstrated that the algorithm correctly matched the wall images. Table 5-3 shows the average percentage error calculated from the two runs for each DBS and sampling criterion combination. The results show the same behavior demonstrated by the granite rock test. The average error significantly varies when the sampling criterion or the DBS changes. For example, a DBS of 15.0 cm and a sampling criterion of 6.0 s caused the distance to be undercalculated by 1.6%. However, if the DBS is kept the same but the sampling changes to 8.0 s, the distance is overcalculated, and the error increases to an absolute value of 11%. Table 5-3 White-painted wall DBS test - average percentage error for each combination of sampling criterion and distance between the sensors. Sampling Criteria 11.0 cm 15.0 cm 22.0 cm 2.0 s 1.3% 3.0% -4.5% 4.0 s 7.4% 0.8% -3.8% 6.0 s 6.0% -1.6% -6.7% 8.0 s 1.9% 11.0% -15.5% 10.0 s -2.4% 0.5% 0.2% Table 5-4 shows the standard deviation of the percentage errors calculated from the two runs for each DBS and sampling criterion combination. The results show that the range of errors varies when the sampling criterion or the DBS changes. This observation is similar to the granite rock test; however, the range of errors is lower than the granite rock test. Comparing Tables 5-2 77
and 5-4 and focusing only on the 15.0 cm DBS shows that the range of errors has indeed decreased. A reason for this behavior could be the differences in the distances tested where the granite rock scenery tests 33.5 cm, while white-painted wall scenery tests 91.5 cm. Table 5-4 White-painted wall DBS test – standard deviation of percentage error for each combination of sampling criterion and distance between the sensors. Sampling Criteria 11.0 cm 15.0 cm 22.0 cm 2.0 s ± 2.0% ± 0.9% ± 5.1% 4.0 s ± 2.9% ± 2.1% ± 4.4% 6.0 s ± 1.9% ± 2.5% ± 2.9% 8.0 s ± 7.2% ± 3.2% ± 0.9% 10.0 s ± 5.2% ± 0.4% ± 1.1% To visualize the results and qualitatively analyze the range of percentage errors, Figures 5-5 to 5-9 show the effect of the DBS on the distance calculation when sampled at one image every 2.0 s, 4.0 s, 6.0 s, 8.0 s, and 10.0 s, respectively. When sampling at one image every 2.0 s, Figure 5-5 shows that the average percentage error is positive and increases as the DBS increases from 11.0 cm to 15.0 cm. Additionally, the average percentage error becomes negative and increases when the DBS is 22.0 cm. For this sampling criterion, a DBS of 11.0 cm has the highest accuracy when calculating the distance, and a DBS of 22.0 cm has the widest range of errors and least accuracy. 78
Figure 5-5 White-painted wall DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 2.0 s.
When sampling at one image every 4.0 s, Figure 5-6 shows a downward trend where the average percentage error shifts from positive to negative as the DBS increases. Comparing those results with Figure 5-5 shows that changing the sampling from 2.0 s to 4.0 s causes an increase in the average percentage error when the DBS is 11.0 cm. Additionally, the average percentage error for a DBS of 15.0 cm and a DBS of 22.0 cm decreases. Finally, the sign of the average percentage error for each DBS stays the same and does not change. A DBS of 15.0 cm has the highest accuracy and narrowest range of errors for this sampling criterion.
Figure 5-6 White-painted wall DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 4.0 s.
When sampling at one image every 6.0 s, Figure 5-7 shows a similar trend to Figure 5-6. When the DBS increases, the average percentage error shifts from a positive to a negative value. Comparing the results from Figure 5-6 with Figure 5-7 shows that changing the sampling from 4.0 s to 6.0 s causes the average percentage error and the range of errors for a DBS of 11.0 cm to decrease. Additionally, the average percentage error for a DBS of 15.0 cm shifted from positive to negative and had approximately the same absolute value. Finally, the average percentage error for a DBS of 22 cm increases in value but maintains the same negative sign.
Figure 5-7 White-painted wall DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 6.0 s.
When sampling at one image every 8.0 s, Figure 5-8 shows that the results significantly change compared to the previous sampling criterion. Firstly, for a DBS of 11.0 cm, the total range of percentage errors in Figure 5-7 are all positive values, while in Figure 5-8, the range of errors covers both positive and negative values. Secondly, for a DBS of 15.0 cm, the total range of percentage errors in Figure 5-7 are all negative values, while the range of percentage errors in Figure 5-8 are all positive values. Finally, for a DBS of 22.0 cm, the range of percentage errors
are all negative values in both Figures 5-7 and 5-8. Additionally, changing the sampling from 6.0 s to 8.0 s causes the distance calculated by a DBS of 15.0 cm to be less accurate.
Figure 5-8 White-painted wall DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 8.0 s.
When sampling at one image every 10.0 s, Figure 5-9 shows that a DBS of 15.0 cm and 22.0 cm provides an accurate calculation of the distance traveled. However, when the DBS is 11.0 cm, the errors increase, and the distance calculation is less accurate.
Figure 5-9 White-painted wall DBS test - the effect of the distance between the sensors on the accuracy of the distance calculation when sampled at 1 image every 10.0 s.
5.1.2 The Effect of Noise and the Accuracy of Denoising
This section discusses the effect of random salt and pepper noise on the accuracy of the image-matching code. Here, the sampling is varied, but the DBS is kept constant and is equal to 13.3 cm. The tests were made on two sceneries: the granite rock and a white-painted wall.
All the figures presented in this section use the same labeling. The blue “Original” label defines a test made without any added noise or de-noising. The orange “Noisy” label represents a test where salt and pepper noise is added to the original images at a ratio of 0.55. Then, the algorithm used the noisy images to calculate the distance. The gray “Denoised” label represents a test where the noisy images are denoised, and then the algorithm uses the denoised images to calculate the distance. Finally, the black “Measurement by tape” label represents the actual length of the scenery.
Each of the original, noisy, and denoised tests was repeated once to give a rough estimate of the average percentage error and the range of percentage errors. It is also understood that two data points do not represent the true average percentage error. However, this test only aims to understand how noise affects the algorithm. Performing statistical analyses or acquiring accurate distance calculations are not the objectives of this experiment.
5.1.2.1 Granite Rock
This section discusses the results associated with the tests made on the granite rock. Table 5-5 presents the average percentage error and the standard deviation calculated from the two runs for each test. The results show that the denoising algorithm raises the accuracy of the distance calculation when compared to the noisy calculations. Additionally, the calculation performed on the denoised images had similar average percentage errors compared to the original test and, in most cases, better. Interestingly, the denoising algorithm removed the noise
When sampling at one image every 4.0 s, Figure 5-11 shows that the error range in the noisy system increases compared to sampling every 2.0 s. Again, the error range of the results obtained from the denoising test falls within the range of the original test. Finally, the noisy system provides the least accurate calculations.
Figure 5-11 Noise test - the effect of noise and denoising for the granite rock experiment when sampled at 1 image every 4.0 s.
When sampling at one image every 6.0 s, Figure 5-12 demonstrates that the error range for the original and denoised tests shifts from a negative value to a positive value compared to sampling every 4.0 s. The denoised system had an average percentage error of 0% and a very narrow range of errors for this sampling criterion. Again, the noisy system provides the least accurate calculations.
Figure 5-12 Noise test - the effect of noise and denoising for the granite rock experiment when sampled at 1 image every 6.0 s.
When sampling at one image every 8.0 s, Figure 5-13 shows that the error range for the original and denoised tests shifted from a positive value to a negative value compared to sampling every 6.0 s. Additionally, the range of errors generated from all tests is very similar. However, the absolute value of the average percentage error in the noisy test is still higher than the values of the original and denoised tests.
Figure 5-13 Noise test - the effect of noise and denoising for the granite rock experiment when sampled at 1 image every 8.0 s.
5.1.2.2 White-painted Wall
This section discusses the results associated with the tests made on the white-painted wall. Table 5-6 presents the average percentage error and the standard deviation calculated from
the two runs for each test. The results show that the average percentage errors of the noisy test are very different from the average percentage errors of the original test. Additionally, the denoising algorithm successfully removed the noise from the image and obtained average percentage errors similar to the original test. Table 5-6 White-painted wall - noise test - average percentage error and standard deviation for noisy, denoised, and original tests. Sample Criteria Noisy (%) Denoised (%) Original (%) 2.0 s 12.2 ± 13.5 2.1 ± 2.7 1.3 ± 2.2 4.0 s 2.3 ± 0.3 9.3 ± 6.1 7.4 ± 3.1 6.0 s -3.7 ± 3.4 2.6 ± 3.8 6.0 ± 2 8.0 s -0.8 ± 11.4 2.8 ± 8.2 1.9 ± 7.9 10.0 s -2.2 ± 6.5 -2.1 ± 6.2 -2.4 ± 5.7 To visualize the results and qualitatively analyze the range of percentage errors, Figures 5-14 to 5-18 show the effect of noise and denoising on the distance calculations when sampled at one image every 2.0 s, 4.0 s, 6.0 s, 8.0 s, and 10.0 s, respectively. When sampling at one image every 2.0 s, Figure 5-14 shows that the error range of the results obtained from the original test and the denoising test are very similar. Those results were also observed in the granite rock experiment for the same sampling criterion. 86
Figure 5-14 Noise test - the effect of noise and denoising for the white-painted wall experiment when sampled at 1 image every 2.0 s.
When changing the sampling criterion from 2.0 s to 4.0 s, the average percentage error of the denoised and original tests increases (Figure 5-15). Furthermore, the average percentage error of the denoised and original tests is very similar. However, the range of the denoised tests is wider, which means that denoising at this specific sampling criterion can generate results that deviate from a non-noisy environment. Finally, the results generated from the noisy system are more accurate than the original and denoised systems. Occasionally, such behavior should be expected because noise causes images that are not similar to match. As a result of this incorrect matching process, the accuracy of the distance calculated can fluctuate unexpectedly and generate random results that may be very accurate or inaccurate.
Figure 5-15 Noise test - the effect of noise and denoising for the white-painted wall experiment when sampled at 1 image every 4.0 s.
When changing the sampling criterion from 4.0 s to 6.0 s, the average percentage error of the original test remained relatively the same but decreased for the denoised test (Figure 5-16). Also, the average percentage errors of the denoised and original tests are very close to each other. However, the range of the denoised tests is wider, and the average percentage error is lower. Finally, the original and denoised tests generated positive errors, while the noisy tests generated negative errors. This behavior again indicates that noise randomly affects the image-matching process.
Figure 5-16 Noise test - the effect of noise and denoising for the white-painted wall experiment when sampled at 1 image every 6.0 s.
When changing the sampling criterion from 6.0 s to 8.0 s, the range of errors increased for all tests (Figure 5-17). Additionally, the average percentage errors for the original and denoised tests are similar to each other and are positive, while the average percentage error for the noisy test is negative. Finally, the range of errors is very similar for the original and denoised tests but wider for the noisy test.
Figure 5-17 Noise test - the effect of noise and denoising for the white-painted wall experiment when sampled at 1 image every 8.0 s.
When changing the sampling criterion from 8.0 s to 10.0 s, Figure 5-18 shows that the range of errors decreased for all the tests but is more dominated by negative values. Interestingly, the average percentage error for all the tests is similar.
Figure 5-18 Noise test - the effect of noise and denoising for the white-painted wall experiment when sampled at 1 image every 10.0 s.