NumPy - right_shift
The numpy.right_shift() function shifts the bits in the binary representation of an array element to the right by the specified number of positions, and an equal number of 0s are appended from the left.

import numpy as np

print('Right shift 40 by two positions:')
print(np.right_shift(40, 2))
print('\n')

print('Binary representation of 40:')
print(np.binary_repr(40, width=8))
print('\n')

print('Binary representation of 10:')
print(np.binary_repr(10, width=8))
# Two bits in '00101000' are shifted to the right, and two 0s are appended from the left, giving '00001010'.

Its output would be as follows −

Right shift 40 by two positions:
10

Binary representation of 40:
00101000

Binary representation of 10:
00001010
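right_shift is not limited to scalars: it operates element-wise on arrays, and the shift counts can themselves be an array. A minimal sketch (my addition, not part of the original tutorial):

import numpy as np

arr = np.array([40, 16, 10], dtype=np.uint8)
shifts = np.array([2, 3, 1], dtype=np.uint8)

# Each element is shifted by the corresponding count: 40>>2, 16>>3, 10>>1
print(np.right_shift(arr, shifts))  # [10  2  5]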
[ { "code": null, "e": 2431, "s": 2243, "text": "The numpy.right_shift() function shift the bits in the binary representation of an array element to the right by specified positions, and an equal number of 0s are appended from the left." }, { "code": null, "e": 2777, "s": 2431, "text": "import numpy as np \n\nprint 'Right shift 40 by two positions:' \nprint np.right_shift(40,2) \nprint '\\n' \n\nprint 'Binary representation of 40:' \nprint np.binary_repr(40, width = 8) \nprint '\\n' \n\nprint 'Binary representation of 10' \nprint np.binary_repr(10, width = 8) \n# Two bits in '00001010' are shifted to right and two 0s appended from left." }, { "code": null, "e": 2810, "s": 2777, "text": "Its output would be as follows −" }, { "code": null, "e": 2924, "s": 2810, "text": "Right shift 40 by two positions:\n10\n\nBinary representation of 40:\n00101000\n\nBinary representation of 10\n00001010\n" }, { "code": null, "e": 2957, "s": 2924, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 2974, "s": 2957, "text": " Abhilash Nelson" }, { "code": null, "e": 3007, "s": 2974, "text": "\n 19 Lectures \n 8 hours \n" }, { "code": null, "e": 3042, "s": 3007, "text": " DATAhill Solutions Srinivas Reddy" }, { "code": null, "e": 3075, "s": 3042, "text": "\n 12 Lectures \n 3 hours \n" }, { "code": null, "e": 3110, "s": 3075, "text": " DATAhill Solutions Srinivas Reddy" }, { "code": null, "e": 3145, "s": 3110, "text": "\n 10 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3157, "s": 3145, "text": " Akbar Khan" }, { "code": null, "e": 3190, "s": 3157, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 3205, "s": 3190, "text": " Pruthviraja L" }, { "code": null, "e": 3238, "s": 3205, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 3245, "s": 3238, "text": " Anmol" }, { "code": null, "e": 3252, "s": 3245, "text": " Print" }, { "code": null, "e": 3263, "s": 3252, "text": " Add Notes" } ]
Getting the beat right!! Processing the ECG signal with... | by Pareeknikhil | Towards Data Science
With a plethora of Biofeedback apps emerging in the health and fitness space, wearable biosensing devices are becoming pivotal in determining the “Active” state of the user. Depending upon the scope of the application being developed, the active state could be defined as mood (happy, sad, or neutral), stress state (relaxed, stressed, or neutral), or even health state (healthy or at risk). A common physiological indicator that pins down all the aforementioned use cases is cardiovascular data, and this article aims to give you a brief summary of filtering raw cardiovascular data from BCI/fitness-tracking devices such as the Polar H10 and OpenBCI Cyton into noise-free data.

Most Biofeedback systems rely on two key physiological measurements as indicators of the active state of the user: cardiovascular activity and breathing capacity. These signals can be split further into specific metrics such as Heart Rate Variability (HRV), RR interval, tidal volume, etc., but we will stick with processing raw cardiovascular activity here (for a detailed read on indicators — here). The Electrocardiograph (ECG) is the savior here, providing the most accurate electrical activity of the heart in real-time as compared to techniques such as Photoplethysmography (PPG). Essentially, ECG signals convey information about the structure and function of the heart. We are specifically interested in capturing one biomedical signal, the QRS complex of the heart, which is useful in diagnosing cardiac arrhythmias, conduction abnormalities, ventricular hypertrophy, myocardial infarction, electrolyte derangements, and other disease states.

What is a QRS complex?

As one would guess from the name, the QRS complex includes the Q wave, R wave, and S wave, and all three waves occur in rapid succession. The QRS complex represents the electrical impulse of the heart as it beats in real-time: as the blood spreads through the ventricles of the heart, ventricular depolarization occurs and a corresponding QRS complex is registered in the ECG.

Not every QRS complex will contain all of the Q, R, and S waves. The convention is that the Q wave is always negative and that the R wave is the first positive wave of the complex. Usually, if the QRS complex only includes an upward (positive) deflection, then it is an R wave. The S wave is the first negative deflection after an R wave.

In a healthy adult and under normal circumstances, the duration of the QRS complex will be between 0.06 and 0.10 seconds.

What’s noise in ECG data?

The frequency range of a clean ECG signal is between 0.05 Hertz and 100 Hertz, but during the transmission and acquisition of the signal via the ECG monitoring device, different noises such as power line interference, baseline drift, channel noise, electromyogram/muscular movement noise, and electrode contact noise could be introduced.

Mainly two types of noise are present in a raw ECG signal (Figure 2 shows the combination of both):

High-Frequency Noise: power line noise, white Gaussian noise, electromyogram/motion noise

Low-Frequency Noise: baseline drift, electrode contact loss

Understanding each of these noises in a bit more detail helps us effectively identify the type of noise, so that we can then use the correct denoising technique to remove the artifact.
Power Line Interference: these are the harmonics of electromagnetic interference from the power line and from the electromagnetic fields of nearby electrical appliances, especially heavy machines such as air conditioners, microwaves, elevators, and washing machines. The frequency is 50 Hz or 60 Hz (60 Hz in the USA), depending upon the country of study.

White Gaussian Noise: this noise is usually embedded in the ECG signal, and determining its source is difficult since it is introduced at various levels and is random in nature. It is very similar in nature to channel noise.

Electromyogram/Motion Noise: this type of noise is generated by the electrical activity of muscles, and also by changes in electrode-skin impedance, which can in turn result from changes in skin temperature, humidity, etc.

Baseline Drift: this is the most common noise and causes the greatest difficulty in peak detection during analysis of the ECG signal. It is low-frequency noise, usually close to 1 Hertz, as it is attributed to respiration and swift body movements.

Electrode Contact Loss: this usually leads to a flat signal and results from the loss of contact between the electrode and the skin, disconnecting the ECG monitoring device from the body.

Which Denoising techniques can I use?

To eliminate both of these types of artifacts, low-frequency and high-frequency noise, one can use two types of filters while cleaning the data:

IIR Notch Filters: usually known as notch filters, these are the simplest ones and are typically used to remove power line interference or a motion artifact at a specific, narrow frequency band. They work really well with fixed-frequency noise sources.

FIR / Band-pass Filters: also known as windowing or band-pass filters, these take a high cut-off and a low cut-off frequency, and everything beyond the specified “band” is attenuated. These filters are very stable and work in the range of 1 Hertz to 100 Hertz, making them a good fit for ECG data. The most common filter used in this role is the Butterworth filter (which, strictly speaking, is an IIR design rather than an FIR one).

From our understanding of physiological responses registered in adults, a healthy resting heart rate is 60 to 80 bpm; heart rate in athletes can be as low as 30 bpm, and as high as 120 bpm in hypertension. Therefore our frequency band is between 0.5 Hertz and 2 Hertz.

One can also pass device parameters such as the sampling frequency into the Butterworth filter. For example, the Polar H10 has a sampling frequency of 130 Hertz as suggested in the official documentation (though the value hovers around 120 Hz when using the device in real-time), and the OpenBCI Cyton has a sampling frequency of around 250 Hertz as indicated in the official documentation, which is mostly consistent throughout its use. Since in our case the sampling frequencies are many-fold higher than the heart-response frequencies, we can easily capture the heartbeats of users with a band of 0.5 Hertz to 2 Hertz.

Where do I find the code?

All of this can be achieved very easily with a few lines of code in Python, and with the help of libraries such as SciPy, NumPy, and Pandas it's mostly a walk in the park.
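Before the full snippet below, a quick sanity check on that 0.5 to 2 Hertz band (a minimal sketch, my addition): frequency in Hertz is simply beats per minute divided by 60, and the two values reappear as the cutoff parameters in the code that follows.

low_bpm, high_bpm = 30, 120       # the extremes quoted above
cutoff_high = low_bpm / 60        # 0.5 Hz, the high-pass cutoff
cutoff_low = high_bpm / 60        # 2.0 Hz, the low-pass cutoff
print(cutoff_high, cutoff_low)    # 0.5 2.0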
A sample code snippet is available below for reference (check out my Github repo for the ECG data stream and detailed filtering — here):

from scipy import signal
from scipy.signal import butter, iirnotch, lfilter
import numpy as np
import matplotlib.pyplot as plt

## A high pass filter allows frequencies higher than a cut-off value
def butter_highpass(cutoff, fs, order=5):
    nyq = 0.5*fs
    normal_cutoff = cutoff/nyq
    b, a = butter(order, normal_cutoff, btype='high', analog=False, output='ba')
    return b, a

## A low pass filter allows frequencies lower than a cut-off value
def butter_lowpass(cutoff, fs, order=5):
    nyq = 0.5*fs
    normal_cutoff = cutoff/nyq
    b, a = butter(order, normal_cutoff, btype='low', analog=False, output='ba')
    return b, a

## A notch filter attenuates a narrow band around a single frequency
## (note: relies on the module-level fs defined below)
def notch_filter(cutoff, q):
    nyq = 0.5*fs
    freq = cutoff/nyq
    b, a = iirnotch(freq, q)
    return b, a

def highpass(data, fs, order=5):
    b, a = butter_highpass(cutoff_high, fs, order=order)
    x = lfilter(b, a, data)
    return x

def lowpass(data, fs, order=5):
    b, a = butter_lowpass(cutoff_low, fs, order=order)
    y = lfilter(b, a, data)
    return y

def notch(data, powerline, q):
    b, a = notch_filter(powerline, q)
    z = lfilter(b, a, data)
    return z

## Chain all three stages: high-pass (baseline drift), low-pass
## (high-frequency noise), then notch (power line interference)
def final_filter(data, fs, order=5):
    b, a = lfilter_coeffs = butter_highpass(cutoff_high, fs, order=order)
    x = lfilter(b, a, data)
    d, c = butter_lowpass(cutoff_low, fs, order=order)
    y = lfilter(d, c, x)
    f, e = notch_filter(powerline, 30)
    z = lfilter(f, e, y)
    return z

ecg_signal = np.loadtxt('df_ecg_Polar.csv', skiprows=0)
fs = 1000

## Order of five works well with ECG signals
cutoff_high = 0.5
cutoff_low = 2
powerline = 60
order = 5

plt.figure(1)
ax1 = plt.subplot(211)
plt.plot(ecg_signal)
ax1.set_title("Raw ECG signal")

filter_signal = final_filter(ecg_signal, fs, order)
ax2 = plt.subplot(212)
plt.plot(filter_signal)
ax2.set_title("Clean ECG signal")
plt.show()

What are the results I can expect?

Using the above-mentioned filtering technique, one can produce very clean signals with the prominent QRS complex visible. For example, I used the above technique on raw ECG signals recorded from the Polar H10 and the OpenBCI Cyton board, with the following results:

OpenBCI Cyton board: in the OpenBCI raw signal one can observe drift as well as artifacts. Both baseline correction and artifact removal are successfully achieved with the Butterworth filter, with a ❤-looking QRS complex.

Polar H10: the Polar H10 already filters the data on the device before sending it across, so there is only a bit of baseline correction around the zero mean; for the most part, the filter output is the same as the raw signal input.

Where can I learn more?

Building a live data stream for Polar and OpenBCI boards is vital for building real-time applications, and it is a fairly simple task given a little time and sufficient documentation. Luckily we have both!

How to create a real-time data-stream with Polar devices — ECG and Accelerometer
How to create a real-time data-stream with OpenBCI boards — ECG

References:
1. Wiki, QRS Complex, Wikipedia
2. National Centre for Adaptive Neurotechnologies (NCAN), Contributions: OpenBCI Module
3. Denisse Castaneda, Aibhlin Esparza, Mohammad Ghamari, Cinna Soltanpur, and Homer Nazera, A review on wearable photoplethysmography sensors and their potential future applications in health care (2019), NCBI
4. ACLS Medical Training, The Basics of ECG
5. Aswathy Velayudhan, Soniya Peter, Noise Analysis and Different Denoising Techniques of ECG Signal - A Survey (2016), IOSR-JECE
6. Mohamed Elgendi, Carlo Menon, Assessing Anxiety Disorders Using Wearable Devices: Challenges and Future Directions (2019), Brain Sciences
7. Fred Shaffer, J. P. Ginsberg, An Overview of Heart Rate Variability Metrics and Norms (2017), NCBI
8. Inma Mohino-Herranz, Roberto Gil-Pita, Manuel Rosa-Zurera, Fernando Seoane, Activity Recognition Using Wearable Physiological Measurements: Selection of Features from a Comprehensive Literature Study (2019), NCBI
[ { "code": null, "e": 864, "s": 172, "text": "With a plethora of Biofeedback apps emerging in the health and fitness space, wearable biosensing devices are becoming pivotal in determining the “Active” state of the users. The active state of the user could be defined as the mood; between happy-sad-neutral or stress state; between relaxed-stressed-neutral, or even health state; healthy-risky, depending upon the scope of the application being developed. But a common physiological indicator that pins all the aforementioned use-cases is cardiovascular data. And this article aims to give you a brief summary on filtering raw cardiovascular data from BCI/Fitness tracking devices such as Polar H10 and OpenBCI-Cyton into noise-free data." }, { "code": null, "e": 1836, "s": 864, "text": "Most of the Biofeedback systems rely on two key physiological measurements as indicators of the active state of the user: Cardiovascular activity and Breathing capacity. These signals could further be split into specific metrics such as Heart Rate Variability (HRV), RR interval, Tidal Volume, etc. but we would stick with processing raw cardiovascular activity here (For a detailed read on indicators — here). And Electrocardiograph (ECG) is the savior here in providing the most accurate electrical activity of the heart in real-time as compared to techniques such as Photoplethysmography (PPG). Essentially ECG signals convey information about the structure and function of the heart and further, we are specifically interested in capturing a biomedical signal — the QRS complex of the heart which is useful in diagnosing cardiac arrhythmias, conduction abnormalities, ventricular hypertrophy, myocardial infarction, electrolyte derangements, and other disease states." }, { "code": null, "e": 1859, "s": 1836, "text": "What is a QRS complex?" }, { "code": null, "e": 2249, "s": 1859, "text": "As one would guess from the name, the QRS complex includes the Q wave, R wave, and S wave and all these three waves occur in rapid succession. The QRS complex represents the electrical impulse of the heart as it beats in real-time. And as the blood spreads through the ventricles of the heart, occurs the ventricular depolarization and a corresponding QRS complex is registered in the ECG." }, { "code": null, "e": 2594, "s": 2249, "text": "Although not every QRS complex will contain Q, R, and S waves and the convention is such that the Q wave is always negative and that the R wave is the first positive wave of the complex. usually, if the QRS complex only includes an upward (positive) deflection, then it is an R wave. The S wave is the first negative deflection after an R wave." }, { "code": null, "e": 2716, "s": 2594, "text": "In a healthy adult and under normal circumstances, the duration of the QRS complex will be between 0.06 and 0.10 seconds." }, { "code": null, "e": 2742, "s": 2716, "text": "Share URLCtrl + C to copy" }, { "code": null, "e": 2764, "s": 2742, "text": "EmbedCtrl + C to copy" }, { "code": null, "e": 2766, "s": 2764, "text": "" }, { "code": null, "e": 2768, "s": 2766, "text": "" }, { "code": null, "e": 2770, "s": 2768, "text": "" }, { "code": null, "e": 2796, "s": 2770, "text": "What’s noise in ECG data?" 
}, { "code": null, "e": 3128, "s": 2796, "text": "The frequency range of a clean ECG signal is between 0.05 Hertz to 100 Hertz but during the transmission and acquisition of the signal via the ECG monitoring device, different noises such as power line interference, baseline drift, channel noise, Eletcromyogram/Muscular movement noise, electrode contact noise could be introduced." }, { "code": null, "e": 3229, "s": 3128, "text": "Mainly two types of noises are present in a raw ECG signal (Figure 2 shows the combination of both):" }, { "code": null, "e": 3319, "s": 3229, "text": "High-Frequency Noise: Power line Noise, White Gaussian Noise, Eletcromyogram/Motion Noise" }, { "code": null, "e": 3379, "s": 3319, "text": "Low-Frequency Noise: Baseline drift, Electrode contact loss" }, { "code": null, "e": 3570, "s": 3379, "text": "Understanding each of these noises a bit in detail could help us to effectively identify the type of noise and then we can further use the correct denoising technique to remove the artifact." }, { "code": null, "e": 3951, "s": 3570, "text": "Power Line Interference: These are the harmonics of the Electromagnetic interference through the power line and the electromagnetic field of the nearby electrical appliances. Especially the heavy power line machines such as Air-conditioners, microwaves, elevators, and washing machines, etc. The frequency range is between 50 Hz to 60 Hz (USA) depending upon the country of study." }, { "code": null, "e": 4191, "s": 3951, "text": "White Gaussian Noise: This a noise that is embedded in the ECG signal usually and the determination of the source is difficult since it is introduced at various levels and is random in nature. It is very similar to channel noise in nature." }, { "code": null, "e": 4445, "s": 4191, "text": "Electromyogram/Motion Noise: This type of noise is generated by the electrical activity of the muscle and also due to the change in the electrode-skin impedance change which could further be resulted due to the change in skin temperature, humidity, etc." }, { "code": null, "e": 4701, "s": 4445, "text": "Baseline Drift: This is the most common noise and causes greater difficulty in peak detection during the analysis of the ECG signal. It is low-frequency noise and is usually close to 1 Hertz as it is attributed due to respiration and swift body movements." }, { "code": null, "e": 4908, "s": 4701, "text": "Electrode Contact Loss: This usually leads to a flat signal and is resulted due to the loss of contact between the electrode and the skin and therefore disconnecting the ECG monitoring device from the body." }, { "code": null, "e": 4946, "s": 4908, "text": "Which Denoising techniques can I use?" }, { "code": null, "e": 5094, "s": 4946, "text": "To eliminate both of these types of artifacts: low frequency and high-frequency noises, one could use two types of filters while cleaning the data:" }, { "code": null, "e": 5352, "s": 5094, "text": "IIR Notch Filters: Usually know as Notch filters, these are the simplest ones and are usually used to remove power line interference and or a motion artifact at a specific or narrow frequency spectrum. Work really well with the fixed-frequency noise source." }, { "code": null, "e": 5742, "s": 5352, "text": "FIR Filters: Also known as the Windowing or Band-pass filters, where high cut-off and low cut-off frequencies are specified and then everything else beyond the specified “band” is attenuated. 
These filters are very stable and work in the range of 1 Hertz to 100 Hertz thus making them fit for the application on ECG data. The most common type amongst FIR filters is the Butterworth filter." }, { "code": null, "e": 6014, "s": 5742, "text": "From our understanding of physiological responses registered in adults, a healthy resting heart rate is 60 to 80 bpm; heart rate in athletes could be as low as 30 bpm and as high as 120 bpm in hypertension. Therefore our band of frequency is between 0.5 Hertz to 2 Hertz." }, { "code": null, "e": 6655, "s": 6014, "text": "One could also specify parameters such as the sampling frequency of the device as a parameter into the Butterworth filter. For example, Polar H10 has a sampling frequency rate of 130 Hertz as suggested in the official documentation; though the value ranges around 120 Hz when using the device in real-time, and OpenBCI — Cyton has a sampling frequency of around 250 Hertz as indicated in the official documentation and is mostly consistent throughout its use. Since in our case, sampling frequencies are many folds higher than the heart response frequencies we can easily capture the heartbeats in users with a band of 0.5 Hertz to 2 Hertz." }, { "code": null, "e": 6681, "s": 6655, "text": "Where do I find the code?" }, { "code": null, "e": 6988, "s": 6681, "text": "All of this could be very easily achieved with a few lines of code in Python and with the help of libraries such as Scipy, Numpy, and Pandas, it’s mostly a walk in the park. A sample code snippet is available below for reference (Check out my Github repo for ECG Data Stream and detailed filtering — here):" }, { "code": null, "e": 8729, "s": 6988, "text": "from scipy import signalfrom scipy.signal import butter, iirnotch, lfilterimport numpy as npimport matplotlib.pyplot as plt## A high pass filter allows frequencies higher than a cut-off valuedef butter_highpass(cutoff, fs, order=5): nyq = 0.5*fs normal_cutoff = cutoff/nyq b, a = butter(order, normal_cutoff, btype='high', analog=False, output='ba') return b, a## A low pass filter allows frequencies lower than a cut-off valuedef butter_lowpass(cutoff, fs, order=5): nyq = 0.5*fs normal_cutoff = cutoff/nyq b, a = butter(order, normal_cutoff, btype='low', analog=False, output='ba') return b, adef notch_filter(cutoff, q): nyq = 0.5*fs freq = cutoff/nyq b, a = iirnotch(freq, q) return b, adef highpass(data, fs, order=5): b,a = butter_highpass(cutoff_high, fs, order=order) x = lfilter(b,a,data) return xdef lowpass(data, fs, order =5): b,a = butter_lowpass(cutoff_low, fs, order=order) y = lfilter(b,a,data) return ydef notch(data, powerline, q): b,a = notch_filter(powerline,q) z = lfilter(b,a,data) return zdef final_filter(data, fs, order=5): b, a = butter_highpass(cutoff_high, fs, order=order) x = lfilter(b, a, data) d, c = butter_lowpass(cutoff_low, fs, order = order) y = lfilter(d, c, x) f, e = notch_filter(powerline, 30) z = lfilter(f, e, y) return zecg_singal = np.loadtxt('df_ecg_Polar.csv', skiprows=0)fs = 1000## Order of five works well with ECG signalscutoff_high = 0.5cutoff_low = 2powerline = 60order = 5plt.figure(1)ax1 = plt.subplot(211)plt.plot(ecg_signal)ax1.set_title(\"Raw ECG signal\")filter_signal = final_filter(ecg_signal, fs, order)ax2 = plt.subplot(212)plt.plot(filter_signal)ax2.set_title(\"Clean ECG signal\")plt.show()" }, { "code": null, "e": 8764, "s": 8729, "text": "What are the results I can expect?" 
}, { "code": null, "e": 9036, "s": 8764, "text": "Using the above-mentioned filtering technique, one can produce very clean signals with the prominent QRS complex visible. For example, I used the above technique on raw ECG signal recorded from Polar-H10 and OpenBCI-cyton board and the results are illustrated as follows:" }, { "code": null, "e": 9258, "s": 9036, "text": "OpenBCI — Cytonboard: In OpenBCI raw signal one could observe the drift as well as artifacts. Both baseline correction and artifact removal are successfully achieved with the Butterworth filter with ❤ looking QRS complex." }, { "code": null, "e": 9493, "s": 9258, "text": "Polar H10: Polar H10 already filters the data on the device and sends it across, therefore there is only a bit of baseline correction around the mean zero but for the most part of it, the filter output is the same as raw signal input." }, { "code": null, "e": 9517, "s": 9493, "text": "Where can I learn more?" }, { "code": null, "e": 9719, "s": 9517, "text": "Building a live data-stream for Polar and OpenBCI boards is vital in building real-time applications and is a fairly simple task with a little time and sufficient documentation. Luckily we have both !!" }, { "code": null, "e": 9800, "s": 9719, "text": "How to create a real-time data-stream with Polar devices — ECG and Accelerometer" }, { "code": null, "e": 9864, "s": 9800, "text": "How to create a real-time data-stream with OpenBCI boards — ECG" }, { "code": null, "e": 9876, "s": 9864, "text": "References:" } ]
Loading Model for Predictions
To make predictions on unseen data, you first need to load the trained model into memory. This is done using the following commands −

from keras.models import load_model

model = load_model('./models/handwrittendigitrecognition.h5')

Note that we are simply loading the .h5 file into memory. This sets up the entire neural network in memory, along with the weights assigned to each layer.

Now, to do your predictions on unseen data, load the data (it can be one or more items) into memory. Preprocess the data to meet the input requirements of our model, just as you did for your training and test data above. After preprocessing, feed it to your network. The model will output its prediction.
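For instance, here is a minimal end-to-end sketch of the prediction step. It assumes, purely as a stand-in, that the model was trained on flattened 28x28 grayscale digit images; the preprocessing must match whatever was actually used during training, and the random sample below is only a placeholder for real unseen data:

import numpy as np
from keras.models import load_model

model = load_model('./models/handwrittendigitrecognition.h5')

# Hypothetical unseen sample: a 28x28 grayscale image with values in [0, 1]
sample = np.random.rand(28, 28).astype('float32')

# Flatten and batch it exactly as the training data was preprocessed
x = sample.reshape(1, 784)

probabilities = model.predict(x)   # one row of class probabilities
predicted_digit = int(np.argmax(probabilities, axis=1)[0])
print('Predicted digit:', predicted_digit)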
[ { "code": null, "e": 2088, "s": 1959, "text": "To predict the unseen data, you first need to load the trained model into the memory. This is done using the following command −" }, { "code": null, "e": 2152, "s": 2088, "text": "model = load_model ('./models/handwrittendigitrecognition.h5')\n" }, { "code": null, "e": 2306, "s": 2152, "text": "Note that we are simply loading the .h5 file into memory. This sets up the entire neural network in memory along with the weights assigned to each layer." }, { "code": null, "e": 2613, "s": 2306, "text": "Now, to do your predictions on unseen data, load the data, let it be one or more items, into the memory. Preprocess the data to meet the input requirements of our model as what you did on your training and test data above. After preprocessing, feed it to your network. The model will output its prediction." }, { "code": null, "e": 2647, "s": 2613, "text": "\n 87 Lectures \n 11 hours \n" }, { "code": null, "e": 2664, "s": 2647, "text": " Abhilash Nelson" }, { "code": null, "e": 2701, "s": 2664, "text": "\n 106 Lectures \n 13.5 hours \n" }, { "code": null, "e": 2718, "s": 2701, "text": " Abhilash Nelson" }, { "code": null, "e": 2751, "s": 2718, "text": "\n 28 Lectures \n 4 hours \n" }, { "code": null, "e": 2768, "s": 2751, "text": " Abhilash Nelson" }, { "code": null, "e": 2801, "s": 2768, "text": "\n 58 Lectures \n 8 hours \n" }, { "code": null, "e": 2817, "s": 2801, "text": " Soumyadeep Dey" }, { "code": null, "e": 2852, "s": 2817, "text": "\n 59 Lectures \n 2.5 hours \n" }, { "code": null, "e": 2863, "s": 2852, "text": " Mike West" }, { "code": null, "e": 2899, "s": 2863, "text": "\n 128 Lectures \n 5.5 hours \n" }, { "code": null, "e": 2915, "s": 2899, "text": " TELCOMA Global" }, { "code": null, "e": 2922, "s": 2915, "text": " Print" }, { "code": null, "e": 2933, "s": 2922, "text": " Add Notes" } ]
How to compute union of JavaScript arrays ? - GeeksforGeeks
05 Jan, 2022

Given two arrays containing array elements, the task is to compute the union of both arrays with the help of JavaScript. There are two methods to solve this problem, which are discussed below.

Approach 1:

Declare both arrays, named A and B.
Use the spread operator to concatenate both arrays and store the result in a Set.
A Set automatically removes duplicate elements.
After the duplicates are removed, what remains is the union of the array elements.

Example: This example implements the above approach.

<!DOCTYPE HTML>
<html>

<head>
    <title>
        How to compute union of JavaScript arrays ?
    </title>
</head>

<body style="text-align:center;">
    <h1 style="color:green;" id="h1">
        GeeksForGeeks
    </h1>

    <p id="GFG_UP" style="font-size: 15px; font-weight: bold;">
    </p>

    <button onclick="GFG_Fun()">
        click here
    </button>

    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>

    <script>
        var up = document.getElementById('GFG_UP');
        var down = document.getElementById('GFG_DOWN');
        var A = [7, 2, 6, 4, 5];
        var B = [1, 6, 4, 9];

        up.innerHTML = "Click on the button to compute"
            + " the union of arrays.<br>"
            + "Array1 - [" + A + "]<br>Array2 - [" + B + "]";

        function GFG_Fun() {
            // Spread both arrays into a Set to drop duplicates,
            // then spread the Set back into an array
            var union = [...new Set([...A, ...B])];
            down.innerHTML = "Union is = " + union;
        }
    </script>
</body>

</html>

Output:

Before clicking on the button:
After clicking on the button:

Approach 2:

Store all elements as keys of a JavaScript object.
Storing elements in a JavaScript object this way removes the duplicate elements.
At the end, push every remaining element into a JavaScript array.

Example: This example implements the above approach.

<!DOCTYPE HTML>
<html>

<head>
    <title>
        How to compute union of JavaScript arrays ?
    </title>
</head>

<body style="text-align:center;">
    <h1 style="color:green;" id="h1">
        GeeksForGeeks
    </h1>

    <p id="GFG_UP" style="font-size: 15px; font-weight: bold;">
    </p>

    <button onclick="GFG_Fun()">
        click here
    </button>

    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>

    <script>
        var up = document.getElementById('GFG_UP');
        var down = document.getElementById('GFG_DOWN');
        var A = [7, 2, 6, 4, 5];
        var B = [1, 6, 4, 9];

        up.innerHTML = "Click on the button to compute"
            + " the union of arrays.<br>"
            + "Array1 - [" + A + "]<br>Array2 - [" + B + "]";

        function computeUnion(a, b) {
            // Use object keys to deduplicate values from both arrays
            var object = {};
            for (var i = a.length - 1; i >= 0; --i)
                object[a[i]] = a[i];
            for (var i = b.length - 1; i >= 0; --i)
                object[b[i]] = b[i];

            var ret = [];
            for (var i in object) {
                if (object.hasOwnProperty(i))
                    ret.push(object[i]);
            }
            return ret;
        }

        function GFG_Fun() {
            down.innerHTML = "Union is = " + computeUnion(A, B);
        }
    </script>
</body>

</html>

Output:

Before clicking on the button:
After clicking on the button:
[ { "code": null, "e": 24572, "s": 24544, "text": "\n05 Jan, 2022" }, { "code": null, "e": 24780, "s": 24572, "text": "Given two arrays containing array elements and the task is to compute the union of both arrays with the help of JavaScript. There are two methods to solve this problem which are discussed below:Approach 1: " }, { "code": null, "e": 24817, "s": 24780, "text": "Declare both array named as A and B." }, { "code": null, "e": 24881, "s": 24817, "text": "Use spread operator to concat both array and store it into set." }, { "code": null, "e": 24924, "s": 24881, "text": "Since, set removes the duplicate elements." }, { "code": null, "e": 25007, "s": 24924, "text": "After removing the duplicate elements, it will display the union of array element." }, { "code": null, "e": 25060, "s": 25007, "text": "Example: This example implements the above approach." }, { "code": null, "e": 25065, "s": 25060, "text": "html" }, { "code": "<!DOCTYPE HTML><html> <head> <title> How to compute union of JavaScript arrays ? </title></head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" id = \"h1\"> GeeksForGeeks </h1> <p id = \"GFG_UP\" style = \"font-size: 15px; font-weight: bold;\"> </p> <button onclick = \"GFG_Fun()\"> click here </button> <p id = \"GFG_DOWN\" style = \"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var up = document.getElementById('GFG_UP'); var down = document.getElementById('GFG_DOWN'); var A = [7, 2, 6, 4, 5]; var B = [1, 6, 4, 9]; up.innerHTML = \"Click on the button to compute\" + \" the union of arrays.<br>\" + \"Array1 - [\" + A + \"]<br>Array2 - [\" + B + \"]\"; function GFG_Fun() { var union = [...new Set([...A, ...B])]; down.innerHTML = \"Union is = \" + union; } </script> </body></html>", "e": 26117, "s": 25065, "text": null }, { "code": null, "e": 26126, "s": 26117, "text": "Output: " }, { "code": null, "e": 26157, "s": 26126, "text": "Before clicking on the button:" }, { "code": null, "e": 26187, "s": 26157, "text": "After clicking on the button:" }, { "code": null, "e": 26201, "s": 26187, "text": "Approach 2: " }, { "code": null, "e": 26255, "s": 26201, "text": "Here, all elements are stored in a JavaScript object." }, { "code": null, "e": 26336, "s": 26255, "text": "Storing elements in JavaScript Object is going to remove the duplicate elements." }, { "code": null, "e": 26388, "s": 26336, "text": "At the end, push all element in a JavaScript array." }, { "code": null, "e": 26441, "s": 26388, "text": "Example: This example implements the above approach." }, { "code": null, "e": 26446, "s": 26441, "text": "html" }, { "code": "<!DOCTYPE HTML><html> <head> <title> How to compute union of JavaScript arrays ? 
</title></head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" id = \"h1\"> GeeksForGeeks </h1> <p id = \"GFG_UP\" style = \"font-size: 15px; font-weight: bold;\"> </p> <button onclick = \"GFG_Fun()\"> click here </button> <p id = \"GFG_DOWN\" style = \"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var up = document.getElementById('GFG_UP'); var down = document.getElementById('GFG_DOWN'); var A = [7, 2, 6, 4, 5]; var B = [1, 6, 4, 9]; up.innerHTML = \"Click on the button to compute\" + \" the union of arrays.<br>\" + \"Array1 - [\" + A + \"]<br>Array2 - [\" + B + \"]\"; function computeUnion(a, b) { var object = {}; for (var i = a.length-1; i >= 0; -- i) object[a[i]] = a[i]; for (var i = b.length-1; i >= 0; -- i) object[b[i]] = b[i]; var ret = [] for (var i in object) { if (object.hasOwnProperty(i)) ret.push(object[i]); } return ret; } function GFG_Fun() { down.innerHTML = \"Union is = \" + computeUnion(A, B); } </script></body> </html>", "e": 27933, "s": 26446, "text": null }, { "code": null, "e": 27942, "s": 27933, "text": "Output: " }, { "code": null, "e": 27973, "s": 27942, "text": "Before clicking on the button:" }, { "code": null, "e": 28003, "s": 27973, "text": "After clicking on the button:" }, { "code": null, "e": 28017, "s": 28003, "text": "sumitgumber28" }, { "code": null, "e": 28034, "s": 28017, "text": "javascript-array" }, { "code": null, "e": 28045, "s": 28034, "text": "JavaScript" }, { "code": null, "e": 28062, "s": 28045, "text": "Web Technologies" }, { "code": null, "e": 28089, "s": 28062, "text": "Web technologies Questions" }, { "code": null, "e": 28187, "s": 28089, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28196, "s": 28187, "text": "Comments" }, { "code": null, "e": 28209, "s": 28196, "text": "Old Comments" }, { "code": null, "e": 28270, "s": 28209, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 28342, "s": 28270, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 28387, "s": 28342, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 28439, "s": 28387, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 28480, "s": 28439, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 28522, "s": 28480, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 28555, "s": 28522, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 28617, "s": 28555, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 28660, "s": 28617, "text": "How to fetch data from an API in ReactJS ?" } ]
How to handle large datasets in Python with Pandas and Dask | by Filip Ciesielski | Towards Data Science
Python data scientists often use Pandas for working with tables. While Pandas is perfect for small to medium-sized datasets, larger ones are problematic. In this article, I show how to deal with large datasets using Pandas together with Dask for parallel computing — and when to offload even larger problems to SQL if all else fails.

Pandas is a wonderful library for working with data tables. Its dataframe construct provides a very powerful workflow for data analysis, similar to the R ecosystem. It's fairly quick, rich in features, and well-documented. In fact, it has earned its place as a fundamental tool used by data scientists who follow the Python way.

However, in the life of a data-scientist-who-uses-Python-instead-of-R there always comes a time when the laptop throws a tantrum, refuses to do any more work, and freezes spectacularly.

As great as it is, Pandas achieves its speed by holding the dataset in RAM when performing calculations. That's why it comes with a certain limitation (which also depends on the machine specs, of course). The issue often originates in an unforeseen expansion of a dataframe during an overly-complex transformation or a blind import of a table from a file. That can be very frustrating.

One solution would be to limit the data to a smaller subset — for example, by probing every nth value in a source. But often that's not enough.

But what if limiting data isn't an option?

Luckily, there's a way to address this issue. The most common fix is using Pandas alongside another solution — like a relational SQL database, MongoDB, ElasticSearch, or something similar. At Sunscrapers, we definitely agree with that approach. But you can sometimes deal with larger-than-memory datasets in Python using Pandas and another handy open-source Python library, Dask.

Dask is a robust Python library for performing distributed and parallel computations. It also provides tooling for dynamic scheduling of Python-defined tasks (something like Apache Airflow). It's tightly integrated with NumPy and provides Pandas with dataframe-equivalent structures — the dask.dataframes — that are based on lazy loading and can be used to perform dataframe operations in chunks and in parallel. It also automatically supports grouping by performing data shuffling under the hood.

This article outlines a few handy tips and tricks to help developers mitigate some of the showstoppers when working with large datasets in Python.

To demonstrate the power of Pandas/Dask, I chose an open-source dataset from Wikipedia about the sources of the site's visitors. You can get the 'clickstream' tables (in .tsv) here.
The clickstream data contains 4 main columns:

'prev' — the site from which the visitor came (I renamed it to 'coming_from')
'curr' — the target article page (renamed to 'article')
'type' — this column describes the type of referral, for example, an external link (I renamed it to 'referrer_type')
'n' — the number of visits

Next, I came up with a few questions to play around with my dataset and check whether the combination of Pandas and Dask does its job:

1. Which links do people click on most often in a given article?
2. What are the most popular articles users access from all the external search engines?
3. What percentage of visitors to a given article page have clicked on a link to get there?
4. What is the most common source of visits for each article? (displayed in a single table)

The dataset size is 1.4 Gb, so it carries a significant risk of memory overload. That's why I split the study into two parts. First, I implemented the analysis on a limited data subset using just the Pandas library. Then I attempted to do exactly the same on the full set using Dask.

Ok, let's move on to the analysis.

Let's grab our data for the analysis:

if [ ! -d "./data" ]
then
    mkdir ./data
    echo 'created folder ./data'
fi

# get the data if not present:
if [ ! -f "./data/clickstream_data.tsv" ]; then
    if [ ! -f "./data/clickstream_data.tsv.gz" ]
    then
        wget https://dumps.wikimedia.org/other/clickstream/2018-12/clickstream-enwiki-2018-12.tsv.gz -O ./data/clickstream_data.tsv.gz
    fi
    gunzip ./data/clickstream_data.tsv.gz
fi

Now let's have a look at what kind of data we have here and import it into the dataframe. The very first memory optimization step we can perform already at this point (assuming we know our table structure by now) is specifying the column data types during the import (via the dtype= input parameter). That way, we can force Pandas to convert some values into types with a significantly lower memory footprint. That may not make much sense if you're dealing with a few thousand rows, but it will make a noticeable difference in a few millions! For example, if you know that a column should only have positive integers, use an unsigned integer type (uint32) instead of the regular int type (or worse — float, which may sometimes happen automatically).

import pandas as pd

df = pd.read_csv('data/clickstream_data.tsv',
    delimiter='\t',
    names=['coming_from', 'article', 'referrer_type', 'n'],
    dtype={
        'referrer_type': 'category',
        'n': 'uint32'})

Finally, let's limit the data frame size to the first 100k rows for the sake of speed. Note that this is usually a bad idea; when sampling a subset, it's far more appropriate to sample every nth row to get as uniform a sampling as possible. But since we're only using it to demonstrate the analysis process, we're not going to bother:

df = df.iloc[:100000]
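Coming back to the dtype point above for a moment, here is a rough, illustrative sketch (my addition, not part of the original analysis) of what the uint32 cast saves on a million-row column:

import numpy as np
import pandas as pd

s = pd.Series(np.random.randint(0, 1000, size=1_000_000))  # defaults to int64
print(s.memory_usage(deep=True))                   # ~8 MB
print(s.astype('uint32').memory_usage(deep=True))  # ~4 MB, half the footprint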
Question 1: Which links do people click on most often in a given article?

To answer this question, we need to create a table where we can see the aggregated sum of visitors per article and per source of origin (the coming_from column). So, let's aggregate the table over the article and coming_from columns, sum up the 'n' values, and then order the rows according to the 'n' sums. Here's how we approach it in Pandas:

top_links = df.loc[
    df['referrer_type'].isin(['link']),
    ['coming_from', 'article', 'n']]\
    .groupby(['coming_from', 'article'])\
    .sum()\
    .sort_values(by='n', ascending=False)

And the resulting table:

Now let's recreate this data using the Dask library.

from dask import dataframe as dd

dfd = dd.read_csv(
    'data/clickstream_data.tsv',
    delimiter='\t',
    names=['coming_from', 'article', 'referrer_type', 'n'],
    dtype={
        'referrer_type': 'category',
        'n': 'uint32'},
    blocksize=64000000  # = 64 Mb chunks
)

Note that the read_csv function is pretty similar to the Pandas one, except here we specify the byte size per chunk. We perform the aggregation logic, which is also almost identical to Pandas:

top_links_grouped_dask = dfd.loc[
    dfd['referrer_type'].isin(['link']),
    ['coming_from', 'article', 'n']]\
    .groupby(['coming_from', 'article'])

That won't do any calculations yet; top_links_grouped_dask will be a Dask delayed dataframe object. We could then launch the computation via the .compute() method, but we don't want to clog our memory, so let's save it directly to hard drive. We will use the hdf5 file format to do that. Let's declare the hdf5 store then:

store = pd.HDFStore('./data/clickstream_store.h5')

And compute the data frame into it. Note that ordering column values with Dask isn't that easy (after all, the data is read one chunk at a time), so we cannot use the sort_values() method like we did in the Pandas example. Instead, we need to use the nlargest() Dask method and specify the number of top values we'd like to determine:

top_links_dask = top_links_grouped_dask.sum().nlargest(20, 'n')

It too returns a delayed Dask object, so to finally compute it (and save it to the store) we run the following:

store.put('top_links_dask',
    top_links_dask.compute(),
    format='table',
    data_columns=True)

In this case, the result is different from the values in the Pandas example since here we work on the entire dataset, not just the first 100k rows:

Question 2: What are the most popular articles users access from all the external search engines?

That one is easy. All we need to do is to filter out the rows that contain the 'external' referrer_type and 'other-search' coming_from values:

external_searches = df.loc[
    (df['referrer_type'].isin(['external'])) &
    (df['coming_from'].isin(['other-search'])),
    ['article', 'n']]

We then only have to sort the values according to the number of visitors:

most_popular_articles = external_searches.sort_values(
    by='n', ascending=False).head(40)

and voila!

How about doing the same, but on the full dataset with Dask this time?

external_searches_dask = dfd.loc[
    (dfd['referrer_type'].isin(['external'])) &
    (dfd['coming_from'].isin(['other-search'])),
    ['article', 'n']]

Since we only need to store the top 40 results, we can simply store them directly in a Pandas dataframe:

external_searches_dask = external_searches_dask.nlargest(
    40, 'n').compute()

which returns this (showing only the first 5 rows here):

This is a great question to be answered graphically, so let's plot the first 40 top values:

import seaborn as sns
import matplotlib.pyplot as plt

sns.barplot(data=external_searches_dask, y='article', x='n')
plt.gca().set_ylabel('')

Question 3: What percentage of visitors to a given article page have clicked on a link to get there?

The framing of this question suggests that we need to be able to calculate the fraction for a specific article title. So let's create a function that will take a dataframe and the desired article title, and then return the percentage value.
The function will have to filter the rows for a given article, sum up all the visitor counts, and then find the cumulative sum of 'n' for the subset of visits with the 'link' value in the referrer_type column:

def visitors_clicked_link_pandas(dataframe, article):
    df_article = dataframe.loc[dataframe['article'].isin([article])]
    a = df_article['n'].sum()
    l = df_article.loc[
        df_article['referrer_type'].isin(['link']), 'n'].sum()
    return round((l*100)/a, 2)

And let's test it on one of the articles, say one with the title "Jehangir_Wadia":

>>> visitors_clicked_link_pandas(df, 'Jehangir_Wadia')
81.1

Which suggests that ~81% of the "Jehangir_Wadia" article visitors arrive there by clicking on an external link.

How can we extend that to the entire dataset using Dask? Quite easily. All we have to do is use the dask dataframe instead of the Pandas one and add the .compute() method to two of the inner statements in the function, like this:

def visitors_clicked_link_dask(dataframe, article):
    df_article = dataframe.loc[dataframe['article'].isin([article])]
    a = df_article['n'].sum().compute()
    l = df_article.loc[
        df_article['referrer_type'].isin(['link']), 'n'].sum().compute()
    return round((l*100)/a, 2)

Running the function returns the same result:

>>> visitors_clicked_link_dask(dfd, 'Jehangir_Wadia')
81.1

Question 4: What is the most common source of visits for each article?

To answer this question, we require two columns: one for the destination article and one for the origin title, as well as the sum of the number of visits. Furthermore, we have to filter out the rows with the highest number of visitors per article.

First, let's get rid of all the unnecessary extra columns by aggregating and summing up all the 'n' counts over the referrer_type for every coming_from/article combination:

summed_articles = df.groupby(['article', 'coming_from']).sum()

Next, let's find the referrers (coming_from) that generated the highest number of visitors for each article page. One way to do that is using a filter table with the desired row indices via the df.iloc[] method. So let's find the indices in the summed_articles table which correspond to the highest 'n' per article. We'll use a nifty Pandas method called idxmax, which returns the indices of the grouped column with max values. Aggregating the summed_articles again, this time over the coming_from column, we can run it like this:

max_n_filter = summed_articles.reset_index()\
    .groupby('article')\
    .idxmax()

Let's preview the filter first:

Now we can filter out the summed_articles rows with this table:

summed_articles.iloc[max_n_filter['n']].head(4)

Finally, we need to sort the values by the highest number of visitors:

summed_articles.iloc[max_n_filter['n']]\
    .sort_values(by='n', ascending=False)\
    .head(10)

Done! Now, let's try recreating this moderately-complex task in Dask on the full data set. The first step is easy. We can create a table with summed_articles like this without any issues:

summed_articles = dfd.groupby(['article', 'coming_from'])\
    .sum()\
    .reset_index()\
    .compute()

But it's best not to store it in memory — we'll have to perform the aggregation later on, and that will be memory-demanding. So let's write it down (as it's being calculated) directly to hard drive instead, for example to hdf5 or a parquet file:

dfd.groupby(['article', 'coming_from'])\
    .sum()\
    .reset_index()\
    .to_parquet('./summed_articles.parquet', engine='pyarrow')

So far so good. Step two is creating the filter table. That's where the problems begin: at the time of writing this article, Dask dataframes have no idxmax() implementation available.
We'd have to improvise somehow. For example, we could copy the summed_articles index into a new column and output it via a custom apply function. However, there's another problem — Dask's partitioning of the data means that we can't use iloc to filter specific rows (it requires the ":" value for all rows). We could try using a loc method and select rows by checking if their indices are present in the list of the previously-determined filter table, but that would be a huge computational overhead. What a bummer.

Here's another approach: we could write a custom function for processing aggregated data and use it with the groupby-apply combination. That way, we can overcome all the above issues quite easily. But then... the apply method works by concatenating all of the data output from the individually processed subsets of rows into one final table, which means it will have to be transiently stored in one piece in memory, unfortunately...

Depending on our luck with the data, it can be small enough or not. I tried that a couple of times and found it clogs my (16GB RAM laptop) memory, forcing the notebook kernel to restart eventually.

Not giving up, I resorted to the dark side of solutions by attempting to iterate over individual groups, find the right row, and append it to an hdf5/parquet storage on disk. First problem: the DaskGroupBy object has no implementation of the iteritem method (at the time of writing), so we can't use the for-in logic. Finally, we can find all the article/coming_from unique combinations and iterate over these values to group the summed_articles rows ourselves with the get_group() method:

dfd[['article', 'coming_from']]\
    .drop_duplicates()\
    .to_parquet('./uniques.parquet')

for item in pd.read_parquet('./uniques.parquet',
                            engine='pyarrow').itertuples():
    t = dfd.groupby(['article', 'coming_from'])\
        .get_group(item)\
        .compute()
    ...

That should work, but the process would be incredibly slow. That's why I gave up on using Dask for this problem.

The point I'm trying to make here is that not all data-oriented problems can be solved (easily) with Pandas. Sure, one can invest in massive amounts of RAM, but most of the time, that's just not the way to go — certainly not for a regular data guy with a laptop. That type of problem is still best tackled with the good old SQL and a relational database, where even a simple SQLite could perform better and in a very reasonable time.

We can solve this problem in several ways. Here's one of them. My solution is based on storing the data in a PostgreSQL database and performing a composite query with the help of the PARTITION BY and ROW_NUMBER functions. I use a PostgreSQL database here, but it could just as well be the latest SQLite3 (version 3.25 or later), as it now supports the PARTITION BY functionality — and that's what we need for our solution.
To enable saving the results, I created a new PostgreSQL database 'clickstream' running locally in a Docker container and connected to it from the Jupyter Notebook via an SQLAlchemy interface engine:

import psycopg2
from sqlalchemy import create_engine

engine = create_engine('postgres://<db hostname>/clickstream')
conn = psycopg2.connect(
    dbname="clickstream",
    user="postgres",
    password="<secure-password>",
    host="0.0.0.0")
cur = conn.cursor()

We then perform the summation of the Dask dataframe grouped by the article and coming_from columns, and clean up the string data from tabs and return characters, which would interfere with the PostgreSQL upload:

# note: no .compute() here; summed_articles stays a lazy Dask dataframe,
# which the chunked upload below relies on (npartitions / get_partition)
summed_articles = dfd.groupby(['article', 'coming_from'])\
    .sum()\
    .reset_index()

for c in ['\t', '\n', '\\']:
    summed_articles['article'] = \
        summed_articles['article'].str.replace(c, ' ')

summed_articles['coming_from'] = \
    summed_articles['coming_from'].str.replace('\t', ' ')

Again, at this point we still haven't performed any computation; summed_articles is still a delayed Dask object.

One last thing to do before uploading the dataframe is to create an empty table in the existing database, so sending an empty table with the right column names will do the trick quite well:

pd.DataFrame(columns=summed_articles.columns).to_sql(
    'summed_articles',
    con=engine,
    if_exists='replace',
    index=False)

And finally, let's upload the data into it. Note that, at the time of writing, the Dask dataframe offers no to_sql method, so we can use another trick to do it quickly, chunk by chunk:

import io

err_tables = []  # collect chunks that fail to upload

for n in range(summed_articles.npartitions):
    table_chunk = summed_articles.get_partition(n).compute()
    output = io.StringIO()
    table_chunk.to_csv(output, sep='\t', header=False, index=False)
    output.seek(0)
    try:
        cur.copy_from(output, 'summed_articles', null="")
    except Exception:
        err_tables.append(table_chunk)
        conn.rollback()
        continue
    conn.commit()

Next, we create a SELECT statement that partitions the rows by article, orders them locally by the number-of-visits column 'n', and indexes the ordered groups incrementally with integers (starting with 1 for every partitioned subset):

SELECT
    row_number() OVER (
        PARTITION BY article
        ORDER BY n DESC
    ) ArticleNR,
    article,
    coming_from,
    n
FROM summed_articles

Then we aggregate the rows again by the article column and return only those with the index equal to 1, essentially filtering out the rows with the maximum 'n' values for a given article. Here is the full SQL query:

SELECT t.article, t.coming_from, t.n
FROM (
    SELECT
        row_number() OVER (
            PARTITION BY article
            ORDER BY n DESC
        ) ArticleNR,
        article,
        coming_from,
        n
    FROM summed_articles
) t
WHERE t.ArticleNR = 1
ORDER BY n DESC;

The above SQL query was then executed against the database via:

q = engine.execute('''<SELECT statement here>''').fetchall()
pd.DataFrame(q, columns=['article', 'coming_from', 'n']).head(20)

And voila, our table is ready. Also, apparently the difference between a hyphen and a minus kept a lot of people awake at night in 2018:

I hope this guide helps you deal with larger datasets in Python using the Pandas + Dask combo. It's clear that some complex analytical tasks are still best handled with other technologies like the good old relational database and SQL.

Note 1: While using Dask, every dask-dataframe chunk, as well as the final output (converted into a Pandas dataframe), MUST be small enough to fit into memory.
Note 2: Here are some useful tools that help to keep an eye on data-size related issues (a small usage sketch follows the list):

the %timeit magic function in the Jupyter Notebook
df.memory_usage()
ResourceProfiler from dask.diagnostics
ProgressBar
sys.getsizeof
gc.collect()
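For example, a minimal usage sketch (my addition) wiring two of these diagnostics around a Dask computation, with dfd being the dask dataframe from earlier:

from dask.diagnostics import ProgressBar, ResourceProfiler

# ProgressBar prints live task progress; ResourceProfiler samples CPU and memory
with ProgressBar(), ResourceProfiler(dt=0.25) as rprof:
    result = dfd.groupby('article')['n'].sum().compute()

print(rprof.results[:3])  # a few (time, mem, cpu) samples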
Copy elision in C++ - GeeksforGeeks
29 May, 2017

Copy elision (or copy omission) is a compiler optimization technique that avoids unnecessary copying of objects. Nowadays, almost every compiler uses it. Let us understand it with the help of an example.

#include <iostream>
using namespace std;

class B {
public:
    B(const char* str = "\0")  // default constructor
    {
        cout << "Constructor called" << endl;
    }
    B(const B &b)  // copy constructor
    {
        cout << "Copy constructor called" << endl;
    }
};

int main()
{
    B ob = "copy me";
    return 0;
}

The output of the above program is:

Constructor called

Why is the copy constructor not called? According to theory, when the object "ob" is being constructed, the one-argument constructor is used to convert "copy me" to a temporary object, and that temporary object is copied to the object "ob". So the statement

B ob = "copy me";

should be broken down by the compiler as

B ob = B("copy me");

However, most C++ compilers avoid such overheads of creating a temporary object and then copying it. Modern compilers break down the statement

B ob = "copy me"; // copy initialization

as

B ob("copy me"); // direct initialization

thus eliding the call to the copy constructor.

However, if we still want to ensure that the compiler doesn't elide the call to the copy constructor [disable the copy elision], we can compile the program using the "-fno-elide-constructors" option with g++ and see the output as follows:

aashish@aashish-ThinkPad-SL400:~$ g++ copy_elision.cpp -fno-elide-constructors
aashish@aashish-ThinkPad-SL400:~$ ./a.out
Constructor called
Copy constructor called

If the "-fno-elide-constructors" option is used, first the default constructor is called to create a temporary object, then the copy constructor is called to copy the temporary object to ob. (Note: since C++17 this particular elision is guaranteed by the standard, so in C++17 mode the copy constructor call cannot be brought back even with -fno-elide-constructors.)

Reference: http://en.wikipedia.org/wiki/Copy_elision

This article is compiled by Aashish Barnwal and reviewed by GeeksforGeeks team. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
How to concatenate lists in java?
The addAll(Collection<? extends E> c) method of the class java.util.ArrayList appends all of the elements in the specified collection to the end of this list, in the order that they are returned by the specified collection's iterator. Using this method you can concatenate two lists.

Example:

import java.util.ArrayList;

public class Sample {
   public static void main(String[] args) {
      ArrayList<String> list = new ArrayList<String>();
      list.add("JavaFx");
      list.add("Java");
      list.add("WebGL");
      list.add("OpenCV");
      System.out.println(list);
      ArrayList<String> newList = new ArrayList<String>();
      // add the new elements to newList (not to list) before concatenating
      newList.add("HBase");
      newList.add("Neo4j");
      newList.add("MangoDB");
      newList.add("Cassandra");
      list.addAll(newList);
      System.out.println(list);
   }
}

Output:

[JavaFx, Java, WebGL, OpenCV]
[JavaFx, Java, WebGL, OpenCV, HBase, Neo4j, MangoDB, Cassandra]
Angular ngx bootstrap Pagination Component - GeeksforGeeks
02 Sep, 2021

Angular ngx bootstrap is a Bootstrap framework used with Angular to create components with great styling. This framework is very easy to use and is used to make responsive websites. In this article, we will see how to use Pagination in Angular ngx bootstrap.

Installation syntax:

npm install ngx-bootstrap --save

Approach:

First, install the Angular ngx bootstrap using the above-mentioned command.
Add the following stylesheet link in index.html:
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet">
Import the Pagination component in module.ts.
In app.component.html, make a Pagination component.
Serve the app using ng serve.

Example 1:

index.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Demo</title>
  <base href="/">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link rel="preconnect" href="https://fonts.gstatic.com">
  <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500&display=swap" rel="stylesheet">
  <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
</head>
<body class="mat-typography">
  <app-root></app-root>
</body>
</html>

app.component.html

<div class="row">
  <div class="col-xs-12 col-12">
    <pagination [totalItems]="totalItems"
                [(ngModel)]="currentPage">
    </pagination>
  </div>
</div>

app.module.ts

import { NgModule } from '@angular/core';

// Importing forms module
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { PaginationModule } from 'ngx-bootstrap/pagination';

import { AppComponent } from './app.component';

@NgModule({
  bootstrap: [ AppComponent ],
  declarations: [ AppComponent ],
  imports: [
    FormsModule,
    BrowserModule,
    BrowserAnimationsModule,
    ReactiveFormsModule,
    PaginationModule.forRoot() // forRoot() registers the module's providers
  ]
})
export class AppModule { }

For the bindings above to work, app.component.ts should also declare the totalItems and currentPage fields used in the template.

Output:

Reference: https://valor-software.com/ngx-bootstrap/pagination
What are forward declarations in C++?
A forward declaration lets the code following the declaration know that there is a class with the name Person. This satisfies the compiler when it sees the name used. Later the linker will find the definition of the class.

class Person;  // forward declaration

void myFunc(Person& p1) {  // a reference (or pointer) is fine with an incomplete type
   // ...
}

class Person {
   // Class definition here
};

So in this case, when the compiler encounters myFunc, it will know that it is going to encounter this class somewhere down in the code. This can be used in cases where code using the class is placed/included before the code containing the class definition. Note that only references and pointers to a forward-declared type can be used like this; creating, copying, or passing a Person by value requires the full class definition to be visible first.
Killshot - Information gathering Tool in kali linux - GeeksforGeeks
30 May, 2021

Killshot is used as an information-gathering tool. It is used to scan websites for information gathering and for finding vulnerabilities in websites and web apps. It is one of the easiest and most useful tools for performing reconnaissance on websites and web apps. It is available for Linux, Windows, and Android phones (Termux) and is coded in both the Bash and Ruby languages.

Killshot's interface is very similar to Metasploit's. Killshot provides a command-line interface that you can run on Linux. This tool can be used to get information about our target (domain). We can target any domain using Killshot. The interactive console provides a number of helpful features, such as command completion and contextual help. This tool is written in the Ruby language. You must have Ruby installed in your Kali Linux to use this tool.

Killshot can detect WordPress, Drupal, Joomla, and Magento CMS, WordPress sensitive files, and WordPress version-related vulnerabilities. Killshot uses different modules for doing all the scanning. The whois data collection gives us information about GeoIP lookup, banner grabbing, DNS lookup, port scanning, sub-domain information, reverse IP, and MX records lookup.

killshot is a free and open-source tool.
killshot is a complete package of information gathering modules.
killshot works and acts as a web application/website scanner.
killshot is one of the easiest and most useful tools for performing reconnaissance.
killshot is written in the Ruby language.
killshot can be used to find the IP addresses of the target.
killshot can be used to look for error-based SQL injections.
killshot can be used to find sensitive files such as robots.txt.

Step 1: Open your Kali Linux operating system and install the tool using the following command.

git clone https://github.com/bahaabdelwahed/killshot
cd killshot

Step 2: Now install the dependencies using the following command.

sudo ruby setup.rb

Step 3: All the installation has been done. Now to run the tool, use the following command.

ruby Killshot.rb

The tool is running successfully. Now let's see an example of how to use the tool for reconnaissance.

Example 1: Scan the website google.com and find the IP address, country, HTTP server details, redirect location, x-xss protection, and the languages the website is using.

Step 1: Open the tool and type the following command.

help

Step 2: Use the following command to scan the site.

site
google.com

Once you enter google.com into the site module, the tool will search for all the details and gather all the information.

You can see that we got all the details of google.com. You can also use your own target to gather information. The information includes the IP address, country, HTTP server details, redirect location, x-xss protection, and the languages the website is using.
Paid Search Incrementality. Rethinking your targets and finding the... | by António Lima | Towards Data Science
Many advertisers and digital marketing professionals often work towards achieving a target cost per sale while maximising the number of customers or sales they bring to the business. Today we will challenge the best way to achieve those goals in the paid search world, introducing the concept of incrementality. We will do so not only by explaining the business rationale and theory behind it but also by proposing practical steps to operate in an incrementality framework, as well as offering code samples to tackle the challenge in an efficient manner.

Let's then start by defining incrementality as the impact that a specific action has on total output (that being cost, sales, revenue, profit or other). To understand incrementality we have to realise that there is always a baseline situation against which you want to measure the impact of any given action:

Scenario 1 — An action is totally incremental — Ultimately, if the baseline scenario is a state of absence of almost everything, then all actions are 100% incremental. For instance, having a live website for an e-commerce company is 100% incremental. No sales or revenue can be made without it.

Scenario 2 — An action is partially incremental — Not all actions are as critical as the one above. Consider, for example, a company that buys advertising space regarding its own brand on Google. It is expectable that if it didn't, then some potential customers would click on its competitors' paid ads links; however, it cannot say that the revenue originated by people who clicked on its own ads is 100% incremental, because some people could have come to its website in any case via the organic SEO link on its own brand, skipping past the competitors' paid ads. (Note: this is not advocating for not bidding on your brand; many companies do it because these are usually cheap clicks and they're afraid competition will steer away some of their natural customers.)

Scenario 3 — An action is partially incremental and you want to find out how much of doing that action is optimal — Imagine a restaurant, for example, trying to figure out how many options it should have available on its menus. Surely the answer is often more than one or two, for increased choice and for pleasing different subsets of customers. However, at which point is the benefit of one extra menu item for increased variety surpassed by the cost of having to manage extra food inventories, train the chefs on such recipes, and coordinate a kitchen that jumps between different dishes during a single service? Probably many restaurants would like to have a tool at their disposal that would help them solve this optimisation problem (a toy sketch of exactly this kind of marginal reasoning follows this list).

Scenario 4 — An action is not incremental at all — A simple example would be giving your employees white notebooks instead of black notebooks: this action is hardly ever incremental to employees' productivity. Sometimes, however, non-incremental actions are harder to identify. For example, you show some Facebook ads to your existing customers and they eventually, at some point, purchase your items again (there's a correlation). However, it could be the case that they would have bought your items in any case (correlation not implying causation), or even that some of them would eventually have bought again from you but became upset with your company over-communicating and ended up not returning to your website or shop (in this case your action would have been negatively incremental).
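To make scenario 3 concrete, here is a toy sketch with entirely made-up numbers: cumulative benefit grows with diminishing returns, each extra menu item carries a flat cost, and we keep adding items only while the marginal gain still covers that cost.

# Hypothetical cumulative benefit of carrying n menu items, plus a flat
# per-item cost; both series are invented purely for illustration.
benefit = [0, 100, 180, 240, 280, 300, 310, 315]  # cumulative value after n items
item_cost = 35                                    # cost of managing one extra item

best_n, best_profit = 0, 0
for n in range(1, len(benefit)):
    marginal_gain = benefit[n] - benefit[n - 1]
    if marginal_gain < item_cost:  # the next item destroys value, so stop
        break
    best_n, best_profit = n, benefit[n] - item_cost * n

print(best_n, best_profit)  # -> 4 140: beyond 4 items, each addition loses money

The same marginal logic carries over to bidding: you keep buying conversions while the incremental cost of the next one stays below what it is worth to you.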
A quick note before proceeding: a reasonable expectation regarding the meaning of the expression paid search incrementality could be the pursuit of understanding the incremental value of the channel as a whole (i.e. what would happen if we turned off ppc altogether? Would some potential customers still find us via other marketing channels?). Although this can be an immensely insightful study for some companies, this is not what this article will be exploring today. Instead, this article will be focusing on what we described in scenario 3 and its application to the Paid Search world. Hopefully, this could be useful to help you think about your PPC goals and targets, and how to measure what your optimal bids for achieving them are.

So like in the example of restaurants optimising the number of menu items they should have, chances are that some google ads advertising will be beneficial and profitable for your business and that too much google ads advertising starts to become marginally loss-making for your company. This article proposes a framework for finding that optimal amount of google ads advertising while bearing an incrementality mindset and steering away from an averages way of thinking. This means making the transition between optimisation maturity 2 and 3 described below (which can be as important and fundamental as making the transition between optimisation maturity 1 and 2):

Optimisation maturity 1 — Company Target — The SEM channel manager ensures that the account is roughly achieving an overall target per conversion and that the return on ad spend is in line with the expectations of upper management or clients. The diversity of campaigns is either limited or the span in actual cost per conversion across campaigns is really big.

Optimisation maturity 2 — Averages — The SEM manager still ensures that the account's return on ad spend is in line with expectations, but at the same time ensures that it has a robust account structure with a multitude of campaigns achieving similar costs per conversion across all of them. The rationale behind it is firstly that campaigns with an average cost per conversion above target are unprofitable for the company, and secondly that volumes of conversions are maximised when all campaigns are on target vs having some campaigns below and others above target (even if at the end of the day both scenarios lead to an overall account on target). As a second step after ensuring the above, the SEM manager will also often ensure that other ways of "slicing" the account other than on a campaign level will also be on par with each other in terms of the average cost per conversion (examples being device type, major locations, ad group types that repeat themselves across many campaigns, etc.). Most companies operate on this level of ppc optimisation maturity.

Optimisation maturity 3 — Incrementality — Companies operating on this framework understand that their average targets and metrics are just the tip of the iceberg. The critical mind shift is that they redefine the question from "what's my average cost per conversion?" into "what is the maximum I am willing to spend on any given conversion?".
They also understand diminishing returns and that these vary depending on many factors: they know that two campaigns with the same average cost per conversion can have very different incremental costs per conversion, and if they have a pound or a dollar more to spend they won't distribute it evenly across these two campaigns but will instead apply it to the one with the lower incremental cost per conversion. However, to put such a framework into practice, the very first thing they do differently is to generate their own datasets, as incrementality metrics are not readily available, and this usually involves a lot of cross-team collaboration (and they don't by any means underestimate the importance of communication, alignment and buy-in across all teams involved). Without such an effort, this framework will never be anything more than a theoretical exercise. Finally, once that's unlocked, they go from theory to practice, from averages to incrementality, by using their new metrics, reports and marketing business objectives.

But what do we mean by campaigns with the same averages but different incrementality economics? Let's consider the following example: for both campaigns ABC and XYZ, you have set a target cpa of £6 (assuming smart bidding here, but it should be no different if you're running keyword-level manual bidding) and Google is able to get you 100 conversions at that average cost for each campaign. Without taking incrementality into consideration you might have no reason to take different actions when optimising each campaign. However, what might be happening in the background is that campaign ABC would be able to get you 80 conversions at an average price of £5.8 and an extra 20 conversions at an average price of £6.8 (average cpa for the total 100 conversions still being £6/conversion), whereas campaign XYZ would be able to get you 80 conversions at an average price of £4 and then finds some slack to get you an extra 20 conversions at an average price of £14 (average cpa for the total conversions still being £6 — and yes this can happen, and it does happen all the time on so many occasions!). Basically, in one case you are acquiring the extra 20 conversions at a price that's still reasonable, and in the other at a price that probably makes you lose money, and this sums up the idea of why averages often hide half of the picture. We finish this section with a chart that illustrates this idea:

At this stage, we have laid the theoretical framework in regards to why PPC Incrementality is important, but we haven't yet proposed a practical way to operate under this level of optimisation maturity, and that is probably the main point of this article. It is harder though than operating on an averages framework, hence fewer companies adopt it. The good thing about working with averages is that the data is usually readily available for consumption and it does not require extra steps for generating it. Google ads will provide you with detailed spend and conversion data from which you can easily calculate your average cost per conversion on the account itself or on almost whichever "slice" of the account (per campaign, ad group, device, a combination of device and campaign, location, audience, etc.). If you want to operate on an incrementality framework you will have to figure out which data you want to generate, actually generate it, and ensure its quality and accuracy.

So, how do you run an incrementality test on your ppc account?
And yes I say a test because, on the one hand, for many people ppc incrementality might not be a thing they have yet implemented, and on the other, ppc incrementality does require testing per se. When we earlier gave an example in which, out of your 100 conversions at a total cost of £600, you could potentially get 80 of them at a cost of £4 and an extra 20 at a cost of £14, we did not mention that there is actually a way to know whether that is the case for sure (at least statistically speaking). And the vehicle for that is the (very likely known to you) google ads' Drafts and Experiments. This tool will allow you to A/B test any campaign in your account, splitting search traffic equally among the control and variant (or original and draft campaign in google ads jargon), and peek into what would have happened to your metrics (spend and conversions) had you had the draft instead of the original in place. The difference between your draft and original campaign will give you your incremental spend and incremental conversions. Careful though: if we don't factor statistical significance into our test, it will be as informative as doing nothing. Moreover, chances are that you will hardly ever get statistical significance at the campaign level and you will have to get less granular in your test (i.e. consider starting at account/group of campaigns/portfolio level).

It eventually boils down to the following steps (and now we enter the practical application of the article):

1. Scenario Analysis
2. Create multiple drafts and experiments at scale
3. Monitor the integrity of the test
4. Collect the data in a suitable manner and get it ready for measurement
5. Evaluate the Test
6. Exploratory Analysis
7. Wrap up and prepare for future incrementality tests

Starting with step 1, scenario analysis: this is the planning, playing-with-numbers-and-forecasts phase. You should start with a hypothesis, as in any good experiment, and that hypothesis should be actionable (i.e. its confirmation or not should make you take either action A or action B) and measurable with a good degree of accuracy (i.e. you must be able to generate enough data to reach a statistically significant conclusion and there shouldn't exist any technical, commercial or business blockers preventing you from generating such data). Let's get back to our example in which we had a ppc account generating a total of 100 conversions with an average cost per conversion of £6. An example of a good hypothesis, in this case, could be:

You want to find out if your incremental conversions cost you more than £15 (because you break even at £15, or because you want a marginal ROAS of at least 2 times to compensate for all other costs you have, for example).

If you confirm your hypothesis (incremental conversions > £15 per conv) the action you will take is to decrease your bids down to the bidding level you tested in this experiment.

If you disprove your hypothesis (incremental conversions < £15 per conv) the action you will take is to maintain your bid levels.

With this in mind you'll now run some scenarios playing with the variables of the test:
Potential metrics of the test: lower funnel conversions may provide you with the answer you're most interested in, but upper funnel conversions are more frequent, hence meaning shorter test durations. The more suspicious you are of the effect that varying bids might have on the type of visitors your ppc account attracts, the more reason to make the effort to have lower funnel conversions as your test metric; otherwise upper funnel conversions might just do it.

Variation in bidding levels: it's more interesting to test bidding levels closer to each other, like a 10% change, but you might not have enough data to accurately validate such a small difference.

Assumptions as to what happens to spend and conversions when changing your bids by the levels defined above: if you are decreasing your bids, naturally your spending levels will decrease as well, but by how much, one might wonder? You should put forward a few assumptions in advance and check what impact on your estimates and confidence levels they would have "if you were to observe such difference at the end of the test". You want to ensure beforehand that the design of the test is robust enough to withstand different outcomes and still be able to provide you with the answer to your initial question: is my incremental cost per conversion above or below £15? And for that, you will want your confidence intervals not to overlap £15.

Time to run the test for: the longer it is, the more data you gather and the smaller your confidence intervals become; however, running a test always has a cost associated with it, be it at least the number of conversions sacrificed or the opportunity cost of not running other tests. The more important the answer to your question is, the more willing you will be to extend the test duration.

The more options you consider for the 4 variables described above, the more scenarios you will end up with, so we want to strike a balance between a comprehensive mix of scenarios and not getting overwhelmed by them. Back to our example, we have created some scenarios ourselves as well. We have considered two metrics; two variations in bidding levels; two different assumptions as to what happens to spend and conversions when varying each of the bidding levels; and two test durations. All these combinations resulted in a total of 16 scenarios as per the table below:

The first thing that is important to explain is that you will generate a lot more "spend" data than "conversions" data, hence your spend confidence interval will usually be very close to its point estimate, meaning that your test accuracy will mostly depend on your conversions data.

Secondly, the confidence intervals calculation depends not only on your assumptions but also on how much data you will generate (you should look into your account's history to have an idea of how much that is in your case). In the "Evaluate the Test" section we dive deeper into confidence interval calculations, which you can then also apply when calculating the expected CIs in the scenario analysis; a rough sketch of how you could approximate them upfront follows below.
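To make that scenario exercise concrete, here is a minimal sketch of one way to approximate an expected confidence interval for a single scenario before launching anything. Both the approach and the numbers are illustrative assumptions of mine (monthly totals, conversion counts treated as roughly Poisson, spend treated as fixed), not the exact method behind the table above:

```python
import numpy as np

def expected_incremental_cpa_ci(spend_ctrl, conv_ctrl, spend_test, conv_test,
                                n_sims=10_000, ci=0.90, seed=42):
    """Simulate the incremental cost per conversion CI for one scenario,
    assuming conversion counts are Poisson-distributed and spend is ~fixed."""
    rng = np.random.default_rng(seed)
    conv_c = rng.poisson(conv_ctrl, n_sims)
    conv_t = rng.poisson(conv_test, n_sims)
    diff = conv_c - conv_t
    diff = np.where(diff == 0, np.nan, diff)  # guard against zero division
    inc_cpa = (spend_ctrl - spend_test) / diff
    lo, hi = np.nanpercentile(inc_cpa, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
    return np.nanmean(inc_cpa), lo, hi

# e.g. two months of leads: control spends £2,400 for 400 leads;
# test (bids down) spends £1,200 for 300 leads.
print(expected_incremental_cpa_ci(2400, 400, 1200, 300))
```

If the interval this returns straddles your £15 threshold under your more pessimistic assumptions, the scenario is not safe to run.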
So, once we have all these, it's time to make decisions regarding the final design of our experiment, and looking at the scenarios in the example it seems that the following are not possible:

A 10% variation in bids doesn't seem to be an option. Even if we were to run it for 2 months and used leads as the test metric, the confidence interval in the scenario in which we assumed spend to decrease by 15% and conversions by 7% would overlap £15. We could end up in a situation by the end of the test in which we would have neither confirmation nor disproval of our hypothesis, hence unsure of whether to take action A or action B.

Running a test based on purchases for the duration of 1 month seems to be off the table too (in our spend 50% down and conversions 25% down scenario the confidence interval overlaps £15 by a lot — [£8.8; £18.8] — and using the other assumptions the confidence interval just touches £15).

Hence our options would be either:

- Run the test for 1 month and evaluate it on Leads (both spend/conversions assumptions end up in a confidence interval clear of overlapping £15)
- Extend the test duration to two months and turn Purchases into your metric (again CIs clear of overlapping £15)

Which one you end up choosing depends on what's more important for you in the tradeoff of test duration vs lower funnel conversions.

At this point you have chosen the design of your test, and here is where we need to get a bit more technical, as we will be introducing code samples to tackle this step. Python, being the data language of choice, is what we will be using, plus we will be interacting with the adwords API. So to start with, you will need to connect to your account's AdWords API. If you haven't used the API before, the first thing you have to do is to get a Developer Token; a Google Client ID (this is different from the Client Customer Id — the one you actually see in the google ads user interface with a number similar to this one: 562–425–3085); a Google Client Secret; and a Google Refresh Token. There's very useful documentation on how to get started with the adwords API and generate all of the above here.

If you have succeeded with the above, then a connection script along the lines of the first sketch below should connect you to your adwords API.

The next thing you will probably have to do relates to budgets. If you are not using any shared budgets in your account you can ignore this next bit, but my bet is that at least some of your campaigns are sharing budgets. If that is indeed your account's case, then the next bit sets out to address one of the drawbacks of using Drafts and Experiments: you can't create a draft for a campaign that is using a shared budget. This means that you will have to create individual budgets and assign each of them to your campaigns. Fortunately, you can also do this programmatically, which will save you lots of time when doing it for all the campaigns in your account that you proposed to enter the incrementality test. Once you decide what is a sensible amount for each campaign's individual budget, a script like the second sketch below will help you create them.
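A minimal connection sketch using the googleads Python client library. The credential placeholders are to be filled in with the values you generated above, and the 'incrementality-test' user agent string is arbitrary:

```python
from googleads import adwords, oauth2

# Placeholder credentials: fill in with the values you generated above.
CLIENT_ID = 'INSERT_GOOGLE_CLIENT_ID'
CLIENT_SECRET = 'INSERT_GOOGLE_CLIENT_SECRET'
REFRESH_TOKEN = 'INSERT_GOOGLE_REFRESH_TOKEN'
DEVELOPER_TOKEN = 'INSERT_DEVELOPER_TOKEN'
CLIENT_CUSTOMER_ID = 'INSERT_CLIENT_CUSTOMER_ID'  # e.g. '562-425-3085'

def get_adwords_client():
    """Build an authenticated AdWordsClient from raw OAuth2 credentials."""
    oauth2_client = oauth2.GoogleRefreshTokenClient(
        CLIENT_ID, CLIENT_SECRET, REFRESH_TOKEN)
    return adwords.AdWordsClient(
        DEVELOPER_TOKEN, oauth2_client, 'incrementality-test',
        client_customer_id=CLIENT_CUSTOMER_ID)

client = get_adwords_client()
```

And a sketch for the budget step: it creates a non-shared budget via BudgetService and assigns it to a campaign via CampaignService (amounts are in micros, so £50 per day is 50,000,000):

```python
def create_individual_budget(client, campaign_id, amount_micros):
    """Create a non-shared budget and assign it to the given campaign."""
    budget_service = client.GetService('BudgetService', version='v201809')
    campaign_service = client.GetService('CampaignService', version='v201809')

    budget_operation = [{
        'operator': 'ADD',
        'operand': {
            'name': 'Individual budget - campaign %s' % campaign_id,
            'amount': {'microAmount': amount_micros},
            'deliveryMethod': 'STANDARD',
            'isExplicitlyShared': False,
        }
    }]
    budget = budget_service.mutate(budget_operation)['value'][0]

    # Point the campaign at its new individual budget.
    campaign_operation = [{
        'operator': 'SET',
        'operand': {
            'id': campaign_id,
            'budget': {'budgetId': budget['budgetId']},
        }
    }]
    campaign_service.mutate(campaign_operation)
    return budget['budgetId']

# e.g. create_individual_budget(client, 1234567890, 50_000_000)  # £50/day
```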
Now is a good time to introduce why google has probably named this feature Drafts and Experiments, and not just Drafts or just Experiments, since creating a draft is next on the list. In a nutshell, a draft, when created, is a duplication of an existing campaign that will always remain linked with its original campaign (i.e. it can't run on its own independently) and that will see some component of it get changed vs the original campaign (in theory you don't have to change anything, but that would seem pointless); whereas an experiment is the set of settings that will determine in which circumstances and for how long a test between a draft and an original campaign will happen (what is the split of traffic, if this split is made on a cookie or search basis, what's the start date, etc.). A draft is a campaign; an experiment is settings.

So next on the list is to create the Draft, which can be accomplished by running the first sketch below. And surely the most important thing to note is the information returned by this create_draft() function, as we will need it afterwards to make changes to the draft campaign and also to create the experiment. And that is exactly what we will be doing next: changes to the draft campaign.

As in the examples earlier above, we will consider that you are operating on a target_cpa smart bidding strategy and that you want to find out what happens when you drop your targets by 30%. The second sketch below will help you with that. It does two things: firstly it creates a target cpa portfolio with a target that is 30% less than the portfolio that is assigned to the original campaign, and secondly it assigns that newly created portfolio to the draft campaign we created in the step before. There are a few additional things though that we should bear in mind:

- If a multitude of your campaigns is assigned to the same portfolio, then you wouldn't want to create a single portfolio for each draft campaign. Instead, a more sensible approach would be to create one portfolio for each group of draft campaigns that share the same target cpa and assign that portfolio to all the drafts with the same target. The exact code to do it would be slightly different but would follow a similar logic.

- You might have ad groups within some campaigns that have their own individual targets. If that is the case, you will have to ensure that the corresponding ad groups in the draft campaign have their targets dropped by 30% too (in this example). Otherwise, you won't have a fair test, as some ad groups won't be dropped by the same order of magnitude as others. You would need an additional script to do this operation.

Once the above is done, we're off to the final step of making an experiment live using Drafts and Experiments. Not a complicated one: you just need to grab the draft_id and campaign_id returned by the create_draft() function, then choose an experiment name (ideally something along the lines of a bit that identifies the original campaign and a bit that identifies the whole group of experiments that are part of this test, like a suffix: campaign_123-decrease_30_test for example), a split percentage (recommend 50%) and a split_type (recommend Cookie). The third sketch below will then handle this final operation:
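First, a sketch of a create_draft() helper using the v201809 DraftService (the helper name follows the article; the draft naming is an assumption of mine):

```python
import uuid

def create_draft(client, base_campaign_id):
    """Duplicate an existing campaign as a draft and return the ids we
    will need later to modify it and to create the experiment."""
    draft_service = client.GetService('DraftService', version='v201809')
    draft_operation = {
        'operator': 'ADD',
        'operand': {
            'baseCampaignId': base_campaign_id,
            'draftName': 'Incrementality draft %s' % uuid.uuid4(),
        }
    }
    draft = draft_service.mutate([draft_operation])['value'][0]
    return {
        'base_campaign_id': draft['baseCampaignId'],
        'draft_id': draft['draftId'],
        'draft_campaign_id': draft['draftCampaignId'],
    }
```

Second, a sketch that creates the minus-30% target cpa portfolio and assigns it to the draft campaign. Here I pass the original target in micros explicitly; you could equally fetch it via BiddingStrategyService first:

```python
def assign_reduced_target_cpa(client, draft_campaign_id,
                              original_target_micros, drop=0.30):
    """Create a tCPA portfolio `drop` below the original target and
    assign it to the draft campaign."""
    strategy_service = client.GetService('BiddingStrategyService',
                                         version='v201809')
    campaign_service = client.GetService('CampaignService', version='v201809')

    strategy_operation = {
        'operator': 'ADD',
        'operand': {
            'name': 'tCPA -30pct draft %s' % draft_campaign_id,
            'biddingScheme': {
                'xsi_type': 'TargetCpaBiddingScheme',
                'targetCpa': {
                    'microAmount': int(original_target_micros * (1 - drop))},
            },
        }
    }
    strategy = strategy_service.mutate([strategy_operation])['value'][0]

    campaign_operation = {
        'operator': 'SET',
        'operand': {
            'id': draft_campaign_id,
            'biddingStrategyConfiguration': {'biddingStrategyId': strategy['id']},
        }
    }
    campaign_service.mutate([campaign_operation])
```

And third, a sketch that turns the draft into a live experiment. In API jargon this is a "trial", created through TrialService; note trial creation is asynchronous, so in production you would poll its status until it goes live:

```python
def create_experiment(client, draft_info, suffix='decrease_30_test',
                      start_date='20200101', end_date='20200131'):
    """Launch a 50/50, cookie-based experiment for the given draft."""
    trial_service = client.GetService('TrialService', version='v201809')
    trial_operation = {
        'operator': 'ADD',
        'operand': {
            'baseCampaignId': draft_info['base_campaign_id'],
            'draftId': draft_info['draft_id'],
            'name': 'campaign_%s-%s' % (draft_info['base_campaign_id'], suffix),
            'startDate': start_date,
            'endDate': end_date,
            'trafficSplitPercent': 50,
            'trafficSplitType': 'COOKIE',
        }
    }
    return trial_service.mutate([trial_operation])['value'][0]
```

Chaining the three for one campaign would then look like: draft_info = create_draft(client, 1234567890); assign_reduced_target_cpa(client, draft_info['draft_campaign_id'], 6_000_000); create_experiment(client, draft_info).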
When you are running a test, there is nothing worse than seeing it being invalidated, particularly for totally avoidable reasons. This section is about the watchpoints: things that can go wrong if we are not careful and that would ultimately invalidate this test's data (these incrementality datasets) that is so precious for our end goal. The crux of the matter is that the only setting that should be different between your original and draft campaigns is the bidding levels. So:

- If you have any team member or tool that updates ad copy in a systematic way across the account, make sure the same changes are applied to original and draft.

- If your landing pages are changing for any reason, make sure that at the end of the day original and draft have the same landing pages.

- If you're adding bid modifiers to your campaigns, you need to add those to the drafts too.

- As previously mentioned, if your campaigns have ad group targets, they'll have to be reduced/increased by the same proportion as the main campaign targets have been, and you need to ensure they stay that way throughout the test.

I have added some example code of how you can quickly make this comparison, in particular leveraging the adwords API; see the first sketch at the end of this section. I think you got the point already, but for comprehensiveness here are some other settings you might want to bear in mind: budgets, location targeting, audiences, networks, ad rotation, conversions, new ad groups or keywords, negative keywords, extensions. Ultimately it's a balance of either trying to avoid making many changes to the campaigns that you have created a draft and experiment for, or having a robust system in place that will ensure changes applied to the original are also applied to the draft.

Finally, one extra note to close this section: did the portfolios in your draft campaigns go into learning mode and behave "funny" in the first few days? If yes, consider excluding those days from your incrementality dataset.

So you've launched your test and you made sure it's running on fair terms; the next thing you want to do is to gather and prepare the test data in a manner that's suitable for analysis and evaluation. In line with what we have done so far, we will resort to the adwords api to tackle this step of the process as well. But first let me ask you, have you heard about fivetran? If not, please do yourself a favour and go implement it straight away! Assuming you currently use fivetran, or you went on implementing it and came back to the article, we will use the google ads fivetran connector to get ourselves the data we need. The alternative would be to use the AdWords Query Language (AWQL) directly, but fivetran handles all of that for us with just a few clicks.

So, using fivetran, what we need to do is to create a new google ads connector and sync a Campaign Performance Report into your database of choice (psst... snowflake is the best one), choosing the following fields:

- AccountDescriptiveName
- CampaignId
- CampaignName
- AdNetworkType2 (not needed for test analysis but useful for deep dives)
- Date
- Device (not needed for test analysis but useful for deep dives)
- Clicks (not needed for test analysis but useful for deep dives)
- Conversions
- ConversionValue
- Cost
- Impressions (not needed for test analysis but useful for deep dives)
- SearchImpressionShare (not needed for test analysis but useful for deep dives)

And finally, once this data has made its way into your database, a query along the lines of the second sketch below will organise it in a way that's suitable for test evaluation (such query will serve as the backbone for the evaluation of the test):
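A sketch of the settings comparison: it pulls both campaigns through CampaignService and flags any checked field that differs. Extend FIELDS_TO_CHECK with whatever settings matter in your account; the helper name is mine:

```python
FIELDS_TO_CHECK = ['status', 'adServingOptimizationStatus', 'settings', 'labels']

def compare_campaign_settings(client, base_campaign_id, draft_campaign_id):
    """Flag settings (other than bidding) that differ between the
    original and the draft campaign."""
    campaign_service = client.GetService('CampaignService', version='v201809')
    selector = {
        'fields': ['Id', 'Name', 'Status', 'AdServingOptimizationStatus',
                   'Settings', 'Labels'],
        'predicates': [{
            'field': 'Id',
            'operator': 'IN',
            'values': [base_campaign_id, draft_campaign_id],
        }],
    }
    entries = campaign_service.get(selector)['entries']
    base = next(c for c in entries if c['id'] == base_campaign_id)
    draft = next(c for c in entries if c['id'] == draft_campaign_id)
    for field in FIELDS_TO_CHECK:
        # String-compare to keep the sketch simple; the SOAP objects nest deeply.
        if str(base[field]) != str(draft[field]):
            print('Mismatch on %s:\n%s\nvs\n%s' % (field, base[field], draft[field]))
```

And a sketch of the backbone query, embedded as a Python string so it can be run from your analysis environment. The table and column names are hypothetical and depend on how fivetran lands the report in your warehouse; test campaigns are identified here through the experiment naming convention suggested earlier, and in a real query you would also restrict the control side to the campaigns actually enrolled in the test:

```python
import pandas as pd

TEST_EVALUATION_QUERY = """
SELECT
    date,
    campaign_id,
    campaign_name,
    CASE WHEN campaign_name LIKE '%decrease_30_test%'
         THEN 'test' ELSE 'control' END AS experiment_group,
    SUM(cost) / 1e6       AS spend,        -- cost lands in micros
    SUM(conversions)      AS conversions,
    SUM(conversion_value) AS conversion_value
FROM google_ads.campaign_performance_report
WHERE date BETWEEN '2020-01-01' AND '2020-01-31'
GROUP BY 1, 2, 3, 4
ORDER BY 1, 2
"""

# e.g. df = pd.read_sql(TEST_EVALUATION_QUERY, warehouse_connection)
```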
At this stage, your test is coming to an end or is already finished. You have collected and prepared your data in a way that is now suitable for analysis and evaluation, and that's exactly what you're going to do now. But how? There's not one single right way to do it, but we propose using a statistical technique called bootstrap resampling (which is actually not so different from what google itself uses to report their confidence intervals — they use jackknife resampling, though). The reason why we need to do this has to do with google only reporting confidence intervals on individual campaigns' experiments, so if you are running an experiment on a multitude of campaigns you will have to calculate such confidence intervals yourself, as if they were one big campaign, because they are in fact all part of the same experiment.

The end goal of this stage is not just to find the difference in spend and in conversions between the test and control groups (because that is relatively easy — we just need to look at total spend and conversions on both groups of the experiment) but also to find a confidence interval for that difference, hence the resampling. By resampling we will be taking a sample (pardon the redundancy) with replacement of n observations of the performance of different campaigns on different days and comparing the differences between test and control (n being the total number of rows yielded by the query provided in section 4 — i.e. total combinations of campaigns/days). We will be repeating this process a multitude of times (usually between 1,000 and 10,000) and for each of them comparing the difference between test and control. Having this set of samples, we are now able to calculate the point estimate (the mean of the resamples' means) and the variance of performance amongst this set of samples, and hence calculate our statistic's standard deviation as:

σx = ( Σ (xi − x̄)^2 / j ) ^ (1/2)

xi, being the mean of each resample
x̄, being the point estimator (the mean of all resamples' means)
j, being the number of resamples

And now, having both the variance and the observed difference between control and test, we are able to calculate the much-wanted confidence interval as:

CI = x̄ ± 1.64 * ( σx / n^(1/2) )

x̄, being the point estimator (the mean of all resamples' means)
σx, the standard deviation of the resamples' means
n, the size of the sample (i.e. combinations of campaigns/days)
1.64, Zα for a 90% Confidence Interval

Finally, we can say with 90% confidence that the true difference in performance at different bidding levels in our ppc account lies between the lower bound and the upper bound of the confidence interval, and make business decisions accordingly. Back to our example in the scenario analysis section, this means that if the confidence interval is below £15 we will maintain our bidding levels; however, if it is above £15 we will decrease our bids by 30% (as the marginal cost of the marginal conversions would be above our threshold of profitability).

The actual code to make such calculations using python, and a dummy dataset mimicking one prepared according to the methodology proposed in step 4, is sketched below:
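A self-contained sketch that generates a dummy campaign/day dataset, bootstraps it, and applies the two formulas above exactly as written. The statistic resampled here is incremental cost per conversion (Δspend / Δconversions); swap in whatever statistic you care about:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Dummy dataset mimicking the backbone query output:
# one row per campaign/day, with spend and conversions for each group.
rows = []
for day in range(30):
    for campaign in range(10):
        rows.append({
            'campaign_id': campaign, 'day': day,
            'spend_control': rng.normal(20, 4),
            'spend_test': rng.normal(10, 3),
            'conv_control': rng.poisson(3.3),
            'conv_test': rng.poisson(2.5),
        })
df = pd.DataFrame(rows)

spend_diff = (df['spend_control'] - df['spend_test']).to_numpy()
conv_diff = (df['conv_control'] - df['conv_test']).to_numpy()

# Bootstrap: resample the n campaign/day observations with replacement
# and recompute the incremental cost per conversion each time.
n = len(df)
j = 10_000
resample_means = np.empty(j)
for i in range(j):
    idx = rng.integers(0, n, n)
    resample_means[i] = spend_diff[idx].sum() / conv_diff[idx].sum()

x_bar = resample_means.mean()  # point estimator
sigma = np.sqrt(((resample_means - x_bar) ** 2).sum() / j)  # sigma_x above
ci = (x_bar - 1.64 * sigma / np.sqrt(n), x_bar + 1.64 * sigma / np.sqrt(n))
print('Incremental CPA: %.2f | 90%% CI: [%.2f, %.2f]' % (x_bar, ci[0], ci[1]))
```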
If you are interested in finding out more about this resampling technique and the theory behind it, I really recommend this article.

This next stage, the exploratory analysis, is probably one of the most interesting and joyful stages of this whole journey. You have successfully identified what the incremental value of going from bidding level x to bidding level y is, and gave your business crucial information to make a decision on whether to keep the bidding levels or change them, depending on the incremental value of your incremental spend. You have potentially done this for the overall account, if you're doing it for the first time or if you don't generate a lot of conversion data; or you have gone further and done it for multiple portfolios, groups of campaigns or others (meaning that your test was actually a set of tests and you took actions on each of them accordingly) — for simplicity though, in this section we will assume you ran an overall account test. Nonetheless, with a clear hypothesis that we want to answer being the main goal of this venture, a ppc incrementality test should also create many byproducts that you won't want to ignore.

The byproducts might need to be planned (for example if you purposely break down your campaigns into targeting locations where it is raining at the moment and locations where it isn't raining — farfetched in most cases, but if you have a windshield business or you sell snow chains, for instance, this might be important for you), but usually they will be available in one or more of the many adwords report types, and in many cases they will even be unintentional and only found after careful post-test analysis. So let's dig more into this second option: I will give a couple of examples in bigger depth and, in addition, suggest a list of factors that might have an interesting impact on different incrementality levels within your account — your job is to look at all of them and find out which ones are more decisive for your business.

Example 1: Impression share can play a critical role in incrementality. Imagine you sell two types of t-shirts on your website — a blue and a red one (and you have a ppc campaign for each of them). They're both sold at the same price, but the blue T-shirt's conversion rate is 40% and the red's is 10%, hence you're willing to pay 4x more for a blue T-shirt ppc click than for a red T-shirt click. In this case you'll end up with the same cost per conversion on both campaigns, but you'll likely have a higher impression share on the blue campaign (because you're paying more per click). What's important about this is that, usually, it's more difficult to go after the extra bit of potential customers you're not showing ads to when you have 80% impression share than to go after a few more of the yet-so-many available potential customers when your impression share is 20%; hence (rule of thumb) the higher the impression share, the higher your marginal cost per conversion, and this is particularly noticeable at both extremes of impression share. Below is a real-life-based example of the role of impression share in marginal cost per conversion:

Example 2: Your main competitor offers their services both via web and via an app, and to attract customers they use both google search ads and universal app campaigns. This means that your competitor will have a wider range of possibilities and a higher probability of competing for the same search keywords on a mobile device than on a computer. The competitive landscape will be more favourable on computer devices, and all things equal you would expect your marginal cost per conversion to be higher on mobile devices than on computers.

List of factors potentially playing an important role in incrementality (a sketch of how you might slice the test dataset along factors like these follows after the list):

- Impression Share (and other measures of competitiveness like average position, absolute top of page, etc.)
- Device
- Location
- Audiences (people who have interacted with your website vs. people who haven't registered any activity with you; gender; demographics; etc.)
- Campaign Type / Ad group type
- Keyword Match Type
- Time of Day & Day of Week
- Network
- Demographic factors like age and gender
- Others like weather (mentioned in the example above); occurrence of a significant event (for a newspaper business); etc.
- As many as your creativity can get you to
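Here is a small illustrative sketch of that kind of slicing, using toy numbers; the same groupby pattern works for any factor you synced into the report (location, network, impression share bands, etc.):

```python
import pandas as pd

# Toy slice of the prepared dataset, extended with the device dimension.
df = pd.DataFrame({
    'group':       ['control', 'test'] * 3,
    'device':      ['MOBILE', 'MOBILE', 'DESKTOP', 'DESKTOP', 'TABLET', 'TABLET'],
    'spend':       [300.0, 180.0, 250.0, 100.0, 50.0, 20.0],
    'conversions': [40, 32, 45, 25, 8, 5],
})

# Incremental cost per conversion by device: delta spend over delta conversions.
by_device = (df.groupby(['device', 'group'])[['spend', 'conversions']]
               .sum().unstack('group'))
inc_cpa = (
    (by_device[('spend', 'control')] - by_device[('spend', 'test')]) /
    (by_device[('conversions', 'control')] - by_device[('conversions', 'test')])
)
print(inc_cpa.sort_values())
```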
Finally, what's still most important about these analyses is that they should, and most likely will, inform which tests you will run next. Let's say again this was the very first test you ran and you had no preconceived idea of which factors would most contribute to incrementality within the account. After you run such exploratory analysis, you will be able to formulate a more robust hypothesis and ensure you design a test to statistically prove (or not) it. For example, you might be able to group campaigns into four groups because you observe a pattern, and design a test that will ensure you find their incrementality levels with precision and shift spending across them according to the test results; or you might design a test focused on confirming (or not) that campaigns/ad groups/devices with higher impression share will be less incremental, hence shifting budget to the ones with lower levels of IS.

If you have made it this far then congrats, you've done it — you have optimised your paid search account in an incrementality framework. Nevertheless, now it's also the time to think about the future and how you are going to keep operating in an incrementality world. Below is a list of the 3 most important things you want to consider at this wrapping-up stage:

- We hope that the test has turned out to be conclusive and that it has led you to take either one or another action, but even if this is the case these findings might not hold true for long: auctions' competitive landscape is ever-evolving and hence marginal cost per conversion will keep changing. You need to plan in order to keep a reasonably constant level of testing inbuilt into your accounts so that you can have accurate and updated answers to guide your optimisation goals.

- You need to decide if the next set of tests you will run will be made on the same dimensions as the first one (i.e. account/portfolio/campaigns grouped by a given logic, etc.). If this is the case, you still want to sense-check if there's anything you want to change about the test setup (could you run it for a shorter period of time? Given what you now know, should you change the test metric? Etc.). Alternatively, you might have found something interesting in the exploratory analysis that you now want to confirm (statistically speaking). For example, in the next test you might group your campaigns into low/medium/high impression share and test increasing/decreasing different bidding levels for the different groups; or you might want to test different device multipliers for mobile/tablet/computer instead of changing campaigns' bids.

- Finally, at some point, you will also want to consider some sort of automation in terms of implementing and measuring these tests.

This article covered the reasoning behind paid search incrementality, why you should implement it in your marketing optimisation routines, and a practical way to indeed implement it; but instead of using this section to stress any of those points, I will save it to talk about the impact, which at the end of the day is what really matters. Hope this will be the extra bit of motivation you need to actually go for it and implement it on your paid search account. So let me end talking about a startup in which we implemented this methodology and what it meant for the company:

Context: this was a company in a high-growth phase; hence, historically, they were willing to break even between cost per conversion and revenue per conversion (on average). At a given period the company started to feel the need to show signs of profitability, so google ads could no longer run at break-even cost (on average). Hence the challenge was defined as: moving from breaking even on average to breaking even marginally.
The methodology was to run a series of tests from bidding level x to y to z (each of them following the 7 practical steps defined above) to find the point on the bidding level curve where the marginal cost per conversion was the same as the up-to-the-moment average cost. The results were:

1) For each 10% we saved in total spend in paid search, we only sacrificed 4.2% of the total conversions generated by paid search.
2) From breaking even in paid search to a return on ad spend of 233%.
3) Not a single conversion turning a loss for the company.

Leaving a note on potential alternative ways to tackle this challenge other than the Drafts & Experiments route. The first one is to use google ads bid landscapes — these are basically estimates that google makes about spend, conversions and conversion value were you to have different target cpas or target roas. You can use these in the google ads UI for a one-off, or programmatically via the API. The main drawback is that they are only available at an ad-group level, which may not generalise well for your use case, and in addition the ad group needs to meet certain criteria. If this becomes available via the API at a campaign or portfolio level, this can become really powerful though.

The second alternative is to use the trafficEstimator API — I haven't used it myself because I have been working with smart bidding mostly, but if you use manual bidding this tool should help you calculate how many clicks you'd generate for different bidding levels (max cpcs in this case), and if you assume a flat conversion rate from click to conversion you will be able to calculate your marginal cost per conversion from bidding level x to bidding level y. A small sketch of the call is below:
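A minimal sketch of a TrafficEstimatorService request, under the assumption that you are on manual cpc. It asks for click estimates for one keyword at one max cpc; run it at two bid levels and difference the results:

```python
def estimate_clicks(client, keyword_text, max_cpc_micros):
    """Estimate traffic for a single exact-match keyword at a given max cpc."""
    traffic_estimator_service = client.GetService(
        'TrafficEstimatorService', version='v201809')
    selector = {
        'campaignEstimateRequests': [{
            'adGroupEstimateRequests': [{
                'keywordEstimateRequests': [{
                    'keyword': {
                        'xsi_type': 'Keyword',
                        'text': keyword_text,
                        'matchType': 'EXACT',
                    },
                    'maxCpc': {
                        'xsi_type': 'Money',
                        'microAmount': max_cpc_micros,
                    },
                }]
            }]
        }]
    }
    estimates = traffic_estimator_service.get(selector)
    keyword_estimate = (estimates['campaignEstimates'][0]
                        ['adGroupEstimates'][0]['keywordEstimates'][0])
    # Each estimate comes back as a min/max range of clicks per day,
    # impressions per day and total cost.
    return keyword_estimate['min'], keyword_estimate['max']
```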
The reason why we need to do this has to due with google only reporting confidence intervals on individual campaigns’ experiments, so if you are running an experiment on a multitude of campaigns you will have to calculate such confidence intervals yourself as if they were one big campaign because they are in fact all part of the same experiment." }, { "code": null, "e": 28491, "s": 27436, "text": "The end goal of this stage is not just to find the difference in spend and in conversions between the test and control groups (because that is relatively easy — we just need to look at total spend and conversions on both groups of the experiment) but also to find a confidence interval for that difference hence the resampling. By resampling we will be taking a sample (pass the redundancy) with replacement of n observations of the performance of different campaigns on different days and compare the differences between test and control (n being the total number of rows yielded by the query provided in section 4 — i.e. total combinations of campaigns/days). We will be repeating this process a multitude of times (usually between 1,000 and 10,000) and for each of them comparing the difference between test and control. Having this set of samples we are now able to calculate the point estimate (the mean of the resample’s means) and the variance of performance amongst this set of samples and hence calculating our statistics’ standard deviation as:" }, { "code": null, "e": 28661, "s": 28491, "text": "σx = ( ∑ ( (xi -x) ^ 2 ) / j) ^ (1⁄2)xi, being the mean of each resamplex, being the point estimator (the mean of all resamples' means)j, being the number of resamples" }, { "code": null, "e": 28815, "s": 28661, "text": "And now having both the variance and the observed difference between control and test we are able to calculate the so much wanted confidence interval as:" }, { "code": null, "e": 29070, "s": 28815, "text": "CI = x ± 1.64 * ( σx / n ^ (1/2) )x, being the point estimator (the mean of all resamples' means)σx, the standard deviation of the samples' meansn, the size of the sample (i.e. combinations of campaigns/days)1.64, Zα for a 90% Confidence Interval" }, { "code": null, "e": 29622, "s": 29070, "text": "Finally, we can say with 90% of confidence that the true difference in performance at different bidding levels in our ppc account lies between the lower bound and the upper bound of the confidence interval and make business decisions accordingly — back to our example in the scenario analysis section this means that if the confidence interval is below £15 we will maintain our bidding levels, however, if it is above £15 we will decrease our bids by 30% (as the marginal cost of the marginal conversions would be above our threshold of profitability)" }, { "code": null, "e": 29791, "s": 29622, "text": "The actual code to do make such calculations using python and a dummy dataset mimicking one prepared accordingly to the methodology proposed in step 4 are as per below:" }, { "code": null, "e": 29923, "s": 29791, "text": "If you are interested in finding out more about this resampling technique and the theory behind it I really recommend this article." }, { "code": null, "e": 30722, "s": 29923, "text": "This is probably one of the most interesting and joyfully stages of this whole journey. 
You have successfully identified what’s the incremental value of going from bidding level x to bidding level y and gave your business crucial information to make a decision on whether to keep the biding levels or change them depending on the incremental value of your incremental spend. You have done this for the overall account potentially if your doing it for the first time or if you don’t generate a lot of conversion data; or you have gone further and did it for multiple portfolios, groups of campaigns or others (meaning that your test was actually a set of tests and you took actions on each of them accordingly) — for simplicity though, in this section we will assume you did an overall account test." }, { "code": null, "e": 31746, "s": 30722, "text": "Nonetheless having a clear hypothesis that we want to answer being the main goal of this venture, a ppc incrementality test should also create many byproducts that you won’t want to ignore. The byproducts might need to be planned (for example if you purposely breakdown your campaigns into targeting locations where it is raining at the moment and locations where it isn’t raining — farfetched in most cases but for instance if you have a windshield business or you sell snow chains this might be important for you) but usually they will be available in on or more of the many adwords report types and in many cases they will even be unintentional and only found after careful post test analysis. So let’s dig more into this second option: I will give a couple of examples in bigger depth and in addition suggest a list of factors that might have an interesting impact into different incrementality levels within your account — your job is to look at all of them and find out which ones are more decisive for your business." }, { "code": null, "e": 32891, "s": 31746, "text": "Example 1: Impression share can play a critical role in incrementality. Imagine you sell two types of t-shirts in your website — A blue and a red one (and you have a ppc campaign for each of them). They’re both sold at the same price but the blue T-shirt’s conversion rate is 40% and the red’s is 10%, hence you’re willing to pay 4x more for a blue T-shirt ppc click than a red T-shirt click. In this case you’ll end up with the same cost per conversion on both campaigns but you’ll likely have a higher impression share on the blue campaign (because you’re paying more per click). What’s important about this is that usually, it’s difficult to go after the extra bit of potential customers that you’re not showing ads to when you have 80% impression share than going after a few more of the yet so many available potential customers when your impression share is 20%, hence (rule of thumb) the higher the impression share the higher is your marginal cost per conversion and this is particularly noticeable on both extremes of impression share. Below is a real life based example of the role of impression share in marginal cost per conversion:" }, { "code": null, "e": 33430, "s": 32891, "text": "Example 2: Your main competitor offers their services both via web and via an app and to attract customers they use both google search ads and universal app campaigns. This means that your competitor will have a wider range of possibilities and a higher probability of competing for the same search keywords in a mobile device than in a computer. 
The competitive landscape will be more favourable in computer devices and all things equal you would expect your marginal cost per conversion to be higher on mobile devices than in computers." }, { "code": null, "e": 33505, "s": 33430, "text": "List of factors potentially playing an important role into incrementality:" }, { "code": null, "e": 33612, "s": 33505, "text": "Impression Share (and other measures of competitiveness like average position, absolute top of page, etc.)" }, { "code": null, "e": 33619, "s": 33612, "text": "Device" }, { "code": null, "e": 33628, "s": 33619, "text": "Location" }, { "code": null, "e": 33769, "s": 33628, "text": "Audiences (People who have interacted with your website vs. people who haven’t registered any activity with you; Gender; Demographics; etc.)" }, { "code": null, "e": 33799, "s": 33769, "text": "Campaign Type / Ad group type" }, { "code": null, "e": 33818, "s": 33799, "text": "Keyword Match Type" }, { "code": null, "e": 33844, "s": 33818, "text": "Time of Day & Day of Week" }, { "code": null, "e": 33852, "s": 33844, "text": "Network" }, { "code": null, "e": 33892, "s": 33852, "text": "Demographic factors like age and gender" }, { "code": null, "e": 34048, "s": 33892, "text": "Others like weather (mentioned in example above); occurrence of significant event (for newspaper business); etc. As many as your creativity can get you too" }, { "code": null, "e": 34965, "s": 34048, "text": "Finally, what’s still most important about these analyses is that they should and most likely will inform which tests you will run next. Let’s say again this was the very first test you ran and you had no preconceived idea of which factors would most contribute to incrementality within the account. After you run such exploratory analysis you will be able to formulate a more robust hypothesis and ensure you design a test to statistically prove (or not) that. For example, you might be able to group campaigns into four groups because you observe a pattern and design a test that will ensure you find with precision their incrementality levels and shift spending across them according to the test results; or you might design a test-focused in confirming (or not) that campaigns/ad groups / or devices with higher impression share will be less incremental hence shifting budget to the ones with lower levels of IS." }, { "code": null, "e": 35325, "s": 34965, "text": "If you have made it this far then congrats, you’ve done it — you have optimised your paid search account in an incrementality framework. Nevertheless, now it’s also the time to think about the future and how you are going to keep operating in an incrementality world. Below a list of the 3 most important things you want to consider at this wrapping up stage:" }, { "code": null, "e": 35804, "s": 35325, "text": "We hope that the test has turned out to be conclusive and that has led you to take either one or another action, but even if this is the case these findings might not hold true for long: Auctions’ competitive landscape is ever-evolving and hence marginal cost per conversion will keep changing. You need to plan in order to keep a reasonably constant level of testing inbuilt into your accounts so that you can have accurate and updated answers to guide your optimisation goals." }, { "code": null, "e": 36648, "s": 35804, "text": "You need to decide if the next set of tests you will run will be made on the same dimensions as the first one (i.e. account/portfolio/campaigns grouped by a given logic, etc.). 
If this is the case you still want to sense check if there’s anything you want to change about the test setup (could you run it for a shorter period of time? Given what you now know, should you change the test metric? Etc.). Alternatively, you might have found something interesting in the exploratory analysis that you want now to confirm (statistically speaking). For example, in the next test, you might group your campaigns into low/medium/high impression share and test increasing/decreasing different bidding levels for the different groups; or you might want to test different device multipliers for mobile/tablet/computer instead of changing campaign’s bids." }, { "code": null, "e": 36779, "s": 36648, "text": "Finally, at some point, you will also want to consider some sort of automation in terms of implementing and measuring these tests." }, { "code": null, "e": 37356, "s": 36779, "text": "This article covered the reasoning behind paid search incrementality, why you should implement it in your marketing optimisation routines, and a practical way to indeed implement it; but instead of using this section to stress any of those points, I will save it to talk about the impact which at the end of the day is what really matters. Hope this will be the extra bit of motivation you need to actually go for it and implement it on your paid search account. So let me end talking about a startup in which we implemented this methodology and what it meant for the company:" }, { "code": null, "e": 37779, "s": 37356, "text": "Context: This was a company in a high growth phase, hence historically, they were willing to break even on cost per conversion and revenue per conversion (on average). On a given period the company started to feel the need to show signs of profitability so google ads could no longer run at break-even cost (on average). Hence the challenge was defined as: Moving from breaking even on average to breaking even marginally." }, { "code": null, "e": 38051, "s": 37779, "text": "The methodology was to run a series of tests from bidding level x to y to z (each of them following the 7 practical steps defined above) to find the point in the bidding level curve where the marginal cost per conversion was the same as the up to the moment average cost." }, { "code": null, "e": 38320, "s": 38051, "text": "Results were: 1) For each 10% we saved in total spend in paid search we only sacrificed 4.2% of the total conversions generated by paid search. 2) From breaking even in paid search to a return on ad spend of 233%. 3) No single conversion turning a loss to the company." } ]
When not to use machine learning or AI | by Cassie Kozyrkov | Towards Data Science
Imagine that you’ve just managed to get your hands on a dataset from a clinical trial. Exciting! To help you get in character, I made up some data for you to look at:

Pretend that these datapoints map out the relationship between the treatment day (input “feature”) and the correct dosage of some miracle cure in milligrams (output “prediction”) that a patient should receive over the course of 60 days.

#The data:
(1,28) (2,17) (3,92) (4,41) (5,9) (6,87) (7,54) (8,3) (9,78) (10,67)
(11,1) (12,67) (13,78) (14,3) (15,55) (16,86) (17,8) (18,42) (19,92) (20,17)
(21,29) (22,94) (23,28) (24,18) (25,93) (26,40) (27,9) (28,87) (29,53) (30,3)
(31,79) (32,66) (33,1) (34,68) (35,77) (36,3) (37,56) (38,86) (39,8) (40,43)
(41,92) (42,16) (43,30) (44,94) (45,27) (46,19) (47,93) (48,39) (49,10) (50,88)
(51,53) (52,4) (53,80) (54,65) (55,1) (56,69) (57,77) (58,3) (59,57) (60,86)
...

Now imagine that you’re treating a patient and it’s day 2. What dose do you suggest we use?

I really hope you answered “17mg” since this was definitely not supposed to be a trick question. How about day 4? 41mg? Yes indeedy!

Now, how would you build software to output the right doses on days 1–5? Would you try to use machine learning (ML)? In other words, would you try to find patterns in these data and try to turn them into a recipe (“model”) for going from inputs to outputs?

No, of course you wouldn’t! You’d get your software to do exactly what you’re doing: look the answer up in a table. That way, you’ll get the right answer 100% of the time for all 60 days. No need for patterns here and no need for machine learning either.
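In code, that solution is nothing fancier than a table. Here’s a minimal sketch in R (the same language as the answer key at the bottom of this page); the names dose_table and lookup_dose are my own shorthand, and I’ve truncated the table to the first five days:

# No learning involved: store the known (day, dose) pairs and look them up.
dose_table <- c("1" = 28, "2" = 17, "3" = 92, "4" = 41, "5" = 9)  # ...and so on through day 60

lookup_dose <- function(day) {
  dose_table[[as.character(day)]]  # errors out for any day we never recorded
}

lookup_dose(2)  # 17
lookup_dose(4)  # 41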
So, what sort of situation requires machine learning?

How about now? It’s day 61. What’s the right answer here?

Well, we’ve never seen data for day 61, so there’s no way we can look up the answer here. What can we do? Are we out of luck? Can machine learning help us?

That depends.

If there’s no pattern that connects the inputs with the outputs, forget it. In that case, nothing can help us... short of actual magic, which doesn’t exist (in case you thought machine learning was it). Give up now!

But if there is a pattern and if (that’s a big if!) we could find it, then we could try to apply it to day 61 to try to predict/guess the right answer. Perhaps machine learning might help us.

The trouble is that it’s not enough for there to be a pattern in our data. That would be much too convenient. The pattern also has to be relevant beyond day 60. What if the conditions are fundamentally different on day 61, so the pattern doesn’t generalize? For all you know, maybe on day 61 all patients are fully cured or dead or on an incompatible medication. Then the pattern is no good to you.

Let this sink in. If your data aren’t a useful window into tomorrow’s world — perhaps because a pandemic changed all the rules — it doesn’t matter how good your information was yesterday. If you live in an unstable corner of the universe, you’ll have a hard time justifying what we call ergodicity and stationarity assumptions. These roughly translate to “I believe that the rules haven’t changed.”

I’m not talking about the kind of nonstationarity that’s in the eye of the beholder (like when average prices appear to drift over time because you forgot to adjust for inflation). Dealing with gentle nonstationarity (when the rules are a predictable function of time) is what the field of time series analysis is all about.

I’m talking about the kind of violent nonstationarity that you can’t do anything about because your system’s rules are fundamentally different in a way you can’t predict from one period to the next. If your past data suddenly don’t apply at all to your nonstationary future, you’re not allowed to use yesterday to predict tomorrow with a straight face.

But if there is a pattern and if this pattern is relevant to the new situation we find ourselves in, then we’re in business. We could go and find the pattern in the old data, make a recipe based on it, and then use that recipe to succeed on day 61 and beyond!

Finding patterns and using them is what machine learning is all about.

In applied machine learning (and AI), you’re not in the business of regurgitating memorized examples you’ve seen before — you don’t need ML for that, just look ’em up! — you’re here to learn.

Just repeat old answers? ML can do better! It succeeds on new examples.

Your mission? To build a solution that generalizes successfully (or pull the plug on your project). (What does “successfully” mean? I have a whole guide for you on that topic.)

In other words, your solution is no good if it can’t handle new examples it has never seen before. Not dramatically new examples that break all the rules of a stationary universe, but slight twists on the learned theme.

We’re not here to memorize like a parrot. We’re here to generalize to new situations. That’s the power and the beauty of machine learning.

If you haven’t seen this exact combination of input values before (day 61), what’s the right output answer? Well, maybe we can turn old patterns into a recipe that makes a decent guess.

For example, if you trained a cat/not-cat classifier from thousands of animal photos, you can ask it to tell you if a brand new photo has a cat in it, but you shouldn’t ask it to tell you whether a painting is in the Cubist style.

If you’re sick of hearing me call it a thing-labeler and an alternative approach to writing code, let me try putting it another way.

Machine learning is an approach to automating repeated decisions that involves algorithmically finding patterns in data and using these to make recipes that deal correctly with brand new data.

To know if machine learning is for you, I have three guides you might enjoy:

Is your ML/AI project a nonstarter? A 22-item reality check(list)
Advice for finding ML/AI use cases
Getting started with ML/AI? Start here!

Still curious about day 61? Turns out there *is* a pattern in the toy data I made for this example. I know this because I put it there. I can even promise you that it generalizes to day The-Biggest-Number-You-Can-Think-Of-Plus-One because in these wildly nonstationary times, I find it luxuriously comforting to work with data that plays nice for a change.

#The data:
(1,28) (2,17) (3,92) (4,41) (5,9) (6,87) (7,54) (8,3) (9,78) (10,67)
(11,1) (12,67) (13,78) (14,3) (15,55) (16,86) (17,8) (18,42) (19,92) (20,17)
(21,29) (22,94) (23,28) (24,18) (25,93) (26,40) (27,9) (28,87) (29,53) (30,3)
(31,79) (32,66) (33,1) (34,68) (35,77) (36,3) (37,56) (38,86) (39,8) (40,43)
(41,92) (42,16) (43,30) (44,94) (45,27) (46,19) (47,93) (48,39) (49,10) (50,88)
(51,53) (52,4) (53,80) (54,65) (55,1) (56,69) (57,77) (58,3) (59,57) (60,86)
...

For those who like a challenge, why don’t you try to see if your favorite machine learning algorithm can find the pattern and turn it into a useful recipe? (Answer at the bottom of this page.)
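Here’s one way to take the challenge at face value, sketched in R with an ordinary polynomial regression standing in for “your favorite machine learning algorithm” (any other learner slots into the same two lines):

# The 60 (day, dose) pairs from the table above.
day <- 1:60
dose <- c(28, 17, 92, 41, 9, 87, 54, 3, 78, 67, 1, 67, 78, 3, 55, 86,
          8, 42, 92, 17, 29, 94, 28, 18, 93, 40, 9, 87, 53, 3, 79, 66,
          1, 68, 77, 3, 56, 86, 8, 43, 92, 16, 30, 94, 27, 19, 93, 39,
          10, 88, 53, 4, 80, 65, 1, 69, 77, 3, 57, 86)

# Fit a learner to all 60 days, then ask it about day 61.
fit <- lm(dose ~ poly(day, 10))
predict(fit, newdata = data.frame(day = 61))

# The model will happily print a number for day 61. Whether that number
# deserves any trust is exactly what the rest of this article is about.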
I also suspect that there might be more folks who get it with an analytics approach instead of using machine learning (see this to understand the difference, plus the clue I’ve just given you), but GLHF. May the best approach win!

If you’re keen to try ML, don’t forget to do things in the right order — here’s a step-by-step guide to help you out.

If you’re keen to read more of my writing, most of the links in this article take you to my other musings. You can also enjoy audio versions here and my statistics video playlist here.

If you’re curious to see the answer for day 61, try running the R function that I used to generate the data (you can paste it in and run it online here).

# Here's the R code I used to generate the data:
doseFun <- Vectorize(function(x) {r <- round(93 * cos(x) ^ 2 + sqrt(exp(x/100))); return(r)})
# Output the result for day 61:
print(doseFun(61))
# Plot the deterministic function:
plot(x = 1:60, y = doseFun(1:60))

Because my function turned out to be deterministic, you could have gotten the right answer by analytics (plotting the graph and eyeballing it to notice the repeating pattern) and you didn’t really need machine learning here, though it can work anyway. It’s just not the most efficient way to go about things.

For an example of a simple machine learning approach in a deterministic setting, see my video below:

I hope I haven’t done more harm than good by exposing you to that toy dataset. The danger is that you learn a very bad habit: failure to split your data and test your system properly (see the sketch at the end of this piece).

Those of you who split the data and validated your solution before submitting it deserve an extra pat on the back. Your caution will serve you well!

Those of you who plotted/trained on the entire dataset may have gotten away with it... this time. The only reason you didn’t get suckerpunched by this cartoonish example is that the true underlying model was a simple pattern which could be extracted easily from the data. These are rare in practice, since your colleagues probably found all such low-hanging fruit decades ago. If you approach real world data the way you just approached this toy example, you’ll get hurt. You can find more info about that in my article How to be an AI idiot.

If you had fun here and you’re looking for an applied AI course designed to be fun for beginners and experts alike, here’s one I made for your amusement:
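To make the splitting habit concrete, here is a minimal R sketch of holding out data before fitting; it reuses the day and dose vectors from the earlier sketch, and the 50/10 split point is an arbitrary choice of mine:

# Hold out the last 10 days BEFORE fitting anything.
train <- data.frame(day = day[1:50], dose = dose[1:50])
test  <- data.frame(day = day[51:60], dose = dose[51:60])

fit <- lm(dose ~ poly(day, 10), data = train)

# Validation error on days the model never saw:
mean(abs(predict(fit, newdata = test) - test$dose))
# If this number is terrible, your day-61 guess deserves no confidence either.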
EJB - Interceptors
EJB 3.0 provides a specification for intercepting business method calls using methods annotated with the @AroundInvoke annotation. An interceptor method is called by the EJB container before the business method it intercepts. Following is the example signature of an interceptor method −

@AroundInvoke
public Object methodInterceptor(InvocationContext ctx) throws Exception {
   System.out.println("*** Intercepting call to LibraryBean method: " 
      + ctx.getMethod().getName());
   return ctx.proceed();
}

Interceptor methods can be applied or bound at three levels.

Default − A default interceptor is invoked for every bean within the deployment. A default interceptor can be applied only via xml (ejb-jar.xml).

Class − A class level interceptor is invoked for every method of the bean. A class level interceptor can be applied either by annotation or via xml (ejb-jar.xml).

Method − A method level interceptor is invoked for a particular method of the bean. A method level interceptor can be applied either by annotation or via xml (ejb-jar.xml).

We are discussing the class level interceptor here.

package com.tutorialspoint.interceptor;

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class BusinessInterceptor {
   @AroundInvoke
   public Object methodInterceptor(InvocationContext ctx) throws Exception {
      System.out.println("*** Intercepting call to LibraryBean method: " 
         + ctx.getMethod().getName());
      return ctx.proceed();
   }
}

import javax.ejb.Remote;

@Remote
public interface LibraryBeanRemote {
   //add business method declarations
}

@Interceptors ({BusinessInterceptor.class})
@Stateless
public class LibraryBean implements LibraryBeanRemote {
   //implement business method 
}

Let us create a test EJB application to test intercepted stateless EJB.

Create a project with a name EjbComponent under a package com.tutorialspoint.interceptor as explained in the EJB - Create Application chapter. You can also use the project created in the EJB - Create Application chapter as such for this chapter to understand intercepted EJB concepts.

Create LibraryBean.java and LibraryBeanRemote under package com.tutorialspoint.interceptor as explained in the EJB - Create Application chapter. Keep the rest of the files unchanged.

Clean and build the application to make sure the business logic is working as per the requirements.

Finally, deploy the application in the form of a jar file on JBoss Application Server. JBoss Application Server will get started automatically if it is not started yet.

Now create the EJB client, a console based application, in the same way as explained in the EJB - Create Application chapter under the topic Create Client to access EJB.
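Since this chapter binds the interceptor at the class level, here is, for reference, a minimal sketch of the other two levels. The method level example reuses this chapter's classes; the default level example assumes a hypothetical DefaultInterceptor class (annotated with @AroundInvoke like BusinessInterceptor) and goes in ejb-jar.xml.

Method level, via annotation on a single business method −

@Stateless
public class LibraryBean implements LibraryBeanRemote {
   //only calls to addBook() are intercepted; other methods are not
   @Interceptors ({BusinessInterceptor.class})
   public void addBook(String bookName) {
      //implement business method
   }
}

Default level, which can be declared only in ejb-jar.xml −

<assembly-descriptor>
   <interceptor-binding>
      <!-- * binds the interceptor to every bean in this deployment -->
      <ejb-name>*</ejb-name>
      <interceptor-class>com.tutorialspoint.interceptor.DefaultInterceptor</interceptor-class>
   </interceptor-binding>
</assembly-descriptor>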
package com.tutorialspoint.interceptor;

import java.util.List;
import javax.ejb.Remote;

@Remote
public interface LibraryBeanRemote {
   void addBook(String bookName);
   List<String> getBooks();
}

package com.tutorialspoint.interceptor;

import java.util.ArrayList;
import java.util.List;

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Interceptors ({BusinessInterceptor.class})
@Stateless
public class LibraryBean implements LibraryBeanRemote {

   List<String> bookShelf;

   public LibraryBean() {
      bookShelf = new ArrayList<String>();
   }

   public void addBook(String bookName) {
      bookShelf.add(bookName);
   }

   public List<String> getBooks() {
      return bookShelf;
   }
}

As soon as you deploy the EjbComponent project on JBoss, notice the jboss log.

JBoss has automatically created a JNDI entry for our session bean − LibraryBean/remote.

We will be using this lookup string to get a remote business object of type − com.tutorialspoint.interceptor.LibraryBeanRemote

...
16:30:01,401 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
   LibraryBean/remote - EJB3.x Default Remote Business Interface
   LibraryBean/remote-com.tutorialspoint.interceptor.LibraryBeanRemote - EJB3.x Remote Business Interface
16:30:02,723 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=EjbComponent.jar,name=LibraryBean,service=EJB3
16:30:02,723 INFO [EJBContainer] STARTED EJB: com.tutorialspoint.interceptor.LibraryBeanRemote ejbName: LibraryBean
16:30:02,731 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
   LibraryBean/remote - EJB3.x Default Remote Business Interface
   LibraryBean/remote-com.tutorialspoint.interceptor.LibraryBeanRemote - EJB3.x Remote Business Interface
...

jndi.properties

java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=localhost

These properties are used to initialize the InitialContext object of the java naming service.

The InitialContext object will be used to look up the stateless session bean.

package com.tutorialspoint.test;

import com.tutorialspoint.interceptor.LibraryBeanRemote;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

import java.util.List;
import java.util.Properties;

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EJBTester {

   BufferedReader brConsoleReader = null;
   Properties props;
   InitialContext ctx;
   {
      props = new Properties();
      try {
         props.load(new FileInputStream("jndi.properties"));
      } catch (IOException ex) {
         ex.printStackTrace();
      }
      try {
         ctx = new InitialContext(props);
      } catch (NamingException ex) {
         ex.printStackTrace();
      }
      brConsoleReader = new BufferedReader(new InputStreamReader(System.in));
   }

   public static void main(String[] args) {
      EJBTester ejbTester = new EJBTester();
      ejbTester.testInterceptedEjb();
   }

   private void showGUI() {
      System.out.println("**********************");
      System.out.println("Welcome to Book Store");
      System.out.println("**********************");
      System.out.print("Options \n1. Add Book\n2. Exit \nEnter Choice: ");
   }

   private void testInterceptedEjb() {
      try {
         int choice = 1;

         LibraryBeanRemote libraryBean =
            (LibraryBeanRemote)ctx.lookup("LibraryBean/remote");

         while (choice != 2) {
            String bookName;
            showGUI();
            String strChoice = brConsoleReader.readLine();
            choice = Integer.parseInt(strChoice);
            if (choice == 1) {
               System.out.print("Enter book name: ");
               bookName = brConsoleReader.readLine();
               //this call is intercepted by BusinessInterceptor
               libraryBean.addBook(bookName);
            } else if (choice == 2) {
               break;
            }
         }

         //this call is intercepted by BusinessInterceptor as well
         List<String> booksList = libraryBean.getBooks();

         System.out.println("Book(s) entered so far: " + booksList.size());
         int i = 0;
         for (String book : booksList) {
            System.out.println((i + 1) + ". " + book);
            i++;
         }
      } catch (Exception e) {
         System.out.println(e.getMessage());
         e.printStackTrace();
      } finally {
         try {
            if (brConsoleReader != null) {
               brConsoleReader.close();
            }
         } catch (IOException ex) {
            System.out.println(ex.getMessage());
         }
      }
   }
}

EJBTester performs the following tasks −

Load properties from jndi.properties and initialize the InitialContext object.

In the testInterceptedEjb() method, a jndi lookup is done with the name - "LibraryBean/remote" to obtain the remote business object (stateless EJB).

Then the user is shown a library store User Interface and he/she is asked to enter a choice.

If the user enters 1, the system asks for a book name and saves the book using the stateless session bean addBook() method. The Session Bean is storing the book in its instance variable.

If the user enters 2, the system retrieves books using the stateless session bean getBooks() method and exits.

Locate EJBTester.java in project explorer. Right click on the EJBTester class and select run file.

Verify the following output in the Netbeans console.

run:
**********************
Welcome to Book Store
**********************
Options 
1. Add Book
2. Exit 
Enter Choice: 1
Enter book name: Learn Java
**********************
Welcome to Book Store
**********************
Options 
1. Add Book
2. Exit 
Enter Choice: 2
Book(s) entered so far: 1
1. Learn Java
BUILD SUCCESSFUL (total time: 13 seconds)

Verify the following output in the JBoss Application Server log output.

....
09:55:40,741 INFO [STDOUT] *** Intercepting call to LibraryBean method: addBook
09:55:43,661 INFO [STDOUT] *** Intercepting call to LibraryBean method: getBooks
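One more note on class level interceptors: if a particular business method should be left out of the class level binding, the standard javax.interceptor.ExcludeClassInterceptors annotation can be applied to that method. A minimal sketch reusing this chapter's bean −

import javax.interceptor.ExcludeClassInterceptors;

@Interceptors ({BusinessInterceptor.class})
@Stateless
public class LibraryBean implements LibraryBeanRemote {
   //addBook() stays intercepted as before

   //calls to getBooks() would no longer appear in the interceptor log above
   @ExcludeClassInterceptors
   public List<String> getBooks() {
      return bookShelf;
   }
}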
JAXB Map to XML Conversion Example - onlinetutorialspoint
JAXB is a very efficient framework to convert XML to Object and Object to XML in Java. In this tutorial, we are going to see how to convert a Java Map to XML using JAXB.

Add the below jaxb-api dependency in pom.xml:

<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>

<!-- API, java.xml.bind module -->
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.2</version>
</dependency>

<!-- Runtime, com.sun.xml.bind module -->
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.2</version>
</dependency>

We can not convert the Java Map directly to XML, because JAXB expects a @XmlRootElement annotation on top of the entity class it converts into XML. If you try to marshal the Map object directly, you get an exception that clearly says @XmlRootElement is mandatory for JAXBContext. So to fix the issue we can create a wrapper for the HashMap class by defining the @XmlRootElement annotation on top of the wrapper class. Let's see it in practice.

In this example, I am going to create a map with a list of items, where a category is the key and the list of items is the value. So basically I need two wrapper classes to define my use case: one for the List and another for the Map. The Item class used below is assumed to be a simple POJO with itemId, itemName and price fields and a matching three-argument constructor (its source is not shown here).

Creating the ItemsList wrapper class to hold the items:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import java.util.ArrayList;
import java.util.List;

@XmlRootElement(name = "item")
@XmlAccessorType(XmlAccessType.FIELD)
public class ItemsList {
    private List<Item> item = new ArrayList<>();

    public List<Item> getItem() {
        return item;
    }

    public void setItem(List<Item> itemsList) {
        this.item = itemsList;
    }
}

Create an ItemsMap class to hold the list of items by category:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import java.util.HashMap;
import java.util.Map;

@XmlRootElement(name = "items")
@XmlAccessorType(XmlAccessType.FIELD)
public class ItemsMap {

    private Map<String, ItemsList> itemsMap = new HashMap<>();

    public Map<String, ItemsList> getItemsMap() {
        return itemsMap;
    }

    public void setItemsMap(Map<String, ItemsList> itemsMap) {
        this.itemsMap = itemsMap;
    }
}

Now the wrapper classes are ready to use. Let's prepare the list of items, put them into a map and convert the map into XML.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;

public class MapToXML {
    public static void main(String[] args) throws JAXBException {
        // Creating a HashMap to hold ItemsList (wrapper for List)
        HashMap<String, ItemsList> map = new HashMap<>();
        map.put("electronics", getElectronics());
        map.put("books", getBooks());

        // Setting the map on ItemsMap (wrapper for HashMap)
        ItemsMap iMap = new ItemsMap();
        iMap.setItemsMap(map);

        JAXBContext jaxbContext = JAXBContext.newInstance(ItemsMap.class);
        Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
        jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        // marshalling ItemsMap
        jaxbMarshaller.marshal(iMap, System.out);
    }

    // Building the electronic items
    public static ItemsList getElectronics() {
        ArrayList<Item> list = new ArrayList<Item>(Arrays.asList(new Item(100, "Samsung", 15000.00),
                new Item(200, "iPhone10", 110000.00)));
        ItemsList itemsList = new ItemsList();
        itemsList.getItem().addAll(list);
        return itemsList;
    }

    // Building the book items
    public static ItemsList getBooks() {
        ArrayList<Item> list = new ArrayList<Item>(Arrays.asList(new Item(2000, "Core Java", 500.00),
                new Item(3000, "Spring in Action", 750.00)));
        ItemsList itemsList = new ItemsList();
        itemsList.getItem().addAll(list);
        return itemsList;
    }
}

Output:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<items>
    <itemsMap>
        <entry>
            <key>electronics</key>
            <value>
                <item>
                    <itemId>100</itemId>
                    <itemName>Samsung</itemName>
                    <price>15000.0</price>
                </item>
                <item>
                    <itemId>200</itemId>
                    <itemName>iPhone10</itemName>
                    <price>110000.0</price>
                </item>
            </value>
        </entry>
        <entry>
            <key>books</key>
            <value>
                <item>
                    <itemId>2000</itemId>
                    <itemName>Core Java</itemName>
                    <price>500.0</price>
                </item>
                <item>
                    <itemId>3000</itemId>
                    <itemName>Spring in Action</itemName>
                    <price>750.0</price>
                </item>
            </value>
        </entry>
    </itemsMap>
</items>

Done!
JAXB Object to XML conversion
JAXB XML to Object Conversion

Happy Learning 🙂
[ { "code": null, "e": 158, "s": 123, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 172, "s": 158, "text": "Java Examples" }, { "code": null, "e": 183, "s": 172, "text": "C Examples" }, { "code": null, "e": 195, "s": 183, "text": "C Tutorials" }, { "code": null, "e": 199, "s": 195, "text": "aws" }, { "code": null, "e": 234, "s": 199, "text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC" }, { "code": null, "e": 245, "s": 234, "text": "EXCEPTIONS" }, { "code": null, "e": 257, "s": 245, "text": "COLLECTIONS" }, { "code": null, "e": 263, "s": 257, "text": "SWING" }, { "code": null, "e": 268, "s": 263, "text": "JDBC" }, { "code": null, "e": 275, "s": 268, "text": "JAVA 8" }, { "code": null, "e": 282, "s": 275, "text": "SPRING" }, { "code": null, "e": 294, "s": 282, "text": "SPRING BOOT" }, { "code": null, "e": 304, "s": 294, "text": "HIBERNATE" }, { "code": null, "e": 311, "s": 304, "text": "PYTHON" }, { "code": null, "e": 315, "s": 311, "text": "PHP" }, { "code": null, "e": 322, "s": 315, "text": "JQUERY" }, { "code": null, "e": 357, "s": 322, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 371, "s": 357, "text": "Java Examples" }, { "code": null, "e": 382, "s": 371, "text": "C Examples" }, { "code": null, "e": 394, "s": 382, "text": "C Tutorials" }, { "code": null, "e": 398, "s": 394, "text": "aws" }, { "code": null, "e": 565, "s": 398, "text": "JAXB is a very efficient framework to covert XML to Object and Object to XML in Java. In this tutorial, we are going to see how to convert Java Map to XML using JAXB." }, { "code": null, "e": 610, "s": 565, "text": "Add the below jaxb-api dependency in pom.xml" }, { "code": null, "e": 748, "s": 610, "text": "<dependency>\n <groupId>javax.xml.bind</groupId>\n <artifactId>jaxb-api</artifactId>\n <version>2.3.1</version>\n</dependency>" }, { "code": null, "e": 1189, "s": 748, "text": "<!-- API, java.xml.bind module -->\n <dependency>\n <groupId>jakarta.xml.bind</groupId>\n <artifactId>jakarta.xml.bind-api</artifactId>\n <version>2.3.2</version>\n </dependency>\n\n <!-- Runtime, com.sun.xml.bind module -->\n <dependency>\n <groupId>org.glassfish.jaxb</groupId>\n <artifactId>jaxb-runtime</artifactId>\n <version>2.3.2</version>\n </dependency>" }, { "code": null, "e": 1407, "s": 1189, "text": "We can not convert the Java Map directly to XML because JAXB expects a @XmlRootElement annotation on top of the entity class to convert into XML. If you try to convert the Map object you would get the below exception." }, { "code": null, "e": 1482, "s": 1407, "text": "The Exception clearly saying @XmlRootElement is mandatory for JAXBContext." }, { "code": null, "e": 1643, "s": 1482, "text": "So to fix the issue we can create a Wrapper for the HashMap class by defining the @XmlRootElement annotation on top of the Wrapper class. Let’s see in practice." }, { "code": null, "e": 1768, "s": 1643, "text": "In this example, I am going to create a map with a list of items, where a category is a key and the list of items are value." }, { "code": null, "e": 1871, "s": 1768, "text": "So basically I need two wrapper classes to define my use-case: one is for List and another is for Map." }, { "code": null, "e": 1923, "s": 1871, "text": "Creating ItemsList wrapper class to hold the items." 
}, { "code": null, "e": 2415, "s": 1923, "text": "import javax.xml.bind.annotation.XmlAccessType;\nimport javax.xml.bind.annotation.XmlAccessorType;\nimport javax.xml.bind.annotation.XmlRootElement;\nimport java.util.ArrayList;\nimport java.util.List;\n\n@XmlRootElement(name = \"item\")\n@XmlAccessorType(XmlAccessType.FIELD)\npublic class ItemsList {\n private List<Item> item = new ArrayList<>();\n\n public List<Item> getItem() {\n return item;\n }\n \n public void setItem(List<Item> itemsList) {\n this.item = item;\n }\n}\n" }, { "code": null, "e": 2479, "s": 2415, "text": "Create an ItemsMap class to hold the list of items by category." }, { "code": null, "e": 3022, "s": 2479, "text": "import javax.xml.bind.annotation.XmlAccessType;\nimport javax.xml.bind.annotation.XmlAccessorType;\nimport javax.xml.bind.annotation.XmlRootElement;\nimport java.util.HashMap;\nimport java.util.Map;\n\n@XmlRootElement(name = \"items\")\n@XmlAccessorType(XmlAccessType.FIELD)\npublic class ItemsMap {\n\n private Map<String, ItemsList> itemsMap = new HashMap<>();\n\n public Map<String, ItemsList> getItemsMap() {\n return itemsMap;\n }\n\n public void setItemsMap(Map<String, ItemsList> itemsMap) {\n this.itemsMap = itemsMap;\n }\n}\n" }, { "code": null, "e": 3148, "s": 3022, "text": "Now the wrapper classes are ready to use. Now let’s prepare the list of items, make it as a map and convert the map into XML." }, { "code": null, "e": 4788, "s": 3148, "text": "import javax.xml.bind.JAXBContext;\nimport javax.xml.bind.JAXBException;\nimport javax.xml.bind.Marshaller;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\n\npublic class MapToXML {\n public static void main(String[] args) throws JAXBException {\n // Creating HashMap to hold ItemsList (Wrapper for List)\n HashMap<String, ItemsList> map = new HashMap<>();\n map.put(\"electronics\",getElecrtonics());\n map.put(\"books\",getBooks());\n \n // Setting map to ItemsMap (Wrapper for HashMap)\n ItemsMap iMap = new ItemsMap();\n iMap.setItemsMap(map);\n\n JAXBContext jaxbContext = JAXBContext.newInstance(ItemsMap.class);\n Marshaller jaxbMarshaller = jaxbContext.createMarshaller();\n jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);\n // marshalling ItemsMap.\n jaxbMarshaller.marshal(iMap, System.out);\n }\n\n // Building the Electronic items\n public static ItemsList getElecrtonics(){\n ArrayList<Item> list = new ArrayList<Item>(Arrays.asList(new Item(100,\"Samsung\",15000.00),\n new Item(200,\"iPhone10\",110000.00)));\n ItemsList itemsList = new ItemsList();\n itemsList.getItem().addAll(list);\n return itemsList;\n }\n // Building the Book items\n public static ItemsList getBooks(){\n ArrayList<Item> list = new ArrayList<Item>(Arrays.asList(new Item(2000,\"Core Java\",500.00),\n new Item(3000,\"Spring in Action\",750.00)));\n ItemsList itemsList = new ItemsList();\n itemsList.getItem().addAll(list);\n return itemsList;\n }\n}\n" }, { "code": null, "e": 4796, "s": 4788, "text": "Output:" }, { "code": null, "e": 5843, "s": 4796, "text": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\n<items>\n <itemsMap>\n <entry>\n <key>electronics</key>\n <value>\n <item>\n <itemId>100</itemId>\n <itemName>Samsung</itemName>\n <price>15000.0</price>\n </item>\n <item>\n <itemId>200</itemId>\n <itemName>iPhone10</itemName>\n <price>110000.0</price>\n </item>\n </value>\n </entry>\n <entry>\n <key>books</key>\n <value>\n <item>\n <itemId>2000</itemId>\n <itemName>Core Java</itemName>\n <price>500.0</price>\n 
</item>\n <item>\n <itemId>3000</itemId>\n <itemName>Spring in Action</itemName>\n <price>750.0</price>\n </item>\n </value>\n </entry>\n </itemsMap>\n</items>" }, { "code": null, "e": 5849, "s": 5843, "text": "Done!" }, { "code": null, "e": 5879, "s": 5849, "text": "JAXB Object to XML conversion" }, { "code": null, "e": 5909, "s": 5879, "text": "JAXB XML to Object Conversion" }, { "code": null, "e": 5926, "s": 5909, "text": "Happy Learning 🙂" }, { "code": null, "e": 6516, "s": 5926, "text": "\nJAXB XML to Java Object Conversion Example\nJAXB Java Object to XML Conversion Example\nHow to Rotate Elements in List\nJava 8 foreach Example Tutorials\nJava Swing JList Example\nHibernate orderby criteria Example\nResolve NullPointerException in Collectors.toMap\nJava String to int conversion Example\nDecimal To Hex Conversion Java Program\nPython String to int Conversion Example\nBinary To Decimal Conversion Java Program\nBinary To Hexadecimal Conversion Java Program\nDecimal To Binary Conversion Java Program\nDecimal To Octal Conversion Java Program\nOctal To Decimal Conversion Java Program\n" }, { "code": null, "e": 6559, "s": 6516, "text": "JAXB XML to Java Object Conversion Example" }, { "code": null, "e": 6602, "s": 6559, "text": "JAXB Java Object to XML Conversion Example" }, { "code": null, "e": 6633, "s": 6602, "text": "How to Rotate Elements in List" }, { "code": null, "e": 6666, "s": 6633, "text": "Java 8 foreach Example Tutorials" }, { "code": null, "e": 6691, "s": 6666, "text": "Java Swing JList Example" }, { "code": null, "e": 6726, "s": 6691, "text": "Hibernate orderby criteria Example" }, { "code": null, "e": 6775, "s": 6726, "text": "Resolve NullPointerException in Collectors.toMap" }, { "code": null, "e": 6813, "s": 6775, "text": "Java String to int conversion Example" }, { "code": null, "e": 6852, "s": 6813, "text": "Decimal To Hex Conversion Java Program" }, { "code": null, "e": 6892, "s": 6852, "text": "Python String to int Conversion Example" }, { "code": null, "e": 6934, "s": 6892, "text": "Binary To Decimal Conversion Java Program" }, { "code": null, "e": 6980, "s": 6934, "text": "Binary To Hexadecimal Conversion Java Program" }, { "code": null, "e": 7022, "s": 6980, "text": "Decimal To Binary Conversion Java Program" }, { "code": null, "e": 7063, "s": 7022, "text": "Decimal To Octal Conversion Java Program" }, { "code": null, "e": 7104, "s": 7063, "text": "Octal To Decimal Conversion Java Program" }, { "code": null, "e": 7110, "s": 7108, "text": "Δ" }, { "code": null, "e": 7134, "s": 7110, "text": " Install Java on Mac OS" }, { "code": null, "e": 7162, "s": 7134, "text": " Install AWS CLI on Windows" }, { "code": null, "e": 7191, "s": 7162, "text": " Install Minikube on Windows" }, { "code": null, "e": 7226, "s": 7191, "text": " Install Docker Toolbox on Windows" }, { "code": null, "e": 7253, "s": 7226, "text": " Install SOAPUI on Windows" }, { "code": null, "e": 7280, "s": 7253, "text": " Install Gradle on Windows" }, { "code": null, "e": 7309, "s": 7280, "text": " Install RabbitMQ on Windows" }, { "code": null, "e": 7335, "s": 7309, "text": " Install PuTTY on windows" }, { "code": null, "e": 7361, "s": 7335, "text": " Install Mysql on Windows" }, { "code": null, "e": 7397, "s": 7361, "text": " Install Hibernate Tools in Eclipse" }, { "code": null, "e": 7431, "s": 7397, "text": " Install Elasticsearch on Windows" }, { "code": null, "e": 7457, "s": 7431, "text": " Install Maven on Windows" }, { "code": null, "e": 7482, "s": 7457, "text": " Install Maven on Ubuntu" }, { 
"code": null, "e": 7516, "s": 7482, "text": " Install Maven on Windows Command" }, { "code": null, "e": 7551, "s": 7516, "text": " Add OJDBC jar to Maven Repository" }, { "code": null, "e": 7575, "s": 7551, "text": " Install Ant on Windows" }, { "code": null, "e": 7604, "s": 7575, "text": " Install RabbitMQ on Windows" }, { "code": null, "e": 7636, "s": 7604, "text": " Install Apache Kafka on Ubuntu" }, { "code": null, "e": 7669, "s": 7636, "text": " Install Apache Kafka on Windows" }, { "code": null, "e": 7694, "s": 7669, "text": " Java8 – Install Windows" }, { "code": null, "e": 7711, "s": 7694, "text": " Java8 – foreach" }, { "code": null, "e": 7739, "s": 7711, "text": " Java8 – forEach with index" }, { "code": null, "e": 7770, "s": 7739, "text": " Java8 – Stream Filter Objects" }, { "code": null, "e": 7802, "s": 7770, "text": " Java8 – Comparator Userdefined" }, { "code": null, "e": 7822, "s": 7802, "text": " Java8 – GroupingBy" }, { "code": null, "e": 7842, "s": 7822, "text": " Java8 – SummingInt" }, { "code": null, "e": 7866, "s": 7842, "text": " Java8 – walk ReadFiles" }, { "code": null, "e": 7896, "s": 7866, "text": " Java8 – JAVA_HOME on Windows" }, { "code": null, "e": 7928, "s": 7896, "text": " Howto – Install Java on Mac OS" }, { "code": null, "e": 7964, "s": 7928, "text": " Howto – Convert Iterable to Stream" }, { "code": null, "e": 8008, "s": 7964, "text": " Howto – Get common elements from two Lists" }, { "code": null, "e": 8040, "s": 8008, "text": " Howto – Convert List to String" }, { "code": null, "e": 8081, "s": 8040, "text": " Howto – Concatenate Arrays using Stream" }, { "code": null, "e": 8118, "s": 8081, "text": " Howto – Remove duplicates from List" }, { "code": null, "e": 8158, "s": 8118, "text": " Howto – Filter null values from Stream" }, { "code": null, "e": 8187, "s": 8158, "text": " Howto – Convert List to Map" }, { "code": null, "e": 8219, "s": 8187, "text": " Howto – Convert Stream to List" }, { "code": null, "e": 8239, "s": 8219, "text": " Howto – Sort a Map" }, { "code": null, "e": 8261, "s": 8239, "text": " Howto – Filter a Map" }, { "code": null, "e": 8291, "s": 8261, "text": " Howto – Get Current UTC Time" }, { "code": null, "e": 8342, "s": 8291, "text": " Howto – Verify an Array contains a specific value" }, { "code": null, "e": 8378, "s": 8342, "text": " Howto – Convert ArrayList to Array" }, { "code": null, "e": 8410, "s": 8378, "text": " Howto – Read File Line By Line" }, { "code": null, "e": 8445, "s": 8410, "text": " Howto – Convert Date to LocalDate" }, { "code": null, "e": 8468, "s": 8445, "text": " Howto – Merge Streams" }, { "code": null, "e": 8515, "s": 8468, "text": " Howto – Resolve NullPointerException in toMap" }, { "code": null, "e": 8540, "s": 8515, "text": " Howto -Get Stream count" }, { "code": null, "e": 8584, "s": 8540, "text": " Howto – Get Min and Max values in a Stream" } ]
How to get first and last date of the current month with JavaScript?
To get the first and last date of the current month in JavaScript, you can try to run the following code −

<html>
   <head>
      <title>JavaScript Dates</title>
   </head>
   <body>
      <script>
         var date = new Date();
         document.write("Current Date: " + date);
         var firstDay = new Date(date.getFullYear(), date.getMonth(), 1);
         document.write("<br>" + firstDay);
         var lastDay = new Date(date.getFullYear(), date.getMonth() + 1, 0);
         document.write("<br>" + lastDay);
      </script>
   </body>
</html>

Passing 1 as the day gives the first day of the month, while passing 0 as the day makes the Date constructor roll back to the last day of the previous month, which is why new Date(year, month + 1, 0) yields the last day of the current month.
[ { "code": null, "e": 1165, "s": 1062, "text": "To get the first and last date of current month in JavaScript, you can try to run the following code −" }, { "code": null, "e": 1608, "s": 1165, "text": "<html>\n <head>\n <title>JavaScript Dates</title>\n </head>\n <body>\n <script>\n var date = new Date();\n document.write(\"Current Date: \" + date );\n var firstDay = new Date(date.getFullYear(), date.getMonth(), 1);\n document.write(\"<br>\"+firstDay);\n var lastDay = new Date(date.getFullYear(), date.getMonth() + 1, 0);\n document.write(\"<br>\"+lastDay);\n </script>\n </body>\n</html>" } ]
Python | Pandas DataFrame.to_html() method - GeeksforGeeks
17 Sep, 2019

With the help of the DataFrame.to_html() method, we can get the HTML representation of a dataframe.

Syntax : DataFrame.to_html()
Return : Returns the HTML format of a dataframe.

Example #1 : In this example, we can see that by using the DataFrame.to_html() method, we are able to get the HTML format of a dataframe.

# import pandas
import pandas as pd

# using DataFrame.to_html() method
gfg = pd.DataFrame({'Name': ['Marks'],
                    'Jitender': ['78'],
                    'Rahul': ['77.9']})

print(gfg.to_html())

Output :

Example #2 :

# import pandas
import pandas as pd

# using DataFrame.to_html() method
gfg = pd.DataFrame({'Name': ['Marks', 'Gender'],
                    'Jitender': ['78', 'Male'],
                    'Purnima': ['78.9', 'Female']})

print(gfg.to_html())

Output :
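The output images are not reproduced above; for reference, the Example #1 call prints HTML along these lines (the exact attributes can vary across pandas versions):

<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>Name</th>
      <th>Jitender</th>
      <th>Rahul</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>Marks</td>
      <td>78</td>
      <td>77.9</td>
    </tr>
  </tbody>
</table>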
[ { "code": null, "e": 24611, "s": 24583, "text": "\n17 Sep, 2019" }, { "code": null, "e": 24731, "s": 24611, "text": "With help of DataFrame.to_html() method, we can get the html format of a dataframe by using DataFrame.to_html() method." }, { "code": null, "e": 24807, "s": 24731, "text": "Syntax : DataFrame.to_html()Return : Return the html format of a dataframe." }, { "code": null, "e": 24939, "s": 24807, "text": "Example #1 :In this example we can say that by using DataFrame.to_html() method, we are able to get the html format of a dataframe." }, { "code": "# import DataFrameimport pandas as pd # using DataFrame.to_html() methodgfg = pd.DataFrame({'Name': ['Marks'], 'Jitender': ['78'], 'Rahul': ['77.9']}) print(gfg.to_html())", "e": 25152, "s": 24939, "text": null }, { "code": null, "e": 25161, "s": 25152, "text": "Output :" }, { "code": null, "e": 25174, "s": 25161, "text": "Example #2 :" }, { "code": "# import DataFrameimport pandas as pd # using DataFrame.to_html() methodgfg = pd.DataFrame({'Name': ['Marks', 'Gender'], 'Jitender': ['78', 'Male'], 'Purnima': ['78.9', 'Female']}) print(gfg.to_html())", "e": 25416, "s": 25174, "text": null }, { "code": null, "e": 25425, "s": 25416, "text": "Output :" }, { "code": null, "e": 25449, "s": 25425, "text": "Python pandas-dataFrame" }, { "code": null, "e": 25481, "s": 25449, "text": "Python pandas-dataFrame-methods" }, { "code": null, "e": 25495, "s": 25481, "text": "Python-pandas" }, { "code": null, "e": 25502, "s": 25495, "text": "Python" }, { "code": null, "e": 25600, "s": 25502, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25609, "s": 25600, "text": "Comments" }, { "code": null, "e": 25622, "s": 25609, "text": "Old Comments" }, { "code": null, "e": 25640, "s": 25622, "text": "Python Dictionary" }, { "code": null, "e": 25675, "s": 25640, "text": "Read a file line by line in Python" }, { "code": null, "e": 25697, "s": 25675, "text": "Enumerate() in Python" }, { "code": null, "e": 25729, "s": 25697, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 25759, "s": 25729, "text": "Iterate over a list in Python" }, { "code": null, "e": 25802, "s": 25759, "text": "Python program to convert a list to string" }, { "code": null, "e": 25846, "s": 25802, "text": "Reading and Writing to text files in Python" }, { "code": null, "e": 25871, "s": 25846, "text": "sum() function in Python" }, { "code": null, "e": 25908, "s": 25871, "text": "Create a Pandas DataFrame from Lists" } ]
Categorical features parameters in CatBoost | by Mariia Garkavenko | Towards Data Science
CatBoost is an open-sourced gradient boosting library. One of the differences between CatBoost and other gradient boosting libraries is its advanced processing of categorical features (in fact, "Cat" in the package name stands not for a 🐱 but for "CATegorical"). CatBoost deals with categorical data quite well out-of-the-box. However, it also has a huge number of training parameters that provide fine control over the categorical features preprocessing. In this tutorial, we are going to learn how to use these parameters for the greater good.

The tutorial is split into the following sections:

1. Introduction: categorical features in machine learning
2. Categorical features processing in CatBoost
3. Experiment: how the categorical features settings affect accuracy in predicting the prices of old cars

Introduction: categorical features in machine learning

A categorical feature is a feature that has a discrete set of values called categories that are not comparable by < or > to each other. In real-world datasets, we quite often deal with categorical data. The cardinality of a categorical feature, i.e. the number of different values that the feature can take, varies drastically among features and datasets: from just a few to thousands and millions of distinct values. The values of a categorical feature can be distributed almost uniformly, and there might be values with frequencies that differ by orders of magnitude. To be used in gradient boosting, categorical features need to be transformed into some form that can be handled by a decision tree, for example into numbers. In the next section, we briefly go through the methods of transforming categorical feature values into numbers that are most popular in machine learning.

Standard approaches to categorical features preprocessing

One-hot Encoding consists of creating a binary feature for each category. The main problem of the method is that features with huge cardinalities (such as user id, for example) lead to a huge number of features.

Label Encoding maps each category, i.e. each value that a categorical feature can take, to a random number. That does not make a lot of sense, does it? It does not work very well in practice either.

Hash Encoding converts string type features into a fixed dimension vector using a hash function.

Frequency Encoding consists of replacing categorical feature values with the frequency of the category in the dataset.

Target Encoding replaces the values of the categorical feature with a number that is calculated from the distribution of the target values for that particular value of the categorical variable. The most straightforward approach, sometimes referred to as Greedy Target Encoding, is to use the mean value of the target on the objects belonging to the category. However, this method leads to target leakage and overfitting. One possible solution to these problems is Holdout Target Encoding: one part of the training dataset is used to compute the target statistics for each category, and the training is performed on the rest of the training data. It solves the target leakage problem but requires us to sacrifice part of our precious training data. For this reason, the most popular solutions in practice are K-Fold Target Encoding and Leave-One-Out Target Encoding.

The idea behind K-Fold Target Encoding is very similar to K-Fold Cross Validation: we split the training data into several folds; in each fold, we replace the categorical feature values with the target statistics for the category calculated on the other folds. Leave-One-Out Target Encoding is a special case of K-Fold Encoding where K is equal to the length of the training data.
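To make the K-Fold variant concrete, here is a minimal pandas sketch (my own illustration, not code from this article; the helper name, the prior-based fallback for unseen categories, and the toy data are all assumptions):

import pandas as pd
from sklearn.model_selection import KFold

def kfold_target_encode(df, cat_col, target_col, n_splits=5, prior=0.5):
    """Encode each object with the target mean of its category,
    computed only on the other folds."""
    encoded = pd.Series(index=df.index, dtype=float)
    for tr_idx, val_idx in KFold(n_splits, shuffle=True, random_state=0).split(df):
        # per-category target means on the training folds only
        fold_means = df.iloc[tr_idx].groupby(cat_col)[target_col].mean()
        # categories unseen in the other folds fall back to the prior
        encoded.iloc[val_idx] = df.iloc[val_idx][cat_col].map(fold_means).fillna(prior).values
    return encoded

cars = pd.DataFrame({'color': ['red', 'red', 'blue', 'blue', 'red', 'blue'],
                     'price': [10, 12, 3, 4, 11, 5]})
cars['color_te'] = kfold_target_encode(cars, 'color', 'price', n_splits=3)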
K-Fold Encoding and Leave-One-Out Target Encoding can also lead to overfitting. Consider the following example: in a training dataset, we have a single categorical feature with a single value, 5 objects of class 0 and 6 objects of class 1. Obviously, a feature that has only one possible value is useless; however, if we use Leave-One-Out Target Encoding with the mean function, for all the objects of class 0 the feature value will be encoded into 0.6, while for all the objects of class 1 the feature encoding value will be 0.5. This will allow a decision tree classifier to choose a split at 0.55 and achieve 100% accuracy on the training set.

Categorical features processing in CatBoost

CatBoost supports some traditional methods of categorical data preprocessing, such as One-hot Encoding and Frequency Encoding. However, one of the signatures of this package is its original solution for categorical features encoding. The core idea behind CatBoost categorical features preprocessing is Ordered Target Encoding: a random permutation of the dataset is performed, and then target encoding of some type (for example, just computing the mean of the target for objects of this category) is performed on each example using only the objects that are placed before the current object.

Generally, transforming categorical features to numerical features in CatBoost includes the following steps:

1. Permutation of the training objects in random order.
2. Quantization, i.e. converting the target value from a floating point to an integer, depending on the task type:
   - Classification: possible values for the target value are "0" (doesn't belong to the specified target class) and "1" (belongs to the specified target class).
   - Multiclassification: the target values are integer identifiers of target classes (starting from "0").
   - Regression: quantization is performed on the label value. The mode and number of buckets are set in the starting parameters. All values located inside a single bucket are assigned a label value class, an integer in the range defined by the formula: <bucket ID − 1>.
3. Encoding the categorical feature values.

CatBoost creates four permutations of the training objects, and for each permutation a separate model is trained. Three models are used for the tree structure selection, and the fourth is used to compute the leaf values of the final model that we save. At each iteration one of the three models is chosen randomly; this model is used to choose the new tree structure and to calculate the leaf values for all four models. Using several models for tree structure selection enhances the robustness of the categorical features encoding. If in one permutation an object is close to the beginning of the dataset and the statistics for encoding are calculated on a small number of objects, in the other permutations it may be closer to the end of the dataset and many objects will be used to compute the statistics.
Another important point is that CatBoost can create new categorical features by combining the existing ones. And it will actually do so unless you explicitly tell it not to :) Treatment of the original features and the created features can be controlled separately by the settings simple_ctr and combinations_ctr respectively (we will talk about them in detail below).

Experiment: how the categorical features settings affect accuracy in predicting the prices of old cars

For the experiments in this tutorial, we are going to use https://www.kaggle.com/lepchenkov/usedcarscatalog. This dataset consists of old cars' descriptions and their characteristics, both numerical, such as mileage, production year, etc., and categorical, such as color, manufacturer name, model name, etc. Our goal is to solve the regression task, i.e. to predict the price of an old car.

Let us see how many unique values each categorical variable has:

df[categorical_features_names].nunique()

manufacturer_name      55
model_name           1118
transmission            2
color                  12
engine_fuel             6
engine_type             3
body_type              12
state                   3
drivetrain              3
location_region         6

Here is the target value distribution:

First, we are going to roughly estimate the number of trees and the learning rate that are sufficient for this task.

0:    learn: 5935.7603510  test: 6046.0339243  best: 6046.0339243 (0)     total: 73.2ms  remaining: 6m 5s
2000: learn: 1052.8405096  test: 1684.8571308  best: 1684.8571308 (2000)  total: 19.5s   remaining: 29.2s
4000: learn: 830.0093394   test: 1669.1267503  best: 1668.7626148 (3888)  total: 41.4s   remaining: 10.3s
4999: learn: 753.5299104   test: 1666.7826842  best: 1666.6739968 (4463)  total: 52.7s   remaining: 0us

bestTest = 1666.673997
bestIteration = 4463

Now we are going to write a simple function that tests CatBoost performance on 3-fold cross-validation given the parameters, and returns the full list of parameters for the last model. Optionally, this function compares the model's metrics with the results of the model trained with the default categorical features parameters. We will fix the number of estimators at 4500 and the learning rate at 0.1.
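The helper itself is embedded as a gist in the original post and is not reproduced here; a minimal sketch consistent with how it is called below could look like this (the internals, the is_baseline flag name, and the globals df, feature_names and categorical_features_names are my assumptions from the surrounding text):

import numpy as np
from catboost import CatBoostRegressor, Pool
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold

default_scores = {}  # filled by the baseline run

def score_catboost_model(catboost_params, is_baseline=False):
    """3-fold CV: prints mean(std) of R2 and RMSE, returns the last model's params."""
    r2s, rmses = [], []
    for tr_idx, te_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(df):
        train_pool = Pool(df.iloc[tr_idx][feature_names], df.iloc[tr_idx].price_usd,
                          cat_features=categorical_features_names)
        test_pool = Pool(df.iloc[te_idx][feature_names], df.iloc[te_idx].price_usd,
                         cat_features=categorical_features_names)
        model = CatBoostRegressor(n_estimators=4500, learning_rate=0.1, **catboost_params)
        model.fit(train_pool, verbose=False)
        preds = model.predict(test_pool)
        r2s.append(r2_score(df.iloc[te_idx].price_usd, preds))
        rmses.append(mean_squared_error(df.iloc[te_idx].price_usd, preds) ** 0.5)
    r2, rmse = np.mean(r2s), np.mean(rmses)
    if is_baseline:
        default_scores.update(r2=r2, rmse=rmse)
        print('R2 score: {:.4f}({:.4f})'.format(r2, np.std(r2s)))
        print('RMSE score: {:.0f}({:.0f})'.format(rmse, np.std(rmses)))
    else:
        print('R2 score: {:.4f}({:.4f}) {:+.1%} compared to default parameters'.format(
            r2, np.std(r2s), r2 / default_scores['r2'] - 1))
        print('RMSE score: {:.0f}({:.0f}) {:+.1%} compared to default parameters'.format(
            rmse, np.std(rmses), rmse / default_scores['rmse'] - 1))
    return model.get_all_params()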
The number of parameters related to categorical features processing in CatBoost is overwhelming. Here is, hopefully, the full list:

- one_hot_max_size (int): use one-hot encoding for all categorical features with a number of different values less than or equal to the given parameter value. No complex encoding is performed for such features. The default value for the regression task is 2.
- model_size_reg (float from 0 to inf): the model size regularization coefficient. The larger the value, the smaller the model size. Refer to the Model size regularization coefficient section for details. This regularization is needed only for models with categorical features (other models are small). Models with categorical features might weigh tens of gigabytes or more if categorical features have a lot of values. If the value of the regularizer differs from zero, then the usage of categorical features or feature combinations with a lot of values has a penalty, so fewer of them are used in the resulting model. The default value is 0.5.
- max_ctr_complexity: the maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: "numeric feature > value". For the regression task on the CPU, the default value is 4.
- has_time (bool): if true, the first step of categorical features processing, permutation, is not performed. Useful when the objects in your dataset are ordered by time. For our dataset, we don't need it. The default value is False.
- simple_ctr: quantization settings for simple categorical features.
- combinations_ctr: quantization settings for combinations of categorical features.
- per_feature_ctr: per-feature quantization settings for categorical features.
- counter_calc_method: determines whether to use the validation dataset (provided through the parameter eval_set of the fit method) to estimate category frequencies with Counter. By default, it is Full and the objects from the validation dataset are used; pass the SkipTest value to ignore the objects from the validation set.
- ctr_target_border_count: the maximum number of borders to use in target quantization for categorical features that need it. The default for the regression task is 1.
- ctr_leaf_count_limit: the maximum number of leaves with categorical features. The default value is None, i.e. no limit.
- store_all_simple_ctr: if the previous parameter ctr_leaf_count_limit is set, at some point the gradient boosting tree can no longer make splits by categorical features. With the default value equal to False, the limitation applies both to the original categorical features and the features that CatBoost creates by combining different features. If this parameter is set to True, only the number of splits made on combination features is limited.

The three parameters simple_ctr, combinations_ctr, and per_feature_ctr are complex parameters that control the second and the third steps of categorical features processing. We will talk about them more in the next sections.

Default parameters

First, we test the out-of-the-box CatBoost categorical features processing.

last_model_params = score_catboost_model({}, True)

R2 score: 0.9334(0.0009)
RMSE score: 1659(17)

We will save the metrics of the model with the default categorical features parameters for further comparison.

One-hot encoding: one_hot_max_size

The first thing we try is to make CatBoost use one-hot encoding for all our categorical features (the max categorical feature cardinality in our dataset is 1118 < 2000). The documentation says that for the features for which one-hot encoding is used, no other encodings are computed. The default value is:

- N/A if training is performed on CPU in Pairwise scoring mode
- 255 if training is performed on GPU and the selected Ctr types require target data that is not available during the training
- 10 if training is performed in Ranking mode
- 2 if none of the conditions above is met

model_params = score_catboost_model({'one_hot_max_size': 2000})

R2 score: 0.9392(0.0029) +0.6% compared to default parameters
RMSE score: 1584(28) -4.5% compared to default parameters

Model size regularization coefficient: model_size_reg

This parameter influences the model size if the training data has categorical features. The information regarding categorical features makes a great contribution to the final size of the model. The mapping from the categorical feature value hash to some statistic values is stored for each categorical feature that is used in the model. The size of this mapping for a particular feature depends on the number of unique values that this feature takes. Therefore, the potential weight of a categorical feature can be taken into account in the final model when choosing a split in a tree, to reduce the final size of the model. When choosing the best split, all split scores are calculated, and then the split with the best score is chosen.
But before choosing the split with the best score, all scores are adjusted by a regularization formula in which s_new, the new score for the split by some categorical or combination feature, is computed from s_old, the old score for the split by the feature; u, the number of unique values of the feature; U, the maximum of such values among all features; and M, the value of the model_size_reg parameter.

This regularization works slightly differently on GPU: feature combinations are regularized more aggressively than on CPU. For CPU, the cost of a combination is equal to the number of different feature values in this combination that are present in the training dataset. On GPU, the cost of a combination is equal to the number of all possible different values of this combination. For example, if the combination contains two categorical features c1 and c2, then the cost will be #categories in c1 * #categories in c2, even though many of the values from this combination might not be present in the dataset.

Let us try to set the model size regularization coefficient to 0; thus we allow our model to use as many categorical features and their combinations as it wants.

model_params = score_catboost_model({'model_size_reg': 0})

R2 score: 0.9360(0.0014) +0.3% compared to default parameters
RMSE score: 1626(26) -2.0% compared to default parameters

model_params = score_catboost_model({'model_size_reg': 1})

R2 score: 0.9327(0.0020) -0.1% compared to default parameters
RMSE score: 1667(30) +0.5% compared to default parameters

To check how the size of the model is affected by this setting, we will write a function that, given a parameters dictionary, trains a model, saves it in a file and returns the model's weight:
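As with the scoring helper, the original function is embedded as a gist; a minimal sketch consistent with its usage (the body and the temporary file name are my assumptions) could be:

import os
from catboost import CatBoostRegressor, Pool

def weight_model(catboost_params):
    """Train on the full data, save the model to disk, return the file size in bytes."""
    pool = Pool(df[feature_names], df.price_usd, cat_features=categorical_features_names)
    model = CatBoostRegressor(n_estimators=4500, learning_rate=0.1, **catboost_params)
    model.fit(pool, verbose=False)
    model.save_model('model_tmp.cbm')
    return os.path.getsize('model_tmp.cbm')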
model_size_reg_0 = weight_model({'model_size_reg': 0})
model_size_reg_1 = weight_model({'model_size_reg': 1})
model_size_reg_0 / model_size_reg_1

12.689550532622183

As we can see, the model with the strong regularization is almost 13 times smaller than the model without regularization.

Feature combinations: max_ctr_complexity

Note that any combination of several categorical features could be considered as a new one. For example, assume that the task is music recommendation and we have two categorical features: user ID and musical genre. Some user prefers, say, rock music. When we convert user ID and musical genre to numerical features, we lose this information. A combination of the two features solves this problem and gives a new powerful feature. However, the number of combinations grows exponentially with the number of categorical features in the dataset, and it is not possible to consider all of them in the algorithm. When constructing a new split for the current tree, CatBoost considers combinations in a greedy way. No combinations are considered for the first split in the tree. For the next splits, CatBoost combines all combinations and categorical features present in the current tree with all categorical features in the dataset. Combination values are converted to numbers on the fly. CatBoost also generates combinations of numerical and categorical features in the following way: all the splits selected in the tree are considered as categorical with two values and are used in combinations in the same way as categorical ones.

The parameter max_ctr_complexity sets the maximum number of features that can be combined. Each resulting combination consists of one or more categorical features and can optionally contain binary features in the following form: "numeric feature > value". For the regression task on CPU, the default value is 4. Although it is not mentioned in the documentation, this parameter value has to be less than or equal to 15 (because this parameter should be less than the max gradient boosting tree depth).

model_params = score_catboost_model({'max_ctr_complexity': 6})

R2 score: 0.9335(0.0016) +0.0% compared to default parameters
RMSE score: 1657(24) -0.2% compared to default parameters

model_params = score_catboost_model({'max_ctr_complexity': 0})

R2 score: 0.9286(0.0041) -0.5% compared to default parameters
RMSE score: 1716(30) +3.4% compared to default parameters

As we can see, on our dataset the difference in the model's accuracy is not significant. To check how the size of the model is affected, we will use our function that weighs a model:

model_size_max_ctr_6 = weight_model({'max_ctr_complexity': 6})
model_size_max_ctr_0 = weight_model({'max_ctr_complexity': 0})
model_size_max_ctr_6 / model_size_max_ctr_0

6.437194589788451

As can be seen, the model that can combine up to 6 features weighs 6 times more than the model that does not combine features at all.

has_time

With this setting on, we do not perform the random permutations during the transformation of categorical features to numerical ones. This might be useful when the objects of our dataset are already ordered by time. If a Timestamp type column is present in the input data, it is used to determine the order of objects.

model_params = score_catboost_model({'has_time': True})

R2 score: 0.9174(0.0029) -1.7% compared to default parameters
RMSE score: 1847(29) +11.3% compared to default parameters

simple_ctr and combinations_ctr

Both simple_ctr and combinations_ctr are complex parameters that provide regulation of the categorical feature encoding types. While simple_ctr is responsible for processing the categorical features initially present in the dataset, combinations_ctr affects the encoding of the new features that CatBoost creates by combining the existing features. The available encoding methods and possible values of simple_ctr and combinations_ctr are the same, so we are not going to look at them separately. But of course, you can always tune them separately on your task!

Encodings without target quantization

Target quantization is transforming float target values into int target values using some borders. We will first consider the target encoding methods that do not require such a transformation.

FloatTargetMeanValue

The first option, FloatTargetMeanValue, is the most straightforward approach. Each value of the categorical variable is replaced with the mean of the target over the objects of the same category that are placed before the current object.

model_params = score_catboost_model({'simple_ctr': 'FloatTargetMeanValue', 'combinations_ctr': 'FloatTargetMeanValue', 'task_type': 'GPU'})

R2 score: 0.9183(0.0022) -1.6% compared to default parameters
RMSE score: 1837(32) +10.7% compared to default parameters

FeatureFreq

The second option is FeatureFreq. The categorical feature values are replaced with the frequencies of the category in the dataset. Again, only the objects placed before the current object are used.

model_params = score_catboost_model({'simple_ctr': 'FeatureFreq', 'combinations_ctr': 'FeatureFreq', 'task_type': 'GPU'})

R2 score: 0.9170(0.0019) -1.8% compared to default parameters
RMSE score: 1852(12) +11.6% compared to default parameters

Counter

We have already discussed the Counter method in the section "Default parameters", because by default this method is used to create feature encodings. It is worth noticing that if we directly pass Counter into simple_ctr and/or combinations_ctr, CatBoost will use only Counter feature encodings.
model_params = score_catboost_model({'simple_ctr': 'Counter', 'combinations_ctr': 'Counter'})

R2 score: 0.9288(0.0014) -0.5% compared to default parameters
RMSE score: 1715(12) +3.3% compared to default parameters

The number of encoding borders: CtrBorderCount

Let us say we have calculated encodings for our categorical variable. These encodings are floats and they are comparable: in the case of Counter, the larger encoding value corresponds to the more frequent category. However, if we have a large number of categories, the difference between the encodings of close categories may be caused by noise, and we do not want our model to differentiate between close categories. For this reason, we transform our float encoding into an integer encoding i ∈ [0, l]. The default setting CtrBorderCount=15 means that l = 14 (i.e. 15 − 1). We can try to use a bigger value:

model_params = score_catboost_model({'combinations_ctr': ['Counter:CtrBorderCount=40:Prior=0.5/1'], 'simple_ctr': ['Counter:CtrBorderCount=40:Prior=0.5/1']})

R2 score: 0.9337(0.0013) -0.0% compared to default parameters
RMSE score: 1655(13) +0.2% compared to default parameters

BinarizedTargetMeanValue

The second method, BinarizedTargetMeanValue, is very similar to target encoding, except that instead of the sum over the exact target values, we use the sum of the values of the bins, which corresponds to the following formula:

ctr = (countInClass + prior) / (totalCount + 1)

where:

- countInClass is the ratio of the sum of the label value integers for this categorical feature to the maximum label value integer k;
- totalCount is the total number of objects that have a feature value matching the current one;
- prior is a number (constant) defined by the starting parameters.

model_params = score_catboost_model({'combinations_ctr': 'BinarizedTargetMeanValue', 'simple_ctr': 'BinarizedTargetMeanValue'})

R2 score: 0.9312(0.0008) -0.2% compared to default parameters
RMSE score: 1685(20) +1.6% compared to default parameters

{k: v for k, v in model_params.items() if k in ctr_parameters}

{'combinations_ctr': ['BinarizedTargetMeanValue:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1'],
 'simple_ctr': ['BinarizedTargetMeanValue:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1']}

While using the BinarizedTargetMeanValue method, we can also finetune Prior and CtrBorderCount (the number of borders for quantization of the category feature encoding). By default, CtrBorderCount=15, and the three Prior values 0, 0.5 and 1 are used to build three different encodings.

Encodings with target quantization: Borders vs. Buckets

Now we proceed to the settings of the encoding methods that require target quantization. The first choice is Borders vs. Buckets. The difference between the two is pretty simple. Both are described by the following formula:

ctr_i = (countInClass_i + prior) / (totalCount + 1)

for i in [0, k−1] in the case of Borders and for i in [0, k] in the case of Buckets, where k is the number of borders regulated by the parameter TargetBorderCount, totalCount is the number of objects of the same category, and prior is defined by the parameter prior. The only difference is that for Borders countInClass_i is the number of objects of the category with the discretized target value greater than i, while for Buckets countInClass_i is the number of objects of the category with the discretized target value equal to i.

Let us see a small example: we have objects of two categories, shown as suns and moons. We will compute the categorical feature encodings in the case of Borders and in the case of Buckets. Let us say our Prior is 0.5.

Borders: we have two borders (which corresponds to TargetBorderCount=2), so we need to calculate 2 encodings.

Buckets: i in [0, k] creates k+1 buckets, so the same value of the parameter TargetBorderCount=2 creates more features from each categorical feature if we choose Buckets.
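The worked figures from the original post are images and are not reproduced here; the sketch below recomputes both encodings on made-up counts (my own toy numbers, not the suns-and-moons data) just to show the mechanics:

# Toy category: 4 objects, discretized targets for k=2 borders -> values in {0, 1, 2}
targets = [0, 1, 2, 2]
prior, total = 0.5, len(targets)

# Borders: encoding i counts objects with target value > i, for i in [0, k-1]
borders = [(sum(t > i for t in targets) + prior) / (total + 1) for i in range(2)]

# Buckets: encoding i counts objects with target value == i, for i in [0, k]
buckets = [(sum(t == i for t in targets) + prior) / (total + 1) for i in range(3)]

print(borders)  # [0.7, 0.5]      -> 2 encodings
print(buckets)  # [0.3, 0.3, 0.5] -> 3 encodings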
Important note! This example just serves to illustrate the difference between Borders and Buckets, and the whole dataset is used to compute countInClass and totalCount. In reality, CatBoost uses only the objects placed before the current object.

Let us see if it makes any difference in practice:

model_params = score_catboost_model({'combinations_ctr': 'Borders', 'simple_ctr': 'Borders'})

R2 score: 0.9311(0.0017) -0.2% compared to default parameters
RMSE score: 1688(40) +1.7% compared to default parameters

model_params = score_catboost_model({'combinations_ctr': 'Buckets', 'simple_ctr': 'Buckets'})

R2 score: 0.9314(0.0048) -0.2% compared to default parameters
RMSE score: 1682(49) +1.4% compared to default parameters

An attentive reader may remember that by default CatBoost creates some features using Borders splits and also some features using the Counter method. When we explicitly pass the Borders option, the Counter method is not used. Generally, it is recommended to use Borders for the regression task and Buckets for the multiclassification task.

New categories and missing values

What happens if there is a new category in the test set that never appeared in the training set? The answer is that, since countInClass is equal to zero, the prior is used to compute the encoding: ctr = (0 + prior) / (0 + 1) = prior. Meanwhile, missing values in the categorical feature are replaced with the "None" string; all the objects with the missing feature value are then treated as a new category.

The number of target borders: TargetBorderCount

The number of borders or buckets can be controlled with the TargetBorderCount parameter. By default we have only one border; let us see if having more borders helps:

model_params = score_catboost_model({'combinations_ctr': 'Borders:TargetBorderCount=4', 'simple_ctr': 'Borders:TargetBorderCount=4'})

R2 score: 0.9356(0.0019) +0.2% compared to default parameters
RMSE score: 1631(9) -1.7% compared to default parameters

Default value of simple_ctr and combinations_ctr

By default, CatBoost uses several encoding techniques to encode each categorical feature. First, it uses the Borders method with one target border, TargetBorderCount=1 (in our example, for each categorical feature we just want to see if it makes the car more expensive). The obtained float encodings are further discretized into CtrBorderCount=15 different values. Three values of the Prior parameter are used to create three different encodings: Prior=0/1:Prior=0.5/1:Prior=1/1. Also, for each categorical feature we create an encoding with the Counter method. The number of categorical encoding value borders CtrBorderCount is also equal to 15, and only one value, Prior=0/1, is used.

We can always check the parameters used by our model with the get_all_params() method.

last_model_params = score_catboost_model({}, True)

last_model_params['simple_ctr']

['Borders:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1',
 'Counter:CtrBorderCount=15:CtrBorderType=Uniform:Prior=0/1']

last_model_params['combinations_ctr']

['Borders:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1',
 'Counter:CtrBorderCount=15:CtrBorderType=Uniform:Prior=0/1']
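These description strings pack several settings together; a tiny helper (my own convenience function, not part of the CatBoost API) makes them easier to read:

def parse_ctr_description(desc):
    """Split a ctr description string into the ctr type and its options."""
    ctr_type, *options = desc.split(':')
    return ctr_type, [tuple(opt.split('=', 1)) for opt in options]

parse_ctr_description('Counter:CtrBorderCount=15:CtrBorderType=Uniform:Prior=0/1')
# ('Counter', [('CtrBorderCount', '15'), ('CtrBorderType', 'Uniform'), ('Prior', '0/1')])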
Let us see if it makes any difference in practice:

model_params = score_catboost_model({'combinations_ctr': 'Borders', 'simple_ctr': 'Borders'})
R2 score: 0.9311(0.0017) -0.2% compared to default parameters
RMSE score: 1688(40) +1.7% compared to default parameters

model_params = score_catboost_model({'combinations_ctr': 'Buckets', 'simple_ctr': 'Buckets'})
R2 score: 0.9314(0.0048) -0.2% compared to default parameters
RMSE score: 1682(49) +1.4% compared to default parameters

An attentive reader may remember that by default CatBoost creates some features using Borders splits and also some features using the Counter method. When we explicitly pass the Borders option, the Counter method is not used.

Generally, it is recommended to use Borders for the regression task and Buckets for the multiclassification task.

What happens if there is a new category in the test set that never appeared in the training set? The answer is that, since countInClass is equal to zero, the prior is used to compute the encoding:

encoding = (0 + prior) / (0 + 1) = prior

Meanwhile, missing values in the categorical feature are replaced with the "None" string. Then all the objects with the missing feature value are treated as a new category.

The number of borders or buckets can be controlled with the TargetBorderCount parameter. By default we have only one border; let us see if having more borders helps:

model_params = score_catboost_model({'combinations_ctr': 'Borders:TargetBorderCount=4', 'simple_ctr': 'Borders:TargetBorderCount=4'})
R2 score: 0.9356(0.0019) +0.2% compared to default parameters
RMSE score: 1631(9) -1.7% compared to default parameters

By default, CatBoost uses several encoding techniques to encode each categorical feature.

First, it uses the Borders method with one target border, TargetBorderCount=1 (in our example, for each categorical feature we just want to see if it makes the car more expensive). The obtained float encodings are further discretized into CtrBorderCount=15 different values. Three values of the Prior parameter are used to create three different encodings: Prior=0/1:Prior=0.5/1:Prior=1/1

Also, for each categorical feature, we create an encoding with the Counter method. The number of categorical encoding value borders CtrBorderCount is also equal to 15, and only one value, Prior=0/1, is used.

We can always check the parameters used by our model with the get_all_params() method.

last_model_params = score_catboost_model({}, True)
last_model_params['simple_ctr']
['Borders:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1', 'Counter:CtrBorderCount=15:CtrBorderType=Uniform:Prior=0/1']
last_model_params['combinations_ctr']
['Borders:CtrBorderCount=15:CtrBorderType=Uniform:TargetBorderCount=1:TargetBorderType=MinEntropy:Prior=0/1:Prior=0.5/1:Prior=1/1', 'Counter:CtrBorderCount=15:CtrBorderType=Uniform:Prior=0/1']

The next thing I would like to talk about in this tutorial is using different encoding methods for different features with the parameter per_feature_ctr. It might be useful in cases when you know that one of your features is more important than the others. We can, for example, increase the number of target borders for the model_name feature:

model_params = score_catboost_model({'per_feature_ctr': ['1:Borders:TargetBorderCount=10:Prior=0/1']})
R2 score: 0.9361(0.0005) +0.3% compared to default parameters
RMSE score: 1625(28) -2.1% compared to default parameters

The counter_calc_method parameter determines whether to use the validation dataset (provided through the parameter eval_set of the fit method) to estimate category frequencies with Counter. By default, it is Full and the objects from the validation dataset are used; pass the SkipTest value to ignore the objects from the validation set. In our score_catboost_model function we don't give CatBoost the validation dataset at all during training, so to check this method's effect we will use a train/test split.

model = CatBoostRegressor(custom_metric=['R2', 'RMSE'], learning_rate=0.1, n_estimators=4500, counter_calc_method='Full')
model.fit(train_pool, eval_set=test_pool, verbose=False)
r2_res = r2_score(df_test.price_usd.values, model.predict(test_pool))
rmse_res = mean_squared_error(df_test.price_usd.values, model.predict(test_pool))
print('Counter Calculation Method Full: R2={:.4f} RMSE={:.0f}'.format(r2_res, rmse_res))
Counter Calculation Method Full: R2=0.9334 RMSE=2817626

model = CatBoostRegressor(custom_metric=['R2', 'RMSE'], learning_rate=0.1, n_estimators=4500, counter_calc_method='SkipTest')
model.fit(train_pool, eval_set=test_pool, verbose=False)
r2_res = r2_score(df_test.price_usd.values, model.predict(test_pool))
rmse_res = mean_squared_error(df_test.price_usd.values, model.predict(test_pool))
print('Counter Calculation Method SkipTest: R2={:.4f} RMSE={:.0f}'.format(r2_res, rmse_res))
Counter Calculation Method SkipTest: R2=0.9344 RMSE=2777802

The ctr_target_border_count parameter sets the maximum number of borders to use in target quantization for categorical features that need it. The default for the regression task is 1. Let us try a rather big number of borders:

model_params = score_catboost_model({'ctr_target_border_count': 10})
R2 score: 0.9375(0.0046) +0.4% compared to default parameters
RMSE score: 1606(73) -3.2% compared to default parameters

The ctr_leaf_count_limit parameter regulates the number of the most common categorical feature values that are used by the model. If we have n unique categories and ctr_leaf_count_limit=m, we preserve the categorical feature value only for the objects from the m most frequent categories. For the objects from the remaining categories, we replace the categorical feature value with None. The default value of this parameter is None: all the categorical feature values are preserved.

model_params = score_catboost_model({'ctr_leaf_count_limit' : 5})
R2 score: 0.8278(0.0236) -11.3% compared to default parameters
RMSE score: 2661(187) +60.4% compared to default parameters

Oops! On our dataset, it ruins the model performance.

With the store_all_simple_ctr setting on, the previous parameter ctr_leaf_count_limit affects only the categorical features that CatBoost creates by combining the initial features; the initial categorical features present in the dataset are not affected. When the parameter ctr_leaf_count_limit is None, the parameter store_all_simple_ctr has no effect.

model_params = score_catboost_model({'store_all_simple_ctr' : True, 'ctr_leaf_count_limit' : 5})
R2 score: 0.8971(0.0070) -3.9% compared to default parameters
RMSE score: 2060(74) +24.2% compared to default parameters
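To make the ctr_leaf_count_limit behaviour described above more tangible, here is a conceptual pure-Python sketch of the documented effect. The column values are made up, and this is only an illustration, not how CatBoost implements it internally:

from collections import Counter

# Keep the raw value only for the m most frequent categories;
# everything else is treated as the new category None.
def limit_categories(values, m):
    top = {cat for cat, _ in Counter(values).most_common(m)}
    return [v if v in top else None for v in values]

colors = ['black', 'white', 'black', 'red', 'black', 'white', 'green']
print(limit_categories(colors, m=2))
# ['black', 'white', 'black', None, 'black', 'white', None]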
It is quite common to use several encodings for a categorical feature. For instance, CatBoost creates 4 different encodings for each categorical feature by default (see the "Default value of simple_ctr and combinations_ctr" section). When we call the get_feature_importance method, we get the importance of a categorical feature aggregated across all of its encodings. That is because in practice we usually just want to compare the overall usefulness of the different features present in our dataset.

However, what if we want to know which encodings worked best for us? For that, we would need to get the Internal Feature Importance. Currently, it is available only in the command-line version of the CatBoost library. You can find details about the installation here and an example of how to train a model with the command-line version in this tutorial.

To train a model with the command-line version we first need to create a column description file:

descr = ['Categ' if i in categorical_features_names else 'Auxiliary' for i in df.columns]
descr[14] = 'Target'
pd.Series(descr).to_csv('train.cd', sep='\t', header=None)

Then train a model:

catboost fit --learn-set cars.csv --loss-function RMSE --learning-rate 0.1 --iterations 4500 --delimiter=',' --has-header --column-description train.cd

And then create an Internal Feature Importance file:

catboost fstr -m model.bin --cd train.cd --fstr-type InternalFeatureImportance -o feature_strength.tsv

The contents of this file in our case are the following:

9.318442186 transmission
7.675430604 {model_name} prior_num=1 prior_denom=1 targetborder=0 type=Borders
3.04782682 {model_name} prior_num=0 prior_denom=1 targetborder=0 type=Borders
2.951546528 {model_name} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders
2.939078189 {body_type} prior_num=0 prior_denom=1 targetborder=0 type=Borders
2.666138982 {state, transmission} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders
2.431465565 {body_type} prior_num=1 prior_denom=1 targetborder=0 type=Borders
2.059354431 {manufacturer_name} prior_num=0 prior_denom=1 targetborder=0 type=Counter
1.946443049 {state} prior_num=1 prior_denom=1 targetborder=0 type=Borders
1.932116622 {color} prior_num=1 prior_denom=1 targetborder=0 type=Borders
1.633469855 {color} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders
1.561168441 {manufacturer_name} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders
1.419944596 {manufacturer_name} prior_num=0 prior_denom=1 targetborder=0 type=Borders
1.3323198 {body_type} prior_num=0 prior_denom=1 targetborder=0 type=Counter
1.068973258 {color} prior_num=0 prior_denom=1 targetborder=0 type=Counter
1.038663366 {manufacturer_name} prior_num=1 prior_denom=1 targetborder=0 type=Borders
1.001434874 {manufacturer_name, body_type} prior_num=0 prior_denom=1 targetborder=0 type=Counter
0.9012036663 {body_type} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders
0.8805961369 {manufacturer_name, body_type} prior_num=1 prior_denom=1 targetborder=0 type=Borders
0.8796937131 {drivetrain} prior_num=0 prior_denom=1 targetborder=0 type=Borders
...
1.476546485e-05 {engine_fuel, engine_type} prior_num=0 prior_denom=1 targetborder=0 type=Borders
7.417408934e-06 {engine_type, body_type, state, location_region} prior_num=0.5 prior_denom=1 targetborder=0 type=Borders

We can see that the most important feature is transmission; then we have the 3 Borders-type encodings for the model_name categorical feature with different priors; then an encoding for the body_type feature; then we have a categorical feature created by CatBoost from the combination of the state and transmission features.

An interesting observation is that for some features, like model_name, the most useful encodings are of the Borders type, while for other features, e.g. manufacturer_name, the most useful encoding is obtained with the Counter method.

Another way of getting some insight into how your model works is training with the logging_level='Info' parameter. This setting allows us to see the feature splits chosen for each tree:

model = CatBoostRegressor(custom_metric=['R2', 'RMSE'], learning_rate=0.3, n_estimators=5)
model.fit(train_pool, eval_set=test_pool, logging_level='Info')

year_produced, bin=47 score 669154.1979
{drivetrain} pr0 tb0 type1, border=10 score 754651.9724
year_produced, bin=56 score 809503.2502
year_produced, bin=51 score 856912.7803
engine_capacity, bin=24 score 888794.1978
{state} pr1 tb0 type0, border=12 score 901338.6173
0:	learn: 5040.7980368	test: 5141.1143627	best: 5141.1143627 (0)	total: 17.9ms	remaining: 71.7ms
year_produced, bin=49 score 474289.2398
engine_capacity, bin=14 score 565290.1728
year_produced, bin=54 score 615593.891
year_produced, bin=43 score 638265.472
{state} pr1 tb0 type0, border=10 score 663225.8837
engine_capacity, bin=24 score 667635.803
1:	learn: 4071.9260223	test: 4162.4422665	best: 4162.4422665 (1)	total: 24.9ms	remaining: 37.3ms
year_produced, bin=50 score 332853.7156
{body_type} pr2 tb0 type0, border=8 score 403465.931
{manufacturer_name} pr0 tb0 type0, border=7 score 428269.628
year_produced, bin=38 score 458860.027
feature_2, bin=0 score 474315.0996
year_produced, bin=54 score 485594.3961
2:	learn: 3475.0456278	test: 3544.0465297	best: 3544.0465297 (2)	total: 31.3ms	remaining: 20.9ms
year_produced, bin=45 score 250517.4612
engine_capacity, bin=13 score 290570.2886
year_produced, bin=55 score 340482.1423
{manufacturer_name} pr1 tb0 type0, border=6 score 352029.1735
feature_1, bin=0 score 368528.2728
year_produced, bin=50 score 376011.7075
3:	learn: 3066.9757921	test: 3128.4648163	best: 3128.4648163 (3)	total: 37.4ms	remaining: 9.35ms
year_produced, bin=46 score 184224.3588
feature_6, bin=0 score 214268.3547
{body_type} pr0 tb0 type1, border=5 score 238951.9169
{state} pr1 tb0 type0, border=9 score 260941.1746
transmission, value=1 score 275871.0414
{engine_fuel} pr1 tb0 type0, border=7 score 289133.3086
4:	learn: 2797.0109121	test: 2864.4763038	best: 2864.4763038 (4)	total: 42.8ms	remaining: 0us
bestTest = 2864.476304
bestIteration = 4

For numeric features the format is the following:

feature name, index of the chosen split, split score

Example: year_produced, bin=47 score 669154.1979

The format for categorical features is:

feature name, prior, target border, encoding type, categorical feature border, split score

Example: {state} pr1 tb0 type0, border=12 score 901338.6173

For convenience, categorical feature names are written in brackets {}.
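Related to the Internal Feature Importance file above: the Python API exposes only the aggregated, per-feature importances. A minimal sketch, assuming the model and train_pool objects from earlier in the tutorial:

# Aggregated importances: one number per original feature,
# summed over all of its encodings.
importances = model.get_feature_importance(train_pool)
for name, value in sorted(zip(model.feature_names_, importances),
                          key=lambda pair: -pair[1]):
    print('{}: {:.2f}'.format(name, value))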
In our tutorial we were working on the regression task, so I would like to make several notes on categorical parameter tuning for the binary classification and multiclassification tasks.

For binary classification, parameter tuning is very similar to the regression task, except for the fact that it is usually useless to increase the TargetBorderCount parameter value (unless you pass probabilities as a target).

In the multiclassification task, we should keep in mind that usually we do not have any natural order on classes, so the use of the FloatTargetMeanValue or BinarizedTargetMeanValue encodings is not recommended. If your training takes too long, you can try to set TargetBorderCount to a lower value than the default n_classes - 1 if there is a way to unite some of your classes.

Congratulations to everyone who finished reading this tutorial :) As we saw, the number of tunable parameters related to categorical features processing in the CatBoost package is quite impressive. We learned how to control all of them, and I very much hope that this knowledge will help you to achieve the best results on your tasks involving categorical data!
C++ Program to Convert Octal Number to Binary Number
In a computer system, the binary number is expressed in the binary numeral system while the octal number is in the octal numeral system. The binary number is in base 2 while the octal number is in base 8. As an example of the correspondence between the two systems, binary 1010 is octal 12, and binary 11011 is octal 33.

A program that converts octal numbers into binary is given as follows −

#include <iostream>
#include <cmath>
using namespace std;

int OctalToBinary(int octalNum) {
   int decimalNum = 0, binaryNum = 0, count = 0;

   while (octalNum != 0) {
      decimalNum += (octalNum % 10) * pow(8, count);
      ++count;
      octalNum /= 10;
   }
   count = 1;
   while (decimalNum != 0) {
      binaryNum += (decimalNum % 2) * count;
      decimalNum /= 2;
      count *= 10;
   }
   return binaryNum;
}

int main() {
   int octalNum = 33;
   cout << "Octal to Binary" << endl;
   cout << "Octal number: " << octalNum << endl;
   cout << "Binary number: " << OctalToBinary(octalNum) << endl;
   return 0;
}

The output of the above program is as follows −

Octal to Binary
Octal number: 33
Binary number: 11011

In the given program, the function OctalToBinary() converts the given octal number into a binary number. This is done by first converting the octal number into a decimal number and then converting the decimal number into a binary number. This is seen in the following code snippet −

int OctalToBinary(int octalNum) {
   int decimalNum = 0, binaryNum = 0, count = 0;
   while (octalNum != 0) {
      decimalNum += (octalNum % 10) * pow(8, count);
      ++count;
      octalNum /= 10;
   }
   count = 1;
   while (decimalNum != 0) {
      binaryNum += (decimalNum % 2) * count;
      decimalNum /= 2;
      count *= 10;
   }
   return binaryNum;
}

In the function main(), the octal number is given. Then its corresponding binary number is calculated by calling OctalToBinary(). This is shown below −

int main() {
   int octalNum = 33;
   cout << "Octal to Binary" << endl;
   cout << "Octal number: " << octalNum << endl;
   cout << "Binary number: " << OctalToBinary(octalNum) << endl;
   return 0;
}
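To make the two conversion loops concrete, here is the sample input 33 worked out by hand (pure arithmetic, added for illustration) −

$$33_8 = 3 \cdot 8^1 + 3 \cdot 8^0 = 27_{10}$$

$$27_{10} = 16 + 8 + 2 + 1 = 11011_2$$

which matches the output printed by the program.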
[ { "code": null, "e": 1267, "s": 1062, "text": "In a computer system, the binary number is expressed in the binary numeral system while the octal number is in the octal numeral system. The binary number is in base 2 while the octal number is in base 8." }, { "code": null, "e": 1349, "s": 1267, "text": "Examples of binary numbers and their corresponding octal numbers are as follows −" }, { "code": null, "e": 1425, "s": 1349, "text": "A program that converts the octal numbers into binary is given as follows −" }, { "code": null, "e": 1436, "s": 1425, "text": " Live Demo" }, { "code": null, "e": 2039, "s": 1436, "text": "#include <iostream>\n#include <cmath>\nusing namespace std;\nint OctalToBinary(int octalNum) {\n int decimalNum = 0, binaryNum = 0, count = 0;\n\n while(octalNum != 0) {\n decimalNum += (octalNum%10) * pow(8,count);\n ++count;\n octalNum/=10;\n }\n count = 1;\n while (decimalNum != 0) {\n binaryNum += (decimalNum % 2) * count;\n decimalNum /= 2;\n count *= 10;\n }\n return binaryNum;\n}\nint main() {\n int octalNum = 33;\n cout <<\"Octal to Binary\"<<endl;\n cout<<\"Octal number: \"<<octalNum<<endl;\n cout<<\"Binary number: \"<<OctalToBinary(octalNum)<<endl;\n return 0;\n}" }, { "code": null, "e": 2087, "s": 2039, "text": "The output of the above program is as follows −" }, { "code": null, "e": 2141, "s": 2087, "text": "Octal to Binary\nOctal number: 33\nBinary number: 11011" }, { "code": null, "e": 2424, "s": 2141, "text": "In the given program, the function OctalToBinary() converts the given octal number into a binary number This is done by first converting the octal number into a decimal number and then converting the decimal number into an binary number. This is seen in the following code snippet −" }, { "code": null, "e": 2781, "s": 2424, "text": "int OctalToBinary(int octalNum) {\n int decimalNum = 0, binaryNum = 0, count = 0;\n while(octalNum != 0) {\n decimalNum += (octalNum%10) * pow(8,count);\n ++count;\n octalNum/=10;\n }\n count = 1;\n while (decimalNum != 0) {\n binaryNum += (decimalNum % 2) * count;\n decimalNum /= 2;\n count *= 10;\n }\n return binaryNum;\n}" }, { "code": null, "e": 2933, "s": 2781, "text": "In the function main(), the octal number is given. Then its corresponding binary number is calculated by calling OctalToBinary(). This is shown below −" }, { "code": null, "e": 3120, "s": 2933, "text": "int main() {\n int octalNum = 33;\n cout <<\"Octal to Binary\"<<endl;\n cout<<\"Octal number: \"<<octalNum<<endl;\n cout<<\"Binary number: \"<<OctalToBinary(octalNum)<<endl;\n return 0;\n}" } ]
Convert an Iterator to a List in Java - GeeksforGeeks
11 Dec, 2018

Given an Iterator, the task is to convert it into a List in Java.

Examples:

Input: Iterator = {1, 2, 3, 4, 5}
Output: {1, 2, 3, 4, 5}

Input: Iterator = {'G', 'e', 'e', 'k', 's'}
Output: {'G', 'e', 'e', 'k', 's'}

Below are the various ways to do so:

Naive Approach:
1. Get the Iterator.
2. Create an empty list.
3. Add each element of the iterator to the list using the forEachRemaining() method.
4. Return the list.

Below is the implementation of the above approach:

// Java program to get a List
// from a given Iterator
import java.util.*;

class GFG {

    // Function to get the List
    public static <T> List<T> getListFromIterator(Iterator<T> iterator)
    {
        // Create an empty list
        List<T> list = new ArrayList<>();

        // Add each element of iterator to the List
        iterator.forEachRemaining(list::add);

        // Return the List
        return list;
    }

    // Driver code
    public static void main(String[] args)
    {
        // Get the Iterator
        Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5).iterator();

        // Get the List from the Iterator
        List<Integer> list = getListFromIterator(iterator);

        // Print the list
        System.out.println(list);
    }
}

Output:
[1, 2, 3, 4, 5]

Using Iterable as intermediate:
1. Get the Iterator.
2. Convert the iterator to an iterable using a lambda expression.
3. Convert the iterable to a list using Stream.
4. Return the list.

Below is the implementation of the above approach:

// Java program to get a List
// from a given Iterator
import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

class GFG {

    // Function to get the List
    public static <T> List<T> getListFromIterator(Iterator<T> iterator)
    {
        // Convert the iterator to an iterable
        Iterable<T> iterable = () -> iterator;

        // Create a List from the Iterable
        List<T> list = StreamSupport
                           .stream(iterable.spliterator(), false)
                           .collect(Collectors.toList());

        // Return the List
        return list;
    }

    // Driver code
    public static void main(String[] args)
    {
        // Get the Iterator
        Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5).iterator();

        // Get the List from the Iterator
        List<Integer> list = getListFromIterator(iterator);

        // Print the list
        System.out.println(list);
    }
}

Output:
[1, 2, 3, 4, 5]
[ { "code": null, "e": 23451, "s": 23423, "text": "\n11 Dec, 2018" }, { "code": null, "e": 23515, "s": 23451, "text": "Given an Iterator, the task is to convert it into List in Java." }, { "code": null, "e": 23525, "s": 23515, "text": "Examples:" }, { "code": null, "e": 23663, "s": 23525, "text": "Input: Iterator = {1, 2, 3, 4, 5}\nOutput: {1, 2, 3, 4, 5}\n\nInput: Iterator = {'G', 'e', 'e', 'k', 's'}\nOutput: {'G', 'e', 'e', 'k', 's'}\n" }, { "code": null, "e": 23700, "s": 23663, "text": "Below are the various ways to do so:" }, { "code": null, "e": 24734, "s": 23700, "text": "Naive Approach:Get the Iterator.Create an empty list.Add each element of the iterator to the list using forEachRemaining() method.Return the list.Below is the implementation of the above approach:// Java program to get a List// from a given Iterator import java.util.*; class GFG { // Function to get the List public static <T> List<T> getListFromIterator(Iterator<T> iterator) { // Create an empty list List<T> list = new ArrayList<>(); // Add each element of iterator to the List iterator.forEachRemaining(list::add); // Return the List return list; } // Driver code public static void main(String[] args) { // Get the Iterator Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5) .iterator(); // Get the List from the Iterator List<Integer> list = getListFromIterator(iterator); // Print the list System.out.println(list); }}Output:[1, 2, 3, 4, 5]\n" }, { "code": null, "e": 24866, "s": 24734, "text": "Get the Iterator.Create an empty list.Add each element of the iterator to the list using forEachRemaining() method.Return the list." }, { "code": null, "e": 24884, "s": 24866, "text": "Get the Iterator." }, { "code": null, "e": 24906, "s": 24884, "text": "Create an empty list." }, { "code": null, "e": 24984, "s": 24906, "text": "Add each element of the iterator to the list using forEachRemaining() method." }, { "code": null, "e": 25001, "s": 24984, "text": "Return the list." 
}, { "code": null, "e": 25052, "s": 25001, "text": "Below is the implementation of the above approach:" }, { "code": "// Java program to get a List// from a given Iterator import java.util.*; class GFG { // Function to get the List public static <T> List<T> getListFromIterator(Iterator<T> iterator) { // Create an empty list List<T> list = new ArrayList<>(); // Add each element of iterator to the List iterator.forEachRemaining(list::add); // Return the List return list; } // Driver code public static void main(String[] args) { // Get the Iterator Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5) .iterator(); // Get the List from the Iterator List<Integer> list = getListFromIterator(iterator); // Print the list System.out.println(list); }}", "e": 25867, "s": 25052, "text": null }, { "code": null, "e": 25884, "s": 25867, "text": "[1, 2, 3, 4, 5]\n" }, { "code": null, "e": 27125, "s": 25884, "text": "Using Iterable as intermediate:Get the Iterator.Convert the iterator to iterable using lambda expression.Convert the iterable to list using Stream.Return the list.Below is the implementation of the above approach:// Java program to get a List// from a given Iterator import java.util.*;import java.util.stream.Collectors;import java.util.stream.StreamSupport; class GFG { // Function to get the List public static <T> List<T> getListFromIterator(Iterator<T> iterator) { // Convert iterator to iterable Iterable<T> iterable = () -> iterator; // Create a List from the Iterable List<T> list = StreamSupport .stream(iterable.spliterator(), false) .collect(Collectors.toList()); // Return the List return list; } // Driver code public static void main(String[] args) { // Get the Iterator Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5) .iterator(); // Get the List from the Iterator List<Integer> list = getListFromIterator(iterator); // Print the list System.out.println(list); }}Output:[1, 2, 3, 4, 5]\n" }, { "code": null, "e": 27258, "s": 27125, "text": "Get the Iterator.Convert the iterator to iterable using lambda expression.Convert the iterable to list using Stream.Return the list." }, { "code": null, "e": 27276, "s": 27258, "text": "Get the Iterator." }, { "code": null, "e": 27334, "s": 27276, "text": "Convert the iterator to iterable using lambda expression." }, { "code": null, "e": 27377, "s": 27334, "text": "Convert the iterable to list using Stream." }, { "code": null, "e": 27394, "s": 27377, "text": "Return the list." 
}, { "code": null, "e": 27445, "s": 27394, "text": "Below is the implementation of the above approach:" }, { "code": "// Java program to get a List// from a given Iterator import java.util.*;import java.util.stream.Collectors;import java.util.stream.StreamSupport; class GFG { // Function to get the List public static <T> List<T> getListFromIterator(Iterator<T> iterator) { // Convert iterator to iterable Iterable<T> iterable = () -> iterator; // Create a List from the Iterable List<T> list = StreamSupport .stream(iterable.spliterator(), false) .collect(Collectors.toList()); // Return the List return list; } // Driver code public static void main(String[] args) { // Get the Iterator Iterator<Integer> iterator = Arrays.asList(1, 2, 3, 4, 5) .iterator(); // Get the List from the Iterator List<Integer> list = getListFromIterator(iterator); // Print the list System.out.println(list); }}", "e": 28450, "s": 27445, "text": null }, { "code": null, "e": 28467, "s": 28450, "text": "[1, 2, 3, 4, 5]\n" }, { "code": null, "e": 28487, "s": 28467, "text": "Java - util package" }, { "code": null, "e": 28501, "s": 28487, "text": "Java-Iterator" }, { "code": null, "e": 28511, "s": 28501, "text": "java-list" }, { "code": null, "e": 28530, "s": 28511, "text": "Java-List-Programs" }, { "code": null, "e": 28535, "s": 28530, "text": "Java" }, { "code": null, "e": 28540, "s": 28535, "text": "Java" }, { "code": null, "e": 28638, "s": 28540, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28647, "s": 28638, "text": "Comments" }, { "code": null, "e": 28660, "s": 28647, "text": "Old Comments" }, { "code": null, "e": 28675, "s": 28660, "text": "Arrays in Java" }, { "code": null, "e": 28719, "s": 28675, "text": "Split() String method in Java with examples" }, { "code": null, "e": 28741, "s": 28719, "text": "For-each loop in Java" }, { "code": null, "e": 28766, "s": 28741, "text": "Reverse a string in Java" }, { "code": null, "e": 28802, "s": 28766, "text": "Arrays.sort() in Java with examples" }, { "code": null, "e": 28853, "s": 28802, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 28883, "s": 28853, "text": "HashMap in Java with Examples" }, { "code": null, "e": 28914, "s": 28883, "text": "How to iterate any Map in Java" }, { "code": null, "e": 28946, "s": 28914, "text": "Initialize an ArrayList in Java" } ]
How to set the SVG background color? - GeeksforGeeks
07 Dec, 2021

There are two types of images that can be used in the background, and in both cases you can change the background color of the image.

Raster: Images where each pixel represents an individual color within the image. So when we zoom in, the pixels start enlarging, and hence after a certain point the image starts getting blurry.

Vector: Images where the information about the drawing has been stored. So when they are zoomed, the image is redrawn according to the size of the page. Hence they do not pixelate and we get crisp and sharp images. Since these images are scalable, they are known as SVGs (Scalable Vector Graphics).

SVG stands for Scalable Vector Graphics. An SVG background can be used to draw any kind of shape and set any color you want. The examples below illustrate the concept of setting the SVG background color more specifically. SVG also allows CSS background sizing, positioning, and much more complex properties.

Example: The cx and cy attributes define the x and y coordinates of the center of the circle. If cx and cy are omitted, the center of the circle is set to (0, 0). The r attribute defines the radius of the circle. To set the background color of this SVG, there are two ways.

<!DOCTYPE html>
<html>
<head>
    <title>
        SVG set Background Color
    </title>
</head>
<body>
    <center>
        <h1 style="color:green;">GeeksforGeeks</h1>
        <h4>SVG set Background Color</h4>
        <svg height="100" width="100">
            <circle cx="50" cy="50" r="40"
                stroke="black" stroke-width="3" fill="red" />
        </svg>
    </center>
</body>
</html>

Setting the background color of the SVG body can be done in two ways:

Method 1: You can add the background color to the SVG body itself.

<!DOCTYPE html>
<html>
<head>
    <title>
        SVG set Background Color
    </title>
</head>
<body>
    <center>
        <h1 style="color:green;">GeeksforGeeks</h1>
        <h4>SVG set Background Color</h4>
        <svg height="100" width="100"
            style="background-color:green">
            <circle cx="50" cy="50" r="40"
                stroke="black" stroke-width="3" fill="red" />
        </svg>
    </center>
</body>
</html>

Method 2: You can add a rectangle as the first or lowermost layer with 100% width and 100% height, fill it with your desired background color, and then start drawing the shape.

<!DOCTYPE html>
<html>
<head>
    <title>
        SVG set Background Color
    </title>
</head>
<body>
    <center>
        <h1 style="color:green;">GeeksforGeeks</h1>
        <h4>SVG set Background Color</h4>
        <svg height="100" width="100">
            <rect width="100%" height="100%" fill="green" />
            <circle cx="50" cy="50" r="40"
                stroke="black" stroke-width="3" fill="red" />
        </svg>
    </center>
</body>
</html>
[ { "code": null, "e": 24435, "s": 24407, "text": "\n07 Dec, 2021" }, { "code": null, "e": 24568, "s": 24435, "text": "There are two types of images that can be used in the background and in both cases you can change the background color of the image." }, { "code": null, "e": 24766, "s": 24568, "text": "Raster: The images where each pixels represents the individual color within the image. So when we zoom in, the pixels start enlarging and hence after a certain point, the image starts getting blur." }, { "code": null, "e": 25075, "s": 24766, "text": "Vector: These are the images where the information about drawing has been stored. So when they are zoomed, the image is redrawn according to the size of that page. Hence they do not pixelate and we get crisp and sharp images. Since these images are scalable they are known as SVGs (Scalable Vector Graphics)." }, { "code": null, "e": 25393, "s": 25075, "text": "The SVG stands for Scalable Vector Graphics. The SVG background is used to draw any kind of shape, set any color you want by the set property. The below examples illustrates the concept of SVG set background-color more specifically. The SVG allowed the CSS background sizing, position, and much more complex property." }, { "code": null, "e": 25673, "s": 25393, "text": "Example: The cx and cy attributes define the x and y coordinates of the center of the circle. If cx and cy are omitted, then it sets the center of the circle to (0, 0). The r attribute defines the radius of the circle. To set the background color to this SVG, there are two ways." }, { "code": null, "e": 25678, "s": 25673, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title> SVG set Background Color </title></head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h4>SVG set Background Color</h4> <svg height=\"100\" width=\"100\"> <circle cx=\"50\" cy=\"50\" r=\"40\" stroke=\"black\" stroke-width=\"3\" fill=\"red\" /> </svg> </center></body> </html>", "e": 26073, "s": 25678, "text": null }, { "code": null, "e": 26154, "s": 26073, "text": "To set the background color of the SVG body, background can be done in two ways:" }, { "code": null, "e": 26661, "s": 26154, "text": "Method 1: You can add the background color to the SVG body itself.htmlhtml<!DOCTYPE html><html> <head> <title> SVG set Background Color </title></head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h4>SVG set Background Color</h4> <svg height=\"100\" width=\"100\" style=\"background-color:green\"> <circle cx=\"50\" cy=\"50\" r=\"40\" stroke=\"black\" stroke-width=\"3\" fill=\"red\" /> </svg> </center></body> </html>Output:" }, { "code": null, "e": 26666, "s": 26661, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title> SVG set Background Color </title></head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h4>SVG set Background Color</h4> <svg height=\"100\" width=\"100\" style=\"background-color:green\"> <circle cx=\"50\" cy=\"50\" r=\"40\" stroke=\"black\" stroke-width=\"3\" fill=\"red\" /> </svg> </center></body> </html>", "e": 27092, "s": 26666, "text": null }, { "code": null, "e": 27750, "s": 27092, "text": "Method 2: You can add a rectangle as the first or lowermost layer with 100% width and 100% height and set the color of your desired background color and then we can start drawing the shape.htmlhtml<!DOCTYPE html><html> <head> <title> SVG set Background Color </title></head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h4>SVG set Background Color</h4> <svg 
height=\"100\" width=\"100\"> <rect width=\"100%\" height=\"100%\" fill=\"green\" /> <circle cx=\"50\" cy=\"50\" r=\"40\" stroke=\"black\" stroke-width=\"3\" fill=\"red\" /> </svg> </center></body> </html>Output:" }, { "code": null, "e": 27755, "s": 27750, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title> SVG set Background Color </title></head> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h4>SVG set Background Color</h4> <svg height=\"100\" width=\"100\"> <rect width=\"100%\" height=\"100%\" fill=\"green\" /> <circle cx=\"50\" cy=\"50\" r=\"40\" stroke=\"black\" stroke-width=\"3\" fill=\"red\" /> </svg> </center></body> </html>", "e": 28209, "s": 27755, "text": null }, { "code": null, "e": 28403, "s": 28209, "text": "HTML is the foundation of webpages, is used for webpage development by structuring websites and web apps.You can learn HTML from the ground up by following this HTML Tutorial and HTML Examples." }, { "code": null, "e": 28540, "s": 28403, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 28557, "s": 28540, "text": "surinderdawra388" }, { "code": null, "e": 28562, "s": 28557, "text": "HTML" }, { "code": null, "e": 28579, "s": 28562, "text": "Web Technologies" }, { "code": null, "e": 28606, "s": 28579, "text": "Web technologies Questions" }, { "code": null, "e": 28611, "s": 28606, "text": "HTML" }, { "code": null, "e": 28709, "s": 28611, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28718, "s": 28709, "text": "Comments" }, { "code": null, "e": 28731, "s": 28718, "text": "Old Comments" }, { "code": null, "e": 28779, "s": 28731, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 28816, "s": 28779, "text": "Types of CSS (Cascading Style Sheet)" }, { "code": null, "e": 28866, "s": 28816, "text": "How to Insert Form Data into Database using PHP ?" }, { "code": null, "e": 28916, "s": 28866, "text": "CSS to put icon inside an input element in a form" }, { "code": null, "e": 28940, "s": 28916, "text": "REST API (Introduction)" }, { "code": null, "e": 28996, "s": 28940, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 29029, "s": 28996, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 29072, "s": 29029, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 29133, "s": 29072, "text": "Difference between var, let and const keywords in JavaScript" } ]
Generate Your Sample Dataset — A Must Have Skill For Data Scientists. | by flo.tausend | Towards Data Science
It is one thing to create PowerPoint slides and talk theoretically about what you will do with data. But it is another one to create a sample dataset and present a dashboard, visualisation or data model that is already working. While many sample datasets can be found on the internet, they quite often do not fit your exact needs. Therefore, at the end of this article, you will know how to create a beautiful sample CSV dataset for your specific purposes. Once the code is set up in a Jupyter notebook, you can use it over and over again.

As the final outcome, a sample CSV dataset with various variables such as numbers, currencies, dates, names and strings is desired. These variables have different types and are independent or related to each other. To get started, it is crucial to understand how we can use basic “random” functions to generate our sample dataset. Afterwards we will combine the variables in one data frame. For our purpose the data frame will finally be exported as CSV. CSV is a format quite easy to connect with various tools and sufficient for the purpose of sampling.

The good thing is that we mainly need access to Python and some of its libraries and tools. First of all we use Jupyter Notebook, an open-source application for live coding that allows us to tell a story with the code. In our case it makes it way simpler to adapt and reuse only a few code elements later on. Furthermore we use NumPy, and especially the random function of NumPy, to generate our variables. While this is one option, a very convenient library called pydbgen helps us to accelerate our sample dataset generation. Additionally we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation.

1. The “random” module
The most used module in order to create random numbers with Python is probably the random module with the random.random() function. When importing the module and calling the function, a float between 0.0 and 1.0 will be generated as seen in the code below.

>>> import random
>>> random.random()
0.18215964678315466

Additionally there is the option to set a seed. This allows us to reproduce the random numbers generated in the code.

>>> random.seed(123)
>>> random.random()
0.052363598850944326
>>> random.seed(123)
>>> random.random()
0.052363598850944326

If there is a specific range you want to consider for your floats or integers, you can define the functions' parameters as done below.

>>> random.randint(10,100)
21
>>> random.uniform(10,100)
79.20607497283636

2. Dates
While being familiar with the random method, we can now apply the knowledge to create random dates in our sample dataset.

>>> import numpy as np
>>> monthly_days = np.arange(0, 30)
>>> base_date = np.datetime64('2020-01-01')
>>> random_date = base_date + np.random.choice(monthly_days)
>>> print(random_date)
2020-01-07

There are endless variations to adjust this piece of code. Change the monthly days range to bimonthly or yearly. And of course the base date should be fitted to your requirements.
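For instance, stretching the same pattern to a full year could look like the minimal sketch below (the 365-day range and the sample size are illustrative choices, not from the original article):

>>> import numpy as np
>>> yearly_days = np.arange(0, 365)
>>> base_date = np.datetime64('2020-01-01')
>>> # draw several random dates at once from the whole year
>>> random_dates = base_date + np.random.choice(yearly_days, size=5)
>>> print(random_dates)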
3. Strings
While having random numbers and dates in place for our sample dataset, it is about time to look at strings. When looking at the randomString function below, the variable “letters” contains the whole alphabet in lowercase. With the help of the “random” method, a random letter is drawn for each position. Easy as that!

import random
import string

def randomString(stringLength=15):
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(stringLength))

print("The generated string is", randomString())

The generated string is celwcujpdcdppzx

4. Names
While our sample dataset was previously based on the random method, it is not possible to generate expressions that make sense by just randomly putting things together. Therefore we use the names package available on PyPI. The package is really convenient and gives various options to draw from, as shown in the snippet below.

>>> import names
>>> names.get_full_name()
'Margaret Wigfield'
>>> names.get_first_name(gender='female')
'Patricia'

5. Locations, Phone Numbers, Address or Text
After a short introduction to the principles of randomisation with “random.” as the basic method, it is probably noteworthy that there are complete packages that cover most of your needs. Installation is simple via pip and you can find all the info needed on the respective GitHub repositories. There are two very useful ones definitely worth mentioning (a short Faker sketch follows this list):

Faker
pydbgen (partially uses Faker for some data types)
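The snippet below is a minimal sketch of how such a package is typically used; the calls are standard Faker generators, but treat the exact outputs as illustrative only:

from faker import Faker

fake = Faker()

# Each call draws a fresh fake record; results differ on every run.
print(fake.name())                 # e.g. 'Margaret Wigfield'
print(fake.address())              # a multi-line postal address
print(fake.phone_number())         # a locale-dependent phone number
print(fake.text(max_nb_chars=80))  # short placeholder text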
6. What about sample data for Machine Learning
When it comes to machine learning problems, the job is not done by generating just any random data. In most cases you want some correlation or relation between the variables/features of your sample dataset. Scikit-learn lets you create such datasets in seconds. Have a look at the sample code below:

import pandas as pd
from sklearn.datasets import make_regression

# Generate features, outputs, and true coefficients of 100 samples
features, output, coef = make_regression(n_samples = 100,
                                          # three features
                                          n_features = 3,
                                          # two features are useful,
                                          n_informative = 2,
                                          # one target
                                          n_targets = 1,
                                          # 0.0 standard deviation noise
                                          noise = 0.0,
                                          # show the true coefficient
                                          coef = True)

# View the features of the first 10 rows
pd.DataFrame(features, columns=['Customer A', 'Customer B', 'Customer C']).head(10)

# View the output of the first 10 rows
pd.DataFrame(output, columns=['Sales']).head(10)

# View the actual, true coefficients used to generate the data
pd.DataFrame(coef, columns=['Coefficient Values'])

After exploring, playing around and generating our sample datasets, we finally want to export and make use of them. While there are no limitations on the file format (we do not have a huge amount of data and want to plug the dataset into various tools), a simple CSV file is quite convenient for that use. But first a data frame has to be created.

import pandas as pd
import numpy as np
import random

# using numpy's randint
df = pd.DataFrame(np.random.randint(0, 100, size=(15, 4)), columns=list('ABCD'))

# create a new column
df['AminusB'] = df['A'] - (0.1 * df['B'])
df['names'] = 'Laura'
df.head(5)

While we have generated some random related and unrelated numbers, the name column is the same in each row. To get random names we have to iterate over the rows and use our random name generator.

import names

for index, row in df.iterrows():
    df.at[index, 'names'] = names.get_first_name(gender='female')
df.head(5)

Note that instead of names you could use any other package or method to create your sample CSV dataset. There are no limitations. Now let's export it as CSV and we are good to go.

# Export to csv
export_csv = df.to_csv(r'/Users/SampleDataset.csv', header=True)
# Don't forget to add '.csv' at the end of the path

You made it — Congrats! Now the more exciting part begins and you can start working with your generated CSV sample dataset. Good luck.

**************************************************************

Articles you may enjoy reading as well:

Predictive Maintenance: Machine Learning vs Rule Based
Hands-on: Customer Segmentation
Setup Your Data Environment
Misleading with Data & Statistics
Learn How To Predict Customer Churn
[ { "code": null, "e": 711, "s": 172, "text": "It is one thing to create powerpoint slides and talk theoretically about what you will do with data. But it is another one to create a sample dataset and present a dashboard, visualisation or data model that is already working. While many sample datasets can be found on the internet, they quite often do not fit your exact needs. Therefore, at the end of this article, you will know how to create a beautiful sample CSV dataset for your specific purposes. Once the code is setup in a Jupyter notebook, you can use it over and over again." }, { "code": null, "e": 1266, "s": 711, "text": "As final outcome a sample CSV dataset with various variables such as numbers, currencies, dates, names and strings and is desired. These variables have different types and are independent or related to each other. To get started, it is crucial to understand how we can use basic “random” functions to generate our sample dataset. Afterwards we will combine the variables in one data frame. For our purpose the data frame will finally be exported as CSV. CSV is a format quite easy to connect with various tools and sufficient for the purpose of sampling." }, { "code": null, "e": 1917, "s": 1266, "text": "The good thing is that we mainly need access to python and use some of its libraries and tools. First of all we use Jupyter Notebook, an open-source application for live coding and it allows us to tell a story with the code. In our case it makes it way simpler to adapt and only use few code elements later on. Furthermore we use NumPy, and especially the random function of NumPy to generate our variables. While this is one option, a very convenient library called pydbgen helps us to accelerate our sample dataset generation. Additionally we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation." }, { "code": null, "e": 2190, "s": 1917, "text": "“random.” moduleThe most used module in order to create random numbers with Python is probably the random module with the random.random() function. When importing the module and calling the function, a float between 0.0 and 1.0 will be generated as seen in the code below." }, { "code": null, "e": 2463, "s": 2190, "text": "“random.” moduleThe most used module in order to create random numbers with Python is probably the random module with the random.random() function. When importing the module and calling the function, a float between 0.0 and 1.0 will be generated as seen in the code below." }, { "code": null, "e": 2517, "s": 2463, "text": ">>>import random>>>random.random()0.18215964678315466" }, { "code": null, "e": 2635, "s": 2517, "text": "Additionally there is the option to set a seed. This allows us to reproduce the random numbers generated in the code." }, { "code": null, "e": 2754, "s": 2635, "text": ">>> random.seed(123)>>> random.random()0.052363598850944326>>> random.seed(123)>>> random.random()0.052363598850944326" }, { "code": null, "e": 2889, "s": 2754, "text": "If there is a specific range you want to consider for your floats or integers, you can define the functions’ parameters as done below." }, { "code": null, "e": 2961, "s": 2889, "text": ">>> random.randint(10,100)21>>> random.uniform(10,100)79.20607497283636" }, { "code": null, "e": 3091, "s": 2961, "text": "2. DatesWhile being familiar with the random method, we can now apply the knowledge to create random dates in our sample dataset." 
}, { "code": null, "e": 3284, "s": 3091, "text": ">>> import numpy as np>>> monthly_days = np.arange(0, 30)>>> base_date = np.datetime64('2020-01-01')>>> random_date = base_date + np.random.choice(monthly_days)>>> print(random_date)2020-01-07" }, { "code": null, "e": 3464, "s": 3284, "text": "There are endless variations to adjust this piece of code. Change the monthly days range to bimonthly or yearly. And of course the base date should be fitted to your requirements." }, { "code": null, "e": 3799, "s": 3464, "text": "3. StringsWhile having random numbers and dates in place for our sample dataset, it is about time to look at strings. When looking at the randomString function below, the variable “letters” contains the whole alphabet in lowercase. With the help of the “random” method, a random number is created and assigned to letter. Easy as that!" }, { "code": null, "e": 4055, "s": 3799, "text": "import randomimport stringdef randomString(stringLength=15): letters = string.ascii_lowercase return ''.join(random.choice(letters) for i in range(stringLength))print (\"The generated string is\", randomString())The generated string is celwcujpdcdppzx" }, { "code": null, "e": 4388, "s": 4055, "text": "4. NamesWhile our sample dataset was previously based on the random method, it is not possible to generate expressions that make sense by just randomly putting things together. Therefor we use the names package available on PyPi. The package is really convenient and gives various options to draw from as shown in the snippet below." }, { "code": null, "e": 4500, "s": 4388, "text": ">>> import names>>> names.get_full_name()‘Margaret Wigfield’>>> names.get_first_name(gender='female')'Patricia'" }, { "code": null, "e": 4877, "s": 4500, "text": "5. Locations, Phone Numbers, Address or TextAfter a short introduction to the principles of randomisation with “random.” as basic method, it is probably noteworthy that there are complete packages that cover most of your needs. Installation is simple via pip and you can find all infos need on the Github repository. There are two very useful ones definitely worth mentioning:" }, { "code": null, "e": 4883, "s": 4877, "text": "Faker" }, { "code": null, "e": 4934, "s": 4883, "text": "pydbgen (Partially uses Faker for some data types)" }, { "code": null, "e": 5283, "s": 4934, "text": "6. What about sample data for Machine LearningWhen it comes to machine learning problems, the job is not done generating by generating any random data. In most cases you want some correlation or relation between the variables / features of your sample dataset. Scikit let’s you create such datasets in seconds. 
Have a look at the sample code below:" }, { "code": null, "e": 6055, "s": 5283, "text": "import pandas as pdfrom sklearn.datasets import make_regression# Generate fetures, outputs, and true coefficient of 100 samplesfeatures, output, coef = make_regression(n_samples = 100, # three features n_features = 3, # two features are useful, n_informative = 2, # one target n_targets = 1, # 0.0 standard deviation noise = 0.0, # show the true coefficient coef = True)" }, { "code": null, "e": 6377, "s": 6055, "text": "# View the features of the first 10 rowspd.DataFrame(features, columns=['Customer A', 'Customer B', 'Customer C']).head(10)# View the output of the first 10 rowspd.DataFrame(output, columns=['Sales']).head(10)# View the actual, true coefficients used to generate the datapd.DataFrame(coef, columns=['Coefficient Values'])" }, { "code": null, "e": 6721, "s": 6377, "text": "After exploring, playing around and generating our sample datasets, we finally want to export and make use of them. While there are no limitations in the file format, as we do not a huge amount of data and want to plug the dataset in various tools, a simple CSV files is quite convenient for that use. But first a data frame has to be created." }, { "code": null, "e": 6966, "s": 6721, "text": "import pandas as pdimport numpy as npimport random#using numpy's randintdf = pd.DataFrame(np.random.randint(0,100,size=(15, 4)), columns=list('ABCD'))# create a new column df['AminusB'] = df['A'] - (0.1 * df['B'])df['names'] = 'Laura'df.head(5)" }, { "code": null, "e": 7162, "s": 6966, "text": "While we have generated some random related and unrelated numbers, the name column is the same in each row. To get random names we have to iterate over the rows and use our random name generator." }, { "code": null, "e": 7287, "s": 7162, "text": "import namesfor index, row in df.iterrows(): df.at[index,'names'] = names.get_first_name(gender='female') df.head(5)" }, { "code": null, "e": 7467, "s": 7287, "text": "Note that instead of names you could use any other package or method to create your sample CSV dataset. There are no limitations. Now let’s export it as CSV and we are good to go." }, { "code": null, "e": 7598, "s": 7467, "text": "#Export to csvexport_csv = df.to_csv (r'/Users/SampleDataset.csv', header=True) #Don't forget to add '.csv' at the end of the path" }, { "code": null, "e": 7733, "s": 7598, "text": "You made it — Congrats ! Now the more exiting part begins and you can start working with your generated CSV sample dataset. Good Luck." }, { "code": null, "e": 7796, "s": 7733, "text": "**************************************************************" }, { "code": null, "e": 7836, "s": 7796, "text": "Articles you may enjoy reading as well:" }, { "code": null, "e": 7891, "s": 7836, "text": "Predictive Maintenance: Machine Learning vs Rule Based" }, { "code": null, "e": 7923, "s": 7891, "text": "Hands-on: Customer Segmentation" }, { "code": null, "e": 7951, "s": 7923, "text": "Setup Your Data Environment" }, { "code": null, "e": 7985, "s": 7951, "text": "Misleading with Data & Statistics" } ]
Are arrays zero indexed in C#?
Yes, arrays are zero indexed in C#. Let us see how −

If the array is empty, it has zero elements and has length 0.
If the array has one element in index 0, then it has length 1.
If the array has two elements in indexes 0 and 1, then it has length 2.
If the array has three elements in indexes 0, 1 and 2, then it has length 3.

The following states that an array in C# begins with index 0 −

/* begin from index 0 */
for ( i = 0; i < 5; i++ ) {
   n[ i ] = i + 5;
}

You can try to run the following to see how array indexing works in C# −

using System;

namespace ArrayApplication {
   class MyArray {
      static void Main(string[] args) {
         int [] n = new int[5];
         int i, j;

         /* begins from index 0 */
         for ( i = 0; i < 5; i++ ) {
            n[ i ] = i + 5;
         }
         for ( j = 0; j < 5; j++ ) {
            Console.WriteLine("Element[{0}] = {1}", j, n[j]);
         }
         Console.ReadKey();
      }
   }
}

The output is as follows −

Element[0] = 5
Element[1] = 6
Element[2] = 7
Element[3] = 8
Element[4] = 9
[ { "code": null, "e": 1111, "s": 1062, "text": "Yes, arrays zero indexed in C#. Let us see how −" }, { "code": null, "e": 1173, "s": 1111, "text": "If the array is empty, it has zero elements and has length 0." }, { "code": null, "e": 1238, "s": 1173, "text": "If the array has one element in 0 indexes, then it has length 1." }, { "code": null, "e": 1310, "s": 1238, "text": "If the array has two elements in 0 and 1 indexes, then it has length 2." }, { "code": null, "e": 1387, "s": 1310, "text": "If the array has three elements in 0, 1 and 2 indexes, then it has length 3." }, { "code": null, "e": 1450, "s": 1387, "text": "The following states that an array in C# begins with index 0 −" }, { "code": null, "e": 1524, "s": 1450, "text": "/* begin from index 0 */\nfor ( i = 0; i < 5; i++ ) {\n n[ i ] = i + 5;\n}" }, { "code": null, "e": 1602, "s": 1524, "text": "You can try to run the following to see how array indexing implements in C# −" }, { "code": null, "e": 1612, "s": 1602, "text": "Live Demo" }, { "code": null, "e": 2028, "s": 1612, "text": "using System;\nnamespace ArrayApplication {\n class MyArray {\n static void Main(string[] args) {\n int [] n = new int[5];\n int i,j;\n /* begings from index 0 */\n for ( i = 0; i < 5; i++ ) {\n n[ i ] = i + 5;\n }\n for (j = 0; j < 5; j++ ) {\n Console.WriteLine(\"Element[{0}] = {1}\", j, n[j]);\n }\n Console.ReadKey();\n }\n }\n}" }, { "code": null, "e": 2103, "s": 2028, "text": "Element[0] = 5\nElement[1] = 6\nElement[2] = 7\nElement[3] = 8\nElement[4] = 9" } ]
Adding only odd or even numbers JavaScript
We are required to write a function that, given an array of numbers and a string that can take either of the two values "odd" or "even", adds the numbers which match that condition. If no values match the condition, 0 should be returned.

For example −

console.log(conditionalSum([1, 2, 3, 4, 5], "even")); // => 6
console.log(conditionalSum([1, 2, 3, 4, 5], "odd")); // => 9
console.log(conditionalSum([13, 88, 12, 44, 99], "even")); // => 144
console.log(conditionalSum([], "odd")); // => 0

So, let's write the code for this function. We will use the Array.prototype.reduce() method here −

const conditionalSum = (arr, condition) => {
   const add = (num1, num2) => {
      if (condition === 'even' && num2 % 2 === 0) {
         return num1 + num2;
      }
      if (condition === 'odd' && num2 % 2 === 1) {
         return num1 + num2;
      }
      return num1;
   };
   return arr.reduce((acc, val) => add(acc, val), 0);
};

console.log(conditionalSum([1, 2, 3, 4, 5], "even"));
console.log(conditionalSum([1, 2, 3, 4, 5], "odd"));
console.log(conditionalSum([13, 88, 12, 44, 99], "even"));
console.log(conditionalSum([], "odd"));

The output in the console will be −

6
9
144
0
[ { "code": null, "e": 1295, "s": 1062, "text": "We are required to make a function that given an array of numbers and a string that can take\nany of the two values “odd” or “even”, adds the numbers which match that condition. If no\nvalues match the condition, 0 should be returned." }, { "code": null, "e": 1309, "s": 1295, "text": "For example −" }, { "code": null, "e": 1537, "s": 1309, "text": "console.log(conditionalSum([1, 2, 3, 4, 5], \"even\")); => 6\nconsole.log(conditionalSum([1, 2, 3, 4, 5], \"odd\")); => 9\nconsole.log(conditionalSum([13, 88, 12, 44, 99], \"even\")); => 144\nconsole.log(conditionalSum([], \"odd\")); => 0" }, { "code": null, "e": 1636, "s": 1537, "text": "So, let’s write the code for this function, we will use the Array.prototype.reduce() method here −" }, { "code": null, "e": 2174, "s": 1636, "text": "const conditionalSum = (arr, condition) => {\n const add = (num1, num2) => {\n if(condition === 'even' && num2 % 2 === 0){\n return num1 + num2;\n }\n if(condition === 'odd' && num2 % 2 === 1){\n return num1 + num2;\n };\n return num1;\n }\n return arr.reduce((acc, val) => add(acc, val), 0);\n}\nconsole.log(conditionalSum([1, 2, 3, 4, 5], \"even\"));\nconsole.log(conditionalSum([1, 2, 3, 4, 5], \"odd\"));\nconsole.log(conditionalSum([13, 88, 12, 44, 99], \"even\"));\nconsole.log(conditionalSum([], \"odd\"));" }, { "code": null, "e": 2210, "s": 2174, "text": "The output in the console will be −" }, { "code": null, "e": 2220, "s": 2210, "text": "6\n9\n144\n0" } ]
N-Knights Tour | Practice | GeeksforGeeks
'School' level Subjective Problems [Answers : 0] [Views : 3781]

Hi friend, I am a Java learner and I would like to seek help with my coding. The requirement is as below: the knights must visit the maximum number of board positions without attacking each other. The Knight can move in the shape of the letter 'L', over two in one direction and then over one in a perpendicular direction. If the Knight rests at the square marked X. Please assist, thanks.

package knightstour;

import java.util.*;

public class KnightsTour {
    private static int board[][] = new int[8][8];
    private static int stepCounter = 1;

    public KnightsTour() {
        initBoard(board);
        tour(0, 0);
        printSol(board);
    }

    public static void printSol(int[][] a) {
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[i].length; j++) {
                if (a[i][j] > 9) {
                    System.out.print(a[i][j] + " ");
                } else {
                    System.out.print(a[i][j] + "  ");
                }
            }
            System.out.println();
        }
        System.out.println();
    }

    public static void initBoard(int[][] a) {
        // mark every square as unvisited
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[i].length; j++) {
                a[i][j] = -1;
            }
        }
    }

    public void tour(int x, int y) {
        if (((x < 0) || (x >= 8) || (y < 0) || (y >= 8)) || (board[x][y] != -1)) {
            return;
        } else {
            stepCounter++;
            board[x][y] = stepCounter;
            tour(x + 2, y + 1);
            tour(x + 1, y - 2);
            tour(x + 1, y + 2);
            tour(x - 1, y + 2);
            tour(x - 2, y - 1);
            tour(x - 2, y + 1);
            tour(x - 1, y - 2);
            tour(x + 2, y - 1);
        }
    }

    public boolean spaceAvailable(int X, int Y) {
        // currently unused stub: a square is available if it is on the board and unvisited
        return (X >= 0 && X < 8 && Y >= 0 && Y < 8 && board[X][Y] == -1);
    }

    public static void main(String[] args) {
        new KnightsTour();
    }
}
[ { "code": null, "e": 310, "s": 169, "text": "\nPopular Company Tags\n\nAmazon\nMicrosoft\nOracle\nSamsung\nAdobe\nSynopsys\nInfosys\nCisco\nWipro\nOla-Cabs\nMorgan-Stanley\nGoldman-Sachs\nshow more\n\n" }, { "code": null, "e": 317, "s": 310, "text": "Amazon" }, { "code": null, "e": 327, "s": 317, "text": "Microsoft" }, { "code": null, "e": 334, "s": 327, "text": "Oracle" }, { "code": null, "e": 342, "s": 334, "text": "Samsung" }, { "code": null, "e": 348, "s": 342, "text": "Adobe" }, { "code": null, "e": 357, "s": 348, "text": "Synopsys" }, { "code": null, "e": 365, "s": 357, "text": "Infosys" }, { "code": null, "e": 371, "s": 365, "text": "Cisco" }, { "code": null, "e": 377, "s": 371, "text": "Wipro" }, { "code": null, "e": 386, "s": 377, "text": "Ola-Cabs" }, { "code": null, "e": 401, "s": 386, "text": "Morgan-Stanley" }, { "code": null, "e": 415, "s": 401, "text": "Goldman-Sachs" }, { "code": null, "e": 580, "s": 415, "text": "\n\nPopular Topic Tags\n\nMaths\nArray\nDynamic-Programming\nGreedy-Algorithm\nHashing\nTree\nBit-Algorithm\nMatrix\nBacktracking\nOperating System\nLinked-List\nGraph\nshow more\n\n" }, { "code": null, "e": 586, "s": 580, "text": "Maths" }, { "code": null, "e": 592, "s": 586, "text": "Array" }, { "code": null, "e": 612, "s": 592, "text": "Dynamic-Programming" }, { "code": null, "e": 629, "s": 612, "text": "Greedy-Algorithm" }, { "code": null, "e": 637, "s": 629, "text": "Hashing" }, { "code": null, "e": 642, "s": 637, "text": "Tree" }, { "code": null, "e": 656, "s": 642, "text": "Bit-Algorithm" }, { "code": null, "e": 663, "s": 656, "text": "Matrix" }, { "code": null, "e": 676, "s": 663, "text": "Backtracking" }, { "code": null, "e": 693, "s": 676, "text": "Operating System" }, { "code": null, "e": 705, "s": 693, "text": "Linked-List" }, { "code": null, "e": 711, "s": 705, "text": "Graph" }, { "code": null, "e": 810, "s": 711, "text": "\n\n'School' level Subjective Problems \n\n\t\t\t\t\t\t\tThis Question's [Answers : 0] [Views : 3781]\n\t\t\t\t\t\t\n" }, { "code": null, "e": 847, "s": 810, "text": "\n'School' level Subjective Problems " }, { "code": null, "e": 907, "s": 847, "text": "\n\t\t\t\t\t\t\tThis Question's [Answers : 0] [Views : 3781]\n\t\t\t\t\t\t" }, { "code": null, "e": 1296, "s": 907, "text": "Hi friend, I am Java Learner, I would like to seek for help on my coding. The requirement is as below, The knights must visits the maximum number of board positions without attacking each other. The Knight can move in the shape of the letter, 'L', over two in one direction and then over one in a perpendicular direction. If the Knight rests at the square marked X. Please assist, thanks." 
}, { "code": null, "e": 2806, "s": 1296, "text": "Package knightstour;\nimport java.util.*;\n\npublic class KnightsTour {\n private static int board[][] = new int[8][8];\n private static int stepCounter = 1;\n \n public KnightsTour() { \n initBoard(board);\n tour(0,0); \n printSol(board); \n }\n\n public static void printSol(int[][] a) { \n for (int i = 0; i < a.length; i++) {\n for (int j = 0; j < a[i].length; j++) {\n if(a[i][j]>9) { \n System.out.print(a[i][j] + \" \");\n } else { \n System.out.print(a[i][j] + \" \");\n } \n }\n System.out.println(); } System.out.println();\n }\n\n public static void initBoard(int[][] a) {\n for (int i = 0; i < a.length; i++) {\n for (int j = 0; j < a[i].length; j++) { \n a[i][j] = -1; \n } \n } \n }\n\n public void tour(int x, int y) {\n if (((x < 0) || (x >= 8) || (y < 0) || (y >= 8)) || (board[x][y] != -1)){ \n return;\n } else { \n stepCounter++;\n board[x][y] = stepCounter;\n tour(x+2, y+1);\n tour(x+1, y-2);\n tour(x+1, y+2);\n tour(x-1, y+2);\n tour(x-2, y-1);\n tour(x-2, y+1);\n tour(x-1, y-2);\n tour(x+2, y-1);\n } \n }\n\n public boolean spaceAvailable(int X, int Y) { }\n\n Public static void main(String[] args) {\n new KnightsTour(); \n }\n}" }, { "code": null, "e": 2814, "s": 2808, "text": "Error" }, { "code": null, "e": 2836, "s": 2814, "text": "Content Related Issue" } ]
How to Use Singular Value Decomposition (SVD) for Image Classification in Python | by Nikos Kafritsas | Towards Data Science
One of the most elusive topics in linear algebra is the Singular Value Decomposition (SVD) method. It is also one of the most fundamental techniques, because it paves the way for understanding Principal Component Analysis (PCA), Latent Dirichlet Allocation (LDA) and the concept of matrix factorization in general.

The elusiveness of SVD stems from the fact that while this method requires a lot of maths and linear algebra to grasp, its practical applications are often overlooked. There are many people who think they grasp the concept of SVD, but in fact they don't. It is not only a dimensionality reduction technique: essentially, the magic of SVD is that any matrix A can be written as the sum of rank-1 matrices! This will become apparent later.

The purpose of this article is to show the usefulness and the underlying mechanisms of SVD by applying it to a well-known example: handwritten digit classification.

The basic relation of SVD is

$$A = U S V^T$$

where:
U and V are orthogonal matrices,
S is a diagonal matrix.

More specifically:

$$A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \dots + \sigma_r u_r v_r^T$$

which shows the aforementioned claim, that any matrix A can be written as the sum of rank-1 matrices.

A few useful properties of SVD:

1. The U and V matrices are constructed from the eigenvectors of $AA^T$ and $A^TA$ respectively.
2. Any matrix has an SVD decomposition. This is because the $AA^T$ and $A^TA$ matrices have a special property (among others): they are at least positive semidefinite (which means their eigenvalues are either positive or zero).
3. The S matrix contains the square roots of the positive eigenvalues. These are also called singular values.
4. In most programming languages, including Python, the columns of U and V are arranged in such a way that columns with higher eigenvalues precede those with smaller values.
5. The u1, u2, ... vectors are also called left singular vectors and they form an orthonormal basis. Correspondingly, the v1, v2, ... vectors are called right singular vectors.
6. The rank of matrix A is the number of non-zero singular values of the S matrix.
7. Eckart-Young-Mirsky Theorem: the best rank-k approximation (with k < r) of a rank-r matrix A, in both the 2-norm and the Frobenius norm, is:

$$A_k = \sigma_1 u_1 v_1^T + \dots + \sigma_k u_k v_k^T$$

In other words: if you want to approximate any matrix A with one of a lower rank k, the optimal way to do so is by applying SVD on A and taking only the first k basis vectors with the highest k singular values.
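As a quick, self-contained numeric check of the theorem (this small demo is an addition, not part of the original walkthrough), the 2-norm error of the rank-k truncation should equal the (k+1)-th singular value:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

# Full SVD, then keep only the k largest singular triplets
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young-Mirsky: the spectral-norm error equals sigma_{k+1}
print(np.linalg.norm(A - A_k, 2))  # these two printed values
print(s[k])                        # should coincide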
First, we load the data:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.linalg import svd, norm
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import h5py
import os

# define class labels
labels = {0: "0", 1: "1", 2: "2", 3: "3", 4: "4",
          5: "5", 6: "6", 7: "7", 8: "8", 9: "9"}

# load the dataset
with h5py.File(os.path.join(os.getcwd(), 'usps.h5'), 'r') as hf:
    train = hf.get('train')
    test = hf.get('test')
    x_train = pd.DataFrame(train.get('data')[:]).T
    y_train = pd.DataFrame(train.get('target')[:]).T
    x_test = pd.DataFrame(test.get('data')[:]).T
    y_test = pd.DataFrame(test.get('target')[:]).T

print(x_train.shape)   # (256, 7291)
print(y_train.shape)   # (1, 7291)
print(x_test.shape)    # (256, 2007)
print(y_test.shape)    # (1, 2007)

The data have been loaded in such a way as to match the dimensions from the quick reminder section above. The columns of x_train and x_test contain the digits as vectors, flattened into arrays of size 256 (since each digit image is 16×16). On the other hand, y_train and y_test are row vectors which contain the actual class for each digit (values between 0 and 9) for the train and test datasets respectively.

Figure 1 displays the first image in the training dataset:

digit_image = x_train[0]
plt.imshow(digit_image.to_numpy().reshape(16, 16), cmap='binary')

The methodology for digit classification is organised in the following steps:

We split the x_train dataframe into 10 matrices (column-wise), one for each digit [0–9]. These are the A matrices that were mentioned previously. The goal is to apply SVD to each one of them separately. For instance, the A0 matrix contains only images of digit 0, and its shape is (256, 1194) since there are 1194 0's in the dataset.

Next, we apply SVD to each one of the [A0, A1, ..., A9] matrices. For each A matrix we store the corresponding U, S and V matrices. We will mostly work with the U matrix.

Each data matrix A, representing one digit, has a 'distinctive characteristic'. This differentiation is reflected in the first few left singular vectors (u1, u2...). Since these eigenvectors are essentially basis vectors, if an unknown digit can be better approximated with the basis of another digit (e.g. the digit 3), then we can assume that the unknown digit is classified as that digit (as 3). This will become more apparent later in the programming example.

Step 1

This is fairly easy. We just create the [A0, A1, ..., A9] matrices and store them in a dictionary, called alpha_matrices:

alpha_matrices = {}
for i in range(10):
    alpha_matrices.update({"A" + str(i): x_train.loc[:, list(y_train.loc[0, :] == i)]})

print(alpha_matrices['A0'].shape)
# (256, 1194)

Step 2

This step is also straightforward. We store the U, S and V matrices in the left_singular, singular_matrix and right_singular dictionaries respectively:

left_singular = {}
singular_matrix = {}
right_singular = {}
for i in range(10):
    u, s, v_t = svd(alpha_matrices['A' + str(i)], full_matrices=False)
    left_singular.update({"u" + str(i): u})
    singular_matrix.update({"s" + str(i): s})
    right_singular.update({"v_t" + str(i): v_t})

print(left_singular['u0'].shape)
# (256, 256)

Let's display what information is contained in those matrices. We will use as an example the U, S and V matrices of digit 3, which correspond to the following variables in our example:

# left_singular['u3']
# singular_matrix['s3']
# right_singular['v_t3']

plt.figure(figsize=(20, 10))
columns = 5
for i in range(10):
    plt.subplot(10 // columns + 1, columns, i + 1)
    plt.imshow(left_singular["u3"][:, i].reshape(16, 16), cmap='binary')

Figure 2 displays as images the first 10 left singular vectors [u1, u2, ..., u10] (out of 256). All of them depict the digit 3, with the first one (the u1 vector) being the clearest.

Figure 3 shows the singular values of digit 3 from the S matrix, in log scale:

plt.figure(figsize=(9, 6))
plt.plot(singular_matrix['s3'], color='coral', marker='o')
plt.title('Singular values for digit $3$', fontsize=15, weight="bold", pad=20)
plt.ylabel('Singular values', fontsize=15)
plt.yscale("log")
plt.show()

Given that the singular values are sorted, the first few are the highest (in terms of value) and follow a 'steep curve' pattern.

By taking Figure 2 and Figure 3 into account, we can graphically confirm the matrix approximation properties of SVD for digit 3 (remember the Eckart–Young–Mirsky theorem): the first left singular vector represents the intrinsic property of matrix A3. Indeed, in Figure 2, the first singular vector u1 looks like the digit 3, and the following left singular vectors represent the most important variations of the training set around u1.

The question is whether we can use only the first k singular vectors and still have a good approximation of the basis. We can test that experimentally.

Step 3

Given an unknown digit represented by a (1, 256) vector called z, and the sets of left singular vectors [u1, u2, ..., uk], where each set represents the corresponding digit/A matrix, what is the target value of z? Notice that our index is k (the first dominant singular vectors) and not n (all of them). To solve this problem, all we have to do is the following:

The goal is to compute how well a digit from the test set can be represented in the 10 different orthonormal bases. This can be done by computing the residual vector in least squares problems of the type:

min_a || U_k a − z ||_2

The solution of the least squares problem is a = U_k^T z (remember that the U matrix is orthogonal). The residual norm then becomes:

|| (I − U_k U_k^T) z ||_2

And that's it! Now we have everything we need.
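To see the residual formula in action on a single test image before running the full evaluation, one digit can be classified like this (an illustrative sketch added here; it reuses the variables defined above):

# Classify one test image by picking the digit whose basis gives the smallest residual.
k = 12                                   # number of basis vectors to keep
z = x_test[0].to_numpy()                 # one flattened 16x16 test image
residuals = []
for j in range(10):
    u = left_singular["u" + str(j)][:, :k]        # first k left singular vectors of digit j
    residuals.append(norm(z - u @ (u.T @ z)))     # ||(I - U_k U_k^T) z||_2
print("predicted digit:", int(np.argmin(residuals)))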
Using the last formula, we proceed to calculate the test accuracy for different k values:

I = np.eye(x_test.shape[0])
kappas = np.arange(5, 21)
len_test = x_test.shape[1]
predictions = np.empty((y_test.shape[1], 0), dtype=int)

for t in list(kappas):
    prediction = []
    for i in range(len_test):
        residuals = []
        for j in range(10):
            u = left_singular["u" + str(j)][:, 0:t]
            res = norm(np.dot(I - np.dot(u, u.T), x_test[i]))
            residuals.append(res)
        index_min = np.argmin(residuals)
        prediction.append(index_min)
    prediction = np.array(prediction)
    predictions = np.hstack((predictions, prediction.reshape(-1, 1)))

scores = []
for i in range(len(kappas)):
    score = accuracy_score(y_test.loc[0, :], predictions[:, i])
    scores.append(score)

data = {"Number of basis vectors": list(kappas), "accuracy_score": scores}
df = pd.DataFrame(data).set_index("Number of basis vectors")

We can also show this result graphically:

Both Table 1 and Figure 4 display the accuracy score for different numbers of basis vectors. The best score is achieved by using k = 12.

Next, the confusion matrix is displayed:

pd.set_option('display.max_colwidth', 12)
# column 7 of predictions corresponds to k = 12 (kappas start at 5)
confusion_matrix_df = pd.DataFrame(confusion_matrix(y_test.loc[0, :], predictions[:, 7]))
confusion_matrix_df = confusion_matrix_df.rename(columns=labels, index=labels)
confusion_matrix_df

And the f1 score (both macro-average and per class):

print(classification_report(y_test.loc[0, :], predictions[:, 7]))

Comments:

* Digits 0, 1, 6 and 7 perform the best in terms of f1-score.
* Digits 5 and 3 perform the worst in terms of f1-score.

Let's look at some examples of misclassified images:

misclassified = np.where(y_test.loc[0, :] != predictions[:, 7])

plt.figure(figsize=(20, 10))
columns = 5
for i in range(2, 12):
    misclassified_id = misclassified[0][i]
    image = x_test[misclassified_id]
    plt.subplot(10 // columns + 1, columns, i - 1)
    plt.imshow(image.to_numpy().reshape(16, 16), cmap='binary')
    plt.title("True label:" + str(y_test.loc[0, misclassified_id]) + '\n' +
              "Predicted label:" + str(predictions[misclassified_id, 7]))

Clearly, some of these digits are very poorly written.

Conclusion

In practice, the A data matrix is essentially a low-rank matrix plus noise: A = A' + N. By applying the Eckart–Young–Mirsky theorem, we approximate the data matrix A with a matrix of the correct rank k. This has the effect of keeping the intrinsic properties of the A matrix intact, while simultaneously removing the extra noise. But how do we find the correct rank k? In our case, we found k empirically, in terms of test accuracy, by experimenting with the first dominant singular values.

It should be noted, however, that there are other algorithms which of course outperform this technique. Still, the focus of this case study is to show an alternative way of classifying handwritten digits by tweaking a well-known matrix factorization technique. For more information on the theoretical techniques and principles behind this example, check this book.
[ { "code": null, "e": 486, "s": 172, "text": "One of the most elusive topics in linear algebra is the Singular Value Decomposition (SVD) method. It is also one of the most fundamental techniques because it paves the way for understanding Principal component analysis (PCA), Latent Dirichlet Allocation (LDA) and the concept of matrix factorization in general." }, { "code": null, "e": 935, "s": 486, "text": "The elusiveness of SVD stems from the fact that while this method requires a lot of maths and linear algebra in order to grasp it, the practical applications are often overlooked. There are many people who think they grasp the concept of SVD, but in fact they don’t. It’s not only a dimensionality reduction technique: Essentially, the magic of SVD is that any matrix A can be written as the sum of rank 1 matrices! This will become apparent later." }, { "code": null, "e": 1101, "s": 935, "text": "The purpose of this article is to show the usefulness and the underlying mechanisms of SVD by applying it to a well known-example: Handwritten digits classification." }, { "code": null, "e": 1130, "s": 1101, "text": "The basic relation of SVD is" }, { "code": null, "e": 1191, "s": 1130, "text": "where:U and V are orthogonal matrices,S is a diagonal matrix" }, { "code": null, "e": 1210, "s": 1191, "text": "More specifically:" }, { "code": null, "e": 1312, "s": 1210, "text": "which shows the aforementioned claim, that any matrix A can be written as the sum of rank 1 matrices." }, { "code": null, "e": 1344, "s": 1312, "text": "A few useful properties of SVD:" }, { "code": null, "e": 2283, "s": 1344, "text": "The U and V matrices are constructed from the eigenvectors of AAT and ATA respectively.Any matrix has an SVD decomposition. This is because the AAT and ATA matrices have a special property (among others): They are at least positive semidefinite (which means their eigenvalues are either positive or zero).The S matrix contains the square roots of the positive eigenvalues. These are also called singular values.In most programming languages, including Python, the columns of U and V are arranged in such a way that columns with higher eigenvalues precede those with smaller values.The u1, u2.... vectors are also called left singular vectors and they form an orthonormal basis. Correspondingly, the v1, v2.... vectors are called right singular vectors.The rank of matrix A is the number of non-zero singular values of S matrix.Eckart-Young-Mirsky Theorem: The best k rank approximation of a rank k<r A matrix in the 2-norm and F- norm is:" }, { "code": null, "e": 2371, "s": 2283, "text": "The U and V matrices are constructed from the eigenvectors of AAT and ATA respectively." }, { "code": null, "e": 2590, "s": 2371, "text": "Any matrix has an SVD decomposition. This is because the AAT and ATA matrices have a special property (among others): They are at least positive semidefinite (which means their eigenvalues are either positive or zero)." }, { "code": null, "e": 2697, "s": 2590, "text": "The S matrix contains the square roots of the positive eigenvalues. These are also called singular values." }, { "code": null, "e": 2868, "s": 2697, "text": "In most programming languages, including Python, the columns of U and V are arranged in such a way that columns with higher eigenvalues precede those with smaller values." }, { "code": null, "e": 3040, "s": 2868, "text": "The u1, u2.... vectors are also called left singular vectors and they form an orthonormal basis. Correspondingly, the v1, v2.... 
vectors are called right singular vectors." }, { "code": null, "e": 3116, "s": 3040, "text": "The rank of matrix A is the number of non-zero singular values of S matrix." }, { "code": null, "e": 3228, "s": 3116, "text": "Eckart-Young-Mirsky Theorem: The best k rank approximation of a rank k<r A matrix in the 2-norm and F- norm is:" }, { "code": null, "e": 3244, "s": 3228, "text": "In other words:" }, { "code": null, "e": 3437, "s": 3244, "text": "If you want to approximate any matrix A with one of a lower rank k, the optimal way to do so is by applying SVD on A and take only the first k basis vectors with the highest k singular values." }, { "code": null, "e": 3681, "s": 3437, "text": "For this example, we will use the Handwritten Digits USPS (U.S. Postal Service) dataset. The dataset contains 7291 train and 2007 test images of handwritten digits between [0–9] . The images are 16*16 grayscale pixels. First, we load the data:" }, { "code": null, "e": 4524, "s": 3681, "text": "import numpy as npimport pandas as pdimport matplotlib.pyplot as pltfrom scipy.linalg import svd, normfrom sklearn.metrics import accuracy_score, confusion_matrix, classification_reportimport h5pyimport os# define class labelslabels = { 0: \"0\", 1: \"1\", 2: \"2\", 3: \"3\", 4: \"4\", 5: \"5\", 6: \"6\", 7: \"7\", 8: \"8\", 9: \"9\"}# load the datasetwith h5py.File(os.path.join(os.getcwd(), 'usps.h5'), 'r') as hf: train = hf.get('train') test = hf.get('test') x_train = pd.DataFrame(train.get('data')[:]).T y_train = pd.DataFrame(train.get('target')[:]).T x_test = pd.DataFrame(test.get('data')[:]).T y_test = pd.DataFrame(test.get('target')[:]).Tprint(x_train.shape)print(y_train.shape)print(x_test.shape)print(y_test.shape)#Output:#(256, 7291)#(1, 7291)#(256, 2007)#(1, 2007)" }, { "code": null, "e": 4948, "s": 4524, "text": "The data have been loaded in such a way to match the dimensions from the quick reminder section above. The columns of x_train and x_test contain the digits as vectors, flattened into arrays of size equal to 256 (since each digit is of 16x16 size). On the other hand, the y_train and y_test are row vectors which contain the actual classes for each digit (values between 0 and 9) for the train and test dataset respectively." }, { "code": null, "e": 5007, "s": 4948, "text": "Figure 1 displays the first image in the training dataset:" }, { "code": null, "e": 5093, "s": 5007, "text": "digit_image=x_train[0]plt.imshow(digit_image.to_numpy().reshape(16,16),cmap='binary')" }, { "code": null, "e": 5171, "s": 5093, "text": "The methodology for digit classification is organised in the following steps:" }, { "code": null, "e": 6129, "s": 5171, "text": "We split the x_train dataframe into 10 matrices (columnwise), one for each digit[0–9]. These are the A’s matrices that were mentioned previously. The goal is to apply SVD to each one of them separately. For instance, the A0 matrix contains only images of digit 0, and its shape is (256,1194) since there are 1194 0’s in the dataset.Next, we apply SVD to each one of the [A0, A1.. A9] matrices. For each A matrix we store the corresponding U, S and V matrices. We will mostly work with U matrix.Each data matrix A, represented by a digit, has a ‘distinctive characteristic’. This differentiation is reflected in the first few left singular vectors (u1, u2....). 
Since these eigenvectors are essentially basis vectors, if an unknown digit can be better approximated with the basis of another digit (e.g the digit 3), then we can assume that the unknown digit is classified as that digit (as 3). This will become more apparent later in the programming example." }, { "code": null, "e": 6462, "s": 6129, "text": "We split the x_train dataframe into 10 matrices (columnwise), one for each digit[0–9]. These are the A’s matrices that were mentioned previously. The goal is to apply SVD to each one of them separately. For instance, the A0 matrix contains only images of digit 0, and its shape is (256,1194) since there are 1194 0’s in the dataset." }, { "code": null, "e": 6625, "s": 6462, "text": "Next, we apply SVD to each one of the [A0, A1.. A9] matrices. For each A matrix we store the corresponding U, S and V matrices. We will mostly work with U matrix." }, { "code": null, "e": 7089, "s": 6625, "text": "Each data matrix A, represented by a digit, has a ‘distinctive characteristic’. This differentiation is reflected in the first few left singular vectors (u1, u2....). Since these eigenvectors are essentially basis vectors, if an unknown digit can be better approximated with the basis of another digit (e.g the digit 3), then we can assume that the unknown digit is classified as that digit (as 3). This will become more apparent later in the programming example." }, { "code": null, "e": 7096, "s": 7089, "text": "Step 1" }, { "code": null, "e": 7214, "s": 7096, "text": "This is fairly easy. We just create the [A0, A1.. A9] matrices and store them in a dictionary, called alpha_matrices:" }, { "code": null, "e": 7376, "s": 7214, "text": "alpha_matrices={}for i in range(10): alpha_matrices.update({\"A\"+str(i):x_train.loc[:,list(y_train.loc[0,:]==i)]})print(alpha_matrices['A0'].shape)#(256, 1194)" }, { "code": null, "e": 7383, "s": 7376, "text": "Step 2" }, { "code": null, "e": 7534, "s": 7383, "text": "This step is also straightforward. We store the U, S and V matrices in the left_singular, singular_matix and right_singular dictionaries respectively:" }, { "code": null, "e": 7841, "s": 7534, "text": "left_singular={}singular_matix={}right_singular={}for i in range(10): u, s, v_t = svd(alpha_matrices['A'+str(i)], full_matrices=False) left_singular.update({\"u\"+str(i):u}) singular_matix.update({\"s\"+str(i):s}) right_singular.update({\"v_t\"+str(i):v_t})print(left_singular['u0'].shape)#(256, 256)" }, { "code": null, "e": 8026, "s": 7841, "text": "Let’s display what information is contained in those matrices. We will use as an example the U, S and V matrices of digit 3, which correspond to the following variables in our example:" }, { "code": null, "e": 8261, "s": 8026, "text": "#left_singular[‘u3’]#right_singular[‘s3]#singular_matix[‘v_t3]plt.figure(figsize=(20,10))columns = 5for i in range(10): plt.subplot(10/ columns + 1, columns, i + 1) plt.imshow(left_singular[\"u3\"][:,i].reshape(16,16),cmap='binary')" }, { "code": null, "e": 8443, "s": 8261, "text": "Figure 2 displays as images the first 10 left singular vectors [u1, u2... u10] (out of 256). All of them depict the digit 3, with the first one (the u1 vector) being the most clear." 
}, { "code": null, "e": 8522, "s": 8443, "text": "Figure 3 shows the singular values of digit 3 from the S matrix, in log scale:" }, { "code": null, "e": 8752, "s": 8522, "text": "plt.figure(figsize = (9, 6))plt.plot(singular_matix[‘s3’], color=’coral’, marker=’o’)plt.title(‘Singular values for digit $3$’,fontsize=15,weight=”bold”,pad=20)plt.ylabel(‘Singular values’ ,fontsize=15)plt.yscale(“log”)plt.show()" }, { "code": null, "e": 8886, "s": 8752, "text": "Given that the singular values are sorted, the first few ones are the highest (in terms of value) and follow a ‘steep curve pattern’." }, { "code": null, "e": 9328, "s": 8886, "text": "By taking Figure 2 and Figure 3 into account, we can graphically confirm the matrix approximation properties of SVD for digit 3 (remember the Eckart-Young-Mirsky Theorem): The first left singular vector represents the intrinsic property value of matrix A3. Indeed, in Figure 2, the first singular vector u1 looks like the digit 3, and the following left singular vectors represent the most important variations of the training set around u1." }, { "code": null, "e": 9478, "s": 9328, "text": "The question is if we can use only the first k singular vectors, and still have a good approximation of the basis. We could test that experimentally." }, { "code": null, "e": 9485, "s": 9478, "text": "Step 3" }, { "code": null, "e": 9852, "s": 9485, "text": "Given an unknown digit represented by (1,256) vector called z, and the set of left singular vectors [u1, u2... uk] where each set represents the corresponding digit matrix/A matrix, what is the target value of z? Notice that our index is k (the first dominant singular eigenvectors) and not n (all of them). To solve this problem, all we have to do is the following:" }, { "code": null, "e": 9968, "s": 9852, "text": "The goal is to compute how well a digit from the test set can be represented in the 10 different orthonormal bases." }, { "code": null, "e": 10057, "s": 9968, "text": "This can be done by computing the residual vector in least squares problems of the type:" }, { "code": null, "e": 10102, "s": 10057, "text": "The solution of the Least Squares problem is" }, { "code": null, "e": 10179, "s": 10102, "text": "remember that U matrix is orthogonal. The residual norm vector then becomes:" }, { "code": null, "e": 10316, "s": 10179, "text": "And that’s it! Now we have everything we need. Using the last formula, we proceed to calculate the test accuracy for different k values:" }, { "code": null, "e": 11136, "s": 10316, "text": "I = np.eye(x_test.shape[0])kappas=np.arange(5,21)len_test=x_test.shape[1]predictions=np.empty((y_test.shape[1],0), dtype = int)for t in list(kappas): prediction = [] for i in range(len_test): residuals = [] for j in range(10): u=left_singular[\"u\"+str(j)][:,0:t] res=norm( np.dot(I-np.dot(u,u.T), x_test[i] )) residuals.append(res) index_min = np.argmin(residuals) prediction.append(index_min) prediction=np.array(prediction) predictions=np.hstack((predictions,prediction.reshape(-1,1)))scores=[]for i in range(len(kappas)): score=accuracy_score(y_test.loc[0,:],predictions[:,i]) scores.append(score)data={\"Number of basis vectors\":list(thresholds), \"accuracy_score\":scores}df=pd.DataFrame(data).set_index(\"Number of basis vectors\")" }, { "code": null, "e": 11180, "s": 11136, "text": "We could also show this result graphically:" }, { "code": null, "e": 11314, "s": 11180, "text": "Both Table 1 and Figure 4 display the accuracy score for different number of basis vectors. 
The best score is achieved by using k=12." }, { "code": null, "e": 11355, "s": 11314, "text": "Next, the confusion matrix is displayed:" }, { "code": null, "e": 11585, "s": 11355, "text": "pd.set_option(‘display.max_colwidth’,12)confusion_matrix_df = pd.DataFrame( confusion_matrix(y_test.loc[0,:],predictions[:,7]) )confusion_matrix_df = confusion_matrix_df.rename(columns = labels, index = labels)confusion_matrix_df" }, { "code": null, "e": 11639, "s": 11585, "text": "And the f1 score (both macro-average and per class) :" }, { "code": null, "e": 11702, "s": 11639, "text": "print(classification_report(y_test.loc[0,:],predictions[:,7]))" }, { "code": null, "e": 11828, "s": 11702, "text": "Comments:* Digits 0,1,6 and 7 perform the best in terms of f1-score.* Digits 5, and 3 perform the worst in terms of f1-score." }, { "code": null, "e": 11878, "s": 11828, "text": "Let’s look some examples of misclassified images:" }, { "code": null, "e": 12313, "s": 11878, "text": "misclassified = np.where(y_test.loc[0,:] != predictions[:,7])plt.figure(figsize=(20,10))columns = 5for i in range(2,12): misclassified_id=misclassified[0][i] image=x_test[misclassified_id] plt.subplot(10/ columns + 1, columns, i-1) plt.imshow(image.to_numpy().reshape(16,16),cmap='binary') plt.title(\"True label:\"+str(y_test.loc[0,misclassified_id]) + '\\n'+ \"Predicted label:\"+str(predictions[misclassified_id,12]))" }, { "code": null, "e": 12368, "s": 12313, "text": "Clearly, some of these digits are very poorly written." }, { "code": null, "e": 12379, "s": 12368, "text": "Conclusion" }, { "code": null, "e": 12742, "s": 12379, "text": "In practice, the A data matrix is essentially a low-rank matrix plus noise: A = A’ + N. By applying the Eckart-Young-Mirsk therorem, we approximate the data matrix A with a matrix of correct rank k. This has the effect of keeping the intrinsic properties of A matrix intact, while simultaneously the extra noise is removed. But how do we find the correct rank k?" } ]
Find Pair Given Difference | Practice | GeeksforGeeks
Given an array Arr[] of size L and a number N, you need to write a program to find if there exists a pair of elements in the array whose difference is N.

Example 1:

Input: L = 6, N = 78
arr[] = {5, 20, 3, 2, 5, 80}
Output: 1
Explanation: (2, 80) have a difference of 78.

Example 2:

Input: L = 5, N = 45
arr[] = {90, 70, 20, 80, 50}
Output: -1
Explanation: There is no pair with a difference of 45.

Your Task:
You need not take input or print anything. Your task is to complete the function findPair() which takes the array arr, the size of the array L and N as input parameters and returns True if the required pair exists, else returns False.

Expected Time Complexity: O(L * log(L)).
Expected Auxiliary Space: O(1).

Constraints:
1 <= L <= 10^4
1 <= Arr[i] <= 10^5
0 <= N <= 10^5

+2 itsmemritu, 6 days ago

bool findPair(int arr[], int size, int n){
    set<int> s;
    for(int i = 0; i < size; i++){
        if(s.find(n + arr[i]) != s.end() or s.find(arr[i] - n) != s.end()){
            return true;
        }
        s.insert(arr[i]);
    }
    return false;
}

+3 yuvrajranabtcse20, 1 week ago

C++ code:

bool findPair(int arr[], int size, int n){
    sort(arr, arr + size);
    for(int i = 0; i < size; i++){
        int find = arr[i] + n;
        int start = i + 1;
        int end = size - 1;
        int mid = start + (end - start) / 2;
        while(start <= end){
            if(arr[mid] == find) return true;
            else if(arr[mid] > find) end = mid - 1;
            else start = mid + 1;
            mid = start + (end - start) / 2;
        }
    }
    return false;
}

0 amarrajsmart197, 1 week ago

map<int,int> mp;
for(int i = 0; i < size; i++)
{
    mp[arr[i]]++;
}
if(n == 0)
{
    for(auto it : mp){
        if(it.second > 1)
            return true;
    }
    return false;
}
for(int i = 0; i < size; i++)
{
    if(mp.find(arr[i] + n) != mp.end())
    {
        return true;
    }
}
return false;

+1 vaibhaviitj, 2 weeks ago

Hi GFG Team, you are missing this test case:
5 0
1 2 2 6 5

+1 harshsinha1808, 2 weeks ago

Java code using binary search:

if(n == 0){
    return false;
}
int dif = 0;
Arrays.sort(arr);
for(int i = 0; i < size; i++) {
    int a = arr[i];
    dif = Math.abs(n + a);
    int start = 0;
    int end = size - 1;
    int mid = start + (end - start) / 2;
    while(start <= end) {
        if(arr[mid] == dif) {
            return true;
        } else if(arr[mid] > dif) {
            end = mid - 1;
        } else {
            start = mid + 1;
        }
        mid = start + (end - start) / 2;
    }
}
return false;

+2 thevaibhavkute, 4 weeks ago

bool findPair(int arr[], int size, int n){
    sort(arr, arr + size);

    int low = 0;
    int high = 1;

    while(low < size && high < size)
    {
        if(abs(arr[high] - arr[low]) == n && low != high)
            return true;
        else if((arr[high] - arr[low]) < n)
            high++;
        else
            low++;
    }
    return false;
}

0 vibhanshukaushal10, 1 month ago

C++ solution:

bool findPair(int arr[], int size, int n){
    vector<int> v;
    unordered_map<int,int> mp;
    int i;
    for(i = 0; i < size; i++)
    {
        v.push_back(abs(arr[i] - n));
    }
    for(i = 0; i < size; i++)
    {
        mp[arr[i]]++;
    }
    for(i = 0; i < v.size(); i++)
    {
        if(mp[v[i]] == 1 && n != 0)
        {
            return true;
        }
        else if(n == 0 && mp[v[i]] == 2)
        {
            return true;
        }
    }
    return false;
}

-1 shivommahar, 1 month ago

Easy C++:

bool findPair(int arr[], int size, int n){
    sort(arr, arr + size);
    int l = 0;
    int h = 1;
    while(l < size && h < size)
    {
        if(arr[h] - arr[l] == n && h > l)
            return true;
        else
        {
            if(arr[h] - arr[l] > n)
                l++;
            else
                h++;
        }
    }
    return false;
}

0 abhinavsingh47, 1 month ago

public boolean findPair(int arr[], int size, int n)
{
    // first sort the array to perform binary search
    Arrays.sort(arr);
    for(int i = 0; i < size; i++){
        // now we are finding the n + arr[i] number using binary search
        int complementIndex = Arrays.binarySearch(arr, n + arr[i]);
        // making sure that we don't find the same number, because for n = 0 this turns true
        if(complementIndex > 0 && complementIndex != i)
            return true;
    }
    return false;
}
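For completeness, here is a compact Python sketch of the sort-plus-two-pointers approach that matches the expected O(L log L) time and O(1) auxiliary space (an illustrative addition, mirroring the C++ two-pointer solutions above):

def find_pair(arr, n):
    # Sort once, then walk two indices: the gap arr[high] - arr[low] grows
    # when high advances and shrinks when low advances.
    arr = sorted(arr)
    low, high = 0, 1
    while high < len(arr):
        diff = arr[high] - arr[low]
        if diff == n and low != high:
            return True
        if diff < n or low == high:
            high += 1
        else:
            low += 1
    return False

print(find_pair([5, 20, 3, 2, 5, 80], 78))  # True
print(find_pair([90, 70, 20, 80, 50], 45))  # False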
[ { "code": null, "e": 392, "s": 238, "text": "Given an array Arr[] of size L and a number N, you need to write a program to find if there exists a pair of elements in the array whose difference is N." }, { "code": null, "e": 403, "s": 392, "text": "Example 1:" }, { "code": null, "e": 507, "s": 403, "text": "Input:\nL = 6, N = 78\narr[] = {5, 20, 3, 2, 5, 80}\nOutput: 1\nExplanation: (2, 80) have difference of 78." }, { "code": null, "e": 518, "s": 507, "text": "Example 2:" }, { "code": null, "e": 632, "s": 518, "text": "Input:\nL = 5, N = 45\narr[] = {90, 70, 20, 80, 50}\nOutput: -1\nExplanation: There is no pair with difference of 45." }, { "code": null, "e": 866, "s": 632, "text": "Your Task:\nYou need not take input or print anything. Your task is to complete the function findPair() which takes array arr, size of the array L and N as input parameters and returns True if required pair exists, else return False. " }, { "code": null, "e": 938, "s": 866, "text": "Expected Time Complexity: O(L* Log(L)).\nExpected Auxiliary Space: O(1)." }, { "code": null, "e": 978, "s": 938, "text": "Constraints:\n1<=L<=104 \n1<=Arr[i]<=105 " }, { "code": null, "e": 988, "s": 978, "text": "0<=N<=105" }, { "code": null, "e": 991, "s": 988, "text": "+2" }, { "code": null, "e": 1012, "s": 991, "text": "itsmemritu6 days ago" }, { "code": null, "e": 1261, "s": 1012, "text": "bool findPair(int arr[], int size, int n){ //code set<int>s; for(int i=0 ;i<size ;i++){ if(s.find(n+arr[i])!=s.end() or s.find(arr[i]-n)!=s.end() ){ return true; } s.insert(arr[i]); } return false; }" }, { "code": null, "e": 1264, "s": 1261, "text": "+3" }, { "code": null, "e": 1292, "s": 1264, "text": "yuvrajranabtcse201 week ago" }, { "code": null, "e": 1301, "s": 1292, "text": "c++ code" }, { "code": null, "e": 1703, "s": 1303, "text": "bool findPair(int arr[], int size, int n){ sort(arr,arr+size); for(int i=0;i<size;i++){ int find=arr[i]+n; int start=i+1; int end=size-1; int mid=start+(end-start)/2; while(start<=end){ if(arr[mid]==find)return true; else if(arr[mid]>find) end=mid-1; else start=mid+1; mid=start+(end-start)/2; } } return false; }" }, { "code": null, "e": 1705, "s": 1703, "text": "0" }, { "code": null, "e": 1731, "s": 1705, "text": "amarrajsmart1971 week ago" }, { "code": null, "e": 2092, "s": 1731, "text": " map<int,int> mp; for(int i=0;i<size;i++) { mp[arr[i]]++; } if(n==0) { for(auto it:mp){ if(it.second>1) return true; } return false; } for(int i=0;i<size;i++) { if(mp.find(arr[i]+n)!=mp.end()) { if(mp.find(arr[i]+n)!=mp.end()) return true; } } return false;}" }, { "code": null, "e": 2095, "s": 2092, "text": "+1" }, { "code": null, "e": 2118, "s": 2095, "text": "vaibhaviitj2 weeks ago" }, { "code": null, "e": 2175, "s": 2118, "text": "Hi GFG TeamYou are missing this test case:5 0 1 2 2 6 5 " }, { "code": null, "e": 2178, "s": 2175, "text": "+1" }, { "code": null, "e": 2204, "s": 2178, "text": "harshsinha18082 weeks ago" }, { "code": null, "e": 2673, "s": 2204, "text": "Java Code using binary search\n\n if(n==0){\n return false;\n }\n int dif = 0;\n\t\tArrays.sort(arr);\n\t\tfor(int i = 0 ; i< size ; i++) {\n\t\t\tint a = arr[i];\n\t\t\tdif = Math.abs(n+a);\n\t\t\tint start =0;\n\t\t\tint end = size-1;\n\t\t\tint mid = start+(end-start)/2;\n\t\t\twhile(start<=end) {\n\t\t\t\tif(arr[mid]==dif) {\n\t\t\t\t\treturn true;\n\t\t\t\t}else if(arr[mid]>dif) {\n\t\t\t\t\tend=mid-1;\n\t\t\t\t}else {\n\t\t\t\t\tstart=mid+1;\n\t\t\t\t}\n\t\t\t\tmid=start+(end-start)/2;\n\t\t\t}\n\t\t}\n\t\treturn false;" }, { "code": null, "e": 2675, 
"s": 2673, "text": "0" }, { "code": null, "e": 2685, "s": 2675, "text": "superdude" }, { "code": null, "e": 2711, "s": 2685, "text": "This comment was deleted." }, { "code": null, "e": 2714, "s": 2711, "text": "+2" }, { "code": null, "e": 2740, "s": 2714, "text": "thevaibhavkute4 weeks ago" }, { "code": null, "e": 3083, "s": 2740, "text": "bool findPair(int arr[], int size, int n){\n sort(arr,arr+size);\n \n int low=0;\n int high=1;\n \n while(low<size && high<size)\n {\n if(abs(arr[high]-arr[low])==n && low!=high)\n return true;\n else if((arr[high]-arr[low])<n)\n high++;\n else \n low++;\n }\n return false;\n}" }, { "code": null, "e": 3085, "s": 3083, "text": "0" }, { "code": null, "e": 3115, "s": 3085, "text": "vibhanshukaushal101 month ago" }, { "code": null, "e": 3125, "s": 3115, "text": "C++ SOL :" }, { "code": null, "e": 3570, "s": 3125, "text": "bool findPair(int arr[], int size, int n){ //code vector<int> v ; unordered_map<int,int> mp ; int i ; for(i=0 ; i<size ; i++) { v.push_back(abs(arr[i]-n)) ; } for(i=0 ; i<size ; i++) { mp[arr[i]]++ ; } for(i=0 ; i<v.size() ; i++) { if(mp[v[i]]==1 && n!=0) { return true ; } else if(n==0 && mp[v[i]]==2) { return true ; } } return false;}" }, { "code": null, "e": 3573, "s": 3570, "text": "-1" }, { "code": null, "e": 3596, "s": 3573, "text": "shivommahar1 month ago" }, { "code": null, "e": 3605, "s": 3596, "text": "easy c++" }, { "code": null, "e": 3887, "s": 3605, "text": "bool findPair(int arr[], int size, int n){ sort(arr, arr+size); int l=0; int h=1; while(l<size && h<size) { if(arr[h]-arr[l]==n && h>l) return true; else { if(arr[h]-arr[l]>n) l++; else h++;} } return false; }" }, { "code": null, "e": 3889, "s": 3887, "text": "0" }, { "code": null, "e": 3915, "s": 3889, "text": "abhinavsingh471 month ago" }, { "code": null, "e": 4441, "s": 3915, "text": " public boolean findPair(int arr[], int size, int n) { //code here. // first sort the array to perform binary search Arrays.sort(arr); for(int i=0;i<size;i++){ // now we are finding the n-arr[i] number using binary search int complementIndex = Arrays.binarySearch(arr,n+arr[i]); // making sure that we don't find the same number because for n =0 this turns true if(complementIndex>0 && complementIndex!=i) return true; } return false; }" }, { "code": null, "e": 4587, "s": 4441, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4623, "s": 4587, "text": " Login to access your submissions. " }, { "code": null, "e": 4633, "s": 4623, "text": "\nProblem\n" }, { "code": null, "e": 4643, "s": 4633, "text": "\nContest\n" }, { "code": null, "e": 4706, "s": 4643, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 4854, "s": 4706, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 5062, "s": 4854, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5168, "s": 5062, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
Layman’s Introduction to KNN. k-nearest neighbour algorithm is where... | by Rishi Sidhu | Towards Data Science
kNN stands for k-Nearest Neighbours. It is a supervised learning algorithm. This means that we train it under supervision, using the labelled data already available to us. Given a labelled dataset consisting of observations (x, y), we would like to capture the relationship between x — the data — and y — the label. More formally, we want to learn a function g : X→Y so that, given an unseen observation x, g(x) can confidently predict the corresponding output y.

Other examples of supervised learning algorithms include random forests, linear regression and logistic regression.

kNN is very simple to implement and is widely used as a first step in any machine learning setup. It is often used as a benchmark for more complex classifiers such as Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Despite its simplicity, k-NN can outperform more powerful classifiers and is used in a variety of applications, for example:

Genetics (bmcbioinformatics.biomedcentral.com)
Agriculture (www.scielo.br)
Aviation — air traffic flow prediction (ieeexplore.ieee.org)

As with most technological progress in the early 1900s, the kNN algorithm was also born out of research done for the armed forces. Two officers of the USAF School of Aviation Medicine — Fix and Hodges (1951) — wrote a technical report introducing a non-parametric method for pattern classification that has since become popular as the k-nearest neighbour (kNN) algorithm.

Let's say we have a dataset with two kinds of points — Label 1 and Label 2. Now, given a new point in this dataset, we want to figure out its label. The way this is done in kNN is by taking a majority vote of its k nearest neighbours. k can take any value between 1 and infinity, but in most practical cases k is less than 30.

Let's say we have two groups of points — blue circles and orange triangles. We want to classify the test point — the black circle with a question mark — as either a blue circle or an orange triangle.

Goal: To label the black circle.

For k = 1 we look at the first nearest neighbour. Since we take a majority vote and there is only one voter, we assign its label to our black test point. We can see that the test point will be classified as a blue circle for k = 1.

Expanding our search radius to k = 3 keeps the result the same, except that this time it is not an absolute majority — it's 2 out of 3. Still, with k = 3 the test point is predicted to have the class blue circle, because the majority of points are blue.

Let's see how k = 5 and k = 9 do. To look at the nearest neighbours, we draw a circle with the test point at the centre and stop when 5 points fall inside the circle.

When we look at k = 5, and subsequently at k = 9, the majority of the closest neighbours of our test point are orange triangles. That indicates that the test point must be an orange triangle.

Now that we have labelled this one test point, we repeat the same process over all the unknown points (i.e. the test set). Once all test points are labelled using k-NN, we try separating them using a decision boundary. The decision boundary shows how well the training set is separated.

First, one would choose k. We already saw above that a bigger k takes a vote of a larger number of points. This means higher chances of being correct. But at what cost, you say?

Getting the k nearest neighbours means sorting through the distances. That is a costly operation. A very high processing power is needed, which translates to either longer processing time or a costlier processor.
The higher the k, the costlier the whole procedure. But too low a k would result in overfitting.

A very low k will fail to generalize. A very high k is costly.

As we go to higher k's, the boundaries become smoother. Blue and red regions are broadly separated. Some blue and red soldiers are left behind enemy lines. They are collateral damage: they account for a loss in training accuracy but lead to better generalisation and high test accuracy, i.e. a high rate of correct labelling for new points.

A graph of validation error plotted against k would typically look like this. We can see that around k = 8 the error is at a minimum. It goes up on either side.

Then, one would find distances between points. How do we decide which neighbours are near and which are not?

Euclidean Distance — the most common distance metric

Chebyshev Distance — the L∞ distance

Manhattan Distance — the L1 distance: the sum of the (absolute) differences of the coordinates.

In chess, the distance between squares on the chessboard for rooks is measured in Manhattan distance; kings and queens use Chebyshev distance — Wikipedia

Once we know how to compare points based on distance, we would like to train our model. The best part about k-NN is that there is no explicit training step for it. We already know all there is to know about our dataset — its labels. In essence, the training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

k-NN is a lazy learner because it doesn't learn a discriminative function from the training data but memorizes the training dataset instead.

An eager learner has a model fitting or training step. A lazy learner does not have a training phase.

Quick to implement: which is why it is popular as a benchmarking algorithm.

Less training time: faster turnaround time.

Comparable accuracy: its prediction accuracy, as indicated in a lot of research papers, is fairly high for many applications.

k-NN is a life saver when one has to quickly deliver a solution with fairly accurate results. In most tools, like MATLAB, Python and R, it is available as a single-line command. Despite that, it is very easy to implement yourself and fun to try.

Source code for the R plots:

library(ElemStatLearn)
require(class)

# Data Extraction
KVAL <- 15
x <- mixture.example$x
g <- mixture.example$y
xnew <- mixture.example$xnew

# KNN and boundary extraction
mod15 <- knn(x, xnew, g, k=KVAL, prob=TRUE)
prob <- attr(mod15, "prob")
prob <- ifelse(mod15=="1", prob, 1-prob)
px1 <- mixture.example$px1
px2 <- mixture.example$px2
prob15 <- matrix(mod15, length(px1), length(px2))

# Plotting Boundary
par(mar=rep(2,4))
contour(px1, px2, prob15, levels=0.5, labels="", xlab="x", ylab="y", main="", axes=TRUE)

# Plotting red and blue points
points(x, pch = 20, col=ifelse(g==1, "coral", "cornflowerblue"))
gd <- expand.grid(x=px1, y=px2)
points(gd, pch=".", cex=0.001, col=ifelse(prob15>0.5, "coral", "cornflowerblue"))
legend("topright", pch = c(19, 19), col = c("red","blue"), legend = c("Class 1", "Class 2"))
title(main="Boundary for K=15", sub="ii", xlab="Feature 1", ylab="Feature 2")
box()
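For readers who prefer Python, here is a minimal from-scratch sketch of the same idea — Euclidean distance plus a majority vote over the k nearest points (an illustrative addition, not from the original article):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=15):
    """Label x_new by a majority vote of its k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to every point
    nearest = np.argsort(distances)[:k]                   # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Tiny example: two blue points near the origin, two orange points far away
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array(["blue", "blue", "orange", "orange"])
print(knn_predict(X, y, np.array([0.2, 0.1]), k=3))       # -> "blue"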
[ { "code": null, "e": 644, "s": 172, "text": "kNN stands for k-Nearest Neighbours. It is a supervised learning algorithm. This means that we train it under supervision. We train it using the labelled data already available to us. Given a labelled dataset consisting of observations (x,y), we would like to capture the relationship between x — the data and y — the label. More formally, we want to learn a function g : X→Y so that given an unseen observation X, g(x) can confidently predict the corresponding output Y." }, { "code": null, "e": 760, "s": 644, "text": "Other examples of supervised learning algorithms include random forests, linear regression and logistic regression." }, { "code": null, "e": 1172, "s": 760, "text": "kNN is very simple to implement and is most widely used as a first step in any machine learning setup. It is often used as a benchmark for more complex classifiers such as Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Despite its simplicity, k-NN can outperform more powerful classifiers and is used in a variety of applications such as economic forecasting, data compression and genetics." }, { "code": null, "e": 1181, "s": 1172, "text": "Genetics" }, { "code": null, "e": 1217, "s": 1181, "text": "bmcbioinformatics.biomedcentral.com" }, { "code": null, "e": 1229, "s": 1217, "text": "Agriculture" }, { "code": null, "e": 1243, "s": 1229, "text": "www.scielo.br" }, { "code": null, "e": 1282, "s": 1243, "text": "Aviation — Air traffic flow prediction" }, { "code": null, "e": 1302, "s": 1282, "text": "ieeexplore.ieee.org" }, { "code": null, "e": 1662, "s": 1302, "text": "As with most technological progress in the early 1900s, KNN algorithm was also born out of research done for the armed forces. Two offices of USAF School of Aviation Medicine — Fix and Hodges (1951) wrote a technical report introducing a non-parametric method for pattern classification that has since become popular as the k-nearest neighbor (kNN) algorithm." }, { "code": null, "e": 1984, "s": 1662, "text": "Let’s say we have a dataset with two kinds of points — Label 1 and Label 2. Now given a new point in this dataset we want to figure out its label. The way it is done in kNN is by taking a majority vote of its k nearest neighbours. k can take any value between 1 and infinity but in most practical cases k is less than 30." }, { "code": null, "e": 2179, "s": 1984, "text": "Let’s say we have two groups of points — blue-circles and orange-triangles. We want to classify the Test Point = black circle with a question mark, as either a blue circle or an orange triangle." }, { "code": null, "e": 2212, "s": 2179, "text": "Goal: To label the black circle." }, { "code": null, "e": 2442, "s": 2212, "text": "For K = 1 we will look at the first nearest neighbour. Since we take majority vote and there is only 1 voter we assign its label to our black test point. We can see that the test point will be classified as a blue circle for k=1." }, { "code": null, "e": 2687, "s": 2442, "text": "Expanding our search radius to K=3 also keeps the result same, except that this time it is not an absolute majority, it’s 2 out of 3. Still with k=3 test point is predicted to have the class blue-circle →because the majority of points are blue." }, { "code": null, "e": 2843, "s": 2687, "text": "Let’s see how k=5 and K =9 do. To look at the nearest neighbors we draw circle with test point at the centre and stop when 5 points fall inside the circle." 
}, { "code": null, "e": 3033, "s": 2843, "text": "When we look at the 5 and subsequently at K = 9, the majority of the closest neighbors of our test point are orange-triangles. That indicates that the test point must be an orange triangle." }, { "code": null, "e": 3314, "s": 3033, "text": "Now that we have labelled this one test point we repeat the same process over all the unknown points (i.e. the test set). Once all test points are labelled using k-NN we try separating them using a decision boundary. Decision boundary shows how well the training set is separated." }, { "code": null, "e": 3490, "s": 3314, "text": "First they would choose k. We already saw above that a bigger k takes a vote of larger number of points. This means higher chances of being correct. But at what cost, you say?" }, { "code": null, "e": 3788, "s": 3490, "text": "Getting the k nearest neighbours means sorting through the distances. That is a costly operation. A very high processing power is needed which translates to either longer processing time or costlier processor. Higher the K costlier the whole procedure. But too low a k would result in overfitting." }, { "code": null, "e": 3851, "s": 3788, "text": "A very low k will fail to generalize. A very high k is costly." }, { "code": null, "e": 4191, "s": 3851, "text": "As we go to higher K’s the boundaries become smooth.Blue and red regions are broadly separated. Some blue and red soldiers are left behind the enemy lines. They are collateral damage. They account for loss in training accuracy but lead to better generalisation and high test accuracy i.e. high accuracy of correct labelling for new points." }, { "code": null, "e": 4352, "s": 4191, "text": "A graph of validation error when plotted against K would typically look like this. We can see that around K = 8 the error is minimum. It goes up on either side." }, { "code": null, "e": 4468, "s": 4352, "text": "Then they would try to find distances between points. How do we decide which neighbours are near and which are not?" }, { "code": null, "e": 4517, "s": 4468, "text": "Euclidean Distance — Most common distance metric" }, { "code": null, "e": 4550, "s": 4517, "text": "Chebyshev Distance — L∞ Distance" }, { "code": null, "e": 4641, "s": 4550, "text": "Manhattan Distance — L1 Distance : Sum of the (absolute) differences of their coordinates." }, { "code": null, "e": 4795, "s": 4641, "text": "In chess, the distance between squares on the chessboard for rooks is measured in Manhattan distance; kings and queens use Chebyshev distance — Wikipedia" }, { "code": null, "e": 5160, "s": 4795, "text": "Once we know how to compare points based on distance we would like to train our model. The best part about k-NN is that there is no explicit training step for it. We already know all that is to know about our dataset — its labels. In essence the training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples." }, { "code": null, "e": 5301, "s": 5160, "text": "K-NN is a lazy learner because it doesn’t learn a discriminative function from the training data but memorizes the training dataset instead." }, { "code": null, "e": 5403, "s": 5301, "text": "An eager learner has a model fitting or training step. A lazy learner does not have a training phase." }, { "code": null, "e": 5480, "s": 5403, "text": "Quick to implement : Which is why it is popular as a benchmarking algorithm." 
}, { "code": null, "e": 5524, "s": 5480, "text": "Less training time: Faster turn around time" }, { "code": null, "e": 5654, "s": 5524, "text": "Comparable accuracies: Its prediction accuracy as indicated in a lot of research papers is fairly high for a lot of applications." }, { "code": null, "e": 5881, "s": 5654, "text": "k-NN is a life saver when one has to quickly deliver a solution with fairly accurate results. In most tools like MATLAB, python, R it is given as a single line command. Despite that it is very easy to implement and fun to try." }, { "code": null, "e": 5909, "s": 5881, "text": "Source code for the R Plots" } ]
Introduction to AWS Lambda, Layers and boto3 using Python3 | by Gabriel dos Santos Gonçalves | Towards Data Science
Amazon Lambda is probably the most famous serverless service available today, offering low cost and practically no cloud infrastructure governance needed. It offers a relatively simple and straightforward platform for implementing functions in different languages like Python, Node.js, Java, C# and many more. Amazon Lambda can be tested through the AWS console or the AWS Command Line Interface. One of the main problems with Lambda is that it becomes tricky to set up as soon as your functions and triggers get more complex. The goal of this article is to present you with a digestible tutorial for configuring your first Amazon Lambda function with external libraries and doing something more useful than just printing "Hello world!".

We are going to use Python 3, boto3 and a few more libraries loaded in Lambda Layers to help us achieve our goal: load a CSV file as a Pandas dataframe, do some data wrangling, and save the metrics and plots as report files on an S3 bucket. Although using the AWS console for configuring your services is not the best-practice approach to working on the cloud, we are going to show each step using the console, because it is more convenient for beginners to understand the basic structure of Amazon Lambda. I'm sure that after going through this tutorial you'll have a good idea of how to migrate part of your local data analysis pipelines to Amazon Lambda.

Before we start messing around with Amazon Lambda, we should first set up our working environment. We first create a folder for the project (1) and a Python 3.7 environment using conda (you can also use pipenv) (2). Next, we create two folders, one to save the Python scripts of your Lambda function, and one to build your Lambda Layers (3). We'll explain what Lambda Layers consist of later in the article. Finally, we create the folder structure to build Lambda Layers so they can be identified by Amazon Lambda (4). The folder structure we created is going to help you better understand the concept behind Amazon Lambda and also organize your functions and libraries.

# 1) Create project folder
mkdir medium-lambda-tutorial

# Change directory
cd medium-lambda-tutorial/

# 2) Create environment using conda
conda create --name lambda-tutorial python=3.7
conda activate lambda-tutorial

# 3) Create one folder for the layers and another for the
# lambda_function itself
mkdir lambda_function lambda_layers

# 4) Create the folder structure to build your lambda layer
mkdir -p lambda_layers/python/lib/python3.7/site-packages

tree .
├── lambda_function
└── lambda_layers
    └── python
        └── lib
            └── python3.7
                └── site-packages

One of the main troubles I encountered when trying to implement my first Lambda functions was understanding the file structure used by AWS to invoke scripts and load libraries. If you follow the default option 'Author from scratch' (Figure 1) for creating a Lambda function, you'll end up with a folder with the name of your function and a Python script named lambda_function.py inside it. The lambda_function.py file has a very simple structure and the code is the following:

import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

These 8 lines of code are key to understanding Amazon Lambda, so we are going through each line to explain it.

import json: You can import Python modules to use in your function, and AWS provides you with a list of available Python libraries already built into Amazon Lambda, like json and many more. The problem starts when you need libraries that are not available (we will solve this problem later using Lambda Layers).

def lambda_handler(event, context): This is the main function your Amazon Lambda is going to call when you run the service. It has two parameters, event and context. The first one is used to pass data that can be used in the function itself (more on this later), and the second is used to provide runtime and metadata information.

# TODO implement Here is where the magic happens! You can use the body of the lambda_handler function to implement any Python code you want.

return This part of the function is going to return a default dictionary with statusCode equal to 200, and body with a "Hello from Lambda" message. You can change this return later to any Python object that suits your needs.

Before running our first test, it's important to explain a key topic related to Amazon Lambda: triggers. Triggers are basically the ways in which you invoke your Lambda function. There are many ways to set up your trigger using events like adding a file to an S3 bucket, changing a value in a DynamoDB table or using an HTTP request through Amazon API Gateway. You can pretty much integrate your Lambda function to be invoked by a wide range of AWS services, and this is probably one of the main advantages offered by Lambda. One way to integrate it with your Python code is by using boto3 to call your Lambda function, and that's the approach we are going to use later in this tutorial.

As you can see, the template structure offered by AWS is super simple, and you can test it by configuring a test event and running it (Figure 2). As we didn't change anything in the code of the Lambda function, the test runs the process and we receive a green alert describing the successful event (Figure 3).

Figure 3 illustrates the layout of the Lambda invocation result. In the upper part you can see the dictionary contained in the returned statement. Underneath is the Summary part, where we can see some important metrics related to the Lambda function, like the Request ID, the duration of the function, the billed duration and the amount of memory configured and used.
As you can see, the template structure offered by AWS is super simple, and you can test it by configuring a test event and running it (Figure 2).

As we didn’t change anything in the code of the Lambda function, the test runs the process and we receive a green alert describing the successful event (Figure 3).

Figure 3 illustrates the layout of the Lambda invocation result. On the upper part you can see the dictionary contained in the returned statement. Underneath there is the Summary part, where we can see some important metrics related to the Lambda function, like the Request ID, the duration of the function, the billing duration and the amount of memory configured and used. We won’t go deep on Amazon Lambda pricing, but it is important to know that it is charged based on:

duration the function is running (rounded up to the nearest 100ms)

the amount of memory/CPU used

the number of requests (how many times you invoke your function)

amount of data transferred in and out of Lambda

In general, it is really cheap to test and use it, so you probably won’t have billing problems when using Amazon Lambda for small workloads.

Another important detail related to pricing and performance is how CPU and memory are made available. You choose the amount of memory for running your function and “Lambda allocates CPU power linearly in proportion to the amount of memory configured”.

At the bottom of Figure 3, you can see the Log output section, where you can check all the execution lines printed by your Lambda function. One great feature implemented in Amazon Lambda is that it is integrated with Amazon CloudWatch, where you can find all the logs generated by your Lambda functions. For more details on monitoring execution and logs, please refer to Casey Dunham’s great Lambda article.

We have covered the basic features of Amazon Lambda, so in the next sections we are going to increase the complexity of our task to show you a real-world use case, providing a few insights into how to run a serverless service on a daily basis.

One of the great things about using Python is the availability of a huge number of libraries that help you implement fast solutions without having to code all classes and functions from scratch. As mentioned before, Amazon Lambda offers a list of Python libraries that you can import into your function. The problem starts when you have to use libraries that are not available. One way to do it is to install the library locally inside the same folder where you have your lambda_function.py file, zip the files and upload them to your Amazon Lambda console. Installing libraries locally and uploading them every time you have to create a new Lambda function can be a laborious and inconvenient process. To make your life easier, Amazon offers the possibility to upload our libraries as Lambda Layers, which consist of a file structure where you store your libraries, load them independently to Amazon Lambda, and use them in your code whenever needed. Once you create a Lambda Layer, it can be used by any other new Lambda function.

Going back to the first section, where we organized our working environment, we are going to use the folder structure created inside the lambda_layers folder to install locally one Python library, Pandas.

# Our current folder structure.
├── lambda_function
└── lambda_layers
    └── python
        └── lib
            └── python3.7
                └── site-packages

# 1) Pip install Pandas and Matplotlib locally
pip install pandas -t lambda_layers/python/lib/python3.7/site-packages/.

# 2) Zip the lambda_layers folder
cd lambda_layers
zip -r pandas_lambda_layer.zip *

By using pip with the parameter -t we can specify where we want to install the libraries in our local folder (1). Next, we just need to zip the folder containing the libraries (2) and we have a file ready to be deployed as a Layer. It’s important that you keep the structure of folders we created at the beginning (python/lib/python3.7/site-packages/) so that Amazon Lambda can identify the libraries contained in your zipped package. Click on the option Layers on the left panel of your AWS Lambda console, and on the button ‘Create Layer’ to start a new one. Then we can specify the name, description and compatible runtimes (in our case, Python 3.7). Finally, we upload our zipped folder and create the Layer (Figure 4).
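If you prefer to skip the console for this step as well, the same layer can be published with boto3. A minimal sketch (the layer name pandas-layer is just an example):

import boto3

client = boto3.client("lambda")

# Read the zipped package built above and publish it as a new layer version
with open("pandas_lambda_layer.zip", "rb") as f:
    response = client.publish_layer_version(
        LayerName="pandas-layer",  # hypothetical layer name
        Description="Pandas for Python 3.7",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

print(response["LayerVersionArn"])  # ARN you can attach to your functions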
It takes less than a minute and we have our Amazon Layer ready to be used in our code. Going back to the console of our Lambda function, we can specify which Layers we are going to use by clicking on the Layer icon, then on ‘Add a layer’ (Figure 5).

Next, we select the Layer we just created and its respective version (Figure 6). As you can see from Figure 6, AWS offers a Lambda Layer with Scipy and Numpy ready to be used, so you don’t need to create new layers if the only libraries you need are one of these two.

After selecting our Pandas Layer, all we need to do is import it in our Lambda code as if it were an installed library.

Now that we have our environment and our Pandas Layer ready, we can start working on our code. As mentioned before, our goal is to create a Python3 local script (1) that can invoke a Lambda function using defined parameters (2) to perform a simple data analysis using Pandas on a CSV located on S3 (3) and save the results back to the same bucket (4) (Figure 7).

To give Amazon Lambda access to our S3 buckets, we can simply add a role to our function by going to the Execution role section of your console. Although AWS offers you some role templates, my advice is to create a new role on the IAM console to specify exactly the permissions needed for your Lambda function (left panel on Figure 8). We also changed the amount of memory available from 128MB to 1024MB, and the timeout to 5 minutes instead of just 3 seconds (right panel on Figure 8), to avoid running out of memory and timeout errors. Amazon Lambda limits the total amount of RAM memory to 3GB and the timeout to 15 minutes. So if you need to perform highly intensive tasks, you might run into problems. One solution is to chain multiple Lambdas to other AWS services to perform steps of an analysis pipeline. Our idea is not to provide an exhaustive introduction to Amazon Lambda, so if you want to know more about it, please check out this article from Yi Ai.

Before showing the code, it’s important to describe the dataset we are going to use in our small project. I chose the Fifa19 player dataset from Kaggle, a CSV file describing all the skills of the players present in the game (Table 1). It has 18,207 rows and 88 columns, and you can get information about the nationality, club, salary, skill level and many more features of each player. We downloaded the CSV file and uploaded it to our S3 bucket (renamed it fifa19_kaggle.csv).

So now we can focus on our code!

As we can see in the Lambda function script (sketched below), the first 5 lines are just importing libraries. With the exception of Pandas, all the other libraries are available for use without having to use Layers.

Next, we have an accessory function called write_dataframe_to_csv_on_s3 (lines 8 to 22) used to save a Pandas Dataframe to a specific S3 bucket. We are going to use it to save our output Dataframe created during the analysis.

The other function we have in our code is the main lambda_handler, the one that is going to be called when we invoke the Lambda. We can see that the first 5 assignments in lambda_handler (lines 28 to 32) are variables passed in the event object.

From lines 35 to 41 we use boto3 to download the CSV file from the S3 bucket and load it as a Pandas Dataframe.

Next, on line 44 we use the group by method on the Dataframe to aggregate the GROUP column and get the mean of the COLUMN variable.

Finally, we use the function write_dataframe_to_csv_on_s3 to save df_groupby to the specified S3 bucket, and return a dictionary with statusCode and body as keys.
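A minimal sketch of what this Lambda function can look like, following the description above (the event key names BUCKET, CSV, OUTPUT, GROUP and COLUMN are assumptions, so the exact line numbers mentioned may not match):

import json
from io import StringIO

import boto3
import pandas as pd

def write_dataframe_to_csv_on_s3(dataframe, filename, bucket):
    # Serialize the dataframe to an in-memory CSV buffer and upload it to S3
    csv_buffer = StringIO()
    dataframe.to_csv(csv_buffer, index=False)
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=filename, Body=csv_buffer.getvalue())

def lambda_handler(event, context):
    # Variables passed in through the event object
    bucket = event.get("BUCKET")
    csv_file = event.get("CSV")
    output_file = event.get("OUTPUT")
    groupby_column = event.get("GROUP")
    avg_column = event.get("COLUMN")
    try:
        # Download the CSV from the S3 bucket and load it as a Pandas Dataframe
        s3 = boto3.client("s3")
        obj = s3.get_object(Bucket=bucket, Key=csv_file)
        df = pd.read_csv(obj["Body"])

        # Aggregate the GROUP column and take the mean of the COLUMN variable
        df_groupby = (df.groupby(groupby_column)[avg_column]
                        .mean()
                        .reset_index()
                        .sort_values(by=avg_column, ascending=False))

        write_dataframe_to_csv_on_s3(df_groupby, output_file, bucket)
        return {"statusCode": 200, "body": json.dumps("Success!")}
    except Exception:
        return {"statusCode": 400, "body": json.dumps("Error, bad request!")}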
As described before in the Amazon Lambda Basic Structure section, the event parameter is an object that carries variables available to the lambda_handler function, and we can define these variables when configuring the test event (Figure 9).

If we run the test using the correct values for the 5 keys of the test JSON, our Lambda function should process the CSV file from S3 and write the resulting CSV back to the bucket.

Although using the variables hardcoded in the test event can show the concept of our Lambda code, it’s not a practical way to invoke the function. In order to solve it, we are going to create a Python script (invoke_lambda.py) to invoke our Lambda function using boto3.

We are going to use only three libraries: boto3, json and sys. From lines 5 to 10 we use sys.argv to access the parameters when running the script through the command line.

python3 invoke_lambda.py <bucket> <csv_file> <output_file> <groupby_column> <avg_column> <aws_credentials>

The last parameter (aws_credentials) we provide to invoke_lambda.py is a JSON file with our credentials to access AWS services. You may configure your credentials by using the awscli or by generating a secret key using IAM.

In our main function, invoke_lambda, we use the boto3 client to define access to Amazon Lambda (line 38). The next object, called payload, is a dictionary with all the variables we want to use inside our Lambda function. These are the Lambda variables that can be accessed using event.get('variable').

Finally, we simply call client.invoke() with the target Lambda function name, invocation type, and the payload carrying the variables (line 54). The invocation type can be one of three types: RequestResponse (default), to “invoke the function synchronously. Keep the connection open until the function returns a response or times out”; Event, to asynchronously call Lambda; or DryRun, when you need to validate user information. For our main purpose, we are going to use the default RequestResponse option to invoke our Lambda, as it waits for the Lambda process to return a response. As we defined a try/except structure in our Lambda function, if the process runs without errors it returns status code 200 with the message “Success!”; otherwise it returns status code 400 and the message “Error, bad request!”. A sketch of invoke_lambda.py following this description is shown below.
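A minimal sketch of invoke_lambda.py consistent with the description (the function name lambda-csv-report and the key names inside the credentials JSON are assumptions; again, the exact line numbers mentioned may not match):

import json
import sys

import boto3

# Command-line parameters (lines 5 to 10 in the original script)
bucket, csv_file, output_file, groupby_column, avg_column, aws_credentials = sys.argv[1:7]

def invoke_lambda():
    # Load the AWS credentials from the JSON file passed as the last parameter;
    # the key names below are assumptions about that file's layout
    with open(aws_credentials) as f:
        creds = json.load(f)

    client = boto3.client(
        "lambda",
        aws_access_key_id=creds["aws_access_key_id"],
        aws_secret_access_key=creds["aws_secret_access_key"],
        region_name=creds.get("region_name", "us-east-1"),
    )

    # Variables the Lambda reads back with event.get('variable')
    payload = {
        "BUCKET": bucket,
        "CSV": csv_file,
        "OUTPUT": output_file,
        "GROUP": groupby_column,
        "COLUMN": avg_column,
    }

    response = client.invoke(
        FunctionName="lambda-csv-report",  # hypothetical function name
        InvocationType="RequestResponse",  # wait for the Lambda to respond
        Payload=json.dumps(payload),
    )
    print(json.loads(response["Payload"].read()))

if __name__ == "__main__":
    invoke_lambda()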
Our local script invoke_lambda.py, when run with the right parameters, takes a few seconds to return a response. If the response is positive, with status code 200, you can check your S3 bucket for the report file generated by the Lambda function (Table 2). As we used the column “Club” to group by and “Overall” to get the mean, we are showing the 20 clubs with the highest average player overall skill level.

I hope this quick introduction (not so quick!) to Amazon Lambda helped you understand better the nuts and bolts of this serverless service, and that it can help you try different approaches in your Data Science projects. For more information about serverless architecture using AWS, please check this great article from Eduardo Romero. And if you feel you need a deeper understanding of AWS Lambda, I recently published an article that describes the infrastructure behind Lambda and a few other functionalities of it.

You can find my other articles on my profile page 🔬 If you enjoyed it and want to become a Medium member, you can use my referral link to also support me 👍
Foreign Key constraint in SQL
19 Oct, 2020

A Foreign Key is a column that refers to the primary key/unique key of another table, so it demonstrates a relationship between tables and acts as a cross reference among them. The table in which the foreign key is defined is called the foreign table/referencing table. The table that defines the primary/unique key and is referenced by the foreign key is called the primary table/master table/referenced table. It is defined in a CREATE TABLE/ALTER TABLE statement.

For the table that contains the foreign key, the value should match the primary key in the referenced table for every row. This is called Referential Integrity; the foreign key ensures referential integrity.

Properties :

Parent that is being referenced has to be unique/Primary Key.
Child may have duplicates and nulls.
Parent record can be deleted if no child exists.
Master table cannot be updated if child exists.
Must reference PRIMARY KEY in primary table.
Foreign key column and constraint column should have matching data types.
Records cannot be inserted in the child table if the corresponding record in the master table does not exist.
Records of the master table cannot be deleted if corresponding records in the child table exist.

SQL Foreign Key at column level :
Syntax –
Create table people (no int references person,
                     Fname varchar2(20));
         OR
Create table people (no int references person(id),
                     Fname varchar2(20));

Here the person table should have a primary key of type int. If there is a single-column primary key in the table, the column name in the syntax can be omitted, so both of the above syntaxes work correctly.

To check the constraint,

If the parent table doesn’t have a primary key:
OUTPUT :
Error at line 1 : referenced table does not have a primary key.

If the parent table has a primary key of a different datatype:
OUTPUT :
Error at line 1 : column type incompatible with referenced column type.

SQL Foreign Key at table level :
Syntax –
create table people(no varchar2(10),
                    fname varchar2(20),
                    foreign key(no) references person);
         OR
create table people(no varchar2(10),
                    fname varchar2(20),
                    foreign key(no) references person(id));

The column name of the referenced table can be omitted.

Insert operation in a Foreign Key table :
If the corresponding value in the parent table doesn’t exist, a record in the child table cannot be inserted.
OUTPUT :
Error at line 1 : integrity constraint violated - parent key not found.

Delete operation in a Foreign Key table :
When a record in the master table is deleted and a corresponding record in the child table exists, an error message is displayed and the delete operation is prevented from going through.
OUTPUT :
Error at line 1 : integrity constraint violated - child record found.

Foreign Key with ON DELETE CASCADE :
The default behavior of a foreign key can be changed using ON DELETE CASCADE. When this option is specified in the foreign key definition, if a record is deleted in the master table, all corresponding records in the detail table will be deleted.
Syntax –
create table people(no varchar2(10),
                    fname varchar2(20),
                    foreign key(no)
references person on delete cascade);

Now deleting records from person will delete all corresponding records from the child table.
OUTPUT :
select * from person;
no rows selected

select * from people;
no rows selected

Foreign Key with ON DELETE SET NULL :
A foreign key with ON DELETE SET NULL means that if a record in the parent table is deleted, the corresponding records in the child table will have the foreign key fields set to null. Records in the child table will not be deleted.
Syntax –
create table people(no varchar2(10),
                    fname varchar2(20),
                    foreign key(no)
references person on delete set null);

OUTPUT :
select * from person;
no rows selected

select * from people;
NO Fname
pqr

Notice the field “No” in the people table that was referencing the primary key of the person table. On deleting the person data, it sets null in the child table people, but the record is not deleted.
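The error messages above come from Oracle. If you want to experiment with the same referential-integrity rules locally, below is a minimal sketch using Python's built-in sqlite3 module (an assumption, since the article uses Oracle syntax — note that SQLite enforces foreign keys only when PRAGMA foreign_keys = ON is set, and its error messages differ):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE people (
                    no INTEGER REFERENCES person(id) ON DELETE CASCADE,
                    fname TEXT)""")

conn.execute("INSERT INTO person VALUES (1, 'abc')")
conn.execute("INSERT INTO people VALUES (1, 'pqr')")

# Insert operation: a child row without a matching parent row is rejected
try:
    conn.execute("INSERT INTO people VALUES (99, 'xyz')")
except sqlite3.IntegrityError as e:
    print("insert blocked:", e)  # FOREIGN KEY constraint failed

# Delete operation with ON DELETE CASCADE: deleting the parent removes the child
conn.execute("DELETE FROM person WHERE id = 1")
print(conn.execute("SELECT * FROM people").fetchall())  # []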
How can I subtract tuple of tuples from a tuple in Python?
The direct way to subtract a tuple element-wise from each tuple in a tuple of tuples in Python is to use loops. For example, if you have a tuple of tuples

((0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 14))

and want to subtract the corresponding element of (1, 2, 3, 4, 5) from every element of each inner tuple, you can do it as follows:

my_tuple = ((0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 14))
sub = (1, 2, 3, 4, 5)
result = tuple(tuple(x - sub[i] for x in my_tuple[i]) for i in range(len(my_tuple)))
print(result)

This will give the output

((-1, 0, 1), (1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9))
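An equivalent, arguably more readable sketch uses zip to pair each inner tuple with the value to be subtracted from its elements (same data as above):

my_tuple = ((0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 14))
sub = (1, 2, 3, 4, 5)

# zip pairs each inner tuple with the value to subtract from its elements
result = tuple(tuple(x - s for x in row) for row, s in zip(my_tuple, sub))
print(result)  # ((-1, 0, 1), (1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9))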
Comparison of dates in PHP
Comparing two dates in PHP is quite smooth when both dates are in the same format, but PHP fails to compare them correctly when the two dates are in unrelated formats. In this article, we will discuss different cases of date comparison in PHP. We will figure out how to utilize the DateTime class and strtotime() for comparing dates.

We can compare the dates with a simple comparison operator if the given dates are in the same format:

<?php
   $date1 = "2018-11-24";
   $date2 = "2019-03-26";
   if ($date1 > $date2)
      echo "$date1 is later than $date2";
   else
      echo "$date1 is older than $date2";
?>

2019-03-26 is later than 2018-11-24

Here we have declared two dates, date1 and date2, in the same format, so we have used a comparison operator (>) to compare the dates.

If the given dates are in different formats, we can utilize the strtotime() function to convert the given dates into the UNIX timestamp format and compare these numerical timestamps to get the expected result:

<?php
   $date1 = "18-03-22";
   $date2 = "2017-08-24";
   $curtimestamp1 = strtotime($date1);
   $curtimestamp2 = strtotime($date2);
   if ($curtimestamp1 > $curtimestamp2)
      echo "$date1 is later than $date2";
   else
      echo "$date1 is older than $date2";
?>

18-03-22 is later than 2017-08-24

In this example, we have two dates in different formats, so we have used the predefined function strtotime() to convert them into numeric UNIX timestamps, and then compared those timestamps with comparison operators to get the desired result.

We can also compare two dates by creating objects of the DateTime class:

<?php
   $date1 = new DateTime("18-02-24");
   $date2 = new DateTime("2019-03-24");
   if ($date1 > $date2) {
      echo 'datetime1 greater than datetime2';
   }
   if ($date1 < $date2) {
      echo 'datetime1 lesser than datetime2';
   }
   if ($date1 == $date2) {
      echo 'datetime1 is equal to datetime2';
   }
?>

datetime1 lesser than datetime2

In this example, we created two DateTime objects and used comparison operators on them to get the desired result.
BabylonJS - Extrusion
Extrusion helps in transforming a 2D shape into a volumetric shape. Suppose you want to create a star in 2D: you will have x and y coordinates, and z will be 0. Extrusion takes these 2D coordinates and converts them into a 3D shape, so the 2D star turns into a 3D one. You can try different 2D shapes and convert them into 3D.

BABYLON.Mesh.ExtrudeShape(name, shape, path, scale, rotation, cap, scene, updatable?, sideOrientation)

Consider the following parameters for extrusion −

Name − The mesh name.

Shape − The shape to be extruded; it is an array of vectors.

Path − The path along which to extrude the shape; an array of vectors to draw the shape.

Scale − The value by which to scale the initial shape; by default it is 1.

Rotation − Rotate the shape at each path point.

Cap − BABYLON.Mesh.NO_CAP, BABYLON.Mesh.CAP_START, BABYLON.Mesh.CAP_END, BABYLON.Mesh.CAP_ALL.

Scene − The current scene on which the mesh will be drawn.

Updatable − By default, it is false. If set to true, the mesh will be updatable.

SideOrientation − The side orientation - front, back or double.

<!doctype html>
<html>
   <head>
      <meta charset = "utf-8">
      <title>BabylonJs - Basic Element-Creating Scene</title>
      <script src = "babylon.js"></script>
      <style>
         canvas {width: 100%; height: 100%;}
      </style>
   </head>

   <body>
      <canvas id = "renderCanvas"></canvas>
      <script type = "text/javascript">
         var canvas = document.getElementById("renderCanvas");
         var engine = new BABYLON.Engine(canvas, true);
         var createScene = function() {

            var scene = new BABYLON.Scene(engine);
            scene.clearColor = new BABYLON.Color3(.5, .5, .5);

            // camera
            var camera = new BABYLON.ArcRotateCamera("camera1", 0, 0, 0, new BABYLON.Vector3(0, 0, -0), scene);
            camera.setPosition(new BABYLON.Vector3(0, 0, -10));
            camera.attachControl(canvas, true);

            // lights
            var light = new BABYLON.HemisphericLight("light1", new BABYLON.Vector3(1, 0.5, 0), scene);
            light.intensity = 0.7;

            var spot = new BABYLON.SpotLight("spot", new BABYLON.Vector3(25, 15, -10), new BABYLON.Vector3(-1, -0.8, 1), 15, 1, scene);
            spot.diffuse = new BABYLON.Color3(1, 1, 1);
            spot.specular = new BABYLON.Color3(0, 0, 0);
            spot.intensity = 0.8;

            // shape
            var shape = [
               new BABYLON.Vector3(2, 0, 0),
               new BABYLON.Vector3(2, 2, 0),
               new BABYLON.Vector3(1, 2, 0),
               new BABYLON.Vector3(0, 3, 0),
               new BABYLON.Vector3(-1, 2, 0),
               new BABYLON.Vector3(-2, 2, 0),
               new BABYLON.Vector3(-2, 0, 0),
               new BABYLON.Vector3(-2, -2, 0),
               new BABYLON.Vector3(-1, -2, 0),
               new BABYLON.Vector3(0, -3, 0),
               new BABYLON.Vector3(1, -2, 0),
               new BABYLON.Vector3(2, -2, 0),
            ];
            shape.push(shape[0]);

            var shapeline = BABYLON.Mesh.CreateLines("sl", shape, scene);
            shapeline.color = BABYLON.Color3.Green();
            return scene;
         };
         var scene = createScene();
         engine.runRenderLoop(function() {
            scene.render();
         });
      </script>
   </body>
</html>

The above lines of code generate the following output −

In the above example, the lines are drawn in the x, y coordinates. Let us now apply 3D with the help of extrusion. For this, BabylonJS has a class for extrusion, which is explained below.

<!doctype html>
<html>
   <head>
      <meta charset = "utf-8">
      <title>BabylonJs - Basic Element-Creating Scene</title>
      <script src = "babylon.js"></script>
      <style>
         canvas {width: 100%; height: 100%;}
      </style>
   </head>

   <body>
      <canvas id = "renderCanvas"></canvas>
      <script type = "text/javascript">
         var canvas = document.getElementById("renderCanvas");
         var engine = new BABYLON.Engine(canvas, true);

         var createScene = function() {
            var scene = new BABYLON.Scene(engine);
            scene.clearColor = new BABYLON.Color3(.5, .5, .5);

            // camera
            var camera = new BABYLON.ArcRotateCamera("camera1", 0, 0, 0, new BABYLON.Vector3(0, 0, -0), scene);
            camera.setPosition(new BABYLON.Vector3(0, 0, -10));
            camera.attachControl(canvas, true);

            // lights
            var light = new BABYLON.HemisphericLight("light1", new BABYLON.Vector3(1, 0.5, 0), scene);
            light.intensity = 0.7;

            var spot = new BABYLON.SpotLight("spot", new BABYLON.Vector3(25, 15, -10), new BABYLON.Vector3(-1, -0.8, 1), 15, 1, scene);
            spot.diffuse = new BABYLON.Color3(1, 1, 1);
            spot.specular = new BABYLON.Color3(0, 0, 0);
            spot.intensity = 0.8;

            var mat = new BABYLON.StandardMaterial("mat1", scene);
            mat.alpha = 1.0;
            mat.diffuseColor = new BABYLON.Color3(0.5, 0.5, 1.0);
            mat.backFaceCulling = false;

            // shape
            var shape = [
               new BABYLON.Vector3(2, 0, 0),
               new BABYLON.Vector3(2, 2, 0),
               new BABYLON.Vector3(1, 2, 0),
               new BABYLON.Vector3(0, 3, 0),
               new BABYLON.Vector3(-1, 2, 0),
               new BABYLON.Vector3(-2, 2, 0),
               new BABYLON.Vector3(-2, 0, 0),
               new BABYLON.Vector3(-2, -2, 0),
               new BABYLON.Vector3(-1, -2, 0),
               new BABYLON.Vector3(0, -3, 0),
               new BABYLON.Vector3(1, -2, 0),
               new BABYLON.Vector3(2, -2, 0),
            ];

            shape.push(shape[0]);
            var path = [ BABYLON.Vector3.Zero(), new BABYLON.Vector3(0, 0, -1) ];

            var shapeline = BABYLON.Mesh.CreateLines("sl", shape, scene);
            shapeline.color = BABYLON.Color3.Green();

            var extruded = BABYLON.Mesh.ExtrudeShape("extruded", shape, path, 1, 0, 0, scene);

            extruded.material = mat;
            return scene;
         };
         var scene = createScene();
         engine.runRenderLoop(function() {
            scene.render();
         });
      </script>
   </body>
</html>

PolygonMeshBuilder uses the earcut structure; for it to work properly we need an additional file, which can be taken from the CDN (https://unpkg.com/[email protected]/dist/earcut.min.js) or the npm package (https://github.com/mapbox/earcut#install).

<!doctype html>
<html>
   <head>
      <meta charset = "utf-8">
      <title>BabylonJs - Basic Element-Creating Scene</title>
      <script src = "https://unpkg.com/[email protected]/dist/earcut.min.js"></script>
      <script src = "babylon.js"></script>
      <style>
         canvas {width: 100%; height: 100%;}
      </style>
   </head>

   <body>
      <canvas id = "renderCanvas"></canvas>
      <script type = "text/javascript">
         var canvas = document.getElementById("renderCanvas");
         var engine = new BABYLON.Engine(canvas, true);

         var createScene = function() {
            var scene = new BABYLON.Scene(engine);
            scene.clearColor = new BABYLON.Color3(0, 0, 1);

            var camera = new BABYLON.ArcRotateCamera("Camera", -Math.PI/2, Math.PI/4, 25, BABYLON.Vector3.Zero(), scene);
            camera.attachControl(canvas, true);

            var light = new BABYLON.HemisphericLight("light1", new BABYLON.Vector3(0, 10, 0), scene);
            light.intensity = 0.5;

            var corners = [
               new BABYLON.Vector2(4, 0),
               new BABYLON.Vector2(3, 1),
               new BABYLON.Vector2(2, 3),
               new BABYLON.Vector2(2, 4),
               new BABYLON.Vector2(1, 3),
               new BABYLON.Vector2(0, 3),
               new BABYLON.Vector2(-1, 3),
               new BABYLON.Vector2(-3, 4),
               new BABYLON.Vector2(-2, 2),
               new BABYLON.Vector2(-3, 0),
               new BABYLON.Vector2(-3, -2),
               new BABYLON.Vector2(-3, -3),
               new BABYLON.Vector2(-2, -2),
               new BABYLON.Vector2(0, -2),
               new BABYLON.Vector2(3, -2),
               new BABYLON.Vector2(3, -1),
            ];

            var hole = [
               new BABYLON.Vector2(1, -1),
               new BABYLON.Vector2(1.5, 0),
               new BABYLON.Vector2(1.4, 1),
               new BABYLON.Vector2(0.5, 1.5)
            ];

            var poly_tri = new BABYLON.PolygonMeshBuilder("polytri", corners, scene);
            poly_tri.addHole(hole);
            var polygon = poly_tri.build(null, 0.5);
            polygon.position.y = +4;

            var poly_path = new BABYLON.Path2(2, 0);
            poly_path.addLineTo(5, 2);
            poly_path.addLineTo(1, 2);
            poly_path.addLineTo(-5, 5);
            poly_path.addLineTo(-3, 1);
            poly_path.addLineTo(-4, -4);
            poly_path.addArcTo(0, -2, 4, -4, 100);

            var poly_tri2 = new BABYLON.PolygonMeshBuilder("polytri2", poly_path, scene);
            poly_tri2.addHole(hole);

            var polygon2 = poly_tri2.build(false, 0.5); // updatable, extrusion depth - both optional
            polygon2.position.y = -4;
            return scene;
         };
         var scene = createScene();
         engine.runRenderLoop(function() {
            scene.render();
         });
      </script>
   </body>
</html>

Following is the syntax for PolygonMeshBuilder −

var poly_tri2 = new BABYLON.PolygonMeshBuilder("polytri2", poly_path, scene);
[ { "code": null, "e": 2528, "s": 2183, "text": "Extrusion helps in transforming a 2D shape into a volumic shape.Suppose you want to create a star with 2D you will have x,y co-ordinates and z will be 0.Taking the 2D co-ordinates extrusion will convert the same to a 3D shape.So, the start of 2D with extrusion will turn out to be a 3D.You can try different 2D shapes and convert those into 3D." }, { "code": null, "e": 2632, "s": 2528, "text": "BABYLON.Mesh.ExtrudeShape(name, shape, path, scale, rotation, cap, scene, updatable?, sideOrientation)\n" }, { "code": null, "e": 2682, "s": 2632, "text": "Consider the following parameters for extrusion −" }, { "code": null, "e": 2704, "s": 2682, "text": "Name − The mesh name." }, { "code": null, "e": 2726, "s": 2704, "text": "Name − The mesh name." }, { "code": null, "e": 2787, "s": 2726, "text": "Shape − The shape to be extruded; it is an array of vectors." }, { "code": null, "e": 2848, "s": 2787, "text": "Shape − The shape to be extruded; it is an array of vectors." }, { "code": null, "e": 2921, "s": 2848, "text": "Path − The path to extrude the shape.Array of vectors to draw the shape." }, { "code": null, "e": 2994, "s": 2921, "text": "Path − The path to extrude the shape.Array of vectors to draw the shape." }, { "code": null, "e": 3068, "s": 2994, "text": "Scale − By default it is 1.Scale is the value to scale the initial shape." }, { "code": null, "e": 3142, "s": 3068, "text": "Scale − By default it is 1.Scale is the value to scale the initial shape." }, { "code": null, "e": 3190, "s": 3142, "text": "Rotation − Rotate the shape at each path point." }, { "code": null, "e": 3238, "s": 3190, "text": "Rotation − Rotate the shape at each path point." }, { "code": null, "e": 3333, "s": 3238, "text": "Cap − BABYLON.Mesh.NO_CAP, BABYLON.Mesh.CAP_START, BABYLON.Mesh.CAP_END, BABYLON.Mesh.CAP_ALL." }, { "code": null, "e": 3428, "s": 3333, "text": "Cap − BABYLON.Mesh.NO_CAP, BABYLON.Mesh.CAP_START, BABYLON.Mesh.CAP_END, BABYLON.Mesh.CAP_ALL." }, { "code": null, "e": 3488, "s": 3428, "text": "Scene − The current scene on which the mesh will be drawn.\n" }, { "code": null, "e": 3548, "s": 3488, "text": "Scene − The current scene on which the mesh will be drawn.\n" }, { "code": null, "e": 3625, "s": 3548, "text": "Updatable − By default, it is false.If set true, the mesh will be updatable." }, { "code": null, "e": 3702, "s": 3625, "text": "Updatable − By default, it is false.If set true, the mesh will be updatable." }, { "code": null, "e": 3766, "s": 3702, "text": "SideOrientation − The side orientation - front, back or double." }, { "code": null, "e": 3830, "s": 3766, "text": "SideOrientation − The side orientation - front, back or double." 
}, { "code": null, "e": 6146, "s": 3830, "text": "<!doctype html>\n<html>\n <head>\n <meta charset = \"utf-8\">\n <title>BabylonJs - Basic Element-Creating Scene</title>\n <script src = \"babylon.js\"></script>\n <style>\n canvas {width: 100%; height: 100%;}\n </style>\n </head>\n \n <body>\n <canvas id = \"renderCanvas\"></canvas>\n <script type = \"text/javascript\">\n var canvas = document.getElementById(\"renderCanvas\");\n var engine = new BABYLON.Engine(canvas, true);\n var createScene = function() {\n\n var scene = new BABYLON.Scene(engine);\n scene.clearColor = new BABYLON.Color3( .5, .5, .5);\n\n // camera\n var camera = new BABYLON.ArcRotateCamera(\"camera1\", 0, 0, 0, new BABYLON.Vector3(0, 0, -0), scene);\n camera.setPosition(new BABYLON.Vector3(0, 0, -10));\n camera.attachControl(canvas, true);\n // lights\n \n var light = new BABYLON.HemisphericLight(\"light1\", new BABYLON.Vector3(1, 0.5, 0), scene);\n light.intensity = 0.7;\n \n var spot = new BABYLON.SpotLight(\"spot\", new BABYLON.Vector3(25, 15, -10), new BABYLON.Vector3(-1, -0.8, 1), 15, 1, scene);\n spot.diffuse = new BABYLON.Color3(1, 1, 1);\n spot.specular = new BABYLON.Color3(0, 0, 0);\n spot.intensity = 0.8;\n\n // shape\n var shape = [\n new BABYLON.Vector3(2, 0, 0),\n new BABYLON.Vector3(2, 2, 0),\n new BABYLON.Vector3(1, 2, 0),\n new BABYLON.Vector3(0, 3, 0),\n new BABYLON.Vector3(-1, 2, 0),\n new BABYLON.Vector3(-2, 2, 0),\n new BABYLON.Vector3(-2, 0, 0),\n new BABYLON.Vector3(-2, -2, 0),\n new BABYLON.Vector3(-1, -2, 0),\n new BABYLON.Vector3(0, -3, 0),\n new BABYLON.Vector3(1, -2, 0),\n new BABYLON.Vector3(2, -2, 0),\n ];\n shape.push(shape[0]);\n\n var shapeline = BABYLON.Mesh.CreateLines(\"sl\", shape, scene);\n shapeline.color = BABYLON.Color3.Green(); \n return scene;\n };\n var scene = createScene();\n engine.runRenderLoop(function() {\n scene.render();\n });\n </script>\n </body>\n</html>" }, { "code": null, "e": 6202, "s": 6146, "text": "The above line of code generates the following output −" }, { "code": null, "e": 6389, "s": 6202, "text": "In the above example, the lines are drawn in the x, y coordinates. Let us now apply 3D with the help of extrusion. For this, babylonjs has a class for extrusion which is explained below." 
}, { "code": null, "e": 9180, "s": 6389, "text": "<!doctype html>\n<html>\n <head>\n <meta charset = \"utf-8\">\n <title>BabylonJs - Basic Element-Creating Scene</title>\n <script src = \"babylon.js\"></script>\n <style>\n canvas {width: 100%; height: 100%;}\n </style>\n </head>\n\n <body>\n <canvas id = \"renderCanvas\"></canvas>\n <script type = \"text/javascript\">\n var canvas = document.getElementById(\"renderCanvas\");\n var engine = new BABYLON.Engine(canvas, true);\n \n var createScene = function() {\n var scene = new BABYLON.Scene(engine);\n scene.clearColor = new BABYLON.Color3( .5, .5, .5);\n\n // camera\n var camera = new BABYLON.ArcRotateCamera(\"camera1\", 0, 0, 0, new BABYLON.Vector3(0, 0, -0), scene);\n camera.setPosition(new BABYLON.Vector3(0, 0, -10));\n camera.attachControl(canvas, true);\n // lights\n \n var light = new BABYLON.HemisphericLight(\"light1\", new BABYLON.Vector3(1, 0.5, 0), scene);\n light.intensity = 0.7;\n \n var spot = new BABYLON.SpotLight(\"spot\", new BABYLON.Vector3(25, 15, -10), new BABYLON.Vector3(-1, -0.8, 1), 15, 1, scene);\n spot.diffuse = new BABYLON.Color3(1, 1, 1);\n spot.specular = new BABYLON.Color3(0, 0, 0);\n spot.intensity = 0.8;\n \n var mat = new BABYLON.StandardMaterial(\"mat1\", scene);\n mat.alpha = 1.0;\n mat.diffuseColor = new BABYLON.Color3(0.5, 0.5, 1.0);\n mat.backFaceCulling = false;\n\n // shape\n var shape = [\n new BABYLON.Vector3(2, 0, 0),\n new BABYLON.Vector3(2, 2, 0),\n new BABYLON.Vector3(1, 2, 0),\n new BABYLON.Vector3(0, 3, 0),\n new BABYLON.Vector3(-1, 2, 0),\n new BABYLON.Vector3(-2, 2, 0),\n new BABYLON.Vector3(-2, 0, 0),\n new BABYLON.Vector3(-2, -2, 0),\n new BABYLON.Vector3(-1, -2, 0),\n new BABYLON.Vector3(0, -3, 0),\n new BABYLON.Vector3(1, -2, 0),\n new BABYLON.Vector3(2, -2, 0),\n ];\n \n shape.push(shape[0]);\n var path = [ BABYLON.Vector3.Zero(), new BABYLON.Vector3(0, 0, -1) ];\n \n var shapeline = BABYLON.Mesh.CreateLines(\"sl\", shape, scene);\n shapeline.color = BABYLON.Color3.Green(); \n \n var extruded = BABYLON.Mesh.ExtrudeShape(\"extruded\", shape, path, 1, 0, 0, scene);\n\n extruded.material = mat;\n return scene;\n };\n var scene = createScene();\n engine.runRenderLoop(function() {\n scene.render();\n });\n </script>\n </body>\n</html>" }, { "code": null, "e": 9415, "s": 9180, "text": "For polygonmeshbuilder uses earcut structure and for that to work fine we need an additional file which can be taken from cdn (https://unpkg.com/[email protected]/dist/earcut.min.js) or npm package(https://github.com/mapbox/earcut#install)" }, { "code": null, "e": 12386, "s": 9415, "text": "<!doctype html>\n<html>\n <head>\n <meta charset = \"utf-8\">\n <title>BabylonJs - Basic Element-Creating Scene</title>\n <script src=\"https://unpkg.com/[email protected]/dist/earcut.min.js\"></script>\n <script src = \"babylon.js\"></script>\n <style>\n canvas {width: 100%; height: 100%;}\n </style>\n </head>\n\n <body>\n <canvas id = \"renderCanvas\"></canvas>\n <script type = \"text/javascript\">\n var canvas = document.getElementById(\"renderCanvas\");\n var engine = new BABYLON.Engine(canvas, true);\n \n var createScene = function() {\n var scene = new BABYLON.Scene(engine);\n scene.clearColor = new BABYLON.Color3(0, 0, 1);\n \n var camera = new BABYLON.ArcRotateCamera(\"Camera\", -Math.PI/2, Math.PI/4, 25, BABYLON.Vector3.Zero(), scene);\n camera.attachControl(canvas, true);\n\n var light = new BABYLON.HemisphericLight(\"light1\", new BABYLON.Vector3(0, 10, 0), scene);\n light.intensity = 0.5;\n\n var corners = [ \n new 
BABYLON.Vector2(4, 0),\n new BABYLON.Vector2(3, 1),\n new BABYLON.Vector2(2, 3),\n new BABYLON.Vector2(2, 4),\n new BABYLON.Vector2(1, 3),\n new BABYLON.Vector2(0, 3),\n new BABYLON.Vector2(-1, 3),\n new BABYLON.Vector2(-3, 4),\n new BABYLON.Vector2(-2, 2),\n new BABYLON.Vector2(-3, 0),\n new BABYLON.Vector2(-3, -2),\n new BABYLON.Vector2(-3, -3),\n new BABYLON.Vector2(-2, -2),\n new BABYLON.Vector2(0, -2),\n new BABYLON.Vector2(3, -2),\n new BABYLON.Vector2(3, -1),\t\n ];\n\n var hole = [ \n new BABYLON.Vector2(1, -1),\n new BABYLON.Vector2(1.5, 0),\n new BABYLON.Vector2(1.4, 1),\n new BABYLON.Vector2(0.5, 1.5)\n ] \n\n var poly_tri = new BABYLON.PolygonMeshBuilder(\"polytri\", corners, scene);\n poly_tri.addHole(hole);\n var polygon = poly_tri.build(null, 0.5);\n polygon.position.y = + 4;\n\n var poly_path = new BABYLON.Path2(2, 0);\n poly_path.addLineTo(5, 2);\n poly_path.addLineTo(1, 2);\n poly_path.addLineTo(-5, 5);\n poly_path.addLineTo(-3, 1);\n poly_path.addLineTo(-4, -4);\n poly_path.addArcTo(0, -2, 4, -4, 100);\n\n var poly_tri2 = new BABYLON.PolygonMeshBuilder(\"polytri2\", poly_path, scene);\n poly_tri2.addHole(hole);\n \n var polygon2 = poly_tri2.build(false, 0.5); //updatable, extrusion depth - both optional\n polygon2.position.y = -4;\n return scene;\n };\n var scene = createScene();\n engine.runRenderLoop(function() {\n scene.render();\n });\n </script>\n </body>\n</html>" }, { "code": null, "e": 12435, "s": 12386, "text": "Following is the syntax for PolygonMeshBuilder −" }, { "code": null, "e": 12514, "s": 12435, "text": "var poly_tri2 = new BABYLON.PolygonMeshBuilder(\"polytri2\", poly_path, scene);\n" }, { "code": null, "e": 12521, "s": 12514, "text": " Print" }, { "code": null, "e": 12532, "s": 12521, "text": " Add Notes" } ]
Compiler Design - Semantic Analysis
We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse-tree constructed in that phase is generally of no use for a compiler, as it does not carry any information about how to evaluate the tree. The productions of the context-free grammar, which make up the rules of the language, do not specify how to interpret them.

For example

E → E + T

The above CFG production has no semantic rule associated with it, and it cannot help in making any sense of the production.

Semantics of a language provide meaning to its constructs, like tokens and syntax structure. Semantics help interpret symbols, their types, and their relations with each other. Semantic analysis judges whether the syntax structure constructed in the source program derives any meaning or not.

CFG + semantic rules = Syntax Directed Definitions

For example:

int a = "value";

should not issue an error in the lexical and syntax analysis phases, as it is lexically and structurally correct, but it should generate a semantic error, as the type of the assignment differs. These rules are set by the grammar of the language and evaluated in semantic analysis. The following tasks should be performed in semantic analysis:

Scope resolution

Type checking

Array-bound checking

We have mentioned some of the semantic errors that the semantic analyzer is expected to recognize:

Type mismatch

Undeclared variable

Reserved identifier misuse

Multiple declaration of a variable in a scope

Accessing an out-of-scope variable

Actual and formal parameter mismatch

Attribute grammar is a special form of context-free grammar where some additional information (attributes) is appended to one or more of its non-terminals in order to provide context-sensitive information. Each attribute has a well-defined domain of values, such as integer, float, character, string, and expressions.

Attribute grammar is a medium to provide semantics to the context-free grammar, and it can help specify the syntax and semantics of a programming language. Attribute grammar (when viewed as a parse-tree) can pass values or information among the nodes of a tree.

Example:

E → E + T { E.value = E.value + T.value }

The right part of the CFG contains the semantic rules that specify how the grammar should be interpreted. Here, the values of non-terminals E and T are added together and the result is copied to the non-terminal E.

Semantic attributes may be assigned values from their domain at the time of parsing and evaluated at the time of assignment or conditions. Based on the way the attributes get their values, they can be broadly divided into two categories: synthesized attributes and inherited attributes.

These attributes get values from the attribute values of their child nodes. To illustrate, assume the following production:

S → ABC

If S is taking values from its child nodes (A, B, C), then it is said to be a synthesized attribute, as the values of A, B, and C are synthesized into S.

As in our previous example (E → E + T), the parent node E gets its value from its child node. Synthesized attributes never take values from their parent nodes or any sibling nodes.

In contrast to synthesized attributes, inherited attributes can take values from the parent and/or siblings. As in the following production,

S → ABC

A can get values from S, B and C. B can take values from S, A, and C. Likewise, C can take values from S, A, and B.
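To make synthesized attributes concrete, here is a minimal sketch in Python, assuming a hand-built parse tree for 3 + 4 + 5 and the rule E.value = E.value + T.value from the example above; the Node class and its field names are illustrative only, not part of any standard compiler API.

# Minimal sketch of bottom-up evaluation of a synthesized attribute.
# The production is E -> E + T with the rule E.value = E.value + T.value;
# leaves carry their own value, and every parent synthesizes its value
# from its children, never from its parent or siblings.

class Node:
    def __init__(self, symbol, value=None, children=None):
        self.symbol = symbol            # grammar symbol, e.g. 'E' or 'T'
        self.value = value              # the synthesized attribute
        self.children = children or []

def synthesize(node):
    # post-order traversal: evaluate children first, then the parent's rule
    for child in node.children:
        synthesize(child)
    if node.children:                   # apply E.value = E.value + T.value
        node.value = node.children[0].value + node.children[-1].value
    return node.value

# parse tree for "3 + 4 + 5", built by hand for the example
# (the '+' terminals are omitted for brevity)
tree = Node('E', children=[
    Node('E', children=[Node('E', value=3), Node('T', value=4)]),
    Node('T', value=5),
])
print(synthesize(tree))  # 12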
Expansion : When a non-terminal is expanded to terminals as per a grammatical rule.

Reduction : When a terminal is reduced to its corresponding non-terminal according to grammar rules. Syntax trees are parsed top-down and left to right. Whenever reduction occurs, we apply its corresponding semantic rules (actions).

Semantic analysis uses Syntax Directed Translations to perform the above tasks.

The semantic analyzer receives the AST (Abstract Syntax Tree) from its previous stage (syntax analysis).

The semantic analyzer attaches attribute information to the AST, which is then called an attributed AST.

Attributes are two-tuple values: <attribute name, attribute value>

For example:

int value = 5;
<type, "integer">
<presentvalue, "5">

For every production, we attach a semantic rule.

If an SDT uses only synthesized attributes, it is called an S-attributed SDT. These attributes are evaluated using S-attributed SDTs that have their semantic actions written after the production (on the right-hand side).

As depicted above, attributes in S-attributed SDTs are evaluated in bottom-up parsing, as the values of the parent nodes depend upon the values of the child nodes.

This form of SDT uses both synthesized and inherited attributes, with the restriction of not taking values from right siblings. In L-attributed SDTs, a non-terminal can get values from its parent, child, and sibling nodes. As in the following production,

S → ABC

S can take values from A, B, and C (synthesized). A can take values from S only. B can take values from S and A. C can get values from S, A, and B. No non-terminal can get values from the sibling to its right.

Attributes in L-attributed SDTs are evaluated in a depth-first, left-to-right parsing manner.

We may conclude that if a definition is S-attributed, then it is also L-attributed, as an L-attributed definition encloses S-attributed definitions.
[ { "code": null, "e": 2548, "s": 2193, "text": "We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse-tree constructed in that phase is generally of no use for a compiler, as it does not carry any information of how to evaluate the tree. The productions of context-free grammar, which makes the rules of the language, do not accommodate how to interpret them." }, { "code": null, "e": 2560, "s": 2548, "text": "For example" }, { "code": null, "e": 2570, "s": 2560, "text": "E → E + T" }, { "code": null, "e": 2694, "s": 2570, "text": "The above CFG production has no semantic rule associated with it, and it cannot help in making any sense of the production." }, { "code": null, "e": 2987, "s": 2694, "text": "Semantics of a language provide meaning to its constructs, like tokens and syntax structure. Semantics help interpret symbols, their types, and their relations with each other. Semantic analysis judges whether the syntax structure constructed in the source program derives any meaning or not." }, { "code": null, "e": 3038, "s": 2987, "text": "CFG + semantic rules = Syntax Directed Definitions" }, { "code": null, "e": 3051, "s": 3038, "text": "For example:" }, { "code": null, "e": 3068, "s": 3051, "text": "int a = “value”;" }, { "code": null, "e": 3405, "s": 3068, "text": "should not issue an error in lexical and syntax analysis phase, as it is lexically and structurally correct, but it should generate a semantic error as the type of the assignment differs. These rules are set by the grammar of the language and evaluated in semantic analysis. The following tasks should be performed in semantic analysis:" }, { "code": null, "e": 3422, "s": 3405, "text": "Scope resolution" }, { "code": null, "e": 3436, "s": 3422, "text": "Type checking" }, { "code": null, "e": 3457, "s": 3436, "text": "Array-bound checking" }, { "code": null, "e": 3559, "s": 3459, "text": "We have mentioned some of the semantics errors that the semantic analyzer is expected to recognize:" }, { "code": null, "e": 3573, "s": 3559, "text": "Type mismatch" }, { "code": null, "e": 3593, "s": 3573, "text": "Undeclared variable" }, { "code": null, "e": 3621, "s": 3593, "text": "Reserved identifier misuse." }, { "code": null, "e": 3666, "s": 3621, "text": "Multiple declaration of variable in a scope." }, { "code": null, "e": 3702, "s": 3666, "text": "Accessing an out of scope variable." }, { "code": null, "e": 3740, "s": 3702, "text": "Actual and formal parameter mismatch." }, { "code": null, "e": 4057, "s": 3740, "text": "Attribute grammar is a special form of context-free grammar where some additional information (attributes) are appended to one or more of its non-terminals in order to provide context-sensitive information. Each attribute has well-defined domain of values, such as integer, float, character, string, and expressions." }, { "code": null, "e": 4318, "s": 4057, "text": "Attribute grammar is a medium to provide semantics to the context-free grammar and it can help specify the syntax and semantics of a programming language. Attribute grammar (when viewed as a parse-tree) can pass values or information among the nodes of a tree." }, { "code": null, "e": 4327, "s": 4318, "text": "Example:" }, { "code": null, "e": 4369, "s": 4327, "text": "E → E + T { E.value = E.value + T.value }" }, { "code": null, "e": 4584, "s": 4369, "text": "The right part of the CFG contains the semantic rules that specify how the grammar should be interpreted. 
Here, the values of non-terminals E and T are added together and the result is copied to the non-terminal E." }, { "code": null, "e": 4881, "s": 4584, "text": "Semantic attributes may be assigned to their values from their domain at the time of parsing and evaluated at the time of assignment or conditions. Based on the way the attributes get their values, they can be broadly divided into two categories : synthesized attributes and inherited attributes." }, { "code": null, "e": 5005, "s": 4881, "text": "These attributes get values from the attribute values of their child nodes. To illustrate, assume the following production:" }, { "code": null, "e": 5013, "s": 5005, "text": "S → ABC" }, { "code": null, "e": 5155, "s": 5013, "text": "If S is taking values from its child nodes (A,B,C), then it is said to be a synthesized attribute, as the values of ABC are synthesized to S." }, { "code": null, "e": 5336, "s": 5155, "text": "As in our previous example (E → E + T), the parent node E gets its value from its child node. Synthesized attributes never take values from their parent nodes or any sibling nodes." }, { "code": null, "e": 5473, "s": 5336, "text": "In contrast to synthesized attributes, inherited attributes can take values from parent and/or siblings. As in the following production," }, { "code": null, "e": 5481, "s": 5473, "text": "S → ABC" }, { "code": null, "e": 5597, "s": 5481, "text": "A can get values from S, B and C. B can take values from S, A, and C. Likewise, C can take values from S, A, and B." }, { "code": null, "e": 5680, "s": 5597, "text": "Expansion : When a non-terminal is expanded to terminals as per a grammatical rule" }, { "code": null, "e": 5913, "s": 5680, "text": "Reduction : When a terminal is reduced to its corresponding non-terminal according to grammar rules. Syntax trees are parsed top-down and left to right. Whenever reduction occurs, we apply its corresponding semantic rules (actions)." }, { "code": null, "e": 5993, "s": 5913, "text": "Semantic analysis uses Syntax Directed Translations to perform the above tasks." }, { "code": null, "e": 6090, "s": 5993, "text": "Semantic analyzer receives AST (Abstract Syntax Tree) from its previous stage (syntax analysis)." }, { "code": null, "e": 6182, "s": 6090, "text": "Semantic analyzer attaches attribute information with AST, which are called Attributed AST." }, { "code": null, "e": 6248, "s": 6182, "text": "Attributes are two tuple value, <attribute name, attribute value>" }, { "code": null, "e": 6261, "s": 6248, "text": "For example:" }, { "code": null, "e": 6315, "s": 6261, "text": "int value = 5;\n<type, “integer”>\n<presentvalue, “5”>" }, { "code": null, "e": 6364, "s": 6315, "text": "For every production, we attach a semantic rule." }, { "code": null, "e": 6578, "s": 6364, "text": "If an SDT uses only synthesized attributes, it is called as S-attributed SDT. These attributes are evaluated using S-attributed SDTs that have their semantic actions written after the production (right hand side)." }, { "code": null, "e": 6742, "s": 6578, "text": "As depicted above, attributes in S-attributed SDTs are evaluated in bottom-up parsing, as the values of the parent nodes depend upon the values of the child nodes." }, { "code": null, "e": 6865, "s": 6742, "text": "This form of SDT uses both synthesized and inherited attributes with restriction of not taking values from right siblings." 
}, { "code": null, "e": 6991, "s": 6865, "text": "In L-attributed SDTs, a non-terminal can get values from its parent, child, and sibling nodes. As in the following production" }, { "code": null, "e": 6999, "s": 6991, "text": "S → ABC" }, { "code": null, "e": 7209, "s": 6999, "text": "S can take values from A, B, and C (synthesized). A can take values from S only. B can take values from S and A. C can get values from S, A, and B. No non-terminal can get values from the sibling to its right." }, { "code": null, "e": 7304, "s": 7209, "text": "Attributes in L-attributed SDTs are evaluated by depth-first and left-to-right parsing manner." }, { "code": null, "e": 7449, "s": 7304, "text": "We may conclude that if a definition is S-attributed, then it is also L-attributed as L-attributed definition encloses S-attributed definitions." }, { "code": null, "e": 7484, "s": 7449, "text": "\n 102 Lectures \n 10 hours \n" }, { "code": null, "e": 7503, "s": 7484, "text": " Arnab Chakraborty" }, { "code": null, "e": 7510, "s": 7503, "text": " Print" }, { "code": null, "e": 7521, "s": 7510, "text": " Add Notes" } ]
How-to Uninstall PostgreSQL 13.3 and Reinstall via Brew | by Wen Yang | Towards Data Science
Who is this for? Anyone who needs to completely uninstall PostgreSQL 13.3, which was installed via the installer.

This article will cover three topics:

How to uninstall PostgreSQL 13.3
How to reinstall PostgreSQL back via brew
Test to see if it's working: create a database and a user, and grant privileges

1. How to uninstall PostgreSQL 13.3

Step 1: Open your terminal. Check the installed version as well as its location. In my case, it is installed under /Library/PostgreSQL/13/bin/psql

# check version
$ postgres --version
postgres (PostgreSQL) 13.3
# locate where it is installed
$ which psql
/Library/PostgreSQL/13/bin/psql

Step 2: Depending on whether the uninstall-postgres.app is installed, we have two solutions.

Solution 2A: Change the directory to run uninstall-postgres.app

This app is located one directory above the bin folder, which in my case is /Library/PostgreSQL/13 .

# change directory
$ cd /Library/PostgreSQL/13
$ open uninstall-postgres.app

If the Uninstallation window prompts, you can follow this guide on the section [Uninstalling PostgreSQL on Mac].

However, this solution didn't work for me. I received an error message:

$ open uninstall-postgres.app
The file /Library/PostgreSQL/13/uninstall-postgres.app does not exist.

After trying many other methods online, though none seemed to lead to fruition, I noticed an interesting pattern: for the same function, some people would use postgres, and other people would use postgresql. Out of desperation, I accidentally discovered solution 2B.

Solution 2B: Just change → $ open uninstall-postgres.app to $ open uninstall-postgresql.app . It's such a small change but it worked! 🤩

# change directory
$ cd /Library/PostgreSQL/13
$ open uninstall-postgresql.app

The uninstallation window prompted! If this works for you too, you can follow this guide on the section [Uninstalling PostgreSQL on Mac] until Fig 8.

Important note: after you have followed the above guide, all the way until Fig 8, we are not done yet! In order to remove all the Postgres-related files, you need step 3.

Step 3: Remove Postgres-related files

# change to home directory
$ cd ~
$ sudo rm -rf /Library/PostgreSQL
$ sudo rm /etc/postgres-reg.ini
# some people also suggested to remove sysctl.conf
# but I don't seem to have this file in my environment
# so I ignored it. You can try if you'd like
$ sudo rm /etc/sysctl.conf
rm: /etc/sysctl.conf: No such file or directory

🎉🎉🎉 Hooray! We successfully uninstalled PostgreSQL 13.3!!

2. How to reinstall PostgreSQL back via brew 🍺

The reason I needed to uninstall PostgreSQL is that I couldn't use the same code my coworker was using when I needed to create a test database, and we suspected that there is a difference between PostgreSQL installed via the installer and PostgreSQL installed via brew. Long story short, that turned out to be true; at least in my case, it solved the problem.

Installing PostgreSQL back via brew is very simple; it has two steps:

# 1. update brew
$ brew update
# optional: run brew doctor (I did this.)
$ brew doctor
# 2. install postgresql
$ brew install postgresql

By this point, we can launch PostgreSQL by running the command below.

$ brew services start postgresql

After running that, it tells us we successfully started postgresql:

==> Successfully started `postgresql` (label: homebrew.mxcl.postgresql)

Now, let's test to see if it's working.

3. Test: perform three tasks: create a database, create a user, and grant privileges.

Step 1: Launch Postgres

# 1. launch postgres
$ psql postgres
psql (13.3)
Type "help" for help.
postgres=# ls
postgres-# help
Use \? for help or press control-C to clear the input buffer.
postgres-# \q

You can use the command \l to see all the databases available. For example, this is what I can see.

# note: postgres=# is a prompt, not part of the command
# the command is \l, which lists all databases
postgres=# \l

Step 2: I created a database called discovery_db; you can pick any name that suits your purpose.

postgres=# create database discovery_db;
# use \l to check again
postgres=# \l

Now we have 4 rows and discovery_db is listed on the top. Neat!

Step 3: Create a user with a password.

postgres=# create user discovery_db_user with encrypted password 'discovery_db_pass';

Step 4: Grant all privileges to the user we just created.

postgres=# grant all privileges on database discovery_db to discovery_db_user;

Now, let's check again →

In the output Access privileges, we can see discovery_db_user has the same privileges as the owner wen (me 😊).

Finally, feeling content, we can exit Postgres with \q.

postgres=# \q

Key Takeaways:

1. If you run into Postgres issues and the blog posts you find online don't seem to work for you, try modifying the commands from postgres to postgresql or vice versa.
2. There are many different versions of Postgres. If you can't run other people's code, it might be easier to uninstall Postgres completely and reinstall it instead of debugging for days.
3. I realize this post might be outdated once there's a new version of Postgres, but I hope it at least serves as a time-shot solution for PostgreSQL 13.3 + the macOS Catalina system.
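As an optional bonus check, assuming the psycopg2 package is installed (pip install psycopg2-binary) and the server accepts local password connections, we can confirm the example role and database created above from Python; this is a minimal sketch, not part of the walkthrough itself.

# Minimal connectivity check for the database and user created above.
# Assumes a local Postgres listening on the default port 5432 and the
# example names from this walkthrough.
import psycopg2

conn = psycopg2.connect(
    dbname="discovery_db",
    user="discovery_db_user",
    password="discovery_db_pass",
    host="localhost",
    port=5432,
)
with conn.cursor() as cur:
    cur.execute("SELECT current_database(), current_user;")
    print(cur.fetchone())  # ('discovery_db', 'discovery_db_user')
conn.close()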
[ { "code": null, "e": 188, "s": 171, "text": "Who is this for?" }, { "code": null, "e": 290, "s": 188, "text": "For anyone who needs to completely uninstall PostgresSQL 13.3, which was installed via the installer." }, { "code": null, "e": 328, "s": 290, "text": "This article will cover three topics:" }, { "code": null, "e": 361, "s": 328, "text": "How to uninstall PostgreSQL 13.3" }, { "code": null, "e": 403, "s": 361, "text": "How to reinstall PostgreSQL back via brew" }, { "code": null, "e": 476, "s": 403, "text": "Test to see if it’s working: create database, user, and grant privileges" }, { "code": null, "e": 510, "s": 476, "text": "How to uninstall PostgresSQL 13.3" }, { "code": null, "e": 544, "s": 510, "text": "How to uninstall PostgresSQL 13.3" }, { "code": null, "e": 683, "s": 544, "text": "Step 1: Open your terminal. Check installed version as well as location. In my case, it is installed under /Library/PostgreSQL/13/bin/psql" }, { "code": null, "e": 818, "s": 683, "text": "# check version$ postgres --versionpostgres (PostgreSQL) 13.3# locate where it is installed$ which psql/Library/PostgreSQL/13/bin/psql" }, { "code": null, "e": 911, "s": 818, "text": "Step 2: Depending on whether the uninstall-postgres.app is installed, we have two solutions." }, { "code": null, "e": 924, "s": 911, "text": "Solution 2A:" }, { "code": null, "e": 1078, "s": 924, "text": "Change the directory to run uninstall-postgres.app This app is located upper directory of the bin folder, which in my case it is /Library/PostgreSQL/13 ." }, { "code": null, "e": 1153, "s": 1078, "text": "# change directory$ cd /Library/PostgreSQL/13$ open uninstall-postgres.app" }, { "code": null, "e": 1261, "s": 1153, "text": "If the Uninstallation window prompt, you can follow this guide on section [Uninstalling PostgreSQL on Mac]." }, { "code": null, "e": 1333, "s": 1261, "text": "However, this solution didn’t work for me. I received an error message:" }, { "code": null, "e": 1433, "s": 1333, "text": "$ open uninstall-postgres.appThe file /Library/PostgreSQL/13/uninstall-postgres.app does not exist." }, { "code": null, "e": 1711, "s": 1433, "text": "After trying many other methods online, though none seemed to lead to fruition, I noticed an interesting pattern, that is → for the same function, some people would usepostgres , and other people would use postgresql . Out of desperation, I accidentally discovered solution 2B." }, { "code": null, "e": 1724, "s": 1711, "text": "Solution 2B:" }, { "code": null, "e": 1738, "s": 1724, "text": "Just change →" }, { "code": null, "e": 1771, "s": 1738, "text": "$ open uninstall-postgres.app to" }, { "code": null, "e": 1805, "s": 1771, "text": "$ open uninstall-postgresql.app ." }, { "code": null, "e": 1847, "s": 1805, "text": "It’s such a small change but it worked! 🤩" }, { "code": null, "e": 1924, "s": 1847, "text": "# change directory$ cd /Library/PostgreSQL/13$ open uninstall-postgresql.app" }, { "code": null, "e": 2070, "s": 1924, "text": "The uninstallation window prompted! If this works for you too, you can follow this guide on section [Uninstalling PostgreSQL on Mac] until Fig 8." }, { "code": null, "e": 2236, "s": 2070, "text": "Important note: after you followed the above guide, all the way until Fig 8, we are not done yet! In order to remove all the Postgres related files, you need step 3." 
}, { "code": null, "e": 2274, "s": 2236, "text": "Step 3: remove Postgres related files" }, { "code": null, "e": 2592, "s": 2274, "text": "# change to home directory$ cd ~$ sudo rm -rf /Library/PostgreSQL$ sudo rm /etc/postgres-reg.ini# some people also suggested to remove sysctl.conf# but I don't seem to have this file in my environment# so I ignored it. You can try if you'd like$ sudo rm /etc/sysctl.confrm: /etc/sysctl.conf: No such file or directory" }, { "code": null, "e": 2650, "s": 2592, "text": "🎉🎉🎉 Hooray! We successfully uninstalled PostgreSQL 13.3!!" }, { "code": null, "e": 2697, "s": 2650, "text": "2. How to reinstall PostgreSQL back via brew 🍺" }, { "code": null, "e": 3029, "s": 2697, "text": "The reason I need to uninstall PostgreSQL is that I couldn’t use the same code my coworker was using when I need to create a test database. And we suspect that there’s a difference between PostgreSQL installed via installer and PostgreSQL installed via brew. Long story short, it is true at least in my case, it solved the problem." }, { "code": null, "e": 3096, "s": 3029, "text": "Install PostgreSQL back via brew is very simple, it has two steps:" }, { "code": null, "e": 3228, "s": 3096, "text": "# 1. update brew$ brew update# optional: run brew doctor (I did this.)$ brew doctor# 2. install postgresql$ brew install postgresql" }, { "code": null, "e": 3298, "s": 3228, "text": "By this point, we can launch PostgreSQL by running the below command." }, { "code": null, "e": 3331, "s": 3298, "text": "$ brew services start postgresql" }, { "code": null, "e": 3398, "s": 3331, "text": "After running that, it tells us we successfully started postgresql" }, { "code": null, "e": 3470, "s": 3398, "text": "==> Successfully started `postgresql` (label: homebrew.mxcl.postgresql)" }, { "code": null, "e": 3510, "s": 3470, "text": "Now, let’s test to see if it’s working." }, { "code": null, "e": 3588, "s": 3510, "text": "3. Test: perform three tasks — create a database, user, and grant privileges." }, { "code": null, "e": 3612, "s": 3588, "text": "Step 1: launch Postgres" }, { "code": null, "e": 3782, "s": 3612, "text": "# 1. launch postgres$ psql postgrespsql (13.3)Type \"help\" for help.postgres=# lspostgres-# helpUse \\? for help or press control-C to clear the input buffer.postgres-# \\q" }, { "code": null, "e": 3882, "s": 3782, "text": "you can use the command \\l to see all the databases available. For example, this is what I can see." }, { "code": null, "e": 3997, "s": 3882, "text": "# note: postgres=# is a prompt, not part of the command# the command is \\l, which lists all databasespostgres=# \\l" }, { "code": null, "e": 4099, "s": 3997, "text": "Step 2: I created a database called discovery_db , you can name the database that suits your purpose." }, { "code": null, "e": 4176, "s": 4099, "text": "postgres=# create database discovery_db;# use \\l to check againpostgres=# \\l" }, { "code": null, "e": 4240, "s": 4176, "text": "Now we have 4 rows and discovery_db is listed on the top. Neat!" }, { "code": null, "e": 4279, "s": 4240, "text": "Step 3: Create a user with a password." }, { "code": null, "e": 4365, "s": 4279, "text": "postgres=# create user discovery_db_user with encrypted password 'discovery_db_pass';" }, { "code": null, "e": 4423, "s": 4365, "text": "Step 4: Grant all privileges to the user we just created." 
}, { "code": null, "e": 4502, "s": 4423, "text": "postgres=# grant all privileges on database discovery_db to discovery_db_user;" }, { "code": null, "e": 4527, "s": 4502, "text": "Now, let’s check again →" }, { "code": null, "e": 4640, "s": 4527, "text": "In the output Access privileges , we can see discovery_db_users has the same privileges as the owner wen (me 😊)." }, { "code": null, "e": 4686, "s": 4640, "text": "Finally, we can exit postgres \\qwith content." }, { "code": null, "e": 4700, "s": 4686, "text": "postgres=# \\q" }, { "code": null, "e": 4715, "s": 4700, "text": "Key Takeaways:" }, { "code": null, "e": 5241, "s": 4715, "text": "If you run into Postgres issues and the blog post you found online don’t seem to work for you, try to modify the commands postgres to postgresql or vice versa.There are many different versions of Postgres. If you can’t run other people’s code, it might be easier to uninstall Postgres completely and reinstall instead of debugging for days.I realized this post might be outdated once there’s a new version of Postgres, but I thought it would at least serve as a time-shot solution for PostgreSQL 13.3 + MacOS Catalina system." }, { "code": null, "e": 5401, "s": 5241, "text": "If you run into Postgres issues and the blog post you found online don’t seem to work for you, try to modify the commands postgres to postgresql or vice versa." }, { "code": null, "e": 5583, "s": 5401, "text": "There are many different versions of Postgres. If you can’t run other people’s code, it might be easier to uninstall Postgres completely and reinstall instead of debugging for days." } ]
C++ Array Library - begin() Function
The C++ function std::array::begin() returns an iterator which points to the start of the array.

Following is the declaration for the std::array::begin() function from the std::array header.

iterator begin() noexcept;
const_iterator begin() const noexcept;

Parameters − None.

Return value − If the array object is const-qualified, the method returns a const random access iterator; otherwise, it returns a random access iterator.

Exceptions − This member function never throws an exception.

Time complexity − Constant, i.e., O(1).

The following example shows the usage of the std::array::begin() function.

#include <iostream>
#include <array>

using namespace std;

int main(void) {

   array <int, 5> arr = {1, 2, 3, 4, 5};

   /* iterator pointing at the start of the array */
   auto itr = arr.begin();

   /* traverse complete container */
   while (itr != arr.end()) {
      cout << *itr << " ";
      ++itr; /* increment iterator */
   }

   cout << endl;

   return 0;
}

Let us compile and run the above program; this will produce the following result −

1 2 3 4 5
[ { "code": null, "e": 2700, "s": 2603, "text": "The C++ function std::array::begin() returns an iterator which points to the start of the array." }, { "code": null, "e": 2786, "s": 2700, "text": "Following is the declaration for std::array::begin() function form std::array header." }, { "code": null, "e": 2852, "s": 2786, "text": "iterator begin() noexcept;\nconst_iterator begin() const noexcept;" }, { "code": null, "e": 2857, "s": 2852, "text": "None" }, { "code": null, "e": 2971, "s": 2857, "text": "If array object is const-qualified, method returns const random access iterator otherwise random access iterator." }, { "code": null, "e": 3016, "s": 2971, "text": "This member function never throws exception." }, { "code": null, "e": 3035, "s": 3016, "text": "Constant i.e. O(1)" }, { "code": null, "e": 3106, "s": 3035, "text": "The following example shows the usage of std::array::begin() function." }, { "code": null, "e": 3480, "s": 3106, "text": "#include <iostream>\n#include <array>\n\nusing namespace std;\n\nint main(void) {\n\n array <int, 5> arr = {1, 2, 3, 4, 5};\n\n /* iterator pointing at the start of the array */\n auto itr = arr.begin();\n\n /* traverse complete container */\n while (itr != arr.end()) {\n cout << *itr << \" \";\n ++itr; /* increment iterator */\n }\n\n cout << endl;\n\n return 0;\n}" }, { "code": null, "e": 3563, "s": 3480, "text": "Let us compile and run the above program, this will produce the following result −" }, { "code": null, "e": 3574, "s": 3563, "text": "1 2 3 4 5\n" }, { "code": null, "e": 3581, "s": 3574, "text": " Print" }, { "code": null, "e": 3592, "s": 3581, "text": " Add Notes" } ]
Quotation in Python
Python accepts single ('), double (") and triple (''' or """) quotes to denote string literals, as long as the same type of quote starts and ends the string.

The triple quotes are used to span the string across multiple lines. For example, all the following are legal −

word = 'word'
sentence = "This is a sentence."
paragraph = """This is a paragraph. It is
made up of multiple lines and sentences."""
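As a short runnable illustration of the point above (the variable names are arbitrary):

# Each quoting style below produces an ordinary str object; only the
# triple-quoted form may contain literal newlines.
word = 'word'
sentence = "This is a sentence."
mixed = "don't"          # a single quote is fine inside double quotes
paragraph = """This is a paragraph. It is
made up of multiple lines and sentences."""

print(type(word), type(sentence), type(paragraph))  # all <class 'str'>
print(paragraph.count("\n"))  # 1 -- the embedded line break is preserved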
[ { "code": null, "e": 1220, "s": 1062, "text": "Python accepts single ('), double (\") and triple (''' or \"\"\") quotes to denote string literals, as long as the same type of quote starts and ends the string." }, { "code": null, "e": 1332, "s": 1220, "text": "The triple quotes are used to span the string across multiple lines. For example, all the following are legal −" }, { "code": null, "e": 1465, "s": 1332, "text": "word = 'word'\nsentence = \"This is a sentence.\"\nparagraph = \"\"\"This is a paragraph. It is\nmade up of multiple lines and sentences.\"\"\"" } ]
stopImmediatePropagation() Method
The stopImmediatePropagation() method stops the rest of the handlers from being executed. This method also stops the bubbling by calling event.stopPropagation().

You can use event.isImmediatePropagationStopped() to know whether this method was ever called (on that event object).

Here is the simple syntax to use this method −

event.stopImmediatePropagation()

This method does not take any parameters.

Following is a simple example showing the usage of this method −

<html>
   <head>
      <title>The jQuery Example</title>
      <script type = "text/javascript"
         src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js">
      </script>

      <script type = "text/javascript" language = "javascript">
         $(document).ready(function() {

            $("div").click(function(event){
               alert("1 - This is : " + $(this).text());
               // Comment the following to see the effect.
               event.stopImmediatePropagation();
            });

            // This won't be executed.
            $("div").click(function(event){
               alert("2 - This is : " + $(this).text());
            });

         });
      </script>

      <style>
         div{ margin:10px;padding:12px; border:2px solid #666; width:160px;}
      </style>
   </head>

   <body>
      <p>Click on any box to see the result:</p>

      <div id = "div1" style = "background-color:blue;">
         BOX 1
      </div>

      <div id = "div2" style = "background-color:red;">
         BOX 2
      </div>
   </body>
</html>

This will produce the following result −

Click on any box to see the result −
[ { "code": null, "e": 2484, "s": 2322, "text": "The stopImmediatePropagation() method stops the rest of the handlers from being executed. This method also stops the bubbling by calling event.stopPropagation()." }, { "code": null, "e": 2602, "s": 2484, "text": "You can use event.isImmediatePropagationStopped() to know whether this method was ever called (on that event object)." }, { "code": null, "e": 2649, "s": 2602, "text": "Here is the simple syntax to use this method −" }, { "code": null, "e": 2684, "s": 2649, "text": "event.stopImmediatePropagation() \n" }, { "code": null, "e": 2752, "s": 2684, "text": "Here is the description of all the parameters used by this method −" }, { "code": null, "e": 2755, "s": 2752, "text": "NA" }, { "code": null, "e": 2758, "s": 2755, "text": "NA" }, { "code": null, "e": 2832, "s": 2758, "text": "Following is a simple example a simple showing the usage of this method −" }, { "code": null, "e": 3935, "s": 2832, "text": "<html>\n <head>\n <title>The jQuery Example</title>\n <script type = \"text/javascript\" \n src = \"https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js\">\n </script>\n\t\t\n <script type = \"text/javascript\" language = \"javascript\">\n $(document).ready(function() {\n\t\t\t\n $(\"div\").click(function(event){\n alert(\"1 - This is : \" + $(this).text());\n // Comment the following to see the effect.\n event.stopImmediatePropagation();\n });\n\t\t\t\t\n // This won't be executed.\n $(\"div\").click(function(event){\n alert(\"2 - This is : \" + $(this).text());\n });\n\t\t\t\t\n });\n </script>\n\t\t\n <style>\n div{ margin:10px;padding:12px; border:2px solid #666; width:160px;}\n </style>\n </head>\n\t\n <body>\n <p>Click on any box to see the result:</p>\n\t\t\n <div id = \"div1\" style = \"background-color:blue;\">\n BOX 1\n </div>\n\t\t\n <div id = \"div2\" style = \"background-color:red;\">\n BOX 2\n </div> \n </body>\n</html>" }, { "code": null, "e": 3972, "s": 3935, "text": "This will produce following result −" }, { "code": null, "e": 4009, "s": 3972, "text": "Click on any box to see the result −" }, { "code": null, "e": 4042, "s": 4009, "text": "\n 27 Lectures \n 1 hours \n" }, { "code": null, "e": 4056, "s": 4042, "text": " Mahesh Kumar" }, { "code": null, "e": 4091, "s": 4056, "text": "\n 27 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4105, "s": 4091, "text": " Pratik Singh" }, { "code": null, "e": 4140, "s": 4105, "text": "\n 72 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4157, "s": 4140, "text": " Frahaan Hussain" }, { "code": null, "e": 4190, "s": 4157, "text": "\n 60 Lectures \n 9 hours \n" }, { "code": null, "e": 4218, "s": 4190, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 4251, "s": 4218, "text": "\n 17 Lectures \n 2 hours \n" }, { "code": null, "e": 4272, "s": 4251, "text": " Sandip Bhattacharya" }, { "code": null, "e": 4304, "s": 4272, "text": "\n 12 Lectures \n 53 mins\n" }, { "code": null, "e": 4321, "s": 4304, "text": " Laurence Svekis" }, { "code": null, "e": 4328, "s": 4321, "text": " Print" }, { "code": null, "e": 4339, "s": 4328, "text": " Add Notes" } ]
Analyzing commonly used slang words on TikTok using Twitter | by Kieran Tan Kah Wang | Towards Data Science
TikTok has been on the rise in recent years globally, especially among the Gen-Z population born between 1997 and 2015. Slang words like LOL (laugh out loud), ROFL (rolling on the floor laughing), and FOMO (fear of missing out) are ones most of us know, but has the rise of TikTok brought along another set of slang words?

Indeed, according to TheSmartLocal (Singapore's leading travel and lifestyle portal), 9 TikTok slang terms were identified as commonly used by Gen-Z kids. What do words like bussin, no cap, and sheesh refer to on the platform? Do common English words like bestie, fax, or slaps still mean the same? The article explained what each of the 9 slang words means and how they can be used in conversation.

In this article, we are going to analyze each of these slang words from a data science perspective, where we will use Python to apply some Natural Language Processing (NLP) techniques like sentiment analysis and topic modeling. This will give us a better idea of which words are commonly used together with the slang words, the sentiments of the messages, and the topics discussed together with these words.

We are going to make use of another social media platform, Twitter, whereby tweets containing any of the 9 slang words were captured for one month, which forms our entire dataset.

The Twitter public API library on Python, twitter, allows us to collect tweets for the last 7 days after obtaining authentication with our consumer key and access token. Once we initialize the connection to Twitter, we set the latitude, the longitude, and the maximum search radius, and set the search query to the slang word. As I'm interested in extracting tweets tweeted by users in Singapore, I've set the geographical location to Singapore. The favorite counts and retweet counts were also collected from Twitter.

# import libraries
from twitter import *
import pandas as pd
from datetime import datetime

# store the slang words in a list
slangs = ['fax', 'no cap', 'ceo of', 'stan', 'bussin', 'slaps', 'hits different', 'sheesh', 'bestie']

# twitter authentication
consumer_key = '...'
consumer_secret = '...'
access_token = '...'
access_token_secret = '...'
twitter = Twitter(auth = OAuth(access_token, access_token_secret, consumer_key, consumer_secret))

# dataframe to store the collected tweets
# (initialization assumed -- not shown in the original post;
# column names inferred from the fields appended below)
tweets = pd.DataFrame(columns = ['slang', 'tweets', 'created_at', 'retweet_count', 'favorite_count'])

# set latitude & longitude to Singapore, maximum radius 20km
latitude = 1.3521
longitude = 103.8198
max_range = 20

# loop through each of the slang words
for each in slangs:
    # extract tweets with query containing the slang word; note max count for standard API is 100
    query = twitter.search.tweets(q = each, geocode = "%f,%f,%dkm" % (latitude, longitude, max_range),\
                                  lang = 'en', count = 100)
    # once done, loop through each tweet
    for i in range(0, len(query['statuses'])):
        # store the slang word, tweet, created time, retweet count & favorite count as a list variable
        temp_list = [each, query['statuses'][i]['text'],\
                     datetime.strptime(query['statuses'][i]['created_at'], '%a %b %d %H:%M:%S %z %Y').strftime('%Y-%m-%d'),\
                     query['statuses'][i]['retweet_count'], query['statuses'][i]['favorite_count']]
        # append list to tweets dataframe
        tweets.loc[len(tweets)] = temp_list
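As a quick aside, the created_at parsing inside the loop above expects Twitter's classic timestamp format; here is a minimal standalone sketch of that conversion (the sample string is made up for illustration):

from datetime import datetime

# Twitter's classic created_at format, e.g. "Wed Aug 04 10:15:32 +0000 2021"
raw = "Wed Aug 04 10:15:32 +0000 2021"
parsed = datetime.strptime(raw, '%a %b %d %H:%M:%S %z %Y')
print(parsed.strftime('%Y-%m-%d'))  # 2021-08-04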
# function to clean column, tokenize & consolidate into corpus listdef column_cleaner(column, slang_word): # convert all to lower case and store in a list variable corpus = column.str.lower().tolist() # filter off single word responses, 'nil', 'nan' corpus = [x for x in corpus if len(x.split(' ')) > 1] corpus = [x for x in corpus if x != 'nan'] corpus = [x for x in corpus if x != 'nil'] # remove punctuations, links, urls for i in range (len(corpus)): x = corpus[i].replace('\n',' ') #cleaning newline “\n” from the tweets corpus[i] = html.unescape(x) corpus[i] = re.sub(r'(@[A-Za-z0–9_]+)|[^\w\s]|#|http\S+', '', corpus[i]) We then extend the above function column_cleanerto tokenize (from RegexpTokenizer function) the tweets into individual words, remove stopwords (from nltk package)/digits and perform lemmatization using part-of-speech (from WordNetLemmatizer function). # empty list to store cleaned corpus cleaned_corpus = [] # extend this slang into stopwords stopwords = nltk.corpus.stopwords.words("english") stopwords.extend([slang_word]) # tokenise each tweet, remove stopwords & digits & punctuations and filter to len > 2, lemmatize using Part-of-speech for i in range (0, len(corpus)): words = [w for w in tokenizer.tokenize(corpus[i]) if w.lower() not in stopwords] cleaned_words = [x for x in words if len(x) > 2] lemmatized_words = [wordnet_lemmatizer.lemmatize(x, pos = 'v') for x in cleaned_words] cleaned_corpus.extend(lemmatized_words) return cleaned_corpus This whole function will then be run on our collected tweets dataset and each tweet will be tokenized which we will then be able to plot the top n words associated with the slang word on our visualization software (i.e. Tableau). # loop through each slang wordfor each in slangs: # filter dataframe to responses with regards to this slang word temp_pd = tweets.loc[tweets.slang == each, :] # save result in temp pandas dataframe for easy output temp_result = pd.DataFrame(columns = ['slang', 'word']) # run column_cleaner function on the tweets temp_result['word'] = column_cleaner(temp_pd['tweets'], each) # add slang to slang column temp_result['slang'] = each # append temp_result to result result = result.append(temp_result, ignore_index = True) We can make use of the Python package textblob to conduct simple sentiment analysis where we will label each tweet as Positive if the polarity score > 0, Negative if the polarity score <0, else Neutral. Note that the above function column_cleaner need not be run before we conduct sentiment analysis as the textblob package can extract polarity scores directly from the raw tweets. from textblob import TextBlob# empty list to store polarity scorepolarity_score = []# loop through all tweetsfor i in range (0, len(tweets)): # run TextBlob on this tweet temp_blob = TextBlob(tweets.tweets[i]) # obtain polarity score of this tweet and store in polarity_score list # if polarity score > 0, positive. else if < 0, negative. else if 0, neutral. if temp_blob.sentiment.polarity > 0: polarity_score.append('Positive') elif temp_blob.sentiment.polarity < 0: polarity_score.append('Negative') else: polarity_score.append('Neutral') # create polarity_score column in tweets dataframetweets['polarity_score'] = polarity_score Next, we write a function that can perform topic modeling on our tweets. Topic modeling is a type of statistical modeling for discovering the abstract “topics” that occur in a collection of documents. 
We will use the common Latent Dirichlet Allocation (LDA) algorithm which is used to classify text in a document to a particular topic, and we can find it on sklearnlibrary package. In our function, we will also generate bigrams and trigrams using the individual tokens, and the top 3topics for each slang word. from sklearn.decomposition import LatentDirichletAllocationfrom sklearn.feature_extraction.text import TfidfVectorizerfrom sklearn.pipeline import make_pipeline# function to conduct topic modellingdef topic_modeller(column, no_topic, slang_word): # extend this slang into stopwords stopwords = nltk.corpus.stopwords.words("english") stopwords.extend([slang_word]) # set up vectorizer that remove stopwords, generate bigrams/trigrams tfidf_vectorizer = TfidfVectorizer(stop_words = stopwords, ngram_range = (2, 3)) # set the number of topics in lda model lda = LatentDirichletAllocation(n_components = no_topic) # create a pipeline that vectorise and then perform LDA pipe = make_pipeline(tfidf_vectorizer, lda) # run the pipe on the cleaned column pipe.fit(topic_column_cleaner(column, slang_word)) # inner function to return the topics and associated words def print_top_words(model, feature_names, n_top_words): result = [] for topic_idx, topic in enumerate(model.components_): message = [feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]] result.append(message) return result return print_top_words(lda, tfidf_vectorizer.get_feature_names(), n_top_words = 3) Once we get our dataset ready, we can plot our findings onto a data visualization software, i.e. Tableau. As this article focuses more on the steps needed to collect the data and generating our insights, I shall not discuss how I managed to plot my findings onto Tableau. You can refer to the Tableau dashboard on my Tableau Public profile. Let’s take the slang word sheesh for example where we can filter our Tableau dashboard to that slang and the whole dashboard will be refreshed. Isn’t the idea cool where an iPhone wireframe was used to act as a user filter? A total of 173 tweets were collected during the period of August 2021 in Singapore, and our polarity scores revealed that 31.8% of the tweets were positive, 47.4% neutral, and 20.8% negative. It seems to suggest the slang sheesh carries more of a neutral to positive meaning. In our Python code, we tokenized our tweets so that we can rank the words in accordance with their frequencies among all the tweets containing that slang word. Our visualization shows that words like like, schmuck, and sembab (means swollen in Indonesian) seemed to suggest sheesh was used to further aggravate the impact of something. Looking at the 3 topics modeled, our assumption that sheesh was used to aggravate the impact of something is further suggested in topics like craving murtabak, sounds good and cute girls. Indeed, according to TheSmartLocal article, the word sheesh is used similarly as damn to express disbelief or exasperation. If we look at some of our tweets, “Sheesh craving for murtabak” and “Sheesh, he is a lucky person” do suggest the meaning. I hope this article is interesting and gives you guys some ideas on how Natural Language Processing techniques like Sentiment Analysis and Topic Modeling can help us understand our series of documents (i.e. our tweets) better. Have fun playing with the Tableau dashboard and it was definitely fun doing up the dashboard, no caps sheesh!
phpMyAdmin - Databases
Start the Apache server and open localhost/phpmyadmin in a web browser to open the phpMyAdmin interface.

As we configured a MySQL database during Environment Setup, we have a root user with the password root@123. Once phpMyAdmin opens up, you need to enter these credentials to log in to the database.

Once logged in, you can see the following sections on the loaded phpMyAdmin page. The left section shows the available databases, both system databases and user-created ones.

On the right side, the dashboard shows a tabbed interface for all the database administration operations, as shown below.

Click on the Database tab to see the list of databases with more details. We can create a database, iterate over databases, and perform other operations here.

Click on any listed database to see its list of tables with more details. The tabs change with the context; they now show operations applicable to the selected database.

Now, in the schema browser, click on any table. The right-side section will load the table details, with an updated tabbed interface for various operations on that table, as shown below −

Double clicking on any cell makes it editable, where you can edit and save data. Pressing the Esc key will not save the data. Once you move out of the edited cell, it will show the update query and the status of the operation as shown below −

You can verify the update statement as well, as shown below −

UPDATE `employees` SET `AGE` = '28' WHERE `employees`.`ID` = 1;

Now click on the Structure tab; it will show the table's structural details as shown below −
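Returning to the cell edit we made earlier: besides watching the status message, you can confirm the change yourself from the SQL tab. A query along these lines works against the same employees table used above (only the ID and AGE columns are known from the update statement; any other columns are whatever your table defines):

-- verify the row we edited earlier in the employees table
SELECT * FROM `employees` WHERE `ID` = 1;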
How to Write Python Command-Line Interfaces like a Pro | by Simon Hawe | Towards Data Science
We as data scientists face many repetitive and similar tasks. That includes creating weekly reports, executing extract, transform, load (ETL) jobs, or training models using different parameter sets. Often, we end up with a bunch of Python scripts where we change parameters in code every time we run them. I hate doing this! That's why I got into the habit of transforming my scripts into reusable command-line interface (CLI) tools. This increased my efficiency and made me more productive in my daily life. I started doing this using Argparse, but it was not enjoyable as I had to produce a lot of ugly code. So I thought, can't I achieve that without having to write a lot of code over and over again? Can I even enjoy writing CLI tools?

Click is your friend!

So what is Click? From the webpage:

It (Click) aims to make the process of writing command-line tools quick and fun while also preventing any frustration caused by the inability to implement an intended CLI API.

To me, that sounds great, doesn't it?

In this article, I give you a hands-on guide on how to build Python CLIs using Click. I build up an example step by step that shows you the basic features and benefits Click offers. After this tutorial, you should be able to write your next CLI tool with joy and in a blink of an eye :) So let's get our hands dirty!

In this tutorial, we build a Python CLI using Click that evolves step by step. I start with the basics, and with each step, I introduce a new concept offered by Click. Apart from Click, I use Poetry to manage dependencies and packages.

First, let's install Poetry. There are various ways of doing that (see my article), but here we use pip

pip install poetry==0.12.7

Next, we use Poetry to create a project named cli-tutorial, add click and funcy as dependencies, and create a file cli.py that we will later fill with code

poetry new cli-tutorial
cd cli-tutorial
poetry add click funcy
# Create the file we will put in all our code
touch cli_tutorial/cli.py

I have added funcy, as I will make use of it later. To see what that module is good for, I refer the interested reader to this article. Now, we are ready to go and implement our first CLI. As a side note, all example code is available on my GitHub account.

Our initial CLI reads a CSV file from disk, processes it (how is not important for this tutorial), and stores the result in an Excel file. Both the path to the input file and the path to the output file should be configurable by the user. The user must specify the input file path. Specifying the output file path is optional and defaults to output.xlsx. Using Click, the code that does that reads as

import click

@click.command()
@click.option("--in", "-i", "in_file", required=True,
    help="Path to csv file to be processed.",
)
@click.option("--out-file", "-o", default="./output.xlsx",
    help="Path to excel file to store the result.")
def process(in_file, out_file):
    """ Processes the input file IN and stores the result to output file OUT. """
    input = read_csv(in_file)
    output = process_csv(input)
    write_excel(output, out_file)

if __name__ == "__main__":
    process()

What do we do here?

1. We decorate the method process that we want to invoke from the command line with click.command.

2. We define the command-line arguments using the click.option decorator. Now, you must be careful to use the correct argument names in your decorated function. If we add a string without dashes to click.option, the argument must match that string. This is the case for --in and in_file. If all names contain leading dashes, Click generates the argument name using the longest name, converting all non-leading dashes to underscores. The name is converted to lowercase. This is the case for --out-file and out_file. For more details, consult the Click documentation.

3. We configure desired prerequisites like defaults or required arguments using the corresponding arguments to click.option.

4. We add a help text to our arguments, which is shown when invoking our CLI with --help. The docstring of our function will also be shown there.

Now, you can invoke this CLI in various ways

# Prints help
python -m cli_tutorial.cli --help
# Use single char -i for loading the file
python -m cli_tutorial.cli -i path/to/some/file.csv
# Specify both files with long names
python -m cli_tutorial.cli --in path/to/file.csv --out-file out.xlsx

Cool, we have created our first CLI using Click! Note that I have not implemented read_csv, process_csv, and write_excel, but assume they exist and do what they are supposed to do.

One issue with CLIs is that we pass parameters as generic strings. Why is that an issue? Because these strings must be parsed into the actual types, which can fail due to badly formatted user input. Look at our example, where we use paths and try to load a CSV file. A user can provide a string that does not represent a path at all. And even if the string is formatted correctly, the corresponding file might not exist, or you might not have the right permission to access it. Wouldn't it be desirable to automatically validate the input, parse it if possible, or fail early with helpful error messages? Ideally, all that without having to write a lot of code? Click supports us here by letting us specify types for our arguments.

In our example CLI, we want the user to pass in a valid path to an existing file for which we have read permission. If these requirements are fulfilled, we can load the input file. Additionally, if the user specifies an output file path, this should be a valid path. We can enforce all that by passing a click.Path object to the type argument of the click.option decorator

@click.command()
@click.option("--in", "-i", "in_file", required=True,
    help="Path to csv file to be processed.",
    type=click.Path(exists=True, dir_okay=False, readable=True),
)
@click.option("--out-file", "-o", default="./output.xlsx",
    help="Path to excel file to store the result.",
    type=click.Path(dir_okay=False),
)
def process(in_file, out_file):
    """ Processes the input file IN and stores the result to output file OUT. """
    input = read_csv(in_file)
    output = process_csv(input)
    write_excel(output, out_file)
...
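A quick aside before we look more closely at click.Path: the helpers read_csv, process_csv, and write_excel are assumed throughout this post. If you want something runnable to play with, a minimal pandas-based sketch could look like this (the processing step is just a placeholder, not the "real" logic):

import pandas as pd

def read_csv(path):
    # load the raw input (assumed helper, not part of the original post)
    return pd.read_csv(path)

def process_csv(df):
    # placeholder processing step; the actual logic does not matter for this tutorial
    return df.dropna()

def write_excel(df, path):
    # dump the result to an Excel file
    df.to_excel(path, index=False)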
click.Path is one of the various types offered by Click out of the box. You can also implement custom types, but this is out of scope for this tutorial. For more details, I refer the interested reader to the Click documentation.

Another helpful feature offered by Click is boolean flags. Probably the most famous boolean flag is the verbose flag. If set to true, your tool prints out a lot of information to the terminal. If set to false, only a few things are printed. With Click, we can implement that as

from funcy import identity

@click.option('--verbose', is_flag=True, help="Verbose output")
def process(in_file, out_file, verbose):
    """ Processes the input file IN and stores the result to output file OUT. """
    print_func = print if verbose else identity
    print_func("We will start with the input")
    input = read_csv(in_file)
    print_func("Next we process the data")
    output = process_csv(input)
    print_func("Finally, we dump it")
    write_excel(output, out_file)

All you have to do is add another click.option decorator and set is_flag=True. Now, to get verbose output, you need to call the CLI as

python -m cli_tutorial.cli -i path/to/some/file.csv --verbose

Assume we not only want to store the result of process_csv locally, but also want to upload it to a server. Additionally, there is not just one target server but a development, a testing, and a production instance. You can access these three instances via different URLs. One option for the user to select the server is to pass the full URL as an argument, which she has to type in. But this is not only error-prone, it is also tedious. In situations like that, I use feature switches to simplify the user's life. What they do is best explained through code

...
@click.option(
    "--dev", "server_url",
    help="Upload to dev server",
    flag_value='https://dev.server.org/api/v2/upload',
)
@click.option(
    "--test", "server_url",
    help="Upload to test server",
    flag_value='https://test.server.com/api/v2/upload',
)
@click.option(
    "--prod", "server_url",
    help="Upload to prod server",
    flag_value='https://real.server.com/api/v2/upload',
    default=True
)
def process(in_file, out_file, verbose, server_url):
    """ Processes the input file IN and stores the result to output file OUT. """
    print_func = print if verbose else identity
    print_func("We will start with the input")
    input = read_csv(in_file)
    print_func("Next we process the data")
    output = process_csv(input)
    print_func("Finally, we dump it")
    write_excel(output, out_file)
    print_func("Upload it to the server")
    upload_to(server_url, output)
...

Here, I have added three click.option decorators for the three possible server URLs. The important bit is that all three options have the same target variable server_url. Depending on which option you choose, the value of server_url is equal to the corresponding value defined in flag_value. You choose one of them by adding --dev, --test, or --prod as an argument. So when you execute

python -m cli_tutorial.cli -i path/to/some/file.csv --test

server_url is equal to 'https://test.server.com/api/v2/upload'. If we don't specify any of the three flags, Click takes the value of --prod, as I set default=True.

Unfortunately, or rather luckily :), our servers are password protected. So to upload our file, we need a username and a password. For sure, you can provide those as standard click.option arguments. However, your password would then end up in your command history in plain text. This can become a security issue.
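By the way, upload_to is another helper that is only assumed here, just like read_csv and friends. A rough sketch using the requests library might look as follows; the endpoint semantics are purely hypothetical, and the optional basic-auth tuple anticipates the credentials we are about to collect:

import requests

def upload_to(server_url, df, user=None, password=None):
    # serialize the result and POST it to the assumed upload endpoint;
    # auth is optional HTTP basic auth as a (user, password) tuple
    auth = (user, password) if user else None
    response = requests.post(server_url, data=df.to_csv(index=False), auth=auth)
    response.raise_for_status()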
We would like a prompt that lets the user type in his password without echoing it to the terminal and without storing it in the command history. For the username, we also want a simple prompt, with echoing. Nothing easier than that when you know Click. Here is the code.

import os

@click.option('--user', prompt=True,
    default=lambda: os.environ.get('USER', ''))
@click.password_option()
def process(in_file, out_file, verbose, server_url, user, password):
    ...
    upload_to(server_url, output, user, password)

To add a prompt for an argument, you have to set prompt=True. This adds a prompt whenever the user does not specify the --user argument, but she can still specify it explicitly. The default value is used when you just hit Enter at the prompt. The default is determined using a function, which is another handy feature offered by Click.

Prompting for a password without echoing it to the terminal and asking for a confirmation is so common that Click offers a dedicated decorator called password_option. An important note: this does not prevent the user from passing the password via --password MYSECRETPASSWORD. It just enables her not to do that.

And that's it. We've built the full CLI. Before we call it a day, I would like to give you a final hint in the next section.

A final tip that I want to give you, which is not related to Click but perfectly matches the CLI topic, is creating Poetry scripts. With Poetry scripts, you can create executables to invoke your Python functions from the command line, as you can with Setuptools scripts. So how does that look? First, you need to add the following to your pyproject.toml file

[tool.poetry.scripts]
your-wanted-name = 'cli_tutorial.cli:process'

The your-wanted-name is an alias for the function process defined in the module cli_tutorial.cli. Now, you can invoke it through

poetry run your-wanted-name -i ./dummy.csv --verbose --dev

This allows you, for example, to add multiple CLI functions to the same file and to define aliases, and you don't have to add an if __name__ == "__main__" block.

In this article, I showed you how to use Click and Poetry to build CLI tools with joy and become more productive. This was just a small subset of the features offered by Click. There are many others, like callbacks, nested commands, or choice options, to mention just a few. For now, I refer the interested reader to the Click documentation, but I might write a follow-up post covering these advanced topics. Stay tuned and thank you for following along this post. Feel free to contact me with questions, comments, or suggestions.
Genome Assembly — The Holy Grail of Genome Analysis | by Vijini Mallawaarachchi | Towards Data Science
The 2019 novel coronavirus or coronavirus disease (COVID-19) outbreak threatens the entire world at present. Scientists are working day and night to understand the origin of COVID-19. You may have heard the news recently that the complete genome of COVID-19 has been published. How did scientists figure out the complete genome of COVID-19? In this article, I will explain how we can do this.

A genome is considered as all the genetic material, including all the genes, of an organism. The genome contains all the information of an organism that is required to build and maintain it.

How can we read the information present in the genome? This is where sequencing comes into action. Assuming you have read my previous article on DNA analysis, you know that sequencing is used to determine the sequence of individual genes, full chromosomes, or entire genomes of an organism. Special machines, known as sequencing machines, are used to extract short random sequences from the genome we are interested in. Current sequencing technologies cannot read the whole genome at once. They read small pieces with a mean length between 50-300 bases (next-generation sequencing/short reads) or 10,000-20,000 bases (third-generation sequencing/long reads), depending on the technology used. These short pieces are called reads.

If you want to know more details about how viral genomes are sequenced from clinical samples, you can read the following articles.

1. A complete protocol for whole-genome sequencing of virus from clinical samples: Application to coronavirus OC43
2. Specific Capture and Whole-Genome Sequencing of Viruses from Clinical Samples

Once we have small pieces of the genome, we have to combine (assemble) them together based on their overlap information and build the complete genome. This process is called assembly. Assembly is like solving a jigsaw puzzle. Special software tools called assemblers are used to assemble these reads according to how they overlap, in order to generate continuous strings called contigs. These contigs can be the whole genome itself or parts of the genome (as shown in Figure 2).

Assemblers are divided into two categories:

1. De novo assemblers: assemble without the use of reference genomes (e.g. SPAdes, SGA, MEGAHIT, Velvet, Canu, and Flye).
2. Reference-guided assemblers: assemble by mapping sequences to reference genomes.

Two main types of assemblers can be found across the bioinformatics literature. The first type is the overlap-layout-consensus (OLC) method. In the OLC method, first we determine all the overlaps between the reads. Then we lay out all the reads and overlaps in the form of a graph. Finally, we identify the consensus sequence. SGA is a popular tool based on the OLC method.

The second type of assembler is the de Bruijn graph (DBG) method [2]. Rather than using the complete reads as they are, the DBG method breaks reads into shorter fragments called k-mers (of length k) and then builds a de Bruijn graph using all the k-mers. Finally, the genome sequences are inferred based on the de Bruijn graph. SPAdes is a popular assembler based on the DBG method.
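To make the k-mer idea concrete, here is a small, purely illustrative Python sketch (not taken from any real assembler) that extracts k-mers from reads and records de Bruijn graph edges between overlapping (k-1)-mers:

from collections import defaultdict

def debruijn_edges(reads, k):
    # map each (k-1)-mer prefix to the (k-1)-mer suffixes it connects to
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]               # e.g. "ATGG" for k = 4
            graph[kmer[:-1]].append(kmer[1:])  # edge: prefix -> suffix
    return graph

# toy example: two overlapping reads from the same region
edges = debruijn_edges(["ATGGCGT", "GGCGTGC"], k=4)
for prefix, suffixes in edges.items():
    print(prefix, "->", suffixes)

A real assembler then infers the genome by finding paths through this graph, but that logic is far beyond this small sketch.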
Genomes contain patterns of nucleic acids that occur many times across the genome. These structures are called repeats. Repeats can complicate the assembly process and result in ambiguities.

We cannot guarantee that the sequencing machine produces reads covering the entire genome. The sequencing machine may miss some parts of the genome, leaving no reads covering those regions. This affects the assembly process, and the missed regions will not be present in the final assembly.

Genome assemblers should address these challenges and try to minimise the errors caused during assembly.

Evaluation of assemblies is very important, as we have to decide whether the resulting assembly meets the standards. One of the best-known and most commonly used assembly evaluation tools is QUAST. Listed below are some criteria used to evaluate assemblies.

N50: minimum contig length that is required to cover 50% of the total length of the assembly
L50: number of contigs that are longer than N50
NG50: minimum contig length that is required to cover 50% of the length of the reference genome
LG50: number of contigs that are longer than NG50
NA50: minimum length of aligned blocks that are required to cover 50% of the total length of the assembly
LA50: number of contigs that are longer than NA50
Genome fraction (%): percentage of bases that align to the reference genome

Let's get started with the experiments. I will be using the assembler SPAdes to assemble reads obtained from sequencing patient samples. SPAdes makes use of next-generation sequencing reads. You can download QUAST freely as well. You can get the code and binaries from the relevant homepages (which I have provided as links) and run these tools. Type in the following commands and verify whether the tools are working correctly.

<your_path_to>/SPAdes-3.13.1/bin/spades.py -h
<your_path_to>/quast-5.0.2/quast.py -h

I assume you know how to download data from the National Center for Biotechnology Information (NCBI). If not, you can refer to this link. The reads for our experiments can be downloaded from NCBI with accession number SRX7636886. You can download the run SRR10971381, which contains reads obtained from an Illumina MiniSeq run. Make sure to download the data in FASTQ format. The downloaded file can be found as sra_data.fastq.gz. You can extract the FASTQ file using gunzip. After extracting, you can run the following bash command to count the number of reads in our dataset. You will see that there are 56,565,928 reads.

grep '^@' sra_data.fastq | wc -l

You can download the publicly available COVID-19 complete genome [3] from NCBI with GenBank accession number MN908947. You will see a file in FASTA format. This will be our reference genome. Note that we have renamed it to MN908947.fasta.

Let's assemble the reads of COVID-19. Run the following command to assemble the reads using SPAdes. You can provide the compressed .gz file to SPAdes directly.

<your_path_to>/SPAdes-3.13.1/bin/spades.py --12 sra_data.fastq.gz -o Output -t 8

Here we have used the general SPAdes assembler as a demonstration for this article. However, since the reads dataset consists of RNA-Seq data (read more about RNA in my previous article), it is better to use the --rna option in SPAdes.

In the Output folder, you can see a file named contigs.fasta, which contains our final assembled contigs. Run QUAST on the assemblies using the following command.
Run QUAST on the assemblies using the following command.

<your_path_to>/quast-5.0.2/quast.py Output/contigs.fasta -l SPAdes_assembly -r MN908947.fasta -o quastResult

Once QUAST has finished, you can go into the quastResult folder and view the evaluation results. You can view the QUAST report by opening the file report.html in your web browser. You can see a report similar to the one shown in Figure 3. You can click on "Extended report" for more information such as NG50 and LG50.

You can study the values of the different evaluation criteria such as genome fraction, NG50, NA50, misassemblies and the number of contigs. Moreover, you can view the contig alignment to the reference genome using the Icarus contig browser (click on "View in Icarus contig browser") as shown in Figure 4.

From the Icarus contig browser, we can see that the contig named NODE_1 maps very closely to the reference genome of COVID-19. It has a genome fraction of 99.99% (as shown in Figure 3). Moreover, the total aligned length of 29,900 base pairs is very close to the length of the reference genome, which is 29,903 base pairs.

There is a tool named Bandage which you can use to visualise the assembly graph. You can download the precompiled binaries from its homepage and run the tool. You can load the graph file assembly_graph_with_scaffolds.gfa, which can be found in the SPAdes output folder (go to File → Load graph → select the .gfa file in Output and open it), into Bandage and click on "Draw graph" to visualise it as shown in Figure 5. Note that the long green coloured curved segment in the middle of the first row of segments in Figure 5 corresponds to NODE_1 of our SPAdes assembly.

Since the reference genome of COVID-19 is available now, we can evaluate our assembly. However, at first, there was no exact reference genome for COVID-19. So what did scientists do to figure it out? As explained in my previous article, analysing viral genomes comes under metagenomics and there are many techniques to do this. They analysed the coverage of the contigs (the average number of reads covering each base position in a contig) and compared them with the bat SARS-like coronavirus (CoV) isolate, bat SL-CoVZC45 (GenBank accession number MG772933) [3]. Results have shown that their longest assembled contig had high coverage (from our assembly you can see that NODE_1 has a high coverage value as well) and was very closely related to bat SL-CoVZC45. They carried out more tests to confirm this, which I won't go into in detail.
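As an aside, these coverage values are easy to inspect yourself: SPAdes encodes the length and average k-mer coverage of every contig in its FASTA headers, with names that typically look like NODE_1_length_29870_cov_150.2. Assuming that naming convention holds for your SPAdes version (it may differ in others), a small sketch that ranks contigs by coverage could look like this:

def contig_stats(fasta_path):
    # Parse SPAdes-style headers: >NODE_<id>_length_<length>_cov_<coverage>
    stats = []
    with open(fasta_path) as f:
        for line in f:
            if line.startswith(">"):
                parts = line[1:].strip().split("_")
                stats.append((parts[1], int(parts[3]), float(parts[5])))
    # Highest-coverage contigs first.
    return sorted(stats, key=lambda s: s[2], reverse=True)

for node_id, length, cov in contig_stats("Output/contigs.fasta")[:5]:
    print(f"NODE_{node_id}: length={length}, coverage={cov}")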
Genome assembly has paved the way for us to study what is actually inside the genomes of organisms. Even during the outbreak of COVID-19, genome assembly has played a major role in identifying the actual genetic code of this deadly virus.

If you check the size of the COVID-19 genome, it is 29,903 base pairs (~30k base pairs). With the advancements of third-generation sequencing techniques, we may be able to directly sequence the entire length of small genomes such as viral genomes. As read lengths get longer, the need to assemble reads will reduce and eventually there will be a time when we can directly obtain genomes from sequencing machines (especially in metagenomics, where genomes range from a few kilobases to a few megabases)! Furthermore, revolutionary nanotechnology-based approaches such as quantum sequencing technologies [4], including graphene nanodevices [5], may become popular.

Thank you for reading, everyone. If you found my article interesting, do share it within your networks. I would love to hear your thoughts as well.

Cheers!

[1] Mining coronavirus genomes for clues to the outbreak's origins | Science | AAAS (https://www.sciencemag.org/news/2020/01/mining-coronavirus-genomes-clues-outbreak-s-origins)

[2] Zhenyu Li et al. Comparison of the two major classes of assembly algorithms: overlap–layout–consensus and de-bruijn-graph. Briefings in Functional Genomics, Volume 11, Issue 1, January 2012, Pages 25–37. https://doi.org/10.1093/bfgp/elr035

[3] F. Wu, S. Zhao, B. Yu et al. A new coronavirus associated with human respiratory disease in China. Nature (2020). https://doi.org/10.1038/s41586-020-2008-3

[4] Jang-il Sohn and Jin-Wu Nam. The present and future of de novo whole-genome assembly. Briefings in Bioinformatics, Volume 19, Issue 1, January 2018, Pages 23–40. https://doi.org/10.1093/bib/bbw096

[5] S. Heerema and C. Dekker. Graphene nanodevices for DNA sequencing. Nature Nanotechnology 11, 127–136 (2016). https://doi.org/10.1038/nnano.2015.307
What are the differences between import and static import statements in Java?
We can use an import statement to import the classes and interfaces of a particular package. Whenever we use an import statement, it is not required to use the fully qualified name and we can use the short name directly. We can use a static import to import static members from a particular class or package. Whenever we use a static import, it is not required to use the class name to access the static member and we can use it directly.

To access a class or method from another package we need to use the fully qualified name, or we can use import statements. The class or method should also be accessible. Accessibility is based on the access modifiers.

Private members are accessible only within the same class. So we won't be able to access a private member even with the fully qualified name or an import statement.

The java.lang package is automatically imported into our code by Java.

import java.util.Vector;

public class ImportDemo {
   public ImportDemo() {
      // Imported using an import statement, hence accessible directly in the code without package qualification.
      Vector v = new Vector();
      v.add("Tutorials");
      v.add("Point");
      v.add("India");
      System.out.println("Vector values are: " + v);

      // Package not imported, hence referring to it using the fully qualified name.
      java.util.ArrayList list = new java.util.ArrayList();
      list.add("Tutorix");
      list.add("India");
      System.out.println("Array List values are: " + list);
   }
   public static void main(String arg[]) {
      new ImportDemo();
   }
}

Vector values are: [Tutorials, Point, India]
Array List values are: [Tutorix, India]

Static imports will import all static data so that it can be used without the class name.

A static import declaration has two forms: one that imports a particular static member, which is known as a single static import, and one that imports all static members of a class, which is known as a static import on demand. Static imports were introduced in Java 5.

One of the advantages of using static imports is reduced keystrokes and improved re-usability.

import static java.lang.System.*; // Using a static import on demand

public class StaticImportDemo {
   public static void main(String args[]) {
      // "System." is not needed before out, as out is imported by the static import above.
      out.println("Welcome to Tutorials Point");
   }
}

Welcome to Tutorials Point
Aurelia - Events
In this chapter, you will learn about Aurelia events.

Event delegation is a useful concept where the event handler is attached to one top-level element instead of multiple elements on the DOM. This will improve the application's memory efficiency and should be used whenever possible.

This is a simple example of using event delegation with the Aurelia framework. Our view will have a button with a click.delegate event attached.

<template>
   <button click.delegate = "myFunction()">CLICK ME</button>
</template>

Once the button is clicked, myFunction() will be called.

export class App {
   myFunction() {
      console.log('The function is triggered...');
   }
}

We will get the following output.

There are some cases when you can't use delegation. Some JavaScript events don't support delegation, and iOS supports it only for some elements. To find out which events allow delegation, you can check the bubbles property of the event in question. In these cases, you can use the trigger binding instead.

The same functionality from the above example can be created with click.trigger.

<template>
   <button click.trigger = "myFunction()">CLICK ME</button>
</template>

export class App {
   myFunction() {
      console.log('The function is triggered...');
   }
}
Python program to find the sum of all items in a dictionary
In this article, we will learn about the solution to the problem statement given below.

Problem statement − We are given a dictionary, and we need to find the sum of all the items (values) in the dictionary.

Three approaches to the problem statement are given below. Note that in the sample dictionary the key 'T' appears twice, so the later value 3 overwrites the earlier 1, leaving four entries whose values sum to 14.

# sum function: iterate over the dictionary (its keys) directly
def Sum(myDict):
   sum_ = 0
   for i in myDict:
      sum_ = sum_ + myDict[i]
   return sum_

# Driver function
dict = {'T': 1, 'U': 2, 'T': 3, 'O': 4, 'R': 5}
print("Sum of dictionary values :", Sum(dict))

Sum of dictionary values : 14

# sum function: iterate over the values
def Sum(dict):
   sum_ = 0
   for i in dict.values():
      sum_ = sum_ + i
   return sum_

# Driver function
dict = {'T': 1, 'U': 2, 'T': 3, 'O': 4, 'R': 5}
print("Sum of dictionary values :", Sum(dict))

Sum of dictionary values : 14

# sum function: iterate over the keys explicitly
def Sum(dict):
   sum_ = 0
   for i in dict.keys():
      sum_ = sum_ + dict[i]
   return sum_

# Driver function
dict = {'T': 1, 'U': 2, 'T': 3, 'O': 4, 'R': 5}
print("Sum of dictionary values :", Sum(dict))

Sum of dictionary values : 14

In this article, we have learned how we can find the sum of all the items in a dictionary.
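As a final note, the same result can also be obtained more idiomatically with the built-in sum() function; here is a one-line sketch (the variable name marks is arbitrary and not part of the original examples):

# The built-in sum() over dict.values() does the same job in one line.
marks = {'T': 1, 'U': 2, 'T': 3, 'O': 4, 'R': 5}
print("Sum of dictionary values :", sum(marks.values()))   # prints 14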
ViP-IPykernel: Start Jupyter Kernels in the Closest Venv | Towards Data Science
Reproducibility has been a hot topic for years in STEM subjects; more and more people are realising how important it is for your work to be reproducible, both from a purely scientific point of view and as a way to facilitate and improve collaboration between colleagues.

When using Python, an essential element of having reproducible work is using well-defined, isolated environments so that you can be sure of what packages are available and required for your code to execute correctly.

A common way to do this is to use Python venvs; however, using Jupyter Notebooks with virtual environments can be a bit awkward for a few reasons. This post will go through these problems and explain how ViP-IPykernel tries to help avoid them. If you're already thinking that notebooks automatically using the virtual environment they're in would be helpful, then feel free to skip the explanation and check out the project here: RobertRosca/vip-ipykernel

Many people will install Jupyter under their user home directory (i.e. with the pip install --user flag), or use a system-provided Jupyter installation. If you do this, and then use virtual environments to isolate your projects, you have a few options:

1. Install Jupyter in every venv, and then always activate and run Jupyter out of the environment you want to work with. This way any notebooks you open in Jupyter, when using the default python3 kernel, will use the Python environment set up for the project you started Jupyter in.

2. Install just IPykernel in every venv, and then run python3 -m ipykernel install --user --name PROJECT-NAME. Now when you start your user- or site-level installation of Jupyter, you can open your notebooks and select the relevant kernel for your project.

The first approach has the downside of being a bit time consuming, since Jupyter has a lot of dependencies, and it also adds the extra step of always having to activate the correct environment before starting Jupyter. Mild inconveniences like these add up over time and create a barrier which may discourage people from using best practices, even if they know they should create an environment for each separate project. Plus, if you're using a remote or centralised Jupyter-Hub server, this isn't really an option.

The second approach has the similar downside of the mild inconvenience of having to install IPykernel in every environment and then having to create a new kernel for each project; if you have a lot of projects you can quickly end up with dozens of kernels, and if you don't have a good naming system you may accidentally end up overwriting them or forgetting what does what.
Additionally, setting a custom kernel for each notebook can impact collaboration negatively in a few ways:

If you share a notebook or commit it to a common git repo, then collaborators will have to set the kernel back to the default python3 one if they don't use virtual environments, or use them and start Jupyter from within the environment (as in option 1).

Alternatively, they'll have to change it to their own kernel if they do use virtual environments to provide kernels (as in option 2), and once they save and commit their changes then everybody else will end up modifying the kernel again each time.

Committing notebooks saved with non-standard kernels may also affect some automated pipelines for testing or documentation which execute the notebooks.

These seem like minor problems, but if you work with notebooks daily, or if you're trying out Jupyter notebooks for the first time, these small annoyances can be enough to make you think "Eh, I can't be bothered to make another kernel, using a venv isn't that important anyway".

Which brings me to ViP-IPykernel: an IPython kernel which automatically runs python out of a Virtualenv in a Parent directory (hence ViP).

The project is pretty simple: all it does is look for a virtual environment (currently it only checks for .venv or venv directories, but this will be expanded later) and, if one is found in the directory the notebook is in or in one of the parent directories, it runs IPykernel out of that environment automatically. If no venv is found then it falls back to the Python environment vip-ipykernel was installed with. (A minimal sketch of this lookup follows at the end of this post.)

If you want the kernel to fall back to the standard Python kernel behaviour when no environment is found, then this kernel should be installed under your user space, not inside a virtual environment; alternatively, it can be installed inside a venv if you'd like the fallback to be a different environment to your user one.

After installation, you can then either override the default python3 kernel with vip-ipykernel or install it as a separate kernel:

pip install --user vip-ipykernel

# To override the default kernel:
python3 -m vip_ipykernel.kernelspec --user

# To install as a new kernel:
python3 -m vip_ipykernel.kernelspec --user --name vip-ipykernel

Overriding the default kernel is a useful approach, as that way you never have to change the kernel a notebook is using: leave it as the default python3 kernel and, as long as the notebook is in a directory or subdirectory where a .venv exists, it will always use that environment, avoiding both the issue of creating a kernel and that of installing Jupyter for every environment.

This way you get the best of both worlds: you can use virtual environments to manage your dependencies, and you don't have to juggle multiple kernels or run Jupyter out of a specific environment each time.

Small quality-of-life improvements like this are pretty trivial, but when trying to convince others (or yourself) to change workflows and muscle memory built up and reinforced by years of experience, any small perceived downside can be seen as 'enough' of a reason to stick with what you've always done.

If this sounds useful to you, the project is available on PyPI and can be installed via pip, or you can view the code here: RobertRosca/vip-ipykernel
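As promised above, here is a minimal sketch of the kind of parent-directory lookup the project performs. This is an illustrative re-implementation, not the package's actual source; stopping at the filesystem root and testing for bin/python inside the candidate directories (a Unix-style layout) are assumptions of the sketch.

from pathlib import Path

def find_closest_venv(start_dir):
    # Walk from start_dir up to the filesystem root, returning the first
    # .venv or venv directory that contains a Python interpreter.
    here = Path(start_dir).resolve()
    for directory in (here, *here.parents):
        for name in (".venv", "venv"):
            candidate = directory / name
            if (candidate / "bin" / "python").exists():
                return candidate
    return None  # fall back to the environment the kernel was installed with

print(find_closest_venv("."))

If this returns a path, the kernel can be launched with <venv>/bin/python -m ipykernel instead of the default interpreter, which is what makes the notebook "automatically" pick up the closest environment.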
How to create responsive column cards with CSS?
To create responsive column cards with CSS, the code is as follows −

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
   body {
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
   }
   * {
      box-sizing: border-box;
   }
   .card {
      color: white;
      float: left;
      width: calc(25% - 20px);
      padding: 10px;
      border-radius: 10px;
      margin: 10px;
      height: 200px;
   }
   .card p {
      font-size: 18px;
   }
   .cardContainer:after {
      content: "";
      display: table;
      clear: both;
   }
   @media screen and (max-width: 600px) {
      .card {
         width: 100%;
      }
   }
</style>
</head>
<body>
<h1>Responsive column card Example</h1>
<h2>Resize the screen to see the below cards resize themselves</h2>
<div class="cardContainer">
<div class="card" style="background-color:rgb(153, 29, 224);">
<h2>First card</h2>
<p>Some text</p>
</div>
<div class="card" style="background-color:rgb(12, 126, 120);">
<h2>Second card</h2>
<p>Some text</p>
</div>
<div class="card" style="background-color:rgb(207, 41, 91);">
<h2>Third card</h2>
<p>Some text</p>
</div>
<div class="card" style="background-color:rgb(204, 91, 39);">
<h2>Fourth card</h2>
<p>Some text</p>
</div>
</div>
</body>
</html>

The above code will produce the following output −

On resizing the screen to 600px or less −
chkconfig - Unix, Linux Command
chkconfig - Updates and queries runlevel information for system services.

chkconfig --list [name]
chkconfig --add name
chkconfig --del name
chkconfig [--level levels] name <on|off|reset>
chkconfig [--level levels] name

chkconfig provides a simple command-line tool for maintaining the /etc/rc[0-6].d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories. This implementation of chkconfig was inspired by the chkconfig command present in the IRIX operating system. Rather than maintaining configuration information outside of the /etc/rc[0-6].d hierarchy, however, this version directly manages the symlinks in /etc/rc[0-6].d. This leaves all of the configuration information regarding what services init starts in a single location.

chkconfig has five distinct functions: adding new services for management, removing services from management, listing the current startup information for services, changing the startup information for services, and checking the startup state of a particular service. When chkconfig is run without any options, it displays usage information. If only a service name is given, it checks to see if the service is configured to be started in the current runlevel. If it is, chkconfig returns true; otherwise it returns false. The --level option can be used to have chkconfig query an alternative runlevel rather than the current one.

If one of on, off, or reset is specified after the service name, chkconfig changes the startup information for the specified service. The on and off flags cause the service to be started or stopped, respectively, in the runlevels being changed. The reset flag resets the startup information for the service to whatever is specified in the init script in question. By default, the on and off options affect only runlevels 2, 3, 4, and 5, while reset affects all of the runlevels. The --level option can be used to specify which runlevels are affected.

Note that for every service, each runlevel has either a start script or a stop script. When switching runlevels, init will not re-start an already-started service, and will not re-stop a service that is not running. chkconfig can also manage xinetd scripts via the means of xinetd.d configuration files. Note that only the on, off, and --list commands are supported for xinetd.d services.

To list all startup services in alphabetic order:

$ chkconfig --list | sort | less
auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
...

To list the auditd service:

$ chkconfig --list auditd
auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

To turn auditd off in runlevels 3, 4, and 5:

$ chkconfig --level 345 auditd off
auditd 0:off 1:off 2:on 3:off 4:off 5:off 6:off
Gson - Class
Gson is the main actor class of Google Gson library. It provides functionalities to convert Java objects to matching JSON constructs and vice versa. Gson is first constructed using GsonBuilder and then, toJson(Object) or fromJson(String, Class) methods are used to read/write JSON constructs. Following is the declaration for com.google.gson.Gson class − public final class Gson extends Object Gson() Constructs a Gson object with default configuration. <T> T fromJson(JsonElement json, Class<T> classOfT) This method deserializes the Json read from the specified parse tree into an object of the specified type. <T> T fromJson(JsonElement json, Type typeOfT) This method deserializes the Json read from the specified parse tree into an object of the specified type. <T> T fromJson(JsonReader reader, Type typeOfT) Reads the next JSON value from reader and convert it to an object of type typeOfT. <T> T fromJson(Reader json, Class<T> classOfT) This method deserializes the Json read from the specified reader into an object of the specified class. <T> T fromJson(Reader json, Type typeOfT) This method deserializes the Json read from the specified reader into an object of the specified type. <T> T fromJson(String json, Class<T> classOfT) This method deserializes the specified Json into an object of the specified class. <T> T fromJson(String json, Type typeOfT) This method deserializes the specified Json into an object of the specified type. <T> TypeAdapter<T> getAdapter(Class<T> type) Returns the type adapter for type. <T> TypeAdapter<T> getAdapter(TypeToken<T> type) Returns the type adapter for type. <T> TypeAdapter<T> getDelegateAdapter(TypeAdapterFactory skipPast, TypeToken<T> type) This method is used to get an alternate type adapter for the specified type. String toJson(JsonElement jsonElement) Converts a tree of JsonElements into its equivalent JSON representation. void toJson(JsonElement jsonElement, Appendable writer) Writes out the equivalent JSON for a tree of JsonElements. void toJson(JsonElement jsonElement, JsonWriter writer) Writes the JSON for jsonElement to writer. String toJson(Object src) This method serializes the specified object into its equivalent Json representation. void toJson(Object src, Appendable writer) This method serializes the specified object into its equivalent Json representation. String toJson(Object src, Type typeOfSrc) This method serializes the specified object, including those of generic types, into its equivalent Json representation. void toJson(Object src, Type typeOfSrc, Appendable writer) This method serializes the specified object, including those of generic types, into its equivalent Json representation. void toJson(Object src, Type typeOfSrc, JsonWriter writer) Writes the JSON representation of src of type typeOfSrc to writer. JsonElement toJsonTree(Object src) This method serializes the specified object into its equivalent representation as a tree of JsonElements. JsonElement toJsonTree(Object src, Type typeOfSrc) This method serializes the specified object, including those of generic types, into its equivalent representation as a tree of JsonElements. 
String toString() This class inherits methods from the following class − java.lang.Object Create the following Java program using any editor of your choice, and save it at, say, C:/> GSON_WORKSPACE File − GsonTester.java import com.google.gson.Gson; import com.google.gson.GsonBuilder; public class GsonTester { public static void main(String[] args) { String jsonString = "{\"name\":\"Mahesh\", \"age\":21}"; GsonBuilder builder = new GsonBuilder(); builder.setPrettyPrinting(); Gson gson = builder.create(); Student student = gson.fromJson(jsonString, Student.class); System.out.println(student); jsonString = gson.toJson(student); System.out.println(jsonString); } } class Student { private String name; private int age; public Student(){} public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String toString() { return "Student [ name: "+name+", age: "+ age+ " ]"; } } Compile the classes using javac compiler as follows − C:\GSON_WORKSPACE>javac GsonTester.java Now run the GsonTester to see the result − C:\GSON_WORKSPACE>java GsonTester Verify the output Student [ name: Mahesh, age: 21 ] { "name" : "Mahesh", "age" : 21 }
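The table above also lists the tree-model methods, toJsonTree and the fromJson overloads that take a JsonElement. As a minimal sketch of how they fit together (reusing the Student class from the example above; the class name TreeTester is ours, not part of the library), one possible usage is:

import com.google.gson.Gson;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;

public class TreeTester {
   public static void main(String[] args) {
      Gson gson = new Gson();

      Student student = new Student();
      student.setName("Mahesh");
      student.setAge(21);

      // Serialize into a parse tree instead of a String
      JsonElement tree = gson.toJsonTree(student);

      // Individual members can be read back from the tree
      JsonObject obj = tree.getAsJsonObject();
      System.out.println(obj.get("name").getAsString());

      // Deserialize straight from the tree
      Student copy = gson.fromJson(tree, Student.class);
      System.out.println(copy);
   }
}

This should print Mahesh followed by Student [ name: Mahesh, age: 21 ].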
[ { "code": null, "e": 2225, "s": 1932, "text": "Gson is the main actor class of Google Gson library. It provides functionalities to convert Java objects to matching JSON constructs and vice versa. Gson is first constructed using GsonBuilder and then, toJson(Object) or fromJson(String, Class) methods are used to read/write JSON constructs." }, { "code": null, "e": 2287, "s": 2225, "text": "Following is the declaration for com.google.gson.Gson class −" }, { "code": null, "e": 2332, "s": 2287, "text": "public final class Gson \n extends Object \n" }, { "code": null, "e": 2339, "s": 2332, "text": "Gson()" }, { "code": null, "e": 2392, "s": 2339, "text": "Constructs a Gson object with default configuration." }, { "code": null, "e": 2444, "s": 2392, "text": "<T> T fromJson(JsonElement json, Class<T> classOfT)" }, { "code": null, "e": 2551, "s": 2444, "text": "This method deserializes the Json read from the specified parse tree into an object of the specified type." }, { "code": null, "e": 2598, "s": 2551, "text": "<T> T fromJson(JsonElement json, Type typeOfT)" }, { "code": null, "e": 2705, "s": 2598, "text": "This method deserializes the Json read from the specified parse tree into an object of the specified type." }, { "code": null, "e": 2753, "s": 2705, "text": "<T> T fromJson(JsonReader reader, Type typeOfT)" }, { "code": null, "e": 2836, "s": 2753, "text": "Reads the next JSON value from reader and convert it to an object of type typeOfT." }, { "code": null, "e": 2883, "s": 2836, "text": "<T> T fromJson(Reader json, Class<T> classOfT)" }, { "code": null, "e": 2987, "s": 2883, "text": "This method deserializes the Json read from the specified reader into an object of the specified class." }, { "code": null, "e": 3029, "s": 2987, "text": "<T> T fromJson(Reader json, Type typeOfT)" }, { "code": null, "e": 3132, "s": 3029, "text": "This method deserializes the Json read from the specified reader into an object of the specified type." }, { "code": null, "e": 3179, "s": 3132, "text": "<T> T fromJson(String json, Class<T> classOfT)" }, { "code": null, "e": 3262, "s": 3179, "text": "This method deserializes the specified Json into an object of the specified class." }, { "code": null, "e": 3304, "s": 3262, "text": "<T> T fromJson(String json, Type typeOfT)" }, { "code": null, "e": 3386, "s": 3304, "text": "This method deserializes the specified Json into an object of the specified type." }, { "code": null, "e": 3431, "s": 3386, "text": "<T> TypeAdapter<T> getAdapter(Class<T> type)" }, { "code": null, "e": 3466, "s": 3431, "text": "Returns the type adapter for type." }, { "code": null, "e": 3515, "s": 3466, "text": "<T> TypeAdapter<T> getAdapter(TypeToken<T> type)" }, { "code": null, "e": 3550, "s": 3515, "text": "Returns the type adapter for type." }, { "code": null, "e": 3636, "s": 3550, "text": "<T> TypeAdapter<T> getDelegateAdapter(TypeAdapterFactory skipPast, TypeToken<T> type)" }, { "code": null, "e": 3713, "s": 3636, "text": "This method is used to get an alternate type adapter for the specified type." }, { "code": null, "e": 3752, "s": 3713, "text": "String toJson(JsonElement jsonElement)" }, { "code": null, "e": 3825, "s": 3752, "text": "Converts a tree of JsonElements into its equivalent JSON representation." }, { "code": null, "e": 3881, "s": 3825, "text": "void toJson(JsonElement jsonElement, Appendable writer)" }, { "code": null, "e": 3940, "s": 3881, "text": "Writes out the equivalent JSON for a tree of JsonElements." 
}, { "code": null, "e": 3996, "s": 3940, "text": "void toJson(JsonElement jsonElement, JsonWriter writer)" }, { "code": null, "e": 4039, "s": 3996, "text": "Writes the JSON for jsonElement to writer." }, { "code": null, "e": 4065, "s": 4039, "text": "String toJson(Object src)" }, { "code": null, "e": 4150, "s": 4065, "text": "This method serializes the specified object into its equivalent Json representation." }, { "code": null, "e": 4193, "s": 4150, "text": "void toJson(Object src, Appendable writer)" }, { "code": null, "e": 4278, "s": 4193, "text": "This method serializes the specified object into its equivalent Json representation." }, { "code": null, "e": 4320, "s": 4278, "text": "String toJson(Object src, Type typeOfSrc)" }, { "code": null, "e": 4440, "s": 4320, "text": "This method serializes the specified object, including those of generic types, into its equivalent Json representation." }, { "code": null, "e": 4499, "s": 4440, "text": "void toJson(Object src, Type typeOfSrc, Appendable writer)" }, { "code": null, "e": 4619, "s": 4499, "text": "This method serializes the specified object, including those of generic types, into its equivalent Json representation." }, { "code": null, "e": 4678, "s": 4619, "text": "void toJson(Object src, Type typeOfSrc, JsonWriter writer)" }, { "code": null, "e": 4745, "s": 4678, "text": "Writes the JSON representation of src of type typeOfSrc to writer." }, { "code": null, "e": 4780, "s": 4745, "text": "JsonElement toJsonTree(Object src)" }, { "code": null, "e": 4886, "s": 4780, "text": "This method serializes the specified object into its equivalent representation as a tree of JsonElements." }, { "code": null, "e": 4937, "s": 4886, "text": "JsonElement toJsonTree(Object src, Type typeOfSrc)" }, { "code": null, "e": 5078, "s": 4937, "text": "This method serializes the specified object, including those of generic types, into its equivalent representation as a tree of JsonElements." 
}, { "code": null, "e": 5096, "s": 5078, "text": "String toString()" }, { "code": null, "e": 5151, "s": 5096, "text": "This class inherits methods from the following class −" }, { "code": null, "e": 5168, "s": 5151, "text": "java.lang.Object" }, { "code": null, "e": 5276, "s": 5168, "text": "Create the following Java program using any editor of your choice, and save it at, say, C:/> GSON_WORKSPACE" }, { "code": null, "e": 5299, "s": 5276, "text": "File − GsonTester.java" }, { "code": null, "e": 6274, "s": 5299, "text": "import com.google.gson.Gson; \nimport com.google.gson.GsonBuilder; \n\npublic class GsonTester { \n public static void main(String[] args) { \n String jsonString = \"{\\\"name\\\":\\\"Mahesh\\\", \\\"age\\\":21}\"; \n \n GsonBuilder builder = new GsonBuilder(); \n builder.setPrettyPrinting(); \n \n Gson gson = builder.create(); \n Student student = gson.fromJson(jsonString, Student.class); \n System.out.println(student); \n \n jsonString = gson.toJson(student); \n System.out.println(jsonString); \n } \n} \n\nclass Student { \n private String name; \n private int age; \n public Student(){} \n \n public String getName() { \n return name; \n } \n public void setName(String name) { \n this.name = name; \n } \n public int getAge() { \n return age;\n } \n public void setAge(int age) { \n this.age = age; \n } \n public String toString() { \n return \"Student [ name: \"+name+\", age: \"+ age+ \" ]\"; \n } \n}" }, { "code": null, "e": 6328, "s": 6274, "text": "Compile the classes using javac compiler as follows −" }, { "code": null, "e": 6370, "s": 6328, "text": "C:\\GSON_WORKSPACE>javac GsonTester.java \n" }, { "code": null, "e": 6413, "s": 6370, "text": "Now run the GsonTester to see the result −" }, { "code": null, "e": 6448, "s": 6413, "text": "C:\\GSON_WORKSPACE>java GsonTester\n" }, { "code": null, "e": 6466, "s": 6448, "text": "Verify the output" }, { "code": null, "e": 6545, "s": 6466, "text": "Student [ name: Mahesh, age: 21 ] \n{ \n \"name\" : \"Mahesh\", \n \"age\" : 21 \n}\n" }, { "code": null, "e": 6552, "s": 6545, "text": " Print" }, { "code": null, "e": 6563, "s": 6552, "text": " Add Notes" } ]
Java Program to Implement the Linear Congruential Generator for Pseudo Random Number Generation - GeeksforGeeks
17 Jul, 2021 Linear Congruential Method is a class of Pseudo-Random Number Generator (PRNG) algorithms used for generating sequences of random-like numbers in a specific range. This method can be defined as: Xi+1 = (aXi + c) mod m where, X is the sequence of pseudo-random numbers m, ( > 0) the modulus a, (0, m) the multiplier c, (0, m) the increment X0, [0, m) – Initial value of the sequence, known as the seed Note: m, a, c, and X0 should be chosen appropriately to get a period almost equal to m. For a = 1, it will be the additive congruence method. For c = 0, it will be the multiplicative congruence method. Approach: Choose the seed value X0, the modulus parameter m, the multiplier term a, and the increment term c. Initialize the required amount of random numbers to generate (say, an integer variable noOfRandomNums). Define storage to keep the generated random numbers (here, an array is used) of size noOfRandomNums. Initialize the 0th index of the array with the seed value. For the rest of the indexes, follow the Linear Congruential Method to generate the random numbers. randomNums[i] = ((randomNums[i - 1] * a) + c) % m Finally, return the random numbers. Below is the implementation of the above approach: Java // Java implementation of the above approachimport java.util.*; class GFG { // Function to generate random numbers static void lcm(int seed, int mod, int multiplier, int inc, int[] randomNums, int noOfRandomNum) { // Initialize the seed state randomNums[0] = seed; // Traverse to generate required // numbers of random numbers for (int i = 1; i < noOfRandomNum; i++) { // Follow the linear congruential method randomNums[i] = ((randomNums[i - 1] * multiplier) + inc) % mod; } } // Driver code public static void main(String[] args) { // Seed value int seed = 5; // Modulus parameter int mod = 7; // Multiplier term int multiplier = 3; // Increment term int inc = 3; // Number of Random numbers // to be generated int noOfRandomNum = 10; // To store random numbers int[] randomNums = new int[noOfRandomNum]; // Function Call lcm(seed, mod, multiplier, inc, randomNums, noOfRandomNum); // Print the generated random numbers for (int i = 0; i < noOfRandomNum; i++) { System.out.print(randomNums[i] + " "); } }} 5 4 1 6 0 3 5 4 1 6 The literal meaning of pseudo is false or imaginary. These random numbers are called pseudo-random because a known arithmetic procedure is utilized to generate them. The generated sequence even forms a pattern, hence the generated numbers seem to be random but may not be truly random.
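The note above says that m, a, c, and X0 must be chosen appropriately for the period to approach m. As a hedged illustration that is not part of the original program, the Hull-Dobell theorem gives sufficient conditions for the period to equal m exactly: c must be coprime to m, a - 1 must be divisible by every prime factor of m, and a - 1 must be divisible by 4 if m is. The sketch below checks this for one such parameter set (m = 16, a = 5, c = 3):

// Counts how many steps the generator takes to return to its seed.
// With Hull-Dobell parameters the answer is exactly m (here, 16).
public class PeriodCheck {
    public static void main(String[] args) {
        int m = 16, a = 5, c = 3;
        int seed = 7;

        int x = seed;
        int period = 0;
        do {
            x = (a * x + c) % m;
            period++;
        } while (x != seed);

        // Prints: Period = 16
        System.out.println("Period = " + period);
    }
}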
[ { "code": null, "e": 23557, "s": 23529, "text": "\n17 Jul, 2021" }, { "code": null, "e": 23753, "s": 23557, "text": "Linear Congruential Method is a class of Pseudo-Random Number Generator (PRNG) algorithms used for generating sequences of random-like numbers in a specific range. This method can be defined as: " }, { "code": null, "e": 23824, "s": 23753, "text": " Xi+1 = aXi + c mod m" }, { "code": null, "e": 23831, "s": 23824, "text": "where," }, { "code": null, "e": 23875, "s": 23831, "text": "X, is the sequence of pseudo-random numbers" }, { "code": null, "e": 23897, "s": 23875, "text": "m, ( > 0) the modulus" }, { "code": null, "e": 23922, "s": 23897, "text": "a, (0, m) the multiplier" }, { "code": null, "e": 23946, "s": 23922, "text": "c, (0, m) the increment" }, { "code": null, "e": 24000, "s": 23946, "text": "X0, [0, m) – Initial value of sequence known as seed" }, { "code": null, "e": 24088, "s": 24000, "text": "Note: m, a, c, and X0 should be chosen appropriately to get a period almost equal to m." }, { "code": null, "e": 24142, "s": 24088, "text": "For a = 1, it will be the additive congruence method." }, { "code": null, "e": 24202, "s": 24142, "text": "For c = 0, it will be the multiplicative congruence method." }, { "code": null, "e": 24214, "s": 24202, "text": "Approach: " }, { "code": null, "e": 24305, "s": 24214, "text": "The seed value X0 is chosen, Modulus parameter m, Multiplier term a, and increment term c." }, { "code": null, "e": 24409, "s": 24305, "text": "Initialize the required amount of random numbers to generate (say, an integer variable noOfRandomNums)." }, { "code": null, "e": 24518, "s": 24409, "text": "Define storage to keep the generated random numbers (here, the vector is considered) of size noOfRandomNums." }, { "code": null, "e": 24578, "s": 24518, "text": "Initialize the 0th index of the vector with the seed value." }, { "code": null, "e": 24676, "s": 24578, "text": "For the rest of the indexes follow the Linear Congruential Method to generate the random numbers." }, { "code": null, "e": 24747, "s": 24676, "text": " randomNums[i] = ((randomNums[i – 1] * a) + c) % m " }, { "code": null, "e": 24783, "s": 24747, "text": "Finally, return the random numbers." 
}, { "code": null, "e": 24834, "s": 24783, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 24839, "s": 24834, "text": "Java" }, { "code": "// Java implementation of the above approachimport java.util.*; class GFG { // Function to generate random numbers static void lcm(int seed, int mod, int multiplier, int inc, int[] randomNums, int noOfRandomNum) { // Initialize the seed state randomNums[0] = seed; // Traverse to generate required // numbers of random numbers for (int i = 1; i < noOfRandomNum; i++) { // Follow the linear congruential method randomNums[i] = ((randomNums[i - 1] * multiplier) + inc) % m; } } // Driver code public static void main(String[] args) { // Seed value int seed = 5; // Modulus parameter int mod = 7; // Multiplier term int multiplier = 3; // Increment term int inc = 3; // Number of Random numbers // to be generated int noOfRandomNum = 10; // To store random numbers int[] randomNums = new int[noOfRandomNum]; // Function Call lcm(seed, mod, multiplier, inc, randomNums, noOfRandomNum); // Print the generated random numbers for (int i = 0; i < noOfRandomNum; i++) { System.out.print(randomNums[i] + \" \"); } }}", "e": 26164, "s": 24839, "text": null }, { "code": null, "e": 26184, "s": 26164, "text": "5 4 1 6 0 3 5 4 1 6" }, { "code": null, "e": 26465, "s": 26184, "text": "The literal meaning of pseudo is false or imaginary. These random numbers are called pseudo because some known arithmetic procedure is utilized to generate them. Even the generated sequence forms a pattern hence the generated number seems to be random but may not be truly random." }, { "code": null, "e": 26481, "s": 26465, "text": "saurabh1990aror" }, { "code": null, "e": 26488, "s": 26481, "text": "Picked" }, { "code": null, "e": 26493, "s": 26488, "text": "Java" }, { "code": null, "e": 26507, "s": 26493, "text": "Java Programs" }, { "code": null, "e": 26512, "s": 26507, "text": "Java" }, { "code": null, "e": 26610, "s": 26512, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26619, "s": 26610, "text": "Comments" }, { "code": null, "e": 26632, "s": 26619, "text": "Old Comments" }, { "code": null, "e": 26662, "s": 26632, "text": "Functional Interfaces in Java" }, { "code": null, "e": 26677, "s": 26662, "text": "Stream In Java" }, { "code": null, "e": 26698, "s": 26677, "text": "Constructors in Java" }, { "code": null, "e": 26744, "s": 26698, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 26763, "s": 26744, "text": "Exceptions in Java" }, { "code": null, "e": 26807, "s": 26763, "text": "Convert a String to Character array in Java" }, { "code": null, "e": 26833, "s": 26807, "text": "Java Programming Examples" }, { "code": null, "e": 26867, "s": 26833, "text": "Convert Double to Integer in Java" }, { "code": null, "e": 26914, "s": 26867, "text": "Implementing a Linked List in Java using Class" } ]
How to use the CopyTo(,) method of array class in C#
The CopyTo() method in C# is used to copy all the elements of one array to another array. In this method, you can set the starting index in the destination array at which copying begins; the elements are always taken from the beginning of the source array. The following is the syntax. CopyTo(dest, index); Here, dest = the destination array index = the index in the destination array at which copying begins The following is an example showing the usage of the CopyTo(,) method of the array class in C#. using System; class Program { static void Main() { int[] arrSource = new int[4]; arrSource[0] = 5; arrSource[1] = 9; arrSource[2] = 1; arrSource[3] = 3; int[] arrTarget = new int[4]; // CopyTo() method arrSource.CopyTo(arrTarget, 0); Console.WriteLine("Destination Array ..."); foreach (int value in arrTarget) { Console.WriteLine(value); } } } Destination Array ... 5 9 1 3
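To make the role of the index parameter concrete, here is a second sketch (not part of the original example; the variable names are ours) that copies a two-element source into a larger destination starting at index 2:

using System;
class Program {
   static void Main() {
      int[] arrSource = { 5, 9 };
      // Four slots, all initialized to 0
      int[] arrTarget = new int[4];

      // Copy every element of arrSource into arrTarget,
      // starting at index 2 of the destination
      arrSource.CopyTo(arrTarget, 2);

      // Prints: 0 0 5 9
      foreach (int value in arrTarget) {
         Console.Write(value + " ");
      }
   }
}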
[ { "code": null, "e": 1242, "s": 1062, "text": "The CopyTo() method in C# is used to copy elements of one array to another array. In this method, you can set the starting index from where you want to copy from the source array." }, { "code": null, "e": 1271, "s": 1242, "text": "The following is the syntax." }, { "code": null, "e": 1292, "s": 1271, "text": "CopyTo(dest, index);" }, { "code": null, "e": 1322, "s": 1292, "text": "Here dest = destination array" }, { "code": null, "e": 1344, "s": 1322, "text": "index= starting index" }, { "code": null, "e": 1432, "s": 1344, "text": "The following is an example showing the usage of CopyTo(,) method of array class in C#." }, { "code": null, "e": 1443, "s": 1432, "text": " Live Demo" }, { "code": null, "e": 1869, "s": 1443, "text": "using System;\nclass Program {\n static void Main() {\n int[] arrSource = new int[4];\n arrSource[0] = 5;\n arrSource[1] = 9;\n arrSource[2] = 1;\n arrSource[3] = 3;\n int[] arrTarget = new int[4];\n // CopyTo() method\n arrSource.CopyTo(arrTarget,0 );\n Console.WriteLine(\"Destination Array ...\");\n foreach (int value in arrTarget) {\n Console.WriteLine(value);\n }\n }\n}" }, { "code": null, "e": 1899, "s": 1869, "text": "Destination Array ...\n5\n9\n1\n3" } ]
Batch Script - NET ACCOUNTS
View the current password & logon restrictions for the computer. NET ACCOUNTS [/FORCELOGOFF:{minutes | NO}] [/MINPWLEN:length] [/MAXPWAGE:{days | UNLIMITED}] [/MINPWAGE:days] [/UNIQUEPW:number] [/DOMAIN] Wherein FORCELOGOFF − Force the log-off of the current user within a defined time period. MINPWLEN − This is the minimum password length setting to provide for the user. MAXPWAGE − This is the maximum password age setting to provide for the user. MINPWAGE − This is the minimum password age setting to provide for the user. UNIQUEPW − This requires that a user's new password be different from the specified number of most recent passwords. DOMAIN − This performs the operation on the domain controller of the current domain instead of on the local computer. NET ACCOUNTS Force user logoff how long after time expires?: Never Minimum password age (days): 0 Maximum password age (days): 42 Minimum password length: 0 Length of password history maintained: None Lockout threshold: Never Lockout duration (minutes): 30 Lockout observation window (minutes): 30 Computer role: SERVER The command completed successfully.
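The same command also applies new restrictions when the switches from the syntax above are supplied. For example, the following invocation (illustrative only; the values are chosen arbitrarily) enforces an eight-character minimum password length and a 30-day maximum password age:

NET ACCOUNTS /MINPWLEN:8 /MAXPWAGE:30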
[ { "code": null, "e": 2234, "s": 2169, "text": "View the current password & logon restrictions for the computer." }, { "code": null, "e": 2374, "s": 2234, "text": "NET ACCOUNT [/FORCELOGOFF:{minutes | NO}] [/MINPWLEN:length]\n[/MAXPWAGE:{days | UNLIMITED}] [/MINPWAGE:days] \n[/UNIQUEPW:number] [/DOMAIN]\n" }, { "code": null, "e": 2382, "s": 2374, "text": "Wherein" }, { "code": null, "e": 2464, "s": 2382, "text": "FORCELOGOFF − Force the log-off of the current user within a defined time period." }, { "code": null, "e": 2546, "s": 2464, "text": "FORCELOGOFF − Force the log-off of the current user within a defined time period." }, { "code": null, "e": 2626, "s": 2546, "text": "MINPWLEN − This is the minimum password length setting to provide for the user." }, { "code": null, "e": 2706, "s": 2626, "text": "MINPWLEN − This is the minimum password length setting to provide for the user." }, { "code": null, "e": 2783, "s": 2706, "text": "MAXPWAGE − This is the maximum password age setting to provide for the user." }, { "code": null, "e": 2860, "s": 2783, "text": "MAXPWAGE − This is the maximum password age setting to provide for the user." }, { "code": null, "e": 2937, "s": 2860, "text": "MINPWAGE − This is the minimum password age setting to provide for the user." }, { "code": null, "e": 3014, "s": 2937, "text": "MINPWAGE − This is the minimum password age setting to provide for the user." }, { "code": null, "e": 3027, "s": 3014, "text": "NET ACCOUNT\n" }, { "code": null, "e": 3578, "s": 3027, "text": "Force user logoff how long after time expires?: Never\nMinimum password age (days): 0\nMaximum password age (days): 42\nMinimum password length: 0\nLength of password history maintained: None\nLockout threshold: Never\nLockout duration (minutes): 30\nLockout observation window (minutes): 30\nComputer role: SERVER\nThe command completed successfully.\n" }, { "code": null, "e": 3585, "s": 3578, "text": " Print" }, { "code": null, "e": 3596, "s": 3585, "text": " Add Notes" } ]
What is explode() function in PHP?
In this article, we will figure out how to utilize the PHP explode() function, which is a predefined inbuilt PHP function. The explode function is utilized to split a string into pieces of elements to form an array. In other words, it enables us to break a string into smaller pieces of content at every occurrence of a break character. This break is known as the delimiter. explode(separator, String, Number of Elements) The explode function accepts three parameters, of which two are compulsory and one is optional. Let's discuss the three parameters. Separator: This character specifies the critical points at which the string will split, i.e. whenever this character is found in the string, it marks the end of one element of the array and the start of another. String: The input string that is to be split into an array. Number of Elements: This is an optional parameter. It is utilized to limit the number of elements in the array. This parameter can be any integer (positive, negative, or zero): a positive value limits the array to at most that many elements, with the last element containing the rest of the string; a negative value returns all elements except the last few; zero is treated as 1. When this parameter is not provided, the returned array contains all of the elements formed by splitting the string at the separator. <?php $Original = "Hello,Welcome To Tutorials Point"; print_r(explode(" ",$Original)); print_r(explode(" ",$Original,3)); ?> Array ( [0] => Hello,Welcome [1] => To [2] => Tutorials [3] => Point ) Array ( [0] => Hello,Welcome [1] => To [2] => Tutorials Point ) In the above example, in the first expression, we have not passed the third parameter, so the string is split at every "space" delimiter; in the second expression, we have instructed explode() to create an array with only three elements by passing 3 as the third parameter.
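As a further sketch (not part of the original example) of how a negative third parameter behaves, the call below returns every element except the last one:

<?php
   $Original = "Hello,Welcome To Tutorials Point";
   // A limit of -1 drops the last element
   print_r(explode(" ", $Original, -1));
?>

This prints Array ( [0] => Hello,Welcome [1] => To [2] => Tutorials ).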
[ { "code": null, "e": 1297, "s": 1187, "text": "In this article, figure out how to utilize PHP Explode() function which is a predefined inbuilt PHP function." }, { "code": null, "e": 1521, "s": 1297, "text": "The explode function is utilized to \"Split a string into pieces of elements to form an array\".The explode function in PHP enables us to break a string into smaller content with a break. This break is known as the delimiter." }, { "code": null, "e": 1568, "s": 1521, "text": "explode(separator, String, Number of Elements)" }, { "code": null, "e": 1668, "s": 1568, "text": "The explode function acknowledges three parameters of which two are compulsory and one is optional." }, { "code": null, "e": 1704, "s": 1668, "text": "Let's discuss the three parameters." }, { "code": null, "e": 1915, "s": 1704, "text": "This character specifies the critical points or points at which the string will split, i.e. whenever this character is found in the string it symbolizes the end of one element of the array and start of another." }, { "code": null, "e": 1961, "s": 1915, "text": "The input string is to be split in the array." }, { "code": null, "e": 2119, "s": 1961, "text": "This is an optional parameter. It is utilized to determine the number of elements of the array. This parameter can be any integer(positive, negative or zero)" }, { "code": null, "e": 2270, "s": 2119, "text": "When this parameter is not provided the array returned contains the total number of the element formed after separating the string with the separator." }, { "code": null, "e": 2404, "s": 2270, "text": "<?php\n $Original = \"Hello,Welcome To Tutorials Point\";\n print_r(explode(\" \",$Original));\n print_r(explode(\" \",$Original,3));\n?>" }, { "code": null, "e": 2539, "s": 2404, "text": "Array ( [0] => Hello,Welcome [1] => To [2] => Tutorials [3] => Point )\nArray ( [0] => Hello,Welcome [1] => To [2] => Tutorials Point )" }, { "code": null, "e": 2829, "s": 2539, "text": "In the above example, in the first expression, we have not passed the third parameter and we are only creating the new array with the help of \"space\" delimiter but in the second expression, we have instructed to create the new array with only three elements by passing the third parameter." } ]
Clockwise rotation of Linked List - GeeksforGeeks
31 Mar, 2022 Given a singly linked list and an integer K, the task is to rotate the linked list clockwise to the right by K places.Examples: Input: 1 -> 2 -> 3 -> 4 -> 5 -> NULL, K = 2 Output: 4 -> 5 -> 1 -> 2 -> 3 -> NULLInput: 7 -> 9 -> 11 -> 13 -> 3 -> 5 -> NULL, K = 12 Output: 7 -> 9 -> 11 -> 13 -> 3 -> 5 -> NULL Approach: To rotate the linked list first check whether the given k is greater than the count of nodes in the linked list or not. Traverse the list and find the length of the linked list then compare it with k, if less then continue otherwise deduce it in the range of linked list size by taking modulo with the length of the list. After that subtract the value of k from the length of the list. Now, the question has been changed to the left rotation of the linked list so follow that procedure: Change the next of the kth node to NULL. Change the next of the last node to the previous head node. Change the head to (k+1)th node. In order to do that, the pointers to the kth node, (k+1)th node, and last node are required.Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ implementation of the approach#include <bits/stdc++.h>using namespace std; /* Link list node */class Node {public: int data; Node* next;}; /* A utility function to push a node */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* A utility function to print linked list */void printList(Node* node){ while (node != NULL) { cout << node->data << " -> "; node = node->next; } cout << "NULL";} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerNode* rightRotate(Node* head, int k){ // If the linked list is empty if (!head) return head; // len is used to store length of the linked list // tmp will point to the last node after this loop Node* tmp = head; int len = 1; while (tmp->next != NULL) { tmp = tmp->next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or NULL after this loop Node* current = head; int cnt = 1; while (cnt < k && current != NULL) { current = current->next; cnt++; } // If current is NULL then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == NULL) return head; // current points to the kth node Node* kthnode = current; // Change next of last node to previous head tmp->next = head; // Change head to (k+1)th node head = kthnode->next; // Change next of kth node to NULL kthnode->next = NULL; // Return the updated head pointer return head;} // Driver codeint main(){ /* The constructed linked list is: 1->2->3->4->5 */ Node* head = NULL; push(&head, 5); push(&head, 4); push(&head, 3); push(&head, 2); push(&head, 1); int k = 2; // Rotate the linked list Node* updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head); return 0;} // Java implementation of the approachclass GFG{ /* Link list node */static class Node{ int data; Node next;} /* A utility function to push a node */static Node push(Node head_ref, int new_data){ /* allocate node */ Node new_node = new Node(); /* put 
in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref;} /* A utility function to print linked list */static void printList(Node node){ while (node != null) { System.out.print(node.data + " -> "); node = node.next; } System.out.print( "null");} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerstatic Node rightRotate(Node head, int k){ // If the linked list is empty if (head == null) return head; // len is used to store length of the linked list // tmp will point to the last node after this loop Node tmp = head; int len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop Node current = head; int cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node Node kthnode = current; // Change next of last node to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head;} // Driver codepublic static void main(String args[]){ /* The constructed linked list is: 1.2.3.4.5 */ Node head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); head = push(head, 2); head = push(head, 1); int k = 2; // Rotate the linked list Node updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head);}} // This code is contributed by Arnub Kundu # Python3 implementation of the approach ''' Link list node '''class Node: def __init__(self, data): self.data = data self.next = None ''' A utility function to push a node '''def push(head_ref, new_data): ''' allocate node ''' new_node = Node(new_data) ''' put in the data ''' new_node.data = new_data ''' link the old list off the new node ''' new_node.next = (head_ref) ''' move the head to point to the new node ''' (head_ref) = new_node return head_ref ''' A utility function to print linked list '''def printList(node): while (node != None): print(node.data, end=' -> ') node = node.next print("NULL") # Function that rotates the given linked list# clockwise by k and returns the updated# head pointerdef rightRotate(head, k): # If the linked list is empty if (not head): return head # len is used to store length of the linked list # tmp will point to the last node after this loop tmp = head len = 1 while (tmp.next != None): tmp = tmp.next len += 1 # If k is greater than the size # of the linked list if (k > len): k = k % len # Subtract from length to convert # it into left rotation k = len - k # If no rotation needed then # return the head node if (k == 0 or k == len): return head # current will either point to # kth or None after this loop current = head cnt = 1 while (cnt < k and current != None): current = current.next cnt += 1 # If current is None then k is equal to the # count of nodes in the list # Don't change the list in this case if (current == None): return head # current points to the kth node 
kthnode = current # Change next of last node to previous head tmp.next = head # Change head to (k+1)th node head = kthnode.next # Change next of kth node to None kthnode.next = None # Return the updated head pointer return head # Driver codeif __name__ == '__main__': ''' The constructed linked list is: 1.2.3.4.5 ''' head = None head = push(head, 5) head = push(head, 4) head = push(head, 3) head = push(head, 2) head = push(head, 1) k = 2 # Rotate the linked list updated_head = rightRotate(head, k) # Print the rotated linked list printList(updated_head) # This code is contributed by rutvik_56 // C# implementation of the approachusing System; class GFG{ /* Link list node */public class Node{ public int data; public Node next;} /* A utility function to push a node */static Node push(Node head_ref, int new_data){ /* allocate node */ Node new_node = new Node(); /* put in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref;} /* A utility function to print linked list */static void printList(Node node){ while (node != null) { Console.Write(node.data + " -> "); node = node.next; } Console.Write("null");} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerstatic Node rightRotate(Node head, int k){ // If the linked list is empty if (head == null) return head; // len is used to store length of // the linked list, tmp will point // to the last node after this loop Node tmp = head; int len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop Node current = head; int cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal // to the count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node Node kthnode = current; // Change next of last node // to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head;} // Driver codepublic static void Main(String []args){ /* The constructed linked list is: 1.2.3.4.5 */ Node head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); head = push(head, 2); head = push(head, 1); int k = 2; // Rotate the linked list Node updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head);}} // This code is contributed by PrinciRaj1992 <script> // JavaScript implementation of the approach /* Link list node */ class Node { constructor() { this.data = 0; this.next = null; } } /* A utility function to push a node */ function push(head_ref , new_data) { /* allocate node */ var new_node = new Node(); /* put in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref; } /* A utility function to print linked list */ function printList(node) { while (node != null) { document.write(node.data + " -> "); node = node.next; } document.write("null"); } // 
Function that rotates the given linked list // clockwise by k and returns the updated // head pointer function rightRotate(head , k) { // If the linked list is empty if (head == null) return head; // len is used to store length // of the linked list // tmp will point to the last // node after this loop var tmp = head; var len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop var current = head; var cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node var kthnode = current; // Change next of last node to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head; } // Driver code /* * The constructed linked list is: 1.2.3.4.5 */ var head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); head = push(head, 2); head = push(head, 1); var k = 2; // Rotate the linked list var updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head); // This code contributed by Rajput-Ji </script> 4 -> 5 -> 1 -> 2 -> 3 -> NULL Time Complexity: O(n) where n is the number of nodes in Linked List. Auxiliary Space: O(1) STL-based approach: This problem can also be solved using the deque data structure provided in the C++ STL. Approach: Initialise a deque with the type Node* and push the linked list into it. Then keep popping from its back and adding that node to its front until k such operations have been performed. C++ #include <bits/stdc++.h>using namespace std;class Node {public: int val; Node* next; Node(int d) { val = d; next = NULL; }};void build(Node*& head, int val){ if (head == NULL) { head = new Node(val); } else { Node* temp = head; while (temp->next != NULL) { temp = temp->next; } temp->next = new Node(val); }}Node* rotate_clockwise(Node* head, int k){ if (head == NULL) { return NULL; } deque<Node*> q; Node* temp = head; while (temp != NULL) { q.push_back(temp); temp = temp->next; } k %= q.size(); while ( k--) // popping from back and adding to its front { q.back()->next = q.front(); q.push_front(q.back()); q.pop_back(); q.back()->next = NULL; } return q.front();}void print(Node* head){ while (head != NULL) { cout << head->val << " -> "; head = head->next; } cout << "NULL"; cout << endl;}int main(){ Node* head = NULL; build(head, 1); build(head, 2); build(head, 3); build(head, 4); build(head, 5); int k = 2; Node* r = rotate_clockwise(head, k); print(r); return 0;} 4 -> 5 -> 1 -> 2 -> 3 -> NULL Time Complexity: O(N) Auxiliary Space: O(N)
[ { "code": null, "e": 26161, "s": 26133, "text": "\n31 Mar, 2022" }, { "code": null, "e": 26291, "s": 26161, "text": "Given a singly linked list and an integer K, the task is to rotate the linked list clockwise to the right by K places.Examples: " }, { "code": null, "e": 26471, "s": 26291, "text": "Input: 1 -> 2 -> 3 -> 4 -> 5 -> NULL, K = 2 Output: 4 -> 5 -> 1 -> 2 -> 3 -> NULLInput: 7 -> 9 -> 11 -> 13 -> 3 -> 5 -> NULL, K = 12 Output: 7 -> 9 -> 11 -> 13 -> 3 -> 5 -> NULL " }, { "code": null, "e": 26972, "s": 26473, "text": "Approach: To rotate the linked list first check whether the given k is greater than the count of nodes in the linked list or not. Traverse the list and find the length of the linked list then compare it with k, if less then continue otherwise deduce it in the range of linked list size by taking modulo with the length of the list. After that subtract the value of k from the length of the list. Now, the question has been changed to the left rotation of the linked list so follow that procedure: " }, { "code": null, "e": 27013, "s": 26972, "text": "Change the next of the kth node to NULL." }, { "code": null, "e": 27073, "s": 27013, "text": "Change the next of the last node to the previous head node." }, { "code": null, "e": 27106, "s": 27073, "text": "Change the head to (k+1)th node." }, { "code": null, "e": 27251, "s": 27106, "text": "In order to do that, the pointers to the kth node, (k+1)th node, and last node are required.Below is the implementation of the above approach: " }, { "code": null, "e": 27255, "s": 27251, "text": "C++" }, { "code": null, "e": 27260, "s": 27255, "text": "Java" }, { "code": null, "e": 27268, "s": 27260, "text": "Python3" }, { "code": null, "e": 27271, "s": 27268, "text": "C#" }, { "code": null, "e": 27282, "s": 27271, "text": "Javascript" }, { "code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; /* Link list node */class Node {public: int data; Node* next;}; /* A utility function to push a node */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* A utility function to print linked list */void printList(Node* node){ while (node != NULL) { cout << node->data << \" -> \"; node = node->next; } cout << \"NULL\";} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerNode* rightRotate(Node* head, int k){ // If the linked list is empty if (!head) return head; // len is used to store length of the linked list // tmp will point to the last node after this loop Node* tmp = head; int len = 1; while (tmp->next != NULL) { tmp = tmp->next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or NULL after this loop Node* current = head; int cnt = 1; while (cnt < k && current != NULL) { current = current->next; cnt++; } // If current is NULL then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == NULL) return head; // current points to the kth node Node* kthnode = current; // Change next of last node to previous head tmp->next = head; // Change head to 
(k+1)th node head = kthnode->next; // Change next of kth node to NULL kthnode->next = NULL; // Return the updated head pointer return head;} // Driver codeint main(){ /* The constructed linked list is: 1->2->3->4->5 */ Node* head = NULL; push(&head, 5); push(&head, 4); push(&head, 3); push(&head, 2); push(&head, 1); int k = 2; // Rotate the linked list Node* updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head); return 0;}", "e": 29743, "s": 27282, "text": null }, { "code": "// Java implementation of the approachclass GFG{ /* Link list node */static class Node{ int data; Node next;} /* A utility function to push a node */static Node push(Node head_ref, int new_data){ /* allocate node */ Node new_node = new Node(); /* put in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref;} /* A utility function to print linked list */static void printList(Node node){ while (node != null) { System.out.print(node.data + \" -> \"); node = node.next; } System.out.print( \"null\");} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerstatic Node rightRotate(Node head, int k){ // If the linked list is empty if (head == null) return head; // len is used to store length of the linked list // tmp will point to the last node after this loop Node tmp = head; int len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop Node current = head; int cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node Node kthnode = current; // Change next of last node to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head;} // Driver codepublic static void main(String args[]){ /* The constructed linked list is: 1.2.3.4.5 */ Node head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); head = push(head, 2); head = push(head, 1); int k = 2; // Rotate the linked list Node updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head);}} // This code is contributed by Arnub Kundu", "e": 32313, "s": 29743, "text": null }, { "code": "# Python3 implementation of the approach ''' Link list node '''class Node: def __init__(self, data): self.data = data self.next = None ''' A utility function to push a node '''def push(head_ref, new_data): ''' allocate node ''' new_node = Node(new_data) ''' put in the data ''' new_node.data = new_data ''' link the old list off the new node ''' new_node.next = (head_ref) ''' move the head to point to the new node ''' (head_ref) = new_node return head_ref ''' A utility function to print linked list '''def printList(node): while (node != None): print(node.data, end=' -> ') node = node.next print(\"NULL\") # Function that rotates the given linked list# clockwise by k and returns the updated# head 
pointerdef rightRotate(head, k): # If the linked list is empty if (not head): return head # len is used to store length of the linked list # tmp will point to the last node after this loop tmp = head len = 1 while (tmp.next != None): tmp = tmp.next len += 1 # If k is greater than the size # of the linked list if (k > len): k = k % len # Subtract from length to convert # it into left rotation k = len - k # If no rotation needed then # return the head node if (k == 0 or k == len): return head # current will either point to # kth or None after this loop current = head cnt = 1 while (cnt < k and current != None): current = current.next cnt += 1 # If current is None then k is equal to the # count of nodes in the list # Don't change the list in this case if (current == None): return head # current points to the kth node kthnode = current # Change next of last node to previous head tmp.next = head # Change head to (k+1)th node head = kthnode.next # Change next of kth node to None kthnode.next = None # Return the updated head pointer return head # Driver codeif __name__ == '__main__': ''' The constructed linked list is: 1.2.3.4.5 ''' head = None head = push(head, 5) head = push(head, 4) head = push(head, 3) head = push(head, 2) head = push(head, 1) k = 2 # Rotate the linked list updated_head = rightRotate(head, k) # Print the rotated linked list printList(updated_head) # This code is contributed by rutvik_56", "e": 34735, "s": 32313, "text": null }, { "code": "// C# implementation of the approachusing System; class GFG{ /* Link list node */public class Node{ public int data; public Node next;} /* A utility function to push a node */static Node push(Node head_ref, int new_data){ /* allocate node */ Node new_node = new Node(); /* put in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref;} /* A utility function to print linked list */static void printList(Node node){ while (node != null) { Console.Write(node.data + \" -> \"); node = node.next; } Console.Write(\"null\");} // Function that rotates the given linked list// clockwise by k and returns the updated// head pointerstatic Node rightRotate(Node head, int k){ // If the linked list is empty if (head == null) return head; // len is used to store length of // the linked list, tmp will point // to the last node after this loop Node tmp = head; int len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop Node current = head; int cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal // to the count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node Node kthnode = current; // Change next of last node // to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head;} // Driver codepublic static void Main(String []args){ /* The constructed linked list is: 1.2.3.4.5 */ Node head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); 
head = push(head, 2); head = push(head, 1); int k = 2; // Rotate the linked list Node updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head);}} // This code is contributed by PrinciRaj1992", "e": 37358, "s": 34735, "text": null }, { "code": "<script> // JavaScript implementation of the approach /* Link list node */ class Node { constructor() { this.data = 0; this.next = null; } } /* A utility function to push a node */ function push(head_ref , new_data) { /* allocate node */ var new_node = new Node(); /* put in the data */ new_node.data = new_data; /* link the old list off the new node */ new_node.next = (head_ref); /* move the head to point to the new node */ (head_ref) = new_node; return head_ref; } /* A utility function to print linked list */ function printList(node) { while (node != null) { document.write(node.data + \" -> \"); node = node.next; } document.write(\"null\"); } // Function that rotates the given linked list // clockwise by k and returns the updated // head pointer function rightRotate(head , k) { // If the linked list is empty if (head == null) return head; // len is used to store length // of the linked list // tmp will point to the last // node after this loop var tmp = head; var len = 1; while (tmp.next != null) { tmp = tmp.next; len++; } // If k is greater than the size // of the linked list if (k > len) k = k % len; // Subtract from length to convert // it into left rotation k = len - k; // If no rotation needed then // return the head node if (k == 0 || k == len) return head; // current will either point to // kth or null after this loop var current = head; var cnt = 1; while (cnt < k && current != null) { current = current.next; cnt++; } // If current is null then k is equal to the // count of nodes in the list // Don't change the list in this case if (current == null) return head; // current points to the kth node var kthnode = current; // Change next of last node to previous head tmp.next = head; // Change head to (k+1)th node head = kthnode.next; // Change next of kth node to null kthnode.next = null; // Return the updated head pointer return head; } // Driver code /* * The constructed linked list is: 1.2.3.4.5 */ var head = null; head = push(head, 5); head = push(head, 4); head = push(head, 3); head = push(head, 2); head = push(head, 1); var k = 2; // Rotate the linked list var updated_head = rightRotate(head, k); // Print the rotated linked list printList(updated_head); // This code contributed by Rajput-Ji </script>", "e": 40275, "s": 37358, "text": null }, { "code": null, "e": 40305, "s": 40275, "text": "4 -> 5 -> 1 -> 2 -> 3 -> NULL" }, { "code": null, "e": 40376, "s": 40307, "text": "Time Complexity: O(n) where n is the number of nodes in Linked List." }, { "code": null, "e": 40398, "s": 40376, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 40420, "s": 40398, "text": "STL based approach : " }, { "code": null, "e": 40507, "s": 40420, "text": "This problem can also be solved using the deque data structure provided in the C++ STL" }, { "code": null, "e": 40519, "s": 40507, "text": "Approach : " }, { "code": null, "e": 40710, "s": 40519, "text": "Initialise a deque with the type Node* and push the linked list into it.Then keep popping from it’s back and adding that node to it’s front until the number of operations are not equal to k." 
}, { "code": null, "e": 40714, "s": 40710, "text": "C++" }, { "code": "#include <bits/stdc++.h>using namespace std;class Node {public: int val; Node* next; Node(int d) { val = d; next = NULL; }};void build(Node*& head, int val){ if (head == NULL) { head = new Node(val); } else { Node* temp = head; while (temp->next != NULL) { temp = temp->next; } temp->next = new Node(val); }}Node* rotate_clockwise(Node* head, int k){ if (head == NULL) { return NULL; } deque<Node*> q; Node* temp = head; while (temp != NULL) { q.push_back(temp); temp = temp->next; } k %= q.size(); while ( k--) // popping from back and adding to it's front { q.back()->next = q.front(); q.push_front(q.back()); q.pop_back(); q.back()->next = NULL; } return q.front();}void print(Node* head){ while (head != NULL) { cout << head->val << \" -> \"; head = head->next; } cout << \"NULL\"; cout << endl;}int main(){ Node* head = NULL; build(head, 1); build(head, 2); build(head, 3); build(head, 4); build(head, 5); int k = 2; Node* r = rotate_clockwise(head, k); print(r); return 0;}", "e": 41922, "s": 40714, "text": null }, { "code": null, "e": 41952, "s": 41922, "text": "4 -> 5 -> 1 -> 2 -> 3 -> NULL" }, { "code": null, "e": 41974, "s": 41952, "text": "Time Complexity: O(N)" }, { "code": null, "e": 41997, "s": 41974, "text": "Auxiliary Space: O(N) " }, { "code": null, "e": 42008, "s": 41997, "text": "andrew1234" }, { "code": null, "e": 42022, "s": 42008, "text": "princiraj1992" }, { "code": null, "e": 42032, "s": 42022, "text": "rutvik_56" }, { "code": null, "e": 42042, "s": 42032, "text": "Rajput-Ji" }, { "code": null, "e": 42057, "s": 42042, "text": "adityamutharia" }, { "code": null, "e": 42073, "s": 42057, "text": "simranarora5sos" }, { "code": null, "e": 42086, "s": 42073, "text": "rohitsingh57" }, { "code": null, "e": 42099, "s": 42086, "text": "Linked Lists" }, { "code": null, "e": 42108, "s": 42099, "text": "rotation" }, { "code": null, "e": 42124, "s": 42108, "text": "Data Structures" }, { "code": null, "e": 42136, "s": 42124, "text": "Linked List" }, { "code": null, "e": 42152, "s": 42136, "text": "Data Structures" }, { "code": null, "e": 42164, "s": 42152, "text": "Linked List" }, { "code": null, "e": 42262, "s": 42164, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 42311, "s": 42262, "text": "SDE SHEET - A Complete Guide for SDE Preparation" }, { "code": null, "e": 42336, "s": 42311, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 42363, "s": 42336, "text": "Introduction to Algorithms" }, { "code": null, "e": 42390, "s": 42363, "text": "How to Start Learning DSA?" }, { "code": null, "e": 42409, "s": 42390, "text": "Hash Map in Python" }, { "code": null, "e": 42444, "s": 42409, "text": "Linked List | Set 1 (Introduction)" }, { "code": null, "e": 42483, "s": 42444, "text": "Linked List | Set 2 (Inserting a node)" }, { "code": null, "e": 42531, "s": 42483, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 42569, "s": 42531, "text": "Linked List | Set 3 (Deleting a node)" } ]
JavaScript in operator - GeeksforGeeks
11 Sep, 2020

Below is an example of the in operator.

Example:

<script>
    function gfg() {
        // Illustration of in operator
        const array = ['geeks', 'for', 'geeks']

        // 0 is a valid array index: true
        document.write(0 in array);

        // 'for' is an element value, not a key: false
        document.write('for' in array);

        // 'length' is a built-in array property: true
        document.write('length' in array);
    }
    gfg();
</script>

Output:

true
false
true

The in operator is an inbuilt operator in JavaScript which is used to check whether a particular property exists in an object or not. It returns the boolean value true if the specified property is in the object; otherwise it returns false.

Syntax:

prop in object

Parameters:

prop: The string or symbol that represents a property name or an array index.
object: The object which is to be checked for the presence of prop.

Return value: The operator returns one of two boolean values:

true: Returned if the specified property is found in the object.
false: Returned if the specified property is not found in the object.

The examples below further illustrate the in operator in JavaScript. Note that in tests keys (property names and array indexes), never element values, which is why the value checks below return false.

Example 1:

<script>
    // Illustration of in operator
    const array = ['geeksforgeeks', 'geeksfor',
                   'geeks', 'geeks1']

    // Checking index numbers (valid indexes are 0-3)
    console.log(0 in array)
    console.log(2 in array)
    console.log(5 in array)

    // Checking element values (values are not keys)
    console.log('for' in array)
    console.log('geeksforgeeks' in array)

    // Checking an array property
    console.log('length' in array)
</script>

Output:

> true
> true
> false
> false
> false
> true

Example 2:

<script>
    // Illustration of in operator
    const object = {
        val1: 'Geeksforgeeks',
        val2: 'Javascript',
        val3: 'operator',
        val4: 'in'
    };

    console.log('val1' in object);

    delete object.val1;
    console.log('val1' in object);

    if ('val1' in object === false) {
        object.val1 = 'GEEKSFORGEEKS';
    }

    console.log(object.val1);
</script>

Output:

> true
> false
> "GEEKSFORGEEKS"

Supported Browsers: The browsers that support the JavaScript in operator are listed below:

Google Chrome
Firefox
Opera
Safari
Edge
Internet Explorer
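A final caveat, shown as a minimal sketch added here for illustration (the object is made up): because in also walks the prototype chain, it reports inherited properties as present, which Object.prototype.hasOwnProperty() does not.

<script>
    const obj = { val1: 'Geeksforgeeks' };

    // true: toString is inherited from Object.prototype
    console.log('toString' in obj);

    // false: toString is not an own property of obj
    console.log(obj.hasOwnProperty('toString'));
</script>

Output:

> true
> false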
Program to find Sum of the series 1*3 + 3*5 + ... - GeeksforGeeks
01 Nov, 2021

Given a series:

Sn = 1*3 + 3*5 + 5*7 + ...

It is required to find the sum of the first n terms of this series, represented by Sn, where n is taken as input.

Examples:

Input : n = 2
Output : Sn = 18
Explanation:
The sum of the first 2 terms of the series is
1*3 + 3*5
= 3 + 15
= 18

Input : n = 4
Output : Sn = 116
Explanation:
The sum of the first 4 terms of the series is
1*3 + 3*5 + 5*7 + 7*9
= 3 + 15 + 35 + 63
= 116

Let the n-th term be denoted by tn. This problem can easily be solved by observing that the n-th term is the product of corresponding terms of two simpler series:

tn = (n-th term of (1, 3, 5, ...)) * (n-th term of (3, 5, 7, ...))

Now, the n-th term of the series 1, 3, 5, ... is given by 2*n - 1, and the n-th term of the series 3, 5, 7, ... is given by 2*n + 1. Putting these two values in tn:

tn = (2*n - 1)*(2*n + 1) = 4*n*n - 1

Now, the sum of the first n terms is given by:

Sn = Σ(4*k*k - 1) = 4*Σ(k*k) - Σ(1), summing over k = 1 to n

It is known that the sum of the first n squares (1, 4, 9, ...) is given by n*(n + 1)*(2*n + 1)/6, and the sum of n ones is n itself. Putting these values in Sn:

Sn = 4*n*(n + 1)*(2*n + 1)/6 - n = n*(4*n*n + 6*n - 1)/3

Now the value of Sn can easily be found by putting in the desired value of n. Below is the implementation of the above approach:

C++

// C++ program to find sum of first n terms
#include <bits/stdc++.h>
using namespace std;

int calculateSum(int n)
{
    // Sn = n*(4*n*n + 6*n - 1)/3
    return (n * (4 * n * n + 6 * n - 1) / 3);
}

int main()
{
    // number of terms to be included in the sum
    int n = 4;

    // find the Sn
    cout << "Sum = " << calculateSum(n);
    return 0;
}

Java

// Java program to find sum of first n terms
class GFG {

    static int calculateSum(int n)
    {
        // Sn = n*(4*n*n + 6*n - 1)/3
        return (n * (4 * n * n + 6 * n - 1) / 3);
    }

    // Driver Code
    public static void main(String args[])
    {
        // number of terms to be included in the sum
        int n = 4;

        // find the Sn
        System.out.println("Sum = " + calculateSum(n));
    }
}

Python

# Python program to find sum of first n terms
def calculateSum(n):
    # Sn = n*(4*n*n + 6*n - 1)/3
    # integer division keeps the result an int
    return (n * (4 * n * n + 6 * n - 1) // 3)

# Driver Code

# number of terms to be included in the sum
n = 4

# find the Sn
print("Sum =", calculateSum(n))

C#

// C# program to find sum of first n terms
using System;

class GFG {

    static int calculateSum(int n)
    {
        // Sn = n*(4*n*n + 6*n - 1)/3
        return (n * (4 * n * n + 6 * n - 1) / 3);
    }

    // Driver code
    static public void Main()
    {
        // number of terms to be included in the sum
        int n = 4;

        // find the Sn
        Console.WriteLine("Sum = " + calculateSum(n));
    }
}

PHP

<?php
// PHP program to find sum of first n terms

function calculateSum($n)
{
    // Sn = n*(4*n*n + 6*n - 1)/3
    return ($n * (4 * $n * $n + 6 * $n - 1) / 3);
}

// number of terms to be included in the sum
$n = 4;

// find the Sn
echo "Sum = " . calculateSum($n);
?>

Javascript

<script>
// Javascript program to find sum of first n terms
function calculateSum(n)
{
    // Sn = n*(4*n*n + 6*n - 1)/3
    return (n * (4 * n * n + 6 * n - 1) / 3);
}

// Driver Code

// number of terms to be included in the sum
let n = 4;

// find the Sn
document.write("Sum = " + calculateSum(n));
</script>

Output:

Sum = 116

Time Complexity: O(1)
Auxiliary Space: O(1)
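As a quick sanity check of the closed form (plain arithmetic only, nothing beyond the formula derived above): for n = 4,

Sn = 4*(4*4*4 + 6*4 - 1)/3 = 4*(64 + 24 - 1)/3 = 4*87/3 = 348/3 = 116

which matches the term-by-term sum 3 + 15 + 35 + 63 = 116 from the second example.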
Python | Pandas Dataframe.rename() - GeeksforGeeks
17 Sep, 2018

Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analyzing data much easier.

The Pandas rename() method is used to rename any index, column or row. Renaming of columns can also be done by dataframe.columns = [#list], but that approach offers little freedom: even if only one column has to be changed, the full column list has to be passed, and it is not applicable to index labels.

Syntax: DataFrame.rename(mapper=None, index=None, columns=None, axis=None, copy=True, inplace=False, level=None)

Parameters:

mapper, index and columns: Dictionary value; the key refers to the old name and the value refers to the new name. Only one of these parameters can be used at once.
axis: int or string value, 0/'row' for rows and 1/'columns' for columns.
copy: Copies underlying data if True.
inplace: Makes changes in the original DataFrame if True.
level: Used to specify the level in case the DataFrame has a multi-level index.

Return Type: DataFrame with new names.

The examples below read from a CSV file named nba.csv.

Example #1: Changing index labels

In this example, the Name column is set as the index column, and its labels are changed later using the rename() method.

# importing pandas module
import pandas as pd

# making data frame from csv file
data = pd.read_csv("nba.csv", index_col="Name")

# changing index labels with rename()
data.rename(index={"Avery Bradley": "NEW NAME",
                   "Jae Crowder": "NEW NAME 2"},
            inplace=True)

# display
data

Output: the index labels at the first and second positions are changed to NEW NAME and NEW NAME 2.

Example #2: Changing multiple column names

In this example, multiple column names are changed by passing a dictionary. The result is then compared to the data frame produced by assigning to the .columns attribute. Null values are dropped before comparing, since NaN == NaN returns False.

# importing pandas module
import pandas as pd

# making data frame from csv file
data = pd.read_csv("nba.csv", index_col="Name")

# changing columns with rename()
new_data = data.rename(columns={"Team": "Team Name",
                                "College": "Education",
                                "Salary": "Income"})

# changing columns by assigning to .columns
data.columns = ['Team Name', 'Number', 'Position', 'Age',
                'Height', 'Weight', 'Education', 'Income']

# dropna used to ignore na values
print(new_data.dropna() == data.dropna())

Output: every cell of the comparison is True, confirming that both approaches produce the same result.
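Since the mapper, index and columns arguments also accept functions, an entire axis can be relabeled without listing every name. Below is a minimal sketch (the small inline DataFrame is invented for illustration and is not the nba.csv data):

# passing a callable as the columns mapper
import pandas as pd

df = pd.DataFrame({"team": ["A", "B"], "salary": [10, 20]})

# every column label is passed through str.upper
df = df.rename(columns=str.upper)

print(df.columns.tolist())  # ['TEAM', 'SALARY']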
Is it fine to write void main() or main() in C/C++? - GeeksforGeeks
02 Dec, 2021

In C, void main() has no defined (legitimate) meaning, and it can sometimes produce garbage results or an error. main() is used to denote the main function, which takes no arguments and returns an integer data type. The following definition is not and never has been valid C++, nor has it ever been valid C; see the ISO C++ standard 3.6.1 or the ISO C standard 5.1.2.2.1 for more:

void main() {
    // Body
}

A conforming implementation accepts the formats given below:

int main() {
    // Body
}

and

int main(int argc, char* argv[]) {
    // Body
}

A conforming implementation may provide more versions of main(), but they must all have return type int. The int returned by main() is a way for a program to return a value to "the system" that invokes it. On systems that don't provide such a facility the return value is ignored, but that doesn't make void main() legal C++ or legal C.

Note: Even if your compiler accepts void main(), avoid it, or risk being considered ignorant by C and C++ programmers.

In C++, main() need not contain an explicit return statement. In that case, the value returned is 0, meaning successful execution.

Example:

// CPP Program to demonstrate main() with
// return type
#include <iostream>

// Driver Code
int main()
{
    std::cout << "This program returns the integer value 0\n";
}

Output:

This program returns the integer value 0

NOTE: Neither ISO C++ nor C99 allows you to leave the type out of a declaration. That is, in contrast to C89 and ARM C++, int is not assumed where a type is missing in a declaration. Consequently, the following relies on an implicit int return type; a strictly conforming ISO C++ or C99 compiler rejects it, though some compilers accept it as an extension:

#include <iostream>
using namespace std;

main()
{
    // Body
}

Where it is accepted, the return type is taken to be int, and if an otherwise error-free main() has no return statement at the end, the compiler automatically inserts an appropriate return (yielding 0) at the end of the program.

To summarize the above, it is never a good idea to use void main() or simply main(), as it doesn't conform to the standards. It may be allowed by some compilers, though.
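As a practical check of the exit-status behavior described above, one can run a small program like the following and then inspect the shell's exit status (a minimal sketch; the value 7 is arbitrary, and echo $? assumes a POSIX shell):

// The value returned from main() becomes the
// process exit status
#include <iostream>

int main()
{
    std::cout << "exiting with status 7\n";

    // after running this program, `echo $?` in a
    // POSIX shell prints 7
    return 7;
}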
Functional Programming in Java with Examples - GeeksforGeeks
29 Nov, 2020

So far, Java has supported the imperative style of programming and the object-oriented style of programming. The next big thing added to Java is support for the functional style of programming, which arrived with the Java 8 release. In this article, we will discuss functional programming in Java 8.

What is functional programming? It is a declarative style of programming rather than an imperative one. The basic objective of this style of programming is to make code more concise, less complex, more predictable, and easier to test compared to the legacy style of coding. Functional programming deals with certain key concepts such as pure functions, immutable state, assignment-less programming, etc.

Functional programming vs purely functional programming: Pure functional programming languages don't allow any mutability, whereas a functional-style language provides higher-order functions but often permits mutability, at the risk of the programmer failing to do the right thing; that puts a burden on us rather than protecting us. So, in general, we can say that if a language provides higher-order functions it is a functional-style language, and if a language goes to the extent of limiting mutability in addition to higher-order functions, it becomes a purely functional language. Java is a functional-style language, while a language like Haskell is a purely functional programming language.

Let's understand a few concepts in functional programming:

Higher-order functions: In functional programming, functions are to be considered first-class citizens. That is, everything the legacy style of coding lets us do with objects:

We can pass objects to a function.
We can create objects within a function.
We can return objects from a function.

becomes possible with functions themselves:

We can pass a function to a function.
We can create a function within a function.
We can return a function from a function.

Pure functions: A function is called a pure function if it always returns the same result for the same argument values and has no side effects, such as modifying an argument (or a global variable) or outputting something.

Lambda expressions: A lambda expression is an anonymous method that keeps mutability to a minimum; it has only a parameter list and a body. The return type is always inferred from the context. Also note that lambda expressions work in tandem with functional interfaces. The syntax of a lambda expression is:

(parameter) -> body

In its simple form, a lambda is represented as a comma-separated list of parameters, the -> symbol, and the body.

How to implement functional programming in Java?

// Java program to demonstrate
// an anonymous method
public class GFG {
    public static void main(String[] args)
    {
        // Defining an anonymous method
        Runnable r = new Runnable() {
            public void run()
            {
                System.out.println(
                    "Running in Runnable thread");
            }
        };

        r.run();
        System.out.println(
            "Running in main thread");
    }
}

Output:

Running in Runnable thread
Running in main thread

If we look at the run() method, we wrapped it with Runnable.
We were initializing this method in this way up to Java 7. The same program can be rewritten in Java 8 as:

// Java 8 program to demonstrate
// a lambda expression
public class GFG {
    public static void main(String[] args)
    {
        Runnable r = () -> System.out.println(
            "Running in Runnable thread");

        r.run();
        System.out.println(
            "Running in main thread");
    }
}

Output:

Running in Runnable thread
Running in main thread

Now the above code uses a lambda expression rather than an anonymous method. Here we have a function that doesn't have a name; that function is a lambda expression. In this case, we can see that a function has been evaluated and assigned to a Runnable interface, and that function has been treated as a first-class citizen.

Refactoring some functions from Java 7 to Java 8: We have worked many times with loops and iterators up to Java 7, as follows:

// Java program to demonstrate an
// external iterator
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        // External iterator, for-each loop
        for (Integer n : numbers) {
            System.out.print(n + " ");
        }
    }
}

Output:

11 22 33 44 55 66 77 88 99 100

The above was an example of the for-each loop, one category of external iterator; below is another form of external iterator.

// Java program to demonstrate an
// external iterator
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        // External iterator
        for (int i = 0; i < numbers.size(); i++) {
            System.out.print(numbers.get(i) + " ");
        }
    }
}

Output:

11 22 33 44 55 66 77 88 99 100

We can transform the above examples of an external iterator into an internal iterator, introduced in Java 8, as follows:

// Java 8 program to demonstrate
// an internal iterator
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        // Internal iterator
        numbers.forEach(number -> System.out.print(
                            number + " "));
    }
}

Output:

11 22 33 44 55 66 77 88 99 100

Here, the functional interface plays a major role: wherever a single-abstract-method interface is expected, we can pass a lambda expression very easily. The above call could be simplified further as follows (printing one element per line):

numbers.forEach(System.out::println);

Imperative vs declarative programming: The functional style of programming is declarative programming. In the imperative style of coding, we define both what to do in a task and how to do it, whereas in the declarative style of coding, we only specify what to do. Let's understand this with an example. Given a list of numbers, let's find the sum of the doubles of the even numbers in the list, using the imperative and then the declarative style of coding.

// Java program to find the sum
// using imperative style of coding
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        int result = 0;
        for (Integer n : numbers) {
            if (n % 2 == 0) {
                result += n * 2;
            }
        }
        System.out.println(result);
    }
}

Output:

640

The first issue with the above code is that we mutate the variable result again and again.
So mutability is one of the biggest issues in an imperative style of coding. The second issue with the imperative style is that we spend our effort telling not only what to do but also how to do the processing. Now let's rewrite the above code in a declarative style.

// Java program to find the sum
// using declarative style of coding
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        System.out.println(
            numbers.stream()
                .filter(number -> number % 2 == 0)
                .mapToInt(e -> e * 2)
                .sum());
    }
}

Output:

640

In this code we are not mutating any variable; instead, we are transforming the data from one function to another. This is another difference between imperative and declarative. Moreover, in the declarative version every function is a pure function, and pure functions don't have side effects.

In the above example we are doubling each number with a factor of 2. When a lambda captures such a variable from its enclosing scope, the result is called a closure. Remember, lambdas are stateless and a closure carries immutable state: under no circumstances may the captured state be mutated. Let's understand this with an example. Here we declare a variable factor and use it inside a function, as below.

// Java program to demonstrate a
// declarative style of coding
import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        int factor = 2;
        System.out.println(
            numbers.stream()
                .filter(number -> number % 2 == 0)
                .mapToInt(e -> e * factor)
                .sum());
    }
}

Output:

640

The above code works well, but now let's try mutating factor after its use and see what happens:

import java.util.Arrays;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Integer> numbers
            = Arrays.asList(11, 22, 33, 44, 55,
                            66, 77, 88, 99, 100);

        int factor = 2;
        System.out.println(
            numbers.stream()
                .filter(number -> number % 2 == 0)
                .mapToInt(e -> e * factor)
                .sum());

        factor = 3;
    }
}

The above code gives a compile-time error saying "Local variable factor defined in an enclosing scope must be final or effectively final". This means that the variable factor is by default being treated as final. In short, we should never try to mutate any variable that is used inside pure functions: doing so violates the rule that a pure function should neither change anything nor depend on anything that changes. Mutating a captured variable (here, factor) makes for a bad closure, because closures are meant to be immutable in nature.
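A common workaround, shown as a minimal sketch under the same setup as above (the variable newFactor is invented here and is not part of the original example): keep each captured variable effectively final and introduce a new binding whenever a different value is needed.

// capture stays effectively final
int factor = 2;
int sum1 = numbers.stream()
               .filter(n -> n % 2 == 0)
               .mapToInt(e -> e * factor)
               .sum();

// need a different factor? bind a new variable
// (newFactor) instead of mutating the old one
int newFactor = 3;
int sum2 = numbers.stream()
               .filter(n -> n % 2 == 0)
               .mapToInt(e -> e * newFactor)
               .sum();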
[ { "code": null, "e": 25499, "s": 25471, "text": "\n29 Nov, 2020" }, { "code": null, "e": 26958, "s": 25499, "text": "So far Java was supporting the imperative style of programming and object-oriented style of programming. The next big thing what java has been added is that Java has started supporting the functional style of programming with its Java 8 release. In this article, we will discuss functional programming in Java 8. What is functional programming? It is a declarative style of programming rather than imperative. The basic objective of this style of programming is to make code more concise, less complex, more predictable, and easier to test compared to the legacy style of coding. Functional programming deals with certain key concepts such as pure function, immutable state, assignment-less programming etc. Functional programming vs Purely Functional programming:Pure functional programming languages don’t allow any mutability in its nature whereas a functional style language provides higher-order functions but often permits mutability at the risk of we failing to do the right things, which put a burden on us rather than protecting us. So, in general, we can say if a language provides higher-order function it is functional style language, and if a language goes to the extent of limiting mutability in addition to higher-order function then it becomes purely functional language. Java is a functional style language and the language like Haskell is a purely functional programming language.Let’s understand a few concepts in functional programming: " }, { "code": null, "e": 27378, "s": 26958, "text": "Higher-order functions: In functional programming, functions are to be considered as first-class citizens. That is, so far in the legacy style of coding, we can do below stuff with objects. We can pass objects to a function.We can create objects within function.We can return objects from a function.We can pass a function to a function.We can create a function within function.We can return a function from a function." }, { "code": null, "e": 27608, "s": 27378, "text": "We can pass objects to a function.We can create objects within function.We can return objects from a function.We can pass a function to a function.We can create a function within function.We can return a function from a function." }, { "code": null, "e": 27643, "s": 27608, "text": "We can pass objects to a function." }, { "code": null, "e": 27682, "s": 27643, "text": "We can create objects within function." }, { "code": null, "e": 27721, "s": 27682, "text": "We can return objects from a function." }, { "code": null, "e": 27759, "s": 27721, "text": "We can pass a function to a function." }, { "code": null, "e": 27801, "s": 27759, "text": "We can create a function within function." }, { "code": null, "e": 27843, "s": 27801, "text": "We can return a function from a function." }, { "code": null, "e": 28056, "s": 27843, "text": "Pure functions: A function is called pure function if it always returns the same result for same argument values and it has no side effects like modifying an argument (or global variable) or outputting something." }, { "code": null, "e": 28383, "s": 28056, "text": "Lambda expressions: A Lambda expression is an anonymous method that has mutability at very minimum and it has only a parameter list and a body. The return type is always inferred based on the context. Also, make a note, Lambda expressions work in parallel with the functional interface. 
The syntax of a lambda expression is: " }, { "code": null, "e": 28405, "s": 28383, "text": "(parameter) -> body\n\n" }, { "code": null, "e": 28524, "s": 28405, "text": "In its simple form, a lambda could be represented as a comma-separated list of parameters, the –> symbol and the body." }, { "code": null, "e": 28574, "s": 28524, "text": "How to Implement Functional Programming in Java? " }, { "code": null, "e": 28579, "s": 28574, "text": "java" }, { "code": "// Java program to demonstrate// anonymous methodimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { // Defining an anonymous method Runnable r = new Runnable() { public void run() { System.out.println( \"Running in Runnable thread\"); } }; r.run(); System.out.println( \"Running in main thread\"); }}", "e": 29055, "s": 28579, "text": null }, { "code": null, "e": 29107, "s": 29055, "text": "Running in Runnable thread\nRunning in main thread\n\n" }, { "code": null, "e": 29274, "s": 29109, "text": "If we look at run() methods, we wrapped it with Runnable. We were initializing this method in this way upto Java 7. The same program can be rewritten in Java 8 as: " }, { "code": null, "e": 29279, "s": 29274, "text": "java" }, { "code": "// Java 8 program to demonstrate// a lambda expressionimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { Runnable r = () -> System.out.println( \"Running in Runnable thread\"); r.run(); System.out.println( \"Running in main thread\"); }}", "e": 29648, "s": 29279, "text": null }, { "code": null, "e": 29700, "s": 29648, "text": "Running in Runnable thread\nRunning in main thread\n\n" }, { "code": null, "e": 30204, "s": 29702, "text": "Now, the above code has been converted into Lambda expressions rather than the anonymous method. Here we have evaluated a function that doesn’t have any name and that function is a lambda expression. So, in this case, we can see that a function has been evaluated and assigned to a runnable interface and here this function has been treated as the first-class citizen. Refactoring some functions from Java 7 to Java 8: We have worked many times with loops and iterator so far up to Java 7 as follows: " }, { "code": null, "e": 30209, "s": 30204, "text": "java" }, { "code": "// Java program to demonstrate an// external iteratorimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); // External iterator, for Each loop for (Integer n : numbers) { System.out.print(n + \" \"); } }}", "e": 30658, "s": 30209, "text": null }, { "code": null, "e": 30691, "s": 30658, "text": "11 22 33 44 55 66 77 88 99 100\n\n" }, { "code": null, "e": 30839, "s": 30693, "text": "Above was an example of forEach loop in Java a category of external iterator, below one is again example and another form of external iterator. 
" }, { "code": null, "e": 30844, "s": 30839, "text": "java" }, { "code": "// Java program to demonstrate an// external iteratorimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); // External iterator for (int i = 0; i < numbers.size(); i++) { System.out.print(numbers.get(i) + \" \"); } }}", "e": 31306, "s": 30844, "text": null }, { "code": null, "e": 31339, "s": 31306, "text": "11 22 33 44 55 66 77 88 99 100\n\n" }, { "code": null, "e": 31463, "s": 31341, "text": "We can transform the above examples of an external iterator with an internal iterator introduced in Java 8, as follows: " }, { "code": null, "e": 31468, "s": 31463, "text": "java" }, { "code": "// Java 8 program to demonstrate// an internal iterator import java.util.Arrays;import java.util.List; public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); // Internal iterator numbers.forEach(number -> System.out.print( number + \" \")); }}", "e": 31941, "s": 31468, "text": null }, { "code": null, "e": 31974, "s": 31941, "text": "11 22 33 44 55 66 77 88 99 100\n\n" }, { "code": null, "e": 32190, "s": 31976, "text": "Here, the functional interface plays a major role. Wherever a single abstract method interface is expected, we can pass lambda expression very easily. Above code could be more simplified and improved as follows: " }, { "code": null, "e": 32230, "s": 32190, "text": "numbers.forEach(System.out::println);\n\n" }, { "code": null, "e": 32666, "s": 32230, "text": "Imperative Vs Declarative Programming: The functional style of programming is declarative programming. In the imperative style of coding, we define what to do a task and how to do it. Whereas, in the declarative style of coding, we only specify what to do. Let’s understand this with an example. Given a list of number let’s find out the sum of double of even numbers from the list using an imperative and declarative style of coding. " }, { "code": null, "e": 32671, "s": 32666, "text": "java" }, { "code": "// Java program to find the sum// using imperative style of codingimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); int result = 0; for (Integer n : numbers) { if (n % 2 == 0) { result += n * 2; } } System.out.println(result); }}", "e": 33184, "s": 32671, "text": null }, { "code": null, "e": 33190, "s": 33184, "text": "640\n\n" }, { "code": null, "e": 33555, "s": 33192, "text": "The first issue with the above code is that we are mutating the variable result again and again. So mutability is one of the biggest issues in an imperative style of coding. The second issue with the imperative style is that we spend our effort telling not only what to do but also how to do the processing. Now let’s re-write above code in a declarative style. 
" }, { "code": null, "e": 33560, "s": 33555, "text": "java" }, { "code": "// Java program to find the sum// using declarative style of codingimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); System.out.println( numbers.stream() .filter(number -> number % 2 == 0) .mapToInt(e -> e * 2) .sum()); }}", "e": 34064, "s": 33560, "text": null }, { "code": null, "e": 34070, "s": 34064, "text": "640\n\n" }, { "code": null, "e": 34736, "s": 34072, "text": "From the above code, we are not mutating any variable. Instead, we are transforming the data from one function to another. This is another difference between Imperative and Declarative. Not only this but also in the above code of declarative style, every function is a pure function and pure functions don’t have side effects.In the above example, we are doubling the number with a factor 2, that is called Closure. Remember, lambdas are stateless and closure has immutable state. It means in any circumstances, the closure could not be mutable. Let’s understand it with an example. Here we will declare a variable factor and will use inside a function as below. " }, { "code": null, "e": 34741, "s": 34736, "text": "java" }, { "code": "// Java program to demonstrate an// declarative style of codingimport java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); int factor = 2; System.out.println( numbers.stream() .filter(number -> number % 2 == 0) .mapToInt(e -> e * factor) .sum()); }}", "e": 35268, "s": 34741, "text": null }, { "code": null, "e": 35274, "s": 35268, "text": "640\n\n" }, { "code": null, "e": 35366, "s": 35276, "text": "Above code works well, but now let’s try mutating it after its use and see what happens: " }, { "code": null, "e": 35371, "s": 35366, "text": "java" }, { "code": "import java.util.Arrays;import java.util.List;public class GFG { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(11, 22, 33, 44, 55, 66, 77, 88, 99, 100); int factor = 2; System.out.println( numbers.stream() .filter(number -> number % 2 == 0) .mapToInt(e -> e * factor) .sum()); factor = 3; }}", "e": 35856, "s": 35371, "text": null }, { "code": null, "e": 35994, "s": 35856, "text": "The above code gives a compile-time error saying Local variable factor defined in an enclosing scope must be final or effectively final. " }, { "code": null, "e": 36417, "s": 35994, "text": "This means that here, the variable factor is by default being considered as final. In short, we should never try mutating any variable which is used inside pure functions. Doing so will violate pure functions rules which says pure function should neither change anything nor depend on anything that changes. Mutating any closure(here factor) is considered as a bad closure because closures are always immutable in nature. " }, { "code": null, "e": 36431, "s": 36417, "text": "ganeshsagarms" }, { "code": null, "e": 36459, "s": 36431, "text": "Java-Functional Programming" }, { "code": null, "e": 36464, "s": 36459, "text": "Java" }, { "code": null, "e": 36480, "s": 36464, "text": "Write From Home" }, { "code": null, "e": 36485, "s": 36480, "text": "Java" }, { "code": null, "e": 36583, "s": 36485, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
} ]
Extract data.table Column as Vector Using Index Position in R - GeeksforGeeks
17 May, 2021

The column at a specified index can be extracted using the list subsetting operator, [[. The double-bracket operator is faster than the single bracket and can be used to extract the element or factor level at the specified index. If an index greater than the number of columns is specified, an index-out-of-bounds error is returned. The single-bracket operator, [, cannot be used here because it returns a subset of the data table, that is, a data.table element as the output, not an atomic vector. Constant time is required to perform this operation.

Example:

library("data.table")

# declaring data table
data_frame <- data.table(col1 = c(2,4,6),
                         col2 = c(4,6,8),
                         col3 = c(8,10,12),
                         col4 = c(20,16,14))

print("Original DataTable")
print(data_frame)

# extracting column 2
print("Column 2 as a vector")
vec <- data_frame[[2]]
print(vec)

Output

[1] "Original DataTable"
   col1 col2 col3 col4
1:    2    4    8   20
2:    4    6   10   16
3:    6    8   12   14
[1] "Column 2 as a vector"
[1] 4 6 8

Every column of the data.table can also be extracted into a separate vector by looping over the entire data.table. The ncol() method can be used to return the total number of columns in the data.table. The total time required to carry out this operation is O(n), where n is the number of columns.

Example:

# getting required libraries
library("data.table")

# declaring data table
data_table <- data.table(col1 = c(2,4,6),
                         col2 = FALSE,
                         col3 = LETTERS[1:3])

print("Original DataTable")
print(data_table)

# getting number of columns
cols <- ncol(data_table)

# looping through columns
for (i in 1:cols){
  # getting ith col
  cat(i, "th col \n")
  print(data_table[[i]])
}

Output

[1] "Original DataTable"
   col1  col2 col3
1:    2 FALSE    A
2:    4 FALSE    B
3:    6 FALSE    C
1 th col
[1] 2 4 6
2 th col
[1] FALSE FALSE FALSE
3 th col
[1] "A" "B" "C"
Displaying different images with actual size in a Matplotlib subplot
To display different images with actual size in a Matplotlib subplot, we can take the following steps −

Set the figure size and adjust the padding between and around the subplots.

Read two images using the imread() method (im1 and im2).

Create a figure and a set of subplots.

Turn off the axes for both subplots.

Use the imshow() method to display the im1 and im2 data.

To display the figure, use the show() method.

import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

im1 = plt.imread("bird.jpg")
im2 = plt.imread("opencv-logo.png")

fig, ax = plt.subplots(nrows=1, ncols=2)

ax[1].axis('off')
ax[1].imshow(im1, cmap='gray')
ax[0].axis('off')
ax[0].imshow(im2, cmap='gray')
plt.show()
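Note that the snippet above scales both images to fit the subplot grid rather than rendering them pixel-for-pixel. Below is a minimal sketch of true actual-size display for a single image, assuming a local image file and a chosen screen DPI (both assumptions of ours, not part of the original recipe); it derives the figure size in inches from the image dimensions:

import matplotlib.pyplot as plt

# Size the figure so one image pixel maps to one figure pixel
# (figsize is given in inches, i.e. pixels / dpi).
im = plt.imread("bird.jpg")   # hypothetical local file
dpi = 100                     # assumed display DPI
height, width = im.shape[0], im.shape[1]

fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi)
ax = fig.add_axes([0, 0, 1, 1])   # axes covering the full figure, no padding
ax.axis('off')
ax.imshow(im, cmap='gray')
plt.show()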
Chi-squared tests to compare two machine learning models and determine whether they are random | by shashank kumar | Towards Data Science
A challenging classification problem has grappled you to your screen. You have been pounding the keyboard for hours, analyzing data, debugging code, and executing algorithms. Your endeavors blossom into two machine learning models. But there's a last hassle before you can shut the display down and gulp a cold one. You must evaluate the models. Often, evaluation goes beyond merely computing the test accuracy. You'd want to probe whether the performances of multiple models differ. If there are variations, are they statistically significant, or are they due to randomness? So let's hunt these answers together. If you're familiar with probability distributions (normal, binomial, and chi-squared), hypothesis testing, false positives, and false negatives, we're good to go.

Imagine your classification problem was to predict whether a person was suffering from covid. The first model you constructed, M1, gave the following result.

M1 (observed counts over 100 test instances):
                Actual +   Actual -
Predicted +     A = 58     B = 11
Predicted -     C = 7      D = 24

Probabilistically speaking, there's a 65% chance a data instance is positive and a 35% chance otherwise. One important question would be, is the model more effective than a random guesser? Random guessing entails that the predicted and actual classes are independent of each other. Essentially, such a model won't learn anything valuable from the data and will merely make guesses. From the laws of independent probability, it then follows:

P(Predicted = p and Actual = a) = P(Predicted = p) * P(Actual = a)

In chi-square tests, we pit the observations against the expected values. M1 labels 69 instances as positive. Now, if it is a random guesser, we can expect 35% of those instances to be false positives. Why? Because there's a 35% chance that a test data instance is negative. This result can also be derived from the above equation with a slightly modified question. How many of the 100 instances would you expect to be false positives? In that case, we find P(Predicted = + and Actual = -), with P(Predicted = +) = 69/100 = 0.69:

P(Predicted = + and Actual = -) = 0.69 * 0.35 = 0.2415

So 24.15% of the total data instances are likely to be classified as false positives if M1 is a random guesser. Hence, the expected number of false positives = 24.15% of 100 = 24.15. Similarly, we can find all the expected predictions of a random guesser.

Random Guesser (expected counts):
                Actual +   Actual -
Predicted +      44.85      24.15
Predicted -      20.15      10.85

Running a chi-squared independence test on the above two tables will verify whether M1 is a random guesser. It does so by computing a chi-squared statistic. Okay, what's that?

chi-squared = sum over all cells of (Observed - Expected)^2 / Expected

I'll prove the above quantity is Chi-squared in the next section. The rest of the test is like any other hypothesis test.

H0 (Null hypothesis): M1 is a random guesser
H1 (Alternate hypothesis): M1 is not a random guesser

The M1 table lists the observed values, while the Random Guesser table lists the expected values. In other words, under our null hypothesis, we expect M1 to yield the results given in the Random Guesser table.

To find the value of a Chi-squared random variable, we also require the DOF (degrees of freedom). It's the number of values free to vary in a data table. While calculating the probabilities of the actual and predicted classes, we fixed four numbers — the number of actual positives and negatives, and the number of predicted positives and negatives. If you now observe the above table, you'll realize that if I give you any one value among the set A, B, C, D, you can find the remaining values. Ergo, only one value in the table is free to vary, so DOF = 1.

From the chi-square statistic equation, we can compute the test statistic; the code in the next section reports it as roughly 32.88.
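The expected-count reasoning above can be reproduced in a few lines. The following sketch is our illustration, not code from the article; it also applies Yates' continuity correction, which scipy.stats.chi2_contingency uses by default on 2x2 tables, so the statistic matches the article's output:

import numpy as np

# Rows are the actual classes (+, -), columns the predicted classes,
# matching the table the article later feeds to chi2_contingency.
observed = np.array([[58, 7],
                     [11, 24]])

n = observed.sum()                      # 100 test instances
row_p = observed.sum(axis=1) / n        # P(actual)    = [0.65, 0.35]
col_p = observed.sum(axis=0) / n        # P(predicted) = [0.69, 0.31]

# Under independence (a random guesser), E[i, j] = n * P(actual_i) * P(predicted_j)
expected = n * np.outer(row_p, col_p)   # [[44.85, 20.15], [24.15, 10.85]]

# Pearson statistic with Yates' continuity correction for a 2x2 table
chi_sq = (((np.abs(observed - expected) - 0.5) ** 2) / expected).sum()
print(expected)
print(chi_sq)                           # ~32.88, matching the article's output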
Before moving ahead, I would like to unravel the mathematics of the Chi-squared test. Feel free to skip if you aren't a math buff.

If you're still in this section, I believe you're itching to know why the above quantity is a Chi-squared variable. Let's reformulate the problem. We're looking for evidence of whether M1 is a random guesser. To do so, we hypothesize that it is a random guesser and tabulate the expected values. The values in the M1 table are what we observe, and we test the two sets of values using the Chi-squared test.

Fine, now observe the random guesser table. Think of each side, Predicted(+) and Predicted(-), like coins. For example, Predicted(+) is a coin with heads denoting positive and tails denoting negative. Ergo, if you toss the Predicted(+) coin and it lands heads, the data instance is classed as a true positive. Else, if it lands tails, the data instance is classified as a false positive. Does this remind you of a particular probability distribution?

Take T as the random variable describing the number of true positive instances (heads, by the coin analogy). Doesn't T follow a binomial distribution with p = 0.65 (probability of positive) and n = 69 (total predicted-positive instances)? Sure it does. I assume you know that a binomial distribution can be approximated by a normal distribution, provided both np and n(1-p) exceed 10. Accordingly, we get a standard normal variable z ~ N(0,1):

Z = (T - np) / sqrt(np(1 - p))

We can manipulate the above term to get

Z^2 = (T - np)^2 / (np) + ((n - T) - n(1 - p))^2 / (n(1 - p))

That's it, we have dug out the hidden Chi-squared terms. Remember that Z^2 follows a Chi-squared distribution. Hence, the RHS is chi-square distributed. In the first term, T is the number of true positives, which we observe to be 58, and np = 69 times 0.65 = 44.85. Similarly, in the second term, n - T is the number of false positives, which equals 11, and n(1 - p) equals 24.15. Replace the variables with these values.

You may now consider Predicted(-) as another coin and repeat the grunt work. Otherwise, believe me when I say that your troubles will yield the chi-squared statistic from the previous section, i.e., the sum of (Observed - Expected)^2 / Expected over all four cells.

There you go. The Z^2 terms compose the Chi-squared statistic. You may run the code below in Python to carry out the remaining calculations. I'll discuss the results.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

data = [[58, 7], [11, 24]]  # Model M1 table

# Chi-square statistic, p-value, DOF, expected table
stat, p, dof, expected = chi2_contingency(data)

print('Chi-square statistic=', stat)
print('Pvalue=', p)
alpha = 0.05
if p < alpha:
    print('Not a random guesser')
else:
    print('Model is a random guesser')

Results:

Chi-square statistic= 32.884319981094166
Pvalue= 9.780899116984747e-09

From the results, you can observe that the chi-square statistic is exceedingly high. Accordingly, the p-value is far below alpha (0.05), meaning the observed table is extremely unlikely if the null hypothesis holds. Thus, we can conclude that M1 is not a random guesser. Note, though, that the final decision entirely depends on the alpha value. But as we generally take alpha as 0.05, we can be confident in our finding.

All right, we're done evaluating the first model you made. Let's run some tests on the second model. M2 classifies 87 of the 100 test instances correctly, against 82 for M1 (both totals can be verified from the contingency table below).

You may carry out a separate "is it a random guesser" test for this model too. Although, since it's evident from the above values that M2 is more accurate than M1, M2 clearly cannot be a random guesser. But does it genuinely outperform M1, or do the slight improvements stem from random chance? To find that, we'll have to modify our approach.

                 M2 correct   M2 incorrect
M1 correct        A = 77       B = 5
M1 incorrect      C = 10       D = 8

The above table presents the distribution of correct and incorrect predictions by M1 and M2. (I have taken these values arbitrarily; for a real problem, you can find them easily by executing a few lines of code.)
Nothing is compelling about M1 and M2 making identical predictions (correct/incorrect). We are curious about their differences, so we have to heed the data fields where one model classifies correctly while the other does not (B and C). If M1 and M2 were performing alike, the values B and C should not differ much. In that scenario, both models would make nearly equal proportions of correct and incorrect predictions and consequently indicate homogeneity.

Now, by imagining B and C as two faces of a coin, we can carry out a Chi-squared test to determine whether the values are different. We already established that the DOF for the above table is 1. The Chi-squared statistic (with a continuity correction) is:

chi-squared = (|B - C| - 1)^2 / (B + C)

Again, do not itch to figure out why the above is a chi-square variable. It can be proved by repeating what we did in the last section. You may try it if you desire. This particular method of differentiating two models is better known as McNemar's test. Let's carry on with the test now and establish the hypotheses.

H0: M1 and M2 are the same
H1: M1 and M2 are different

import numpy as np
import pandas as pd
from statsmodels.stats.contingency_tables import mcnemar

data = [[77, 5], [10, 8]]
res = mcnemar(data, exact=False)
alpha = 0.05

print('Chi-square statistic:', res.statistic)
print('Pvalue:', res.pvalue)
if res.pvalue < alpha:
    print('Models are different')
else:
    print('Models are same')

Chi-square statistic: 1.0666666666666667
Pvalue: 0.30169958247834494

Here, the p-value exceeds alpha, so we cannot reject H0. Hence, although M2 seemingly performs better than M1, the variations in performance are most likely due to random chance. We ought to rely on such tests to avoid arriving at any conclusion by solely analyzing classification accuracies.

In this article, I just discussed two types of evaluations. There are statistical procedures for evaluating umpteen models over multiple problems. ROC curves are extremely nifty when judging numerous machine learning algorithms. I have linked some resources for you to have a more comprehensive review of the above explanations.
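As a quick sanity check, here is a small sketch of ours (not from the article) that reproduces the statistic by hand with the same continuity correction statsmodels applies when exact=False, and derives the p-value from the chi-squared tail with 1 DOF:

from scipy.stats import chi2

b, c = 5, 10  # the two disagreement cells from the table above

# McNemar statistic with continuity correction: (|B - C| - 1)^2 / (B + C)
stat = (abs(b - c) - 1) ** 2 / (b + c)
print(stat)                   # 1.0666..., matching res.statistic

# p-value: upper tail of a chi-squared distribution with 1 degree of freedom
print(chi2.sf(stat, df=1))    # ~0.3017, matching res.pvalue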
Space and Time Complexity in Computer Algorithms | by Areeba Merriam | Towards Data Science
In this article, I will discuss computational complexity, a theory developed by Juris Hartmanis and Richard E. Stearns to analyze the difficulty of algorithms.

We all know human nature aspires to seek efficient ways to assemble daily tasks. The predominant thought process behind innovation and technology is to make life easier for people by providing ways to solve problems they may encounter. The same thing happens within the world of computer science and digital products. We write algorithms that are efficient and take up less memory so that they perform better.

Time complexity is the time taken by the algorithm to execute each set of instructions. It is always better to select the most efficient algorithm when a simple problem can be solved with different methods.

Space complexity is usually referred to as the amount of memory consumed by the algorithm. It is composed of two different spaces: auxiliary space and input space.

The factor of time is usually more important than that of space.

Note — In competitive programming, you are typically allowed to use 256MB for a particular problem. If you create an array of size greater than 10^8, you will get a memory error. Also, you can't declare an array of size 10^6 inside a function, because the maximum space allotted to a function's local variables (the stack) is about 4MB. We must define it as a global variable to solve this problem.

Though they seem trivial, these factors are crucial in determining how a computer program is developed and designed, and how it adds value to the life of its user. Remember, time is money.

A good algorithm is one that takes less time in execution and saves space during the process. Ideally, we have to find a middle ground between space and time, but we can settle for the average. Let's look at a simple algorithm for finding the sum of two numbers.

Step #01: Start.
Step #02: Create two variables (a & b).
Step #03: Store integer values in 'a' and 'b'. -> Input
Step #04: Create a variable named 'Sum'.
Step #05: Store the sum of 'a' and 'b' in the variable named 'Sum'. -> Output
Step #06: End.

It is as simple as it looks.
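A minimal Python sketch of those six steps (our illustration, with arbitrary example inputs):

a = 5          # Steps 2-3: create variables and store integer values (input)
b = 7
Sum = a + b    # Steps 4-5: store the sum of 'a' and 'b' in 'Sum' (output)
print(Sum)     # 12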
Note — If you are a gamer, you would know that average game sizes (on the hard disk) are increasing day by day, along with improvements in loading times. Similarly, for websites, loading times are dropping significantly while the storage space of their servers keeps growing. So, as we discussed earlier, between space and time, time plays the more crucial role in the development phases of any software.

According to research conducted by Neil Patel,

"47 percent of consumers expect a website to load in no more than two seconds."

You can find many sophisticated algorithms over the internet. It's possibly the reason why almost all big companies invest a lot in researching and writing these sophisticated sets of instructions.

Following are the factors that play a significant role in the long-term usage of an algorithm:

Efficiency — We've already talked about how much efficiency matters in creating a good algorithm. It is efficiency that reduces computational time and generates quick output.

Finiteness — The algorithm must terminate after completing a specified number of steps. Otherwise, it'll use more memory space, and it's considered a bad practice. Stack overflow and out-of-bounds conditions may occur if it goes into infinite loops or recursion.

Correctness — A good algorithm should produce a correct result irrespective of the size of the input provided.

Time complexity is profoundly related to the input size. As the size of the input increases, the run time (the time taken by the algorithm to run) also increases.

Example: Consider a sorting algorithm.

Let's suppose we have a set of numbers named A = {10, 5, 21, 6, 9}.

There are many algorithms available to sort the given numbers. But not all of them are efficient. To find out the most effective one, we have to run a computational analysis on every algorithm.

Leonardo Galler and Matteo Kimura did one of the finest studies on this, "Sorting Algorithms," published on LAMFO. Here are some of the important observations from their benchmark:

This test covered five of the most used sorting algorithms: Quicksort, Insertion sort, Bubble sort, Shell sort, and Heapsort.

The programming language used to perform the task was Python, and the size of the input ranged from 2500 to 50000.

The results were: "Shell sort and Heap Sort algorithms performed well despite the length of the lists, on the other side we found that Insertion sort and Bubble sort algorithms were far the worse, increasing computing time significantly. See the results in the chart above."

Before running an analysis on any algorithm, we must check whether it's stable or not. Understanding our data is the most important part of running a successful analysis.

If you are not from a Computer Science background, you may find this concept a little more complex than usual. No worries! I've got you covered.

So, what is Asymptotic notation?

In simple words, it tells us how good an algorithm is when compared with another algorithm.

We cannot directly compare two algorithms side by side. The comparison heavily depends on the tools and the hardware we use, such as the operating system, the CPU model, the processor generation, etc. Even if we calculate time and space for two algorithms running on the same system, their time and space complexity may be affected by subtle changes in the system environment.

Therefore, we use Asymptotic analysis to compare space and time complexity. It analyzes two algorithms based on changes in their performance with respect to an increase or decrease in the input size.

Primarily there are three types of Asymptotic notations:

Big-Oh (O) notation.
Big Omega (Ω) notation.
Big Theta (Θ) notation — widely used.

The big-O notation was introduced in 1894 by Paul Bachmann, who brought it up almost in passing in his discussion on approximating a function.

From the definition: O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}

Here c*g(n) gives the upper bound. If a function is O(n), then it's O(n^2) and O(n^3) as well.

It is the most commonly used notation for Asymptotic analysis. It defines the upper bound of a function, i.e., the maximum time taken by an algorithm, or the worst-case time complexity of an algorithm. In other words, it gives the maximum output value (big-O) for a respective input.
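To make the definition concrete, here is a tiny numerical illustration of ours: f(n) = 3n + 5 is O(n), witnessed by the constants c = 4 and n0 = 5 from the definition above:

def f(n):
    return 3 * n + 5

c, n0 = 4, 5
# The definition's inequality 0 <= f(n) <= c*g(n) holds for every n >= n0, with g(n) = n
print(all(0 <= f(n) <= c * n for n in range(n0, 10000)))   # True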
From the definition: The function f(n) is Ω(g(n)) if there exist positive numbers c and N such that f(n) ≥ c*g(n) for all n ≥ N.

It defines the lower bound of a function, i.e., the minimum time taken by an algorithm. It gives the minimum output value (big-Ω) for a respective input.

From the definition: f(n) is Θ(g(n)) if there exist positive numbers c1, c2, and N such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ N.

It defines both the lower bound and the upper bound of a function, i.e., it bounds the function from above and below for a given input.

Note — Big-O is defined in terms of ≤ and Ω in terms of ≥; = is included in both inequalities. This suggests a way of restricting the sets of possible lower and upper bounds. That restriction is accomplished by the Big Theta (Θ) notation.

Best Case: Defined as the condition that allows an algorithm to complete the execution of its statements in the minimum amount of time. In this case, the execution time acts as a lower bound on the algorithm's time complexity.

Average Case: In the average case, we take the sum of running times over every possible input combination and then take the average. In this case, the execution time acts as both a lower bound and an upper bound on the algorithm's time complexity.

Worst Case: Defined as the condition that allows an algorithm to complete the execution of its statements in the maximum amount of time. In this case, the execution time acts as an upper bound on the algorithm's time complexity.

When I started programming, the concept of space and time complexity was always a stumbling block for me. So today, I thought to bring about a simplified discussion on how these two factors significantly affect an algorithm.

Thank you for reading this story and see you again. Feel free to leave a comment if you have any thoughts, feedback, or suggestions about it!

If you like my work and wanna support me then please sign up to become a medium member using this link or else, you can buy me a coffee ☕️.

References:
Space complexity of an algorithm — Studytonight
Why is Time Complexity Essential and What is Time Complexity? — Greatlearning
Time and Space Complexity — HackerEarth
Complete guide to understanding time and space complexity of algorithms — Learn Code Stream
How to Increase Page Speed — Neil Patel
Sorting Algorithms — LAMFO
Unsupervised Learning Techniques using Python — K Means ++ and Silhouette Score for Clustering | by Angel Das | Towards Data Science
For those of you who are wondering what an unsupervised learning technique is, imagine you run a sports shop in Clayton (Go Melbourne Victory FC). You plan to introduce a promotional offer for your high-valued customers as a token of appreciation. You have hundreds of customers visiting you every day. Who should you send this offer to? Tough choice! Hmm, maybe the high-valued ones? But what about those who purchased frequently but didn't spend much? This is a big dilemma. Luckily, your loyalty card program captures your customers' information, and your Oracle database includes all transaction details. You can use this data and form groups of customers with similar attributes without having a pre-defined notion of who belongs where. In short, a learning technique that doesn't have a pre-defined outcome variable (in this case, your customer segments) but forms one after studying the customer attributes is defined as an unsupervised learning technique.

Now theoretically, the more information you have about your customers, the better you can group them. Isn't that true? However, in real life, it's a little more complicated than you think. In practice, too much information (too many features) leads to a poor grouping of your customers. But why?

Features can be highly correlated — the higher the age, the lower the spend.

Too many features can introduce a lot of noise, making it difficult for algorithms to perform well on the data — Sam, your neighborhood customer, dropped by almost every day to say "Hi" and purchase a protein bar worth a dollar. Is he a highly valued customer?

The number of data points you need to train any unsupervised model increases exponentially with dimensionality (the number of features). More features mean you require more data to train your model.

To overcome such issues, different dimensionality reduction techniques like Principal Component Analysis, Factor Analysis, etc. can be used. We introduced the concept of the Curse of Dimensionality to pave a pathway for the assumptions surrounding unsupervised learning techniques. Some of the common algorithms of the unsupervised learning family include:

K Means Clustering
Hierarchical Clustering
K Mode Clustering
DBScan Clustering — Density-Based Spatial Clustering
Principal Component Analysis
Factor Analysis

It is common for people to think of reality as a series of black and white events. This is a challenging issue, as those suffering from this way of thinking may never feel that reality is good enough. Today, we try to address the problem of simplicity and depression while determining the relationship between higher levels of black and white thinking and higher levels of self-reported depression in psychiatric patients hospitalized for depression.

The data used for this analysis is from the Ginzberg data frame, which is based on psychiatric patients hospitalized for depression. The data is taken from the book "Applied Regression Analysis and Generalized Linear Models", Second Edition, by Fox, J. (2008). The dataset includes three variables — simplicity (black and white thinking), fatalism, and depression — and their adjusted scores.

To break it down, K signifies the number of groups, and Means signifies average. Essentially, we have K groups based on an average distance calculation. Not clear, I guess! For any K Means clustering algorithm to work, we need the following parameters.
K — How many groups we want. Note — the algorithm doesn't have the predictive capability to decide or come up with a value for K. It's a user's call, guided by techniques like the Elbow Plot and the Silhouette Score.

Data points — In this case the Simplicity, Fatalism, and Depression scores, i.e., your input data.

K Centroids — Think of them like anchor/center values for each of the K groups, mostly selected randomly by the algorithm. The selection of centroids impacts the way the final clusters/groups are formed. Also, if you are familiar with the time complexity of algorithms represented using O(n), it has been observed that a suboptimal allocation of centroids can make the running time blow up (superpolynomial in the worst case), i.e., a longer time for the algorithm to run and converge. Hence a smarter way was proposed by David Arthur and Sergei Vassilvitskii (also known as K Means ++) that does the following (a short sketch of these steps follows this list):

1. Choose one center C1 randomly from the input data points (say X).
2. For each data point x ∈ X not chosen yet, compute D(x), i.e., the distance between x and the nearest centroid that has already been selected.
3. For all x ∈ X, calculate D(x)^2 and sum them up; select a new center Ci, choosing x ∈ X with probability D(x)^2 / Sum over all x ∈ X of D(x)^2.
4. Repeat steps 2 and 3 until k centers have been chosen.

Distance Measure — Mostly Euclidean distance as a measure of similarity between multiple patients. E.g., when we talk about soccer, or football as it is popularly known, we often use total goals scored to group quality players. Say Ronaldo, Messi, Werner, and Puki scored 35, 32, 10, and 8 goals respectively. How will you measure similar players in terms of quality? Probably Ronaldo and Messi (35 and 32 goals) and Werner and Puki (10 and 8 goals). Notice we look into the difference in goals scored. If I add, say, "Assists" as another metric, we need to use a different measure. E.g., assists are 10, 8, 2, and 1 respectively. So the new distance between Ronaldo and Messi becomes square root((35 - 32)^2 + (10 - 8)^2). Note — other distance metrics like Manhattan, City block, Jacobi's distance, etc. are also adopted by a few of the other algorithms.
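Here is the promised sketch of the K Means ++ seeding steps (our illustration; the function and variable names are ours):

import numpy as np

def kmeans_pp_init(X, k, rng=np.random.default_rng(7)):
    # Step 1: pick the first center uniformly at random from the data points X
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Step 2: D(x) = distance from each point to the nearest already-chosen center
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        # Step 3: sample the next center with probability D(x)^2 / sum of D(x)^2
        probs = d ** 2 / np.sum(d ** 2)
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)   # Step 4 happens implicitly: loop until k centers exist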
Note: All images are reproduced using LaTeX by the author.

Centroids

Each centroid is the mean of the points assigned to it, i.e. Ci = (1 / ||Si||) * Σ X, summed over all points X ∈ Si, where

1. Ci - ith centroid, where i ∈ K (or the K groups of the cluster)
2. Si - Group of data points that belong to cluster K with Ci as its centroid; j represents the attributes of a point Xi
3. ||Si|| - Number of data points in Si

Euclidean Distance

Dx = sqrt( Σj (Xij − Ckj)² ), where

1. Dx is the distance of a point Xi from its centroid Ck (the sum runs over the attributes j)

Convergence Criterion

K Means is an iterative process. Once all data points are assigned to their centroid (based on the shortest Euclidean distance), the centroids are recalculated, and this process continues till the centroids stop re-shifting, i.e. the new and previous centroid values no longer change significantly.

Intra- and Inter-Cluster Distance

Intra-cluster distance is the total distance between points from the same cluster, and inter-cluster distance is the distance between two points in different clusters.

Keep the following assumptions in mind before running K Means:

K Means algorithm performs poorly if data contains outliers

K Means algorithm can't handle missing values

Clustering variables shouldn't be highly correlated to each other

All clustering variables should be standardized, as the magnitude of variables impacts distance calculation

Now let's set things up in Python:

# Numerical libraries
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

# to handle data in form of rows and columns
import pandas as pd

# importing plotting libraries
from matplotlib import pyplot as plt
%matplotlib inline

# importing seaborn for statistical plots
import seaborn as sns
from sklearn import metrics

# reading the CSV file into a pandas dataframe
mydata = pd.read_csv("Depression.csv")
mydata.drop('id', axis=1, inplace=True)

# Heat map to identify highly correlated variables
plt.figure(figsize=(10,8))
sns.heatmap(mydata.corr(), annot=True, linewidths=.5, center=0, cbar=False, cmap="YlGnBu")
plt.show()

Since the raw scores are highly correlated with their adjusted counterparts, we keep only the adjusted ones and check the remaining variables for outliers:

mydata.drop(columns=['simplicity', 'fatalism', 'depression'], inplace=True)

# Checking outliers
plt.figure(figsize=(15,10))
pos = 1
for i in mydata.columns:
    plt.subplot(3, 3, pos)
    sns.boxplot(mydata[i])
    pos += 1

col_names = list(mydata.columns)
display(col_names)
for i in col_names:
    q1, q2, q3 = mydata[i].quantile([0.25, 0.5, 0.75])
    IQR = q3 - q1
    lower_cap = q1 - 1.5*IQR
    upper_cap = q3 + 1.5*IQR
    # cap anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    mydata[i] = mydata[i].apply(lambda x: upper_cap if x > upper_cap else (lower_cap if x < lower_cap else x))

# Scale the data
from scipy.stats import zscore
mydata_z = mydata.apply(zscore)
mydata_z.head()
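Before handing the scaled data to sklearn, it can help to see what the iterative process and the convergence criterion described above actually look like. A bare-bones sketch (illustrative only; sklearn's implementation additionally handles edge cases such as clusters that end up empty):

import numpy as np

def lloyd_iterations(X, centers, tol=1e-4, max_iter=100):
    """Plain K Means iterations: assign points, recompute centroids, stop when they settle."""
    for _ in range(max_iter):
        # Assign each point to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of the points assigned to it
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
        # Convergence criterion: the centroids barely moved between iterations
        shift = np.linalg.norm(new_centers - centers)
        centers = new_centers
        if shift < tol:
            break
    return centers, labels

For example, centers, labels = lloyd_iterations(mydata_z.to_numpy(), kmeans_pp_init(mydata_z.to_numpy(), 3)) would cluster the scaled scores into three illustrative groups.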
# List to store clusters and intra-cluster distances
clusters = []
inertia_vals = []

# Since creating one cluster is similar to observing the data as a whole,
# multiple values of K are tried to come up with the optimum cluster value.
# Note: the cluster model and its intra-cluster distance are appended for plotting the elbow curve
for k in range(1, 10, 1):
    # train clustering with the specified K
    model = KMeans(n_clusters=k, random_state=7, n_jobs=10)
    model.fit(mydata_z)

    # append model to cluster list
    clusters.append(model)
    inertia_vals.append(model.inertia_)

n_clusters — The number of clusters to form; also equivalent to the number of centroids to be generated

init — Method specifying the initial centroid initialization (default k-means++)

n_init — Number of times the k-means algorithm will be run with different centroid seeds

max_iter — Maximum number of iterations of the k-means algorithm for a single run. Stops if a convergence criterion is met, defined using the tol (tolerance) option

random_state — Random number generation assisting centroid initialization. Important to ensure a reproducible result

Inertia is defined as the sum of squared distances of data points to their closest cluster center (centroid). The lower the distance, the better the compactness of the clusters. However, inertia decreases as we keep increasing the value of K. An algorithm with two clusters will always have a higher inertia score than one with four clusters, and so on. This is where we combine the elbow plot and the silhouette score to decide the optimum value of K.

The idea here is to plot the total inertia score for different values of K and identify the K beyond which the inertia doesn't change much. The formula for inertia is given below.

I = Σ (for i = 1 to n) D(Xi, Ck), where

1. I = Inertia = Sum of the squared distances of data points to their closest cluster center
2. n = Total number of records in the dataset
3. D() = Squared distance between a record and its assigned centroid
4. Xi = Record i in the data (i.e. in this case the record of a given patient)
5. Ck = Cluster to which record Xi is assigned

# plot the inertia vs K values
plt.plot(range(1, 10, 1), inertia_vals, marker='*')
plt.show()

Studying the graph above reveals 4, 5, or 6 as the optimum value of K. Notice that from 6 to 7 the curve tends to flatten out, whereas the slope doesn't observe a significant change post 4 clusters.

The Silhouette score is used to measure the degree of separation between clusters. For a point i it is defined as s(i) = (bi − ai) / max(ai, bi), where bi represents the shortest mean distance between the point and all points in any other cluster of which i is not a part, whereas ai is the mean distance between i and all other data points from the same cluster. Logically, if bi > ai, then a point is well separated from its neighboring cluster while being closer to all points from the cluster it belongs to.
from sklearn.metrics import silhouette_score

for i in range(1, 9, 1):
    print("---------------------------------------")
    print(clusters[i])
    print("Silhouette score:", silhouette_score(mydata_z, clusters[i].predict(mydata_z)))

The Output

KMeans(n_clusters=2, n_jobs=10, random_state=7)
Silhouette score: 0.40099183297222984
---------------------------------------
KMeans(n_clusters=3, n_jobs=10, random_state=7)
Silhouette score: 0.3191854112351335
---------------------------------------
KMeans(n_clusters=4, n_jobs=10, random_state=7)
Silhouette score: 0.281779515317617
---------------------------------------
KMeans(n_clusters=5, n_jobs=10, random_state=7)
Silhouette score: 0.2742499085300452
---------------------------------------
KMeans(n_clusters=6, n_jobs=10, random_state=7)
Silhouette score: 0.2790023172083168
---------------------------------------
KMeans(n_clusters=7, n_jobs=10, random_state=7)
Silhouette score: 0.2761554255999077
---------------------------------------
KMeans(n_jobs=10, random_state=7)
Silhouette score: 0.2666698045792603
---------------------------------------
KMeans(n_clusters=9, n_jobs=10, random_state=7)
Silhouette score: 0.27378613906697535

A silhouette score at a local maximum, together with the elbow plot, is used to determine the optimum number of clusters. Hence, in this example, we can go ahead with 6 as the optimal number of clusters (note the local maximum of ~0.279 at K = 6 in the output above).

About the Author: Advanced analytics professional and management consultant helping companies find solutions for diverse problems through a mix of business, technology, and math on organizational data. A Data Science enthusiast, here to share, learn and contribute. You can connect with me on LinkedIn and Twitter.
[ { "code": null, "e": 1131, "s": 172, "text": "For those of you who are wondering what an unsupervised learning technique is, imagine you run a sports shop in Clayton (Go Melbourne Victory FC). You plan to introduce a promotional offer for your high valued customers as a token of appreciation. You have hundreds of customers visiting you every day. Who should you send this offer to? Tough choice! Hmm, maybe high valued ones? But what about those who purchased frequently but didn’t spend much. This is a big dilemma. Luckily your loyalty card program captures your customer’s information and your oracle database includes all transaction details. You can use this data, and form a group of customers with similar attributes without having a pre-defined notion of who belongs where. In short, a learning technique that doesn’t have a pre-defined outcome variable (in this case your customer segments) but forms one after studying the customer attributes is defined as an unsupervised learning technique." }, { "code": null, "e": 1417, "s": 1131, "text": "Now theoretically, the more information you have about your customer, the better you can group them. Isn’t that true? However, in real life, it’s a little more complicated than you think. In practice, too many information (features) leads to a poor grouping of your customers. But why?" }, { "code": null, "e": 1488, "s": 1417, "text": "Features can be highly correlated — Higher the age, lower is the spend" }, { "code": null, "e": 1747, "s": 1488, "text": "Too many features can introduce a lot of noise making it difficult for algorithms to perform well on the data — Sam, your neighborhood customer dropped by almost every day to say “Hi” and purchase a protein bar worth a dollar. He is a highly valued customer?" }, { "code": null, "e": 1939, "s": 1747, "text": "The number of data points you need to train any unsupervised model increases exponentially with dimensionality (features). Too many information means you require more data to train your model" }, { "code": null, "e": 2212, "s": 1939, "text": "To overcome such issues different dimensionality reduction techniques like Principal Component Analysis, Factor Analysis, etc. can be used. We introduced the concept of Curse of Dimensionality to pave a pathway for assumptions surrounding unsupervised learning techniques." }, { "code": null, "e": 2286, "s": 2212, "text": "Some of the common algorithms of unsupervised learning technique include:" }, { "code": null, "e": 2440, "s": 2286, "text": "K Means ClusteringHierarchical ClusteringK Mode ClusteringDBScan Clustering — Density-Based Spatial ClusteringPrincipal Component AnalysisFactor Analysis" }, { "code": null, "e": 2459, "s": 2440, "text": "K Means Clustering" }, { "code": null, "e": 2483, "s": 2459, "text": "Hierarchical Clustering" }, { "code": null, "e": 2501, "s": 2483, "text": "K Mode Clustering" }, { "code": null, "e": 2554, "s": 2501, "text": "DBScan Clustering — Density-Based Spatial Clustering" }, { "code": null, "e": 2583, "s": 2554, "text": "Principal Component Analysis" }, { "code": null, "e": 2599, "s": 2583, "text": "Factor Analysis" }, { "code": null, "e": 3050, "s": 2599, "text": "It is common for people to think of reality as a series of black and white events. This is a challenging issue as those suffering from this way of thinking may never feel that reality is good enough. 
Today, we try and address the problem of simplicity and depression while determining the relationship between higher levels of black and white thinking and higher levels of self-reported depression in psychiatric patients hospitalized for depression." }, { "code": null, "e": 3438, "s": 3050, "text": "The data used for this analysis is from the Ginzberg data frame which is based on psychiatric patients hospitalized for depression. Data is collected from the book “Applied Regression Analysis and Generalized Linear Models”, Second Edition by Fox, J. (2008). The dataset includes three variables — simplicity (black and white thinking), fatalism, and depression ad their adjusted scores." }, { "code": null, "e": 3697, "s": 3438, "text": "To break it down, K signifies the number of groups, and Means signifies average. Essentially we have K groups based on an average distance calculation. Not clear I guess! For any K Means clustering algorithm to work, we need to have the following parameters." }, { "code": null, "e": 5829, "s": 3697, "text": "K — How many groups we want. Note — The algorithm doesn’t have the predicting capability to decide or come up with a value for K. It’s a user’s call guided by few techniques like Elbow Plot and Silhouette ScoreData points — In this case Simplicity, Fatalism, and Depression scores, i.e. your input dataK Centroids — Think of it like anchor/center values for each of the K groups. Selected randomly by the algorithm mostly. The selection of centroid impacts the way the final clusters/groups are formed. Also if you are familiar with the time complexity of any algorithm represented using O(n), it has been observed that suboptimal allocation of centroids makes the time complexity polynomial, i.e. larger time for the algorithm to run and converge. Hence a smarter way was proposed by David Arthur and Sergei Vassilvitskii (also known as the K Means ++) that does the following:- Choose one center C1 randomly from the input data points ( say X)- For each data point x ∈ X not chosen yet, compute D(x), i.e distance between x and the nearest centroid that has already been selected- For all x ∈ X calculate D(x)2 and sum them up; Select a new center Ci, selecting x ∈ X such that it has a probability of D(x)2 / Sum(For all x ∈ X D(x)2)- Repeat Steps 2 and 3 until k centers have been chosenDistance Measure— Mostly Euclidean distance as a measure of similarity between multiple patients. E.g. when we talk about soccer or popularly known as football, we often use total goals scored to group quality players. Say Ronaldo, Messi, Werner, and Puki scored 35, 32, 10, and 8 goals respectively. How will you measure similar players in terms of quality? Probably Ronaldo and Messi (35, 32 goals each) and Werner and Puki (10, 8 goals each). Notice we look into the difference of goals scored. If I add say “Assists” as another metric, we need to use a different measure. E.g. Assists are 10, 8, 2, and 1 respectively. So the new distance between Ronaldo and Messi becomes square root((35–32)2 + (10–8)2). Note — Other distance metrics like Manhattan, City block, Jacobi’s distance, etc. are also adopted by few of the other algorithms." }, { "code": null, "e": 6040, "s": 5829, "text": "K — How many groups we want. Note — The algorithm doesn’t have the predicting capability to decide or come up with a value for K. 
It’s a user’s call guided by few techniques like Elbow Plot and Silhouette Score" }, { "code": null, "e": 6133, "s": 6040, "text": "Data points — In this case Simplicity, Fatalism, and Depression scores, i.e. your input data" }, { "code": null, "e": 7123, "s": 6133, "text": "K Centroids — Think of it like anchor/center values for each of the K groups. Selected randomly by the algorithm mostly. The selection of centroid impacts the way the final clusters/groups are formed. Also if you are familiar with the time complexity of any algorithm represented using O(n), it has been observed that suboptimal allocation of centroids makes the time complexity polynomial, i.e. larger time for the algorithm to run and converge. Hence a smarter way was proposed by David Arthur and Sergei Vassilvitskii (also known as the K Means ++) that does the following:- Choose one center C1 randomly from the input data points ( say X)- For each data point x ∈ X not chosen yet, compute D(x), i.e distance between x and the nearest centroid that has already been selected- For all x ∈ X calculate D(x)2 and sum them up; Select a new center Ci, selecting x ∈ X such that it has a probability of D(x)2 / Sum(For all x ∈ X D(x)2)- Repeat Steps 2 and 3 until k centers have been chosen" }, { "code": null, "e": 7964, "s": 7123, "text": "Distance Measure— Mostly Euclidean distance as a measure of similarity between multiple patients. E.g. when we talk about soccer or popularly known as football, we often use total goals scored to group quality players. Say Ronaldo, Messi, Werner, and Puki scored 35, 32, 10, and 8 goals respectively. How will you measure similar players in terms of quality? Probably Ronaldo and Messi (35, 32 goals each) and Werner and Puki (10, 8 goals each). Notice we look into the difference of goals scored. If I add say “Assists” as another metric, we need to use a different measure. E.g. Assists are 10, 8, 2, and 1 respectively. So the new distance between Ronaldo and Messi becomes square root((35–32)2 + (10–8)2). Note — Other distance metrics like Manhattan, City block, Jacobi’s distance, etc. are also adopted by few of the other algorithms." }, { "code": null, "e": 8023, "s": 7964, "text": "Note: All images are reproduced using latex by the author." }, { "code": null, "e": 8033, "s": 8023, "text": "Centroids" }, { "code": null, "e": 8248, "s": 8033, "text": "1. Ci - ith Centroid where i ∈ K (or K Groups of Cluster)2. Si - Group of Data points that belong to Cluster K with Ci as its centroid; j represents attributes of a point Xi3. ||Si|| - Number of data points from Si" }, { "code": null, "e": 8267, "s": 8248, "text": "Euclidean Distance" }, { "code": null, "e": 8328, "s": 8267, "text": "1. Dx is the distance of of a point Xi from it's centroid Ck" }, { "code": null, "e": 8642, "s": 8328, "text": "Convergence CriterionK Means is an iterative process. Once all data points are assigned to its centroid (based on the shortest Euclidean distance), the centroids are recalculated and this process continues till the centroids stop re-shifting, i.e. 
The new and previous centroid numbers didn’t change significantly" }, { "code": null, "e": 8835, "s": 8642, "text": "Intra and Inter-Cluster DistanceIntra Cluster distance is the total distance between points from the same cluster and Inter-Cluster is the distance between two points in the different clusters" }, { "code": null, "e": 9110, "s": 8835, "text": "K Means algorithm performs poorly if data contains outliersK Means algorithm can’t handle missing valuesClustering variables shouldn’t be highly correlated to each otherAll clustering variables should be standardized as the magnitude of variables impact distance calculation" }, { "code": null, "e": 9170, "s": 9110, "text": "K Means algorithm performs poorly if data contains outliers" }, { "code": null, "e": 9216, "s": 9170, "text": "K Means algorithm can’t handle missing values" }, { "code": null, "e": 9282, "s": 9216, "text": "Clustering variables shouldn’t be highly correlated to each other" }, { "code": null, "e": 9388, "s": 9282, "text": "All clustering variables should be standardized as the magnitude of variables impact distance calculation" }, { "code": null, "e": 9889, "s": 9388, "text": "# Numerical librariesimport numpy as npfrom sklearn.model_selection import train_test_splitfrom sklearn.cluster import KMeans# to handle data in form of rows and columns import pandas as pd# importing ploting librariesfrom matplotlib import pyplot as plt%matplotlib inline#importing seaborn for statistical plotsimport seaborn as snsfrom sklearn import metricsimport pandas as pd# reading the CSV file into pandas dataframemydata = pd.read_csv(“Depression.csv”)mydata.drop('id', axis=1, inplace=True)" }, { "code": null, "e": 10283, "s": 9889, "text": "# — — — — — — — — — — — — — — — -Heat map to identify highly correlated variables — — — — — — — — — — — — -#-------------------------------Heat map to identify highly correlated variables-------------------------plt.figure(figsize=(10,8))sns.heatmap(mydata.corr(), annot=True, linewidths=.5, center=0, cbar=False, cmap=\"YlGnBu\")plt.show()" }, { "code": null, "e": 10502, "s": 10283, "text": "mydata.drop(columns = {‘simplicity’, ‘fatalism’,’depression’}, inplace=True)#--Checking Outliersplt.figure(figsize=(15,10))pos = 1for i in mydata.columns: plt.subplot(3, 3, pos) sns.boxplot(mydata[i]) pos += 1" }, { "code": null, "e": 10780, "s": 10502, "text": "col_names=list(mydata.columns)display(col_names)for i in col_names: q1, q2, q3 = mydata[i].quantile([0.25,0.5,0.75]) IQR = q3 — q1 lower_cap=q1–1.5*IQR upper_cap=q3+1.5*IQR mydata[i]=mydata[i].apply(lambda x: upper_cap if x>(upper_cap) else (lower_cap if x<(lower_cap) else x))" }, { "code": null, "e": 10873, "s": 10780, "text": "##Scale the datafrom scipy.stats import zscoremydata_z = mydata.apply(zscore)mydata_z.head()" }, { "code": null, "e": 11441, "s": 10873, "text": "# List to store cluster and intra cluster distanceclusters = []inertia_vals = []# Since creating one cluster is similar to observing the data as a whole, multiple values of K are utilized to come up with the optimum cluster value#Note: Cluster number and intra cluster distance is appended for plotting the elbow curvefor k in range(1, 10, 1): # train clustering with the specified K model = KMeans(n_clusters=k, random_state=7, n_jobs=10) model.fit(mydata_z)# append model to cluster list clusters.append(model) inertia_vals.append(model.inertia_)" }, { "code": null, "e": 11990, "s": 11441, "text": "n_clusters — The number of clusters to form; Also equivalent to the number of centroids to be 
generatedinit — Method specifying the initial centroid initialization (default K-means++)n_init — Number of times the k-means algorithm will be run with different centroid seedsmax_iter — Maximum number of iterations of the k-means algorithm for a single run. Stops if a convergence criterion is met, defined using the tol (tolerance option)random_state — random number generation assisting centroid initialization. Import to ensure a reproducible result" }, { "code": null, "e": 12094, "s": 11990, "text": "n_clusters — The number of clusters to form; Also equivalent to the number of centroids to be generated" }, { "code": null, "e": 12175, "s": 12094, "text": "init — Method specifying the initial centroid initialization (default K-means++)" }, { "code": null, "e": 12264, "s": 12175, "text": "n_init — Number of times the k-means algorithm will be run with different centroid seeds" }, { "code": null, "e": 12429, "s": 12264, "text": "max_iter — Maximum number of iterations of the k-means algorithm for a single run. Stops if a convergence criterion is met, defined using the tol (tolerance option)" }, { "code": null, "e": 12543, "s": 12429, "text": "random_state — random number generation assisting centroid initialization. Import to ensure a reproducible result" }, { "code": null, "e": 12982, "s": 12543, "text": "Inertia is defined as the sum of squared distances of data points to their closest cluster center (centroid). Lower the distance better the compactness of the clusters. However, inertia decreases as we keep increasing the value of K. An algorithm with two clusters will always have a higher inertia score than the one with four clusters and so on. This is where we combine elbow plot and silhouette score to decide the optimum value of K." }, { "code": null, "e": 13159, "s": 12982, "text": "The idea here is to plot the total inertia score, for different values of K and identify K beyond which the inertia doesn’t change much. The formula for inertia is given below." }, { "code": null, "e": 13441, "s": 13159, "text": "1. I = Inertial = Sum of the squared distance of data points to their closest cluster2. n = Total number of records in the dataset3. D()= Sum of squared distance4. Xi = Record i in the data (i.e. in this case record of a given patient)5. Ck = Cluster to which record Xi is assigned" }, { "code": null, "e": 13529, "s": 13441, "text": "# plot the inertia vs K valuesplt.plot(range(1,10,1),inertia_vals,marker='*')plt.show()" }, { "code": null, "e": 13722, "s": 13529, "text": "Studying the graph above reveals 4, 5, or 6 as the optimum value of K. Notice from 6 to 7 the curve tends to flatten out whereas the slope doesn’t observe a significant change post 4 clusters." }, { "code": null, "e": 14172, "s": 13722, "text": "The Silhouette score is used to measure the degree of separation between clusters. In the formula above bi represents the shortest mean distance between a point to all points in any other cluster of which i is not a part whereas ai is the mean distance of i and all data points from the same cluster. Logically if bi > ai, then a point is well separated from its neighboring cluster whereas it is closer to all points from the cluster it belongs to." 
}, { "code": null, "e": 14405, "s": 14172, "text": "from sklearn.metrics import silhouette_scorefor i in range(1,9,1): print(\"---------------------------------------\") print(clusters[i]) print(\"Silhouette score:\",silhouette_score(mydata_z, clusters[i].predict(mydata_z)))" }, { "code": null, "e": 14416, "s": 14405, "text": "The Output" }, { "code": null, "e": 15341, "s": 14416, "text": "KMeans(n_clusters=2, n_jobs=10, random_state=7)Silhouette score: 0.40099183297222984---------------------------------------KMeans(n_clusters=3, n_jobs=10, random_state=7)Silhouette score: 0.3191854112351335---------------------------------------KMeans(n_clusters=4, n_jobs=10, random_state=7)Silhouette score: 0.281779515317617---------------------------------------KMeans(n_clusters=5, n_jobs=10, random_state=7)Silhouette score: 0.2742499085300452---------------------------------------KMeans(n_clusters=6, n_jobs=10, random_state=7)Silhouette score: 0.2790023172083168---------------------------------------KMeans(n_clusters=7, n_jobs=10, random_state=7)Silhouette score: 0.2761554255999077---------------------------------------KMeans(n_jobs=10, random_state=7)Silhouette score: 0.2666698045792603---------------------------------------KMeans(n_clusters=9, n_jobs=10, random_state=7)Silhouette score: 0.27378613906697535" }, { "code": null, "e": 15544, "s": 15341, "text": "Silhouette score observing the local maxima is used to determine the optimum number of clusters along with the Elbow plot. Hence in this example we can go ahead with 6 as the optimal number of clusters." } ]
A Beginner’s Guide to Visualizing Audio as a Spectrogram in Python | by Braden Riggs | Towards Data Science
We often think of audio data as just data we interpret and process through our auditory system, but that doesn't have to be the only way to analyze and interpret audio signals. One such way we can instead understand audio data is through visual representations of the noises we hear. Audio is most commonly visualized as a waveform plot, where we show sound pressure in relation to time.

This representation, whilst sufficient, often oversimplifies audio data, which is more than just sound pressure over time. This is where we introduce the spectrogram. A spectrogram is a representation of frequency over time with the addition of amplitude as a third dimension, denoting the intensity or volume of the signal at a given frequency and time.

Visualizing data with a spectrogram helps reveal hidden insights in the audio data that may have been less apparent in the traditional waveform representations, allowing us to distinguish noise from the true audio data we wish to interpret. By visualizing audio data this way we can get a clear picture of the imperfections or underlying issues present, helping to guide our analysis and repair of the audio.

The utility of the spectrogram is best highlighted through an example. Pictured is a 125-second sample of a traditionally noisy audio recording, taken from Franklin D. Roosevelt's 1941 speech following the surprise attack on Pearl Harbor, represented as a spectrogram. This antiquated audio sample is rife with noise and low quality when compared to modern audio samples. Despite this, we can still get a picture of what is going on in the audio sample: the first 15 seconds are an introduction by the host, further away from the microphone, followed by 20 seconds of clapping, finally followed by the start of Roosevelt's speech, where we can see spikes in intensity and frequency as the then-president announces and responds to the attack. By first visualizing the data this way we get a picture of what improvements can be made to the audio, as many of Roosevelt's spoken words blur together in the representation, suggesting the presence of noise.

One such strategy for improving the quality of this audio sample is through the use of the Media Enhance API present on Dolby.io. The Media Enhance API works to remove the noise, isolate the spoken audio, and correct the volume and tone of the sample for a more modern representation of the speech. To use this feature yourself you can follow the steps included below or skip to the bottom where we show off the results.

To start the visualization process we first need an audio file to enhance. You can use your own or find some examples here. Dolby.io supports many formats but we'll use a WAV file to create an enhanced version. See the Enhancing Media tutorial to learn how.

There are a few Python packages we need to import. You'll need to install numpy, matplotlib, and scipy into your Python environment.

# for data transformation
import numpy as np
# for visualizing the data
import matplotlib.pyplot as plt
# for opening the media file
import scipy.io.wavfile as wavfile

Utilizing SciPy's wavfile function we can extract the relevant data from the WAV file and load it into a NumPy data array so we can trim it to an appropriate length.

Fs, aud = wavfile.read('pearl_harbor.wav')
aud = aud[:,0]  # select left channel only
first = aud[:int(Fs*125)]  # trim the first 125 seconds

You'll notice that when we load the WAV file, SciPy's function returns two elements: the sample rate (Fs) and the data (aud). It's important to keep both of these values, as we will need them to create the spectrogram.

In this example we won't focus on the Matplotlib style elements; rather, we will focus on plotting the spectrogram, with additional stylings such as fonts, titles, and colors optional to add. To plot the spectrogram we call Matplotlib's specgram function along with the .show() function to project the plot:

powerSpectrum, frequenciesFound, time, imageAxis = plt.specgram(first, Fs=Fs)
plt.show()
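If you'd rather compute the underlying values yourself instead of letting Matplotlib do it, SciPy offers an equivalent route (a sketch: the window and overlap parameters are left at their defaults, and the small epsilon only avoids taking the log of zero):

from scipy import signal

# equivalent computation without plt.specgram: frequencies, times, and power
freqs, times, Sxx = signal.spectrogram(first, fs=Fs)
plt.pcolormesh(times, freqs, 10 * np.log10(Sxx + 1e-12))  # convert power to dB
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [s]')
plt.show()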
Following these steps, we should see something similar to the plot below, albeit truncated without Matplotlib's styling elements.

When pictured in succession, the impact of the Media Enhance API is apparent in the spectrogram representation of the sample. The enhanced plot includes more isolated and intense spikes when Roosevelt speaks, followed by a dramatic contrast in intensity where Dolby.io has minimized the noise. This leads to a far cleaner audio experience, as Roosevelt's words blend less with the background noise, becoming more distinct and legible to the listener.

By representing audio data in this way we provide an extra dimension to our analysis, allowing for a more calculated approach to audio corrections and enhancement, highlighting the utility of spectrograms in visually representing audio data. This approach to audio data analysis has been used in a number of industry and academic applications including speech recognition with recurrent neural networks, studying and identifying bird calls, and even assisting deaf persons in overcoming speech deficits. Additionally, through the use of Dolby.io, we can visually see the effectiveness of the Enhance feature and how it is able to isolate and improve audio quality for a more seamless listening experience.

Originally published at https://dolby.io.
[ { "code": null, "e": 594, "s": 172, "text": "We often think of audio data as just data we interpret and process through our auditory system, but that doesn’t have to be the only way that we analyze and interpret audio signals. One such way we can instead understand audio data is through visual representations of the noises we hear. These visual representations are most commonly represented in a waveform plot where we visualize sound pressure in relation to time." }, { "code": null, "e": 945, "s": 594, "text": "This representation, whilst sufficient, often oversimplifies audio data, which is more than just sound pressure over time. This is where we introduce the spectrogram. A spectrogram is a representation of frequency over time with the addition of amplitude as a third dimension, denoting the intensity or volume of the signal at a frequency and a time." }, { "code": null, "e": 1354, "s": 945, "text": "Visualizing data with a spectrogram helps reveal hidden insights in the audio data that may have been less apparent in the traditional waveform representations, allowing us to distinguish noise from the true audio data we wish to interpret. By visualizing audio data this way we can get a clear picture of the imperfections or underlying issues present, helping to guide our analysis and repair of the audio." }, { "code": null, "e": 2311, "s": 1354, "text": "The utility of the spectrogram is best highlighted through an example. Pictured is a 125-second sample of a traditionally noisy audio recording, taken from Franklin D. Roosevelt’s 1941 speech following the surprise attack on Pearl Harbor, represented as a spectrogram. This antiquated audio sample is rife with noise and low quality when compared to modern audio samples. Despite this, we can still get a picture of what is going on in the audio sample, with the first 15 seconds being an introduction by the host, further away from the microphone, followed by 20 seconds of clapping, finally followed by the start of Roosevelt’s speech where we can see spikes in intensity and frequency as the then-president announces and responds to the attack. By first visualizing the data this way we get a picture of what improvements can be made to the audio as many of Roosevelt’s spoken words blur together in the representation, suggesting the presence of noise." }, { "code": null, "e": 2732, "s": 2311, "text": "One such strategy for improving the quality of this audio sample is through the use of the Media Enhance API present on Dolby.io. The Media Enhance API works to remove the noise, isolate the spoken audio, and correct the volume and tone of the sample for a more modern representation of the speech. To use this feature yourself you can follow the steps included below or skip to the bottom where we show off the results." }, { "code": null, "e": 2990, "s": 2732, "text": "To start the visualization process we first need an audio file to enhance. You can use your own or find some examples here. Dolby.io supports many formats but we’ll use a WAV file to create an enhanced version. See the Enhancing Media tutorial to learn how." }, { "code": null, "e": 3123, "s": 2990, "text": "There are a few Python packages we need to import. You’ll need to install numpy, matplotlib, and scipy into your python environment." 
}, { "code": null, "e": 3291, "s": 3123, "text": "# for data transformation import numpy as np # for visualizing the data import matplotlib.pyplot as plt # for opening the media file import scipy.io.wavfile as wavfile" }, { "code": null, "e": 3454, "s": 3291, "text": "Utilizing SciPy’s wavfile function we can extract the relevant data from the WAV file and load it into a NumPy data array so we can trim to an appropriate length." }, { "code": null, "e": 3593, "s": 3454, "text": "Fs, aud = wavfile.read('pearl_harbor.wav') aud = aud[:,0] # select left channel onlyfirst = aud[:int(Fs*125)] # trim the first 125 seconds" }, { "code": null, "e": 3810, "s": 3593, "text": "You’ll notice that when we load the WAV file SciPy’s function returns two elements the Sample Rate ( fs) and the data (aud). It’s important to keep both of these values as we will need them to create the spectrogram." }, { "code": null, "e": 4121, "s": 3810, "text": "In this example we won’t focus on the Matplotlib style elements, rather we will focus on plotting the spectrogram, with the additional stylings such as fonts, titles, and colors optional to add. To plot the spectrogram we call Matplotlib’s specgram function along with the .show() function to project the plot:" }, { "code": null, "e": 4210, "s": 4121, "text": "powerSpectrum, frequenciesFound, time, imageAxis = plt.specgram(first, Fs=Fs) plt.show()" }, { "code": null, "e": 4340, "s": 4210, "text": "Following these steps, we should see something similar to the below plot, albeit truncated without Matplotlib’s styling elements." }, { "code": null, "e": 4790, "s": 4340, "text": "When pictured in succession, the impact of the Media Enhance API is apparent in the spectrogram representation of the sample. The enhanced plot includes more isolated and intense spikes when Roosevelt speaks, followed by a dramatic contrast in intensity where Dolby.io has minimized the noise. This leads to a far cleaner audio experience as Roosevelt’s words blend less with the background noise, becoming more distinct and legible to the listener." }, { "code": null, "e": 5498, "s": 4790, "text": "By representing audio data in this way we provide an extra dimension to our analysis, allowing for a more calculated approach to audio corrections and enhancement, highlighting the utility of spectrograms, and visually representing audio data. This approach to audio data analysis has been used in a number of industry and academic applications including speech recognition with recurrent neural networks, studying and identifying bird calls, and even assisting deaf persons in overcoming speech deficits. Additionally, through the use of Dolby.io, we can visually see the effectiveness of the Enhance feature and how it is able to isolate and improve audio quality for a more seamless listening experience." } ]
Java Program to convert int to binary string
The Integer.toBinaryString() method in Java converts an int to a binary string.

Let's say the following are some of our integer values.

int val1 = 9;
int val2 = 20;
int val3 = 2;

Convert the above int values to binary strings.

Integer.toBinaryString(val1);
Integer.toBinaryString(val2);
Integer.toBinaryString(val3);

Live Demo

public class Demo {
   public static void main(String[] args) {
      int val1 = 9;
      int val2 = 20;
      int val3 = 30;
      int val4 = 78;
      int val5 = 2;
      System.out.println("Converting integer "+val1+" to Binary String: "+Integer.toBinaryString(val1));
      System.out.println("Converting integer "+val2+" to Binary String: "+Integer.toBinaryString(val2));
      System.out.println("Converting integer "+val3+" to Binary String: "+Integer.toBinaryString(val3));
      System.out.println("Converting integer "+val4+" to Binary String: "+Integer.toBinaryString(val4));
      System.out.println("Converting integer "+val5+" to Binary String: "+Integer.toBinaryString(val5));
   }
}

Converting integer 9 to Binary String: 1001
Converting integer 20 to Binary String: 10100
Converting integer 30 to Binary String: 11110
Converting integer 78 to Binary String: 1001110
Converting integer 2 to Binary String: 10
[ { "code": null, "e": 1137, "s": 1062, "text": "The Integer.toBinaryString() method in Java converts int to binary string." }, { "code": null, "e": 1185, "s": 1137, "text": "Let’s say the following are our integer values." }, { "code": null, "e": 1228, "s": 1185, "text": "int val1 = 9;\nint val2 = 20;\nint val3 = 2;" }, { "code": null, "e": 1275, "s": 1228, "text": "Convert the above int values to binary string." }, { "code": null, "e": 1365, "s": 1275, "text": "Integer.toBinaryString(val1);\nInteger.toBinaryString(val2);\nInteger.toBinaryString(val3);" }, { "code": null, "e": 1376, "s": 1365, "text": " Live Demo" }, { "code": null, "e": 2075, "s": 1376, "text": "public class Demo {\n public static void main(String[] args) {\n int val1 = 9;\n int val2 = 20;\n int val3 = 30;\n int val4 = 78;\n int val5 = 2;\n System.out.println(\"Converting integer \"+val1+\" to Binary String: \"+Integer.toBinaryString(val1));\n System.out.println(\"Converting integer \"+val2+\" to Binary String: \"+Integer.toBinaryString(val2));\n System.out.println(\"Converting integer \"+val3+\" to Binary String: \"+Integer.toBinaryString(val3));\n System.out.println(\"Converting integer \"+val4+\" to Binary String: \"+Integer.toBinaryString(val4));\n System.out.println(\"Converting integer \"+val5+\" to Binary String: \"+Integer.toBinaryString(val5));\n }\n}" }, { "code": null, "e": 2301, "s": 2075, "text": "Converting integer 9 to Binary String: 1001\nConverting integer 20 to Binary String: 10100\nConverting integer 30 to Binary String: 11110\nConverting integer 78 to Binary String: 1001110\nConverting integer 2 to Binary String: 10" } ]
Display table records from a stored procedure in MySQL
Let us first create a table −

mysql> create table DemoTable1933
   (
   ClientName varchar(20)
   );
Query OK, 0 rows affected (0.00 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable1933 values('Chris Brown');
Query OK, 1 row affected (0.00 sec)
mysql> insert into DemoTable1933 values('David Miller');
Query OK, 1 row affected (0.00 sec)
mysql> insert into DemoTable1933 values('Adam Smith');
Query OK, 1 row affected (0.00 sec)
mysql> insert into DemoTable1933 values('John Doe');
Query OK, 1 row affected (0.00 sec)

Display all records from the table using the select statement −

mysql> select * from DemoTable1933;

This will produce the following output −

+--------------+
| ClientName   |
+--------------+
| Chris Brown  |
| David Miller |
| Adam Smith   |
| John Doe     |
+--------------+
4 rows in set (0.00 sec)

Here is the query to create a stored procedure with a SELECT in it to display the records −

mysql> delimiter //
mysql> create procedure display_all_records()
   begin
   select * from DemoTable1933;
   end
   //
Query OK, 0 rows affected (0.00 sec)
mysql> delimiter ;

Now you can call the stored procedure using the CALL command −

mysql> call display_all_records();

This will produce the following output −

+--------------+
| ClientName   |
+--------------+
| Chris Brown  |
| David Miller |
| Adam Smith   |
| John Doe     |
+--------------+
4 rows in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
[ { "code": null, "e": 1092, "s": 1062, "text": "Let us first create a table −" }, { "code": null, "e": 1200, "s": 1092, "text": "mysql> create table DemoTable1933\n (\n ClientName varchar(20)\n );\nQuery OK, 0 rows affected (0.00 sec)" }, { "code": null, "e": 1256, "s": 1200, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 1621, "s": 1256, "text": "mysql> insert into DemoTable1933 values('Chris Brown');\nQuery OK, 1 row affected (0.00 sec)\nmysql> insert into DemoTable1933 values('David Miller');\nQuery OK, 1 row affected (0.00 sec)\nmysql> insert into DemoTable1933 values('Adam Smith');\nQuery OK, 1 row affected (0.00 sec)\nmysql> insert into DemoTable1933 values('John Doe');\nQuery OK, 1 row affected (0.00 sec)" }, { "code": null, "e": 1681, "s": 1621, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 1717, "s": 1681, "text": "mysql> select * from DemoTable1933;" }, { "code": null, "e": 1758, "s": 1717, "text": "This will produce the following output −" }, { "code": null, "e": 1919, "s": 1758, "text": "+--------------+\n| ClientName |\n+--------------+\n| Chris Brown |\n| David Miller |\n| Adam Smith |\n| John Doe |\n+--------------+\n4 rows in set (0.00 sec)" }, { "code": null, "e": 2008, "s": 1919, "text": "Here is the query to create a stored procedure and set SELECT in it to display records −" }, { "code": null, "e": 2184, "s": 2008, "text": "mysql> delimiter //\nmysql> create procedure display_all_records()\n begin\n select * from DemoTable1933;\n end\n //\nQuery OK, 0 rows affected (0.00 sec)\nmysql> delimiter ;" }, { "code": null, "e": 2240, "s": 2184, "text": "Now you can call a stored procedure using call command:" }, { "code": null, "e": 2275, "s": 2240, "text": "mysql> call display_all_records();" }, { "code": null, "e": 2316, "s": 2275, "text": "This will produce the following output −" }, { "code": null, "e": 2514, "s": 2316, "text": "+--------------+\n| ClientName |\n+--------------+\n| Chris Brown |\n| David Miller |\n| Adam Smith |\n| John Doe |\n+--------------+\n4 rows in set (0.00 sec)\nQuery OK, 0 rows affected (0.00 sec)" } ]
How to change an HTML5 input's placeholder color with CSS?
HTML5 introduced a new attribute called placeholder. This attribute on <input> and <textarea> elements provides a hint to the user of what can be entered in the field.

Here's an example showing what a placeholder is. The hint to email, i.e. [email protected], is the placeholder here:

You can try to run the following code to learn how to use the placeholder attribute:

Live Demo

<!DOCTYPE HTML>
<html>
   <body>
      <form action="/cgi-bin/html5.cgi" method="get">
      Enter email : <input type="email" name="newinput" placeholder="[email protected]"/>
      <input type="submit" value="submit" />
      </form>
   </body>
</html>

To change the input placeholder color, use CSS.

Live Demo

<!DOCTYPE HTML>
<html>
   <style>
      ::placeholder { /* Standard selector, most modern browsers */
         color: #7F0D10;
      }
      ::-webkit-input-placeholder { /* WebKit, Blink, Edge */
         color: #7F0D10;
      }
      :-moz-placeholder { /* Mozilla Firefox 4 to 18 */
         color: #7F0D10;
         opacity: 1;
      }
      ::-moz-placeholder { /* Mozilla Firefox 19+ */
         color: #7F0D10;
         opacity: 1;
      }
   </style>
   <body>
      <form action="/cgi-bin/html5.cgi" method="get">
      Enter email : <input type="email" name="newinput" placeholder="[email protected]"/>
      <input type="submit" value="submit" />
      </form>
   </body>
</html>
[ { "code": null, "e": 1230, "s": 1062, "text": "HTML5 introduced a new attribute called placeholder. This attribute on <input> and <textarea> elements provides a hint to the user of what can be entered in the field." }, { "code": null, "e": 1341, "s": 1230, "text": "Here’s an example showing what a placeholder is. The hint to email i.e. [email protected] is placeholder here:" }, { "code": null, "e": 1422, "s": 1341, "text": "You can try to run the following code to learn how to use placeholder attribute:" }, { "code": null, "e": 1432, "s": 1422, "text": "Live Demo" }, { "code": null, "e": 1695, "s": 1432, "text": "<!DOCTYPE HTML>\n<html>\n <body>\n <form action=\"/cgi-bin/html5.cgi\" method=\"get\">\n Enter email : <input type=\"email\" name=\"newinput\" placeholder=\"[email protected]\"/>\n <input type=\"submit\" value=\"submit\" />\n </form>\n </body> \n</html>" }, { "code": null, "e": 1743, "s": 1695, "text": "To change the input placeholder color, use CSS." }, { "code": null, "e": 1753, "s": 1743, "text": "Live Demo" }, { "code": null, "e": 2353, "s": 1753, "text": "<!DOCTYPE HTML>\n<html>\n <style>\n ::-webkit-input-placeholder { /* WebKit, Blink, Edge */\n color: #7F0D10;\n }\n :-moz-placeholder { /* Mozilla Firefox 4 to 18 */\n color: #7F0D10;\n opacity: 1;\n }\n ::-moz-placeholder { /* Mozilla Firefox 19+ */\n color: #7F0D10;\n opacity: 1;\n }\n </style>\n <body>\n <form action=\"/cgi-bin/html5.cgi\" method=\"get\">\n Enter email : <input type=\"email\" name=\"newinput\" placeholder=\"[email protected]\"/>\n <input type=\"submit\" value=\"submit\" />\n </form>\n </body>\n</html>" } ]
Don’t Overfit! — How to prevent Overfitting in your Deep Learning Models | by Nils Schlüter | Towards Data Science
In this article, I am going to talk about how you can prevent overfitting in your deep learning models. To have a reference dataset, I used the Don't Overfit! II Challenge from Kaggle.

If you actually wanted to win a challenge like this, don't use Neural Networks as they are very prone to overfitting. But, we're not here to win a Kaggle challenge, but to learn how to prevent overfitting in our deep learning models.

So let's get started!

To see how we can prevent overfitting, we first need to create a base model to compare the improved models to. The base model is a simple keras model with two hidden layers of 128 and 64 neurons on top of the 300-feature input.

With this model we can achieve a training accuracy of over 97%, but a validation accuracy of only about 60%. In the graphic below we can see clear signs of overfitting: the train loss decreases, but the validation loss increases.

If you see something like this, it is a clear sign that your model is overfitting: it's learning the training data really well but fails to generalize the knowledge to the test data. With this model, we get a score of about 59% in the Kaggle challenge — not very good. So, let's see how we can improve the model.

To improve the score, we can essentially do two things:

Improve our model

Improve our data

I'll start by showing you how to change the base model. Then I'll go into feature selection, which allows you to change the data.

I'm going to be talking about three common ways to adapt your model in order to prevent overfitting.

The first step when dealing with overfitting is to decrease the complexity of the model. In the given base model, there are 2 hidden layers, one with 128 and one with 64 neurons. Additionally, the input layer has 300 neurons. This is a huge number of neurons. To decrease the complexity, we can simply remove layers or reduce the number of neurons in order to make our network smaller. There is no general rule on how much to remove or how big your network should be. But, if your network is overfitting, try making it smaller.

Dropout layers can be an easy and effective way to prevent overfitting in your models. A dropout layer randomly drops some of the connections between layers. This helps to prevent overfitting, because if a connection is dropped, the network is forced to spread what it learns across the remaining connections instead of relying on any single one. Luckily, with keras it's really easy to add a dropout layer.

The new, simplified model with dropout layers could look like this:
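(A sketch of such a model: the exact layer sizes here are illustrative, only the overall shape matters, i.e. one smaller hidden layer with dropout around it.)

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dropout(0.4, input_shape=(300,)),  # drop 40% of the input connections
    layers.Dense(64, activation='relu'),      # single, smaller hidden layer
    layers.Dropout(0.4),                      # drop 40% of the hidden connections
    layers.Dense(1, activation='sigmoid'),    # binary target of the challenge
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])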
As you can see, the new model only has one hidden layer and fewer neurons. Additionally, I added dropout layers between the layers with a dropout rate of 0.4.

Another way to prevent overfitting is to stop your training process early: instead of training for a fixed number of epochs, you stop as soon as the validation loss rises — because, after that, your model will generally only get worse with more training. You can implement early stopping easily with a callback in keras (see the combined sketch at the end of this article). For this to work, you need to add the validation_split parameter to your fit function. Otherwise, the val_loss is not measured by keras.

If you take a look at the raw data, you will see that there are 300 columns and only 250 rows. That is a lot of features for only very few training samples. So, instead of using all features, it's better to use only the most important ones. This will, on the one hand, make the training process notably faster; on the other hand, it can help to prevent overfitting because the model doesn't need to learn as many features.

Luckily, scikit-learn provides the great feature selection module, which helps you identify the most relevant features of a dataset. So, let's explore some of those ways!

One of the simplest ways to select relevant features is to calculate the F-score for each feature. The F-score is calculated using the variance between the features and the variance within each feature. A high F-score usually means that the feature is more important than a feature with a low F-score. You can calculate the F-scores for the features with scikit-learn's SelectKBest (again, see the sketch at the end of this article).

If you plot the data, you will see something like this:

As you can see, the F-score between the features varies greatly. You can get the score for each column with selector.scores_ or you can get the index of the top 10 features like this:

f_score_indexes = (-selector.scores_).argsort()[:10]

Another way is recursive feature elimination (RFE). Unlike the other method, with RFE you don't calculate a score for each feature, but you train a classifier multiple times on smaller and smaller feature sets. After each training, the importance of the features is calculated and the least important feature is eliminated from the feature set. You can get the index of these features like this:

rfe_indexes = np.where(rfe_values)[0]

At the beginning of this article, we started with a model which was overfitting and could barely get more than 50% accuracy. Below, you can see the results of the new model, trained on the data after the feature selection:

It's still not perfect, but as you can see, the model is overfitting way less. In the Kaggle challenge, the new model scores at about 80% — which is 20% better than the base model.
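For reference, here is a sketch of how the remaining pieces (the early-stopping callback and the two feature selectors) might be wired up. X and y stand for the 250×300 training matrix and the labels; k=10, the patience value, and the logistic-regression estimator inside RFE are illustrative choices, not tuned ones:

import numpy as np
from tensorflow import keras
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

# Early stopping: halt as soon as val_loss stops improving for a few epochs.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                           restore_best_weights=True)
# validation_split is required, otherwise keras never measures val_loss
model.fit(X, y, epochs=100, validation_split=0.2, callbacks=[early_stop])

# F-score selection: keep the features with the highest ANOVA F-scores.
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
f_score_indexes = (-selector.scores_).argsort()[:10]

# Recursive feature elimination: repeatedly drop the least important feature.
rfe = RFE(estimator=LogisticRegression(), n_features_to_select=10).fit(X, y)
rfe_values = rfe.support_              # boolean mask of the kept features
rfe_indexes = np.where(rfe_values)[0]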
[ { "code": null, "e": 356, "s": 171, "text": "In this article, I am going to talk about how you can prevent overfitting in your deep learning models. To have a reference dataset, I used the Don’t Overfit! II Challenge from Kaggle." }, { "code": null, "e": 590, "s": 356, "text": "If you actually wanted to win a challenge like this, don’t use Neural Networks as they are very prone to overfitting. But, we’re not here to win a Kaggle challenge, but to learn how to prevent overfitting in our deep learning models." }, { "code": null, "e": 612, "s": 590, "text": "So let’s get started!" }, { "code": null, "e": 837, "s": 612, "text": "To see how we can prevent overfitting, we first need to create a base model to compare the improved models to. The base model is a simple keras model with two hidden layers with 128 and 64 neurons. You can check it out here:" }, { "code": null, "e": 1067, "s": 837, "text": "With this model we can achieve a training accuracy of over 97%, but a validation accuracy of only about 60%. In the graphic below we can see clear signs of overfitting: The Train Loss decreases, but the validation loss increases." }, { "code": null, "e": 1338, "s": 1067, "text": "If you see something like this, this is a clear sign that your model is overfitting: It’s learning the training data really well but fails to generalize the knowledge to the test data. With this model, we get a score of about 59% in the Kaggle challenge — not very good." }, { "code": null, "e": 1381, "s": 1338, "text": "So, let’s see how we can improve the model" }, { "code": null, "e": 1436, "s": 1381, "text": "To improve the score, we can essentially do two things" }, { "code": null, "e": 1454, "s": 1436, "text": "Improve our model" }, { "code": null, "e": 1471, "s": 1454, "text": "Improve our data" }, { "code": null, "e": 1600, "s": 1471, "text": "I’ll start by showing you how to change the base model. Then I’ll go into feature selection, which allows you to change the data" }, { "code": null, "e": 1701, "s": 1600, "text": "I’m going to be talking about three common ways to adapt your model in order to prevent overfitting." }, { "code": null, "e": 2229, "s": 1701, "text": "The first step when dealing with overfitting is to decrease the complexity of the model. In the given base model, there are 2 hidden Layers, one with 128 and one with 64 neurons. Additionally, the input layer has 300 neurons. This is a huge number of neurons. To decrease the complexity, we can simply remove layers or reduce the number of neurons in order to make our network smaller. There is no general rule on how much to remove or how big your network should be. But, if your network is overfitting, try making it smaller." }, { "code": null, "e": 2544, "s": 2229, "text": "Dropout Layers can be an easy and effective way to prevent overfitting in your models. A dropout layer randomly drops some of the connections between layers. This helps to prevent overfitting, because if a connection is dropped, the network is forced to Luckily, with keras it’s really easy to add a dropout layer." }, { "code": null, "e": 2612, "s": 2544, "text": "The new, simplified model with dropout layers could look like this:" }, { "code": null, "e": 2771, "s": 2612, "text": "As you can see, the new model only has one hidden layer and fewer neurons. Additionally, I added Dropout layers between the layers with a dropout rate of 0.4." 
}, { "code": null, "e": 3092, "s": 2771, "text": "Another way to prevent overfitting is to stop your training process early: Instead of training for a fixed number of epochs, you stop as soon as the validation loss rises — because, after that, your model will generally only get worse with more training. You can implement early stopping easily with a callback in keras:" }, { "code": null, "e": 3229, "s": 3092, "text": "For this to work, you need to add the validation_split parameter to your fit function. Otherwise, the val_loss is not measured by keras." }, { "code": null, "e": 3324, "s": 3229, "text": "If you take a look at the raw data, you will see that there are 300 columns and only 250 rows." }, { "code": null, "e": 3652, "s": 3324, "text": "That is a lot of features for only very few training samples. So, instead of using all features, it’s better to use only the most important ones. This will, on the one hand, make the training process notably faster, on the other hand, it can help to prevent overfitting because the model doesn’t need to learn as many features." }, { "code": null, "e": 3823, "s": 3652, "text": "Luckily, scikit-learn provides the great Feature selection Module, which helps you identify the most relevant features of a dataset. So, let’s explore some of those ways!" }, { "code": null, "e": 4184, "s": 3823, "text": "One of the simplest ways to select relevant features is to calculate the F-Score for each feature. The F-Score is calculated using the variance between the features and the variance within each feature. A high F-score usually means that the feature is more important than a feature with a low F-score. You can calculate the F-Scores for the Features like this:" }, { "code": null, "e": 4240, "s": 4184, "text": "If you plot the data, you will see something like this:" }, { "code": null, "e": 4424, "s": 4240, "text": "As you can see, the F-score between the features varies greatly. You can get the score for each column with selector.scores_ or you can get the index of the top 10 features like this:" }, { "code": null, "e": 4477, "s": 4424, "text": "f_score_indexes = (-selector.scores_).argsort()[:10]" }, { "code": null, "e": 4817, "s": 4477, "text": "Another way is the recursive feature selection. Unlike the other method, with RFE you don’t calculate a score for each feature, but you train a classifier multiple times on smaller and smaller feature set. After each training, the importance of the features is calculated and the least important feature is eliminated from the feature set." }, { "code": null, "e": 4868, "s": 4817, "text": "You can get the index of these features like this:" }, { "code": null, "e": 4906, "s": 4868, "text": "rfe_indexes = np.where(rfe_values)[0]" }, { "code": null, "e": 5129, "s": 4906, "text": "At the beginning of this article, we started with a model which was overfitting and could barely get more than 50% accuracy. Below, you can see the results of the new model, trained on the data after the feature selection:" } ]
Delete and Edit items in Tkinter TreeView
The Tkinter Treeview widget is used to display data in a hierarchical structure. In this structure, each row can represent a file or a directory, and each directory can contain files or additional directories. To create a Treeview widget, we can use the Treeview(parent, columns) constructor to build the table.

Treeview items can be edited and deleted by selecting an item with the tree.selection() method. Once an item is selected, we can perform the corresponding operation to edit or delete it.

# Import the required libraries
from tkinter import *
from tkinter import ttk

# Create an instance of tkinter frame
win = Tk()

# Set the size of the tkinter window
win.geometry("700x350")

# Create an instance of Style widget
style = ttk.Style()
style.theme_use('clam')

# Add a Treeview widget
tree = ttk.Treeview(win, column=("c1", "c2"), show='headings', height=8)
tree.column("c1", anchor=CENTER)
tree.heading("c1", text="ID")
tree.column("c2", anchor=CENTER)
tree.heading("c2", text="Company")

# Insert the data in Treeview widget
tree.insert('', 'end', text="1", values=('1', 'Honda'))
tree.insert('', 'end', text="2", values=('2', 'Hyundai'))
tree.insert('', 'end', text="3", values=('3', 'Tesla'))
tree.insert('', 'end', text="4", values=('4', 'Wolkswagon'))
tree.insert('', 'end', text="5", values=('5', 'Tata Motors'))
tree.insert('', 'end', text="6", values=('6', 'Renault'))

tree.pack()

def edit():
   # Do nothing if no row is selected
   if not tree.selection():
      return
   # Get selected item to Edit
   selected_item = tree.selection()[0]
   tree.item(selected_item, text="blub", values=("foo", "bar"))

def delete():
   # Do nothing if no row is selected
   if not tree.selection():
      return
   # Get selected item to Delete
   selected_item = tree.selection()[0]
   tree.delete(selected_item)

# Add Buttons to Edit and Delete the Treeview items
edit_btn = ttk.Button(win, text="Edit", command=edit)
edit_btn.pack()
del_btn = ttk.Button(win, text="Delete", command=delete)
del_btn.pack()

win.mainloop()

Executing the above code will display a window containing a list of car IDs and company names. If we select a particular row and press the Edit or Delete button, the program performs the corresponding operation on the selected row; for example, selecting the 4th row and clicking the "Delete" button removes that row from the table.
[ { "code": null, "e": 1379, "s": 1062, "text": "Tkinter Treeview widget is used to display the data in a hierarchical structure. In this structure, each row can represent a file or a directory. Each directory contains files or additional directories. If we want to create a Treeview widget, then we can use Treeview(parent, columns) constructor to build the table." }, { "code": null, "e": 1574, "s": 1379, "text": "The Treeview widget items can be edited and deleted by selecting the item using tree.selection() function. Once an item is selected, we can perform certain operations to delete or edit the item." }, { "code": null, "e": 2956, "s": 1574, "text": "# Import the required libraries\nfrom tkinter import *\nfrom tkinter import ttk\n\n# Create an instance of tkinter frame\nwin = Tk()\n\n# Set the size of the tkinter window\nwin.geometry(\"700x350\")\n\n# Create an instance of Style widget\nstyle = ttk.Style()\nstyle.theme_use('clam')\n\n# Add a Treeview widget\ntree = ttk.Treeview(win, column=(\"c1\", \"c2\"), show='headings', height=8)\ntree.column(\"# 1\", anchor=CENTER)\ntree.heading(\"# 1\", text=\"ID\")\ntree.column(\"# 2\", anchor=CENTER)\ntree.heading(\"# 2\", text=\"Company\")\n\n# Insert the data in Treeview widget\ntree.insert('', 'end', text=\"1\", values=('1', 'Honda'))\ntree.insert('', 'end', text=\"2\", values=('2', 'Hyundai'))\ntree.insert('', 'end', text=\"3\", values=('3', 'Tesla'))\ntree.insert('', 'end', text=\"4\", values=('4', 'Wolkswagon'))\ntree.insert('', 'end', text=\"5\", values=('5', 'Tata Motors'))\ntree.insert('', 'end', text=\"6\", values=('6', 'Renault'))\n\ntree.pack()\n\ndef edit():\n # Get selected item to Edit\n selected_item = tree.selection()[0]\n tree.item(selected_item, text=\"blub\", values=(\"foo\", \"bar\"))\n\ndef delete():\n # Get selected item to Delete\n selected_item = tree.selection()[0]\n tree.delete(selected_item)\n\n# Add Buttons to Edit and Delete the Treeview items\nedit_btn = ttk.Button(win, text=\"Edit\", command=edit)\nedit_btn.pack()\ndel_btn = ttk.Button(win, text=\"Delete\", command=delete)\ndel_btn.pack()\n\nwin.mainloop()" }, { "code": null, "e": 3052, "s": 2956, "text": "Executing the above code will display a window that contains a list of car models and ID in it." }, { "code": null, "e": 3175, "s": 3052, "text": "If we select a particular row and press edit or delete button, then it will perform the operations defined in the program." }, { "code": null, "e": 3225, "s": 3175, "text": "Select the 4th row and click the \"Delete\" button." }, { "code": null, "e": 3264, "s": 3225, "text": "It will produce the following output −" } ]
Flood fill algorithm using C graphics
Given a rectangle, our task is to fill it using the flood fill algorithm.

Input

rectangle(left = 50, top = 50, right = 100, bottom = 100)
floodFill(a = 55, b = 55, NewColor = 12, OldColor = 0)

Output: the rectangle filled with the new color.

floodFill(a, b, NewColor, OldColor) is a recursive function that replaces the previous color OldColor at (a, b) and at all connected surrounding pixels of (a, b) with the new color NewColor:

1. If a or b is outside the screen, then return.
2. If the color of getpixel(a, b) is the same as OldColor, then put the new pixel, putpixel(a, b, NewColor), and recur for the four neighbours:
   floodFill(a+1, b, NewColor, OldColor);
   floodFill(a-1, b, NewColor, OldColor);
   floodFill(a, b+1, NewColor, OldColor);
   floodFill(a, b-1, NewColor, OldColor);

// Program to fill a rectangle using the flood fill algorithm
#include <graphics.h>
#include <stdio.h>
#include <conio.h>

// Recursive flood fill
void flood(int x1, int y1, int new_col, int old_col)
{
    // Check whether the current pixel still has the old color
    if (getpixel(x1, y1) == old_col) {
        // Put a new pixel with the new color
        putpixel(x1, y1, new_col);

        // Recursive call for the pixel to the right
        flood(x1 + 1, y1, new_col, old_col);

        // Recursive call for the pixel to the left
        flood(x1 - 1, y1, new_col, old_col);

        // Recursive call for the pixel below
        flood(x1, y1 + 1, new_col, old_col);

        // Recursive call for the pixel above
        flood(x1, y1 - 1, new_col, old_col);
    }
}

int main()
{
    // DETECT must go into the graphics driver variable
    int gd1 = DETECT, gm1;

    // Initialize the graphics system
    initgraph(&gd1, &gm1, "");

    // Rectangle coordinates
    int top1, left1, bottom1, right1;
    top1 = left1 = 50;
    bottom1 = right1 = 300;

    // Draw the rectangle outline
    rectangle(left1, top1, right1, bottom1);

    // Starting coordinate of the fill
    int x1 = 51;
    int y1 = 51;

    // New color to fill with
    int newcolor = 12;

    // Old color to be replaced
    int oldcolor = 0;

    // Fill the rectangle
    flood(x1, y1, newcolor, oldcolor);

    getch();
    return 0;
}
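graphics.h is specific to the old Turbo C/BGI environment. To experiment with the same algorithm without a graphics driver, here is a minimal, hypothetical Python sketch on a small character grid; the grid size and the fill characters are arbitrary choices, not part of the original program:

import sys

# Naive recursion can get deep on large grids
sys.setrecursionlimit(10000)

def flood(grid, x, y, new_col, old_col):
    # Stop outside the grid or on a cell that no longer has the old color
    if x < 0 or y < 0 or y >= len(grid) or x >= len(grid[0]):
        return
    if grid[y][x] != old_col:
        return
    grid[y][x] = new_col
    flood(grid, x + 1, y, new_col, old_col)  # right
    flood(grid, x - 1, y, new_col, old_col)  # left
    flood(grid, x, y + 1, new_col, old_col)  # down
    flood(grid, x, y - 1, new_col, old_col)  # up

# A 6x6 "screen" with a rectangle border ('#') around background ('.')
grid = [list(row) for row in [
    "######",
    "#....#",
    "#....#",
    "#....#",
    "#....#",
    "######",
]]
flood(grid, 1, 1, '*', '.')  # start just inside the border
print("\n".join("".join(row) for row in grid))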
[ { "code": null, "e": 1163, "s": 1062, "text": "With respect of a given rectangle, our task is to fill this rectangle applying flood fill algorithm." }, { "code": null, "e": 1170, "s": 1163, "text": "Input " }, { "code": null, "e": 1283, "s": 1170, "text": "rectangle(left = 50, top = 50, right= 100, bottom = 100)\nfloodFill( a = 55, b = 55, NewColor = 12, OldColor = 0)" }, { "code": null, "e": 1291, "s": 1283, "text": "Output " }, { "code": null, "e": 1467, "s": 1291, "text": "// A recursive function to replace previous color 'OldColor' at '(a, b)' and all surrounding pixels of (a, b) with new color 'NewColor' and floodFill(a, b, NewColor, OldColor)" }, { "code": null, "e": 1512, "s": 1467, "text": "If a or b is outside the screen, thenreturn." }, { "code": null, "e": 1557, "s": 1512, "text": "If a or b is outside the screen, thenreturn." }, { "code": null, "e": 1609, "s": 1557, "text": "If color of getpixel(a, b) is same asOldColor, then" }, { "code": null, "e": 1661, "s": 1609, "text": "If color of getpixel(a, b) is same asOldColor, then" }, { "code": null, "e": 1853, "s": 1661, "text": "Recur for top, bottom, right and left.floodFill(a+1, b, NewColor, OldColor);<floodFill(a-1, b, NewColor, OldColor);floodFill(a, b+1, NewColor, OldColor);floodFill(a, b-1, NewColor, OldColor);" }, { "code": null, "e": 1892, "s": 1853, "text": "Recur for top, bottom, right and left." }, { "code": null, "e": 1932, "s": 1892, "text": "floodFill(a+1, b, NewColor, OldColor);<" }, { "code": null, "e": 1971, "s": 1932, "text": "floodFill(a-1, b, NewColor, OldColor);" }, { "code": null, "e": 2010, "s": 1971, "text": "floodFill(a, b+1, NewColor, OldColor);" }, { "code": null, "e": 2049, "s": 2010, "text": "floodFill(a, b-1, NewColor, OldColor);" }, { "code": null, "e": 3358, "s": 2049, "text": "// Shows program to fill polygon using floodfill\n// algorithm\n#include <graphics.h>\n#include <stdio.h>\n// Describes flood fill algorithm\nvoid flood(int x1, int y1, int new_col, int old_col){\n // Checking current pixel is old_color or not\n if (getpixel(x1, y1) == old_col) {\n // Putting new pixel with new color\n putpixel(x1, y1, new_col);\n // Shows recursive call for bottom pixel fill\n flood(x1 + 1, y1, new_col, old_col);\n //Shows recursive call for top pixel fill\n flood(x1 - 1, y1, new_col, old_col);\n // Shows recursive call for right pixel fill\n flood(x1, y1 + 1, new_col, old_col);\n // Shows recursive call for left pixel fill\n flood(x1, y1 - 1, new_col, old_col);\n }\n}\nint main(){\n int gd1, gm1 = DETECT;\n // Initializing graph\n initgraph(&gd1, &gm1, \"\");\n //Shows rectangle coordinate\n int top1, left1, bottom1, right1;\n top1 = left1 = 50;\n bottom1 = right1 = 300;\n // Shows rectangle for print rectangle\n rectangle(left1, top1, right1, bottom1);\n // Fills start cordinate\n int x1 = 51;\n int y1 = 51;\n // Shows new color to fill\n int newcolor = 12;\n // Shows old color which you want to replace\n int oldcolor = 0;\n // Calling for fill rectangle\n flood(x1, y1, newcolor, oldcolor);\n getch();\n return 0;\n}" } ]
A program to check if a binary tree is BST or not
18 Feb, 2022

A binary search tree (BST) is a node-based binary tree data structure with the following properties:

The left subtree of a node contains only nodes with keys less than the node's key.
The right subtree of a node contains only nodes with keys greater than the node's key.
Both the left and right subtrees must also be binary search trees.

From the above properties it naturally follows that each node (item in the tree) has a distinct key.

METHOD 1 (Simple but Wrong)
Following is a simple program. For each node, check whether its left child is smaller than the node and its right child is greater than the node.

C++

int isBST(struct node* node)
{
    if (node == NULL)
        return 1;

    /* false if left is > than node */
    if (node->left != NULL && node->left->data > node->data)
        return 0;

    /* false if right is < than node */
    if (node->right != NULL && node->right->data < node->data)
        return 0;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node->left) || !isBST(node->right))
        return 0;

    /* passing all that, it's a BST */
    return 1;
}
// This code is contributed by shubhamsingh10

C

int isBST(struct node* node)
{
    if (node == NULL)
        return 1;

    /* false if left is > than node */
    if (node->left != NULL && node->left->data > node->data)
        return 0;

    /* false if right is < than node */
    if (node->right != NULL && node->right->data < node->data)
        return 0;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node->left) || !isBST(node->right))
        return 0;

    /* passing all that, it's a BST */
    return 1;
}

Java

boolean isBST(Node node)
{
    if (node == null)
        return true;

    /* False if left is > than node */
    if (node.left != null && node.left.data > node.data)
        return false;

    /* False if right is < than node */
    if (node.right != null && node.right.data < node.data)
        return false;

    /* False if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* Passing all that, it's a BST */
    return true;
}
// This code is contributed by shubhamsingh10

Python3

def isBST(node):
    if node is None:
        return True

    # False if left is > than node
    if node.left is not None and node.left.data > node.data:
        return False

    # False if right is < than node
    if node.right is not None and node.right.data < node.data:
        return False

    # False if, recursively, the left or right is not a BST
    if not isBST(node.left) or not isBST(node.right):
        return False

    # Passing all that, it's a BST
    return True

# This code is contributed by Shubham Singh

C#

bool isBST(Node node)
{
    if (node == null)
        return true;

    /* False if left is > than node */
    if (node.left != null && node.left.data > node.data)
        return false;

    /* False if right is < than node */
    if (node.right != null && node.right.data < node.data)
        return false;

    /* False if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* Passing all that, it's a BST */
    return true;
}
// This code is contributed by Rajput-Ji

Javascript

<script>
function isBST(node)
{
    if (node == null)
        return true;

    /* False if left is > than node */
    if (node.left != null && node.left.data > node.data)
        return false;

    /* False if right is < than node */
    if (node.right != null && node.right.data < node.data)
        return false;

    /* False if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* Passing all that, it's a BST */
    return true;
}
// This code is contributed by avanitrachhadiya2155
</script>

This approach is wrong: it will return true for the binary tree below, even though that tree is not a BST, because 4 is in the left subtree of 3.
        3
       / \
      2   5
     / \
    1   4

(The original figure is reconstructed here from the driver code used later in this article.) Method 1 checks each node only against its immediate children, so every local check passes, yet 4 is greater than the root 3 while sitting in its left subtree.

METHOD 2 (Correct but not efficient)
For each node, check if the max value in the left subtree is smaller than the node and the min value in the right subtree is greater than the node.

C++

/* Returns true if a binary tree is a binary search tree */
int isBST(struct node* node)
{
    if (node == NULL)
        return 1;

    /* false if the max of the left is > than us */
    if (node->left != NULL && maxValue(node->left) >= node->data)
        return 0;

    /* false if the min of the right is <= than us */
    if (node->right != NULL && minValue(node->right) <= node->data)
        return 0;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node->left) || !isBST(node->right))
        return 0;

    /* passing all that, it's a BST */
    return 1;
}
// This code is contributed by shubhamsingh10

C

/* Returns true if a binary tree is a binary search tree */
int isBST(struct node* node)
{
    if (node == NULL)
        return 1;

    /* false if the max of the left is > than us */
    if (node->left != NULL && maxValue(node->left) >= node->data)
        return 0;

    /* false if the min of the right is <= than us */
    if (node->right != NULL && minValue(node->right) <= node->data)
        return 0;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node->left) || !isBST(node->right))
        return 0;

    /* passing all that, it's a BST */
    return 1;
}

Java

/* Returns true if a binary tree is a binary search tree */
boolean isBST(Node node)
{
    if (node == null)
        return true;

    /* false if the max of the left is > than us */
    if (node.left != null && maxValue(node.left) >= node.data)
        return false;

    /* false if the min of the right is <= than us */
    if (node.right != null && minValue(node.right) <= node.data)
        return false;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* passing all that, it's a BST */
    return true;
}
// This code is contributed by akshitsaxenaa09
Python3

# Returns true if a binary tree is a binary search tree
def isBST(node):
    if node is None:
        return True

    # false if the max of the left is > than us
    if node.left is not None and maxValue(node.left) >= node.data:
        return False

    # false if the min of the right is <= than us
    if node.right is not None and minValue(node.right) <= node.data:
        return False

    # false if, recursively, the left or right is not a BST
    if not isBST(node.left) or not isBST(node.right):
        return False

    # passing all that, it's a BST
    return True

# This code is contributed by Shubham Singh

C#

/* Returns true if a binary tree is a binary search tree */
bool isBST(Node node)
{
    if (node == null)
        return true;

    /* false if the max of the left is > than us */
    if (node.left != null && maxValue(node.left) >= node.data)
        return false;

    /* false if the min of the right is <= than us */
    if (node.right != null && minValue(node.right) <= node.data)
        return false;

    /* false if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* passing all that, it's a BST */
    return true;
}
// This code is contributed by Shubham Singh

Javascript

<script>
function isBST(node)
{
    if (node == null)
        return true;

    /* False if the max of the left is > than us */
    if (node.left != null && maxValue(node.left) >= node.data)
        return false;

    /* False if the min of the right is <= than us */
    if (node.right != null && minValue(node.right) <= node.data)
        return false;

    /* False if, recursively, the left or right is not a BST */
    if (!isBST(node.left) || !isBST(node.right))
        return false;

    /* Passing all that, it's a BST */
    return true;
}
// This code is contributed by Shubham Singh
</script>

It is assumed that you have helper functions minValue() and maxValue() that return the min or max int value from a non-empty tree.

METHOD 3 (Correct and Efficient):
Method 2 above runs slowly since it traverses over some parts of the tree many times. A better solution looks at each node only once. The trick is to write a utility helper function isBSTUtil(struct node* node, int min, int max) that traverses down the tree keeping track of the narrowing min and max allowed values as it goes, looking at each node only once. The initial values for min and max should be INT_MIN and INT_MAX; they narrow from there.

Note: This method is not applicable if there are duplicate elements with value INT_MIN or INT_MAX.

Below is the implementation of the above approach:

C++

#include <bits/stdc++.h>
using namespace std;

/* A binary tree node has data, pointer to left child
and a pointer to right child */
class node
{
public:
    int data;
    node* left;
    node* right;

    /* Constructor that allocates a new node with the
    given data and NULL left and right pointers. */
    node(int data)
    {
        this->data = data;
        this->left = NULL;
        this->right = NULL;
    }
};

int isBSTUtil(node* node, int min, int max);

/* Returns true if the given tree is a binary search tree
(efficient version). */
int isBST(node* node)
{
    return isBSTUtil(node, INT_MIN, INT_MAX);
}

/* Returns true if the given tree is a BST and its
values are >= min and <= max. */
int isBSTUtil(node* node, int min, int max)
{
    /* an empty tree is BST */
    if (node == NULL)
        return 1;

    /* false if this node violates the min/max constraint */
    if (node->data < min || node->data > max)
        return 0;

    /* otherwise check the subtrees recursively,
    tightening the min or max constraint */
    return isBSTUtil(node->left, min, node->data - 1) &&  // Allow only distinct values
           isBSTUtil(node->right, node->data + 1, max);   // Allow only distinct values
}

/* Driver code */
int main()
{
    node* root = new node(4);
    root->left = new node(2);
    root->right = new node(5);
    root->left->left = new node(1);
    root->left->right = new node(3);

    if (isBST(root))
        cout << "Is BST";
    else
        cout << "Not a BST";

    return 0;
}
// This code is contributed by rathbhupendra

C

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* A binary tree node has data, pointer to left child
and a pointer to right child */
struct node
{
    int data;
    struct node* left;
    struct node* right;
};

int isBSTUtil(struct node* node, int min, int max);

/* Returns true if the given tree is a binary search tree
(efficient version). */
int isBST(struct node* node)
{
    return isBSTUtil(node, INT_MIN, INT_MAX);
}

/* Returns true if the given tree is a BST and its
values are >= min and <= max. */
int isBSTUtil(struct node* node, int min, int max)
{
    /* an empty tree is BST */
    if (node == NULL)
        return 1;

    /* false if this node violates the min/max constraint */
    if (node->data < min || node->data > max)
        return 0;

    /* otherwise check the subtrees recursively,
    tightening the min or max constraint */
    return isBSTUtil(node->left, min, node->data - 1) &&  // Allow only distinct values
           isBSTUtil(node->right, node->data + 1, max);   // Allow only distinct values
}

/* Helper function that allocates a new node with the
given data and NULL left and right pointers. */
struct node* newNode(int data)
{
    struct node* node = (struct node*) malloc(sizeof(struct node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return (node);
}

/* Driver program to test above functions */
int main()
{
    struct node* root = newNode(4);
    root->left = newNode(2);
    root->right = newNode(5);
    root->left->left = newNode(1);
    root->left->right = newNode(3);

    if (isBST(root))
        printf("Is BST");
    else
        printf("Not a BST");

    getchar();
    return 0;
}

Java

// Java implementation to check if given Binary tree
// is a BST or not

/* Class containing left and right child of current
node and key value*/
class Node
{
    int data;
    Node left, right;

    public Node(int item)
    {
        data = item;
        left = right = null;
    }
}

public class BinaryTree
{
    // Root of the Binary Tree
    Node root;

    /* can give min and max value according to your code or
    can write a function to find min and max value of tree. */

    /* returns true if given search tree is binary
    search tree (efficient version) */
    boolean isBST()
    {
        return isBSTUtil(root, Integer.MIN_VALUE, Integer.MAX_VALUE);
    }

    /* Returns true if the given tree is a BST and its
    values are >= min and <= max. */
    boolean isBSTUtil(Node node, int min, int max)
    {
        /* an empty tree is BST */
        if (node == null)
            return true;

        /* false if this node violates the min/max constraints */
        if (node.data < min || node.data > max)
            return false;

        /* otherwise check the subtrees recursively
        tightening the min/max constraints */
        // Allow only distinct values
        return (isBSTUtil(node.left, min, node.data - 1) &&
                isBSTUtil(node.right, node.data + 1, max));
    }

    /* Driver program to test above functions */
    public static void main(String args[])
    {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(4);
        tree.root.left = new Node(2);
        tree.root.right = new Node(5);
        tree.root.left.left = new Node(1);
        tree.root.left.right = new Node(3);

        if (tree.isBST())
            System.out.println("IS BST");
        else
            System.out.println("Not a BST");
    }
}

Python3

# Python program to check if a binary tree is bst or not

INT_MAX = 4294967296
INT_MIN = -4294967296

# A binary tree node
class Node:

    # Constructor to create a new node
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Returns true if the given tree is a binary search tree
# (efficient version)
def isBST(node):
    return isBSTUtil(node, INT_MIN, INT_MAX)

# Returns true if the given tree is a BST and its values
# are >= mini and <= maxi
def isBSTUtil(node, mini, maxi):

    # An empty tree is BST
    if node is None:
        return True

    # False if this node violates min/max constraint
    if node.data < mini or node.data > maxi:
        return False

    # Otherwise check the subtrees recursively,
    # tightening the min or max constraint
    return (isBSTUtil(node.left, mini, node.data - 1) and
            isBSTUtil(node.right, node.data + 1, maxi))

# Driver program to test above function
root = Node(4)
root.left = Node(2)
root.right = Node(5)
root.left.left = Node(1)
root.left.right = Node(3)

if isBST(root):
    print("Is BST")
else:
    print("Not a BST")

# This code is contributed by Nikhil Kumar Singh(nickzuck_007)

C#

using System;

// C# implementation to check if given Binary tree
// is a BST or not

/* Class containing left and right child of current
node and key value*/
public class Node
{
    public int data;
    public Node left, right;

    public Node(int item)
    {
        data = item;
        left = right = null;
    }
}

public class BinaryTree
{
    // Root of the Binary Tree
    public Node root;

    /* can give min and max value according to your code or
    can write a function to find min and max value of tree. */

    /* returns true if given search tree is binary
    search tree (efficient version) */
    public virtual bool BST
    {
        get
        {
            return isBSTUtil(root, int.MinValue, int.MaxValue);
        }
    }

    /* Returns true if the given tree is a BST and its
    values are >= min and <= max. */
    public virtual bool isBSTUtil(Node node, int min, int max)
    {
        /* an empty tree is BST */
        if (node == null)
        {
            return true;
        }

        /* false if this node violates the min/max constraints */
        if (node.data < min || node.data > max)
        {
            return false;
        }

        /* otherwise check the subtrees recursively
        tightening the min/max constraints */
        // Allow only distinct values
        return (isBSTUtil(node.left, min, node.data - 1) &&
                isBSTUtil(node.right, node.data + 1, max));
    }

    /* Driver program to test above functions */
    public static void Main(string[] args)
    {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(4);
        tree.root.left = new Node(2);
        tree.root.right = new Node(5);
        tree.root.left.left = new Node(1);
        tree.root.left.right = new Node(3);

        if (tree.BST)
        {
            Console.WriteLine("IS BST");
        }
        else
        {
            Console.WriteLine("Not a BST");
        }
    }
}
// This code is contributed by Shrikant13

Javascript

<script>
// Javascript implementation to check if given
// Binary tree is a BST or not

/* Class containing left and right child of current
node and key value*/
class Node
{
    constructor(item)
    {
        this.data = item;
        this.left = this.right = null;
    }
}

// Root of the Binary Tree
let root;

/* can give min and max value according to your code or
can write a function to find min and max value of tree. */

/* returns true if given search tree is binary
search tree (efficient version) */
function isBST()
{
    // Note: Number.MIN_SAFE_INTEGER is used here because
    // Number.MIN_VALUE is a tiny positive number, not the
    // smallest representable integer.
    return isBSTUtil(root, Number.MIN_SAFE_INTEGER,
                     Number.MAX_SAFE_INTEGER);
}

/* Returns true if the given tree is a BST and its
values are >= min and <= max. */
function isBSTUtil(node, min, max)
{
    /* an empty tree is BST */
    if (node == null)
        return true;

    /* false if this node violates the min/max constraints */
    if (node.data < min || node.data > max)
        return false;

    /* otherwise check the subtrees recursively
    tightening the min/max constraints */
    // Allow only distinct values
    return (isBSTUtil(node.left, min, node.data - 1) &&
            isBSTUtil(node.right, node.data + 1, max));
}

/* Driver program to test above functions */
root = new Node(4);
root.left = new Node(2);
root.right = new Node(5);
root.left.left = new Node(1);
root.left.right = new Node(3);

if (isBST())
    document.write("IS BST<br>");
else
    document.write("Not a BST<br>");

// This code is contributed by rag2127
</script>

Output:

IS BST

Time Complexity: O(n)
Auxiliary Space: O(1) if the function call stack is not considered, otherwise O(n)

Simplified Method 3
We can simplify Method 3 by passing NULL pointers instead of the INT_MIN and INT_MAX values.

C++

// C++ program to check if a given tree is BST.
#include <bits/stdc++.h>
using namespace std;

/* A binary tree node has data, pointer to left child
and a pointer to right child */
struct Node
{
    int data;
    struct Node *left, *right;
};

// Returns true if given tree is BST.
bool isBST(Node* root, Node* l = NULL, Node* r = NULL)
{
    // Base condition
    if (root == NULL)
        return true;

    // if a left ancestor exists, then the current node's
    // data should be greater than that ancestor's data
    if (l != NULL and root->data <= l->data)
        return false;

    // if a right ancestor exists, then the current node's
    // data should be less than that ancestor's data
    if (r != NULL and root->data >= r->data)
        return false;

    // check recursively for every node.
    return isBST(root->left, l, root) and
           isBST(root->right, root, r);
}

/* Helper function that allocates a new node with the
given data and NULL left and right pointers. */
struct Node* newNode(int data)
{
    struct Node* node = new Node;
    node->data = data;
    node->left = node->right = NULL;
    return (node);
}

/* Driver program to test above functions */
int main()
{
    struct Node* root = newNode(3);
    root->left = newNode(2);
    root->right = newNode(5);
    root->left->left = newNode(1);
    root->left->right = newNode(4);

    if (isBST(root, NULL, NULL))
        cout << "Is BST";
    else
        cout << "Not a BST";

    return 0;
}

Java

// Java program to check if a given tree is BST.
class Sol
{
    // A binary tree node has data, pointer to
    // left child and a pointer to right child
    static class Node
    {
        int data;
        Node left, right;
    };

    // Returns true if given tree is BST.
    static boolean isBST(Node root, Node l, Node r)
    {
        // Base condition
        if (root == null)
            return true;

        // if a left ancestor exists, then the current node's
        // data should be greater than that ancestor's data
        if (l != null && root.data <= l.data)
            return false;

        // if a right ancestor exists, then the current node's
        // data should be less than that ancestor's data
        if (r != null && root.data >= r.data)
            return false;

        // check recursively for every node.
        return isBST(root.left, l, root) &&
               isBST(root.right, root, r);
    }

    // Helper function that allocates a new node with the
    // given data and null left and right pointers.
    static Node newNode(int data)
    {
        Node node = new Node();
        node.data = data;
        node.left = node.right = null;
        return (node);
    }

    // Driver code
    public static void main(String args[])
    {
        Node root = newNode(3);
        root.left = newNode(2);
        root.right = newNode(5);
        root.left.left = newNode(1);
        root.left.right = newNode(4);

        if (isBST(root, null, null))
            System.out.print("Is BST");
        else
            System.out.print("Not a BST");
    }
}
// This code is contributed by Arnab Kundu

Python3

""" Program to check if a given binary tree is a BST """

# Helper class that allocates a new node with the
# given data and None left and right pointers.
class newNode:

    # Construct to create a new node
    def __init__(self, key):
        self.data = key
        self.left = None
        self.right = None

# Returns true if given tree is BST.
def isBST(root, l=None, r=None):

    # Base condition
    if root is None:
        return True

    # if a left ancestor exists, then the current node's
    # data should be greater than that ancestor's data
    if l is not None and root.data <= l.data:
        return False

    # if a right ancestor exists, then the current node's
    # data should be less than that ancestor's data
    if r is not None and root.data >= r.data:
        return False

    # check recursively for every node.
    return isBST(root.left, l, root) and \
           isBST(root.right, root, r)

# Driver Code
if __name__ == '__main__':
    root = newNode(3)
    root.left = newNode(2)
    root.right = newNode(5)
    root.left.left = newNode(1)
    root.left.right = newNode(4)

    if isBST(root, None, None):
        print("Is BST")
    else:
        print("Not a BST")

# This code is contributed by Shubham Singh (SHUBHAMSINGH10)

C#

// C# program to check if a given tree is BST.
using System;

class GFG
{
    // A binary tree node has data, pointer to
    // left child and a pointer to right child
    public class Node
    {
        public int data;
        public Node left, right;
    };

    // Returns true if given tree is BST.
    static Boolean isBST(Node root, Node l, Node r)
    {
        // Base condition
        if (root == null)
            return true;

        // if a left ancestor exists, then the current node's
        // data should be greater than that ancestor's data
        if (l != null && root.data <= l.data)
            return false;

        // if a right ancestor exists, then the current node's
        // data should be less than that ancestor's data
        if (r != null && root.data >= r.data)
            return false;

        // check recursively for every node.
        return isBST(root.left, l, root) &&
               isBST(root.right, root, r);
    }

    // Helper function that allocates a new node with the
    // given data and null left and right pointers.
    static Node newNode(int data)
    {
        Node node = new Node();
        node.data = data;
        node.left = node.right = null;
        return (node);
    }

    // Driver code
    public static void Main(String[] args)
    {
        Node root = newNode(3);
        root.left = newNode(2);
        root.right = newNode(5);
        root.left.left = newNode(1);
        root.left.right = newNode(4);

        if (isBST(root, null, null))
            Console.Write("Is BST");
        else
            Console.Write("Not a BST");
    }
}
// This code is contributed by 29AjayKumar

Javascript

<script>
// JavaScript program to check if a given tree is BST.

class Node
{
    constructor(data)
    {
        this.left = null;
        this.right = null;
        this.data = data;
    }
}

// Returns true if given tree is BST.
function isBST(root, l, r)
{
    // Base condition
    if (root == null)
        return true;

    // if a left ancestor exists, then the current node's
    // data should be greater than that ancestor's data
    if (l != null && root.data <= l.data)
        return false;

    // if a right ancestor exists, then the current node's
    // data should be less than that ancestor's data
    if (r != null && root.data >= r.data)
        return false;

    // check recursively for every node.
    return isBST(root.left, l, root) &&
           isBST(root.right, root, r);
}

// Helper function that allocates a new node with the
// given data and null left and right pointers.
function newNode(data)
{
    let node = new Node(data);
    return (node);
}

let root = newNode(3);
root.left = newNode(2);
root.right = newNode(5);
root.left.left = newNode(1);
root.left.right = newNode(4);

if (isBST(root, null, null))
    document.write("Is BST");
else
    document.write("Not a BST");
</script>

Output:

Not a BST

Thanks to Abhinesh Garhwal for suggesting the above solution.

METHOD 4 (Using In-Order Traversal)
Thanks to LJW489 for suggesting this method.
1) Do an in-order traversal of the given tree and store the result in a temp array.
2) This method assumes that there are no duplicate values in the tree.
3) Check if the temp array is sorted in ascending order. If it is, then the tree is a BST. (A sketch of this array-based variant is given below.)

Time Complexity: O(n)
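The array-based variant in steps 1-3 has no code listing in the original article; a minimal Python sketch of it could look like this (the Node class with data/left/right fields is assumed to match the Python listings above):

# Minimal sketch of METHOD 4, steps 1-3: collect an in-order
# traversal into a list, then verify the list is strictly ascending.
def inorder(node, out):
    if node is None:
        return
    inorder(node.left, out)
    out.append(node.data)
    inorder(node.right, out)

def is_bst_inorder(root):
    values = []
    inorder(root, values)
    # Strictly ascending (no duplicates allowed) means the tree is a BST
    return all(values[i] < values[i + 1] for i in range(len(values) - 1))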
We can avoid the use of an auxiliary array. While doing the in-order traversal, we can keep track of the previously visited node. If the value of the currently visited node is less than the previous value, then the tree is not a BST. Thanks to ygos for this space optimization.

C++

bool isBST(node* root)
{
    static node* prev = NULL;

    // traverse the tree in inorder fashion
    // and keep track of prev node
    if (root)
    {
        if (!isBST(root->left))
            return false;

        // Allows only distinct valued nodes
        if (prev != NULL && root->data <= prev->data)
            return false;

        prev = root;

        return isBST(root->right);
    }
    return true;
}
// This code is contributed by rathbhupendra

C

bool isBST(struct node* root)
{
    static struct node* prev = NULL;

    // traverse the tree in inorder fashion and
    // keep track of prev node
    if (root)
    {
        if (!isBST(root->left))
            return false;

        // Allows only distinct valued nodes
        if (prev != NULL && root->data <= prev->data)
            return false;

        prev = root;

        return isBST(root->right);
    }
    return true;
}

Java

// Java implementation to check if given Binary tree
// is a BST or not

/* Class containing left and right child of current
node and key value*/
class Node
{
    int data;
    Node left, right;

    public Node(int item)
    {
        data = item;
        left = right = null;
    }
}

public class BinaryTree
{
    // Root of the Binary Tree
    Node root;

    // To keep track of previous node in Inorder Traversal
    Node prev;

    boolean isBST()
    {
        prev = null;
        return isBST(root);
    }

    /* Returns true if given search tree is binary
    search tree (efficient version) */
    boolean isBST(Node node)
    {
        // traverse the tree in inorder fashion and
        // keep a track of previous node
        if (node != null)
        {
            if (!isBST(node.left))
                return false;

            // allows only distinct valued nodes
            if (prev != null && node.data <= prev.data)
                return false;

            prev = node;
            return isBST(node.right);
        }
        return true;
    }

    /* Driver program to test above functions */
    public static void main(String args[])
    {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(4);
        tree.root.left = new Node(2);
        tree.root.right = new Node(5);
        tree.root.left.left = new Node(1);
        tree.root.left.right = new Node(3);

        if (tree.isBST())
            System.out.println("IS BST");
        else
            System.out.println("Not a BST");
    }
}

Python3

# Python implementation to check if
# given Binary tree is a BST or not

# A binary tree node containing data
# field, left and right pointers
class Node:

    # constructor to create new node
    def __init__(self, val):
        self.data = val
        self.left = None
        self.right = None

# global variable prev - to keep track
# of previous node during Inorder traversal
prev = None

# function to check if given binary tree is BST
def isbst(root):
    global prev
    prev = None
    return isbst_rec(root)

# Helper function: traverse the tree in inorder fashion
# and keep track of the previous node; return true if the
# tree is a binary search tree, otherwise false
def isbst_rec(root):
    global prev

    # if tree is empty return true
    if root is None:
        return True

    if isbst_rec(root.left) is False:
        return False

    # if the previous node's data is greater than or equal
    # to the current node's data, return false
    # (allows only distinct valued nodes)
    if prev is not None and prev.data >= root.data:
        return False

    # store the current node in prev
    prev = root

    return isbst_rec(root.right)

# driver code to test above function
root = Node(4)
root.left = Node(2)
root.right = Node(5)
root.left.left = Node(1)
root.left.right = Node(3)

if isbst(root):
    print("is BST")
else:
    print("not a BST")

# This code is contributed by
# Shweta Singh(shweta44)

C#

// C# implementation to check if
// given Binary tree is a BST or not
using System;

/* Class containing left and right child of
current node and key value*/
class Node
{
    public int data;
    public Node left, right;

    public Node(int item)
    {
        data = item;
        left = right = null;
    }
}

public class BinaryTree
{
    // Root of the Binary Tree
    Node root;

    // To keep track of previous node
    // in Inorder Traversal
    Node prev;

    Boolean isBST()
    {
        prev = null;
        return isBST(root);
    }

    /* Returns true if given search tree is binary
    search tree (efficient version) */
    Boolean isBST(Node node)
    {
        // traverse the tree in inorder fashion and
        // keep a track of previous node
        if (node != null)
        {
            if (!isBST(node.left))
                return false;

            // allows only distinct valued nodes
            if (prev != null && node.data <= prev.data)
                return false;

            prev = node;
            return isBST(node.right);
        }
        return true;
    }

    // Driver Code
    public static void Main(String[] args)
    {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(4);
        tree.root.left = new Node(2);
        tree.root.right = new Node(5);
        tree.root.left.left = new Node(1);
        tree.root.left.right = new Node(3);

        if (tree.isBST())
            Console.WriteLine("IS BST");
        else
            Console.WriteLine("Not a BST");
    }
}
// This code is contributed by Rajput-Ji

Javascript

<script>
// Javascript implementation to check if given Binary tree
// is a BST or not

/* Class containing left and right child of current
node and key value*/
class Node
{
    constructor(item)
    {
        this.data = item;
        this.left = this.right = null;
    }
}

// Root of the Binary Tree
let root;

// To keep track of previous node in Inorder Traversal
let prev;

function isBST()
{
    prev = null;
    return _isBST(root);
}

/* Returns true if given search tree is binary
search tree (efficient version) */
function _isBST(node)
{
    // traverse the tree in inorder fashion and
    // keep a track of previous node
    if (node != null)
    {
        if (!_isBST(node.left))
            return false;

        // allows only distinct valued nodes
        if (prev != null && node.data <= prev.data)
            return false;

        prev = node;
        return _isBST(node.right);
    }
    return true;
}

/* Driver program to test above functions */
root = new Node(4);
root.left = new Node(2);
root.right = new Node(5);
root.left.left = new Node(1);
root.left.right = new Node(3);

if (isBST())
    document.write("IS BST");
else
    document.write("Not a BST");

// This code is contributed by unknown2108
</script>

The use of a static variable can also be avoided by using a reference to the prev node as a parameter.

C++

// C++ program to check if a given tree is BST.
#include <bits/stdc++.h>
using namespace std;

/* A binary tree node has data, pointer to
left child and a pointer to right child */
struct Node
{
    int data;
    struct Node *left, *right;

    Node(int data)
    {
        this->data = data;
        left = right = NULL;
    }
};

bool isBSTUtil(struct Node* root, Node*& prev)
{
    // traverse the tree in inorder fashion and
    // keep track of prev node
    if (root)
    {
        if (!isBSTUtil(root->left, prev))
            return false;

        // Allows only distinct valued nodes
        if (prev != NULL && root->data <= prev->data)
            return false;

        prev = root;
        return isBSTUtil(root->right, prev);
    }
    return true;
}

bool isBST(Node* root)
{
    Node* prev = NULL;
    return isBSTUtil(root, prev);
}

/* Driver program to test above functions */
int main()
{
    struct Node* root = new Node(3);
    root->left = new Node(2);
    root->right = new Node(5);
    root->left->left = new Node(1);
    root->left->right = new Node(4);

    if (isBST(root))
        cout << "Is BST";
    else
        cout << "Not a BST";

    return 0;
}

Java

// Java program to check if a given tree is BST.
import java.io.*;

class GFG
{
    /* A binary tree node has data, pointer to
    left child and a pointer to right child */
    public static class Node
    {
        public int data;
        public Node left, right;

        public Node(int data)
        {
            this.data = data;
            left = right = null;
        }
    };

    static Node prev;

    static Boolean isBSTUtil(Node root)
    {
        // traverse the tree in inorder fashion and
        // keep track of prev node
        if (root != null)
        {
            if (!isBSTUtil(root.left))
                return false;

            // Allows only distinct valued nodes
            if (prev != null && root.data <= prev.data)
                return false;

            prev = root;
            return isBSTUtil(root.right);
        }
        return true;
    }

    static Boolean isBST(Node root)
    {
        return isBSTUtil(root);
    }

    // Driver Code
    public static void main(String[] args)
    {
        Node root = new Node(3);
        root.left = new Node(2);
        root.right = new Node(5);
        root.left.left = new Node(1);
        root.left.right = new Node(4);

        if (isBST(root))
            System.out.println("Is BST");
        else
            System.out.println("Not a BST");
    }
}
// This code is contributed by Shubham Singh

Python3

# Python3 program to check if a given tree is BST.

# A binary tree node has data, a pointer to its
# left child and a pointer to its right child
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Rebinding a parameter in Python does not affect the
# caller, so 'prev' is kept in a one-element list that
# acts as a mutable reference.
def isBSTUtil(root, prev):

    # traverse the tree in inorder fashion
    # and keep track of the prev node
    if root is not None:
        if not isBSTUtil(root.left, prev):
            return False

        # Allows only distinct valued nodes
        if prev[0] is not None and root.data <= prev[0].data:
            return False

        prev[0] = root
        return isBSTUtil(root.right, prev)
    return True

def isBST(root):
    prev = [None]
    return isBSTUtil(root, prev)

# Driver Code
if __name__ == '__main__':
    root = Node(3)
    root.left = Node(2)
    root.right = Node(5)
    root.left.left = Node(1)
    root.left.right = Node(4)

    if isBST(root):
        print("Is BST")
    else:
        print("Not a BST")

# This code is contributed by Srathore

C#

// C# program to check if a given tree is BST.
using System;

public class GFG
{
    /* A binary tree node has data, pointer to
    left child and a pointer to right child */
    public class Node
    {
        public int data;
        public Node left, right;

        public Node(int data)
        {
            this.data = data;
            left = right = null;
        }
    };

    static Node prev;

    static Boolean isBSTUtil(Node root)
    {
        // traverse the tree in inorder fashion and
        // keep track of prev node
        if (root != null)
        {
            if (!isBSTUtil(root.left))
                return false;

            // Allows only distinct valued nodes
            if (prev != null && root.data <= prev.data)
                return false;

            prev = root;
            return isBSTUtil(root.right);
        }
        return true;
    }

    static Boolean isBST(Node root)
    {
        return isBSTUtil(root);
    }

    // Driver Code
    public static void Main(String[] args)
    {
        Node root = new Node(3);
        root.left = new Node(2);
        root.right = new Node(5);
        root.left.left = new Node(1);
        root.left.right = new Node(4);

        if (isBST(root))
            Console.WriteLine("Is BST");
        else
            Console.WriteLine("Not a BST");
    }
}
// This code is contributed by Rajput-Ji

Javascript

<script>
// Javascript program to check if a given tree is BST.

class Node
{
    constructor(data)
    {
        this.left = null;
        this.right = null;
        this.data = data;
    }
}

let prev;

function isBSTUtil(root)
{
    // traverse the tree in inorder fashion and
    // keep track of prev node
    if (root != null)
    {
        if (!isBSTUtil(root.left))
            return false;

        // Allows only distinct valued nodes
        if (prev != null && root.data <= prev.data)
            return false;

        prev = root;
        return isBSTUtil(root.right);
    }
    return true;
}

function isBST(root)
{
    return isBSTUtil(root);
}

let root = new Node(3);
root.left = new Node(2);
root.right = new Node(5);
root.left.left = new Node(1);
root.left.right = new Node(4);

if (isBST(root))
    document.write("Is BST");
else
    document.write("Not a BST");

// This code is contributed by divyeshrabadiya07
</script>

Output:

Not a BST

Sources:
http://en.wikipedia.org/wiki/Binary_search_tree
http://cslibrary.stanford.edu/110/BinaryTrees.html

Please write comments if you find any bug in the above programs/algorithms or other ways to solve the same problem.
[ { "code": null, "e": 24990, "s": 24962, "text": "\n18 Feb, 2022" }, { "code": null, "e": 25097, "s": 24990, "text": "A binary search tree (BST) is a node based binary tree data structure which has the following properties. " }, { "code": null, "e": 25180, "s": 25097, "text": "The left subtree of a node contains only nodes with keys less than the node’s key." }, { "code": null, "e": 25267, "s": 25180, "text": "The right subtree of a node contains only nodes with keys greater than the node’s key." }, { "code": null, "e": 25334, "s": 25267, "text": "Both the left and right subtrees must also be binary search trees." }, { "code": null, "e": 25388, "s": 25334, "text": "From the above properties it naturally follows that: " }, { "code": null, "e": 25437, "s": 25388, "text": "Each node (item in the tree) has a distinct key." }, { "code": null, "e": 25612, "s": 25437, "text": "METHOD 1 (Simple but Wrong) Following is a simple program. For each node, check if the left node of it is smaller than the node and right node of it is greater than the node." }, { "code": null, "e": 25616, "s": 25612, "text": "C++" }, { "code": null, "e": 25618, "s": 25616, "text": "C" }, { "code": null, "e": 25623, "s": 25618, "text": "Java" }, { "code": null, "e": 25631, "s": 25623, "text": "Python3" }, { "code": null, "e": 25634, "s": 25631, "text": "C#" }, { "code": null, "e": 25645, "s": 25634, "text": "Javascript" }, { "code": "int isBST(struct node* node){ if (node == NULL) return 1; /* false if left is > than node */ if (node->left != NULL && node->left->data > node->data) return 0; /* false if right is < than node */ if (node->right != NULL && node->right->data < node->data) return 0; /* false if, recursively, the left or right is not a BST */ if (!isBST(node->left) || !isBST(node->right)) return 0; /* passing all that, it's a BST */ return 1;} // This code is contributed by shubhamsingh10", "e": 26159, "s": 25645, "text": null }, { "code": "int isBST(struct node* node){ if (node == NULL) return 1; /* false if left is > than node */ if (node->left != NULL && node->left->data > node->data) return 0; /* false if right is < than node */ if (node->right != NULL && node->right->data < node->data) return 0; /* false if, recursively, the left or right is not a BST */ if (!isBST(node->left) || !isBST(node->right)) return 0; /* passing all that, it's a BST */ return 1;}", "e": 26626, "s": 26159, "text": null }, { "code": "boolean isBST(Node node){ if (node == null) return true; /* False if left is > than node */ if (node.left != null && node.left.data > node.data) return false; /* False if right is < than node */ if (node.right != null && node.right.data < node.data) return false; /* False if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return false; /* Passing all that, it's a BST */ return true;} // This code is contributed by shubhamsingh10", "e": 27179, "s": 26626, "text": null }, { "code": "def isBST(node): if (node == None): return 1 ''' false if left is > than node ''' if (node.left != None and node.left.data > node.data): return 0 ''' false if right is < than node ''' if (node.right != None and node.right.data < node.data): return 0 ''' false if, recursively, the left or right is not a BST ''' if (!isBST(node.left) or !isBST(node.right)): return 0 ''' passing all that, it's a BST ''' return 1 # This code is contributed by Shubham Singh", "e": 27719, "s": 27179, "text": null }, { "code": "bool isBST(Node node){ if (node == null) return true; /* False if left is > than 
node */ if (node.left != null && node.left.data > node.data) return false; /* False if right is < than node */ if (node.right != null && node.right.data < node.data) return false; /* False if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return false; /* Passing all that, it's a BST */ return true;} // This code is contributed by Rajput-Ji", "e": 28264, "s": 27719, "text": null }, { "code": "<script> function isBST(node){ if (node == null) return true; /* False if left is > than node */ if (node.left != null && node.left.data > node.data) return false; /* False if right is < than node */ if (node.right != null && node.right.data < node.data) return false; /* False if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return false; /* Passing all that, it's a BST */ return true;} // This code is contributed by avanitrachhadiya2155 </script>", "e": 28843, "s": 28264, "text": null }, { "code": null, "e": 28978, "s": 28843, "text": "This approach is wrong as this will return true for below binary tree (and below tree is not a BST because 4 is in left subtree of 3) " }, { "code": null, "e": 29145, "s": 28978, "text": "METHOD 2 (Correct but not efficient) For each node, check if max value in left subtree is smaller than the node and min value in right subtree greater than the node. " }, { "code": null, "e": 29149, "s": 29145, "text": "C++" }, { "code": null, "e": 29151, "s": 29149, "text": "C" }, { "code": null, "e": 29156, "s": 29151, "text": "Java" }, { "code": null, "e": 29164, "s": 29156, "text": "Python3" }, { "code": null, "e": 29167, "s": 29164, "text": "C#" }, { "code": null, "e": 29178, "s": 29167, "text": "Javascript" }, { "code": "/* Returns true if a binary tree is a binary search tree */int isBST(struct node* node){ if (node == NULL) return 1; /* false if the max of the left is > than us */ if (node->left != NULL && maxValue(node->left) >= node->data) return 0; /* false if the min of the right is <= than us */ if (node->right != NULL && minValue(node->right) <= node->data) return 0; /* false if, recursively, the left or right is not a BST */ if (!isBST(node->left) || !isBST(node->right)) return 0; /* passing all that, it's a BST */ return 1;} // This code is contributed by shubhamsingh10", "e": 29787, "s": 29178, "text": null }, { "code": "/* Returns true if a binary tree is a binary search tree */int isBST(struct node* node){ if (node == NULL) return 1; /* false if the max of the left is > than us */ if (node->left!=NULL && maxValue(node->left) > node->data) return 0; /* false if the min of the right is <= than us */ if (node->right!=NULL && minValue(node->right) < node->data) return 0; /* false if, recursively, the left or right is not a BST */ if (!isBST(node->left) || !isBST(node->right)) return 0; /* passing all that, it's a BST */ return 1;}", "e": 30344, "s": 29787, "text": null }, { "code": "/* Returns true if a binary tree is a binary search tree */int isBST(Node node){ if (node == null) return 1; /* false if the max of the left is > than us */ if (node.left != null && maxValue(node.left) >= node.data) return 0; /* false if the min of the right is <= than us */ if (node.right != null && minValue(node.right) <= node.data) return 0; /* false if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return 0; /* passing all that, it's a BST */ return 1;} // This code is contributed by akshitsaxenaa09.", "e": 30939, "s": 30344, "text": null }, { "code": 
"''' Returns true if a binary tree is a binary search tree '''def isBST(node): if (node == None): return 1 ''' false if the max of the left is > than us ''' if (node.left != None and maxValue(node.left) >= node.data): return 0 ''' false if the min of the right is <= than us ''' if (node.right != None and minValue(node.right) <= node.data): return 0 ''' false if, recursively, the left or right is not a BST ''' if (!isBST(node.left) or !isBST(node.right)): return 0 ''' passing all that, it's a BST ''' return 1 # This code is contributed by Shubham Singh", "e": 31570, "s": 30939, "text": null }, { "code": "/* Returns true if a binary tree is a binary search tree */bool isBST(Node node){ if (node == null) return true; /* false if the max of the left is > than us */ if (node.left != null && maxValue(node.left) >= node.data) return false; /* false if the min of the right is <= than us */ if (node.right != null && minValue(node.right) <= node.data) return false; /* false if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return false; /* passing all that, it's a BST */ return true;} // This code is contributed by Shubham Singh", "e": 32217, "s": 31570, "text": null }, { "code": "<script> function isBST(node){ if (node == null) return true; /* False if the max of the left is > than us */ if (node.left != null && maxValue(node.left) >= node.data) return false; /* False if the min of the right is <= than us */ if (node.right != null && minValue(node.right) <= node.data) return false; /* False if, recursively, the left or right is not a BST */ if (!isBST(node.left) || !isBST(node.right)) return false; /* Passing all that, it's a BST */ return true;} // This code is contributed by Shubham Singh </script>", "e": 32827, "s": 32217, "text": null }, { "code": null, "e": 32957, "s": 32827, "text": "It is assumed that you have helper functions minValue() and maxValue() that return the min or max int value from a non-empty tree" }, { "code": null, "e": 33443, "s": 32957, "text": "METHOD 3 (Correct and Efficient): Method 2 above runs slowly since it traverses over some parts of the tree many times. A better solution looks at each node only once. The trick is to write a utility helper function isBSTUtil(struct node* node, int min, int max) that traverses down the tree keeping track of the narrowing min and max allowed values as it goes, looking at each node only once. The initial values for min and max should be INT_MIN and INT_MAX — they narrow from there. " }, { "code": null, "e": 33542, "s": 33443, "text": "Note: This method is not applicable if there are duplicate elements with value INT_MIN or INT_MAX." }, { "code": null, "e": 33594, "s": 33542, "text": "Below is the implementation of the above approach: " }, { "code": null, "e": 33598, "s": 33594, "text": "C++" }, { "code": null, "e": 33600, "s": 33598, "text": "C" }, { "code": null, "e": 33605, "s": 33600, "text": "Java" }, { "code": null, "e": 33613, "s": 33605, "text": "Python3" }, { "code": null, "e": 33616, "s": 33613, "text": "C#" }, { "code": null, "e": 33627, "s": 33616, "text": "Javascript" }, { "code": "#include<bits/stdc++.h> using namespace std; /* A binary tree node has data,pointer to left child anda pointer to right child */class node{ public: int data; node* left; node* right; /* Constructor that allocates a new node with the given data and NULL left and right pointers. 
*/ node(int data) { this->data = data; this->left = NULL; this->right = NULL; }}; int isBSTUtil(node* node, int min, int max); /* Returns true if the giventree is a binary search tree(efficient version). */int isBST(node* node){ return(isBSTUtil(node, INT_MIN, INT_MAX));} /* Returns true if the giventree is a BST and its valuesare >= min and <= max. */int isBSTUtil(node* node, int min, int max){ /* an empty tree is BST */ if (node==NULL) return 1; /* false if this node violates the min/max constraint */ if (node->data < min || node->data > max) return 0; /* otherwise check the subtrees recursively, tightening the min or max constraint */ return isBSTUtil(node->left, min, node->data-1) && // Allow only distinct values isBSTUtil(node->right, node->data+1, max); // Allow only distinct values} /* Driver code*/int main(){ node *root = new node(4); root->left = new node(2); root->right = new node(5); root->left->left = new node(1); root->left->right = new node(3); if(isBST(root)) cout<<\"Is BST\"; else cout<<\"Not a BST\"; return 0;} // This code is contributed by rathbhupendra", "e": 35172, "s": 33627, "text": null }, { "code": "#include <stdio.h>#include <stdlib.h>#include <limits.h> /* A binary tree node has data, pointer to left child and a pointer to right child */struct node{ int data; struct node* left; struct node* right;}; int isBSTUtil(struct node* node, int min, int max); /* Returns true if the given tree is a binary search tree (efficient version). */int isBST(struct node* node){ return(isBSTUtil(node, INT_MIN, INT_MAX));} /* Returns true if the given tree is a BST and its values are >= min and <= max. */int isBSTUtil(struct node* node, int min, int max){ /* an empty tree is BST */ if (node==NULL) return 1; /* false if this node violates the min/max constraint */ if (node->data < min || node->data > max) return 0; /* otherwise check the subtrees recursively, tightening the min or max constraint */ return isBSTUtil(node->left, min, node->data-1) && // Allow only distinct values isBSTUtil(node->right, node->data+1, max); // Allow only distinct values} /* Helper function that allocates a new node with the given data and NULL left and right pointers. */struct node* newNode(int data){ struct node* node = (struct node*) malloc(sizeof(struct node)); node->data = data; node->left = NULL; node->right = NULL; return(node);} /* Driver program to test above functions*/int main(){ struct node *root = newNode(4); root->left = newNode(2); root->right = newNode(5); root->left->left = newNode(1); root->left->right = newNode(3); if(isBST(root)) printf(\"Is BST\"); else printf(\"Not a BST\"); getchar(); return 0;} ", "e": 36788, "s": 35172, "text": null }, { "code": "//Java implementation to check if given Binary tree//is a BST or not /* Class containing left and right child of current node and key value*/class Node{ int data; Node left, right; public Node(int item) { data = item; left = right = null; }} public class BinaryTree{ //Root of the Binary Tree Node root; /* can give min and max value according to your code or can write a function to find min and max value of tree. */ /* returns true if given search tree is binary search tree (efficient version) */ boolean isBST() { return isBSTUtil(root, Integer.MIN_VALUE, Integer.MAX_VALUE); } /* Returns true if the given tree is a BST and its values are >= min and <= max. 
*/ boolean isBSTUtil(Node node, int min, int max) { /* an empty tree is BST */ if (node == null) return true; /* false if this node violates the min/max constraints */ if (node.data < min || node.data > max) return false; /* otherwise check the subtrees recursively tightening the min/max constraints */ // Allow only distinct values return (isBSTUtil(node.left, min, node.data-1) && isBSTUtil(node.right, node.data+1, max)); } /* Driver program to test above functions */ public static void main(String args[]) { BinaryTree tree = new BinaryTree(); tree.root = new Node(4); tree.root.left = new Node(2); tree.root.right = new Node(5); tree.root.left.left = new Node(1); tree.root.left.right = new Node(3); if (tree.isBST()) System.out.println(\"IS BST\"); else System.out.println(\"Not a BST\"); }}", "e": 38548, "s": 36788, "text": null }, { "code": "# Python program to check if a binary tree is bst or not INT_MAX = 4294967296INT_MIN = -4294967296 # A binary tree nodeclass Node: # Constructor to create a new node def __init__(self, data): self.data = data self.left = None self.right = None # Returns true if the given tree is a binary search tree# (efficient version)def isBST(node): return (isBSTUtil(node, INT_MIN, INT_MAX)) # Returns true if the given tree is a BST and its values# >= min and <= maxdef isBSTUtil(node, mini, maxi): # An empty tree is BST if node is None: return True # False if this node violates min/max constraint if node.data < mini or node.data > maxi: return False # Otherwise check the subtrees recursively # tightening the min or max constraint return (isBSTUtil(node.left, mini, node.data -1) and isBSTUtil(node.right, node.data+1, maxi)) # Driver program to test above functionroot = Node(4)root.left = Node(2)root.right = Node(5)root.left.left = Node(1)root.left.right = Node(3) if (isBST(root)): print (\"Is BST\")else: print (\"Not a BST\") # This code is contributed by Nikhil Kumar Singh(nickzuck_007)", "e": 39722, "s": 38548, "text": null }, { "code": "using System; // C# implementation to check if given Binary tree//is a BST or not /* Class containing left and right child of current node and key value*/public class Node{ public int data; public Node left, right; public Node(int item) { data = item; left = right = null; }} public class BinaryTree{ //Root of the Binary Tree public Node root; /* can give min and max value according to your code or can write a function to find min and max value of tree. */ /* returns true if given search tree is binary search tree (efficient version) */ public virtual bool BST { get { return isBSTUtil(root, int.MinValue, int.MaxValue); } } /* Returns true if the given tree is a BST and its values are >= min and <= max. 
*/ public virtual bool isBSTUtil(Node node, int min, int max) { /* an empty tree is BST */ if (node == null) { return true; } /* false if this node violates the min/max constraints */ if (node.data < min || node.data > max) { return false; } /* otherwise check the subtrees recursively tightening the min/max constraints */ // Allow only distinct values return (isBSTUtil(node.left, min, node.data - 1) && isBSTUtil(node.right, node.data + 1, max)); } /* Driver program to test above functions */ public static void Main(string[] args) { BinaryTree tree = new BinaryTree(); tree.root = new Node(4); tree.root.left = new Node(2); tree.root.right = new Node(5); tree.root.left.left = new Node(1); tree.root.left.right = new Node(3); if (tree.BST) { Console.WriteLine(\"IS BST\"); } else { Console.WriteLine(\"Not a BST\"); } }} // This code is contributed by Shrikant13", "e": 41637, "s": 39722, "text": null }, { "code": "<script> // Javascript implementation to// check if given Binary tree// is a BST or not /* Class containing left and right child of current node and key value*/ class Node{ constructor(item) { this.data=item; this.left=this.right=null; }} //Root of the Binary Tree let root; /* can give min and max value according to your code or can write a function to find min and max value of tree. */ /* returns true if given search tree is binary search tree (efficient version) */ function isBST() { // Number.MIN_VALUE is the smallest positive number, // so the open bounds must be the infinities instead return isBSTUtil(root, Number.NEGATIVE_INFINITY, Number.POSITIVE_INFINITY); } /* Returns true if the given tree is a BST and its values are >= min and <= max. */ function isBSTUtil(node,min,max) { /* an empty tree is BST */ if (node == null) return true; /* false if this node violates the min/max constraints */ if (node.data < min || node.data > max) return false; /* otherwise check the subtrees recursively tightening the min/max constraints */ // Allow only distinct values return (isBSTUtil(node.left, min, node.data-1) && isBSTUtil(node.right, node.data+1, max)); } /* Driver program to test above functions */ root = new Node(4); root.left = new Node(2); root.right = new Node(5); root.left.left = new Node(1); root.left.right = new Node(3); if (isBST()) document.write(\"IS BST<br>\"); else document.write(\"Not a BST<br>\"); // This code is contributed by rag2127 </script>", "e": 43294, "s": 41637, "text": null }, { "code": null, "e": 43302, "s": 43294, "text": "Output:" }, { "code": null, "e": 43309, "s": 43302, "text": "IS BST" }, { "code": null, "e": 43522, "s": 43309, "text": "Time Complexity: O(n). Auxiliary Space: O(1) if the function call stack is not counted, otherwise O(n). Simplified Method 3: we can simplify Method 3 by using NULL pointers instead of INT_MIN and INT_MAX values. " }, { "code": null, "e": 43526, "s": 43522, "text": "C++" }, { "code": null, "e": 43531, "s": 43526, "text": "Java" }, { "code": null, "e": 43539, "s": 43531, "text": "Python3" }, { "code": null, "e": 43542, "s": 43539, "text": "C#" }, { "code": null, "e": 43553, "s": 43542, "text": "Javascript" }, { "code": "// C++ program to check if a given tree is BST.#include <bits/stdc++.h>using namespace std; /* A binary tree node has data, pointer toleft child and a pointer to right child */struct Node{ int data; struct Node* left, *right;}; // Returns true if given tree is BST.bool isBST(Node* root, Node* l=NULL, Node* r=NULL){ // Base condition if (root == NULL) return true; // if left node exist then check it has // correct data or not i.e. 
left node's data // should be less than root's data if (l != NULL and root->data <= l->data) return false; // if right node exist then check it has // correct data or not i.e. right node's data // should be greater than root's data if (r != NULL and root->data >= r->data) return false; // check recursively for every node. return isBST(root->left, l, root) and isBST(root->right, root, r);} /* Helper function that allocates a new node with the given data and NULL left and right pointers. */struct Node* newNode(int data){ struct Node* node = new Node; node->data = data; node->left = node->right = NULL; return (node);} /* Driver program to test above functions*/int main(){ struct Node *root = newNode(3); root->left = newNode(2); root->right = newNode(5); root->left->left = newNode(1); root->left->right = newNode(4); if (isBST(root,NULL,NULL)) cout << \"Is BST\"; else cout << \"Not a BST\"; return 0;}", "e": 45040, "s": 43553, "text": null }, { "code": "// Java program to check if a given tree is BST.class Sol{ // A binary tree node has data, pointer to//left child && a pointer to right child /static class Node{ int data; Node left, right;}; // Returns true if given tree is BST.static boolean isBST(Node root, Node l, Node r){ // Base condition if (root == null) return true; // if left node exist then check it has // correct data or not i.e. left node's data // should be less than root's data if (l != null && root.data <= l.data) return false; // if right node exist then check it has // correct data or not i.e. right node's data // should be greater than root's data if (r != null && root.data >= r.data) return false; // check recursively for every node. return isBST(root.left, l, root) && isBST(root.right, root, r);} // Helper function that allocates a new node with the//given data && null left && right pointers. /static Node newNode(int data){ Node node = new Node(); node.data = data; node.left = node.right = null; return (node);} // Driver codepublic static void main(String args[]){ Node root = newNode(3); root.left = newNode(2); root.right = newNode(5); root.left.left = newNode(1); root.left.right = newNode(4); if (isBST(root,null,null)) System.out.print(\"Is BST\"); else System.out.print(\"Not a BST\");}} // This code is contributed by Arnab Kundu", "e": 46480, "s": 45040, "text": null }, { "code": "\"\"\" Program to check if a given binary tree is a binary search tree \"\"\" # Helper function that allocates a new# node with the given data and None# left and right pointers. class newNode: # Constructor to create a new node def __init__(self, key): self.data = key self.left = None self.right = None # Returns true if given tree is BST.def isBST(root, l = None, r = None): # Base condition if (root == None) : return True # if left node exist then check it has # correct data or not i.e. left node's data # should be less than root's data if (l != None and root.data <= l.data) : return False # if right node exist then check it has # correct data or not i.e. right node's data # should be greater than root's data if (r != None and root.data >= r.data) : return False # check recursively for every node. 
return isBST(root.left, l, root) and \\ isBST(root.right, root, r) # Driver Codeif __name__ == '__main__': root = newNode(3) root.left = newNode(2) root.right = newNode(5) root.right.left = newNode(1) root.right.right = newNode(4) #root.right.left.left = newNode(40) if (isBST(root,None,None)): print(\"Is BST\") else: print(\"Not a BST\") # This code is contributed by# Shubham Singh(SHUBHAMSINGH10)", "e": 47849, "s": 46480, "text": null }, { "code": "// C# program to check if a given tree is BST.using System; class GFG{ // A binary tree node has data, pointer to//left child && a pointer to right child /public class Node{ public int data; public Node left, right;}; // Returns true if given tree is BST.static Boolean isBST(Node root, Node l, Node r){ // Base condition if (root == null) return true; // if left node exist then check it has // correct data or not i.e. left node's data // should be less than root's data if (l != null && root.data <= l.data) return false; // if right node exist then check it has // correct data or not i.e. right node's data // should be greater than root's data if (r != null && root.data >= r.data) return false; // check recursively for every node. return isBST(root.left, l, root) && isBST(root.right, root, r);} // Helper function that allocates a new node with the//given data && null left && right pointers. /static Node newNode(int data){ Node node = new Node(); node.data = data; node.left = node.right = null; return (node);} // Driver codepublic static void Main(String []args){ Node root = newNode(3); root.left = newNode(2); root.right = newNode(5); root.left.left = newNode(1); root.left.right = newNode(4); if (isBST(root,null,null)) Console.Write(\"Is BST\"); else Console.Write(\"Not a BST\");}} // This code is contributed by 29AjayKumar", "e": 49313, "s": 47849, "text": null }, { "code": "<script> // JavaScript program to check if a given tree is BST. class Node { constructor(data) { this.left = null; this.right = null; this.data = data; } } // Returns true if given tree is BST. function isBST(root, l, r) { // Base condition if (root == null) return true; // if left node exist then check it has // correct data or not i.e. left node's data // should be less than root's data if (l != null && root.data <= l.data) return false; // if right node exist then check it has // correct data or not i.e. right node's data // should be greater than root's data if (r != null && root.data >= r.data) return false; // check recursively for every node. return isBST(root.left, l, root) && isBST(root.right, root, r); } // Helper function that allocates a new node with the //given data && null left && right pointers. / function newNode(data) { let node = new Node(data); return (node); } let root = newNode(3); root.left = newNode(2); root.right = newNode(5); root.left.left = newNode(1); root.left.right = newNode(4); if (isBST(root,null,null)) document.write(\"Is BST\"); else document.write(\"Not a BST\"); </script>", "e": 50721, "s": 49313, "text": null }, { "code": null, "e": 50730, "s": 50721, "text": "Output: " }, { "code": null, "e": 50740, "s": 50730, "text": "Not a BST" }, { "code": null, "e": 50798, "s": 50740, "text": "Thanks to Abhinesh Garhwal for suggesting above solution." }, { "code": null, "e": 50960, "s": 50798, "text": "METHOD 4(Using In-Order Traversal) Thanks to LJW489 for suggesting this method. 1) Do In-Order Traversal of the given tree and store the result in a temp array. 
" }, { "code": null, "e": 51402, "s": 50960, "text": "2) This method assumes that there are no duplicate values in the tree3) Check if the temp array is sorted in ascending order, if it is, then the tree is BST.Time Complexity: O(n)We can avoid the use of a Auxiliary Array. While doing In-Order traversal, we can keep track of previously visited node. If the value of the currently visited node is less than the previous value, then tree is not BST. Thanks to ygos for this space optimization. " }, { "code": null, "e": 51406, "s": 51402, "text": "C++" }, { "code": null, "e": 51408, "s": 51406, "text": "C" }, { "code": null, "e": 51413, "s": 51408, "text": "Java" }, { "code": null, "e": 51421, "s": 51413, "text": "Python3" }, { "code": null, "e": 51424, "s": 51421, "text": "C#" }, { "code": null, "e": 51435, "s": 51424, "text": "Javascript" }, { "code": "bool isBST(node* root){ static node *prev = NULL; // traverse the tree in inorder fashion // and keep track of prev node if (root) { if (!isBST(root->left)) return false; // Allows only distinct valued nodes if (prev != NULL && root->data <= prev->data) return false; prev = root; return isBST(root->right); } return true;} // This code is contributed by rathbhupendra", "e": 51894, "s": 51435, "text": null }, { "code": "bool isBST(struct node* root){ static struct node *prev = NULL; // traverse the tree in inorder fashion and keep track of prev node if (root) { if (!isBST(root->left)) return false; // Allows only distinct valued nodes if (prev != NULL && root->data <= prev->data) return false; prev = root; return isBST(root->right); } return true;}", "e": 52309, "s": 51894, "text": null }, { "code": "// Java implementation to check if given Binary tree// is a BST or not /* Class containing left and right child of current node and key value*/class Node{ int data; Node left, right; public Node(int item) { data = item; left = right = null; }} public class BinaryTree{ // Root of the Binary Tree Node root; // To keep tract of previous node in Inorder Traversal Node prev; boolean isBST() { prev = null; return isBST(root); } /* Returns true if given search tree is binary search tree (efficient version) */ boolean isBST(Node node) { // traverse the tree in inorder fashion and // keep a track of previous node if (node != null) { if (!isBST(node.left)) return false; // allows only distinct values node if (prev != null && node.data <= prev.data ) return false; prev = node; return isBST(node.right); } return true; } /* Driver program to test above functions */ public static void main(String args[]) { BinaryTree tree = new BinaryTree(); tree.root = new Node(4); tree.root.left = new Node(2); tree.root.right = new Node(5); tree.root.left.left = new Node(1); tree.root.left.right = new Node(3); if (tree.isBST()) System.out.println(\"IS BST\"); else System.out.println(\"Not a BST\"); }}", "e": 53801, "s": 52309, "text": null }, { "code": "# Python implementation to check if# given Binary tree is a BST or not # A binary tree node containing data# field, left and right pointersclass Node: # constructor to create new node def __init__(self, val): self.data = val self.left = None self.right = None # global variable prev - to keep track# of previous node during Inorder# traversalprev = None # function to check if given binary# tree is BSTdef isbst(root): # prev is a global variable global prev prev = None return isbst_rec(root) # Helper function to test if binary# tree is BST# Traverse the tree in inorder fashion# and keep track of previous node# return true if tree is 
Binary# search tree otherwise falsedef isbst_rec(root): # prev is a global variable global prev # if tree is empty return true if root is None: return True if isbst_rec(root.left) is False: return False # if previous node's data is found # greater than the current node's # data return false if prev is not None and prev.data > root.data: return False # store the current node in prev prev = root return isbst_rec(root.right) # driver code to test above functionroot = Node(4)root.left = Node(2)root.right = Node(5)root.left.left = Node(1)root.left.right = Node(3) if isbst(root): print(\"is BST\")else: print(\"not a BST\") # This code is contributed by# Shweta Singh(shweta44)", "e": 55230, "s": 53801, "text": null }, { "code": "// C# implementation to check if// given Binary tree is a BST or notusing System; /* Class containing left andright child of current nodeand key value*/class Node{ public int data; public Node left, right; public Node(int item) { data = item; left = right = null; }} public class BinaryTree{ // Root of the Binary Tree Node root; // To keep track of previous node // in Inorder Traversal Node prev; Boolean isBST() { prev = null; return isBST(root); } /* Returns true if given search tree is binary search tree (efficient version) */ Boolean isBST(Node node) { // traverse the tree in inorder fashion and // keep a track of previous node if (node != null) { if (!isBST(node.left)) return false; // allows only distinct values node if (prev != null && node.data <= prev.data ) return false; prev = node; return isBST(node.right); } return true; } // Driver Code public static void Main(String []args) { BinaryTree tree = new BinaryTree(); tree.root = new Node(4); tree.root.left = new Node(2); tree.root.right = new Node(5); tree.root.left.left = new Node(1); tree.root.left.right = new Node(3); if (tree.isBST()) Console.WriteLine(\"IS BST\"); else Console.WriteLine(\"Not a BST\"); }} // This code is contributed by Rajput-Ji", "e": 56774, "s": 55230, "text": null }, { "code": "<script>// Javascript implementation to check if given Binary tree// is a BST or not /* Class containing left and right child of current node and key value*/class Node{ constructor(item) { this.data = item; this.left = this.right=null; }} // Root of the Binary Treelet root; // To keep track of previous node in Inorder Traversallet prev; function isBST(){ prev = null; return _isBST(root);} /* Returns true if given search tree is binary search tree (efficient version) */function _isBST(node){ // traverse the tree in inorder fashion and // keep a track of previous node if (node != null) { if (!_isBST(node.left)) return false; // allows only distinct values node if (prev != null && node.data <= prev.data ) return false; prev = node; return _isBST(node.right); } return true;} /* Driver program to test above functions */root = new Node(4);root.left = new Node(2);root.right = new Node(5);root.left.left = new Node(1);root.left.right = new Node(3); if (isBST()) document.write(\"IS BST\");else document.write(\"Not a BST\"); // This code is contributed by unknown2108</script>", "e": 58020, "s": 56774, "text": null }, { "code": null, "e": 58123, "s": 58020, "text": "The use of a static variable can also be avoided by using a reference to the prev node as a parameter."
}, { "code": null, "e": 58127, "s": 58123, "text": "C++" }, { "code": null, "e": 58132, "s": 58127, "text": "Java" }, { "code": null, "e": 58140, "s": 58132, "text": "Python3" }, { "code": null, "e": 58143, "s": 58140, "text": "C#" }, { "code": null, "e": 58154, "s": 58143, "text": "Javascript" }, { "code": "// C++ program to check if a given tree is BST.#include <bits/stdc++.h>using namespace std; /* A binary tree node has data, pointer toleft child and a pointer to right child */struct Node{ int data; struct Node* left, *right; Node(int data) { this->data = data; left = right = NULL; }}; bool isBSTUtil(struct Node* root, Node *&prev){ // traverse the tree in inorder fashion and // keep track of prev node if (root) { if (!isBSTUtil(root->left, prev)) return false; // Allows only distinct valued nodes if (prev != NULL && root->data <= prev->data) return false; prev = root; return isBSTUtil(root->right, prev); } return true;} bool isBST(Node *root){ Node *prev = NULL; return isBSTUtil(root, prev);} /* Driver program to test above functions*/int main(){ struct Node *root = new Node(3); root->left = new Node(2); root->right = new Node(5); root->left->left = new Node(1); root->left->right = new Node(4); if (isBST(root)) cout << \"Is BST\"; else cout << \"Not a BST\"; return 0;}", "e": 59327, "s": 58154, "text": null }, { "code": "// Java program to check if a given tree is BST.import java.io.*; class GFG { /* A binary tree node has data, pointer to left child and a pointer to right child */ public static class Node { public int data; public Node left, right; public Node(int data) { this.data = data; left = right = null; } }; static Node prev; static Boolean isBSTUtil(Node root) { // traverse the tree in inorder fashion and // keep track of prev node if (root != null) { if (!isBSTUtil(root.left)) return false; // Allows only distinct valued nodes if (prev != null && root.data <= prev.data) return false; prev = root; return isBSTUtil(root.right); } return true; } static Boolean isBST(Node root) { return isBSTUtil(root); } // Driver Code public static void main (String[] args) { Node root = new Node(3); root.left = new Node(2); root.right = new Node(5); root.left.left = new Node(1); root.left.right = new Node(4); if (isBST(root)) System.out.println(\"Is BST\"); else System.out.println(\"Not a BST\"); }} // This code is contributed by Shubham Singh", "e": 60725, "s": 59327, "text": null }, { "code": "# Python3 program to check# if a given tree is BST.import math # A binary tree node has data,# pointer to left child and# a pointer to right childclass Node: def __init__(self, data): self.data = data self.left = None self.right = None def isBSTUtil(root, prev): # traverse the tree in inorder fashion # and keep track of prev node if (root != None): if (isBSTUtil(root.left, prev) == True): return False # Allows only distinct valued nodes if (prev != None and root.data <= prev.data): return False prev = root return isBSTUtil(root.right, prev) return True def isBST(root): prev = None return isBSTUtil(root, prev) # Driver Codeif __name__ == '__main__': root = Node(3) root.left = Node(2) root.right = Node(5) root.right.left = Node(1) root.right.right = Node(4) #root.right.left.left = Node(40) if (isBST(root) == None): print(\"Is BST\") else: print(\"Not a BST\") # This code is contributed by Srathore", "e": 61799, "s": 60725, "text": null }, { "code": "// C# program to check if a given tree is BST.using System;public class GFG{/* A binary tree node has data, pointer toleft child and a pointer to right 
{ "code": "// C# program to check if a given tree is BST.using System;public class GFG{/* A binary tree node has data, pointer to left child and a pointer to right child */public class Node{ public int data; public Node left, right; public Node(int data) { this.data = data; left = right = null; }}; static Node prev; static Boolean isBSTUtil(Node root){ // traverse the tree in inorder fashion and // keep track of prev node if (root != null) { if (!isBSTUtil(root.left)) return false; // Allows only distinct valued nodes if (prev != null && root.data <= prev.data) return false; prev = root; return isBSTUtil(root.right); } return true;} static Boolean isBST(Node root){ return isBSTUtil(root);} // Driver Codepublic static void Main(String[] args){ Node root = new Node(3); root.left = new Node(2); root.right = new Node(5); root.left.left = new Node(1); root.left.right = new Node(4); if (isBST(root)) Console.WriteLine(\"Is BST\"); else Console.WriteLine(\"Not a BST\");}} // This code is contributed by Rajput-Ji", "e": 62957, "s": 61799, "text": null }, { "code": "<script> // Javascript program to check if a given tree is BST. class Node { constructor(data) { this.left = null; this.right = null; this.data = data; } } let prev; function isBSTUtil(root) { // traverse the tree in inorder fashion and // keep track of prev node if (root != null) { if (!isBSTUtil(root.left)) return false; // Allows only distinct valued nodes if (prev != null && root.data <= prev.data) return false; prev = root; return isBSTUtil(root.right); } return true; } function isBST(root) { return isBSTUtil(root); } let root = new Node(3); root.left = new Node(2); root.right = new Node(5); root.left.left = new Node(1); root.left.right = new Node(4); if (isBST(root)) document.write(\"Is BST\"); else document.write(\"Not a BST\"); // This code is contributed by divyeshrabadiya07.</script>", "e": 64037, "s": 62957, "text": null }, { "code": null, "e": 64046, "s": 64037, "text": "Output: " }, { "code": null, "e": 64056, "s": 64046, "text": "Not a BST" },
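Added note (not part of the original article): METHOD 4 is described above as "store the in-order traversal in a temp array, then check that the array is sorted", but only the space-optimized prev-pointer variant is shown. A minimal Python sketch of the literal array version follows; it assumes the usual Node class with data/left/right fields and uses O(n) extra space, which is exactly what the prev-pointer trick avoids.

def inorder_collect(node, out):
    # append keys in in-order position
    if node is None:
        return
    inorder_collect(node.left, out)
    out.append(node.data)
    inorder_collect(node.right, out)

def isBSTUsingArray(root):
    temp = []
    inorder_collect(root, temp)
    # strictly increasing is equivalent to a BST with distinct values
    return all(temp[i] < temp[i + 1] for i in range(len(temp) - 1))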
" }, { "code": null, "e": 65138, "s": 65129, "text": "shweta44" }, { "code": null, "e": 65155, "s": 65138, "text": "ChandrahasAbburi" }, { "code": null, "e": 65167, "s": 65155, "text": "shrikanth13" }, { "code": null, "e": 65182, "s": 65167, "text": "SHUBHAMSINGH10" }, { "code": null, "e": 65196, "s": 65182, "text": "rathbhupendra" }, { "code": null, "e": 65207, "s": 65196, "text": "andrew1234" }, { "code": null, "e": 65219, "s": 65207, "text": "29AjayKumar" }, { "code": null, "e": 65229, "s": 65219, "text": "Rajput-Ji" }, { "code": null, "e": 65244, "s": 65229, "text": "sapnasingh4991" }, { "code": null, "e": 65255, "s": 65244, "text": "rwells1703" }, { "code": null, "e": 65281, "s": 65255, "text": "Abhijeet Kumar Srivastava" }, { "code": null, "e": 65298, "s": 65281, "text": "abdul kadir olia" }, { "code": null, "e": 65306, "s": 65298, "text": "rag2127" }, { "code": null, "e": 65327, "s": 65306, "text": "avanitrachhadiya2155" }, { "code": null, "e": 65342, "s": 65327, "text": "rameshtravel07" }, { "code": null, "e": 65354, "s": 65342, "text": "unknown2108" }, { "code": null, "e": 65372, "s": 65354, "text": "divyeshrabadiya07" }, { "code": null, "e": 65389, "s": 65372, "text": "surinderdawra388" }, { "code": null, "e": 65398, "s": 65389, "text": "angajala" }, { "code": null, "e": 65414, "s": 65398, "text": "akshitsaxenaa09" }, { "code": null, "e": 65430, "s": 65414, "text": "amartyaghoshgfg" }, { "code": null, "e": 65445, "s": 65430, "text": "adnanirshad158" }, { "code": null, "e": 65458, "s": 65445, "text": "simmytarika5" }, { "code": null, "e": 65467, "s": 65458, "text": "Accolite" }, { "code": null, "e": 65473, "s": 65467, "text": "Adobe" }, { "code": null, "e": 65480, "s": 65473, "text": "Amazon" }, { "code": null, "e": 65499, "s": 65480, "text": "Boomerang Commerce" }, { "code": null, "e": 65507, "s": 65499, "text": "FactSet" }, { "code": null, "e": 65518, "s": 65507, "text": "GreyOrange" }, { "code": null, "e": 65529, "s": 65518, "text": "MakeMyTrip" }, { "code": null, "e": 65539, "s": 65529, "text": "Microsoft" }, { "code": null, "e": 65549, "s": 65539, "text": "OYO Rooms" }, { "code": null, "e": 65558, "s": 65549, "text": "Qualcomm" }, { "code": null, "e": 65567, "s": 65558, "text": "Snapdeal" }, { "code": null, "e": 65574, "s": 65567, "text": "VMWare" }, { "code": null, "e": 65582, "s": 65574, "text": "Walmart" }, { "code": null, "e": 65589, "s": 65582, "text": "Wooker" }, { "code": null, "e": 65608, "s": 65589, "text": "Binary Search Tree" }, { "code": null, "e": 65613, "s": 65608, "text": "Tree" }, { "code": null, "e": 65620, "s": 65613, "text": "VMWare" }, { "code": null, "e": 65629, "s": 65620, "text": "Accolite" }, { "code": null, "e": 65636, "s": 65629, "text": "Amazon" }, { "code": null, "e": 65646, "s": 65636, "text": "Microsoft" }, { "code": null, "e": 65656, "s": 65646, "text": "OYO Rooms" }, { "code": null, "e": 65665, "s": 65656, "text": "Snapdeal" }, { "code": null, "e": 65673, "s": 65665, "text": "FactSet" }, { "code": null, "e": 65684, "s": 65673, "text": "MakeMyTrip" }, { "code": null, "e": 65692, "s": 65684, "text": "Walmart" }, { "code": null, "e": 65698, "s": 65692, "text": "Adobe" }, { "code": null, "e": 65707, "s": 65698, "text": "Qualcomm" }, { "code": null, "e": 65726, "s": 65707, "text": "Boomerang Commerce" }, { "code": null, "e": 65733, "s": 65726, "text": "Wooker" }, { "code": null, "e": 65752, "s": 65733, "text": "Binary Search Tree" }, { "code": null, "e": 65757, "s": 65752, "text": "Tree" }, { "code": null, "e": 65855, "s": 65757, "text": "Writing 
code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 65864, "s": 65855, "text": "Comments" }, { "code": null, "e": 65877, "s": 65864, "text": "Old Comments" }, { "code": null, "e": 65911, "s": 65877, "text": "Advantages of BST over Hash Table" }, { "code": null, "e": 65956, "s": 65911, "text": "Binary Tree to Binary Search Tree Conversion" }, { "code": null, "e": 66010, "s": 65956, "text": "Difference between Binary Tree and Binary Search Tree" }, { "code": null, "e": 66058, "s": 66010, "text": "Insert a node in Binary Search Tree Iteratively" }, { "code": null, "e": 66110, "s": 66058, "text": "Construct a Binary Search Tree from given postorder" }, { "code": null, "e": 66160, "s": 66110, "text": "Tree Traversals (Inorder, Preorder and Postorder)" }, { "code": null, "e": 66195, "s": 66160, "text": "Binary Tree | Set 1 (Introduction)" }, { "code": null, "e": 66229, "s": 66195, "text": "Level Order Binary Tree Traversal" }, { "code": null, "e": 66270, "s": 66229, "text": "Inorder Tree Traversal without Recursion" } ]
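Added note for the BST article above (not in the original): every traversal-based version shown is recursive and tracks the previous node through static, global, or by-reference state. The same in-order check can be done iteratively with an explicit stack, avoiding both recursion depth limits and shared state; a Python sketch, assuming the usual Node class:

def isBSTIterative(root):
    stack = []
    prev = None
    curr = root
    while stack or curr is not None:
        while curr is not None:        # walk down to the leftmost node
            stack.append(curr)
            curr = curr.left
        curr = stack.pop()             # next node in in-order position
        if prev is not None and curr.data <= prev.data:
            return False               # allows only distinct valued nodes
        prev = curr
        curr = curr.right
    return True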
C# | Convert.ToInt16(String, IFormatProvider) Method - GeeksforGeeks
10 Dec, 2019 This method is used to convert the specified string representation of a number to an equivalent 16-bit signed integer, using the specified culture-specific formatting information. Syntax: public static short ToInt16 (string value, IFormatProvider provider); Parameters: value: It is a string that contains the number to convert. provider: It is an object that supplies culture-specific formatting information. Return Value: This method returns a 16-bit signed integer which is equivalent to the number in value, or 0 (zero) if value is null. Exceptions: FormatException: If the value does not consist of an optional sign followed by a sequence of digits (0 through 9). OverflowException: If the value represents a number that is less than MinValue or greater than MaxValue. Below programs illustrate the use of the Convert.ToInt16(String, IFormatProvider) Method: Example 1: // C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo("en-US"); // declaring and initializing String array string[] values = {"12345", "+12345", "-12345"}; // calling get() Method Console.Write("Converted short value"+ " from a specified string "); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } } catch (FormatException e) { Console.WriteLine("\n"); Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); } catch (OverflowException e) { Console.WriteLine("\n"); Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to specified short short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(" {0}, ", val);}} Converted short value from a specified string 12345, 12345, -12345, Example 2: For FormatException // C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo("en-US"); // declaring and initializing String array string[] values = {"12345", "+12345", "-12345"}; // calling get() Method Console.Write("Converted short value"+ " of specified strings: "); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine("\n"); string s = "123 456, 789"; Console.WriteLine("format of s is invalid "); // converting string to specified short short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(" {0}, ", val); } catch (FormatException e) { Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); } catch (OverflowException e) { Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to // specified short value short val = Convert.ToInt16(s, cultures); // display the converted // short value Console.Write(" {0}, ", val);}} Converted short value of specified strings: 12345, 12345, -12345, format of s is invalid Exception Thrown: System.FormatException Example 3: For OverflowException // C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of 
CultureInfo CultureInfo cultures = new CultureInfo("en-US"); // declaring and initializing String array string[] values = { "12345", "+12345", "-12345" }; // calling get() Method Console.Write("Converted short value "+ "of specified strings: "); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine("\n"); string s = "-7922816251426433759354395033500000"; Console.WriteLine("s is less than the MinValue"); // converting string to specified short short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(" {0}, ", val); } catch (FormatException e) { Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); } catch (OverflowException e) { Console.Write("Exception Thrown: "); Console.Write("{0}", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to // specified short value short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(" {0}, ", val);}} Converted short value of specified strings: 12345, 12345, -12345, s is less than the MinValue Exception Thrown: System.OverflowException Reference: https://docs.microsoft.com/en-us/dotnet/api/system.convert.toint16?view=netframework-4.7.2#System_Convert_ToInt16_System_String_System_IFormatProvider_
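Added note (not part of the original article): every example above passes the en-US culture, where the provider makes no visible difference. The point of the IFormatProvider overload is that the provider defines how a number may be written, for instance what the negative sign looks like. A small sketch; the custom sign "neg " is an illustrative value, not anything from the original article.

using System;
using System.Globalization;

class ProviderDemo {
    static void Main() {
        // NumberFormatInfo is a common IFormatProvider implementation
        NumberFormatInfo nfi = new NumberFormatInfo();
        nfi.NegativeSign = "neg ";                  // hypothetical custom sign
        short val = Convert.ToInt16("neg 123", nfi);
        Console.WriteLine(val);                     // expected output: -123
    }
}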
[ { "code": null, "e": 24222, "s": 24194, "text": "\n10 Dec, 2019" }, { "code": null, "e": 24402, "s": 24222, "text": "This method is used to convert the specified string representation of a number to an equivalent 16-bit signed integer, using the specified culture-specific formatting information." }, { "code": null, "e": 24410, "s": 24402, "text": "Syntax:" }, { "code": null, "e": 24480, "s": 24410, "text": "public static short ToInt16 (string value, IFormatProvider provider);" }, { "code": null, "e": 24492, "s": 24480, "text": "Parameters:" }, { "code": null, "e": 24551, "s": 24492, "text": "value: It is a string that contains the number to convert." }, { "code": null, "e": 24632, "s": 24551, "text": "provider: It is an object that supplies culture-specific formatting information." }, { "code": null, "e": 24757, "s": 24632, "text": "Return Value: This method returns a decimal number which is equivalent to the number in value, or 0 (zero) if value is null." }, { "code": null, "e": 24769, "s": 24757, "text": "Exceptions:" }, { "code": null, "e": 24884, "s": 24769, "text": "FormatException: If the value does not consist of an optional sign followed by a sequence of digits (0 through 9)." }, { "code": null, "e": 24989, "s": 24884, "text": "OverFlowException: If the value represents a number that is less than MinValue or greater than MaxValue." }, { "code": null, "e": 25077, "s": 24989, "text": "Below programs illustrate the use of Convert.ToDecimal(String, IFormatProvider) Method:" }, { "code": null, "e": 25088, "s": 25077, "text": "Example 1:" }, { "code": "// C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = {\"12345\", \"+12345\", \"-12345\"}; // calling get() Method Console.Write(\"Converted short value\"+ \" from a specified string \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } } catch (FormatException e) { Console.WriteLine(\"\\n\"); Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (OverflowException e) { Console.WriteLine(\"\\n\"); Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to specified short short val = Convert.ToInt16(s, cultures); // display the converted char value Console.Write(\" {0}, \", val);}}", "e": 26370, "s": 25088, "text": null }, { "code": null, "e": 26442, "s": 26370, "text": "Converted short value from a specified string 12345, 12345, -12345,\n" }, { "code": null, "e": 26473, "s": 26442, "text": "Example 2: For FormatException" }, { "code": "// C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = {\"12345\", \"+12345\", \"-12345\"}; // calling get() Method Console.Write(\"Converted short value\"+ \" of specified strings: \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine(\"\\n\"); string s = \"123 456, 789\"; Console.WriteLine(\"format of s is invalid \"); // converting string to specified char short val = 
Convert.ToInt16(s, cultures); // display the converted short value Console.Write(\" {0}, \", val); } catch (FormatException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (OverflowException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to // specified short value short val = Convert.ToInt16(s, cultures); // display the converted // short value Console.Write(\" {0}, \", val);}}", "e": 28029, "s": 26473, "text": null }, { "code": null, "e": 28166, "s": 28029, "text": "Converted short value of specified strings: 12345, 12345, -12345, \n\nformat of s is invalid \nException Thrown: System.FormatException\n" }, { "code": null, "e": 28199, "s": 28166, "text": "Example 3: For OverflowException" }, { "code": "// C# program to demonstrate the// Convert.ToInt16() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = { \"12345\", \"+12345\", \"-12345\" }; // calling get() Method Console.Write(\"Converted short value \"+ \"of specified strings: \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine(\"\\n\"); string s = \"-7922816251426433759354395033500000\"; Console.WriteLine(\"s is less than the MinValue\"); // converting string to specified short short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(\" {0}, \", val); } catch (FormatException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (OverflowException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to // specified short value short val = Convert.ToInt16(s, cultures); // display the converted short value Console.Write(\" {0}, \", val);}}", "e": 29779, "s": 28199, "text": null }, { "code": null, "e": 29922, "s": 29779, "text": "Converted short value of specified strings: 12345, 12345, -12345, \n\ns is less than the MinValue\nException Thrown: System.OverflowException\n" }, { "code": null, "e": 29933, "s": 29922, "text": "Reference:" }, { "code": null, "e": 30085, "s": 29933, "text": "https://docs.microsoft.com/en-us/dotnet/api/system.convert.toint16?view=netframework-4.7.2#System_Convert_ToInt16_System_String_System_IFormatProvider_" } ]
C# | How to change the CursorLeft of the Console - GeeksforGeeks
28 Jan, 2019 Given the normal Console in C#, the task is to change the CursorLeft of the Console. Approach: This can be done using the CursorLeft property in the Console class of the System package in C#. This changes the horizontal position of the Cursor. Basically, it gets or sets the column position of the cursor within the buffer area. Program 1: Getting the value of CursorLeft // C# program to illustrate the// Console.CursorLeft Propertyusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Get the CursorLeft position Console.WriteLine("Current CursorLeft position: {0}", Console.CursorLeft); // Get the CursorLeft position Console.Write("Current CursorLeft position: {0};", Console.CursorLeft); Console.WriteLine(" and now :{0}", Console.CursorLeft); }}} Output: [output screenshot omitted] Program 2: Setting the value of CursorLeft // C# program to illustrate the// Console.CursorLeft Propertyusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Get the CursorLeft position Console.WriteLine("Current CursorLeft position: {0}", Console.CursorLeft); // Set the CursorLeft position Console.CursorLeft = 25; // Get the CursorLeft position Console.Write("Current CursorLeft position: {0};", Console.CursorLeft); }}} Output: [output screenshot omitted]
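Added note (not part of the original article): both programs above only read or set the position once. The property is most useful for rewriting part of a line in place; a small sketch for a regular console window (label text and delay are illustrative):

using System;
using System.Threading;

class ProgressDemo {
    static void Main() {
        Console.Write("Progress: ");
        int col = Console.CursorLeft;       // remember the column after the label
        for (int i = 0; i <= 100; i += 25) {
            Console.CursorLeft = col;       // jump back to the same column
            Console.Write(i + "%   ");
            Thread.Sleep(200);
        }
        Console.WriteLine();
    }
}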
[ { "code": null, "e": 25028, "s": 25000, "text": "\n28 Jan, 2019" }, { "code": null, "e": 25113, "s": 25028, "text": "Given the normal Console in C#, the task is to change the CursorLeft of the Console." }, { "code": null, "e": 25357, "s": 25113, "text": "Approach: This can be done using the CursorLeft property in the Console class of the System package in C#. This changes the horizontal position of the Cursor. Basically, it gets or sets the column position of the cursor within the buffer area." }, { "code": null, "e": 25400, "s": 25357, "text": "Program 1: Getting the value of CursorLeft" }, { "code": "// C# program to illustrate the// Console.CursorLeft Propertyusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Get the CursorLeft position Console.WriteLine(\"Current CursorLeft position: {0}\", Console.CursorLeft); // Get the CursorLeft position Console.Write(\"Current CursorLeft position: {0};\", Console.CursorLeft); Console.WriteLine(\" and now :{0}\", Console.CursorLeft); }}}", "e": 26060, "s": 25400, "text": null }, { "code": null, "e": 26068, "s": 26060, "text": "Output:" }, { "code": null, "e": 26111, "s": 26068, "text": "Program 2: Setting the value of CursorLeft" }, { "code": "// C# program to illustrate the// Console.CursorLeft Propertyusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Get the CursorLeft position Console.WriteLine(\"Current CursorLeft position: {0}\", Console.CursorLeft); // Set the CursorLeft position Console.CursorLeft = 25; // Get the CursorLeft position Console.Write(\"Current CursorLeft position: {0};\", Console.CursorLeft); }}}", "e": 26757, "s": 26111, "text": null }, { "code": null, "e": 26765, "s": 26757, "text": "Output:" }, { "code": null, "e": 26786, "s": 26765, "text": "CSharp-Console-Class" }, { "code": null, "e": 26789, "s": 26786, "text": "C#" }, { "code": null, "e": 26887, "s": 26789, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26896, "s": 26887, "text": "Comments" }, { "code": null, "e": 26909, "s": 26896, "text": "Old Comments" }, { "code": null, "e": 26937, "s": 26909, "text": "C# Dictionary with examples" }, { "code": null, "e": 26960, "s": 26937, "text": "C# | Method Overriding" }, { "code": null, "e": 26982, "s": 26960, "text": "C# | Class and Object" }, { "code": null, "e": 27022, "s": 26982, "text": "C# | String.IndexOf( ) Method | Set - 1" }, { "code": null, "e": 27045, "s": 27022, "text": "Extension Method in C#" }, { "code": null, "e": 27063, "s": 27045, "text": "C# | Constructors" }, { "code": null, "e": 27078, "s": 27063, "text": "C# | Delegates" }, { "code": null, "e": 27109, "s": 27078, "text": "Introduction to .NET Framework" }, { "code": null, "e": 27155, "s": 27109, "text": "Difference between Ref and Out keywords in C#" } ]
How to overload and override main method in Java - GeeksforGeeks
05 Apr, 2019 Method Overloading can be defined as a feature in which a class can have more than one method having the same name if and only if they differ by number of parameters or the type of parameters or both, then they may or may not have same return type. Method overloading is one of the ways that java support Polymorphism.Yes, We can overload the main method in java but JVM only calls the original main method, it will never call our overloaded main method.Below example illustrates the overloading of main() in javaExample 1: // Java program to demonstrate// Overloading of main() public class GFG { // Overloaded main method 1 // According to us this overloaded method // Should be executed when int value is passed public static void main(int args) { System.out.println("main() overloaded" + " method 1 Executing"); } // Overloaded main method 2 // According to us this overloaded method // Should be executed when character is passed public static void main(char args) { System.out.println("main() overloaded" + " method 2 Executing"); } // Overloaded main method 3 // According to us this overloaded method // Should be executed when double value is passed public static void main(Double[] args) { System.out.println("main() overloaded" + " method 3 Executing"); } // Original main() public static void main(String[] args) { System.out.println("Original main()" + " Executing"); }} Output: As from above example, it is clear that every time original main method executes but not the overloaded methods because JVM only executes the original main method by default but not the overloaded one.So, to execute overloaded methods of main, we must call them from the original main method.Example 2:In this example, we will execute all the Overloads of the main method one by one // Java program to demonstrate// Overloading of main() public class GFG { // Overloaded main method 1 public static void main(boolean args) { if (args) { System.out.println("main() overloaded" + " method 1 Executing"); System.out.println(args); // Calling overloaded main method 2 GFG.main("Geeks", "For Geeks"); } } // Overloaded main method 2 public static void main(String a, String b) { System.out.println("main() overloaded" + " method 2 Executing"); System.out.println(a + " " + b); } // Overloaded main method 3 public static void main(int args) { System.out.println("main() overloaded" + " method 3 Executing"); System.out.println(args); } // Original main() public static void main(String[] args) { System.out.println("Original main()" + " Executing"); System.out.println("Hello"); // Calling overloads of the main method // Calling overloaded main method 1 GFG.main(true); // Calling overloaded main method 3 GFG.main(987654); }} Output: Original main() Executing Hello main() overloaded method 1 Executing true main() overloaded method 2 Executing Geeks For Geeks main() overloaded method 3 Executing 987654 Whenever we do inheritance in java then if a method in subclass has the same name and type signature as a method in its parent class or superclass, then it is said that the method in subclass is overriding the method of parent class. Method overriding is one of the way that java supports run time Polymorphism.No, we cannot override main method of java because a static method cannot be overridden.The static method in java is associated with class whereas the non-static method is associated with an object. Static belongs to the class area, static methods don’t need an object to be called. 
Static methods can be called directly by using the classname ( classname.static_method_name() ). So, if a subclass declares a static method with the same signature, the base class method is hidden rather than overridden: a call made through a base class reference still executes the base class version, because static calls are resolved at compile time from the reference type. Therefore, it is not possible to override the main method in Java. (See the separate article on overriding static methods in Java for more.)
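Added note (not part of the original article): the claim that main cannot be overridden can be demonstrated directly. In the sketch below (class names are illustrative), the subclass declaration hides the parent's main instead of overriding it, so the parent version runs only when invoked explicitly. Compile both classes in Child.java and run: java Child

class Parent {
    public static void main(String[] args) {
        System.out.println("Parent main() executing");
    }
}

public class Child extends Parent {
    // This HIDES Parent.main(); it does not override it
    public static void main(String[] args) {
        System.out.println("Child main() executing");
        Parent.main(args);   // the hidden version must be called explicitly
    }
}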
How to add a title in anchor tag using jQuery?
To add a title to an anchor tag in jQuery, use the prop() method. The prop() method is used to set properties and values of the selected elements.

You can try to run the following code to learn how to add a title to an anchor tag using jQuery −

<html>
   <head>
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
      <script>
         $(document).ready(function(){
            $("#button1").click(function(){
               $('.myClass').prop('title', 'This is your new title');
               var myTitle = $('.myClass').prop('title');
               alert(myTitle);
            });
         });
      </script>
   </head>
   <body>
      <div id='div1' class="myclass">
         <a class="myClass" href='#'><img src='http://tutorialspoint.com/green/images/logo.png' /></a><br>
         <button id="button1">Get Title</button>
      </div>
   </body>
</html>
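Stripped of the page scaffolding, the core of the example is just a property set followed by a get. A minimal sketch, assuming jQuery is loaded and an anchor with class myClass exists on the page:

// Set the title property on the anchor, then read it back
$('a.myClass').prop('title', 'This is your new title');
var myTitle = $('a.myClass').prop('title'); // "This is your new title"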
Adding an element to the List in C#
To add an element to the List, the code is as follows −

using System;
using System.Collections.Generic;
public class Demo {
   public static void Main(String[] args) {
      List<String> list = new List<String>();
      list.Add("One");
      list.Add("Two");
      list.Add("Three");
      list.Add("Four");
      list.Add("Five");
      list.Add("Six");
      list.Add("Seven");
      list.Add("Eight");
      Console.WriteLine("Enumerator iterates through the list elements...");
      List<string>.Enumerator demoEnum = list.GetEnumerator();
      while (demoEnum.MoveNext()) {
         string res = demoEnum.Current;
         Console.WriteLine(res);
      }
   }
}

This will produce the following output −

Enumerator iterates through the list elements...
One
Two
Three
Four
Five
Six
Seven
Eight

Let us see another example −

using System;
using System.Collections.Generic;
public class Demo {
   public static void Main(String[] args) {
      List<int> list = new List<int>();
      list.Add(50);
      list.Add(100);
      list.Add(150);
      list.Add(200);
      list.Add(250);
      list.Add(500);
      list.Add(750);
      list.Add(1000);
      list.Add(1250);
      list.Add(1500);
      Console.WriteLine("Enumerator iterates through the list elements...");
      List<int>.Enumerator demoEnum = list.GetEnumerator();
      while (demoEnum.MoveNext()) {
         int res = demoEnum.Current;
         Console.WriteLine(res);
      }
   }
}

This will produce the following output −

Enumerator iterates through the list elements...
50
100
150
200
250
500
750
1000
1250
1500
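As a side note, the manual List<T>.Enumerator loop above can be written more compactly. A short sketch of an equivalent, more idiomatic version (the variable names are illustrative):

using System;
using System.Collections.Generic;

public class Demo {
   public static void Main() {
      // A collection initializer replaces the repeated Add() calls
      var list = new List<string> { "One", "Two", "Three" };
      list.Add("Four"); // elements can still be appended one at a time

      // foreach drives the same enumerator under the hood
      foreach (string item in list)
         Console.WriteLine(item);
   }
}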
Hexadecimal color to RGB color JavaScript
We are required to write a JavaScript function that takes in a hexadecimal color and returns its RGB representation. The function should return an object containing the respective values of the red, green, and blue components −

hexToRgb('#0080C0') should return 0, 128, 192

The code for this will be −

const hex = '#0080C0';
const hexToRGB = hex => {
   let r = 0, g = 0, b = 0;
   // handling 3 digit hex
   if(hex.length == 4){
      r = "0x" + hex[1] + hex[1];
      g = "0x" + hex[2] + hex[2];
      b = "0x" + hex[3] + hex[3];
   // handling 6 digit hex
   }else if (hex.length == 7){
      r = "0x" + hex[1] + hex[2];
      g = "0x" + hex[3] + hex[4];
      b = "0x" + hex[5] + hex[6];
   };
   return{
      red: +r,
      green: +g,
      blue: +b
   };
}
console.log(hexToRGB(hex));

Following is the output on the console −

{ red: 0, green: 128, blue: 192 }
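For comparison, the same 6-digit conversion can be written with parseInt and string slicing. A minimal alternative sketch (hexToRGB2 is an illustrative name, not part of the original solution):

const hexToRGB2 = hex => ({
   // parseInt with radix 16 converts each two-character pair directly
   red: parseInt(hex.slice(1, 3), 16),
   green: parseInt(hex.slice(3, 5), 16),
   blue: parseInt(hex.slice(5, 7), 16)
});
console.log(hexToRGB2('#0080C0')); // { red: 0, green: 128, blue: 192 }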
Find and Draw Contours using OpenCV | Python - GeeksforGeeks
29 Apr, 2019

Contours are defined as the line joining all the points along the boundary of an image that have the same intensity. Contours come in handy in shape analysis, finding the size of the object of interest, and object detection.

OpenCV has a findContours() function that helps in extracting the contours from the image. It works best on binary images, so we should first apply thresholding techniques, Sobel edges, etc.

Below is the code for finding contours –

import cv2
import numpy as np

# Let's load a simple image with 3 black squares
image = cv2.imread('C://Users//gfg//shapes.jpg')
cv2.waitKey(0)

# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)

# Finding Contours
# Use a copy of the image e.g. edged.copy()
# since findContours alters the image
contours, hierarchy = cv2.findContours(edged,
    cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)

print("Number of Contours found = " + str(len(contours)))

# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)

cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output: the two OpenCV windows show the Canny edges and the drawn contours, and the number of contours found is printed to the console.

We see that there are three essential arguments in the cv2.findContours() function. The first one is the source image, the second is the contour retrieval mode, and the third is the contour approximation method; it outputs the contours and the hierarchy. 'contours' is a Python list of all the contours in the image. Each individual contour is a Numpy array of (x, y) coordinates of the boundary points of the object.

Contour Approximation Method: above, we saw that contours are the boundaries of a shape with the same intensity. A contour stores the (x, y) coordinates of the boundary of a shape. But does it store all the coordinates? That is specified by the contour approximation method. If we pass cv2.CHAIN_APPROX_NONE, all the boundary points are stored. But do we actually need all the points? For example, to find the contour of a straight line we need just its two endpoints. This is what cv2.CHAIN_APPROX_SIMPLE does: it removes all redundant points and compresses the contour, thereby saving memory.
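Once the contours are extracted, each one can also be measured. A short follow-on sketch, assuming the contours list produced by the code above:

# For each contour, report its area, perimeter, and bounding box
for i, cnt in enumerate(contours):
    area = cv2.contourArea(cnt)            # enclosed area in pixels
    perimeter = cv2.arcLength(cnt, True)   # True: treat the contour as closed
    x, y, w, h = cv2.boundingRect(cnt)     # upright bounding rectangle
    print("Contour %d: area = %.1f, perimeter = %.1f, box = (%d, %d, %d, %d)"
          % (i, area, perimeter, x, y, w, h))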
Area of Circumcircle of an Equilateral Triangle using Median - GeeksforGeeks
25 Mar, 2021

Given the median of the equilateral triangle M, the task is to find the area of the circumcircle of this equilateral triangle using the median M.

Examples:

Input: M = 3
Output: 12.5664

Input: M = 6
Output: 50.2655

Approach: The key observation in the problem is that the centroid, circumcenter, orthocenter and incenter of an equilateral triangle all lie at the same point. Therefore, the circumradius of the equilateral triangle with median M can be derived as r = 2M / 3, since the centroid divides each median in the ratio 2 : 1 and the circumradius is the longer part. Then the area of the circle can be calculated using the approach used in this article.

Below is the implementation of the above approach:

C++

// C++ implementation to find the area of the
// circumcircle of an equilateral triangle
// with median M

#include <iostream>
const double pi = 3.14159265358979323846;
using namespace std;

// Function to print the area of a
// circle whose radius is r
void circleArea(double r)
{
    cout << (pi * r * r);
}

// Function to find the area of the
// circumcircle of an equilateral
// triangle with median m
void findCircleAreaByMedian(double m)
{
    // Circumradius: the centroid divides the median 2 : 1
    double r = 2 * m / 3;

    // Util function to print the circle area
    circleArea(r);
}

// Driver code
int main()
{
    double m = 3;

    // Function call
    findCircleAreaByMedian(m);
    return 0;
}

Java

// Java implementation to find the area of the
// circumcircle of an equilateral triangle
// with median M
import java.util.*;

class GFG{

// Function to find the area of a
// circle whose radius is r
static double circleArea(double r)
{
    double pi = 3.14159265358979323846;
    return (pi * r * r);
}

// Function to find the area of the
// circumcircle of an equilateral
// triangle with median m
static double findCircleAreaByMedian(int m)
{
    double r = 2 * m / 3;

    // Function call to compute
    // the circle area
    return circleArea(r);
}

// Driver code
public static void main(String args[])
{
    int m = 3;
    System.out.printf("%.4f", findCircleAreaByMedian(m));
}
}

// This code is contributed by virusbuddah_

Python3

# Python3 implementation to find the area of the
# circumcircle of an equilateral triangle
# with median M

pi = 3.14159265358979323846

# Function to print the area of a
# circle whose radius is r
def circleArea(r):
    print(round(pi * r * r, 4))

# Function to find the area of the
# circumcircle of an equilateral
# triangle with median m
def findCircleAreaByMedian(m):
    r = 2 * m / 3

    # Function call to print the circle area
    circleArea(r)

# Driver code
if __name__ == '__main__':
    m = 3

    # Function call
    findCircleAreaByMedian(m)

# This code is contributed by mohit kumar 29

C#

// C# implementation to find the area of the
// circumcircle of an equilateral triangle
// with median M
using System;

class GFG{

// Function to find the area of a
// circle whose radius is r
static double circleArea(double r)
{
    double pi = 3.14159265358979323846;
    return (pi * r * r);
}

// Function to find the area of the
// circumcircle of an equilateral
// triangle with median m
static double findCircleAreaByMedian(int m)
{
    double r = 2 * m / 3;

    // Function call to compute
    // the circle area
    return circleArea(r);
}

// Driver code
public static void Main(string []args)
{
    int m = 3;
    Console.WriteLine("{0:f4}", findCircleAreaByMedian(m));
}
}

// This code is contributed by AnkitRai01

Javascript

<script>
// JavaScript implementation to find the area of the
// circumcircle of an equilateral triangle
// with median M

// Function to find the area of a
// circle whose radius is r
function circleArea(r)
{
    var pi = 3.14159265358979323846;
    return (pi * r * r);
}

// Function to find the area of the
// circumcircle of an equilateral
// triangle with median m
function findCircleAreaByMedian(m)
{
    var r = 2 * m / 3;

    // Function call to compute
    // the circle area
    return circleArea(r);
}

// Driver code
var m = 3;
document.write(findCircleAreaByMedian(m).toFixed(4));

// This code is contributed by Rajput-Ji
</script>

Output:

12.5664
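The geometry behind the formula r = 2M / 3 used in every implementation above, written out as a short math sketch (the standard centroid argument, left implicit in the original):

% In an equilateral triangle the centroid G coincides with the
% circumcenter, and G divides each median in the ratio 2:1 from
% the vertex, so the circumradius is the longer part of the median:
\[
  r = \frac{2}{3}M
  \quad\Longrightarrow\quad
  \text{Area} = \pi r^{2} = \pi\left(\frac{2M}{3}\right)^{2} = \frac{4\pi M^{2}}{9}.
\]
% Check: M = 3 gives r = 2 and Area = 4*pi = 12.5664, matching the sample output.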
Getting started with Pandas time-series functionality | by Tom Waterman | Towards Data Science
Pandas has exceptional features for analyzing time-series data, including automatic datetime parsing, advanced filtering capabilities, and several datetime-specific plotting functions.

I find myself using those features almost every day, but it took me a long time to discover them: many of Pandas' datetime capabilities are not immediately obvious, and I didn't use those features when I was starting out.

But using the features is easy, and they will help make your analysis faster whenever you're dealing with time-series data.

To help you get started, here are the 3 time-series features that I use most often.

Pandas can automatically parse columns in a dataset into time-series data, without requiring you to specify any regex patterns.

Suppose you're analyzing a dataset where the first five rows look like this (the sample has two columns, dau and date).

You can see that the column date looks like a time-series, and it makes sense for us to convert the values in that column into the Pandas datetime type.

To instruct Pandas to convert the values, use the parse_dates argument when loading the data.

Note: the parse_dates argument is available in all of Pandas' data loading functions, including read_csv.

>>> import pandas as pd
>>> df = pd.read_csv('http://bit.ly/30iosS6', parse_dates=['date'])
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 360 entries, 0 to 359
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   dau     360 non-null    int64
 1   date    360 non-null    datetime64[ns]
dtypes: datetime64[ns](1), int64(1)
memory usage: 5.8 KB

Notice how the Dtype of the column date is datetime64[ns]. You can now use all of Pandas' special datetime methods on the date column.

>>> df.date.dt.day_name()
0      Tuesday
1    Wednesday
2     Thursday
3       Friday
4     Saturday
...

Pandas provides convenient filtering methods for a DataFrame when its index is a datetime type.

>>> df.set_index('date', inplace=True)
>>> df.index
DatetimeIndex(['2019-01-01', ... ,'2019-12-26'])

For example, you can select a specific date by passing a string to the DataFrame's loc accessor.

>>> df.loc['2019-02-01']
dau    554
Name: 2019-02-01 00:00:00, dtype: int64

You can also use strings to select ranges of data, i.e. to slice the DataFrame.

>>> df.loc['2019-02-01':'2019-03-01']
            dau
date
2019-02-01  554
2019-02-02  798
...
2019-02-28  569
2019-03-01  599

Notice that the index is inclusive of the last value. We used 2019-03-01 as the end of the range we selected, so the row labelled 2019-03-01 is included in the resulting DataFrame.

You can even pass in partial strings to select data for a specific range.

>>> df.loc['2019-03']
            dau
date
2019-03-01  599
2019-03-02  724
...
2019-03-30  724
2019-03-31  638

Last but not least, you can easily plot the time-series data with Pandas' plot function.

>>> df.plot()

Notice how Pandas has used the DataFrame's index for the X-axis.

Of course, this chart isn't very helpful. Let's use an aggregate view to produce something more readable.

To do that, we can use the resample method of the DataFrame to aggregate the time-series index by month. Then, we'll calculate the mean of each month and create a bar plot of the result. (A few more aggregations in the same spirit are sketched in the short addendum after the closing note.)

>>> df.resample('M').mean().plot.bar()

For more details on how to quickly generate charts in Pandas, you can read my article "Fast Plotting with Pandas".

Thanks for reading! I often write about Data Science and programming tricks on Medium, so follow me to get more articles like this one.
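Addendum: as a small extension of the resample example above, here is a sketch of a few other aggregations that follow the same pattern (they assume the same df with its datetime index):

>>> df.resample('W').sum()       # weekly totals
>>> df.resample('M').max()       # monthly maxima
>>> df.rolling(window=7).mean()  # 7-day rolling average, handy for smoothing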
How to close the ResultSet cursor automatically, after commit in JDBC?
ResultSet holdability determines whether the ResultSet objects (cursors) should be closed or held open when a transaction (that contains the said cursor/ResultSet object) is committed using the commit() method of the Connection interface.

The ResultSet interface provides two values to specify the holdability, namely CLOSE_CURSORS_AT_COMMIT and HOLD_CURSORS_OVER_COMMIT.

If the holdability of the ResultSet object is set to CLOSE_CURSORS_AT_COMMIT, then whenever you commit/save a transaction using the commit() method of the Connection interface, the ResultSet objects created in the current transaction (that are already open) will be closed.

Therefore, if you need to close the ResultSet cursor automatically after the commit, set the ResultSet holdability to CLOSE_CURSORS_AT_COMMIT using the setHoldability() method of the Connection interface.

Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below −

CREATE TABLE MyPlayers(
   ID INT,
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Date_Of_Birth date,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255),
   PRIMARY KEY (ID)
);

Now, we will insert 7 records in the MyPlayers table using INSERT statements −

insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');
insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');

Following JDBC program demonstrates how to close the ResultSet cursor immediately after commit.
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class ResultSetHoldability_CloseCursorsAtCommit {
   public static void main(String args[]) throws SQLException {
      //Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      //Getting the connection
      String url = "jdbc:mysql://localhost/mydatabase";
      Connection con = DriverManager.getConnection(url, "root", "password");
      System.out.println("Connection established......");
      //Setting the auto commit false
      con.setAutoCommit(false);
      //Setting the holdability to CLOSE_CURSORS_AT_COMMIT
      con.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
      //Creating a Statement object
      Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
      //Retrieving the data
      ResultSet rs = stmt.executeQuery("select * from MyPlayers");
      System.out.println("Contents of the table");
      while(rs.next()) {
         System.out.print("ID: "+rs.getString("ID")+", ");
         System.out.print("First_Name: "+rs.getString("First_Name")+", ");
         System.out.print("Last_Name: "+rs.getString("Last_Name")+", ");
         System.out.print("Date_Of_Birth: "+rs.getString("Date_Of_Birth")+", ");
         System.out.print("Place_Of_Birth: "+rs.getString("Place_Of_Birth")+", ");
         System.out.print("Country: "+rs.getString("Country"));
         System.out.println("");
      }
      //Inserting a new row
      rs.moveToInsertRow();
      rs.updateInt(1, 8);
      rs.updateString(2, "Ishant");
      rs.updateString(3, "Sharma");
      rs.updateDate(4, new Date(904694400000L));
      rs.updateString(5, "Delhi");
      rs.updateString(6, "India");
      rs.insertRow();
      //Committing the transaction
      con.commit();
      boolean bool = rs.isClosed();
      if(bool) {
         System.out.println("ResultSet object is closed");
      } else {
         System.out.println("ResultSet object is open");
      }
   }
}

Connection established......
Contents of the table
ID: 1, First_Name: Shikhar, Last_Name: Dhawan, Date_Of_Birth: 1981-12-05, Place_Of_Birth: Delhi, Country: India
ID: 2, First_Name: Jonathan, Last_Name: Trott, Date_Of_Birth: 1981-04-22, Place_Of_Birth: CapeTown, Country: SouthAfrica
ID: 3, First_Name: Kumara, Last_Name: Sangakkara, Date_Of_Birth: 1977-10-27, Place_Of_Birth: Matale, Country: Srilanka
ID: 4, First_Name: Virat, Last_Name: Kohli, Date_Of_Birth: 1988-11-05, Place_Of_Birth: Delhi, Country: India
ID: 5, First_Name: Rohit, Last_Name: Sharma, Date_Of_Birth: 1987-04-30, Place_Of_Birth: Nagpur, Country: India
ID: 6, First_Name: Ravindra, Last_Name: Jadeja, Date_Of_Birth: 1988-12-06, Place_Of_Birth: Nagpur, Country: India
ID: 7, First_Name: James, Last_Name: Anderson, Date_Of_Birth: 1982-06-30, Place_Of_Birth: Burnley, Country: England
ResultSet object is closed
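For contrast, here is a minimal sketch of the opposite setting (the surrounding setup is the same as above; per the JDBC contract the cursor should then survive the commit, although the exact behavior still depends on the driver):

//Setting the holdability to HOLD_CURSORS_OVER_COMMIT keeps the ResultSet open across con.commit()
con.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
//... create the Statement, run the query and insert the row exactly as above ...
con.commit();
System.out.println(rs.isClosed()); //expected: false, the cursor is still open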
Align the components with Bootstrap
To align components like nav links, forms, buttons, or text to the left or right in a navbar, use the utility classes .navbar-left or .navbar-right. Both classes will add a CSS float in the specified direction.

You can try to run the following code to align components −

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <nav class = "navbar navbar-default" role = "navigation" style="background: orange;">
         <div class = "navbar-header">
            <a class = "navbar-brand" href = "#">Alignment</a>
         </div>
         <div>
            <!--Left Align-->
            <ul class = "nav navbar-nav navbar-left">
               <li class = "dropdown">
                  <a href = "#" class = "dropdown-toggle" data-toggle = "dropdown">
                     Java
                     <b class = "caret"></b>
                  </a>
                  <ul class = "dropdown-menu">
                     <li><a href = "#">jmeter</a></li>
                     <li><a href = "#">EJB</a></li>
                     <li><a href = "#">Jasper Report</a></li>
                     <li class = "divider"></li>
                     <li><a href = "#">Separated link</a></li>
                     <li class = "divider"></li>
                     <li><a href = "#">One more separated link</a></li>
                  </ul>
               </li>
            </ul>
            <form class = "navbar-form navbar-left" role = "search">
               <button type = "submit" class = "btn btn-default">Left align-Submit Button</button>
            </form>
            <p class = "navbar-text navbar-left">Left align-Text</p>
            <!--Right Align-->
            <ul class = "nav navbar-nav navbar-right">
               <li class = "dropdown">
                  <a href = "#" class = "dropdown-toggle" data-toggle = "dropdown">
                     Java
                     <b class = "caret"></b>
                  </a>
                  <ul class = "dropdown-menu">
                     <li><a href = "#">jmeter</a></li>
                     <li><a href = "#">EJB</a></li>
                     <li><a href = "#">Jasper Report</a></li>
                     <li class = "divider"></li>
                     <li><a href = "#">Separated link</a></li>
                     <li class = "divider"></li>
                     <li><a href = "#">One more separated link</a></li>
                  </ul>
               </li>
            </ul>
            <form class = "navbar-form navbar-right" role = "search">
               <button type = "submit" class = "btn btn-default">
                  Right align-Submit Button
               </button>
            </form>
            <p class = "navbar-text navbar-right">Right align-Text</p>
         </div>
      </nav>
   </body>
</html>
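Reduced to its essentials, the pattern is just the two utility classes on any navbar component. A minimal sketch (note these classes belong to Bootstrap 3; Bootstrap 4 dropped .navbar-left/.navbar-right in favor of flexbox utilities such as .ml-auto):

<nav class = "navbar navbar-default">
   <p class = "navbar-text navbar-left">Left align-Text</p>
   <p class = "navbar-text navbar-right">Right align-Text</p>
</nav>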
How to sort a random number array in Java?
To sort an array in Java, you need to compare each element of the array to all the remaining elements and verify whether it is greater; if so, swap them.

One solution is to use two (nested) loops, where the inner loop starts with i+1 (where i is the variable of the outer loop) to avoid repeated comparisons.

import java.util.Arrays;
import java.util.Scanner;

public class ArrayInOrder {
   public static void main(String args[]) {
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the size of the array that is to be created::");
      int size = sc.nextInt();
      int[] myArray = new int[size];
      System.out.println("Enter the elements of the array ::");

      for(int i = 0; i<size; i++) {
         myArray[i] = sc.nextInt();
      }

      for(int i = 0; i<size-1; i++) {
         for (int j = i+1; j<myArray.length; j++) {
            if(myArray[i] > myArray[j]) {
               int temp = myArray[i];
               myArray[i] = myArray[j];
               myArray[j] = temp;
            }
         }
      }
      System.out.println(Arrays.toString(myArray));
   }
}

Enter the size of the array that is to be created ::
6
Enter the elements of the array ::
54
63
14
78
2
3
[2, 3, 14, 54, 63, 78]
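In practice, the standard library does the same job in one call. A short sketch with the sample values hard-coded for illustration:

import java.util.Arrays;

public class SortWithLibrary {
   public static void main(String[] args) {
      int[] myArray = {54, 63, 14, 78, 2, 3};
      Arrays.sort(myArray); // dual-pivot quicksort for primitives, O(n log n) on average
      System.out.println(Arrays.toString(myArray)); // [2, 3, 14, 54, 63, 78]
   }
}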
Clustering Algorithms - Mean Shift Algorithm
As discussed earlier, it is another powerful clustering algorithm used in unsupervised learning. Unlike K-means clustering, it does not make any assumptions; hence it is a non-parametric algorithm.

The Mean-Shift algorithm basically assigns the datapoints to clusters iteratively by shifting points towards the highest density of datapoints, i.e. the cluster centroid.

The difference between the K-Means algorithm and Mean-Shift is that the latter does not need the number of clusters to be specified in advance, because the number of clusters is determined by the algorithm from the data.

We can understand the working of the Mean-Shift clustering algorithm with the help of the following steps −

Step 1 − First, start with the data points assigned to a cluster of their own.

Step 2 − Next, this algorithm will compute the centroids.

Step 3 − In this step, the locations of the new centroids will be updated.

Step 4 − Now, the process will be iterated and moved to the higher density region.

Step 5 − At last, it will be stopped once the centroids reach a position from where they cannot move further.

Here is a simple example to understand how the Mean-Shift algorithm works. In this example, we are going to first generate a dataset containing three different blobs and after that will apply the Mean-Shift algorithm to see the result.

%matplotlib inline
import numpy as np
from sklearn.cluster import MeanShift
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
from sklearn.datasets.samples_generator import make_blobs
centers = [[3,3,3],[4,5,5],[3,10,10]]
X, _ = make_blobs(n_samples = 700, centers = centers, cluster_std = 0.5)
plt.scatter(X[:,0],X[:,1])
plt.show()

ms = MeanShift()
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
print(cluster_centers)
n_clusters_ = len(np.unique(labels))
print("Estimated clusters:", n_clusters_)
colors = 10*['r.','g.','b.','c.','k.','y.','m.']
for i in range(len(X)):
    plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize = 3)
plt.scatter(cluster_centers[:,0], cluster_centers[:,1],
            marker=".", color='k', s=20, linewidths = 5, zorder=10)
plt.show()

Output

[[ 2.98462798  9.9733794  10.02629344]
 [ 3.94758484  4.99122771  4.99349433]
 [ 3.00788996  3.03851268  2.99183033]]
Estimated clusters: 3

The following are some advantages of the Mean-Shift clustering algorithm −

It does not need to make any model assumption, unlike K-means or Gaussian mixture.

It can also model complex clusters which have a nonconvex shape.

It only needs one parameter, named bandwidth, which automatically determines the number of clusters.

There is no issue of local minima as in K-means.

No problem is generated by outliers.

The following are some disadvantages of the Mean-Shift clustering algorithm −

The Mean-Shift algorithm does not work well in the case of high dimensions, where the number of clusters changes abruptly.

We do not have any direct control over the number of clusters, but in some applications we need a specific number of clusters.

It cannot differentiate between meaningful and meaningless modes.
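One practical note on the bandwidth parameter mentioned above: scikit-learn can estimate it from the data instead of relying on the default. A short sketch reusing X from the example (the quantile and n_samples values are illustrative):

from sklearn.cluster import MeanShift, estimate_bandwidth

# Estimate a bandwidth from the data; a larger quantile yields fewer, broader clusters
bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)  # bin_seeding speeds up the search for seeds
ms.fit(X)
print("Estimated clusters:", len(ms.cluster_centers_))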
[ { "code": null, "e": 2502, "s": 2304, "text": "As discussed earlier, it is another powerful clustering algorithm used in unsupervised learning. Unlike K-means clustering, it does not make any assumptions; hence it is a non-parametric algorithm." }, { "code": null, "e": 2668, "s": 2502, "text": "Mean-shift algorithm basically assigns the datapoints to the clusters iteratively by shifting points towards the highest density of datapoints i.e. cluster centroid." }, { "code": null, "e": 2880, "s": 2668, "text": "The difference between K-Means algorithm and Mean-Shift is that later one does not need to specify the number of clusters in advance because the number of clusters will be determined by the algorithm w.r.t data." }, { "code": null, "e": 2980, "s": 2880, "text": "We can understand the working of Mean-Shift clustering algorithm with the help of following steps −" }, { "code": null, "e": 3059, "s": 2980, "text": "Step 1 − First, start with the data points assigned to a cluster of their own." }, { "code": null, "e": 3138, "s": 3059, "text": "Step 1 − First, start with the data points assigned to a cluster of their own." }, { "code": null, "e": 3196, "s": 3138, "text": "Step 2 − Next, this algorithm will compute the centroids." }, { "code": null, "e": 3254, "s": 3196, "text": "Step 2 − Next, this algorithm will compute the centroids." }, { "code": null, "e": 3320, "s": 3254, "text": "Step 3 − In this step, location of new centroids will be updated." }, { "code": null, "e": 3386, "s": 3320, "text": "Step 3 − In this step, location of new centroids will be updated." }, { "code": null, "e": 3469, "s": 3386, "text": "Step 4 − Now, the process will be iterated and moved to the higher density region." }, { "code": null, "e": 3552, "s": 3469, "text": "Step 4 − Now, the process will be iterated and moved to the higher density region." }, { "code": null, "e": 3661, "s": 3552, "text": "Step 5 − At last, it will be stopped once the centroids reach at position from where it cannot move further." }, { "code": null, "e": 3770, "s": 3661, "text": "Step 5 − At last, it will be stopped once the centroids reach at position from where it cannot move further." }, { "code": null, "e": 3993, "s": 3770, "text": "It is a simple example to understand how Mean-Shift algorithm works. In this example, we are going to first generate 2D dataset containing 4 different blobs and after that will apply Mean-Shift algorithm to see the result." 
}, { "code": null, "e": 4358, "s": 3993, "text": "%matplotlib inline\nimport numpy as np\nfrom sklearn.cluster import MeanShift\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nstyle.use(\"ggplot\")\nfrom sklearn.datasets.samples_generator import make_blobs\ncenters = [[3,3,3],[4,5,5],[3,10,10]]\nX, _ = make_blobs(n_samples = 700, centers = centers, cluster_std = 0.5)\nplt.scatter(X[:,0],X[:,1])\nplt.show()\n" }, { "code": null, "e": 4809, "s": 4358, "text": "ms = MeanShift()\nms.fit(X)\nlabels = ms.labels_\ncluster_centers = ms.cluster_centers_\nprint(cluster_centers)\nn_clusters_ = len(np.unique(labels))\nprint(\"Estimated clusters:\", n_clusters_)\ncolors = 10*['r.','g.','b.','c.','k.','y.','m.']\nfor i in range(len(X)):\n plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize = 3)\nplt.scatter(cluster_centers[:,0],cluster_centers[:,1],\n marker=\".\",color='k', s=20, linewidths = 5, zorder=10)\nplt.show()" }, { "code": null, "e": 4816, "s": 4809, "text": "Output" }, { "code": null, "e": 4949, "s": 4816, "text": "[[ 2.98462798 9.9733794 10.02629344]\n[ 3.94758484 4.99122771 4.99349433]\n[ 3.00788996 3.03851268 2.99183033]]\nEstimated clusters: 3\n" }, { "code": null, "e": 5020, "s": 4949, "text": "The following are some advantages of Mean-Shift clustering algorithm −" }, { "code": null, "e": 5106, "s": 5020, "text": "It does not need to make any model assumption as like in K-means or Gaussian mixture." }, { "code": null, "e": 5192, "s": 5106, "text": "It does not need to make any model assumption as like in K-means or Gaussian mixture." }, { "code": null, "e": 5259, "s": 5192, "text": "It can also model the complex clusters which have nonconvex shape." }, { "code": null, "e": 5326, "s": 5259, "text": "It can also model the complex clusters which have nonconvex shape." }, { "code": null, "e": 5425, "s": 5326, "text": "It only needs one parameter named bandwidth which automatically determines the number of clusters." }, { "code": null, "e": 5524, "s": 5425, "text": "It only needs one parameter named bandwidth which automatically determines the number of clusters." }, { "code": null, "e": 5578, "s": 5524, "text": "There is no issue of local minima as like in K-means." }, { "code": null, "e": 5632, "s": 5578, "text": "There is no issue of local minima as like in K-means." }, { "code": null, "e": 5668, "s": 5632, "text": "No problem generated from outliers." }, { "code": null, "e": 5704, "s": 5668, "text": "No problem generated from outliers." }, { "code": null, "e": 5778, "s": 5704, "text": "The following are some disadvantages of Mean-Shift clustering algorithm −" }, { "code": null, "e": 5888, "s": 5778, "text": "Mean-shift algorithm does not work well in case of high dimension, where number of clusters changes abruptly." }, { "code": null, "e": 6013, "s": 5888, "text": "We do not have any direct control on the number of clusters but in some applications, we need a specific number of clusters." }, { "code": null, "e": 6138, "s": 6013, "text": "We do not have any direct control on the number of clusters but in some applications, we need a specific number of clusters." }, { "code": null, "e": 6204, "s": 6138, "text": "It cannot differentiate between meaningful and meaningless modes." }, { "code": null, "e": 6270, "s": 6204, "text": "It cannot differentiate between meaningful and meaningless modes." }, { "code": null, "e": 6307, "s": 6270, "text": "\n 168 Lectures \n 13.5 hours \n" }, { "code": null, "e": 6330, "s": 6307, "text": " Er. 
Himanshu Vasishta" }, { "code": null, "e": 6366, "s": 6330, "text": "\n 64 Lectures \n 10.5 hours \n" }, { "code": null, "e": 6394, "s": 6366, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 6428, "s": 6394, "text": "\n 91 Lectures \n 10 hours \n" }, { "code": null, "e": 6445, "s": 6428, "text": " Abhilash Nelson" }, { "code": null, "e": 6478, "s": 6445, "text": "\n 54 Lectures \n 6 hours \n" }, { "code": null, "e": 6500, "s": 6478, "text": " Abhishek And Pukhraj" }, { "code": null, "e": 6533, "s": 6500, "text": "\n 49 Lectures \n 5 hours \n" }, { "code": null, "e": 6555, "s": 6533, "text": " Abhishek And Pukhraj" }, { "code": null, "e": 6588, "s": 6555, "text": "\n 35 Lectures \n 4 hours \n" }, { "code": null, "e": 6610, "s": 6588, "text": " Abhishek And Pukhraj" }, { "code": null, "e": 6617, "s": 6610, "text": " Print" }, { "code": null, "e": 6628, "s": 6617, "text": " Add Notes" } ]
Copy and paste to your clipboard using the pyperclip module in Python
We will be using the pyperclip module in order to copy and paste content to the clipboard. It is cross-platform and works on both Python 2 and Python 3.

Copying and pasting from and to the clipboard can be very useful when you want the output of your program to be pasted elsewhere, in a different file or software.

The pyperclip module does not come packaged with Python. In order to access it, you must first download and install it. You can do this using the PIP package manager.

Launch your terminal and type the command below to install pyperclip:

pip install pyperclip

Once you have it installed, you must import it into your Python script. We can do this using the import statement:

import pyperclip

In order to copy text to the clipboard, we use the pyperclip.copy() function.

import pyperclip
pyperclip.copy("Hello world!")

The above lines of code will copy "Hello world!" to your clipboard, ready to be pasted.

import pyperclip
text = pyperclip.paste()
print(text)

Hello world!

We use the pyperclip.paste() function to paste the latest content present in the clipboard.

Sometimes, while working on a project, you might want to wait until a new message has been copied before pasting. In order to achieve this, we use the pyperclip.waitForNewPaste() function.

import pyperclip
pyperclip.copy("Hello world!")
text = pyperclip.paste()
print(text)
pyperclip.copy('Hello world!')
text = pyperclip.waitForNewPaste()
print(text)

Hello world! Random message copied

Note − In the above example, the program terminates only after it prints out the newly copied text. The newly copied text can be anything but "Hello world!".

If you want to just paste, even if the text is the same as the text already in the clipboard, go for the pyperclip.waitForPaste() function instead.

Data stored in the clipboard and pasted is always of the String datatype.

You now know how to copy and paste text (string data) to and from your clipboard for quick access.

You can use this for developing simple automation tools, for example ones that help you build tables where data has to be constantly copied and pasted.

There are various other scenarios you can use this module in. And since it is cross-platform, you can work with it on Linux, macOS and Windows.
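As a short hedged illustration of the difference between the two waiting functions (what actually gets printed depends on what you copy while the script runs):

import pyperclip

# waitForPaste() returns as soon as the clipboard holds non-empty text,
# even if its content has not changed since the last paste.
text = pyperclip.waitForPaste()
print("Clipboard currently holds:", text)

# waitForNewPaste() blocks until the clipboard content *changes*,
# so copy something new (e.g. from another window) to unblock it.
new_text = pyperclip.waitForNewPaste()
print("Newly copied text:", new_text)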
[ { "code": null, "e": 1215, "s": 1062, "text": "We will be using the pyperclip module in order to copy and paste content to the clipboard. It is cross−platform and works on both Python 2 and Python 3." }, { "code": null, "e": 1375, "s": 1215, "text": "Copying and pasting from and to the clipboard could be very useful when you want the output of the data to be pasted elsewhere in a different file or software." }, { "code": null, "e": 1542, "s": 1375, "text": "The pyperclip module does not come packaged with Python. In order to access it, you must first download and install it. You can do this using the PIP package manager." }, { "code": null, "e": 1611, "s": 1542, "text": "Launch your terminal and type the command below to install pyperclip" }, { "code": null, "e": 1633, "s": 1611, "text": "pip install pyperclip" }, { "code": null, "e": 1703, "s": 1633, "text": "Once you have it installed, you must import it to your python script." }, { "code": null, "e": 1744, "s": 1703, "text": "We can do this using the import command," }, { "code": null, "e": 1761, "s": 1744, "text": "import pyperclip" }, { "code": null, "e": 1838, "s": 1761, "text": "In order to copy text to the clipboard we use the pyperclip.copy() function." }, { "code": null, "e": 1886, "s": 1838, "text": "import pyperclip\npyperclip.copy(\"Hello world!\")" }, { "code": null, "e": 1982, "s": 1886, "text": "The above lines of code will copy “Hello world!” to your clipboard and would be ready to paste." }, { "code": null, "e": 2036, "s": 1982, "text": "import pyperclip\ntext = pyperclip.paste()\nprint(text)" }, { "code": null, "e": 2049, "s": 2036, "text": "Hello world!" }, { "code": null, "e": 2141, "s": 2049, "text": "We use the pyperclip.paste() function to paste the latest content present in the clipboard." }, { "code": null, "e": 2251, "s": 2141, "text": "Sometimes while working on a project you might want to paste new messages after you copy a different message." }, { "code": null, "e": 2327, "s": 2251, "text": "In order to achieve this, we use the pyperclip. waitForNewPaste() function." }, { "code": null, "e": 2490, "s": 2327, "text": "import pyperclip\npyperclip.copy(\"Hello world!\")\ntext = pyperclip.paste()\nprint(text)\npyperclip.copy('Hello world!')\ntext = pyperclip.waitForNewPaste()\nprint(text)" }, { "code": null, "e": 2525, "s": 2490, "text": "Hello world! Random message copied" }, { "code": null, "e": 2682, "s": 2525, "text": "Note − In the above example, the program would terminate after it prints out the new copies text. The new copied text should be anything but “Hello world!”." }, { "code": null, "e": 2829, "s": 2682, "text": "If you want to just paste, even if the text is same as the already existing text in the clip board, just go for pyperclip.waitForPaste() function." }, { "code": null, "e": 2898, "s": 2829, "text": "Data stored in the clipboard and pasted is always a String datatype." }, { "code": null, "e": 2995, "s": 2898, "text": "You now know how to copy and paste text or string datatype into your clipboard for quick access." }, { "code": null, "e": 3130, "s": 2995, "text": "You can use this for developing simple automation tools that help you build tables, where data has to be constantly copied and pasted." }, { "code": null, "e": 3273, "s": 3130, "text": "There are various other scenarios you can use this module in. And since it’s cross−platform, you can work with it on Linux, MacOS and Windows." } ]
Bootstrap 5 | Button group - GeeksforGeeks
29 Jul, 2020

Bootstrap 5 is the latest major release by Bootstrap, in which they have revamped the UI and made various changes. The button group is a component provided by Bootstrap 5 which helps to combine a series of buttons on a single line. All types of buttons are supported by it.

Syntax:

<div class="btn-group"> Buttons... </div>

Types: Following are the nine types of buttons available in Bootstrap 5:

btn-primary
btn-secondary
btn-success
btn-danger
btn-warning
btn-info
btn-light
btn-dark
btn-link

Horizontally arranged button groups: The .btn-group class is used to create horizontally arranged button groups.

Example: This example shows the working of a horizontally arranged button group in Bootstrap 5.
<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>

<body>
    <div style="text-align: center; width: 600px; margin-top: 100px;">
        <h1 style="color: green;">
            GeeksforGeeks
        </h1>
    </div>
    <div style="width: 600px; height: 200px; margin: 20px; text-align: center;">
        <div class="btn-group">
            <button type="button" class="btn btn-primary">Primary</button>
            <button type="button" class="btn btn-secondary">Secondary</button>
            <button type="button" class="btn btn-success">Success</button>
            <button type="button" class="btn btn-danger">Danger</button>
        </div>
        <div class="btn-group" style="margin-top: 10px;">
            <button type="button" class="btn btn-warning">Warning</button>
            <button type="button" class="btn btn-info">Info</button>
            <button type="button" class="btn btn-light">Light</button>
            <button type="button" class="btn btn-dark">Dark</button>
            <button type="button" class="btn btn-link">Link</button>
        </div>
    </div>
</body>

</html>

Output:

Vertically arranged button groups: The .btn-group-vertical class is used on the parent div to create a vertical button group.

Example: This example shows the working of a vertically arranged button group in Bootstrap 5.
<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>

<body style="text-align:center;">
    <div class="container mt-3">
        <h1 style="color:green;">
            GeeksforGeeks
        </h1>
        <div class="btn-group-vertical">
            <button type="button" class="btn btn-success">HTML</button>
            <button type="button" class="btn btn-primary">CSS</button>
            <button type="button" class="btn btn-danger">JavaScript</button>
        </div>
    </div>
</body>

</html>

Output:

Button group sizing: The whole button group can be given the same size by including the class btn-group-* (where * can be sm, md or lg) on the .btn-group parent element, instead of including sizing classes on each button.

Example: This example shows the working of button sizes with a button group in Bootstrap 5.
<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>

<body style="text-align:center;">
    <div class="container mt-3">
        <h1 style="color:green;">
            GeeksforGeeks
        </h1>
        <div class="container">
            <div class="btn-group btn-group-lg">
                <button type="button" class="btn btn-success">HTML</button>
                <button type="button" class="btn btn-dark">CSS</button>
                <button type="button" class="btn btn-secondary">JavaScript</button>
            </div>
            <br><br>
            <div class="btn-group btn-group-md">
                <button type="button" class="btn btn-success">HTML</button>
                <button type="button" class="btn btn-dark">CSS</button>
                <button type="button" class="btn btn-secondary">JavaScript</button>
            </div>
            <br><br>
            <div class="btn-group btn-group-sm">
                <button type="button" class="btn btn-success">HTML</button>
                <button type="button" class="btn btn-dark">CSS</button>
                <button type="button" class="btn btn-secondary">JavaScript</button>
            </div>
        </div>
    </div>
</body>

</html>

Output:

Nesting button groups and making dropdown menus: A button group can be nested within another button group, and dropdown menus can be created this way.

Single button dropdown:

Example:

<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
    <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js"
            integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo"
            crossorigin="anonymous">
    </script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js"
            integrity="sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/"
            crossorigin="anonymous">
    </script>
</head>

<body style="text-align:center;">
    <div class="container mt-3">
        <h1 style="color:green;">
            GeeksforGeeks
        </h1>
        <div class="container">
            <div class="btn-group">
                <button type="button" class="btn btn-success">HTML</button>
                <button type="button" class="btn btn-success btn-group">CSS</button>
                <div class="btn-group">
                    <div class="dropdown">
                        <button type="button"
                                class="btn btn-success dropdown-toggle"
                                data-toggle="dropdown">
                            JavaScript<span class="caret"></span>
                        </button>
                        <ul class="dropdown-menu" role="menu">
                            <li><a class="dropdown-item" href="#">React</a></li>
                            <li><a class="dropdown-item" href="#">Vue</a></li>
                        </ul>
                    </div>
                </div>
            </div>
        </div>
    </div>
</body>

</html>

Output:

Split button dropdown:

Example:

<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
    <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js"
            integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo"
            crossorigin="anonymous">
    </script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js"
            integrity="sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/"
            crossorigin="anonymous">
    </script>
</head>

<body style="text-align:center;">
    <div class="container mt-3">
        <h1 style="color:green;">
            GeeksforGeeks
        </h1>
        <div class="container">
            <div class="btn-group">
                <button type="button" class="btn btn-success">HTML</button>
                <button type="button" class="btn btn-primary btn-group">CSS</button>
                <div class="btn-group">
                    <button type="button" class="btn btn-secondary">JavaScript</button>
                    <button type="button"
                            class="btn btn-dark dropdown-toggle"
                            data-toggle="dropdown">
                        <span class="caret"></span>
                    </button>
                    <ul class="dropdown-menu" role="menu">
                        <li><a class="dropdown-item" href="#">React</a></li>
                        <li><a class="dropdown-item" href="#">Vue</a></li>
                    </ul>
                </div>
            </div>
        </div>
    </div>
</body>

</html>

Output:

Bootstrap 5 also supports the split button vertical dropdown.

Example:

<!DOCTYPE html>
<html>

<head>
    <title>
        Bootstrap 5 | Buttons group
    </title>

    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
    <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js"
            integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo"
            crossorigin="anonymous">
    </script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js"
            integrity="sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/"
            crossorigin="anonymous">
    </script>
</head>

<body style="text-align:center;">
    <div class="container mt-3">
        <h1 style="color:green;">
            GeeksforGeeks
        </h1>
        <div class="container">
            <div class="btn-group-vertical">
                <button type="button" class="btn btn-info">HTML</button>
                <button type="button" class="btn btn-danger">CSS</button>
                <div class="btn-group">
                    <button type="button"
                            class="btn btn-secondary dropdown-toggle"
                            data-toggle="dropdown">
                        JavaScript
                    </button>
                    <ul class="dropdown-menu" role="menu">
                        <li><a class="dropdown-item" href="#">React</a></li>
                        <li><a class="dropdown-item" href="#">Vue</a></li>
                    </ul>
                </div>
            </div>
        </div>
    </div>
</body>

</html>

Output:
[ { "code": null, "e": 29795, "s": 29767, "text": "\n29 Jul, 2020" }, { "code": null, "e": 30068, "s": 29795, "text": "Bootstrap 5 is the latest major release by Bootstrap in which they have revamped the UI and made various changes. Button group is a component provided by Bootstrap 5 which helps to combine the buttons in a series in a single line. All types of buttons are supported by it." }, { "code": null, "e": 30076, "s": 30068, "text": "Syntax:" }, { "code": null, "e": 30117, "s": 30076, "text": "<div class=\"btn-group\"> Buttons... <div>" }, { "code": null, "e": 30190, "s": 30117, "text": "Types: Following are the nine types of buttons available in Bootstrap 5:" }, { "code": null, "e": 30202, "s": 30190, "text": "btn-primary" }, { "code": null, "e": 30216, "s": 30202, "text": "btn-secondary" }, { "code": null, "e": 30228, "s": 30216, "text": "btn-success" }, { "code": null, "e": 30239, "s": 30228, "text": "btn-danger" }, { "code": null, "e": 30251, "s": 30239, "text": "btn-warning" }, { "code": null, "e": 30260, "s": 30251, "text": "btn-info" }, { "code": null, "e": 30270, "s": 30260, "text": "btn-light" }, { "code": null, "e": 30279, "s": 30270, "text": "btn-dark" }, { "code": null, "e": 30288, "s": 30279, "text": "btn-link" }, { "code": null, "e": 30405, "s": 30292, "text": "Horizontally arranged button groups: The .btn-group class is used to create horizontally arranged button groups." }, { "code": null, "e": 32148, "s": 30405, "text": "Example: This example uses show the working of horizontally arranged button group in Bootstrap 5.<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> </head> <body> <div style=\"text-align: center; width: 600px; margin-top:100px;\"> <h1 style=\"color: green;\"> GeeksforGeeks </h1> </div> <div style=\"width: 600px;height: 200px; margin:20px;text-align: center;\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-primary\"> Primary</button> <button type=\"button\" class=\"btn btn-secondary\"> Secondary</button> <button type=\"button\" class=\"btn btn-success\"> Success</button> <button type=\"button\" class=\"btn btn-danger\"> Danger</button> </div> <div class=\"btn-group\" style=\"margin-top: 10px;\"> <button type=\"button\" class=\"btn btn-warning\"> Warning</button> <button type=\"button\" class=\"btn btn-info\"> Info</button> <button type=\"button\" class=\"btn btn-light\"> Light</button> <button type=\"button\" class=\"btn btn-dark\"> Dark</button> <button type=\"button\" class=\"btn btn-link\"> Link</button> </div> </div></body> </html> Output:" }, { "code": null, "e": 32246, "s": 32148, "text": "Example: This example uses show the working of horizontally arranged button group in Bootstrap 5." 
}, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> </head> <body> <div style=\"text-align: center; width: 600px; margin-top:100px;\"> <h1 style=\"color: green;\"> GeeksforGeeks </h1> </div> <div style=\"width: 600px;height: 200px; margin:20px;text-align: center;\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-primary\"> Primary</button> <button type=\"button\" class=\"btn btn-secondary\"> Secondary</button> <button type=\"button\" class=\"btn btn-success\"> Success</button> <button type=\"button\" class=\"btn btn-danger\"> Danger</button> </div> <div class=\"btn-group\" style=\"margin-top: 10px;\"> <button type=\"button\" class=\"btn btn-warning\"> Warning</button> <button type=\"button\" class=\"btn btn-info\"> Info</button> <button type=\"button\" class=\"btn btn-light\"> Light</button> <button type=\"button\" class=\"btn btn-dark\"> Dark</button> <button type=\"button\" class=\"btn btn-link\"> Link</button> </div> </div></body> </html> ", "e": 33885, "s": 32246, "text": null }, { "code": null, "e": 33893, "s": 33885, "text": "Output:" }, { "code": null, "e": 35070, "s": 33893, "text": "Vertically arranged button groups: The .btn-group-vertical class is used in parent div to create vertical button group.Example: This example uses show the working of vertical arranged button group in Bootstrap 5.<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"btn-group-vertical\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-primary\"> CSS </button> <button type=\"button\" class=\"btn btn-danger\"> JavaScript </button> </div> </div></body> </html>Output:" }, { "code": null, "e": 35164, "s": 35070, "text": "Example: This example uses show the working of vertical arranged button group in Bootstrap 5." 
}, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"btn-group-vertical\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-primary\"> CSS </button> <button type=\"button\" class=\"btn btn-danger\"> JavaScript </button> </div> </div></body> </html>", "e": 36122, "s": 35164, "text": null }, { "code": null, "e": 36130, "s": 36122, "text": "Output:" }, { "code": null, "e": 38492, "s": 36130, "text": "Button group sizing: The whole button group can be given the same size by including the class btn-group-* (* could be sm, md or lg) in the .btn-group parent element, instead of including sizing classes in each button.Example: This example uses show the working of button sizes with button group in Bootstrap 5.<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> </head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group btn-group-lg\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> <br><br> <div class=\"btn-group btn-group-md\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> <br><br> <div class=\"btn-group btn-group-sm\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> </div> </div></body> </html>Output:" }, { "code": null, "e": 38586, "s": 38492, "text": "Example: This example uses show the working of button sizes with button group in Bootstrap 5." 
}, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> </head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group btn-group-lg\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> <br><br> <div class=\"btn-group btn-group-md\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> <br><br> <div class=\"btn-group btn-group-sm\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-dark\"> CSS </button> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> </div> </div> </div></body> </html>", "e": 40631, "s": 38586, "text": null }, { "code": null, "e": 40639, "s": 40631, "text": "Output:" }, { "code": null, "e": 42773, "s": 40639, "text": "Nesting button groups and making dropdown menus: A button group can be nested within another button group and dropdown menus can be created this way.Single button dropdown:Example:<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-success btn-group\"> CSS </button> <div class=\"btn-group\"> <div class=\"dropdown\"> <button type=\"button\" class=\"btn btn-success dropdown-toggle\" data-toggle=\"dropdown\"> JavaScript<span class=\"caret\"></span> </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li><a class=\"dropdown-item\" href=\"#\">React</a></li> <li><a class=\"dropdown-item\" href=\"#\">Vue</a></li> </ul> </div> </div> </div> </div> </div></body> </html> Output:" }, { "code": null, "e": 42805, "s": 42773, "text": "Single button dropdown:Example:" }, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script 
src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-success btn-group\"> CSS </button> <div class=\"btn-group\"> <div class=\"dropdown\"> <button type=\"button\" class=\"btn btn-success dropdown-toggle\" data-toggle=\"dropdown\"> JavaScript<span class=\"caret\"></span> </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li><a class=\"dropdown-item\" href=\"#\">React</a></li> <li><a class=\"dropdown-item\" href=\"#\">Vue</a></li> </ul> </div> </div> </div> </div> </div></body> </html> ", "e": 44752, "s": 42805, "text": null }, { "code": null, "e": 44760, "s": 44752, "text": "Output:" }, { "code": null, "e": 46940, "s": 44760, "text": "Split button dropdown:Example:<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-primary btn-group\"> CSS </button> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> <button type=\"button\" class=\"btn btn-dark dropdown-toggle\" data-toggle=\"dropdown\"> <span class=\"caret\"></span> </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li> <a class=\"dropdown-item\" href=\"#\"> React</a></li> <li><a class=\"dropdown-item\" href=\"#\"> Vue</a></li> </ul> </div> </div> </div> </div></body> </html>Output:" }, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" 
integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-success\"> HTML </button> <button type=\"button\" class=\"btn btn-primary btn-group\"> CSS </button> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-secondary\"> JavaScript </button> <button type=\"button\" class=\"btn btn-dark dropdown-toggle\" data-toggle=\"dropdown\"> <span class=\"caret\"></span> </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li> <a class=\"dropdown-item\" href=\"#\"> React</a></li> <li><a class=\"dropdown-item\" href=\"#\"> Vue</a></li> </ul> </div> </div> </div> </div></body> </html>", "e": 49083, "s": 46940, "text": null }, { "code": null, "e": 49091, "s": 49083, "text": "Output:" }, { "code": null, "e": 51059, "s": 49091, "text": "Bootstrap 5 also supports Split Button Vertical Dropdown.Example:<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group-vertical\"> <button type=\"button\" class=\"btn btn-info\"> HTML </button> <button type=\"button\" class=\"btn btn-danger\"> CSS </button> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-secondary dropdown-toggle\" data-toggle=\"dropdown\"> JavaScript </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li><a class=\"dropdown-item\" href=\"#\"> React</a></li> <li><a class=\"dropdown-item\" href=\"#\"> Vue</a></li> </ul> </div> </div> </div> </div></body> </html>Output:" }, { "code": "<!DOCTYPE html><html> <head> <title> Bootstrap 5 | Buttons group </title> <!-- Load Bootstrap --> <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css\" integrity=\"sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I\" crossorigin=\"anonymous\"> <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js\" integrity=\"sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo\" crossorigin=\"anonymous\"> </script> <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/js/bootstrap.min.js\" integrity=\"sha384-oesi62hOLfzrys4LxRF63OJCXdXDipiYWBnvTl9Y9/TRlw5xlKIEHpNyvvDShgf/\" crossorigin=\"anonymous\"> </script></head> <body style=\"text-align:center;\"> <div class=\"container mt-3\"> <h1 style=\"color:green;\"> GeeksforGeeks </h1> <div class=\"container\"> <div class=\"btn-group-vertical\"> <button type=\"button\" class=\"btn btn-info\"> HTML </button> <button type=\"button\" class=\"btn 
btn-danger\"> CSS </button> <div class=\"btn-group\"> <button type=\"button\" class=\"btn btn-secondary dropdown-toggle\" data-toggle=\"dropdown\"> JavaScript </button> <ul class=\"dropdown-menu\" role=\"menu\"> <li><a class=\"dropdown-item\" href=\"#\"> React</a></li> <li><a class=\"dropdown-item\" href=\"#\"> Vue</a></li> </ul> </div> </div> </div> </div></body> </html>", "e": 52955, "s": 51059, "text": null }, { "code": null, "e": 52963, "s": 52955, "text": "Output:" }, { "code": null, "e": 52978, "s": 52963, "text": "Bootstrap-Misc" }, { "code": null, "e": 52988, "s": 52978, "text": "Bootstrap" }, { "code": null, "e": 53005, "s": 52988, "text": "Web Technologies" }, { "code": null, "e": 53103, "s": 53005, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 53153, "s": 53103, "text": "How to change navigation bar color in Bootstrap ?" }, { "code": null, "e": 53182, "s": 53153, "text": "Form validation using jQuery" }, { "code": null, "e": 53223, "s": 53182, "text": "How to pass data into a bootstrap modal?" }, { "code": null, "e": 53279, "s": 53223, "text": "How to align navbar items to the right in Bootstrap 4 ?" }, { "code": null, "e": 53320, "s": 53279, "text": "How to Show Images on Click using HTML ?" }, { "code": null, "e": 53360, "s": 53320, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 53393, "s": 53360, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 53438, "s": 53393, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 53481, "s": 53438, "text": "How to fetch data from an API in ReactJS ?" } ]
PyQt5 QDoubleSpinBox – Setting Stylesheet - GeeksforGeeks
28 Jul, 2020

In this article we will see how we can set a stylesheet on the QDoubleSpinBox. A stylesheet is used to set the color, background and various other styling properties of the double spin box. With a style sheet we can add a border to it and make our own customized double spin box.

In order to do this we will use the setStyleSheet method with the double spin box object.

Syntax : d_spin.setStyleSheet(code)

Argument : It takes a string as argument

Return : It returns None

Below is an example style sheet code

QDoubleSpinBox
{
border : 2px solid black;
background : white;
}
QDoubleSpinBox::hover
{
border : 2px solid green;
background : lightgreen;
}
QDoubleSpinBox::up-arrow
{
border : 1px solid black;
background : blue;
}
QDoubleSpinBox::down-arrow
{
border : 1px solid black;
background : red;
}

Below is the implementation

# importing libraries
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import sys

class Window(QMainWindow):

    def __init__(self):
        super().__init__()

        # setting title
        self.setWindowTitle("Python ")

        # setting geometry
        self.setGeometry(100, 100, 500, 400)

        # calling method
        self.UiComponents()

        # showing all the widgets
        self.show()

    # method for components
    def UiComponents(self):

        # creating double spin box
        d_spin = QDoubleSpinBox(self)

        # setting geometry to the double spin box
        d_spin.setGeometry(100, 100, 150, 40)

        # setting decimal precision
        d_spin.setDecimals(1)

        # step type
        step_type = QAbstractSpinBox.AdaptiveDecimalStepType

        # adaptive step type
        d_spin.setStepType(step_type)

        # setting style sheet to the double spin box
        d_spin.setStyleSheet("QDoubleSpinBox"
                             "{"
                             "border : 2px solid black;"
                             "background : white;"
                             "}"
                             "QDoubleSpinBox::hover"
                             "{"
                             "border : 2px solid green;"
                             "background : lightgreen;"
                             "}"
                             "QDoubleSpinBox::up-arrow"
                             "{"
                             "border : 1px solid black;"
                             "background : blue;"
                             "}"
                             "QDoubleSpinBox::down-arrow"
                             "{"
                             "border : 1px solid black;"
                             "background : red;"
                             "}"
                             )

        # creating a label
        label = QLabel("GeeksforGeeks", self)

        # setting geometry to the label
        label.setGeometry(100, 200, 300, 80)

        # making label multi line
        label.setWordWrap(True)

# create pyqt5 app
App = QApplication(sys.argv)

# create the instance of our Window
window = Window()

# start the app
sys.exit(App.exec())

Output :
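The adjacent string literals above work, but a triple-quoted string is an easier way to write the same stylesheet. The snippet below is just a sketch of this alternative; the style rules themselves are unchanged:

# equivalent stylesheet written as one triple-quoted string;
# Qt style sheets use a CSS-like syntax, so it can be formatted freely
d_spin.setStyleSheet("""
    QDoubleSpinBox {
        border : 2px solid black;
        background : white;
    }
    QDoubleSpinBox::hover {
        border : 2px solid green;
        background : lightgreen;
    }
    QDoubleSpinBox::up-arrow {
        border : 1px solid black;
        background : blue;
    }
    QDoubleSpinBox::down-arrow {
        border : 1px solid black;
        background : red;
    }
""")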
[ { "code": null, "e": 26403, "s": 26375, "text": "\n28 Jul, 2020" }, { "code": null, "e": 26661, "s": 26403, "text": "In this article we will see how we can set stylesheet to the QDoubleSpinBox. Stylesheet is used to set color, background and various styling things of the double spin box. With style sheet we can add border to it and make our own customized double spin box." }, { "code": null, "e": 26747, "s": 26661, "text": "In order to do this we will use setStyleSheet method with the double spin box object." }, { "code": null, "e": 26784, "s": 26747, "text": "Syntax : dd_spin.setStyleSheet(code)" }, { "code": null, "e": 26826, "s": 26784, "text": "Argument : It takes string as as argument" }, { "code": null, "e": 26851, "s": 26826, "text": "Return : It returns None" }, { "code": null, "e": 26889, "s": 26851, "text": "Below is the example style sheet code" }, { "code": null, "e": 27182, "s": 26889, "text": "QDoubleSpinBox\n{\nborder : 2px solid black;\nbackground : white;\n}\nQDoubleSpinBox::hover\n{\nborder : 2px solid green;\nbackground : lightgreen;\n}\nQDoubleSpinBox::up-arrow\n{\nborder : 1px solid black;\nbackground : blue;\n}\nQDoubleSpinBox::down-arrow\n{\nborder : 1px solid black;\nbackground : red;\n}\n\n" }, { "code": null, "e": 27210, "s": 27182, "text": "Below is the implementation" }, { "code": "# importing librariesfrom PyQt5.QtWidgets import * from PyQt5 import QtCore, QtGuifrom PyQt5.QtGui import * from PyQt5.QtCore import * import sys class Window(QMainWindow): def __init__(self): super().__init__() # setting title self.setWindowTitle(\"Python \") # setting geometry self.setGeometry(100, 100, 500, 400) # calling method self.UiComponents() # showing all the widgets self.show() # method for components def UiComponents(self): # creating double spin box d_spin = QDoubleSpinBox(self) # setting geometry to the double spin box d_spin.setGeometry(100, 100, 150, 40) # setting decimal precision d_spin.setDecimals(1) # step type step_type = QAbstractSpinBox.AdaptiveDecimalStepType # adaptive step type d_spin.setStepType(step_type) # setting style sheet to the double spin box d_spin.setStyleSheet(\"QDoubleSpinBox\" \"{\" \"border : 2px solid black;\" \"background : white;\" \"}\" \"QDoubleSpinBox::hover\" \"{\" \"border : 2px solid green;\" \"background : lightgreen;\" \"}\" \"QDoubleSpinBox::up-arrow\" \"{\" \"border : 1px solid black;\" \"background : blue;\" \"}\" \"QDoubleSpinBox::down-arrow\" \"{\" \"border : 1px solid black;\" \"background : red;\" \"}\" ) # creating a label label = QLabel(\"GeeksforGeeks\", self) # setting geometry to the label label.setGeometry(100, 200, 300, 80) # making label multi line label.setWordWrap(True) # create pyqt5 appApp = QApplication(sys.argv) # create the instance of our Windowwindow = Window() # start the appsys.exit(App.exec())", "e": 29466, "s": 27210, "text": null }, { "code": null, "e": 29475, "s": 29466, "text": "Output :" }, { "code": null, "e": 29502, "s": 29475, "text": "Python PyQt-QDoubleSpinBox" }, { "code": null, "e": 29513, "s": 29502, "text": "Python-gui" }, { "code": null, "e": 29525, "s": 29513, "text": "Python-PyQt" }, { "code": null, "e": 29532, "s": 29525, "text": "Python" }, { "code": null, "e": 29630, "s": 29532, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29648, "s": 29630, "text": "Python Dictionary" }, { "code": null, "e": 29680, "s": 29648, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 29702, "s": 29680, "text": "Enumerate() in Python" }, { "code": null, "e": 29744, "s": 29702, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 29770, "s": 29744, "text": "Python String | replace()" }, { "code": null, "e": 29814, "s": 29770, "text": "Reading and Writing to text files in Python" }, { "code": null, "e": 29843, "s": 29814, "text": "*args and **kwargs in Python" }, { "code": null, "e": 29880, "s": 29843, "text": "Create a Pandas DataFrame from Lists" }, { "code": null, "e": 29922, "s": 29880, "text": "How To Convert Python Dictionary To JSON?" } ]
Express.js | app.delete() Function
06 Jun, 2021 The app.delete() function routes HTTP DELETE requests to the specified path, with the given callback function(s) executed when a matching request arrives. Syntax: app.delete(path, callback) Parameters: path: It is the path for which the middleware function is called. callback: It is a middleware function or a series/array of middleware functions. Installation of express module: You can visit the link to Install express module. You can install this package by using this command. npm install express After installing the express module, you can check your express version in the command prompt using the command. npm version express After that, you can just create a folder and add a file, for example, index.js. To run this file you need to run the following command. node index.js Filename: index.js javascript var express = require('express'); var app = express(); var PORT = 3000; app.delete('/', (req, res) => { res.send("DELETE Request Called") }) app.listen(PORT, function(err){ if (err) console.log(err); console.log("Server listening on PORT", PORT); }); Steps to run the program: The project structure will look like this: Make sure you have installed the express module using the following command: npm install express Run the index.js file using the below command: node index.js Output: Server listening on PORT 3000 Now make a DELETE request to http://localhost:3000/ and you will get the following output: DELETE Request Called
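To exercise the route without a browser or Postman, one option is a tiny Python client. This is a minimal sketch, not part of the Express API itself: it assumes the third-party requests package is installed (pip install requests) and that the index.js server above is already running on port 3000.

# test_delete.py: send a DELETE request to the route registered with app.delete('/')
import requests

response = requests.delete("http://localhost:3000/")

# Expected: 200 DELETE Request Called
print(response.status_code, response.text)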
[ { "code": null, "e": 28, "s": 0, "text": "\n06 Jun, 2021" }, { "code": null, "e": 204, "s": 28, "text": "The app.delete() function is used to route the HTTP DELETE requests to the path which is specified as parameter with the callback functions being passed as parameter.Syntax: " }, { "code": null, "e": 231, "s": 204, "text": "app.delete(path, callback)" }, { "code": null, "e": 245, "s": 231, "text": "Parameters: " }, { "code": null, "e": 399, "s": 245, "text": "path: It is the path for which the middleware function is being called.callback: It is a middleware function or a series/array of middleware functions. " }, { "code": null, "e": 471, "s": 399, "text": "path: It is the path for which the middleware function is being called." }, { "code": null, "e": 554, "s": 471, "text": "callback: It is a middleware function or a series/array of middleware functions. " }, { "code": null, "e": 588, "s": 554, "text": "Installation of express module: " }, { "code": null, "e": 692, "s": 588, "text": "You can visit the link to Install express module. You can install this package by using this command. " }, { "code": null, "e": 796, "s": 692, "text": "You can visit the link to Install express module. You can install this package by using this command. " }, { "code": null, "e": 816, "s": 796, "text": "npm install express" }, { "code": null, "e": 923, "s": 816, "text": "After installing express module, you can check your express version in command prompt using the command. " }, { "code": null, "e": 1030, "s": 923, "text": "After installing express module, you can check your express version in command prompt using the command. " }, { "code": null, "e": 1050, "s": 1030, "text": "npm version express" }, { "code": null, "e": 1187, "s": 1050, "text": "After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command. " }, { "code": null, "e": 1324, "s": 1187, "text": "After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command. 
" }, { "code": null, "e": 1338, "s": 1324, "text": "node index.js" }, { "code": null, "e": 1359, "s": 1338, "text": "Filename: index.js " }, { "code": null, "e": 1370, "s": 1359, "text": "javascript" }, { "code": "var express = require('express');var app = express();var PORT = 3000; app.delete('/', (req, res) => { res.send(\"DELETE Request Called\")}) app.listen(PORT, function(err){ if (err) console.log(err); console.log(\"Server listening on PORT\", PORT);});", "e": 1624, "s": 1370, "text": null }, { "code": null, "e": 1652, "s": 1624, "text": "Steps to run the program: " }, { "code": null, "e": 1697, "s": 1652, "text": "The project structure will look like this: " }, { "code": null, "e": 1742, "s": 1697, "text": "The project structure will look like this: " }, { "code": null, "e": 1813, "s": 1742, "text": "Make sure you have installed express module using following command: " }, { "code": null, "e": 1884, "s": 1813, "text": "Make sure you have installed express module using following command: " }, { "code": null, "e": 1904, "s": 1884, "text": "npm install express" }, { "code": null, "e": 1945, "s": 1904, "text": "Run index.js file using below command: " }, { "code": null, "e": 1986, "s": 1945, "text": "Run index.js file using below command: " }, { "code": null, "e": 2000, "s": 1986, "text": "node index.js" }, { "code": null, "e": 2010, "s": 2000, "text": "Output: " }, { "code": null, "e": 2020, "s": 2010, "text": "Output: " }, { "code": null, "e": 2050, "s": 2020, "text": "Server listening on PORT 3000" }, { "code": null, "e": 2142, "s": 2050, "text": " Now make DELETE request to http://localhost:3000/ and you will get the following output: " }, { "code": null, "e": 2235, "s": 2144, "text": "Now make DELETE request to http://localhost:3000/ and you will get the following output: " }, { "code": null, "e": 2257, "s": 2235, "text": "DELETE Request Called" }, { "code": null, "e": 2280, "s": 2263, "text": "arorakashish0911" }, { "code": null, "e": 2291, "s": 2280, "text": "Express.js" }, { "code": null, "e": 2299, "s": 2291, "text": "Node.js" }, { "code": null, "e": 2316, "s": 2299, "text": "Web Technologies" } ]
Python vs Other Programming Languages
24 Jul, 2020 Python is a general-purpose, high-level programming language developed by Guido van Rossum in 1991. It was designed with an emphasis on code readability, and its syntax allows programmers to express their concepts in fewer lines of code, which makes it the fastest-growing programming language in current times. Easy to code: Python is a high-level programming language that is easy to comprehend compared to languages like C, C#, JavaScript, Java and so forth; one can learn to code in Python in barely a few hours. Additionally, it is also a developer-friendly language. Platform Independent: Python programs can be developed and executed on numerous operating system frameworks. Python can be used on Linux, Windows, Macintosh, Solaris and some others. Object-Oriented Language: Python supports object-oriented programming and concepts such as classes, objects, encapsulation etc. Free and Open Source: Python is freely available at the official website. Since it is open source, it is available to the public, so one can download it, use it, as well as share it. GUI Programming Support: Graphical user interfaces can be made using modules such as PyQt5, PyQt4, wxPython or Tk in Python. High-level Language: Python is a high-level language. When one develops programs in Python, one does not need to remember the system architecture or manage the memory. Portable language: Python is a portable language; for instance, code written in Python for Windows can also run on other platforms such as Linux, Unix and Mac. Integrated and Interpreted Language: Python is an interpreted language, since Python code is executed line by line. Python is additionally an integrated language, since one can easily integrate Python with other languages like C, C++ and so on. Example of Python: print("GEEKSFORGEEKS") print('My first Python program') Output : GEEKSFORGEEKS My first Python program Python vs Ruby : Python is explicit and easy to read while Ruby can be hard to debug at times. Python-based apps are YouTube, Instagram, BitTorrent, etc., whereas Ruby-based apps are Twitter, GitHub, etc. Python has a web framework called Django whereas Ruby has a web framework called Ruby on Rails. Python enjoys much higher adoption rates among developers than Ruby. Usage of modules and better namespace handling are present in Python, whereas the usage of blocks is present in Ruby. Example of Ruby: puts "GEEKSFORGEEKS \n My first Ruby program" Output : GEEKSFORGEEKS My first Ruby program Python vs Golang : Python is a high-level programming language based on object-oriented programming whereas Golang is a procedural programming language based on concurrent programming. Python supports exceptions whereas Golang does not; instead of exceptions, Golang uses error values. Python is a dynamically typed language, so it uses an interpreter, whereas Go is a statically typed language, so it uses a compiler. Python supports inheritance whereas Golang does not support inheritance. Python is good for data analysis and computing, whereas Golang is useful for system programming. Example of Golang: package main import "fmt" func main() { fmt.Println("GEEKSFORGEEKS") fmt.Println("My first Golang program") } Output: GEEKSFORGEEKS My first Golang program Python vs PHP : Python is an object-oriented scripting language, whereas PHP is a server-side scripting language. Python is a general-purpose full-stack programming language, whereas PHP is extensively utilized for web development. In Python functional programming techniques are possible, whereas functional programming is not provided in PHP. Python code is generally more maintainable and easier to change, whereas PHP is not as maintainable. In Python, there is proper provision for exception handling, whereas PHP does not support exceptions appropriately. Example of PHP: <?php echo "Welcome to GeeksforGeeks\n"; echo "My first php program"; ?> Output: Welcome to GeeksforGeeks My first php program Python vs Node.js : Python is an object-oriented, high-level, dynamic and multipurpose programming language whereas Node.js is a server-side platform built on Google Chrome's V8 JavaScript engine. Python is appropriate for back-end applications, numerical computations and AI, whereas Node.js is better for web applications and website development. Python's reference interpreter is CPython (PyPy is a popular alternative), whereas Node.js executes JavaScript on the V8 engine. Python supports generators, which makes asynchronous code a lot less complex, whereas Node.js relies on callbacks; its event/callback-based programming model makes it process requests quickly. The greatest advantage of using Python is that developers need to write fewer lines of code, while Node.js is pure JavaScript, which can be a little slower to work with. Example of Node.js: var a = "GEEKSFORGEEKS"; console.log(typeof a); a = "My first Node.js program"; console.log(typeof a); Output: string string
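The generators point in the Node.js comparison is easiest to see in code. A minimal sketch in plain Python (standard library only):

# A generator produces values lazily, one at a time, instead of
# building the whole sequence in memory before anything runs.
def countdown(n):
    while n > 0:
        yield n   # pause here and hand back one value
        n -= 1

# The loop pulls values on demand: prints 3, then 2, then 1
for value in countdown(3):
    print(value)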
[ { "code": null, "e": 28, "s": 0, "text": "\n24 Jul, 2020" }, { "code": null, "e": 351, "s": 28, "text": "Python is a general-purpose, high level programming language developed by Guido van Rossum in 1991. It was structured with an accentuation on code comprehensibility, and its syntax allows programmers to express their concepts in fewer lines of code which makes it the fastest-growing programming language in current times." }, { "code": null, "e": 630, "s": 351, "text": "Easy to code: Python is a high level programming language as it is easy to comprehend as compared to other language like c, c#, Java script, Java and so forth, one can effortlessly learn and code in python barely in hours. Additionally, it is also a developer-friendly language." }, { "code": null, "e": 812, "s": 630, "text": "Platform Independent: Python programs can be developed and executed on numerous operating system framework. Python can be used on Linux, Windows, Macintosh, Solaris and some others." }, { "code": null, "e": 931, "s": 812, "text": "Object-Oriented Language: Python bolsters object oriented language and concepts of classes, objects encapsulation etc." }, { "code": null, "e": 1117, "s": 931, "text": "Free and Open Source: Python language is freely available at the official website. Since, it is open-source, available to the public. So one can download it, use it as well as share it." }, { "code": null, "e": 1244, "s": 1117, "text": "GUI Programming Support: Graphical Users interfaces can be made using a module such as PyQt5, PyQt4, wxPython or Tk in python." }, { "code": null, "e": 1416, "s": 1244, "text": "High-level Language: Python is a high-level language. When one develops programs in python, he/she didn’t need to memorize the system architecture or to manage the memory." }, { "code": null, "e": 1617, "s": 1416, "text": "Portable language: Python is a portable language, for instance, on the off chance that the code written in python for windows can also run on different other platforms such as Linux, Unix and Mac etc." }, { "code": null, "e": 1896, "s": 1617, "text": "Integrated and Interpreted Language: Python is an Interpreted Language, since python code is executed line by line at a time. Python is additionally an Integrated language since one can without much of a stretch, can integrate python with another language like C, C++ and so on." }, { "code": null, "e": 1915, "s": 1896, "text": "Example of Python:" }, { "code": null, "e": 1972, "s": 1915, "text": "print(\"GEEKSFORGEEKS\")\nprint('My first Python program')\n" }, { "code": null, "e": 1981, "s": 1972, "text": "Output :" }, { "code": null, "e": 2020, "s": 1981, "text": "GEEKSFORGEEKS\nMy first Python program\n" }, { "code": null, "e": 2037, "s": 2020, "text": "Python vs Ruby :" }, { "code": null, "e": 2115, "s": 2037, "text": "Python is explicit and easy to read while Ruby can be hard to debug at times." }, { "code": null, "e": 2226, "s": 2115, "text": "Python-based apps are YouTube, Instagram, Bit torrent, etc., whereas Ruby based apps are Twitter, Github, etc." }, { "code": null, "e": 2322, "s": 2226, "text": "Python has a web framework called Django whereas Ruby has a Web framework called Ruby on Rails." }, { "code": null, "e": 2391, "s": 2322, "text": "Python enjoys much higher adoption rates among developers than Ruby." }, { "code": null, "e": 2510, "s": 2391, "text": "Usage of modules and better namespace handling are present in Python, whereas the usage of blocks are present in Ruby." 
}, { "code": null, "e": 2527, "s": 2510, "text": "Example of Ruby:" }, { "code": null, "e": 2574, "s": 2527, "text": "puts \"GEEKSFORGEEKS \\n My first Ruby program\"\n" }, { "code": null, "e": 2583, "s": 2574, "text": "Output :" }, { "code": null, "e": 2620, "s": 2583, "text": "GEEKSFORGEEKS\nMy first Ruby program\n" }, { "code": null, "e": 2639, "s": 2620, "text": "Python vs Golang :" }, { "code": null, "e": 2805, "s": 2639, "text": "Python is a high-level programming language based on object-oriented programming whereas Golang is a procedural programming language based on concurrent programming." }, { "code": null, "e": 2914, "s": 2805, "text": "Python bolsters exceptions whereas Golang doesn’t support exemptions. Instead of exception Golang has error." }, { "code": null, "e": 3039, "s": 2914, "text": "Python is a dynamically typed language, it uses interpreter whereas Go is a statically typed language. So, it uses compiler." }, { "code": null, "e": 3112, "s": 3039, "text": "Python supports inheritance whereas Golang does not support inheritance." }, { "code": null, "e": 3209, "s": 3112, "text": "Python is good for data analysis and computing, whereas Golang is useful for system programming." }, { "code": null, "e": 3228, "s": 3209, "text": "Example of Golang:" }, { "code": null, "e": 3353, "s": 3228, "text": "package main \nimport \"fmt\"\nfunc main() {\n fmt.Println(\"GEEKSFORGEEKS\") \n fmt.Println(\"My first Golang program\") \n}\n" }, { "code": null, "e": 3361, "s": 3353, "text": "Output:" }, { "code": null, "e": 3401, "s": 3361, "text": "GEEKSFORGEEKS\nMy first Golang program \n" }, { "code": null, "e": 3417, "s": 3401, "text": "Python vs PHP :" }, { "code": null, "e": 3515, "s": 3417, "text": "Python is an object-oriented scripting language, whereas PHP is a server-side scripting language." }, { "code": null, "e": 3634, "s": 3515, "text": "Python is a general-purpose full-stack programming language, whereas PHP is extensively utilized for web development.." }, { "code": null, "e": 3748, "s": 3634, "text": "In Python functional programming techniques are possible, whereas functional programming is not provided in PHP.." }, { "code": null, "e": 3847, "s": 3748, "text": "The maintainability and change procurement of Python is good whereas PHP is not much maintainable." }, { "code": null, "e": 3961, "s": 3847, "text": "In Python, there is proper provision for exception handling whereas PHP doesn’t support exception appropriately.." }, { "code": null, "e": 3977, "s": 3961, "text": "Example of PHP:" }, { "code": null, "e": 4053, "s": 3977, "text": "?php \necho \"Welcome to GeeksforGeeks\\n\"; \necho \"My first php program\";\n?\n" }, { "code": null, "e": 4061, "s": 4053, "text": "Output:" }, { "code": null, "e": 4097, "s": 4061, "text": "GEEKSFORGEEKS\nMy First PHP Program\n" }, { "code": null, "e": 4117, "s": 4097, "text": "Python vs Node.js :" }, { "code": null, "e": 4289, "s": 4117, "text": "Python is an object-oriented, high level, dynamic and multipurpose programming language whereas Node.js is a server-side platform built on Google Chrome Javascript Engine." }, { "code": null, "e": 4441, "s": 4289, "text": "Python is appropriate for back-end applications, numerical computations and AI, whereas Node.js is better for web applications and website development." }, { "code": null, "e": 4528, "s": 4441, "text": "Python uses PyPy as an interpreter, whereas Node.js utilize javascript as interpreter." 
}, { "code": null, "e": 4702, "s": 4528, "text": "Python underpins generators which makes it a lot less complex though Node.js bolsters callback. Its programming is based on the event/ callback that makes it process faster." }, { "code": null, "e": 4868, "s": 4702, "text": "The greatest bit of leeway of utilizing Python is that developers need to compose less lines of code while Node.js is unadulterated JavaScript, which is little slow." }, { "code": null, "e": 4888, "s": 4868, "text": "Example of Node.js:" }, { "code": null, "e": 4996, "s": 4888, "text": "var a =\"GEEKSFORGEEKS\" ; \nconsole.log(typeof a); \na = \"My first Node.js program\"; \nconsole.log(typeof a);\n" }, { "code": null, "e": 5004, "s": 4996, "text": "Output:" }, { "code": null, "e": 5020, "s": 5004, "text": "string\nstring \n" }, { "code": null, "e": 5039, "s": 5020, "text": "Difference Between" }, { "code": null, "e": 5051, "s": 5039, "text": "Go Language" }, { "code": null, "e": 5059, "s": 5051, "text": "Node.js" }, { "code": null, "e": 5063, "s": 5059, "text": "PHP" }, { "code": null, "e": 5070, "s": 5063, "text": "Python" }, { "code": null, "e": 5075, "s": 5070, "text": "Ruby" }, { "code": null, "e": 5079, "s": 5075, "text": "PHP" } ]
Node.js fs.existsSync() Method
12 Oct, 2021 The fs.existsSync() method is used to synchronously check if a file already exists in the given path or not. It returns a boolean value which indicates the presence of a file. Syntax: fs.existsSync( path ) Parameters: This method accepts a single parameter as mentioned above and described below: path: It holds the path of the file that has to be checked. It can be a String, Buffer or URL. Return Value: It returns a boolean value, i.e., true if the file exists, otherwise it returns false. Below programs illustrate the fs.existsSync() method in Node.js: Example 1: // Node.js program to demonstrate the // fs.existsSync() method // Import the filesystem module const fs = require('fs'); // Get the current filenames // in the directory getCurrentFilenames(); let fileExists = fs.existsSync('hello.txt'); console.log("hello.txt exists:", fileExists); fileExists = fs.existsSync('world.txt'); console.log("world.txt exists:", fileExists); // Function to get current filenames // in directory function getCurrentFilenames() { console.log("\nCurrent filenames:"); fs.readdirSync(__dirname).forEach(file => { console.log(file); }); console.log("\n"); } Output: Current filenames: hello.txt index.js package.json hello.txt exists: true world.txt exists: false Example 2: // Node.js program to demonstrate the // fs.existsSync() method // Import the filesystem module const fs = require('fs'); // Get the current filenames // in the directory getCurrentFilenames(); // Check if the file exists let fileExists = fs.existsSync('hello.txt'); console.log("hello.txt exists:", fileExists); // If the file does not exist // create it if (!fileExists) { console.log("Creating the file") fs.writeFileSync("hello.txt", "Hello World"); } // Get the current filenames // in the directory getCurrentFilenames(); // Check if the file exists again fileExists = fs.existsSync('hello.txt'); console.log("hello.txt exists:", fileExists); // Function to get current filenames // in directory function getCurrentFilenames() { console.log("\nCurrent filenames:"); fs.readdirSync(__dirname).forEach(file => { console.log(file); }); console.log("\n"); } Output: Current filenames: hello.txt index.js package.json hello.txt exists: true Current filenames: hello.txt index.js package.json hello.txt exists: true Reference: https://nodejs.org/api/fs.html#fs_fs_existssync_path
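For readers coming from Python, the equivalent synchronous existence check is pathlib.Path.exists(). A minimal sketch (standard library only; hello.txt is just a sample path):

from pathlib import Path

# Path.exists() plays the role of fs.existsSync(): it returns
# True or False and does not raise if the file is absent.
file_path = Path("hello.txt")
print("hello.txt exists:", file_path.exists())

# Create the file only when it is missing, then check again.
if not file_path.exists():
    file_path.write_text("Hello World")
print("hello.txt exists:", file_path.exists())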
[ { "code": null, "e": 28, "s": 0, "text": "\n12 Oct, 2021" }, { "code": null, "e": 204, "s": 28, "text": "The fs.existsSync() method is used to synchronously check if a file already exists in the given path or not. It returns a boolean value which indicates the presence of a file." }, { "code": null, "e": 212, "s": 204, "text": "Syntax:" }, { "code": null, "e": 234, "s": 212, "text": "fs.existsSync( path )" }, { "code": null, "e": 325, "s": 234, "text": "Parameters: This method accepts a single parameter as mentioned above and described below:" }, { "code": null, "e": 420, "s": 325, "text": "path: It holds the path of the file that has to be checked. It can be a String, Buffer or URL." }, { "code": null, "e": 514, "s": 420, "text": "Return Value: It returns a boolean value i.e true if the file exists otherwise returns false." }, { "code": null, "e": 579, "s": 514, "text": "Below programs illustrate the fs.existsSync() method in Node.js:" }, { "code": null, "e": 590, "s": 579, "text": "Example 1:" }, { "code": "// Node.js program to demonstrate the// fs.existsSync() method // Import the filesystem moduleconst fs = require('fs'); // Get the current filenames// in the directorygetCurrentFilenames(); let fileExists = fs.existsSync('hello.txt');console.log(\"hello.txt exists:\", fileExists); fileExists = fs.existsSync('world.txt');console.log(\"world.txt exists:\", fileExists); // Function to get current filenames// in directoryfunction getCurrentFilenames() { console.log(\"\\nCurrent filenames:\"); fs.readdirSync(__dirname).forEach(file => { console.log(file); }); console.log(\"\\n\");}", "e": 1176, "s": 590, "text": null }, { "code": null, "e": 1184, "s": 1176, "text": "Output:" }, { "code": null, "e": 1284, "s": 1184, "text": "Current filenames:\nhello.txt\nindex.js\npackage.json\n\n\nhello.txt exists: true\nworld.txt exists: false" }, { "code": null, "e": 1295, "s": 1284, "text": "Example 2:" }, { "code": "// Node.js program to demonstrate the// fs.existsSync() method // Import the filesystem moduleconst fs = require('fs'); // Get the current filenames// in the directorygetCurrentFilenames(); // Check if the file existslet fileExists = fs.existsSync('hello.txt');console.log(\"hello.txt exists:\", fileExists); // If the file does not exist// create itif (!fileExists) { console.log(\"Creating the file\") fs.writeFileSync(\"hello.txt\", \"Hello World\");} // Get the current filenames// in the directorygetCurrentFilenames(); // Check if the file exists againfileExists = fs.existsSync('hello.txt');console.log(\"hello.txt exists:\", fileExists); // Function to get current filenames// in directoryfunction getCurrentFilenames() { console.log(\"\\nCurrent filenames:\"); fs.readdirSync(__dirname).forEach(file => { console.log(file); }); console.log(\"\\n\");}", "e": 2155, "s": 1295, "text": null }, { "code": null, "e": 2163, "s": 2155, "text": "Output:" }, { "code": null, "e": 2316, "s": 2163, "text": "Current filenames:\nhello.txt\nindex.js\npackage.json\n\n\nhello.txt exists: true\n\nCurrent filenames:\nhello.txt\nindex.js\npackage.json\n\n\nhello.txt exists: true" }, { "code": null, "e": 2380, "s": 2316, "text": "Reference: https://nodejs.org/api/fs.html#fs_fs_existssync_path" }, { "code": null, "e": 2398, "s": 2380, "text": "Node.js-fs-module" }, { "code": null, "e": 2406, "s": 2398, "text": "Node.js" }, { "code": null, "e": 2423, "s": 2406, "text": "Web Technologies" } ]
Playing Star Wars in Command Prompt
28 Feb, 2021 Most of us "Geeks" have watched the epic Star Wars movie countless times. You are here, and that proves that you are one too. So in this article let's do some geeky stuff, and as the computer science enthusiasts that we are, let's keep it inside the domain of computer science. So, what if we could play the entire Star Wars movie in Command Prompt (or terminal/bash)? In this article, we are going to play the Star Wars movie in Command Prompt. This is done with Telnet, a network protocol used for communication via a command-line interface. Telnet is a connection method that allows character-based terminals to communicate with a remote server in text-based, command-oriented terminal sessions. In earlier versions of Windows, Telnet came built in, but in today's Windows we need to activate Telnet first, and that's quite simple. For a Windows machine, you need to activate Telnet. To do so, follow the below steps: Step 1: Simply type "Turn Windows features on or off" in the search bar. Step 2: Find Telnet, mark the checkbox and click OK. It's pretty simple for macOS. Just use the below command in the bash/terminal and hit enter: telnet towel.blinkenlights.nl On Linux, open the terminal, type the below command to install the telnet client and hit enter: sudo apt install telnet Now follow the below steps to play Star Wars in the terminal: Step 1: Open Command Prompt and type the following command: telnet As you hit enter, you enter the Microsoft Telnet portal. Step 2: Type 'o' and hit enter. Step 3: When prompted for the host, type towel.blinkenlights.nl and hit enter. As we hit enter the movie starts. Try it on your system and enjoy.
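If a Telnet client is not available, the same stream can be watched with a few lines of Python. This is a minimal sketch using only the standard socket module; it assumes the towel.blinkenlights.nl server is still online and, like most ASCII-art telnet hosts, streams plain text without requiring telnet option negotiation:

import socket

# Open a plain TCP connection to the telnet port (23) and
# print whatever the server streams until it closes.
with socket.create_connection(("towel.blinkenlights.nl", 23)) as conn:
    while True:
        chunk = conn.recv(4096)
        if not chunk:          # server closed the connection
            break
        print(chunk.decode("ascii", errors="replace"), end="")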
[ { "code": null, "e": 52, "s": 24, "text": "\n28 Feb, 2021" }, { "code": null, "e": 411, "s": 52, "text": "Most of us “Geeks” have watched The epic Star Wars movie countless times. You are here, that proves that you are one too. So in this article let’s do some geeky stuff. And as computer science enthusiasts that we are, let’s keep it inside the domain of computer science. So, what if we can play the entire Star Wars movie in Command Prompt (or terminal/bash)?" }, { "code": null, "e": 799, "s": 411, "text": "In this article, we are going to play the Star Wars movie in Command Prompt. This task is done by Telnet, which is a network protocol. It is used for communication purposes via command-line interface. For this, we will be using Telnet. Telnet is a connection method that allows character-based terminals to communicate to a remote server in text-based command-oriented terminal sessions." }, { "code": null, "e": 1027, "s": 799, "text": "In the previous versions of windows, it was given inbuilt but in today’s windows we need to first activate Telnet and that’s quite simple. For a Windows machine, you need to activate the Telnet. To do so follow the below steps:" }, { "code": null, "e": 1102, "s": 1027, "text": "Step 1: Simply type “Turn Windows features on or off” in the search bar. " }, { "code": null, "e": 1159, "s": 1102, "text": "Step 2: Find Telnet and Mark the checkbox and click ok. " }, { "code": null, "e": 1252, "s": 1159, "text": "It’s pretty simple for macOS. Just use the below command in the bash/terminal and hit enter:" }, { "code": null, "e": 1282, "s": 1252, "text": "telnet towel.blinkenlights.nl" }, { "code": null, "e": 1376, "s": 1282, "text": "Open terminal and type the below command for the installation of telnet server and hit enter:" }, { "code": null, "e": 1400, "s": 1376, "text": "sudo apt install telnet" }, { "code": null, "e": 1462, "s": 1400, "text": "Now follow the below steps to play star wars in the terminal:" }, { "code": null, "e": 1523, "s": 1462, "text": "Step 1: Open Command Prompt and type the following commands:" }, { "code": null, "e": 1531, "s": 1523, "text": "Telnet " }, { "code": null, "e": 1584, "s": 1531, "text": "As you hit enter, you enter Microsoft telnet portal." }, { "code": null, "e": 1616, "s": 1584, "text": "Step 2: Type ‘o’ and hit enter." }, { "code": null, "e": 1665, "s": 1616, "text": "Step 3: Now use the below command and hit enter." }, { "code": null, "e": 1695, "s": 1665, "text": "telnet towel.blinkenlights.nl" }, { "code": null, "e": 1759, "s": 1695, "text": "As we hit enter the movie starts. Try in your system and enjoy." }, { "code": null, "e": 1783, "s": 1759, "text": "Technical Scripter 2020" }, { "code": null, "e": 1802, "s": 1783, "text": "Technical Scripter" } ]
How to calculate Maximum Segment Size in TCP?
05 Oct, 2021 Maximum Segment Size refers to the size of the largest segment that the local host accepts within a single packet. It denotes the largest amount of data that the host can accept in a single TCP segment. While establishing the TCP connection, both sender and receiver indicate the Maximum Segment Size they can accept. While transmitting packets over the TCP connection, the sender reduces the size of the packets according to the MSS received. It is useful for devices with small amounts of memory, because it allows a device to set a limit on the size of the packets it will receive. The network driver knows the Maximum Transmission Unit of the directly attached network. The Maximum Transmission Unit is the largest size frame that can be transmitted across the data link layer. IP asks the network driver for the value of the Maximum Transmission Unit and uses it in the following relation to calculate the Maximum Datagram Data Size: MDDS = MTU - IP_HL where, MDDS = Maximum Datagram Data Size MTU = Maximum Transmission Unit IP_HL = IP Header Length Maximum Datagram Data Size refers to the largest amount of data that is accepted in an IP packet. Now, TCP asks IP for the value of the Maximum Datagram Data Size and uses it in the following relation to calculate the Maximum Segment Size: MSS = MDDS - TCP_HL where, MSS = Maximum Segment Size MDDS = Maximum Datagram Data Size TCP_HL = TCP Header Length Example: Suppose the Maximum Transmission Unit allows a payload of 1500B at the data link layer (with the frame's own header and trailer marking the start and end of the packet flow), and the size of both the TCP and IP headers is 20B each. So, we can find the Maximum Segment Size by following the given steps: Payload of 1500B is received by the network layer, which is divided as a 1480B Maximum Datagram Data Size load and a 20B IP header. It means an IP packet transmitted through the network layer can carry up to 1480B of information and has a 20B header to store information like IP version, source address, destination address and time-to-live for the packet. Payload of 1480B is received by the transport layer, which is divided as a 1460B Maximum Segment Size and a 20B TCP header. It means a TCP packet transmitted through the transport layer can carry up to 1460B of information and has a 20B header to store information like source port, destination port, sequence number, acknowledgement number, header length, checksum, window size, urgent pointer and reserved bits. Hence, the Maximum Segment Size will be 1460B, i.e., 1460B of data can be received in a single TCP packet. Maximum Segment Size must be chosen by considering the following performance issues: Overhead Management: If MSS is too low, it will lead to inefficient use of bandwidth, as the amount of data carried in each segment would be comparable to the size of the headers, which is not efficient. IP Fragmentation: If MSS is too large, it will lead to large IP datagrams which would require fragmentation before they can be transmitted. Fragmentation reduces efficiency and increases the chances of part of a TCP segment being lost, resulting in the entire segment needing to be retransmitted.
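The two relations above chain together directly; a minimal sketch in plain Python, using the header sizes from the worked example as defaults:

# Derive the Maximum Segment Size from the MTU using
# MDDS = MTU - IP_HL and MSS = MDDS - TCP_HL.
def max_segment_size(mtu, ip_header=20, tcp_header=20):
    mdds = mtu - ip_header      # Maximum Datagram Data Size
    return mdds - tcp_header    # Maximum Segment Size

# Reproduces the worked example: 1500 - 20 - 20 = 1460 bytes
print(max_segment_size(1500))   # 1460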
[ { "code": null, "e": 28, "s": 0, "text": "\n05 Oct, 2021" }, { "code": null, "e": 213, "s": 28, "text": "Maximum Segment Size refers to size of the largest segment that local host accepts within a single packet. It denotes largest amount of data that host can accept in single TCP segment." }, { "code": null, "e": 577, "s": 213, "text": "For establishing the TCP connection, both sender and receiver indicates Maximum Segment Size they can accept. While transmitting the packets over TCP connection sender reduce the size of packet according to MSS received. It is useful for devices with small amounts of memory, because it allows the device to set a limit on the size of the packets it will receive." }, { "code": null, "e": 896, "s": 577, "text": "Network Driver knows Maximum Transmission Unit of directly attached network. Maximum Transmission Unit is largest size frame that can be transmitted across data link layer. IP asks the value of Maximum Transmission Unit from Network Driver and use it in the following relation to calculate Maximum Datagram Data Size :" }, { "code": null, "e": 1009, "s": 896, "text": "MDDS = MTU - IP_HL\n\nwhere,\nMDDS = Maximum Datagram Size\nMTU = Maximum Transmission Unit\nIP_HL = IP Header Length" }, { "code": null, "e": 1100, "s": 1009, "text": "Maximum Datagram Data Size refers to largest amount of data that is accepted in IP packet." }, { "code": null, "e": 1235, "s": 1100, "text": "Now, TCP asks the value of Maximum Datagram Data Size from IP and use it in the following relation to calculate Maximum Segment Size :" }, { "code": null, "e": 1351, "s": 1235, "text": "MSS = MDDS - TCP_HL\n\nwhere,\nMSS = Maximum Segment Size\nMDDS = Maximum Datagram Data Size\nTCP_HL = TCP Header Length" }, { "code": null, "e": 1659, "s": 1351, "text": "Example : Suppose Maximum Transmission Unit have payload of 1500B, header which contain information about the amount of packets and tail denoting the end of packet flow in data link layer and size of both TCP and IP header is 20B each. So, we can find the Maximum Segment Size by following the given steps :" }, { "code": null, "e": 2404, "s": 1659, "text": "Payload of 1500B is received by network layer which is divided as 1480B Maximum Datagram Data Size load and 20B IP header. It means IP packet transmitted through network layer can have information stored up to 1480B and have 20B header to store information like IP version, source address, destination address and time-to-live about the packet.Payload of 1480B is received by transport layer which is divided as 1460B Maximum Segment Size and 20B TCP header. It means TCP packet transmitted through transport layer can have information stored up to 1460B and have 20B header to store information like source port, destination port, sequence number, acknowledgement number, header length, checksum, window size, urgent pointer and reserved bits." }, { "code": null, "e": 2749, "s": 2404, "text": "Payload of 1500B is received by network layer which is divided as 1480B Maximum Datagram Data Size load and 20B IP header. It means IP packet transmitted through network layer can have information stored up to 1480B and have 20B header to store information like IP version, source address, destination address and time-to-live about the packet." }, { "code": null, "e": 3150, "s": 2749, "text": "Payload of 1480B is received by transport layer which is divided as 1460B Maximum Segment Size and 20B TCP header. 
It means TCP packet transmitted through transport layer can have information stored up to 1460B and have 20B header to store information like source port, destination port, sequence number, acknowledgement number, header length, checksum, window size, urgent pointer and reserved bits." }, { "code": null, "e": 3248, "s": 3150, "text": "Hence, Maximum Segment Size will be 1460B i.e. 1460B data can be received in a single TCP packet." }, { "code": null, "e": 3330, "s": 3248, "text": "Maximum Segment Size must be chosen by considering following performance issues :" }, { "code": null, "e": 3820, "s": 3330, "text": "Overhead Management : If MSS is too low then it will lead to inefficient use of bandwidth as amount of data stored in the segment would be comparative to headers which is not efficient.IP Fragmentation : If MSS is too large then it will lead to large IP datagrams which would require fragmentation before they can be transmitted. Fragmentation will reduce efficiency and increase the chances of part of a TCP segment being lost, resulting in the entire segment needing to be retransmitted." }, { "code": null, "e": 4006, "s": 3820, "text": "Overhead Management : If MSS is too low then it will lead to inefficient use of bandwidth as amount of data stored in the segment would be comparative to headers which is not efficient." }, { "code": null, "e": 4311, "s": 4006, "text": "IP Fragmentation : If MSS is too large then it will lead to large IP datagrams which would require fragmentation before they can be transmitted. Fragmentation will reduce efficiency and increase the chances of part of a TCP segment being lost, resulting in the entire segment needing to be retransmitted." }, { "code": null, "e": 4318, "s": 4311, "text": "Picked" }, { "code": null, "e": 4336, "s": 4318, "text": "Computer Networks" }, { "code": null, "e": 4344, "s": 4336, "text": "GATE CS" }, { "code": null, "e": 4362, "s": 4344, "text": "Computer Networks" }, { "code": null, "e": 4460, "s": 4362, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 4490, "s": 4460, "text": "Wireless Application Protocol" }, { "code": null, "e": 4530, "s": 4490, "text": "Mobile Internet Protocol (or Mobile IP)" }, { "code": null, "e": 4560, "s": 4530, "text": "GSM in Wireless Communication" }, { "code": null, "e": 4586, "s": 4560, "text": "Secure Socket Layer (SSL)" }, { "code": null, "e": 4621, "s": 4586, "text": "Difference between MANET and VANET" }, { "code": null, "e": 4648, "s": 4621, "text": "Types of Operating Systems" }, { "code": null, "e": 4672, "s": 4648, "text": "ACID Properties in DBMS" }, { "code": null, "e": 4721, "s": 4672, "text": "Page Replacement Algorithms in Operating Systems" }, { "code": null, "e": 4742, "s": 4721, "text": "Normal Forms in DBMS" } ]
Python regex | Check whether the input is Floating point number or not
29 Dec, 2020 Prerequisite: Regular expression in Python Given an input, write a Python program to check whether the given input is a floating point number or not. Examples: Input: 1.20 Output: Floating point number Input: -2.356 Output: Floating point number Input: 0.2 Output: Floating point number Input: -3 Output: Not a Floating point number In this program, we are using the search() method of the re module. re.search(): This method either returns None (if the pattern doesn't match), or a re.MatchObject that contains information about the matching part of the string. This method stops after the first match, so it is best suited for testing a regular expression rather than extracting data. Let's see the Python program for this: # Python program to check whether input is # a floating point number or not # import re module # re module provides support # for regular expressions import re # Make a regular expression for # identifying a floating point number regex = '[+-]?[0-9]+\.[0-9]+' # Define a function to # check for a floating point number def check(floatnum): # pass the regular expression # and the string to the search() method if(re.search(regex, floatnum)): print("Floating point number") else: print("Not a Floating point number") # Driver Code if __name__ == '__main__' : # Enter the floating point number floatnum = "1.20" # calling the check function check(floatnum) floatnum = "-2.356" check(floatnum) floatnum = "0.2" check(floatnum) floatnum = "-3" check(floatnum) Output: Floating point number Floating point number Floating point number Not a Floating point number
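One caveat: the pattern above has no ^ and $ anchors, so re.search will also accept strings that merely contain a float, such as "abc1.20xyz". re.fullmatch requires the entire string to match and avoids this; a minimal sketch:

import re

# fullmatch succeeds only if the whole string is a signed decimal
FLOAT_RE = re.compile(r'[+-]?[0-9]+\.[0-9]+')

def is_float(text):
    return bool(FLOAT_RE.fullmatch(text))

print(is_float("1.20"))      # True
print(is_float("-2.356"))    # True
print(is_float("-3"))        # False (no fractional part)
print(is_float("abc1.20x"))  # False (extra characters around the number)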
[ { "code": null, "e": 28, "s": 0, "text": "\n29 Dec, 2020" }, { "code": null, "e": 71, "s": 28, "text": "Prerequisite: Regular expression in Python" }, { "code": null, "e": 176, "s": 71, "text": "Given an input, write a Python program to check whether the given Input is Floating point number or not." }, { "code": null, "e": 186, "s": 176, "text": "Examples:" }, { "code": null, "e": 364, "s": 186, "text": "Input: 1.20\nOutput: Floating point number\n\nInput: -2.356\nOutput: Floating point number\n\nInput: 0.2\nOutput: Floating point number\n\nInput: -3\nOutput: Not a Floating point number\n" }, { "code": null, "e": 424, "s": 364, "text": "In this program, we are using search() method of re module." }, { "code": null, "e": 709, "s": 424, "text": "re.search() : This method either returns None (if the pattern doesn’t match), or re.MatchObject that contains information about the matching part of the string. This method stops after the first match, so this is best suited for testing a regular expression more than extracting data." }, { "code": null, "e": 749, "s": 709, "text": "Let’s see the Python program for this :" }, { "code": "# Python program to check input is# Floating point number or not # import re module # re module provides support# for regular expressionsimport re # Make a regular expression for# identifying Floating point number regex = '[+-]?[0-9]+\\.[0-9]+' # Define a function to# check Floating point number def check(floatnum): # pass the regular expression # and the string in search() method if(re.search(regex, floatnum)): print(\"Floating point number\") else: print(\"Not a Floating point number\") # Driver Code if __name__ == '__main__' : # Enter the floating point number floatnum = \"1.20\" # calling run function check(floatnum) floatnum = \"-2.356\" check(floatnum) floatnum = \"0.2\" check(floatnum) floatnum = \"-3\" check(floatnum)", "e": 1578, "s": 749, "text": null }, { "code": null, "e": 1673, "s": 1578, "text": "Floating point number\nFloating point number\nFloating point number\nNot a Floating point number\n" }, { "code": null, "e": 1695, "s": 1673, "text": "Python Regex-programs" }, { "code": null, "e": 1708, "s": 1695, "text": "python-regex" }, { "code": null, "e": 1715, "s": 1708, "text": "Python" }, { "code": null, "e": 1731, "s": 1715, "text": "Python Programs" }, { "code": null, "e": 1829, "s": 1731, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1847, "s": 1829, "text": "Python Dictionary" }, { "code": null, "e": 1869, "s": 1847, "text": "Enumerate() in Python" }, { "code": null, "e": 1911, "s": 1869, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 1946, "s": 1911, "text": "Read a file line by line in Python" }, { "code": null, "e": 1978, "s": 1946, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 2021, "s": 1978, "text": "Python program to convert a list to string" }, { "code": null, "e": 2043, "s": 2021, "text": "Defaultdict in Python" }, { "code": null, "e": 2082, "s": 2043, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 2120, "s": 2082, "text": "Python | Convert a list to dictionary" } ]
How to find max memory, free memory and total memory in Java?
07 May, 2021 Although Java provides automatic garbage collection, sometimes you will want to know how large the object heap is and how much of it is left. This information can be used to check the efficiency of code and to estimate approximately how many more objects of a certain type can be instantiated. To obtain these values, we use the totalMemory() and freeMemory() methods. As we know, Java's garbage collector runs periodically to recycle unused objects. We can invoke the garbage collector on demand by calling the gc() method. A good thing to try is to call gc() and then call freeMemory(). Methods: void gc(): Runs the garbage collector. Calling this method suggests that the Java virtual machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the virtual machine has made its best effort to recycle all discarded objects. Syntax: public void gc() Returns: NA. Exception: NA. long freeMemory(): This method returns the amount of free memory in the Java Virtual Machine. Calling the gc method may result in increasing the value returned by freeMemory. Syntax: public long freeMemory() Returns: an approximation to the total amount of memory currently available for future allocated objects, measured in bytes. Exception: NA. long totalMemory(): This method returns the total amount of memory in the Java virtual machine. The value returned by this method may vary over time, depending on the host environment. Syntax: public long totalMemory() Returns: the total amount of memory currently available for current and future objects, measured in bytes. Exception: NA. // Java code illustrating gc(), freeMemory() // and totalMemory() methods class memoryDemo { public static void main(String arg[]) { Runtime gfg = Runtime.getRuntime(); long memory1, memory2; Integer integer[] = new Integer[1000]; // checking the total memory System.out.println("Total memory is: " + gfg.totalMemory()); // checking free memory memory1 = gfg.freeMemory(); System.out.println("Initial free memory: " + memory1); // calling the garbage collector on demand gfg.gc(); memory1 = gfg.freeMemory(); System.out.println("Free memory after garbage " + "collection: " + memory1); // allocating integers for (int i = 0; i < 1000; i++) integer[i] = new Integer(i); memory2 = gfg.freeMemory(); System.out.println("Free memory after allocation: " + memory2); System.out.println("Memory used by allocation: " + (memory1 - memory2)); // discard integers for (int i = 0; i < 1000; i++) integer[i] = null; gfg.gc(); memory2 = gfg.freeMemory(); System.out.println("Free memory after " + "collecting discarded Integers: " + memory2); }} Output: Total memory is: 128974848 Initial free memory: 126929976 Free memory after garbage collection: 128632384 Free memory after allocation: 127950744 Memory used by allocation: 681640 Free memory after collecting discarded Integers: 128643696 This article is contributed by Abhishek Verma (maverick).
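For comparison, Python's standard library offers a similar before-and-after measurement via tracemalloc. This is a minimal sketch, not an exact equivalent of the Runtime methods above; exact numbers will vary by interpreter and platform:

import tracemalloc

tracemalloc.start()

# Snapshot traced usage, allocate some objects, snapshot again.
before, _ = tracemalloc.get_traced_memory()
data = [object() for _ in range(1000)]
after, peak = tracemalloc.get_traced_memory()

print("Bytes used by allocation:", after - before)
print("Peak traced usage:", peak)

del data
tracemalloc.stop()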
[ { "code": null, "e": 54, "s": 26, "text": "\n07 May, 2021" }, { "code": null, "e": 419, "s": 54, "text": "Although Java provides automatic garbage collection, sometimes you will want to know how large the object heap is and how much of it is left. This information can be used to check the efficiency of code and to check approximately how many more objects of a certain type can be instantiated. To obtain these values, we use the totalMemory() and freeMemory methods. " }, { "code": null, "e": 632, "s": 419, "text": "As we know Java’s garbage collector runs periodically to recycle unused objects. We can call garbage collector on demand by calling the gc() method. A good thing to try is to call gc() and then call freeMemory()." }, { "code": null, "e": 642, "s": 632, "text": "Methods: " }, { "code": null, "e": 994, "s": 642, "text": "void gc(): Runs the garbage collector. Calling this method suggests that the Java virtual machine expand effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the virtual machine has made its best effort to recycle all discarded objects. Syntax: " }, { "code": null, "e": 1039, "s": 994, "text": "public void gc()\nReturns: NA.\nException: NA." }, { "code": null, "e": 1222, "s": 1039, "text": "long freeMemory(): This method returns the amount of free memory in the Java Virtual Machine. Calling the gc method may result in increasing the value returned by freeMemory.Syntax: " }, { "code": null, "e": 1387, "s": 1222, "text": "public long freeMemory()\nReturns: an approximation to the total\namount of memory currently available for\nfuture allocated objects, measured in bytes.\nException: NA." }, { "code": null, "e": 1581, "s": 1387, "text": "long totalMemory(): This method returns the total amount of memory in the Java virtual machine. The value returned by this method may vary over time, depending on the host environment. Syntax: " }, { "code": null, "e": 1731, "s": 1581, "text": "public long totalMemory()\nReturns: the total amount of memory \ncurrently available for current and future \nobjects, measured in bytes.\nException: NA." 
}, { "code": null, "e": 1736, "s": 1731, "text": "Java" }, { "code": "// Java code illustrating gc(), freeMemory()// and totalMemory() methodsclass memoryDemo{ public static void main(String arg[]) { Runtime gfg = Runtime.getRuntime(); long memory1, memory2; Integer integer[] = new Integer[1000]; // checking the total memory System.out.println(\"Total memory is: \" + gfg.totalMemory()); // checking free memory memory1 = gfg.freeMemory(); System.out.println(\"Initial free memory: \" + memory1); // calling the garbage collector on demand gfg.gc(); memory1 = gfg.freeMemory(); System.out.println(\"Free memory after garbage \" + \"collection: \" + memory1); // allocating integers for (int i = 0; i < 1000; i++) integer[i] = new Integer(i); memory2 = gfg.freeMemory(); System.out.println(\"Free memory after allocation: \" + memory2); System.out.println(\"Memory used by allocation: \" + (memory1 - memory2)); // discard integers for (int i = 0; i < 1000; i++) integer[i] = null; gfg.gc(); memory2 = gfg.freeMemory(); System.out.println(\"Free memory after \" + \"collecting discarded Integers: \" + memory2); }}", "e": 3124, "s": 1736, "text": null }, { "code": null, "e": 3133, "s": 3124, "text": "Output: " }, { "code": null, "e": 3372, "s": 3133, "text": "Total memory is: 128974848\nInitial free memory: 126929976\nFree memory after garbage collection: 128632384\nFree memory after allocation: 127950744\nMemory used by allocation: 681640\nFree memory after collecting discarded Integers: 128643696" }, { "code": null, "e": 3804, "s": 3372, "text": "This article is contributed by Abhishek Verma(maverick). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 3813, "s": 3804, "text": "sweetyty" }, { "code": null, "e": 3837, "s": 3813, "text": "java-garbage-collection" }, { "code": null, "e": 3842, "s": 3837, "text": "Java" }, { "code": null, "e": 3847, "s": 3842, "text": "Java" } ]
How to validate CVV number using Regular Expression
11 Nov, 2021 Given string str, the task is to check whether it is a valid CVV (Card Verification Value) number or not by using Regular Expression. The valid CVV (Card Verification Value) number must satisfy the following conditions: It should have 3 or 4 digits.It should have a digit between 0-9.It should not have any alphabets and special characters. It should have 3 or 4 digits. It should have a digit between 0-9. It should not have any alphabets and special characters. Examples: Input: str = “561” Output: true Explanation: The given string satisfies all the above mentioned conditions. Therefore, it is a valid CVV (Card Verification Value) number. Input: str = “50614” Output: false Explanation: The given string has five-digit. Therefore, it is not a valid CVV (Card Verification Value) number. Input: str = “5a#1” Output: false Explanation: The given string has alphabets and special characters. Therefore, it is not a valid CVV (Card Verification Value) number. Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer. Get the String. Create a regular expression to check valid CVV (Card Verification Value) number as mentioned below: regex = "^[0-9]{3, 4}$"; Where: ^ represents the starting of the string.[0-9] represents the digit between 0-9.{3, 4} represents the string has 3 or 4 digits.$ represents the ending of the string. ^ represents the starting of the string. [0-9] represents the digit between 0-9. {3, 4} represents the string has 3 or 4 digits. $ represents the ending of the string. Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher(). Return true if the string matches with the given regular expression, else return false. Below is the implementation of the above approach: Java Python3 C++ // Java program to validate// CVV (Card Verification Value)// number using regex.import java.util.regex.*;class GFG { // Function to validate // CVV (Card Verification Value) number. // using regular expression. public static boolean isValidCVVNumber(String str) { // Regex to check valid CVV number. String regex = "^[0-9]{3,4}$"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (str == null) { return false; } // Find match between given string // and regular expression // using Pattern.matcher() Matcher m = p.matcher(str); // Return if the string // matched the ReGex return m.matches(); } // Driver code public static void main(String args[]) { // Test Case 1: String str1 = "561"; System.out.println(isValidCVVNumber(str1)); // Test Case 2: String str2 = "5061"; System.out.println(isValidCVVNumber(str2)); // Test Case 3: String str3 = "50614"; System.out.println(isValidCVVNumber(str3)); // Test Case 4: String str4 = "5a#1"; System.out.println(isValidCVVNumber(str4)); }} # Python3 program to validate# CVV (Card Verification Value)# number using regex.import re # Function to validate# CVV (Card Verification Value) number.# using regular expression. def isValidCVVNumber(str): # Regex to check valid # CVV number. 
regex = "^[0-9]{3,4}$" # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if(str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1:str1 = "561"print(isValidCVVNumber(str1)) # Test Case 2:str2 = "5061"print(isValidCVVNumber(str2)) # Test Case 3:str3 = "50614"print(isValidCVVNumber(str3)) # Test Case 4:str4 = "5a#1"print(isValidCVVNumber(str4)) # This code is contributed by avanitrachhadiya2155 // C++ program to validate the// CVV (Card Verification Value) number// using Regular Expression#include <iostream>#include <regex>using namespace std; // Function to validate the CVV// (Card Verification Value) numberbool isValidCVVNumber(string str){ // Regex to check valid CVV // (Card Verification Value) number const regex pattern("^[0-9]{3,4}$"); // If the CVV (Card Verification Value) // number is empty return false if (str.empty()) { return false; } // Return true if the CVV // (Card Verification Value) number // matched the ReGex if (regex_match(str, pattern)) { return true; } else { return false; }} // Driver Codeint main(){ // Test Case 1: string str1 = "561"; cout << isValidCVVNumber(str1) << endl; // Test Case 2: string str2 = "5061"; cout << isValidCVVNumber(str2) << endl; // Test Case 3: string str3 = "50614"; cout << isValidCVVNumber(str3) << endl; // Test Case 4: string str4 = "5a#1"; cout << isValidCVVNumber(str4) << endl; return 0;} // This code is contributed by yuvraj_chandra true true false false avanitrachhadiya2155 yuvraj_chandra kashishsoda CPP-regex java-regular-expression regular-expression Pattern Searching Strings Strings Pattern Searching Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 54, "s": 26, "text": "\n11 Nov, 2021" }, { "code": null, "e": 275, "s": 54, "text": "Given string str, the task is to check whether it is a valid CVV (Card Verification Value) number or not by using Regular Expression. The valid CVV (Card Verification Value) number must satisfy the following conditions: " }, { "code": null, "e": 396, "s": 275, "text": "It should have 3 or 4 digits.It should have a digit between 0-9.It should not have any alphabets and special characters." }, { "code": null, "e": 426, "s": 396, "text": "It should have 3 or 4 digits." }, { "code": null, "e": 462, "s": 426, "text": "It should have a digit between 0-9." }, { "code": null, "e": 519, "s": 462, "text": "It should not have any alphabets and special characters." }, { "code": null, "e": 530, "s": 519, "text": "Examples: " }, { "code": null, "e": 701, "s": 530, "text": "Input: str = “561” Output: true Explanation: The given string satisfies all the above mentioned conditions. Therefore, it is a valid CVV (Card Verification Value) number." }, { "code": null, "e": 849, "s": 701, "text": "Input: str = “50614” Output: false Explanation: The given string has five-digit. Therefore, it is not a valid CVV (Card Verification Value) number." }, { "code": null, "e": 1019, "s": 849, "text": "Input: str = “5a#1” Output: false Explanation: The given string has alphabets and special characters. Therefore, it is not a valid CVV (Card Verification Value) number. " }, { "code": null, "e": 1150, "s": 1019, "text": "Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer. " }, { "code": null, "e": 1166, "s": 1150, "text": "Get the String." }, { "code": null, "e": 1266, "s": 1166, "text": "Create a regular expression to check valid CVV (Card Verification Value) number as mentioned below:" }, { "code": null, "e": 1291, "s": 1266, "text": "regex = \"^[0-9]{3, 4}$\";" }, { "code": null, "e": 1463, "s": 1291, "text": "Where: ^ represents the starting of the string.[0-9] represents the digit between 0-9.{3, 4} represents the string has 3 or 4 digits.$ represents the ending of the string." }, { "code": null, "e": 1504, "s": 1463, "text": "^ represents the starting of the string." }, { "code": null, "e": 1544, "s": 1504, "text": "[0-9] represents the digit between 0-9." }, { "code": null, "e": 1592, "s": 1544, "text": "{3, 4} represents the string has 3 or 4 digits." }, { "code": null, "e": 1631, "s": 1592, "text": "$ represents the ending of the string." }, { "code": null, "e": 1737, "s": 1631, "text": "Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher()." }, { "code": null, "e": 1825, "s": 1737, "text": "Return true if the string matches with the given regular expression, else return false." }, { "code": null, "e": 1876, "s": 1825, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 1881, "s": 1876, "text": "Java" }, { "code": null, "e": 1889, "s": 1881, "text": "Python3" }, { "code": null, "e": 1893, "s": 1889, "text": "C++" }, { "code": "// Java program to validate// CVV (Card Verification Value)// number using regex.import java.util.regex.*;class GFG { // Function to validate // CVV (Card Verification Value) number. // using regular expression. public static boolean isValidCVVNumber(String str) { // Regex to check valid CVV number. 
String regex = \"^[0-9]{3,4}$\"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (str == null) { return false; } // Find match between given string // and regular expression // using Pattern.matcher() Matcher m = p.matcher(str); // Return if the string // matched the ReGex return m.matches(); } // Driver code public static void main(String args[]) { // Test Case 1: String str1 = \"561\"; System.out.println(isValidCVVNumber(str1)); // Test Case 2: String str2 = \"5061\"; System.out.println(isValidCVVNumber(str2)); // Test Case 3: String str3 = \"50614\"; System.out.println(isValidCVVNumber(str3)); // Test Case 4: String str4 = \"5a#1\"; System.out.println(isValidCVVNumber(str4)); }}", "e": 3177, "s": 1893, "text": null }, { "code": "# Python3 program to validate# CVV (Card Verification Value)# number using regex.import re # Function to validate# CVV (Card Verification Value) number.# using regular expression. def isValidCVVNumber(str): # Regex to check valid # CVV number. regex = \"^[0-9]{3,4}$\" # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if(str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1:str1 = \"561\"print(isValidCVVNumber(str1)) # Test Case 2:str2 = \"5061\"print(isValidCVVNumber(str2)) # Test Case 3:str3 = \"50614\"print(isValidCVVNumber(str3)) # Test Case 4:str4 = \"5a#1\"print(isValidCVVNumber(str4)) # This code is contributed by avanitrachhadiya2155", "e": 4009, "s": 3177, "text": null }, { "code": "// C++ program to validate the// CVV (Card Verification Value) number// using Regular Expression#include <iostream>#include <regex>using namespace std; // Function to validate the CVV// (Card Verification Value) numberbool isValidCVVNumber(string str){ // Regex to check valid CVV // (Card Verification Value) number const regex pattern(\"^[0-9]{3,4}$\"); // If the CVV (Card Verification Value) // number is empty return false if (str.empty()) { return false; } // Return true if the CVV // (Card Verification Value) number // matched the ReGex if (regex_match(str, pattern)) { return true; } else { return false; }} // Driver Codeint main(){ // Test Case 1: string str1 = \"561\"; cout << isValidCVVNumber(str1) << endl; // Test Case 2: string str2 = \"5061\"; cout << isValidCVVNumber(str2) << endl; // Test Case 3: string str3 = \"50614\"; cout << isValidCVVNumber(str3) << endl; // Test Case 4: string str4 = \"5a#1\"; cout << isValidCVVNumber(str4) << endl; return 0;} // This code is contributed by yuvraj_chandra", "e": 5139, "s": 4009, "text": null }, { "code": null, "e": 5161, "s": 5139, "text": "true\ntrue\nfalse\nfalse" }, { "code": null, "e": 5182, "s": 5161, "text": "avanitrachhadiya2155" }, { "code": null, "e": 5197, "s": 5182, "text": "yuvraj_chandra" }, { "code": null, "e": 5209, "s": 5197, "text": "kashishsoda" }, { "code": null, "e": 5219, "s": 5209, "text": "CPP-regex" }, { "code": null, "e": 5243, "s": 5219, "text": "java-regular-expression" }, { "code": null, "e": 5262, "s": 5243, "text": "regular-expression" }, { "code": null, "e": 5280, "s": 5262, "text": "Pattern Searching" }, { "code": null, "e": 5288, "s": 5280, "text": "Strings" }, { "code": null, "e": 5296, "s": 5288, "text": "Strings" }, { "code": null, "e": 5314, "s": 5296, "text": "Pattern Searching" } ]
PHP | urlencode() Function
31 Jul, 2021

The urlencode() function is an inbuilt function in PHP which is used to encode a URL. This function returns a string in which all non-alphanumeric characters except -, _ and . have been replaced with a percent (%) sign followed by two hex digits, and spaces are encoded as plus (+) signs.

Syntax:

string urlencode( $input )

Parameters: This function accepts a single parameter $input which holds the URL to be encoded.

Return Value: This function returns an encoded string on success.

Below programs illustrate the urlencode() function in PHP:

Program 1:

<?php
// PHP program to illustrate the urlencode() function
echo urlencode("https://geeksforgeeks.org/") . "\n";
?>

Output:

https%3A%2F%2Fgeeksforgeeks.org%2F

Program 2:

<?php
// PHP program to illustrate the urlencode() function
echo urlencode("https://ide.geeksforgeeks.org/") . "\n";
echo urlencode("https://write.geeksforgeeks.org/") . "\n";
echo urlencode("https://practice.geeksforgeeks.org/") . "\n";
echo urlencode("https://geeksforgeeks.org/") . "\n";
?>

Output:

https%3A%2F%2Fide.geeksforgeeks.org%2F
https%3A%2F%2Fwrite.geeksforgeeks.org%2F
https%3A%2F%2Fpractice.geeksforgeeks.org%2F
https%3A%2F%2Fgeeksforgeeks.org%2F

Reference: http://php.net/manual/en/function.urlencode.php
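For comparison, Python's urllib.parse.quote_plus applies the same encoding scheme (percent-encoding with spaces turned into plus signs); a minimal sketch:

from urllib.parse import quote_plus

# quote_plus behaves like PHP's urlencode(): reserved characters
# become %XX escapes and spaces become '+'.
print(quote_plus("https://geeksforgeeks.org/"))  # https%3A%2F%2Fgeeksforgeeks.org%2F
print(quote_plus("geeks for geeks & PHP"))       # geeks+for+geeks+%26+PHP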
[ { "code": null, "e": 53, "s": 25, "text": "\n31 Jul, 2021" }, { "code": null, "e": 330, "s": 53, "text": "The urlencode() function is an inbuilt function in PHP which is used to encode the url. This function returns a string which consist all non-alphanumeric characters except -_. and replace by the percent (%) sign followed by two hex digits and spaces encoded as plus (+) signs." }, { "code": null, "e": 338, "s": 330, "text": "Syntax:" }, { "code": null, "e": 365, "s": 338, "text": "string urlencode( $input )" }, { "code": null, "e": 468, "s": 365, "text": "Parameters: This function accepts single parameter $input which is used to hold the url to be encoded." }, { "code": null, "e": 534, "s": 468, "text": "Return Value: This function returns an encoded string on success." }, { "code": null, "e": 593, "s": 534, "text": "Below programs illustrate the urlencode() function in PHP:" }, { "code": null, "e": 604, "s": 593, "text": "Program 1:" }, { "code": "<?php // PHP program to illustrate urlencode functionecho urlencode(\"https://geeksforgeeks.org/\") . \"\\n\"; ?>", "e": 715, "s": 604, "text": null }, { "code": null, "e": 751, "s": 715, "text": "https%3A%2F%2Fgeeksforgeeks.org%2F\n" }, { "code": null, "e": 763, "s": 751, "text": "Program 2 :" }, { "code": "<?php // PHP program to illustrate urlencode functionecho urlencode(\"https://ide.geeksforgeeks.org/\") . \"\\n\";echo urlencode(\"https://write.geeksforgeeks.org/\") . \"\\n\";echo urlencode(\"https://practice.geeksforgeeks.org/\") . \"\\n\";echo urlencode(\"https://geeksforgeeks.org/\") . \"\\n\"; ?>", "e": 1049, "s": 763, "text": null }, { "code": null, "e": 1209, "s": 1049, "text": "https%3A%2F%2Fide.geeksforgeeks.org%2F\nhttps%3A%2F%2Fwrite.geeksforgeeks.org%2F\nhttps%3A%2F%2Fpractice.geeksforgeeks.org%2F\nhttps%3A%2F%2Fgeeksforgeeks.org%2F\n" }, { "code": null, "e": 1268, "s": 1209, "text": "Reference: http://php.net/manual/en/function.urlencode.php" }, { "code": null, "e": 1437, "s": 1268, "text": "PHP is a server-side scripting language designed specifically for web development. You can learn PHP from the ground up by following this PHP Tutorial and PHP Examples." }, { "code": null, "e": 1445, "s": 1437, "text": "julthep" }, { "code": null, "e": 1458, "s": 1445, "text": "PHP-function" }, { "code": null, "e": 1462, "s": 1458, "text": "PHP" }, { "code": null, "e": 1479, "s": 1462, "text": "Web Technologies" }, { "code": null, "e": 1483, "s": 1479, "text": "PHP" } ]
How to Convert Decimal to Hexadecimal?
The decimal system is the number system most familiar to the general public. It is base 10, which has only ten symbols − 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The hexadecimal system is the number system most familiar from color representation in computers and digital systems. It is base 16, which has sixteen symbols − 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and A, B, C, D, E, F. The letters A, B, C, D, E, F are used as single digits in place of the two-digit values 10, 11, 12, 13, 14, 15 respectively.

There are various direct and indirect methods to convert a decimal number into a hexadecimal number. In an indirect method, you first convert the decimal number into another number system (e.g., binary or octal); you can then obtain the hexadecimal number by converting each octal digit into 3 binary bits and regrouping the binary digits in groups of 4.

Example − Convert the decimal number 100 into a hexadecimal number.

First convert it into a binary or octal number:
= (100)10
= (1×2^6 + 1×2^5 + 0×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 0×2^0)10 or (1×8^2 + 4×8^1 + 4×8^0)10
because the bases of binary and octal are 2 and 8 respectively.
= (1100100)2 or (144)8
Then convert each octal digit into 3 bits of binary and regroup the bits into groups of 4:
= (1100100)2 or (001 100 100)2
= (0110 0100)2
= (6 4)16
= (64)16

However, there are two direct methods available for converting a decimal number into a hexadecimal number − converting with remainders and converting with division. These are explained below.

The first, converting with remainders, is a straightforward method which involves dividing the number to be converted. Let the decimal number be N; divide this number by 16, because the base of the hexadecimal number system is 16. Note down the value of the remainder, which will be 0 to 15 (replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively). Again divide the remaining decimal number in the same way until it becomes 0, and note the remainder of every step. Then write the remainders from bottom to top (in reverse order); this gives the equivalent hexadecimal number of the given decimal number. The algorithm for converting an integer decimal number is given below.

1. Take the decimal number as the dividend.
2. Divide this number by 16 (16 is the base of hexadecimal, so it is the divisor here).
3. Store the remainder in an array (it will be 0 to 15 because of the divisor 16; replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively).
4. Repeat the above two steps while the number is greater than zero.
5. Print the array in reverse order (which will be the equivalent hexadecimal number of the given decimal number).

Note that the dividend (here, the given decimal number) is the number being divided, the divisor (here, the base of hexadecimal, i.e., 16) is the number by which the dividend is divided, and the quotient (the remaining decimal number) is the result of the division.

Example − Convert the decimal number 540 into a hexadecimal number.

Since the given number is a decimal integer, apply the above algorithm, performing short division by 16 with remainders.
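The successive divisions are:

540 ÷ 16 = 33, remainder 12 (C)
33 ÷ 16 = 2, remainder 1
2 ÷ 16 = 0, remainder 2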
Now, write the remainders from bottom to top (in reverse order); this gives 021C (or simply 21C), which is the hexadecimal equivalent of the decimal integer 540.

But the above method cannot convert the fraction part of a mixed number (a number with both an integer and a fraction part). For the decimal fractional part, the method is explained below.

Let the decimal fractional part be M; multiply this number by 16, because the base of the hexadecimal number system is 16. Note down the integer part of the result, which will be 0 to 15 (replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively). Again multiply the remaining decimal fraction in the same way until it becomes 0, and note the integer part of the result at every step. Then write the noted integer parts in order; this gives the equivalent hexadecimal fraction of the given decimal number. The algorithm for converting a fractional decimal number is given below.

1. Take the decimal fraction as the multiplicand.
2. Multiply this number by 16 (16 is the base of hexadecimal, so it is the multiplier here).
3. Store the integer part of the result in an array (it will be 0 to 15 because of the multiplier 16; replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively).
4. Repeat the above two steps until the number becomes zero.
5. Print the array (which will be the equivalent hexadecimal fraction of the given decimal fraction).

Note that the multiplicand (here, the decimal fractional number) is the number to be multiplied by the multiplier (here, the base of hexadecimal, i.e., 16).

Example − Convert the decimal fraction 0.06640625 into a hexadecimal number.

Since the given number is a decimal fraction, apply the above algorithm, performing short multiplication by 16 and taking the integer part at each step:

0.06640625 × 16 = 1.0625 (integer part 1)
0.0625 × 16 = 1.0 (integer part 1)

Writing these integer parts in order gives 0.11, which is the exact hexadecimal equivalent of the decimal fraction 0.06640625.

The second direct method, converting with division, estimates the hexadecimal digits of a decimal number directly. You need a table of the powers of 16. For the integer part, the algorithm is explained below.

1. Start with any decimal number.
2. List the powers of 16.
3. Divide the decimal number by the largest power of 16 that does not exceed it.
4. Find the remainder.
5. Divide the remainder by the next power of 16.
6. Repeat until you've found the full answer.

Example − Convert the decimal number 380 into a hexadecimal number.

According to the above algorithm, the relevant powers of 16 are 16^0 = 1, 16^1 = 16 and 16^2 = 256.

Divide the decimal number by the largest power of 16:
= 380 / 256 = 1.484375
So 1 is the first (most significant) digit of the hexadecimal number.
The remainder is:
= 380 − 1×256 = 124
Now, divide this remainder by the next power of 16:
= 124 / 16 = 7.75
So 7 is the next digit of the hexadecimal number.
The remainder is:
= 124 − 7×16 = 12
Because the remainder 12 (= C) is less than the base 16, C (= 12) is the last (least significant) digit of the required hexadecimal number.
Therefore, 17C is the hexadecimal equivalent of the given decimal number 380.
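Both algorithms translate directly into a short Python sketch (the helper names here are illustrative):

DIGITS = "0123456789ABCDEF"

def int_to_hex(n):
    # Remainder method: divide by 16, collect remainders,
    # then read them in reverse order.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 16)
        digits.append(DIGITS[r])
    return "".join(reversed(digits))

def frac_to_hex(m, max_digits=8):
    # Multiplication method: multiply by 16 and keep the
    # integer part of the result at each step.
    digits = []
    while m > 0 and len(digits) < max_digits:
        m *= 16
        d = int(m)
        digits.append(DIGITS[d])
        m -= d
    return "".join(digits)

print(int_to_hex(540))          # 21C
print(int_to_hex(380))          # 17C
print(frac_to_hex(0.06640625))  # 11, i.e. hexadecimal 0.11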
[ { "code": null, "e": 1517, "s": 1062, "text": "Decimal system is most familiar number system to the general public. It is base 10 which has only 10 symbols − 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Whereas Hexadecimal system is most familiar number system color representation in Computers or digital systems. It is base 16 which has only 16 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and A, B, C, D, E, F. These A, B, C, D, E, F use as single digit in place of double digits, 10, 11, 12, 13, 14, 15 respectively." }, { "code": null, "e": 1915, "s": 1517, "text": "There are various direct or indirect methods to convert a decimal number into hexadecimal number. In an indirect method, you need to convert a decimal number into other number system (e.g., binary or octal), then you can convert into hexadecimal number by using grouping from binary number system and converting each octal digit into binary then grouping and convert these into hexadecimal number." }, { "code": null, "e": 1977, "s": 1915, "text": "Example − Convert decimal number 105 into hexadecimal number." }, { "code": null, "e": 2371, "s": 1977, "text": "First convert it into binary or octal number,\n= (100)10\n= (1x26+1x25+0x24+0x23+1x22+0x21+0x20)10 or (1x82+4x81+4x80)10\nBecause base of binary and octal are 2 and 8 respectively.\n= (1100100)2 or (144)8\nThen convert each digit of octal number into 3 bit of binary number, then use grouping of 4 bit of binary number.\n= (1100100)2 or (001 100 100)2\n= (110 0100)2\n= (0110 0100)2\n= (6 4)16\n= (64)16" }, { "code": null, "e": 2578, "s": 2371, "text": "However, there are two direct methods are available for converting a decimal number into Hexadecimal number − Converting with Remainders and Converting with Division. These are explained as following below." }, { "code": null, "e": 3204, "s": 2578, "text": "This is a straightforward method which involve dividing the number to be converted. Let decimal number is N then divide this number from 16 because base of hexadecimal number system is 16. Note down the value of remainder, which will be: 0 to 15 (replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively). Again divide remaining decimal number till it became 0 and note every remainder of every step. Then write remainders from bottom to up (or in reverse order), which will be equivalent hexadecimal number of given decimal number. This is procedure for converting an integer decimal number, algorithm is given below." }, { "code": null, "e": 3237, "s": 3204, "text": "Take decimal number as dividend." }, { "code": null, "e": 3270, "s": 3237, "text": "Take decimal number as dividend." }, { "code": null, "e": 3340, "s": 3270, "text": "Divide this number by 16 (16 is base of hexadecimal so divisor here)." }, { "code": null, "e": 3410, "s": 3340, "text": "Divide this number by 16 (16 is base of hexadecimal so divisor here)." }, { "code": null, "e": 3552, "s": 3410, "text": "Store the remainder in an array (it will be: 0 to 15 because of divisor 16, replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively)." }, { "code": null, "e": 3694, "s": 3552, "text": "Store the remainder in an array (it will be: 0 to 15 because of divisor 16, replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively)." }, { "code": null, "e": 3760, "s": 3694, "text": "Repeat the above two steps until the number is greater than zero." }, { "code": null, "e": 3826, "s": 3760, "text": "Repeat the above two steps until the number is greater than zero." 
}, { "code": null, "e": 3930, "s": 3826, "text": "Print the array in reverse order (which will be equivalent hexadecimal number of given decimal number)." }, { "code": null, "e": 4034, "s": 3930, "text": "Print the array in reverse order (which will be equivalent hexadecimal number of given decimal number)." }, { "code": null, "e": 4286, "s": 4034, "text": "Note that dividend (here given decimal number) is the number being divided, the divisor (here base of hexadecimal, i.e., 16) in the number by which the dividend is divided, and quotient (remaining divided decimal number) is the result of the division." }, { "code": null, "e": 4348, "s": 4286, "text": "Example − Convert decimal number 540 into hexadecimal number." }, { "code": null, "e": 4470, "s": 4348, "text": "Since given number is decimal integer number, so by using above algorithm performing short division by 16 with remainder." }, { "code": null, "e": 4624, "s": 4470, "text": "Now, write remainder from bottom to up (in reverse order), this will be 021C (or only 21C) which is equivalent hexadecimal number of decimal integer 540." }, { "code": null, "e": 4817, "s": 4624, "text": "But above method can not convert fraction part of a mixed (a number with integer and fraction part) hexadecimal number. For decimal fractional part, the method is explained as following below." }, { "code": null, "e": 5391, "s": 4817, "text": "Let decimal fractional part is M then multiply this number from 16 because base of hexadecimal number system is 16. Note down the value of integer part, which will be − 0 to 15 (replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively). Again multiply remaining decimal fractional number till it became 0 and note every integer part of result of every step. Then write noted results of integer part, which will be equivalent fraction hexadecimal number of given decimal number. This is procedure for converting an fractional decimal number, algorithm is given below." }, { "code": null, "e": 5428, "s": 5391, "text": "Take decimal number as multiplicand." }, { "code": null, "e": 5465, "s": 5428, "text": "Take decimal number as multiplicand." }, { "code": null, "e": 5540, "s": 5465, "text": "Multiple this number by 16 (16 is base of hexadecimal so multiplier here)." }, { "code": null, "e": 5615, "s": 5540, "text": "Multiple this number by 16 (16 is base of hexadecimal so multiplier here)." }, { "code": null, "e": 5783, "s": 5615, "text": "Store the value of integer part of result in an array (it will be: 0 to 15, because of multiplier 16, replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively)." }, { "code": null, "e": 5951, "s": 5783, "text": "Store the value of integer part of result in an array (it will be: 0 to 15, because of multiplier 16, replace 10, 11, 12, 13, 14, 15 by A, B, C, D, E, F respectively)." }, { "code": null, "e": 6008, "s": 5951, "text": "Repeat the above two steps until the number became zero." }, { "code": null, "e": 6065, "s": 6008, "text": "Repeat the above two steps until the number became zero." }, { "code": null, "e": 6174, "s": 6065, "text": "Print the array (which will be equivalent fractional hexadecimal number of given decimal fractional number)." }, { "code": null, "e": 6283, "s": 6174, "text": "Print the array (which will be equivalent fractional hexadecimal number of given decimal fractional number)." 
}, { "code": null, "e": 6417, "s": 6283, "text": "Note that a multiplicand (here decimal fractional number) is that to be multiplied by multiplier (here base of hexadecimal, i.e., 16)" }, { "code": null, "e": 6497, "s": 6417, "text": "Example − Convert decimal fractional number 0.06640625 into hexadecimal number." }, { "code": null, "e": 6631, "s": 6497, "text": "Since given number is decimal fractional number, so by using above algorithm performing short multiplication by 16 with integer part." }, { "code": null, "e": 6787, "s": 6631, "text": "Now, write these resultant integer part, this will be approximate 0.110 which is equivalent hexadecimal fractional number of decimal fractional 0.06640625." }, { "code": null, "e": 6957, "s": 6787, "text": "This method is guessing hexadecimal number of a decimal number. You need to draw a table of power of 16, For integer part, The algorithm is explained as following below." }, { "code": null, "e": 6988, "s": 6957, "text": "Start with any decimal number." }, { "code": null, "e": 7019, "s": 6988, "text": "Start with any decimal number." }, { "code": null, "e": 7042, "s": 7019, "text": "List the powers of 16." }, { "code": null, "e": 7065, "s": 7042, "text": "List the powers of 16." }, { "code": null, "e": 7119, "s": 7065, "text": "Divide the decimal number by the largest power of 16." }, { "code": null, "e": 7173, "s": 7119, "text": "Divide the decimal number by the largest power of 16." }, { "code": null, "e": 7193, "s": 7173, "text": "Find the remainder." }, { "code": null, "e": 7213, "s": 7193, "text": "Find the remainder." }, { "code": null, "e": 7259, "s": 7213, "text": "Divide the remainder by the next power of 16." }, { "code": null, "e": 7305, "s": 7259, "text": "Divide the remainder by the next power of 16." }, { "code": null, "e": 7348, "s": 7305, "text": "Repeat until you've found the full answer." }, { "code": null, "e": 7391, "s": 7348, "text": "Repeat until you've found the full answer." }, { "code": null, "e": 7453, "s": 7391, "text": "Example − Convert decimal number 380 into hexadecimal number." }, { "code": null, "e": 7505, "s": 7453, "text": "According to above algorithm, table of power of 16," }, { "code": null, "e": 8105, "s": 7505, "text": "Divide the decimal number by the largest power of 16.\n= 380 / 256 = 1.484375\nSo 1 will be first digit or most significant bit (MSB) of hexadecimal number.\nNow, remainder will be,\n= 380 - 1256 =124\nNow, divide this\nremainder by the next power of 16.\n= 124 / 16 = 7.75\nSo 7 will be next digit or second most significant bit (MSB) of hexadecimal number.\nNow, remainder will be,\n= 124 - 716 = 12\nBecause remainder 12(= C) is less than base 16, so C(=12) will be ast (least significant) bit of required hexadecimal number.\nTherefore, 17C will be equivalent hexadecimal number of given decimal number 380." } ]
Convert an object to associative array in PHP
To convert an object to an associative array in PHP, the code is as follows −

<?php
   class department {
      public function __construct($deptname, $deptzone) {
         $this->deptname = $deptname;
         $this->deptzone = $deptzone;
      }
   }
   $myObj = new department("Marketing", "South");
   echo "Before conversion:"."\n";
   var_dump($myObj);
   $myArray = json_decode(json_encode($myObj), true);
   echo "After conversion:"."\n";
   var_dump($myArray);
?>

This will produce the following output −

Before conversion:
object(department)#1 (2) {
  ["deptname"]=>
  string(9) "Marketing"
  ["deptzone"]=>
  string(5) "South"
}
After conversion:
array(2) {
  ["deptname"]=>
  string(9) "Marketing"
  ["deptzone"]=>
  string(5) "South"
}

Let us now see another example, which uses a direct (array) cast instead. Note that the json_encode()/json_decode() round trip above converts nested objects recursively, while the (array) cast is shallow and converts only the top-level object −

<?php
   class department {
      public function __construct($deptname, $deptzone) {
         $this->deptname = $deptname;
         $this->deptzone = $deptzone;
      }
   }
   $myObj = new department("Marketing", "South");
   echo "Before conversion:"."\n";
   var_dump($myObj);
   $arr = (array)$myObj;
   echo "After conversion:"."\n";
   var_dump($arr);
?>

This will produce the following output −

Before conversion:
object(department)#1 (2) {
  ["deptname"]=>
  string(9) "Marketing"
  ["deptzone"]=>
  string(5) "South"
}
After conversion:
array(2) {
  ["deptname"]=>
  string(9) "Marketing"
  ["deptzone"]=>
  string(5) "South"
}
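For comparison outside PHP, the same two strategies exist in Python: vars() is a shallow conversion like the (array) cast above, while a JSON round trip converts nested objects recursively. A minimal sketch:

import json

class Department:
    def __init__(self, name, zone):
        self.name = name
        self.zone = zone

d = Department("Marketing", "South")

# Shallow conversion, analogous to PHP's (array) cast:
print(vars(d))   # {'name': 'Marketing', 'zone': 'South'}

# Recursive conversion via a JSON round trip, analogous to
# json_decode(json_encode($obj), true); default=vars tells
# json.dumps how to serialise any nested objects it meets:
print(json.loads(json.dumps(d, default=vars)))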
[ { "code": null, "e": 1136, "s": 1062, "text": "To convert an object to associative array in PHP, the code is as follows−" }, { "code": null, "e": 1147, "s": 1136, "text": " Live Demo" }, { "code": null, "e": 1542, "s": 1147, "text": "<?php\n class department {\n public function __construct($deptname, $deptzone) {\n $this->deptname = $deptname;\n $this->deptzone = $deptzone;\n }\n }\n $myObj = new department(\"Marketing\", \"South\");\n echo \"Before conversion:\".\"\\n\";\n var_dump($myObj);\n $myArray = json_decode(json_encode($myObj), true);\n echo \"After conversion:\".\"\\n\";\n var_dump($myArray);\n?>" }, { "code": null, "e": 1582, "s": 1542, "text": "This will produce the following output−" }, { "code": null, "e": 1825, "s": 1582, "text": "Before conversion:\nobject(department)#1 (2) {\n [\"deptname\"]=>\n string(9) \"Marketing\"\n [\"deptzone\"]=>\n string(5) \"South\"\n}\nAfter conversion:\narray(2) {\n [\"deptname\"]=>\n string(9) \"Marketing\"\n [\"deptzone\"]=>\n string(5) \"South\"\n}" }, { "code": null, "e": 1858, "s": 1825, "text": "Let us now see another example −" }, { "code": null, "e": 1869, "s": 1858, "text": " Live Demo" }, { "code": null, "e": 2231, "s": 1869, "text": "<?php\n class department {\n public function __construct($deptname, $deptzone) {\n $this->deptname = $deptname;\n $this->deptzone = $deptzone;\n }\n }\n $myObj = new department(\"Marketing\", \"South\");\n echo \"Before conversion:\".\"\\n\";\n var_dump($myObj);\n $arr = (array)$myObj;\n echo \"After conversion:\".\"\\n\";\n var_dump($arr);\n?>" }, { "code": null, "e": 2271, "s": 2231, "text": "This will produce the following output−" }, { "code": null, "e": 2514, "s": 2271, "text": "Before conversion:\nobject(department)#1 (2) {\n [\"deptname\"]=>\n string(9) \"Marketing\"\n [\"deptzone\"]=>\n string(5) \"South\"\n}\nAfter conversion:\narray(2) {\n [\"deptname\"]=>\n string(9) \"Marketing\"\n [\"deptzone\"]=>\n string(5) \"South\"\n}" } ]
Bootstrapping using Python and R. Estimating a sampling distribution... | by Michael Grogan | Towards Data Science
All too often, the attempt to determine the characteristics of a population is constrained by the fact that we must rely on a sample to determine the characteristics of that population.

When analysing data, we would like to be able to estimate a sampling distribution in order to perform hypothesis tests and calculate confidence intervals.

One way of attempting to solve this issue is a method called bootstrapping, whereby results for a wider population are inferred from repeated sampling.

For instance, in order to determine whether a different sample of the same population would have yielded similar results, the ideal would be to obtain new data samples. However, given that this may not be possible, an alternative is to randomly sample from the existing data.

For this example, we will look at the distribution of average daily rates (ADR) among a series of hotel customers. Specifically, a subset of ADR figures is used in conjunction with a bootstrapping technique to analyse what the distribution would be expected to look like given a larger sample size.

When generating samples using bootstrapping, this is done with replacement. This means that it is possible to select the same element from the sample more than once.

Using a sample of 300 ADR values for hotel customers, randomly sampled from the dataset provided by Antonio, Almeida, and Nunes, we are going to generate 5,000 bootstrap samples of size 300.

Specifically, numpy is used as below to generate 300 samples with replacement, and a for loop is used to generate 5,000 iterations of 300 samples at a time.

my_samples = []
for _ in range(5000):
    # 'sample' holds the 300 observed ADR values
    x = np.random.choice(sample, size=300, replace=True)
    my_samples.append(x.mean())

Here is a histogram of the bootstrapped sample means:

Here is a histogram of the original 300 samples that were randomly selected:

We can see that the original sample is significantly positively skewed. However, the histogram of the bootstrapped sample means approximates a normal distribution, in line with the central limit theorem: as the sample size increases, the sampling distribution of the mean approaches a normal distribution regardless of the shape of the original data distribution.

A primary use of bootstrapping is to estimate the confidence interval of the population mean. For instance, a 95% confidence interval means we are 95% confident that the mean lies within a particular range.

The confidence interval for the bootstrapped sample is as follows:

>>> import statsmodels.stats.api as sms
>>> sms.DescrStatsW(my_samples).tconfint_mean()
(94.28205060553655, 94.4777529411301)

According to the above, we can be 95% confident that the true population mean lies between 94.28 and 94.47.

Using the boot library in R, a similar form of analysis can be conducted.

Again, assume that only 300 samples are available:

adr <- sample(adr, 300)

Using random sampling with replacement, 5,000 replications of these 300 samples are generated.

library(boot)

# Statistic passed to boot(): the mean of each resample.
# (boot() requires such a helper; a standard definition is
# assumed here, as the original snippet does not show it.)
boot_mean <- function(data, indices) {
  mean(data[indices])
}

x <- sample(adr, 300, replace = TRUE, prob = NULL)
mean_results <- boot(x, boot_mean, R = 5000)

Printing mean_results gives:

ORDINARY NONPARAMETRIC BOOTSTRAP

Call:
boot(data = x, statistic = boot_mean, R = 5000)

Bootstrap Statistics :
     original        bias    std. error
t1*  93.00084  -0.02135855    2.030718

Additionally, here are the confidence interval calculations:

> boot.ci(mean_results)
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 5000 bootstrap replicates

CALL : boot.ci(boot.out = mean_results)

Intervals :
Level      Normal              Basic
95%   (89.04, 97.00 )   (88.98, 96.82 )

Level     Percentile            BCa
95%   (89.18, 97.02 )   (89.36, 97.32 )
Calculations and Intervals on Original Scale

A histogram of the bootstrapped samples is generated:

When running in R, we can see that the confidence intervals are wider than those obtained using Python. This is largely because the Python calculation above applies a t-interval to the 5,000 bootstrap means, dividing their standard deviation by the square root of 5,000, whereas boot.ci reads the interval off the spread of the bootstrap distribution itself. Either way, the analysis indicates that the sample mean obtained is representative of the population as a whole, and is unlikely to have been obtained by chance.

While bootstrapping can be quite a useful tool for inferring population characteristics from a sample, a common error is to assume that bootstrapping is a cure for small sample sizes.

As explained on Cross Validated, a small sample size is likely to be more volatile and prone to significant deviations from that of the broader population. Therefore, one should ensure that the sample being analysed is sufficiently large to capture the population characteristics adequately.

Moreover, another issue with small sample sizes is that the bootstrap distribution itself may end up being narrower, which in turn would result in a narrower confidence interval that could deviate significantly from the real value.

In this article, you have seen:

How bootstrapping can allow us to estimate a sampling distribution using repeated sampling
The importance of replacement when random sampling
How to generate bootstrap samples using Python and R
Limitations of bootstrapping and why larger sample sizes are preferable

Many thanks for your time, and any questions or feedback are greatly appreciated. You can find more of my data science content at michael-grogan.com.

Antonio, Almeida, Nunes (2019). Hotel Booking Demand Dataset
Cross Validated: Can bootstrap be seen as a "cure" for the small sample size?
docs.scipy.org: scipy.stats.bootstrap
Julien Beaulieu: Sampling Distributions
Statistics By Jim: Introduction to Bootstrapping in Statistics with an Example
University of Toronto Coders - Resampling Techniques in R: Bootstrapping and Permutation Testing

Disclaimer: This article is written on an "as is" basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as professional advice. The findings and interpretations in this article are those of the author and are not endorsed by or affiliated with any third-party mentioned in this article. The author has no relationship with any third parties mentioned in this article.
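As a cross-check, the more standard percentile interval can also be read directly off the bootstrap distribution in Python; a minimal sketch, assuming my_samples from the earlier snippet is in scope:

import numpy as np

# The 2.5th and 97.5th percentiles of the 5,000 bootstrap means
# bound the percentile bootstrap 95% confidence interval.
lower, upper = np.percentile(my_samples, [2.5, 97.5])
print(f"95% percentile interval: ({lower:.2f}, {upper:.2f})")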
[ { "code": null, "e": 354, "s": 172, "text": "All too often, the attempt to determine the characteristics of a population is constrained by the fact that we must rely on a sample to determine characteristics of that population." }, { "code": null, "e": 509, "s": 354, "text": "When analysing data, we would like to be able to estimate a sampling distribution in order to perform hypothesis tests and calculate confidence intervals." }, { "code": null, "e": 661, "s": 509, "text": "One way of attempting to solve this issue is a method called bootstrapping, whereby results for a wider population are inferred from repeated sampling." }, { "code": null, "e": 933, "s": 661, "text": "For instance, in order to determine if a different sample of the same population would have yielded similar results, the ideal would be to obtain new data samples. However, given that this may not be possible — an alternative is to randomly sample from the existing data." }, { "code": null, "e": 1233, "s": 933, "text": "For this example, we will look at the distribution of average daily rates (ADR) among a series of hotel customers. Specifically, a subset of ADR figures are used in conjunction with a bootstrapping technique to analyse what the distribution would be expected to look like given a larger sample size." }, { "code": null, "e": 1399, "s": 1233, "text": "When generating samples using bootstrapping, this is done with replacement. This means that it is possible to select the same element from the sample more than once." }, { "code": null, "e": 1592, "s": 1399, "text": "Using a sample of 300 ADR values for hotel customers as randomly sampled from the dataset provided by Antonio, Almeida, and Nunes, we are going to generate 5,000 bootstrap samples of size 300." }, { "code": null, "e": 1749, "s": 1592, "text": "Specifically, numpy is used as below to generate 300 samples with replacement, and a for loop is used to generate 5,000 iterations of 300 samples at a time." }, { "code": null, "e": 1873, "s": 1749, "text": "my_samples = []for _ in range(5000): x = np.random.choice(sample, size=300, replace=True) my_samples.append(x.mean())" }, { "code": null, "e": 1922, "s": 1873, "text": "Here is a histogram of the bootstrapped samples:" }, { "code": null, "e": 1999, "s": 1922, "text": "Here is a histogram of the original 300 samples that were randomly selected:" }, { "code": null, "e": 2382, "s": 1999, "text": "We can see that the original sample is significantly positively skewed. However, the histogram of the bootstrapped samples approximates a normal distribution — in line with the assumption of the central limit theorem that as the sample size increases, the underlying data distribution can be expected to approximate a normal one regardless of the shape of the original distribution." }, { "code": null, "e": 2589, "s": 2382, "text": "A primary use of bootstrapping is to estimate the confidence interval of the population mean. For instance, a 95% confidence interval means we are 95% confident that the mean lies within a particular range." }, { "code": null, "e": 2656, "s": 2589, "text": "The confidence interval for the bootstrapped sample is as follows:" }, { "code": null, "e": 2780, "s": 2656, "text": ">>> import statsmodels.stats.api as sms>>> sms.DescrStatsW(my_samples).tconfint_mean()(94.28205060553655, 94.4777529411301)" }, { "code": null, "e": 2888, "s": 2780, "text": "According to the above, we can be 95% confident that the true population mean lies between 94.28 and 94.47." 
}, { "code": null, "e": 2962, "s": 2888, "text": "Using the boot library in R, a similar form of analysis can be conducted." }, { "code": null, "e": 3013, "s": 2962, "text": "Again, assume that only 300 samples are available:" }, { "code": null, "e": 3035, "s": 3013, "text": "adr<-sample(adr, 300)" }, { "code": null, "e": 3130, "s": 3035, "text": "Using random sampling with replacement, 5,000 replications of these 300 samples are generated." }, { "code": null, "e": 3403, "s": 3130, "text": "x<-sample(adr, 300, replace = TRUE, prob = NULL)> mean_results <- boot(x, boot_mean, R = 5000)ORDINARY NONPARAMETRIC BOOTSTRAPCall:boot(data = x, statistic = boot_mean, R = 5000)Bootstrap Statistics : original bias std. errort1* 93.00084 -0.02135855 2.030718" }, { "code": null, "e": 3464, "s": 3403, "text": "Additionally, here are the confidence interval calculations:" }, { "code": null, "e": 3829, "s": 3464, "text": "> boot.ci(mean_results)BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONSBased on 5000 bootstrap replicatesCALL : boot.ci(boot.out = mean_results)Intervals : Level Normal Basic 95% (89.04, 97.00 ) (88.98, 96.82 )Level Percentile BCa 95% (89.18, 97.02 ) (89.36, 97.32 ) Calculations and Intervals on Original Scale" }, { "code": null, "e": 3884, "s": 3829, "text": "A histogram of the bootstrapped samples are generated:" }, { "code": null, "e": 4195, "s": 3884, "text": "When running in R, we can see that the confidence intervals are wider than those obtained when running using Python. While the reason for this is unclear, the analysis still indicates that the sample mean obtained is representative of the population as a whole, and is unlikely to have been obtained by chance." }, { "code": null, "e": 4379, "s": 4195, "text": "While bootstrapping can be quite a useful tool for inferring population characteristics from a sample, a common error is to assume that bootstrapping is a cure for small sample sizes." }, { "code": null, "e": 4676, "s": 4379, "text": "As explained on Cross Validated, a small sample size is likely to be more volatile and prone to significant deviations from that of the broader population. Therefore, one should ensure that the sample being analysed is sufficiently large in order to capture population characteristics adequately." }, { "code": null, "e": 4909, "s": 4676, "text": "Moreover, another issue with small sample sizes is that the bootstrap distribution itself may end up being narrower — which in turn would result in a narrower confidence interval that could deviate significantly from the real value." }, { "code": null, "e": 4941, "s": 4909, "text": "In this article, you have seen:" }, { "code": null, "e": 5032, "s": 4941, "text": "How bootstrapping can allow us to estimate a sampling distribution using repeated sampling" }, { "code": null, "e": 5079, "s": 5032, "text": "Importance of replacement when random sampling" }, { "code": null, "e": 5132, "s": 5079, "text": "How to generate bootstrap samples using Python and R" }, { "code": null, "e": 5204, "s": 5132, "text": "Limitations of bootstrapping and why larger sample sizes are preferable" }, { "code": null, "e": 5354, "s": 5204, "text": "Many thanks for your time, and any questions or feedback are greatly appreciated. You can find more of my data science content at michael-grogan.com." }, { "code": null, "e": 5415, "s": 5354, "text": "Antonio, Almeida, Nunes (2019). Hotel Booking Demand Dataset" }, { "code": null, "e": 5493, "s": 5415, "text": "Cross Validated: Can bootstrap be seen as a “cure” for the small sample size?" 
}, { "code": null, "e": 5531, "s": 5493, "text": "docs.scipy.org: scipy.stats.bootstrap" }, { "code": null, "e": 5571, "s": 5531, "text": "Julien Beaulieu: Sampling Distributions" }, { "code": null, "e": 5650, "s": 5571, "text": "Statistics By Jim: Introduction to Bootstrapping in Statistics with an Example" }, { "code": null, "e": 5747, "s": 5650, "text": "University of Toronto Coders - Resampling Techniques in R: Bootstrapping and Permutation Testing" } ]
How To Compute Satellite Image Statistics And use It In Pandas | by Abdishakur | Towards Data Science
Satellite data is dense and uses cells to store values. In many cases, however, you want only a summary of the satellite (raster) image converted into a tabular format, such as a CSV file or a Pandas data frame.

Let us say, for example, you have a Digital Elevation Model (DEM). The DEM image gives a clear representation of the elevation and topography of the area. Now, what if you want to take the elevation values and integrate them with tabular data you have, for example buildings, to get the elevation of each building?

This process of deriving tabular outputs (summary statistics) from raster images is called zonal statistics.

In this tutorial, we learn how to extract values from raster data and store these values in tabular format (a Pandas dataframe).

The dataset and the code for this tutorial are available in Github. Let us start by exploring the data.

The dataset for this tutorial consists of Sentinel 2 images taken on 1 November 2019 in Beledweyne, Somalia. The area was flooded during this period, and we calculate the NDWI to map the extent of surface water. We use a Google Colab notebook, and we download the data in the notebook directly from a URL.

Let us import the libraries we use for this tutorial.

import pandas as pd
import numpy as np
import geopandas as gpd
import rasterio as rio
from rasterio.plot import show
from rasterio.mask import mask
import matplotlib.pyplot as plt

With Rasterio you can read the different bands of Sentinel 2 images. In this case, we read bands 8 (NIR), 4 (Red), 3 (Green), and 2 (Blue).

b8 = rio.open("/content/Data/20191101/B08-20191101.tif")
b4 = rio.open("/content/Data/20191101/B04-20191101.tif")
b3 = rio.open("/content/Data/20191101/B03-20191101.tif")
b2 = rio.open("/content/Data/20191101/B02-20191101.tif")

Let us look at the width and height of the images. I am only using b4, but you can check whether all bands have the same width and height.

b4.width, b4.height

We plot the data to see and explore the satellite images we have. Here I am only plotting band 3.

fig, ax = plt.subplots(1, figsize=(12, 10))
show(b3, ax=ax)
plt.show()

If you would like to see how to make RGB images with Rasterio, I have a tutorial on that on Towards Data Science.

The Sentinel 2 image of the area (only band 3) is shown below.

Let us also read the buildings table, which we will use to store the statistical summaries derived from the satellite image. Please note that you can use other polygons, such as districts or rectangular grids, instead of the building polygons for this example.

We use Geopandas to read the data.

buildings = gpd.read_file("/content/Data/shapefiles/osm_buildings.shp")
buildings = buildings[["osm_id", "building", "geometry"]]
buildings.head()

And here are the first five rows of the table. We have the geometry column of each building, plus the osm_id and building columns.

We can also plot the buildings on top of the Sentinel 2 images.

fig, ax = plt.subplots(figsize=(12, 10))
show(b4, ax=ax)
buildings.plot(ax=ax, color="white", alpha=.50)
plt.show()

The buildings are marked in white and overlaid on the image, as shown below. The picture shows the extent of the city (residential areas) with the meandering Shabelle River.

Let us now calculate the NDWI values from the Sentinel 2 images.

To calculate the NDWI values, we use this formula:

(Band3 − Band8) / (Band3 + Band8)

So, let us calculate using this formula in Rasterio.

green = b3.read()
nir = b8.read()

# NDWI = (Green - NIR) / (Green + NIR)
ndwi = (green.astype(float) - nir.astype(float)) / (green + nir)

The NDWI arrays can be plotted with Rasterio, as shown below.
fig, ax = plt.subplots(1, figsize=(12, 10))
show(ndwi, ax=ax, cmap="coolwarm_r")
plt.show()

The NDWI plot clearly shows inundated areas near the Shabelle River (blue), and the extent of the flooding reaches into residential areas.

We can save the NDWI array as a raster image so that we can use it later.

meta = b4.meta
meta.update(driver='GTiff')
meta.update(dtype=rio.float32)

with rio.open('NDWI.tif', 'w', **meta) as dst:
    dst.write(ndwi.astype(rio.float32))

# Read the saved raster back
ndwi_raster = rio.open("NDWI.tif")

Now that we have calculated the NDWI values, it is time to derive statistics from the NDWI raster image and merge them into our buildings table. We use Rasterio's mask functionality to get the cell values from the NDWI raster image.

The following is a small function that masks the raster cell values to a given geometry from our data frame table.

def derive_stats(geom, data=ndwi_raster, **mask_kw):
    # mask() expects an iterable of geometries, so the single
    # geometry is wrapped in a list.
    masked, mask_transform = mask(dataset=data, shapes=[geom],
                                  crop=True, all_touched=True,
                                  filled=True)
    return masked

We can now derive the values we want. Let us say we are interested in getting the mean NDWI value for each building. We create a column for that, "mean_ndwi", apply our function to the buildings' geometry, and then apply np.mean to the masked values.

buildings['mean_ndwi'] = buildings.geometry.apply(derive_stats).apply(np.mean)

Or, get the maximum NDWI value for each building.

buildings['max_ndwi'] = buildings.geometry.apply(derive_stats).apply(np.max)

Our table now has two new columns, mean_ndwi and max_ndwi, where we store the mean and max NDWI values for each building.

The dataset also includes Sentinel images from a previous date (before the flooding) in this area. In the "Data" folder, you have a "20191002" folder. Try to calculate the mean and max NDWI values using these images, and compare them with the statistics derived from the images for 20191101 (the flooding period).

In this tutorial, we have seen how to calculate NDWI values from Sentinel 2 images and derive summary statistics from them. The code and Google Colab notebooks are available in the accompanying Github repository, or directly in the Google Colaboratory notebook.
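If you need several statistics per polygon in one pass, the rasterstats package wraps this masking loop; a minimal sketch, assuming rasterstats is installed (an alternative to the apply-based approach above):

from rasterstats import zonal_stats

# One dict of statistics per building polygon, computed over
# the NDWI raster saved earlier.
stats = zonal_stats(buildings, "NDWI.tif", stats=["mean", "max"])
stats_df = pd.DataFrame(stats).rename(columns={"mean": "mean_ndwi",
                                               "max": "max_ndwi"})
buildings = buildings.join(stats_df)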
[ { "code": null, "e": 368, "s": 172, "text": "Satellite data is dense and uses cells to store values. In many cases, however, you want only a summary of the satellite (raster) image converted into a tabular format — CSV or Pandas data frame." }, { "code": null, "e": 669, "s": 368, "text": "Let us say, for example; you have a Digital Elevation Model (DEM). The DEM image gives a clear representation of the elevation and topography of the area. Now, what if you want to get elevation values and integrate tabular data you have, for example, buildings, to get the elevation of each building." }, { "code": null, "e": 776, "s": 669, "text": "This process of deriving table outputs (Summary statistics) from raster images is called Zonal Statistics." }, { "code": null, "e": 903, "s": 776, "text": "In this tutorial, We learn how to extract values from raster data and store these values in Tabular format (Pandas Dataframe)." }, { "code": null, "e": 1007, "s": 903, "text": "The dataset and the code for this tutorial are available in Github. Let us start by exploring the data." }, { "code": null, "e": 1299, "s": 1007, "text": "The dataset for this tutorial is sentinel images taken on 1 November 2019 in Beledweyne, Somalia. The area is flooded during this period, and we calculate the NDWI to measure the level of water stress. We use Google Colab Notebook, and we download the data in the notebook directly from URL." }, { "code": null, "e": 1353, "s": 1299, "text": "Let us import the libraries we use for this tutorial." }, { "code": null, "e": 1527, "s": 1353, "text": "import pandas as pdimport numpy as npimport geopandas as gpdimport rasterio as riofrom rasterio.plot import showfrom rasterio.mask import maskimport matplotlib.pyplot as plt" }, { "code": null, "e": 1667, "s": 1527, "text": "With Rasterio you can read the different bands of Sentinel 2 Images. In this case, we read bands 8 (NIR), 4 (Red), 3 (Green), and 2 (Blue)." }, { "code": null, "e": 1892, "s": 1667, "text": "b8 = rio.open(“/content/Data/20191101/B08–20191101.tif”)b4 = rio.open(“/content/Data/20191101/B04–20191101.tif”)b3 = rio.open(“/content/Data/20191101/B03–20191101.tif”)b2 = rio.open(“/content/Data/20191101/B02–20191101.tif”)" }, { "code": null, "e": 2027, "s": 1892, "text": "Let us look at the width and height of the images. I am only using b4, but you can check if all bands have the same weight and length." }, { "code": null, "e": 2046, "s": 2027, "text": "b4.width,b4.height" }, { "code": null, "e": 2144, "s": 2046, "text": "We plot the data to see and explore the satellite images we have. Here I am only plotting Band 3." }, { "code": null, "e": 2213, "s": 2144, "text": "fig, ax = plt.subplots(1, figsize=(12, 10))show(b3, ax=ax)plt.show()" }, { "code": null, "e": 2322, "s": 2213, "text": "If you would like to see how to make RGB images with Rasterio, I have a tutorial you might want to see here." }, { "code": null, "e": 2345, "s": 2322, "text": "towardsdatascience.com" }, { "code": null, "e": 2408, "s": 2345, "text": "The Sentinel 2 image of the area( only Band 3) is shown below." }, { "code": null, "e": 2662, "s": 2408, "text": "Let us also read the buildings table which we will use to store the statistical summaries derived from the satellite image. Please know that you can use other polygons, like districts, rectangular grids instead of the building polygons for this example." }, { "code": null, "e": 2697, "s": 2662, "text": "We use Geopandas to read the data." 
}, { "code": null, "e": 2841, "s": 2697, "text": "buildings = gpd.read_file(“/content/Data/shapefiles/osm_buildings.shp”)buildings = buildings[[“osm_id”,”building”, “geometry”]]buildings.head()" }, { "code": null, "e": 2963, "s": 2841, "text": "And here are the first five rows of the table. We have the Geometry column of each building, osm_id and building columns." }, { "code": null, "e": 3027, "s": 2963, "text": "We can also plot the buildings on top of the Sentinel 2 images." }, { "code": null, "e": 3141, "s": 3027, "text": "fig, ax = plt.subplots(figsize=(12, 10))show(b4, ax=ax)buildings.plot(ax=ax, color=”white”, alpha=.50)plt.show();" }, { "code": null, "e": 3312, "s": 3141, "text": "The buildings are marked as white and overlayed in the image, as shown below. The picture shows the extent of the city (residential areas) with meandering Shabelle River." }, { "code": null, "e": 3373, "s": 3312, "text": "Let us now calculate the NDWI values from Sentinel 2 images." }, { "code": null, "e": 3424, "s": 3373, "text": "To calculate the NDWI values, we use this formula:" }, { "code": null, "e": 3456, "s": 3424, "text": "(Band3 — Band8)/(Band3 + Band8)" }, { "code": null, "e": 3509, "s": 3456, "text": "So, let us calculate using this formula in Rasterio." }, { "code": null, "e": 3600, "s": 3509, "text": "green = b3.read()nir = b8.read()ndwi = (nir.astype(float)-green.astype(float))/(nir+green)" }, { "code": null, "e": 3662, "s": 3600, "text": "The NDWI arrays can be plotted with Rasterio, as shown below." }, { "code": null, "e": 3752, "s": 3662, "text": "fig, ax = plt.subplots(1, figsize=(12, 10))show(ndwi, ax=ax, cmap=”coolwarm_r”)plt.show()" }, { "code": null, "e": 3883, "s": 3752, "text": "The NDWI plot clearly shows innundated areas near the Shabelle river (Blue). And the extent of flooding goes to residential areas." }, { "code": null, "e": 3958, "s": 3883, "text": "We can save the NDWI arrays as a raster image so that we can use it later." }, { "code": null, "e": 4167, "s": 3958, "text": "meta = b4.metameta.update(driver='GTiff')meta.update(dtype=rio.float32)with rio.open('NDWI.tif', 'w', **meta) as dst: dst.write(ndvi.astype(rio.float32))# Read the saved ndwi_raster = rio.open(“NDWI.tif”)" }, { "code": null, "e": 4392, "s": 4167, "text": "Now, that we have calculated the NDWI values, it is time to derive statistics from the NDWI raster image and merge to our buildings table. We use Rasterio mask functionality to get the cell values from the NDWI raster image." }, { "code": null, "e": 4478, "s": 4392, "text": "The following is a small function that masks the cell values to our data frame table." }, { "code": null, "e": 4660, "s": 4478, "text": "def derive_stats(geom, data=ndwi_raster, **mask_kw): masked, mask_transform = mask(dataset=data, shapes=geom,)crop=True, all_touched=True, filled=True) return masked" }, { "code": null, "e": 4927, "s": 4660, "text": "We can derive now the values we want like this. Let us say we are interested in getting the mean values of NDWI for each building. We create a column for that “mean_ndwi” and pass our function to apply the building’s geometry and also apply to mean from using Numpy." }, { "code": null, "e": 5006, "s": 4927, "text": "buildings[‘mean_ndwi’] = buildings.geometry.apply(derive_stats).apply(np.mean)" }, { "code": null, "e": 5057, "s": 5006, "text": "Or, get the maximum NDWI values for each building." 
}, { "code": null, "e": 5134, "s": 5057, "text": "buildings[‘max_ndwi’] = buildings.geometry.apply(derive_stats).apply(np.max)" }, { "code": null, "e": 5256, "s": 5134, "text": "Our table has two new columns now, mean_ndwi and max_ndwi, where we store the mean and max NDWI values for each building." }, { "code": null, "e": 5574, "s": 5256, "text": "The dataset also includes Sentinel images from previous data (before flooding) in this area. In the “Data” folder, you have “20191002” folder. Try to calculate the mean and max NDWI values using these images. Compare this with statistics derived from images we have calculated for the date 20191101 (flooding period)." }, { "code": null, "e": 5775, "s": 5574, "text": "In this tutorial, we have seen how to calculate NDWI values from Sentinel 2 images and derive summary statistics from them. The code and Google Colab Notebooks are available in this Github repository." }, { "code": null, "e": 5786, "s": 5775, "text": "github.com" } ]