Building an Artificial Neural Network in One Line of Code | by Kasper Müller | Towards Data Science
There are tons of great theoretical articles out there on the anatomy and mathematics of artificial neural networks, so I am going to take another approach to writing about and teaching this subject. In this article, we are going to get our hands dirty right away! Let's jump right in. You can choose a section below if you want to skip the installation section.

· Installation
 ∘ Option 1
 ∘ Option 2
· The data
· The model
· Why TensorFlow?

Installation

If you already have the requirements installed, then you can skip this section.

Option 1

The first thing we need to do is to make sure that we have the Python programming language installed, so if you don't, then go ahead and install it from the official website. Note that if the installation process asks whether you want to add Python to your PATH variable, make sure to agree to that.

The next thing to do is to open up a terminal or CMD, depending on your operating system. If you are on Windows, press the Windows key and type "cmd", then open "Command Prompt" at the top of the results. Now simply type:

pip install numpy
pip install matplotlib

and

pip install tensorflow

Note that you might have to type "pip3" instead of "pip" if you are on Mac or Linux. The next thing you need to do is to open up some kind of text editor or IDE, like Notepad, PyCharm, VS Code, Sublime, or some other editor.

Option 2

Install Google's Colaboratory in your Google Drive. You can get all the information needed to do so on the website: colab.research.google.com. One of the benefits of Google Colab is that you get free access to a lot of computing power. Specifically, you can train your models on GPUs in the cloud, which makes this a great research framework.

The data

When we train a supervised machine learning algorithm, we need labeled data. That is, we need a dataset where for each example x we have a label y that the model should predict when it is fed x. In this article, we will do the simplest thing possible. We won't be thinking about testing, so this is not about best practice! We will simply consider a linear relationship and try to fit the model by building a neural network of only one neuron in Google's machine learning library TensorFlow.

Let's open up our editor of choice and type in the imports, something along these lines:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

This code makes sure that all the relevant libraries are loaded. You would like the version of TensorFlow to be 2.x for some x ≥ 0, so try a simple

print(tf.__version__)

You should see something like 2.5.0.

Let's create the data. We will be creating our own data. Simply create a linear relationship yourself; any arrays following the pattern below will do, for example:

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5], dtype=float)

You might, as the intelligent human being that you are, notice the pattern already. If you guessed f(x) = 1/2 + x/2, you were right.

The idea is this: we are going to show the network values from the xs array and tell the model the correct answers in the form of the ys array, so that for each value of x the model will make a guess, and it will correct itself by looking at the true corresponding y value. Then the model will adjust itself through something called backpropagation (we will use stochastic gradient descent), and for each example the model will improve. We don't have much data, so this will of course limit the accuracy of the model, but the only goal of this article is to capture the idea of a neural network and to get into some simple TensorFlow syntax.

The model

We are going to create an artificial neural network of a single node in one line of code, namely the following (a sketch assuming the standard Keras API):

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])

Now that we have built the network, let's train it.
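Training consists of a compile step and a fit step. Here is a minimal sketch, again assuming the standard Keras API; the 'sgd' optimizer matches the stochastic gradient descent mentioned above, while the mean squared error loss and the epoch count of 500 are illustrative choices:

model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500)  # one epoch = one pass over all six examples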
The optimizer makes sure that the network improves its guesses, and the loss function measures how good or bad a single guess was. Now let's make a prediction.
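A sketch of the prediction call, feeding the model the number 7 (for which the true value would be f(7) = 4.0):

print(model.predict(np.array([[7.0]])))  # shape (1, 1): one sample, one feature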
[ { "code": null, "e": 436, "s": 171, "text": "There are tons of great theoretical articles out there on the anatomy and mathematics of artificial neural networks, so I am going to take another approach to writing about and teaching this subject. In this article, we are going to get our hands dirty right away!" }, { "code": null, "e": 534, "s": 436, "text": "Let’s jump right in. You can choose a section below if you want to skip the installation section." }, { "code": null, "e": 609, "s": 534, "text": "· Installation ∘ Option 1 ∘ Option 2· The data· The model· Why TensorFlow?" }, { "code": null, "e": 689, "s": 609, "text": "If you already have the requirements installed, then you can skip this section." }, { "code": null, "e": 841, "s": 689, "text": "The first thing we need to do is to make sure that we have the Python programming language installed so if you don’t then go ahead and install it here." }, { "code": null, "e": 1067, "s": 841, "text": "Note that if in the installation process you are asked if you want to add Python to your PATH variable, then make sure to agree to that. The next thing to do is to open up a terminal or CMD depending on your operating system." }, { "code": null, "e": 1227, "s": 1067, "text": "If you are on Windows, then press the windows button and type “cmd”. Then double click the black icon where it says “Command Prompt” at the top of the results." }, { "code": null, "e": 1244, "s": 1227, "text": "Now simply type:" }, { "code": null, "e": 1284, "s": 1244, "text": "pip install numpypip install matplotlib" }, { "code": null, "e": 1288, "s": 1284, "text": "and" }, { "code": null, "e": 1311, "s": 1288, "text": "pip install tensorflow" }, { "code": null, "e": 1396, "s": 1311, "text": "Note that you might have to type “pip3” instead of “pip” if you are on Mac or Linux." }, { "code": null, "e": 1540, "s": 1396, "text": "The next thing you need to do is to simply open up some kind of text editor or IDE like nodepad, Pycharm, VSCode, Sublime or some other editor." }, { "code": null, "e": 1656, "s": 1540, "text": "Install Google’s Colaboratory in your Google Drive. You can get all the information needed to do so on the website." }, { "code": null, "e": 1682, "s": 1656, "text": "colab.research.google.com" }, { "code": null, "e": 1881, "s": 1682, "text": "One of the benefits of Google Colab is that you get free access to a lot of computing power. Specifically, you can train your models on GPUs in the cloud which makes this a great research framework." }, { "code": null, "e": 2082, "s": 1881, "text": "When we train a supervised machine learning algorithm, we need labeled data. That is, we need a dataset where for each example x we need a label y that the model should predict when it is being fed x." }, { "code": null, "e": 2376, "s": 2082, "text": "In this article, we will do the simplest thing possible. We won’t be thinking of testing, so this is not about best practice! We will simply consider a linear relationship and try to fit the model by building a neural network of only one neuron in Google’s machine learning library TensorFlow." }, { "code": null, "e": 2423, "s": 2376, "text": "Let’s open up our editor of choice and type in" }, { "code": null, "e": 2571, "s": 2423, "text": "This code makes sure that all the relevant libraries are loaded. 
The output is a plot showing the ground truth in green and the model's predictions in red. Note that this is not linear regression, since it has nothing to do with a "best fit"; it is rather the result of a model that tries to correct itself after each attempt. The lack of data and the simplicity of the model are telling, but we still manage to capture the essence of a neural network in this simple task.

To drive this point home, try to run this code again. You won't get exactly the same predictions, because this is a new model, and a model like this starts from randomly initialized weights. For instance, when I ran it the second time and fed the model the number 7, I got a new prediction of about 4.08. And you will get something different from me!

You can play around with this code to familiarize yourself with the syntax and the effect of the (hyper)parameters. Try, for example, setting the epochs to 200 or 1000. Try other linear relationships.

Why TensorFlow?

There are several reasons why TensorFlow is a very good choice if you want to learn how to build machine learning models. TensorFlow is one of the most in-demand and popular deep learning frameworks available today. That it is a very mature library backed by Google and that it can be used with Python are two great reasons, but why don't we just take a look at some of the companies using it to get a feel for its importance in the industry?

Companies using TensorFlow include Airbnb, Airbus, Coca-Cola, Google, Intel, Lenovo, PayPal, Spotify, Twitter, and the list goes on...

So TensorFlow is here to stay, and if you want a job as a data scientist or ML engineer in the future, there are plenty of companies that hire TensorFlow developers, so learning this great framework is definitely worth your time.

We have reached the end of this small and very simple introduction to TensorFlow and neural networks. In the future, I will post increasingly advanced machine learning posts, but we need to start simple. If you have any questions, comments, or concerns, please reach out on LinkedIn: www.linkedin.com. See you next time.
Automating Snippets in Visual Studio Code with Python | by Lucas Soares | Towards Data Science
Every good development workflow involves some kind of management of code snippets: you are constantly storing and retrieving snippets of code to solve a wide range of tasks in your programming routine. In this article, I will show you how to automate the process of creating snippets for VS Code using Python, saving them directly from your clipboard with just one terminal command.

In VS Code, the format of a snippet file is as follows:

{
  "Title": {
    "prefix": "the prefix of the snippet",
    "body": [
      "the body of the snippet",
      "as a list of strings",
      "where each list element corresponds to a line of code"
    ],
    "description": "a description of the snippet"
  }
}

This structure is leveraged by the VS Code text editor so that when you call the prefix of the desired snippet, you get a perfectly formatted piece of code that matches your specific need of the moment. The issue here is that the process of creating such a snippet can be a bit cumbersome, so we end up googling almost everything, which can be a bit annoying sometimes.

So why not automate this process with Python, so that when you write some code that you want to save for later, all you have to do is copy it to the clipboard and run a command in the terminal? The steps to do this will be:

1. Specify a variable with the path to your snippets folder.
2. Write the code to get the contents of your clipboard.
3. Write the code to save the content in a snippets file (to be determined when calling the script) with the right format.
4. Save the final Python script in a folder.
5. Write an alias to call that script from anywhere in the terminal.

Now, let's go through each step one by one.

1. Specify a variable with the path to your snippets folder

global_snippets_folder = "/home/username/.config/Code/User/snippets/"

VS Code usually keeps the snippets folder at a path like this; change it to match your specific case.

2. Write the code to get the contents of your clipboard

import clipboard

clipboard_content = clipboard.paste()

Here, we are using the clipboard package to get the contents of your clipboard.

3. Write the code to save the content in a snippets file with the right format

This code saves the content of the clipboard to the specified folder, under a name given by the user when calling the script and in the format required to call the snippet later inside VS Code.
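Here is a minimal sketch of such a script; the file name save_snippet.py, the target file python.json, and the exact JSON layout are assumptions for illustration:

# save_snippet.py -- save the current clipboard as a VS Code snippet
import json
import sys

import clipboard

global_snippets_folder = "/home/username/.config/Code/User/snippets/"

def save_snippet(name):
    # read whatever is currently on the clipboard
    clipboard_content = clipboard.paste()
    snippet_file = global_snippets_folder + "python.json"  # assumed target file
    try:
        with open(snippet_file) as f:
            snippets = json.load(f)
    except FileNotFoundError:
        snippets = {}
    # each clipboard line becomes one element of "body",
    # matching the snippet format shown earlier
    snippets[name] = {
        "prefix": name,
        "body": clipboard_content.splitlines(),
        "description": name,
    }
    with open(snippet_file, "w") as f:
        json.dump(snippets, f, indent=4)

if __name__ == "__main__":
    save_snippet(sys.argv[1])  # snippet name passed on the command line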
4. Save the final Python script in a folder

You can save it in whatever folder you want on your computer.

5. Write an alias to call that script from anywhere in the terminal

To do this, all you have to do (assuming you are working on a Linux machine, or on Windows with WSL) is open your .bashrc file and write:

alias save_snippet_from_clipboard="python /path/to/python_script.py"

Then save the modified file and run source .bashrc to update your terminal. Now you can start saving snippets directly from your clipboard!

It is important to note that this is just one approach; there are many ways to save and retrieve snippets, and in the end you should experiment with as many tools as you can to see what works for you. The approach I described here has some drawbacks concerning whether the formatting of the saved snippet survives the trip through the clipboard, but in the end, I feel that the few seconds I save every time I need to store or retrieve a certain snippet make it worth it.

If you prefer, you can watch my YouTube video on this topic.

If you liked this post, join Medium, follow me, and subscribe to my newsletter. Also, subscribe to my YouTube channel and connect with me on TikTok, Twitter, LinkedIn, and Instagram! Thanks and see you next time! :)
[ { "code": null, "e": 374, "s": 166, "text": "Every good development workflow involves some kind of management of code snippets, where you are constantly storing and retrieving snippets of code to solve a wide range of tasks in your programming routine." }, { "code": null, "e": 554, "s": 374, "text": "In this article, I will show you how to automate the process of creating snippets for VScode using Python to save them directly from your clipboard with just one terminal command." }, { "code": null, "e": 605, "s": 554, "text": "In VScode the format of a snippet file is as such:" }, { "code": null, "e": 826, "s": 605, "text": "{\"Title\": {\"prefix\": \"the prefix of the snippet\",\"body\": [\"the body of the snippet\", \"as a list of strings\". \"where each list element corresponds to a line of code\",\"description\": \"a description of the snippet\"}}" }, { "code": null, "e": 1045, "s": 826, "text": "This structure is supposed to be leveraged by the VScode text editor so that when you call the prefix for the desired snippet, you get a perfectly formatted piece of code that matches your specific need for the moment." }, { "code": null, "e": 1211, "s": 1045, "text": "The issue here is that the process of creating such a snippet can be a bit cumbersome, so we end up googling almost everything which can be a bit annoying sometimes." }, { "code": null, "e": 1408, "s": 1211, "text": "So why not automate this process with Python, so that when you write some code that you want to save it for later, all you have to do is save it to the clipboard and run a command on the terminal?" }, { "code": null, "e": 1438, "s": 1408, "text": "The steps to do this will be:" }, { "code": null, "e": 1789, "s": 1438, "text": "Specify a variable with the path for your snippets folder.Write the code to get the contents of your clipboard.Write the code to save the content in a pre-specified snippets file (to be determined when calling the script) with the right format.Save the final Python script in a folder.Write an alias to call that script from anywhere in the terminal." }, { "code": null, "e": 1848, "s": 1789, "text": "Specify a variable with the path for your snippets folder." }, { "code": null, "e": 1902, "s": 1848, "text": "Write the code to get the contents of your clipboard." }, { "code": null, "e": 2036, "s": 1902, "text": "Write the code to save the content in a pre-specified snippets file (to be determined when calling the script) with the right format." }, { "code": null, "e": 2078, "s": 2036, "text": "Save the final Python script in a folder." }, { "code": null, "e": 2144, "s": 2078, "text": "Write an alias to call that script from anywhere in the terminal." }, { "code": null, "e": 2188, "s": 2144, "text": "Now, let’s go through each step one by one." }, { "code": null, "e": 2245, "s": 2188, "text": "Specify a variable with the path to your snippets folder" }, { "code": null, "e": 2302, "s": 2245, "text": "Specify a variable with the path to your snippets folder" }, { "code": null, "e": 2372, "s": 2302, "text": "global_snippets_folder = “/home/username/.config/Code/User/snippets/”" }, { "code": null, "e": 2467, "s": 2372, "text": "VScode usually has the snippets folder with a path like this, change it to your specific case." }, { "code": null, "e": 2523, "s": 2467, "text": "2. 
Write the code to get the contents of your clipboard" }, { "code": null, "e": 2594, "s": 2523, "text": "import clipboardclipboard.paste()clipboard_content = clipboard.paste()" }, { "code": null, "e": 2674, "s": 2594, "text": "Here, we are using the clipboard package to get the contents of your clipboard." }, { "code": null, "e": 2810, "s": 2674, "text": "3. Write the code to save the content in a pre-specified snippets file (to be determined when calling the script) with the right format" }, { "code": null, "e": 3013, "s": 2810, "text": "This is code saves the content of the clipboard to the specified folder with a name given by the user when calling the script and with the required format needed to call the snippet later inside VScode." }, { "code": null, "e": 3057, "s": 3013, "text": "4. Save the final Python script in a folder" }, { "code": null, "e": 3128, "s": 3057, "text": "Here you can save it in wherever folder you want inside your computer." }, { "code": null, "e": 3196, "s": 3128, "text": "5. Write an alias to call that script from anywhere in the terminal" }, { "code": null, "e": 3344, "s": 3196, "text": "To do this all you have to do (assuming you are working in a Linux machine or in Windows but with WSL: Linux), is open your .bashrc file and write:" }, { "code": null, "e": 3413, "s": 3344, "text": "alias save_snippet_from_clipboard=\"python /path/to/python_script.py\"" }, { "code": null, "e": 3490, "s": 3413, "text": "Then, save the modified file and run source .bashrc to update your terminal." }, { "code": null, "e": 3554, "s": 3490, "text": "Now you can start saving snippets directly from your clipboard!" }, { "code": null, "e": 3756, "s": 3554, "text": "It is important to note that, this is one approach, there are so many ways to save snippets and retrieve them, in the end, you should experiment with as many tools as you can to see what works for you." }, { "code": null, "e": 4058, "s": 3756, "text": "The approach I described here has some drawbacks related to the guarantee that the format of the saved snippet will be intact due to issues with saving code from the clipboard, but in the end, I feel like the few seconds I save every time I need to save or retrieve a certain snippet make it worth it." }, { "code": null, "e": 4123, "s": 4058, "text": "If you prefer you can watch my Youtube video on this topic here:" } ]
What is Data Cleaning? How to Process Data for Analytics and Machine Learning Modeling? | by Awan-Ur-Rahman | Towards Data Science
Data cleaning plays an important role in the field of data management as well as analytics and machine learning. In this article, I will try to give you an intuition for the importance of data cleaning and walk through different data cleaning processes.

Data cleaning means identifying the incorrect, incomplete, inaccurate, irrelevant, or missing parts of the data and then modifying, replacing, or deleting them as necessary. Data cleaning is considered a foundational element of basic data science.

Data is the most valuable thing for analytics and machine learning. In computing and in business, data is needed everywhere. When it comes to real-world data, it is quite likely to contain incomplete, inconsistent, or missing values. If the data is corrupted, it may hinder the process or produce inaccurate results. Let's look at some examples of the importance of data cleaning.

Suppose you are a general manager of a company. Your company collects data on the customers who buy its products. Now you want to know which products people are most interested in, so that you can increase their production accordingly. But if the data is corrupted or contains missing values, you will be misguided into making the wrong decision, and you will be in trouble.

At the end of the day, machine learning is data-driven AI. In machine learning, if the data is irrelevant or error-prone, it leads to an incorrect model. The cleaner you make your data, the better the model you can build. So we need to process, or clean, the data before using it. Without quality data, it would be foolish to expect any good outcome.

Now let's take a closer look at the different ways of cleaning data.

Removing irrelevant columns

If your DataFrame (a data frame is a two-dimensional data structure, i.e., data aligned in a tabular fashion in rows and columns) contains columns that are irrelevant or that you are never going to use, you can drop them to keep the focus on the columns you will work with. Let's see an example of how to deal with such a data set by creating a small students data set with a pandas DataFrame:

import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O

data = {'Name': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
        'Height': [5.2, 5.7, 5.6, 5.5, 5.3, 5.8, 5.6, 5.5],
        'Roll': [55, 99, 15, 80, 1, 12, 47, 104],
        'Department': ['CSE', 'EEE', 'BME', 'CSE', 'ME', 'ME', 'CE', 'CSE'],
        'Address': ['polashi', 'banani', 'farmgate', 'mirpur', 'dhanmondi', 'ishwardi', 'khulna', 'uttara']}
df = pd.DataFrame(data)
print(df)

If we want to remove the "Height" column, we can use pandas.DataFrame.drop to drop specified labels from rows or columns:

DataFrame.drop(self, labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')

Let us drop the Height column. For this, you need to pass the column name to the columns keyword:

df = df.drop(columns='Height')
print(df.head())

Handling missing values

It is rare to find a real-world dataset without any missing values. When you start to work with real-world data, you will find that most datasets contain missing values. Handling missing values is very important, because if you leave them as they are, they may affect your analysis and machine learning models. So you need to check whether your dataset contains missing values, and if it does, you must handle them. If you find missing values in the dataset, you can perform any of these three tasks on them:

1. Leave them as they are
2. Fill in the missing values
3. Drop them

For filling in the missing values we can use different methods. For example, the airquality dataset contains missing values:

airquality.head()  # return the top n (5 by default) rows of a data frame

NaN indicates that the dataset contains a missing value in that position. After finding missing values in your dataset, you can use pandas.DataFrame.fillna to fill them:

DataFrame.fillna(self, value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs)

You can use different statistical methods to fill the missing values according to your needs. For example, here we will use the mean to fill in the missing values:

airquality['Ozone'] = airquality['Ozone'].fillna(airquality.Ozone.mean())
airquality.head()

You can see that the missing values in the "Ozone" column are filled with the mean value of that column.

You can also drop the rows or columns where missing values are found, with the help of pandas.DataFrame.dropna. Here we drop the rows containing missing values:

airquality = airquality.dropna()  # drop the rows containing at least one missing value
airquality.head()

Now the rows that had missing values in the Solar.R column are dropped. You can check the number of missing values per column with:

airquality.isnull().sum(axis=0)

Detecting and removing outliers

If you are new to data science, the first question that will arise in your head is "what do these outliers mean?" Let's talk about the outliers first, and then we will talk about detecting them in the dataset and what to do after detecting them. According to Wikipedia, "In statistics, an outlier is a data point that differs significantly from other observations." That means an outlier indicates a data point that is significantly different from the other data points in the data set. Outliers can be created by errors in the experiments or by the variability of the measurements. Let's look at an example to clarify the concept.

Suppose a math column contains values in the range 90 to 95, except for a single 20, which is significantly different from the others. It could be an input error in the dataset, so we call it an outlier. One thing should be added here: not all outliers are bad data points. Some can be errors, but others are valid values.

So, now the question is how we can detect the outliers in the dataset. For detecting outliers we can use:

1. Box plot
2. Scatter plot
3. Z-score, etc.

We will see the scatter plot method here. Let's draw a scatter plot of a dataset:

dataset.plot(kind='scatter', x='initial_cost', y='total_est_fee', rot=70)
plt.show()

The plot shows one outlier standing far away from the rest of the points. After detecting it, we can remove it from the dataset:

df_removed_outliers = dataset[dataset.total_est_fee < 17500]
df_removed_outliers.plot(kind='scatter', x='initial_cost', y='total_est_fee', rot=70)
plt.show()
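For completeness, here is a quick sketch of the z-score method (option 3 in the list above); the sample column and the cutoff of 2 are chosen purely for illustration:

import numpy as np
import pandas as pd

df_scores = pd.DataFrame({'math': [91, 92, 93, 94, 95, 90, 92, 20]})
# standardize the column: how many standard deviations each value lies from the mean
z = (df_scores['math'] - df_scores['math'].mean()) / df_scores['math'].std()
outliers = df_scores[np.abs(z) > 2]  # flags the lone 20; 2 is an illustrative cutoff
print(outliers)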
Removing duplicates

Datasets may contain duplicate entries, and deleting duplicate rows is one of the easiest tasks of all. To delete the duplicate rows you can use dataset_name.drop_duplicates():

dataset = dataset.drop_duplicates()  # this will remove the duplicate rows
print(dataset)

Tidying the data

A tidy dataset is one where each column represents a separate variable and each row represents an individual observation. In untidy data, columns may represent values rather than variables. Tidy data is useful for fixing common data problems. You can turn untidy data into tidy data using pandas.melt:

import pandas as pd
pd.melt(frame=df, id_vars='name', value_vars=['treatment a', 'treatment b'])

You can also use pandas.DataFrame.pivot for un-melting the tidy data.
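As a quick illustration of that un-melting step (an addition for this article, assuming a df with a 'name' column and the two treatment columns used in the melt call above):

melted = pd.melt(frame=df, id_vars='name', value_vars=['treatment a', 'treatment b'])
# pivot reverses the melt: rows keyed by 'name', one column per former variable
wide = melted.pivot(index='name', columns='variable', values='value')
print(wide)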
Converting data types

In a DataFrame, data can be of many types, for example:

1. Categorical data
2. Object data
3. Numeric data
4. Boolean data

Some columns' data types may have been changed for some reason, or may be inconsistent. You can convert from one data type to another using pandas.DataFrame.astype:

DataFrame.astype(self, dtype, copy=True, errors='raise', **kwargs)
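For example, on the students DataFrame from earlier (an illustration added here, not from the original walkthrough), the integer Roll column can be converted to float:

df['Roll'] = df['Roll'].astype(float)
print(df.dtypes)  # Roll is now float64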
[ { "code": null, "e": 414, "s": 172, "text": "Data Cleaning plays an important role in the field of Data Managements as well as Analytics and Machine Learning. In this article, I will try to give the intuitions about the importance of data cleaning and different data cleaning processes." }, { "code": null, "e": 692, "s": 414, "text": "Data Cleaning means the process of identifying the incorrect, incomplete, inaccurate, irrelevant or missing part of the data and then modifying, replacing or deleting them according to the necessity. Data cleaning is considered a foundational element of the basic data science." }, { "code": null, "e": 1084, "s": 692, "text": "Data is the most valuable thing for Analytics and Machine learning. In computing or Business data is needed everywhere. When it comes to the real world data, it is not improbable that data may contain incomplete, inconsistent or missing values. If the data is corrupted then it may hinder the process or provide inaccurate results. Let’s see some examples of the importance of data cleaning." }, { "code": null, "e": 1506, "s": 1084, "text": "Suppose you are a general manager of a company. Your company collects data of different customers who buy products produced by your company. Now you want to know on which products people are interested most and according to that you want to increase the production of that product. But if the data is corrupted or contains missing values then you will be misguided to make the correct decision and you will be in trouble." }, { "code": null, "e": 1671, "s": 1506, "text": "At the end of all, Machine Learning is a data-driven AI. In machine learning, if the data is irrelevant or error-prone then it leads to an incorrect model building." }, { "code": null, "e": 1884, "s": 1671, "text": "As much as you make your data clean, as much as you can make a better model. So, we need to process or clean the data before using it. Without the quality data,it would be foolish to expect anything good outcome." }, { "code": null, "e": 1953, "s": 1884, "text": "Now let’s take a closer look in the different ways of cleaning data." }, { "code": null, "e": 2358, "s": 1953, "text": "If your DataFrame (A Data frame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns) contains columns that are irrelevant or you are never going to use them then you can drop them to give more focus on the columns you will work on. Let’s see an example of how to deal with such data set. Let’s create an example of students data set using pandas DataFrame." }, { "code": null, "e": 2775, "s": 2358, "text": "import numpy as np # linear algebraimport pandas as pd # data processing, CSV file I/O data={'Name':['A','B','C','D','E','F','G','H'] ,'Height':[5.2,5.7,5.6,5.5,5.3,5.8,5.6,5.5], 'Roll':[55,99,15,80,1,12,47,104], 'Department':['CSE','EEE','BME','CSE','ME','ME','CE','CSE'], 'Address':['polashi','banani','farmgate','mirpur','dhanmondi','ishwardi','khulna','uttara']}df=pd.DataFrame(data)print(df)" }, { "code": null, "e": 2909, "s": 2775, "text": "Here if we want to remove the “Height” column, we can use python pandas.DataFrame.drop to drop specified labels from rows or columns." }, { "code": null, "e": 3020, "s": 2909, "text": "DataFrame.drop(self, labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')" }, { "code": null, "e": 3116, "s": 3020, "text": "Let us drop the height column. For this you need to push the column name in the column keyword." 
}, { "code": null, "e": 3161, "s": 3116, "text": "df=df.drop(columns='Height')print(df.head())" }, { "code": null, "e": 3929, "s": 3161, "text": "It is rare to have a real world dataset without having any missing values. When you start to work with real world data, you will find that most of the dataset contains missing values. Handling missing values is very important because if you leave the missing values as it is, it may affect your analysis and machine learning models. So, you need to be sure that whether your dataset contains missing values or not. If you find missing values in your dataset you must handle it. If you find any missing values in the dataset you can perform any of these three task on it: 1. Leave as it is 2. Filling the missing values 3. Drop themFor filling the missing values we can perform different methods. For example, Figure 4 shows that airquality dataset has missing values." }, { "code": null, "e": 3999, "s": 3929, "text": "airquality.head() # return top n (5 by default) rows of a data frame" }, { "code": null, "e": 4195, "s": 3999, "text": "In figure 4, NaN indicates that the dataset contains missing values in that position. After finding missing values in your dataset, You can use pandas.DataFrame.fillna to fill the missing values." }, { "code": null, "e": 4306, "s": 4195, "text": "DataFrame.fillna(self, value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs)" }, { "code": null, "e": 4499, "s": 4306, "text": "You can use different statistical methods to fill the missing values according to your needs. For example, here in figure 5, we will use the statistical mean method to fill the missing values." }, { "code": null, "e": 4590, "s": 4499, "text": "airquality['Ozone'] = airquality['Ozone'].fillna(airquality.Ozone.mean())airquality.head()" }, { "code": null, "e": 4690, "s": 4590, "text": "You can see that the missing values in “Ozone” column is filled with the mean value of that column." }, { "code": null, "e": 4879, "s": 4690, "text": "You can also drop the rows or columns where missing values are found. we drop the rows containing missing values. Here You can drop missing values with the help of pandas.DataFrame.dropna." }, { "code": null, "e": 4982, "s": 4879, "text": "airquality = airquality.dropna() #drop the rows containing at least one missing valueairquality.head()" }, { "code": null, "e": 5073, "s": 4982, "text": "Here, in figure 6, you can see that rows have missing values in column Solar.R is dropped." }, { "code": null, "e": 5105, "s": 5073, "text": "airquality.isnull().sum(axis=0)" }, { "code": null, "e": 5768, "s": 5105, "text": "If you are new data Science then the first question that will arise in your head is “what does these outliers mean” ? Let’s talk about the outliers first and then we will talk about the detection of these outliers in the dataset and what will we do after detecting the outliers.According to wikipedia, “In statistics, an outlier is a data point that differs significantly from other observations.”That means an outlier indicates a data point that is significantly different from the other data points in the data set. Outliers can be created due to the errors in the experiments or the variability in the measurements. Let’s look an example to clear the concept." }, { "code": null, "e": 6095, "s": 5768, "text": "In Figure 4 all the values in math column are in range between 90–95 except 20 which is significantly different from others. It can be an input error in the dataset. 
So we can call it a outliers. One thing should be added here — “ Not all the outliers are bad data points. Some can be errors but others are the valid values. ”" }, { "code": null, "e": 6327, "s": 6095, "text": "So, now the question is how can we detect the outliers in the dataset.For detecting the outliers we can use :1. Box Plot2. Scatter plot3. Z-score etc.We will see the Scatter Plot method here. Let’s draw a scatter plot of a dataset." }, { "code": null, "e": 6416, "s": 6327, "text": "dataset.plot(kind='scatter' , x='initial_cost' , y='total_est_fee' , rot = 70)plt.show()" }, { "code": null, "e": 6529, "s": 6416, "text": "Here in Figure 9 there is a outlier with red outline. After detecting this, we can remove this from the dataset." }, { "code": null, "e": 6687, "s": 6529, "text": "df_removed_outliers = dataset[dataset.total_est_fee<17500]df_removed_outliers.plot(kind='scatter', x='initial_cost' , y='total_est_fee' , rot = 70)plt.show()" }, { "code": null, "e": 6925, "s": 6687, "text": "Datasets may contain duplicate entries. It is one of the most easiest task to delete duplicate rows. To delete the duplicate rows you can use — dataset_name.drop_duplicates(). Figure 12 shows a sample of a dataset having duplicate rows." }, { "code": null, "e": 7010, "s": 6925, "text": "dataset=dataset.drop_duplicates()#this will remove the duplicate rows.print(dataset)" }, { "code": null, "e": 7303, "s": 7010, "text": "Tidy dataset means each columns represent separate variables and each rows represent individual observations. But in untidy data each columns represent values but not the variables. Tidy data is useful to fix common data problem.You can turn the untidy data to tidy data by using pandas.melt." }, { "code": null, "e": 7396, "s": 7303, "text": "import pandas as pdpd.melt(frame=df,id_vars='name',value_vars=['treatment a','treatment b'])" }, { "code": null, "e": 7466, "s": 7396, "text": "You can also see pandas.DataFrame.pivot for un-melting the tidy data." }, { "code": null, "e": 7583, "s": 7466, "text": "In DataFrame data can be of many types. As example :1. Categorical data 2. Object data3. Numeric data4. Boolean data" }, { "code": null, "e": 7752, "s": 7583, "text": "Some columns data type can be changed due to some reason or have inconsistent data type. You can convert from one data type to another by using pandas.DataFrame.astype." }, { "code": null, "e": 7819, "s": 7752, "text": "DataFrame.astype(self, dtype, copy=True, errors='raise', **kwargs)" }, { "code": null, "e": 8585, "s": 7819, "text": "One of the most important and interesting part of data cleaning is string manipulation. In the real world most of the data are unstructured data. String manipulation means the process of changing, parsing, matching or analyzing strings. For string manipulation, you should have some knowledge about regular expressions. Sometimes you need to extract some value from a large sentence. Here string manipulation gives us a strong benefit. Let say,“This umbrella costs $12 and he took this money from his mother.”If you want to exact the “$12” information from the sentence then you have to build a regular expression for matching that pattern.After that you can use the python libraries.There are many built in and external libraries in python for string manipulation." 
}, { "code": null, "e": 8679, "s": 8585, "text": "import repattern = re.compile('|\\$|d*')result = pattern.match(\"$12312312\")print(bool(result))" }, { "code": null, "e": 8724, "s": 8679, "text": "This will give you an output showing “True”." }, { "code": null, "e": 9018, "s": 8724, "text": "In this modern era of data science the volume of data is increasing day by day. Due to the large number of volume of data data may stored in separated files. If you work with multiple files then you can concatenate them for simplicity. You can use the following python library for concatenate." }, { "code": null, "e": 9178, "s": 9018, "text": "pandas.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, sort=None, copy=True)" }, { "code": null, "e": 9363, "s": 9178, "text": "Let’s see an example how to concatenate two dataset. Figure 14 shows an example of two different datasets loaded from two different files. We will concatenate them using pandas.concat." }, { "code": null, "e": 9436, "s": 9363, "text": "concatenated_data=pd.concat([dataset1,dataset2])print(concatenated_data)" }, { "code": null, "e": 9664, "s": 9436, "text": "Data Cleaning is very import for making your analytics and machine learning models error-free. A small error in the dataset can cause you a lot of problem. All your efforts can be wasted. So, always try to make your data clean." } ]
Difference between new and malloc()
In this post, we will understand the difference between 'new' and 'malloc'.

The 'new' operator:

- It is present in C++, Java, and C#.
- It is an operator that calls the constructor of the object.
- It can be overloaded.
- If it fails, an exception is thrown (std::bad_alloc in C++).
- It doesn't need the 'sizeof' operator.
- It doesn't reallocate memory.
- It can initialize an object while allocating memory for it.
- The memory allocated by the 'new' operator can be deallocated using the 'delete' operator.
- It reduces the execution time of the application.

#include <iostream>
using namespace std;

int main() {
   int *val = new int(10);  // allocate and initialize in one step
   cout << *val;
   delete val;              // release with the matching 'delete' operator
   return 0;
}

The 'malloc()' function:

- It is present in the C language.
- It is a function, and it can't be overloaded.
- When 'malloc' fails, it returns NULL.
- It requires the 'sizeof' operator to know how much memory has to be allotted.
- It can't call a constructor.
- Memory can't be initialized using this function.
- The memory allocated using malloc can be deallocated using the free() function.
- Memory allocated by malloc can be reallocated using the realloc() function.
- It requires more time to execute applications.

Following is an example of malloc in the C language:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
   char *str;
   /* initial memory allocation -- note the explicit size */
   str = (char *) malloc(5);
   strcpy(str, "amit");
   printf("String = %s, Address = %p\n", str, (void *)str);
   free(str);  /* release with the matching free() */
   return 0;
}
[ { "code": null, "e": 1138, "s": 1062, "text": "In this post, we will understand the difference between ‘new’ and ‘malloc’." }, { "code": null, "e": 1174, "s": 1138, "text": "It is present in C++, Java, and C#." }, { "code": null, "e": 1210, "s": 1174, "text": "It is present in C++, Java, and C#." }, { "code": null, "e": 1284, "s": 1210, "text": "It is an operator that can be used to call the constructor of the object." }, { "code": null, "e": 1358, "s": 1284, "text": "It is an operator that can be used to call the constructor of the object." }, { "code": null, "e": 1380, "s": 1358, "text": "It can be overloaded." }, { "code": null, "e": 1402, "s": 1380, "text": "It can be overloaded." }, { "code": null, "e": 1439, "s": 1402, "text": "If it fails, an exception is thrown." }, { "code": null, "e": 1476, "s": 1439, "text": "If it fails, an exception is thrown." }, { "code": null, "e": 1515, "s": 1476, "text": "It doesn’t need the ‘sizeof’ operator." }, { "code": null, "e": 1554, "s": 1515, "text": "It doesn’t need the ‘sizeof’ operator." }, { "code": null, "e": 1584, "s": 1554, "text": "It doesn’t reallocate memory." }, { "code": null, "e": 1614, "s": 1584, "text": "It doesn’t reallocate memory." }, { "code": null, "e": 1673, "s": 1614, "text": "It can initialize an object when allocating memory for it." }, { "code": null, "e": 1732, "s": 1673, "text": "It can initialize an object when allocating memory for it." }, { "code": null, "e": 1816, "s": 1732, "text": "The memory allocated by ‘new’ operator can be de-allocated using ‘delete’ operator." }, { "code": null, "e": 1900, "s": 1816, "text": "The memory allocated by ‘new’ operator can be de-allocated using ‘delete’ operator." }, { "code": null, "e": 1950, "s": 1900, "text": "It reduces the execution time of the application." }, { "code": null, "e": 2000, "s": 1950, "text": "It reduces the execution time of the application." }, { "code": null, "e": 2125, "s": 2000, "text": "#include<iostream>\nusing namespace std;\nint main(){\n int *val = new int(10);\n cout << *val;\n getchar();\n return 0;\n}" }, { "code": null, "e": 2156, "s": 2125, "text": "This is present in C language." }, { "code": null, "e": 2187, "s": 2156, "text": "This is present in C language." }, { "code": null, "e": 2230, "s": 2187, "text": "It is a function that can’t be overloaded." }, { "code": null, "e": 2273, "s": 2230, "text": "It is a function that can’t be overloaded." }, { "code": null, "e": 2311, "s": 2273, "text": "When ‘malloc’ fails, it returns NULL." }, { "code": null, "e": 2349, "s": 2311, "text": "When ‘malloc’ fails, it returns NULL." }, { "code": null, "e": 2427, "s": 2349, "text": "It requires the ‘sizeof’ operator to know how much memory has to be allotted." }, { "code": null, "e": 2505, "s": 2427, "text": "It requires the ‘sizeof’ operator to know how much memory has to be allotted." }, { "code": null, "e": 2534, "s": 2505, "text": "It can’t call a constructor." }, { "code": null, "e": 2563, "s": 2534, "text": "It can’t call a constructor." }, { "code": null, "e": 2612, "s": 2563, "text": "Memory can’t be initialized using this function." }, { "code": null, "e": 2661, "s": 2612, "text": "Memory can’t be initialized using this function." }, { "code": null, "e": 2735, "s": 2661, "text": "The memory allocated using malloc can be deallocated using free function." }, { "code": null, "e": 2809, "s": 2735, "text": "The memory allocated using malloc can be deallocated using free function." 
}, { "code": null, "e": 2884, "s": 2809, "text": "Memory allocated by malloc method can be reallocated using realloc method." }, { "code": null, "e": 2959, "s": 2884, "text": "Memory allocated by malloc method can be reallocated using realloc method." }, { "code": null, "e": 3006, "s": 2959, "text": "It requires more time to execute applications." }, { "code": null, "e": 3053, "s": 3006, "text": "It requires more time to execute applications." }, { "code": null, "e": 3104, "s": 3053, "text": "Following is the example of malloc in C language −" }, { "code": null, "e": 3333, "s": 3104, "text": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\nint main () {\n char *str;\n /* Initial memory allocation */\n str = (char *) malloc(5);\n strcpy(str, \"amit\");\n printf(\"String = %s, Address = %u\\n\", str, str);\n}" } ]
Internet Technologies - Quick Reference Guide
The Internet is a worldwide system of interconnected computer networks. It uses the standard Internet Protocol suite (TCP/IP), and every computer on the Internet is identified by a unique IP address. An IP address is a unique set of numbers (such as 110.22.33.114) which identifies a computer's location. A special computer, a DNS (Domain Name Server), is used to map names to IP addresses so that a user can locate a computer by name. For example, a DNS server will resolve the name http://www.tutorialspoint.com to the particular IP address of the computer on which this website is hosted. The Internet is accessible to every user all over the world.

The concept of the Internet originated in 1969 and has undergone several technological and infrastructural changes, as discussed below:

- The Internet grew out of the Advanced Research Projects Agency Network (ARPANET), developed by the United States Department of Defense.
- The basic purpose of ARPANET was to provide communication among the various bodies of government.
- Initially, there were only four nodes, formally called hosts.
- In 1972, the ARPANET spread over the globe, with 23 nodes located in different countries, and thus became known as the Internet.
- Over time, with the invention of new technologies such as the TCP/IP protocols, DNS, the WWW, browsers, and scripting languages, the Internet provided a medium to publish and access information over the web.

Today, the Internet covers almost every aspect of life one can think of.

Extranet

An extranet is a network within an organization that uses the Internet to connect to outsiders in a controlled manner. It helps connect businesses with their customers and suppliers, and therefore allows working in a collaborative manner. The extranet has proved to be a successful model for all kinds of businesses, whether small or big, and it offers advantages to employees, suppliers, business partners, and customers alike.

Apart from its advantages, there are also some issues associated with an extranet, discussed below. The first is where the extranet pages will be hosted, i.e., who will host them. In this context there are two choices:

- Host it on your own server.
- Host it with an Internet Service Provider (ISP), in the same way as web pages.

Hosting extranet pages on your own server requires a high-bandwidth Internet connection, which is very costly, and additional firewall security is required, which results in a complex security mechanism and an increased workload. Furthermore, information on an extranet cannot be accessed without an Internet connection, whereas information on an intranet can. An extranet also decreases face-to-face interaction in the business, which can result in a lack of communication among customers, business partners, and suppliers.

The key difference between an extranet and an intranet, in brief: an intranet is an organization's internal network, accessible only to its own members, while an extranet extends controlled access over the Internet to authorized outsiders such as suppliers, partners, and customers.
The OSI Model

OSI is an acronym for Open Systems Interconnection. This model was developed by the International Organization for Standardization (ISO) and is therefore also referred to as the ISO-OSI model. The OSI model consists of seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. Each layer has a specific function, and each layer provides services to the layer above it.

The Physical layer is responsible for the following activities:

- Activating, maintaining, and deactivating the physical connection.
- Defining the voltages and data rates needed for transmission.
- Converting digital bits into electrical signals.
- Deciding whether the connection is simplex, half duplex, or full duplex.

The Data Link layer performs the following functions:

- Performs synchronization and error control for the information to be transmitted over the physical link.
- Enables error detection and adds error detection bits to the data to be transmitted.

The Network layer performs the following functions:

- Routes the signals through various channels to the other end.
- Acts as the network controller by deciding which route the data should take.
- Divides outgoing messages into packets and assembles incoming packets into messages for the higher levels.

The Transport layer performs the following functions:

- Decides whether data transmission should take place on parallel paths or a single path.
- Performs multiplexing and splitting of the data.
- Breaks data groups into smaller units so that they are handled more efficiently by the network layer.

The Session layer performs the following functions:

- Manages the messages and synchronizes conversations between two different applications.
- Controls logging on and off, user identification, billing, and session management.

The Presentation layer makes sure that the information is delivered in such a form that the receiving system can understand and use it.

The Application layer performs the following functions:

- It provides different services, such as manipulating information in several ways, retransferring files of information, distributing results, and so on.
- Functions such as LOGIN or password checking are also performed by the application layer.
The Presentation layer performs the following function:

- It makes sure that the information is delivered in such a form that the receiving system will understand and use it.

The Application layer performs the following functions:

- It provides different services such as manipulation of information in several ways, retransferring files of information, distributing the results, etc.
- Functions such as LOGIN or password checking are also performed by the application layer.

The TCP/IP model is a practical model and is used in the Internet. TCP/IP is an acronym for Transmission Control Protocol and Internet Protocol. The TCP/IP model combines two layers (the Physical and Data Link layers) into one layer, the Host-to-Network layer. The following diagram shows the various layers of the TCP/IP model.

The Application layer is the same as that of the OSI model and performs the following functions:

- It provides different services such as manipulation of information in several ways, retransferring files of information, distributing the results, etc.
- Functions such as LOGIN or password checking are also performed by the application layer.

The Transport layer does the same job as the transport layer in the OSI model. Here are the key points regarding the transport layer:

- It uses the TCP and UDP protocols for end-to-end transmission.
- TCP is a reliable and connection-oriented protocol.
- TCP also handles flow control.
- UDP is an unreliable, connectionless protocol that does not perform flow control.

The function of the Internet layer is to allow the host to insert packets into the network and then make them travel independently to the destination. However, the order in which packets are received can differ from the sequence in which they were sent.

The Host-to-Network layer is the lowest layer in the TCP/IP model. The host has to connect to the network using some protocol so that it can send IP packets over it. This protocol varies from host to host and network to network.

The Domain Name System comprises Domain Names, the Domain Name Space, and Name Servers, which are described below.

A Domain Name is a symbolic string associated with an IP address. There are several domain names available; some of them are generic, such as com, edu, gov, net etc., while some are country-level domain names, such as au, in, za, us etc.

The following table shows the Generic Top-Level Domain names:

The following table shows the Country top-level domain names:

The domain name space refers to a hierarchy in the internet naming structure. This hierarchy has multiple levels (from 0 to 127), with a root at the top.
The following diagram shows the domain name space hierarchy. In this diagram, each subtree represents a domain. Each domain can be partitioned into subdomains, and these can be further partitioned, and so on.

A Name Server contains the DNS database. This database comprises various names and their corresponding IP addresses. Since it is not possible for a single server to maintain the entire DNS database, the information is distributed among many DNS servers:

- The hierarchy of servers is the same as the hierarchy of names.
- The entire name space is divided into zones.

A zone is a collection of nodes (subdomains) under the main domain. The server maintains a database called a zone file for every zone. The information about the nodes in a subdomain is stored in servers at the lower levels; however, the original server keeps references to these lower-level servers.

Following are the three categories of Name Servers that manage the entire Domain Name System:

- Root Server
- Primary Server
- Secondary Server

A Root Server is the top-level server, which covers the entire DNS tree. It does not contain information about domains but delegates authority to other servers. A Primary Server stores a file about its zone. It has the authority to create, maintain, and update the zone file. A Secondary Server transfers complete information about a zone from another server, which may be a primary or secondary server. The secondary server does not have the authority to create or update a zone file.

DNS translates the domain name into an IP address automatically. The following steps take you through the domain resolution process:

- When we type www.tutorialspoint.com into the browser, it asks the local DNS server for its IP address.
- When the local DNS does not find the IP address of the requested domain name, it forwards the request to the root DNS server and again enquires about its IP address.
- The root DNS server replies with a delegation: it does not know the IP address of www.tutorialspoint.com, but it knows the IP address of the com DNS server.
- The local DNS server then asks the com DNS server the same question.
- The com DNS server replies in the same way: it does not know the IP address of www.tutorialspoint.com, but it knows the address of the tutorialspoint.com DNS server.
- The local DNS then asks the tutorialspoint.com DNS server the same question.
- The tutorialspoint.com DNS server replies with the IP address of www.tutorialspoint.com.
- Now, the local DNS sends the IP address of www.tutorialspoint.com to the computer that sent the request.
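From a program's point of view, this whole resolution process collapses into a single lookup call. As a minimal sketch, the following Python snippet (standard library only, using the domain name from the example above) asks the resolver, and through it the local DNS server, for the address behind a name:

import socket

# Ask the resolver (and, through it, the DNS hierarchy described above)
# for the IP address associated with a domain name.
ip_address = socket.gethostbyname("www.tutorialspoint.com")
print(ip_address)  # prints the resolved IP address as a string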
There are various communication services available that offer exchange of information with individuals or groups. The following table gives a brief introduction to these services:

There also exist several information retrieval services offering easy access to information present on the internet. The following table gives a brief introduction to these services:

Web services allow exchange of information between applications on the web. Using web services, applications can easily interact with each other.

The WWW is also known as W3. It offers a way to access documents spread over several servers on the internet. These documents may contain text, graphics, audio, video and hyperlinks. The hyperlinks allow users to navigate between the documents.

Video conferencing or video teleconferencing is a method of communicating by two-way video and audio transmission with the help of telecommunication technologies. Point-to-point conferencing connects two locations only, while multipoint conferencing connects more than two locations through a Multi-point Control Unit (MCU).

Transmission Control Protocol (TCP) corresponds to the Transport Layer of the OSI Model. TCP is a reliable and connection-oriented protocol. TCP offers:

- Stream data transfer.
- Reliability.
- Efficient flow control.
- Full-duplex operation.
- Multiplexing.

TCP offers connection-oriented end-to-end packet delivery. It ensures reliability by sequencing bytes with a forwarding acknowledgement number that indicates to the destination the next byte the source expects to receive. It retransmits the bytes not acknowledged within a specified time period.

Internet Protocol (IP) is a connectionless and unreliable protocol. It offers no guarantee of successful transmission of data. In order to make it reliable, it must be paired with a reliable protocol such as TCP at the transport layer. The Internet Protocol transmits the data in the form of a datagram, as shown in the following diagram:

Like IP, UDP is a connectionless and unreliable protocol. It doesn't require making a connection with the host to exchange data. Since UDP is an unreliable protocol, there is no mechanism for ensuring that the data sent is received. UDP transmits the data in the form of a datagram. The UDP datagram consists of five parts, as shown in the following diagram:
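The contrast between the two transport protocols is easy to see at the socket level. The following minimal Python sketch (the host names and the UDP port are placeholders chosen for illustration) shows that TCP requires a connection before any data flows, while UDP simply fires independent datagrams:

import socket

# TCP: a connection is established first; the byte stream is then
# delivered reliably and in order.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("example.com", 80))   # connection set up before any data
tcp_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp_sock.recv(1024))              # acknowledged, ordered reply
tcp_sock.close()

# UDP: no connection and no delivery guarantee; each datagram
# travels independently, exactly as described above.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"ping", ("127.0.0.1", 9999))   # fire and forget
udp_sock.close()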
FTP is used to copy files from one host to another. FTP offers the mechanism for this in the following manner:

- FTP creates two processes, a Control Process and a Data Transfer Process, at both ends, i.e. at the client as well as at the server.
- FTP establishes two different connections: one for data transfer and the other for control information.
- The control connection is made between the control processes, while the data connection is made between the data transfer processes.
- FTP uses port 21 for the control connection and port 20 for the data connection.

Trivial File Transfer Protocol (TFTP) is also used to transfer files, but it transfers them without authentication. Unlike FTP, TFTP does not separate control and data information. Since no authentication exists, TFTP lacks security features; therefore its use is not recommended.

Key points:

- TFTP makes use of UDP for data transport. Each TFTP message is carried in a separate UDP datagram.
- The first two bytes of a TFTP message specify the type of message.
- A TFTP session is initiated when a TFTP client sends a request to upload or download a file.
- The request is sent from an ephemeral UDP port to UDP port 69 of a TFTP server.

Telnet is a protocol used to log in to a remote computer on the internet. There are a number of Telnet clients with user-friendly interfaces. The following diagram shows a person logged in to computer A, and from there, remotely logged in to computer B.

HTTP is a communication protocol. It defines the mechanism for communication between the browser and the web server. It is also called a request-and-response protocol because the communication between browser and server takes place in request and response pairs. An HTTP request comprises lines which contain:

- Request line
- Header fields
- Message body

Key points:

- The first line, the Request line, specifies the request method, i.e. Get or Post.
- The second line specifies the header, which indicates the domain name of the server from where, for example, index.htm is retrieved.

Like the HTTP request, the HTTP response also has a certain structure. An HTTP response contains:

- Status line
- Headers
- Message body
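As a rough sketch of this exchange, the following Python snippet (standard library http.client; the page name index.htm is taken from the example above) issues a GET request and prints the pieces of the response just listed:

import http.client

# Open a connection and send a request consisting of a request line
# (GET /index.htm) and header fields (Host is added automatically).
conn = http.client.HTTPConnection("www.tutorialspoint.com")
conn.request("GET", "/index.htm")

# The response mirrors the structure above: status line, headers, body.
response = conn.getresponse()
print(response.status, response.reason)   # status line, e.g. 200 OK
print(response.getheaders())              # headers
body = response.read()                    # message body
conn.close()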
Email is a service which allows us to send messages in electronic form over the internet. It offers an efficient, inexpensive and real-time means of distributing information among people.

SMTP stands for Simple Mail Transfer Protocol. It was first proposed in 1982. It is a standard protocol used for sending e-mail efficiently and reliably over the internet.

Key points:

- SMTP is an application-level protocol.
- SMTP is a connection-oriented protocol.
- SMTP is a text-based protocol.
- It handles the exchange of messages between e-mail servers over a TCP/IP network.
- Apart from transferring e-mail, SMTP also provides notification regarding incoming mail.
- When you send e-mail, your e-mail client sends it to your e-mail server, which further contacts the recipient mail server using an SMTP client.
- The SMTP commands specify the sender's and receiver's e-mail addresses, along with the message to be sent.
- The exchange of commands between servers is carried out without the intervention of any user.
- In case a message cannot be delivered, an error report is sent to the sender, which makes SMTP a reliable protocol.

IMAP stands for Internet Message Access Protocol. It was first proposed in 1986. There exist five versions of IMAP, as follows:

- Original IMAP
- IMAP2
- IMAP3
- IMAP2bis
- IMAP4

Key points:

- IMAP allows the client program to manipulate e-mail messages on the server without downloading them to the local computer.
- The e-mail is held and maintained by the remote server.
- It enables us to take actions such as downloading or deleting mail without reading it, and to create, manipulate and delete remote message folders called mailboxes.
- IMAP enables users to search e-mails.
- It allows concurrent access to multiple mailboxes on multiple mail servers.

POP stands for Post Office Protocol. It is generally used to support a single client. There are several versions of POP, but POP3 is the current standard.

Key points:

- POP is an application-layer internet standard protocol.
- Since POP supports offline access to messages, it requires less internet usage time.
- POP does not provide a search facility.
- In order to access the messages, it is necessary to download them.
- It allows only one mailbox to be created on the server.
- It is not suitable for accessing non-mail data.
- POP commands are generally abbreviated into codes of three or four letters, e.g. STAT.

Email working follows the client-server approach. The client is the mailer, i.e. the mail application or mail program, and the server is a device that manages emails. The following example takes you through the basic steps involved in sending and receiving emails and will give you a better understanding of the working of the email system.

Suppose person A wants to send an email message to person B:
- Person A composes the message using a mailer program, i.e. a mail client, and then selects the Send option.
- The message is routed via Simple Mail Transfer Protocol to person B's mail server.
- The mail server stores the email message on disk in an area designated for person B.
- Now, suppose person B is running a POP client and knows how to communicate with B's mail server.
- It will periodically poll the POP server to check if any new email has arrived for B. As in this case person A has sent an email for person B, the email is forwarded over the network to B's PC. The message is now stored on person B's PC.

The following diagram gives a pictorial representation of the steps discussed above.

There are various email service providers available, such as Gmail, Hotmail, Ymail, Rediffmail etc. Here we will learn how to create an account using Gmail:

- Open gmail.com and click Create an account.
- Now a form will appear. Fill in your details here and click Next Step.
- This step allows you to add your picture. If you don't want to upload one now, you can do it later. Click Next Step.
- Now a welcome window appears. Click Continue to Gmail.
- Wow!! You are done with creating your email account with Gmail. It's that easy. Isn't it?
- Now you will see your Gmail account as shown in the following image.

Key points:

- Gmail manages the mail in three categories, namely Primary, Social and Promotions.
- The Compose option is given at the right to compose an email message.
- The Inbox, Starred, Sent Mail and Drafts options are available on the left pane, which allows you to keep track of your emails.

Before sending an email, we need to compose a message. When we are composing an email message, we specify the following things:

- Receiver's address in the To field
- Cc (if required)
- Bcc (if required)
- Subject of the email message
- Text
- Signature

Once you have specified all the above parameters, it's time to send the email. The mailer program provides a Send button; when you click Send, the message is sent to the mail server and a "mail sent successfully" message is displayed.
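To make the composing-and-sending step concrete, here is a minimal Python sketch using the standard library's smtplib and email modules. The addresses and the server name are placeholders, and a real provider would normally require an authenticated, encrypted connection:

import smtplib
from email.message import EmailMessage

# Compose the message: receiver, subject and text, as in the fields above.
message = EmailMessage()
message["From"] = "person.a@example.com"     # placeholder sender
message["To"] = "person.b@example.com"       # placeholder receiver
message["Subject"] = "Hello"
message.set_content("This message travels to B's mail server via SMTP.")

# Hand the message to the sender's mail server over SMTP (port 25).
with smtplib.SMTP("smtp.example.com") as server:   # placeholder server
    server.send_message(message)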
Every email program offers you an interface to access email messages. In Gmail, for example, emails are stored under different tabs such as Primary, Social, and Promotions. When you click one of the tabs, it displays a list of emails under that tab. In order to read an email, you just have to click on it. Once you click a particular email, it gets opened.

The opened email may have some files attached to it. The attachments are shown at the bottom of the opened email with an option called Download attachment.

After reading an email, you may want to reply to it. To reply to an email, click the Reply option shown at the bottom of the opened email. Once you click Reply, it will automatically copy the sender's address into the To field. Below the To field there is a text box where you can type the message. Once you are done entering the message, click the Send button. It's that easy. Your email is sent.

It is also possible to send a copy of a message that you have received, along with your own comments if you want. This can be done using the Forward button available in mail client software. The difference between replying to and forwarding an email is that a reply goes to the person who sent the mail, while a forwarded message can be sent to anyone. When you receive a forwarded message, the message is marked with a > character in front of each line, and the Subject: field is prefixed with Fw.

If you don't want to keep an email in your inbox, you can delete it by simply selecting the message from the message list and clicking Delete or pressing the appropriate command. Some mail clients store deleted mail in a folder called Deleted Items or Trash, from where you can recover a deleted email.

Email hacking can be done in any of the following ways:

- Spam
- Virus
- Phishing

E-mail spamming is the act of sending Unsolicited Bulk E-mails (UBE), which the recipient has not asked for. Email spams are the junk mails sent by commercial companies as advertisements of their products and services.

Some emails may incorporate files containing malicious scripts which, when run on your computer, may destroy your important data.

Email phishing is the activity of sending emails to a user while claiming to be a legitimate enterprise. Its main purpose is to steal sensitive information such as usernames, passwords, and credit card details. Such emails contain links to websites that are infected with malware and direct the user to enter details at a fake website whose look and feel is the same as the legitimate one.

Spams may cause the following problems:

- They flood your e-mail account with unwanted e-mails, which may result in the loss of important e-mails if the inbox is full.
- Time and energy are wasted in reviewing and deleting junk emails or spams.
- They consume bandwidth, which slows the speed with which mails are delivered.
- Some unsolicited emails may contain viruses that can cause harm to your computer.
The following ways will help you to reduce spam:

- While posting letters to newsgroups or mailing lists, use a separate e-mail address, different from the one you use for your personal e-mails.
- Don't give out your email address on websites, as it can easily be spammed.
- Avoid replying to emails which you have received from unknown persons.
- Never buy anything in response to a spam that advertises a product.

In order to have a lightweight inbox, it's good to archive your inbox from time to time. Here I will discuss the steps to clean up and archive your Outlook inbox:

- Select the File tab on the mail pane.
- Select the Cleanup Tools button on the account information screen.
- Select Archive from the cleanup tools drop-down menu.
- Select the "Archive this folder and all subfolders" option, and then click on the folder that you want to archive. Select the date from the "Archive items older than:" list. Click Browse to create a new .pst file name and location. Click OK.

There are several email service providers available in the market with features such as sending, receiving, drafting and storing email, and much more. The following table shows the popular email service providers:

Web designing has a direct link to the visual aspect of a web site. Effective web design is necessary to communicate ideas effectively.

Key points: a design plan should include the following:

- Details about the information architecture.
- The planned structure of the site.
- A site map of pages.

A wireframe is a visual guide to the appearance of web pages. It helps to define the structure of the web site, the linking between web pages and the layout of visual elements. The following things are included in a wireframe:

- Boxes of primary graphical elements
- Placement of headlines and subheadings
- Simple layout structure
- Calls to action
- Text blocks

Here is a list of tools that can be used to make effective web designs:

- Photoshop CC
- Illustrator CC
- Coda 2
- OmniGraffle
- Sublime Text
- GitHub
- Pen and Paper
- Vim
- ImageOptim
- Sketch 3
- Heroku
- Axure
- Hype 2
- Slicy
- Framer.js
- Image Alpha
- Emmet LiveStyle
- Hammer
- Icon Slate
- JPEGmini Lite
- BugHerd

A web site includes the following components:

The container can be in the form of the page's body tag or an all-containing div tag. Without a container there would be no place to put the contents of a web page.

The logo refers to the identity of a website and is used across a company's various forms of marketing such as business cards, letterheads, brochures and so on.
The site's navigation system should be easy to find and use. Often the navigation is placed right at the top of the page.

The content on a web site should be relevant to the purpose of the web site.

The footer is located at the bottom of the page. It usually contains copyright, contact and legal information, as well as a few links to the main sections of the site.

Whitespace is also called negative space and refers to any area of a page that is not covered by type or illustrations.

One should be aware of the following common mistakes and always keep them in mind:

- The website not working in any browser other than Internet Explorer.
- Using cutting-edge technology for no good reason.
- Sound or video that starts automatically.
- Hidden or disguised navigation.
- 100% Flash content.

Web development refers to building a website and deploying it on the web. Web development requires the use of scripting languages at both the server end and the client end.

Before developing a web site, one should keep several aspects in mind, such as:

- What to put on the web site?
- Who will host it?
- How to make it interactive?
- How to code it?
- How to create a search engine friendly web site?
- How to secure the source code?
- Will the web site design display well in different browsers?
- Will the navigation menus be easy to use?
- Will the web site load quickly?
- How easily will the site pages print?
- How easily will visitors find important details specific to the web site?
- How effectively will the style sheets be used on your web site?

The web development process includes all the steps that are good to take to build an attractive, effective and responsive website. These steps are shown in the following diagram.

Web development tools help the developer to test and debug web sites. Nowadays web development tools come with the web browsers as add-ons, and all web browsers have built-in tools for this purpose. These tools allow the web developer to work with HTML, CSS, JavaScript etc. They are accessed by hovering over an item on a web page and selecting "Inspect Element" from the context menu.

Following are the common features that every web development tool exhibits:

The HTML and DOM viewer allows you to see the DOM as it was rendered. It also allows you to make changes to the HTML and DOM and see the changes reflected in the page after the change is made.

Web development tools also help to inspect the resources that are loaded and available on the web page.

Profiling refers to getting information about the performance of a web page or web application, while auditing provides developers with suggestions, after analyzing a page, for optimizations to decrease page load time and increase responsiveness.

To be a successful web developer, one should possess the following skills:

- Understanding of client-side and server-side scripting.
- Creating, editing and modifying templates for a CMS or web development framework.
- Testing for cross-browser inconsistencies.
- Conducting observational user testing.
- Testing for compliance with specified standards, such as accessibility standards in the client region.
- Programming interaction with JavaScript, PHP, jQuery etc.

Web hosting is a service providing online space for the storage of web pages. These web pages are made available via the World Wide Web. The companies which offer website hosting are known as web hosts.

The servers on which web sites are hosted remain switched on 24x7. These servers are run by web hosting companies. Each server has its own IP address. Since IP addresses are difficult to remember, webmasters point their domain name to the IP address of the server their website is stored on.

It is not feasible to host your website on your local computer; to do so you would have to leave your computer on 24 hours a day. This is neither practical nor cheap. This is where web hosting companies come in.

The following table describes the different types of hosting that can be availed as per need:

Following are several companies offering web hosting services:

Websites are always prone to security risks. Cyber crime impacts your business by hacking your website. Your website may then be used for hacking assaults that install malicious software, or malware, on your visitors' computers.

It is mandatory to keep your software updated. It plays a vital role in keeping your website secure.

SQL injection is an attempt by hackers to manipulate your database. It is easy to insert rogue code into a query, which can then be used to manipulate your database, for example to change tables, get information or delete data.

Cross-site scripting allows attackers to inject client-side script into web pages. Therefore, while creating a form, it is good to ensure that you check the data being submitted and encode or strip out any HTML.

You need to be careful about how much information you give in error messages. For example, if the user fails to log in, the error message should not let the user know which field is incorrect: the username or the password.

Validation should be performed on both the server side and the client side.

It is good to enforce password requirements such as a minimum of eight characters, including upper case, lower case and special characters. This will help to protect users' information in the long run.

A file uploaded by a user may contain a script that, when executed on the server, opens up your website.

It is good practice to use the SSL protocol while passing personal information between the website and the web server or database.

A technical definition of the World Wide Web is: all the resources and users on the Internet that are using the Hypertext Transfer Protocol (HTTP). A broader definition comes from the organization that Web inventor Tim Berners-Lee helped found, the World Wide Web Consortium (W3C): the World Wide Web is the universe of network-accessible information, an embodiment of human knowledge. In simple terms, the World Wide Web is a way of exchanging information between computers on the Internet, tying them together into a vast collection of interactive multimedia resources.
The World Wide Web was created by Timothy Berners-Lee in 1989 at CERN in Geneva. It came into existence as a proposal by him to allow researchers to work together effectively and efficiently at CERN. Eventually it became the World Wide Web. The following diagram briefly outlines the evolution of the World Wide Web.

The WWW architecture is divided into several layers, as shown in the following diagram:

- A Uniform Resource Identifier (URI) is used to uniquely identify resources on the web, and UNICODE makes it possible to build web pages that can be read and written in human languages.
- XML (Extensible Markup Language) helps to define a common syntax in the semantic web.
- The Resource Description Framework (RDF) helps in defining the core representation of data for the web. RDF represents data about a resource in graph form.
- RDF Schema (RDFS) allows a more standardized description of taxonomies and other ontological constructs.
- The Web Ontology Language (OWL) offers more constructs over RDFS. It comes in the following three versions: OWL Lite for taxonomies and simple constraints, OWL DL for full description logic support, and OWL Full for more syntactic freedom of RDF.
- RIF and SWRL offer rules beyond the constructs that are available from RDFS and OWL.
- Simple Protocol and RDF Query Language (SPARQL) is an SQL-like language used for querying RDF data and OWL ontologies.
- All semantics and rules executed at the layers below are used at the Proof layer to prove deductions.
- Cryptographic means, such as digital signatures, are used for verification of the origin of sources.
- On top of these layers, the User Interface and Applications layer is built for user interaction.

The WWW works on the client-server approach. The following steps explain how the web works:

- The user enters the URL (say, http://www.tutorialspoint.com) of the web page in the address bar of the web browser.
- The browser then requests the Domain Name Server for the IP address corresponding to www.tutorialspoint.com.
- After receiving the IP address, the browser sends a request for the web page to the web server using the HTTP protocol, which specifies the way the browser and web server communicate.
- The web server receives the request using the HTTP protocol and looks for the requested web page. If found, it returns it to the web browser and closes the HTTP connection.
- The web browser then receives the web page; it interprets it and displays the contents of the web page in the browser's window.
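This sequence is what any HTTP client library performs under the hood. As a minimal sketch, the following Python snippet (standard library urllib, reusing the example URL from the steps above) triggers the DNS lookup, sends the HTTP request and reads back the page a browser would render:

from urllib.request import urlopen

# urlopen carries out the steps above: resolve the name, contact the
# web server over HTTP, and retrieve the response.
with urlopen("http://www.tutorialspoint.com") as response:
    print(response.status)    # status code returned by the web server
    page = response.read()    # the web page itself

print(page[:200])             # first bytes of the page's content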
There has been rapid development in the field of the web. It has an impact in almost every area, such as education, research, technology, commerce and marketing, so the future of the web is almost unpredictable. Apart from the huge development in the field of the WWW, there are also some technical issues that the W3 Consortium has to cope with:

- Work on higher-quality presentation of 3-D information is under development.
- The W3 Consortium is also looking forward to enhancing the web to fulfill the requirements of global communities, which would include all regional languages and writing systems.
- Work on privacy and security is under way. This would include hiding information, accounting, access control, integrity and risk management.
- There has been huge growth in the web, which may overload the internet and degrade its performance. Hence better protocols need to be developed.

A web browser is application software that allows us to view and explore information on the web. The user can request any web page by just entering a URL into the address bar. A web browser can show text, audio, video, animation and more. It is the responsibility of the web browser to interpret the text and commands contained in the web page. Earlier web browsers were text-based, while nowadays graphical-based and voice-based web browsers are also available. Following are the most common web browsers available today:

There are a lot of web browsers available in the market. All of them interpret and display information on the screen; however, their capabilities and structure vary depending upon the implementation. The most basic components that every web browser must exhibit are listed below:

- Controller/Dispatcher
- Interpreter
- Client Programs

The controller works like the control unit in a CPU. It takes input from the keyboard or mouse, interprets it and makes other services work on the basis of the input it receives.

The interpreter receives the information from the controller and executes the instructions line by line. Some interpreters are mandatory while some are optional; for example, the HTML interpreter program is mandatory while the Java interpreter is optional.

A client program implements the specific protocol used to access a particular service. Following are the client programs that are commonly used:

- HTTP
- SMTP
- FTP
- NNTP
- POP

A web server is a computer where the web content is stored. Basically a web server is used to host web sites, but there exist other servers as well, such as gaming, storage, FTP and email servers. A web server responds to a client request in either of the following two ways:

- By sending the file associated with the requested URL to the client.
- By generating a response by invoking a script and communicating with a database.

Key points:

- When a client sends a request for a web page, the web server searches for the requested page; if the requested page is found, it sends it to the client with an HTTP response.
- If the requested web page is not found, the web server sends an HTTP response: Error 404 Not Found.
- If the client has requested some other resource, the web server will contact the application server and data store to construct the HTTP response.
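The first behaviour above, mapping a URL to a file and answering 404 Not Found when it is missing, can be observed with Python's built-in http.server module. This minimal sketch (the address and port are arbitrary choices for illustration) serves the files in the current directory:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler maps each requested URL to a file on disk,
# returns it in an HTTP response, and answers "404 Not Found" when
# the file does not exist -- the key points described above.
server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)
print("Serving the current directory at http://127.0.0.1:8000 ...")
server.serve_forever()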
Web server architecture follows one of the following two approaches:

- Concurrent approach
- Single-Process-Event-Driven approach

The concurrent approach allows the web server to handle multiple client requests at the same time. It can be achieved by the following methods:

- Multi-process
- Multi-threaded
- Hybrid

In the multi-process method, a single process (the parent process) initiates several single-threaded child processes and distributes incoming requests to these child processes. Each of the child processes is responsible for handling a single request. It is the responsibility of the parent process to monitor the load and decide whether processes should be killed or forked.

In the multi-threaded method, unlike the multi-process one, multiple threads are used within a single process, each thread handling one request.

The hybrid method is a combination of the above two approaches. In this approach multiple processes are created, and each process initiates multiple threads. Each of the threads handles one connection. Using multiple threads in a single process results in less load on system resources.

The following table describes the most popular web servers available today:

A proxy server is an intermediary server between the client and the internet. Proxy servers offer the following basic functionalities:

- Firewall and network data filtering.
- Network connection sharing.
- Data caching.

Following are the reasons to use proxy servers:

- Monitoring and filtering
- Improving performance
- Translation
- Accessing services anonymously
- Security

The following table briefly describes the types of proxies:

With a forward proxy, the client requests its internal network server to forward requests to the internet. Open proxies help clients to conceal their IP address while browsing the web. With a reverse proxy, requests are forwarded to one or more proxy servers and the response from the proxy server is retrieved as if it came directly from the original server.

The proxy server architecture is divided into several modules, as shown in the following diagram.

The user interface module controls and manages the user interface and provides an easy-to-use graphical interface, window and menu to the end user. This menu offers the following functionalities:

- Start proxy
- Stop proxy
- Exit
- Blocking URL
- Blocking client
- Manage log
- Manage cache
- Modify configuration

The listener module listens on the proxy port for new requests from the client browser. This module also blocks clients from the list given by the user.

The proxy engine module contains the main functionality of the proxy server. It performs the following functions:
- Reads the request from the header of the client.
- Parses the URL and determines whether the URL is blocked or not.
- Generates a connection to the web server.
- Reads the reply from the web server.
- If no copy of the page is found in the cache, it downloads the page from the web server; otherwise it checks the page's last-modified date from the reply header and accordingly reads it from the cache or from the web server.
- It then also checks whether caching is allowed or not, and caches the page accordingly.

The cache manager module is responsible for storing, deleting, clearing and searching web pages in the cache. The log manager module is responsible for viewing, clearing and updating the logs. The configuration module helps to create configuration settings, which in turn let the other modules perform the desired configurations, such as caching.

A search engine refers to a huge database of internet resources such as web pages, newsgroups, programs, images etc. It helps to locate information on the World Wide Web. The user can search for any information by passing a query in the form of keywords or a phrase. The search engine then looks for relevant information in its database and returns it to the user.

Generally there are three basic components of a search engine, as listed below:

- Web Crawler
- Database
- Search Interfaces

The web crawler is also known as a spider or a bot. It is a software component that traverses the web to gather information. All the information on the web is stored in the database, which consists of huge web resources. The search interface is the component between the user and the database; it helps the user to search through the database.

The web crawler, database and search interface are the major components of a search engine that actually make the search engine work. Search engines make use of the Boolean expressions AND, OR and NOT to restrict and widen the results of a search. Following are the steps that are performed by the search engine:

- The search engine looks for the keyword in the index of a predefined database instead of going directly to the web to search for the keyword.
- It then uses software to search for the information in the database. This software component is known as a web crawler.
- Once the web crawler finds the pages, the search engine shows the relevant web pages as a result. These retrieved web pages generally include the title of the page, the size of the text portion, the first several sentences, etc.
- The user can click on any of the search results to open it.
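The crawler's core step, fetching a page and collecting its links so they can be visited in turn, can be sketched in a few lines of Python using only the standard library (the start URL is simply the example domain used throughout this tutorial):

from urllib.request import urlopen
from html.parser import HTMLParser

# Collect the href attribute of every anchor tag on a page -- the step
# a crawler repeats to traverse the web and populate the database.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

with urlopen("http://www.tutorialspoint.com") as response:
    html = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
print(collector.links[:10])   # the links the crawler would visit next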
The search engine architecture comprises the three basic layers listed below:

- Content collection and refinement.
- Search core.
- User and application interfaces.

Online chatting is text-based communication between two or more people over the network. Here, text messages are delivered in real time and people get immediate responses.

Chat etiquette defines the rules that are supposed to be followed while chatting online:

- Avoid chat slang.
- Try to spell all words correctly.
- Don't write all words in capitals.
- Don't send other chat users private messages without asking them.
- Abide by the rules created by those running the chat.
- Use emoticons to let the other person know your feelings and expressions.

The following web sites offer browser-based chat services:

Instant messaging is a software utility that allows IM users to communicate by sending text messages, files, and images. Some IMs also support voice and video calls.

Internet Relay Chat (IRC) is a protocol developed by Oikarinen in August 1988. It defines a set of rules for communication between client and server through some communication mechanism, such as chat rooms, over the internet. IRC consists of separate networks of IRC servers and machines, which allow IRC clients to connect to IRC. An IRC client runs a client program to connect to a server on one of the IRC nets. After connecting to an IRC server on an IRC network, the user can join one or more channels and converse there.

Video conferencing or video teleconferencing is a method of communicating by two-way video and audio transmission with the help of telecommunication technologies. Point-to-point conferencing connects two locations only, while multipoint conferencing connects more than two locations through a Multi-point Control Unit (MCU).

Video sharing is an IP Multimedia System (IMS) service that allows users to switch voice calls to a unidirectional video streaming session. The video streaming session can be initiated by either of the parties. Moreover, the video source can be a camera or a pre-recorded video clip.

In order to send the same email to a group of people, an electronic list is created, known as a mailing list. It is the list server which receives and distributes postings and automatically manages subscriptions. A mailing list offers a forum where users from all over the globe can ask questions and have them answered by others with shared interests.

Following are the various types of mailing lists:

- A response list contains the group of people who have responded to an offer in some way. These people are customers who have shown interest in a specific product or service.
- A compiled list is prepared by collecting information from various sources such as surveys, telemarketing etc. These lists are created for sending out coupons, new product announcements and other offers to customers.
- A discussion list is created for sharing views on a specific topic such as computers, the environment, health, education etc.

Before joining a mailing list, it is mandatory to subscribe to it. Once you are subscribed, your messages will be sent to all the persons who have subscribed to the list.
Similarly, if any subscriber posts a message, it will be received by all subscribers of the list.

There are a number of websites available that maintain databases of publicly accessible mailing lists. Some of these are:

- http://tile.net./lists
- http://lists.com
- http://topica.com
- http://isoft.com/lists/list-q.html

To subscribe to a list, you need to send an email message to the administrative address of the mailing list containing one or more commands. For example, if you want to subscribe to the Harry Potter list at gurus.com, where the name of the list server is Majordomo, then you have to send an email to majordomo@gurus.com containing the text "Subscribe harry potter" in its body.

There are many list servers available, each having its own commands for subscribing to a list. Some of them are described in the following table:

Like mailing lists, Usenet is also a way of sharing information. It was started by Tom Truscott and Jim Ellis in 1979. Initially it was limited to two sites, but today there are thousands of Usenet sites involving millions of people. Usenet is a kind of discussion group where people can share views on topics of their interest. An article posted to a newsgroup becomes available to all readers of the newsgroup.

There are several forms of online education available, as discussed below.

Online training is a form of distance learning in which educational information is delivered through the internet. There are many online applications, varying from simple downloadable content to structured programs.

It is also possible to obtain online certification in specialized courses, which adds value to your qualifications. Many companies offer online certification on a number of technologies. There are three types of online certification, as listed below:

- Corporate
- Product-specific
- Profession-wide

Corporate certifications are made by small organizations for internal purposes. Product-specific certifications target developing and recognizing adeptness with regard to a particular product. Profession-wide certification aims at recognizing expertise in a particular profession.

An online seminar is one which is conducted over the internet. It is a live seminar and allows the attendees to ask questions via a Q&A panel on screen. An online seminar just requires a computer with an internet connection, headphones, speakers, and authorization to attend it.

A webinar is a web-based seminar or workshop in which the presentation is delivered over the web using conferencing software. The audio part of a webinar is delivered through teleconferencing.

Online conferencing is also a kind of online seminar in which two or more people are involved. It is also performed over the internet and allows business persons to meet online.

Social networking refers to the grouping of individuals and organizations together via some medium, in order to share thoughts, interests, and activities. Several web-based social network services are available, such as Facebook, Twitter, LinkedIn, Google+ etc., which offer easy-to-use and interactive interfaces to connect with people within the country and overseas as well. There are also several mobile-based social networking services in the form of apps, such as WhatsApp, Hike, Line etc. The following table describes some of the famous social networking services provided over the web and mobile:

Internet security refers to securing communication over the internet.
It includes specific security protocols such as:

- Internet Security Protocol (IPSec)
- Secure Socket Layer (SSL)

Internet security threats impact the network, data security and other internet-connected systems. Cyber criminals have evolved several techniques that threaten the privacy and integrity of bank accounts, businesses, and organizations. Following are some of the internet security threats:

- Mobile worms
- Malware
- PC and mobile ransomware
- Large-scale attacks like Stuxnet that attempt to destroy infrastructure
- Hacking as a Service
- Spam
- Phishing

Email phishing is the activity of sending emails to a user while claiming to be a legitimate enterprise. Its main purpose is to steal sensitive information such as usernames, passwords, and credit card details. Such emails contain links to websites that are infected with malware and direct the user to enter details at a fake website whose look and feel is the same as the legitimate one.

Following are the symptoms of a phishing email:

- Most often such emails contain grammatically incorrect text. Ignore such emails, since they may be spam.
- Don't click on any links in suspicious emails.
- Such emails contain threats like "your account will be closed if you don't respond to this email message".
- Such emails contain graphics that appear to be connected to a legitimate website but actually lead to fake websites.

Digital signatures allow us to verify the author, and the date and time of the signature, and to authenticate the message contents. There are several reasons to apply digital signatures to communications:

- Digital signatures help to authenticate the sources of messages. For example, suppose a bank's branch office sends a message to the central office requesting a change in the balance of an account. If the central office could not authenticate that the message was sent from an authorized source, acting on such a request could be a grave mistake.
- Once a message is signed, any change in the message would invalidate the signature.
- By this property, any entity that has signed some information cannot at a later time deny having signed it.

A firewall is a barrier between a Local Area Network (LAN) and the Internet. It allows private resources to be kept confidential and minimizes security risks. It controls network traffic in both directions. The following diagram depicts a sample firewall between a LAN and the internet. The connection between the two is the point of vulnerability. Both hardware and software can be used at this point to filter network traffic.

Key points:

- Firewall management must be addressed by both system managers and network managers.
- The amount of filtering a firewall performs varies. For the same firewall, the amount of filtering may be different in different directions.

HTML stands for Hyper Text Markup Language. It is a formatting language used to define the appearance and contents of a web page. It allows us to organize text, graphics, audio, and video on a web page.

Key points:

- The word Hypertext refers to text which acts as a link.
A firewall is a barrier between a Local Area Network (LAN) and the Internet. It allows private resources to be kept confidential and minimizes security risks. It controls network traffic in both directions. The following diagram depicts a sample firewall between a LAN and the internet. The connection between the two is the point of vulnerability. Both hardware and software can be used at this point to filter network traffic.

Key Points

Firewall management must be addressed by both system managers and network managers.
The amount of filtering a firewall performs varies. For the same firewall, the amount of filtering may be different in different directions.

HTML stands for Hyper Text Markup Language. It is a formatting language used to define the appearance and contents of a web page. It allows us to organize text, graphics, audio, and video on a web page.

Key Points:

The word hypertext refers to text which acts as a link.
The word markup refers to the symbols that are used to define the structure of the text. The markup symbols tell the browser how to display the text and are often called tags.
The word language refers to the syntax, which is similar to that of any other language.

The following table shows the various versions of HTML:

A tag is a command that tells the web browser how to display the text, audio, graphics, or video on a web page.

Key Points:

Tags are indicated with a pair of angle brackets. They start with a less-than (<) character and end with a greater-than (>) character.
The tag name is specified between the angle brackets.
Most tags occur in pairs: the start tag and the closing tag. The start tag is simply the tag name enclosed in angle brackets, whereas the closing tag additionally includes a forward slash (/).
Some tags are empty, i.e. they don't have a closing tag.
Tags are not case sensitive.
The starting and closing tag names must be the same. For example, <b> hello </i> is invalid, as the two are different.
If you don't specify the angle brackets (<>) for a tag, the browser will treat the tag name as simple text.
A tag can also have attributes, to provide additional information about the tag to the browser.

The following table shows the basic HTML tags that define a basic web page:

The following code shows how to use the basic tags:

<html>
   <head>
      <title> Title goes here... </title>
   </head>
   <body>
      Body goes here...
   </body>
</html>

The following table shows the HTML tags used for formatting text:

The following table describes the commonly used table tags:

The following table describes the commonly used list tags:

Frames help us divide the browser's window into multiple rectangular regions. Each region contains a separate HTML web page, and each of them works independently. The following table describes the various tags used for creating frames:

Forms are used to input values. These values are sent to the server for processing. Forms use input elements such as text fields, check boxes, radio buttons, lists, submit buttons, etc. to enter data. The following table describes the commonly used tags for creating a form:
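For example, a simple form combining several of these input elements might look like the following sketch (the field names and the action URL are illustrative assumptions):

<form action="/submit" method="post">
   Name: <input type="text" name="username"> <br>
   <input type="checkbox" name="subscribe"> Subscribe to newsletter <br>
   <input type="radio" name="gender" value="male"> Male
   <input type="radio" name="gender" value="female"> Female <br>
   <select name="city">
      <option value="delhi"> Delhi </option>
      <option value="mumbai"> Mumbai </option>
   </select> <br>
   <input type="submit" value="Send">
</form>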
CSS is an acronym for Cascading Style Sheets. It helps to define the presentation of HTML elements in a separate file, known as a CSS file, having the .css extension. CSS helps to change the formatting of any HTML element by making changes at just one place. All changes made are reflected automatically in all the web pages of the website in which that element appears.

Following are the four methods to add CSS to HTML documents:

Inline Style Sheets
Embedded Style Sheets
External Style Sheets
Imported Style Sheets

Inline Style Sheets are included with an HTML element, i.e. they are placed inline with the element. To add inline CSS, we declare the style attribute, which can contain any CSS property.

Syntax:

<Tagname STYLE = "Declaration1; Declaration2"> .... </Tagname>

Let's consider the following example using Inline Style Sheets:

<p style="color: blue; text-align: left; font-size: 15pt">
   Inline Style Sheets are included with HTML element i.e. they are placed
   inline with the element. To add inline CSS, we have to declare style
   attribute which can contain any CSS property.
</p>

Embedded Style Sheets are used to apply the same appearance to all occurrences of a specific element. They are defined in the <head> element by using the <style> element.

Syntax:

<head>
   <title> .... </title>
   <style type="text/css">
      .... CSS Rules/Styles ....
   </style>
</head>

Let's consider the following example using Embedded Style Sheets:

<style type="text/css">
   p  { color: green; text-align: left; font-size: 10pt }
   h1 { color: red; font-weight: bold }
</style>

External Style Sheets are separate .css files that contain the CSS rules. These files can be linked to any HTML document using the <link> tag with the rel attribute.

Syntax:

<head>
   <link rel="stylesheet" type="text/css" href="url of css file">
</head>

In order to create an external CSS file and link it to an HTML document, follow these steps:

First of all, create a CSS file and define all the CSS rules for the various HTML elements. Let's name this file external.css.

p {
   color: orange;
   text-align: left;
   font-size: 10pt;
}
h1 {
   color: orange;
   font-weight: bold;
}

Now create an HTML document and name it externaldemo.html.

<html>
   <head>
      <title> External Style Sheets Demo </title>
      <link rel="stylesheet" type="text/css" href="external.css">
   </head>
   <body>
      <h1> External Style Sheets </h1>
      <p> External Style Sheets are the separate .css files that contain the CSS rules. </p>
   </body>
</html>

Imported Style Sheets allow us to import style rules from other style sheets. To import CSS rules, we have to use @import before all the other rules in a style sheet.

Syntax:

<head>
   <title> Title Information </title>
   <style type="text/css">
      @import url(cssfilepath);
      ... CSS rules ...
   </style>
</head>

Let's consider the following example using Imported Style Sheets:

<html>
   <head>
      <title> Imported Style Sheets Demo </title>
      <style>
         @import url(external.css);
      </style>
   </head>
   <body>
      <h1> External Style Sheets </h1>
      <p> External Style Sheets are the separate .css files that contain the CSS rules. </p>
   </body>
</html>
JavaScript is a lightweight, interpreted programming language with object-oriented capabilities that allows you to build interactivity into otherwise static HTML pages. JavaScript is:

A lightweight, interpreted programming language
Designed for creating network-centric applications
Complementary to and integrated with Java
Complementary to and integrated with HTML
Open and cross-platform

JavaScript statements are the commands that tell the browser what action to perform. Statements are separated by semicolons (;). Example of a JavaScript statement:

document.getElementById("demo").innerHTML = "Welcome";

The following table shows the various JavaScript statements:

JavaScript supports both C-style and C++-style comments, thus:

Any text between a // and the end of a line is treated as a comment and is ignored by JavaScript.
Any text between the characters /* and */ is treated as a comment; this may span multiple lines.
JavaScript also recognizes the HTML comment opening sequence <!--. JavaScript treats this as a single-line comment, just as it does the // comment.
The HTML comment closing sequence --> is not recognized by JavaScript, so it should be written as //-->.

Example:

<script language="javascript" type="text/javascript">
   <!--
      // this is a comment. It is similar to comments in C++

      /*
       * This is a multiline comment in JavaScript
       * It is very similar to comments in C programming
       */
   //-->
</script>

Variables are referred to as named containers for storing information. We can place data into these containers and then refer to the data simply by naming the container.

Rules for declaring a variable in JavaScript:

In JavaScript, variable names are case sensitive, i.e. a is different from A.
A variable name can only start with an underscore ( _ ), a letter (from a to z or A to Z), or a dollar ( $ ) sign.
Numbers (0 to 9) can only be used after a letter.
No other special character is allowed in a variable name.

Before you use a variable in a JavaScript program, you must declare it. Variables are declared with the var keyword as follows:

<script type="text/javascript">
   <!--
      var money;
      var name, age;
   //-->
</script>

Variables can be initialized at the time of declaration or after declaration, as follows:

<script type="text/javascript">
   <!--
      var name = "Ali";
      var money;
      money = 2000.50;
   //-->
</script>

There are two kinds of data types, as mentioned below:

Primitive data types
Non-primitive data types

Primitive data types are shown in the following table:

The following table contains the non-primitive data types:
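While the tables list the available types, a short sketch shows how values of different types behave (typeof is JavaScript's built-in operator for inspecting a value's type; the variable names are illustrative):

<script type="text/javascript">
   <!--
      var count = 10;             // number (primitive)
      var name = "Ali";           // string (primitive)
      var valid = true;           // boolean (primitive)
      var marks = [85, 90, 78];   // array (non-primitive)

      document.write(typeof count);   // prints "number"
      document.write(typeof valid);   // prints "boolean"
      document.write(typeof marks);   // prints "object" - arrays are objects in JavaScript
   //-->
</script>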
A function is a group of reusable statements (code) that can be called anywhere in a program. In JavaScript, the function keyword is used to declare or define a function.

Key Points:

To define a function, use the function keyword, followed by the function name, followed by parentheses ().
In the parentheses, we define parameters or attributes.
The group of reusable statements (code) is enclosed in curly braces {}. This code is executed whenever the function is called.

Syntax:

function functionname(p1, p2) {
   function coding...
}

Operators are used to perform operations on one, two, or more operands. An operator is represented by a symbol such as +, =, *, %, etc. Following are the operators supported by JavaScript:

Arithmetic Operators
Comparison Operators
Logical (or Relational) Operators
Assignment Operators
Conditional (or Ternary) Operators

A control structure controls the flow of execution of a program. Following are the control structures supported by JavaScript:

if ... else
switch case
do while loop
while loop
for loop
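The following sketch ties functions, operators, and control structures together: a function that uses comparison, arithmetic, and assignment operators inside a for loop and an if ... else statement (the function name and values are illustrative):

<script type="text/javascript">
   <!--
      // Counts how many marks are greater than or equal to the pass mark.
      function countPassed(marks, passMark) {
         var passed = 0;
         for (var i = 0; i < marks.length; i++) {   // for loop
            if (marks[i] >= passMark) {             // comparison operator in if ... else
               passed = passed + 1;                 // arithmetic and assignment operators
            }
         }
         return passed;
      }

      document.write(countPassed([40, 75, 90], 50));   // prints 2
   //-->
</script>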
PHP is an acronym for Hypertext Preprocessor. It is a programming language that allows web developers to create dynamic content that interacts with databases. PHP is basically used for developing web-based software applications. PHP started out as a small open source project that evolved as more and more people found out how useful it was. Rasmus Lerdorf unleashed the first version of PHP way back in 1994.

Key Points

PHP is a recursive acronym for "PHP: Hypertext Preprocessor".
PHP is a server-side scripting language that is embedded in HTML. It is used to manage dynamic content, databases, session tracking, and even to build entire e-commerce sites.
It is integrated with a number of popular databases, including MySQL, PostgreSQL, Oracle, Sybase, Informix, and Microsoft SQL Server.
PHP is pleasingly zippy in its execution, especially when compiled as an Apache module on the Unix side. The MySQL server, once started, executes even very complex queries with huge result sets in record-setting time.
PHP supports a large number of major protocols such as POP3, IMAP, and LDAP. PHP4 added support for Java and distributed object architectures (COM and CORBA), making n-tier development a possibility for the first time.
PHP performs system functions, i.e. it can create, open, read, write, and close files on a system.
PHP can handle forms, i.e. gather data from forms, save data to a file, send data through email, and return data to the user.
You can add, delete, and modify elements within your database through PHP.
It can access and set cookie variables.
Using PHP, you can restrict users from accessing some pages of your website.
It can encrypt data.

Five important characteristics make PHP's practical nature possible:

Simplicity
Efficiency
Security
Flexibility
Familiarity

To get a feel for PHP, first start with simple PHP scripts. Since "Hello, World!" is an essential example, we will first create a friendly little "Hello, World!" script. As mentioned earlier, PHP is embedded in HTML. That means that amongst your normal HTML (or XHTML if you're cutting-edge) you'll have PHP statements like this:

<html>
   <head>
      <title> Hello World </title>
   </head>
   <body>
      <?php echo "Hello, World!"; ?>
   </body>
</html>

It will produce the following result:

Hello, World!

If you examine the HTML output of the above example, you'll notice that the PHP code is not present in the file sent from the server to your web browser. All of the PHP present in the web page is processed and stripped from the page; the only thing returned to the client from the web server is pure HTML output.

All PHP code must be included inside one of the three special markup tags that are recognised by the PHP parser:

<?php PHP code goes here ?>

<? PHP code goes here ?> (the short form, available only when short tags are enabled in PHP's configuration)

<script language="php"> PHP code goes here </script>
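To make the form-handling capability listed above concrete, here is a minimal sketch (the file name welcome.php and the field name visitor are illustrative assumptions): an HTML form posts a value, and a PHP script reads it from $_POST and echoes it back.

<!-- The form (in any HTML page) -->
<form action="welcome.php" method="post">
   Name: <input type="text" name="visitor">
   <input type="submit" value="Send">
</form>

<!-- welcome.php -->
<html>
   <body>
      <?php
         // $_POST holds the submitted form data;
         // htmlspecialchars() escapes it before printing it back.
         echo "Hello, " . htmlspecialchars($_POST["visitor"]) . "!";
      ?>
   </body>
</html>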
[ { "code": null, "e": 2549, "s": 2473, "text": "Internet is a world-wide global system of interconnected computer networks." }, { "code": null, "e": 2625, "s": 2549, "text": "Internet is a world-wide global system of interconnected computer networks." }, { "code": null, "e": 2680, "s": 2625, "text": "Internet uses the standard Internet Protocol (TCP/IP)." }, { "code": null, "e": 2735, "s": 2680, "text": "Internet uses the standard Internet Protocol (TCP/IP)." }, { "code": null, "e": 2800, "s": 2735, "text": "Every computer in internet is identified by a unique IP address." }, { "code": null, "e": 2865, "s": 2800, "text": "Every computer in internet is identified by a unique IP address." }, { "code": null, "e": 2965, "s": 2865, "text": "IP Address is a unique set of numbers (such as 110.22.33.114) which identifies a computer location." }, { "code": null, "e": 3065, "s": 2965, "text": "IP Address is a unique set of numbers (such as 110.22.33.114) which identifies a computer location." }, { "code": null, "e": 3194, "s": 3065, "text": "A special computer DNS (Domain Name Server) is used to give name to the IP Address so that user can locate a computer by a name." }, { "code": null, "e": 3323, "s": 3194, "text": "A special computer DNS (Domain Name Server) is used to give name to the IP Address so that user can locate a computer by a name." }, { "code": null, "e": 3493, "s": 3323, "text": "For example, a DNS server will resolve a name http://www.tutorialspoint.com to a particular IP address to uniquely identify the computer on which this website is hosted." }, { "code": null, "e": 3663, "s": 3493, "text": "For example, a DNS server will resolve a name http://www.tutorialspoint.com to a particular IP address to uniquely identify the computer on which this website is hosted." }, { "code": null, "e": 3720, "s": 3663, "text": "Internet is accessible to every user all over the world." }, { "code": null, "e": 3777, "s": 3720, "text": "Internet is accessible to every user all over the world." }, { "code": null, "e": 3910, "s": 3777, "text": "The concept of Internet was originated in 1969 and has undergone several technological & Infrastructural changes as discussed below:" }, { "code": null, "e": 4013, "s": 3910, "text": "The origin of Internet devised from the concept of Advanced Research Project Agency Network (ARPANET)." }, { "code": null, "e": 4116, "s": 4013, "text": "The origin of Internet devised from the concept of Advanced Research Project Agency Network (ARPANET)." }, { "code": null, "e": 4178, "s": 4116, "text": "ARPANET was developed by United States Department of Defense." }, { "code": null, "e": 4240, "s": 4178, "text": "ARPANET was developed by United States Department of Defense." }, { "code": null, "e": 4334, "s": 4240, "text": "Basic purpose of ARPANET was to provide communication among the various bodies of government." }, { "code": null, "e": 4428, "s": 4334, "text": "Basic purpose of ARPANET was to provide communication among the various bodies of government." }, { "code": null, "e": 4490, "s": 4428, "text": "Initially, there were only four nodes, formally called Hosts." }, { "code": null, "e": 4552, "s": 4490, "text": "Initially, there were only four nodes, formally called Hosts." }, { "code": null, "e": 4675, "s": 4552, "text": "In 1972, the ARPANET spread over the globe with 23 nodes located at different countries and thus became known as Internet." 
}, { "code": null, "e": 4798, "s": 4675, "text": "In 1972, the ARPANET spread over the globe with 23 nodes located at different countries and thus became known as Internet." }, { "code": null, "e": 4992, "s": 4798, "text": "By the time, with invention of new technologies such as TCP/IP protocols, DNS, WWW, browsers, scripting languages etc.,Internet provided a medium to publish and access information over the web." }, { "code": null, "e": 5186, "s": 4992, "text": "By the time, with invention of new technologies such as TCP/IP protocols, DNS, WWW, browsers, scripting languages etc.,Internet provided a medium to publish and access information over the web." }, { "code": null, "e": 5307, "s": 5186, "text": "Internet covers almost every aspect of life, one can think of. Here, we will discuss some of the advantages of Internet:" }, { "code": null, "e": 5545, "s": 5307, "text": "Extranet refers to network within an organization, using internet to connect to the outsiders in controlled manner. It helps to connect businesses with their customers and suppliers and therefore allows working in a collaborative manner." }, { "code": null, "e": 5739, "s": 5545, "text": "Extranet proves to be a successful model for all kind of businesses whether small or big. Here are some of the advantages of extranet for employees, suppliers, business partners, and customers:" }, { "code": null, "e": 5847, "s": 5739, "text": "Apart for advantages there are also some issues associated with extranet. These issues are discussed below:" }, { "code": null, "e": 5963, "s": 5847, "text": "Where the extranet pages will be held i.e. who will host the extranet pages. In this context there are two choices:" }, { "code": null, "e": 5991, "s": 5963, "text": "Host it on your own server." }, { "code": null, "e": 6019, "s": 5991, "text": "Host it on your own server." }, { "code": null, "e": 6097, "s": 6019, "text": "Host it with an Internet Service Provider (ISP) in the same way as web pages." }, { "code": null, "e": 6175, "s": 6097, "text": "Host it with an Internet Service Provider (ISP) in the same way as web pages." }, { "code": null, "e": 6287, "s": 6175, "text": "But hosting extranet pages on your own server requires high bandwidth internet connection which is very costly." }, { "code": null, "e": 6443, "s": 6287, "text": "Additional firewall security is required if you host extranet pages on your own server which result in a complex security mechanism and increase work load." }, { "code": null, "e": 6582, "s": 6443, "text": "Information can not be accessed without internet connection. However, information can be accessed in Intranet without internet connection." }, { "code": null, "e": 6729, "s": 6582, "text": "It decreases the face to face interaction in the business which results in lack of communication among customers, business partners and suppliers." }, { "code": null, "e": 6798, "s": 6729, "text": "The following table shows differences between Extranet and Intranet:" }, { "code": null, "e": 6968, "s": 6798, "text": "OSI is acronym of Open System Interface. This model is developed by the International organization of Standardization (ISO) and therefore also referred as ISO-OSI Model." }, { "code": null, "e": 7134, "s": 6968, "text": "The OSI model consists of seven layers as shown in the following diagram. Each layer has a specific function, however each layer provide services to the layer above." 
}, { "code": null, "e": 7198, "s": 7134, "text": "The Physical layer is responsible for the following activities:" }, { "code": null, "e": 7264, "s": 7198, "text": "Activating, maintaining and deactivating the physical connection." }, { "code": null, "e": 7330, "s": 7264, "text": "Activating, maintaining and deactivating the physical connection." }, { "code": null, "e": 7388, "s": 7330, "text": "Defining voltages and data rates needed for transmission." }, { "code": null, "e": 7446, "s": 7388, "text": "Defining voltages and data rates needed for transmission." }, { "code": null, "e": 7494, "s": 7446, "text": "Converting digital bits into electrical signal." }, { "code": null, "e": 7542, "s": 7494, "text": "Converting digital bits into electrical signal." }, { "code": null, "e": 7614, "s": 7542, "text": "Deciding whether the connection is simplex, half duplex or full duplex." }, { "code": null, "e": 7686, "s": 7614, "text": "Deciding whether the connection is simplex, half duplex or full duplex." }, { "code": null, "e": 7740, "s": 7686, "text": "The data link layer performs the following functions:" }, { "code": null, "e": 7854, "s": 7740, "text": "Performs synchronization and error control for the information which is to be transmitted over the physical link." }, { "code": null, "e": 7968, "s": 7854, "text": "Performs synchronization and error control for the information which is to be transmitted over the physical link." }, { "code": null, "e": 8064, "s": 7968, "text": "Enables error detection, and adds error detection bits to the data which are to be transmitted." }, { "code": null, "e": 8160, "s": 8064, "text": "Enables error detection, and adds error detection bits to the data which are to be transmitted." }, { "code": null, "e": 8206, "s": 8160, "text": "Following are the functions of Network Layer:" }, { "code": null, "e": 8270, "s": 8206, "text": "To route the signals through various channels to the other end." }, { "code": null, "e": 8334, "s": 8270, "text": "To route the signals through various channels to the other end." }, { "code": null, "e": 8409, "s": 8334, "text": "To act as the network controller by deciding which route data should take." }, { "code": null, "e": 8484, "s": 8409, "text": "To act as the network controller by deciding which route data should take." }, { "code": null, "e": 8595, "s": 8484, "text": "To divide the outgoing messages into packets and to assemble incoming packets into messages for higher levels." }, { "code": null, "e": 8706, "s": 8595, "text": "To divide the outgoing messages into packets and to assemble incoming packets into messages for higher levels." }, { "code": null, "e": 8760, "s": 8706, "text": "The Transport layer performs the following functions:" }, { "code": null, "e": 8848, "s": 8760, "text": "It decides if the data transmission should take place on parallel paths or single path." }, { "code": null, "e": 8936, "s": 8848, "text": "It decides if the data transmission should take place on parallel paths or single path." }, { "code": null, "e": 8985, "s": 8936, "text": "It performs multiplexing, splitting on the data." }, { "code": null, "e": 9034, "s": 8985, "text": "It performs multiplexing, splitting on the data." }, { "code": null, "e": 9143, "s": 9034, "text": "It breaks the data groups into smaller units so that they are handled more efficiently by the network layer." }, { "code": null, "e": 9252, "s": 9143, "text": "It breaks the data groups into smaller units so that they are handled more efficiently by the network layer." 
}, { "code": null, "e": 9304, "s": 9252, "text": "The Session layer performs the following functions:" }, { "code": null, "e": 9392, "s": 9304, "text": "Manages the messages and synchronizes conversations between two different applications." }, { "code": null, "e": 9480, "s": 9392, "text": "Manages the messages and synchronizes conversations between two different applications." }, { "code": null, "e": 9565, "s": 9480, "text": "It controls logging on and off, user identification, billing and session management." }, { "code": null, "e": 9650, "s": 9565, "text": "It controls logging on and off, user identification, billing and session management." }, { "code": null, "e": 9707, "s": 9650, "text": "The Presentation layer performs the following functions:" }, { "code": null, "e": 9835, "s": 9707, "text": "This layer makes it sure that the information is delivered in such a form that the receiving system will understand and use it." }, { "code": null, "e": 9963, "s": 9835, "text": "This layer makes it sure that the information is delivered in such a form that the receiving system will understand and use it." }, { "code": null, "e": 10019, "s": 9963, "text": "The Application layer performs the following functions:" }, { "code": null, "e": 10174, "s": 10019, "text": "It provides different services such as manipulation of information in several ways, retransferring the files of information, distributing the results etc." }, { "code": null, "e": 10329, "s": 10174, "text": "It provides different services such as manipulation of information in several ways, retransferring the files of information, distributing the results etc." }, { "code": null, "e": 10423, "s": 10329, "text": "The functions such as LOGIN or password checking are also performed by the application layer." }, { "code": null, "e": 10517, "s": 10423, "text": "The functions such as LOGIN or password checking are also performed by the application layer." }, { "code": null, "e": 10652, "s": 10517, "text": "TCP/IP model is practical model and is used in the Internet. TCP/IP is acronym of Transmission Control Protocol and Internet Protocol." }, { "code": null, "e": 10831, "s": 10652, "text": "The TCP/IP model combines the two layers (Physical and Data link layer) into one layer i.e. Host-to-Network layer. The following diagram shows the various layers of TCP/IP model:" }, { "code": null, "e": 10913, "s": 10831, "text": "This layer is same as that of the OSI model and performs the following functions:" }, { "code": null, "e": 11068, "s": 10913, "text": "It provides different services such as manipulation of information in several ways, retransferring the files of information, distributing the results etc." }, { "code": null, "e": 11223, "s": 11068, "text": "It provides different services such as manipulation of information in several ways, retransferring the files of information, distributing the results etc." }, { "code": null, "e": 11317, "s": 11223, "text": "The functions such as LOGIN or password checking are also performed by the application layer." }, { "code": null, "e": 11411, "s": 11317, "text": "The functions such as LOGIN or password checking are also performed by the application layer." }, { "code": null, "e": 11530, "s": 11411, "text": "It does the same functions as that of transport layer in OSI model. Here are the key points regarding transport layer:" }, { "code": null, "e": 11588, "s": 11530, "text": "It uses TCP and UDP protocol for end to end transmission." 
}, { "code": null, "e": 11646, "s": 11588, "text": "It uses TCP and UDP protocol for end to end transmission." }, { "code": null, "e": 11696, "s": 11646, "text": "TCP is reliable and connection oriented protocol." }, { "code": null, "e": 11746, "s": 11696, "text": "TCP is reliable and connection oriented protocol." }, { "code": null, "e": 11777, "s": 11746, "text": "TCP also handles flow control." }, { "code": null, "e": 11808, "s": 11777, "text": "TCP also handles flow control." }, { "code": null, "e": 11899, "s": 11808, "text": "The UDP is not reliable and a connection less protocol also does not perform flow control." }, { "code": null, "e": 11990, "s": 11899, "text": "The UDP is not reliable and a connection less protocol also does not perform flow control." }, { "code": null, "e": 12223, "s": 11990, "text": "The function of this layer is to allow the host to insert packets into network and then make them travel independently to the destination. However, the order of receiving the packet can be different from the sequence they were sent." }, { "code": null, "e": 12424, "s": 12223, "text": "This is the lowest layer in TCP/IP model. The host has to connect to network using some protocol, so that it can send IP packets over it. This protocol varies from host to host and network to network." }, { "code": null, "e": 12537, "s": 12424, "text": "The Domain name system comprises of Domain Names, Domain Name Space, Name Server that have been described below:" }, { "code": null, "e": 12766, "s": 12537, "text": "Domain Name is a symbolic string associated with an IP address. There are several domain names available; some of them are generic such as com, edu, gov, net etc, while some country level domain names such as au, in, za, us etc." }, { "code": null, "e": 12828, "s": 12766, "text": "The following table shows the Generic Top-Level Domain names:" }, { "code": null, "e": 12890, "s": 12828, "text": "The following table shows the Country top-level domain names:" }, { "code": null, "e": 13103, "s": 12890, "text": "The domain name space refers a hierarchy in the internet naming structure. This hierarchy has multiple levels (from 0 to 127), with a root at the top. The following diagram shows the domain name space hierarchy:" }, { "code": null, "e": 13254, "s": 13103, "text": "In the above diagram each subtree represents a domain. Each domain can be partitioned into sub domains and these can be further partitioned and so on." }, { "code": null, "e": 13516, "s": 13254, "text": "Name server contains the DNS database. This database comprises of various names and their corresponding IP addresses. Since it is not possible for a single server to maintain entire DNS database, therefore, the information is distributed among many DNS servers." }, { "code": null, "e": 13567, "s": 13516, "text": "Hierarchy of server is same as hierarchy of names." }, { "code": null, "e": 13618, "s": 13567, "text": "Hierarchy of server is same as hierarchy of names." }, { "code": null, "e": 13666, "s": 13618, "text": "The entire name space is divided into the zones" }, { "code": null, "e": 13714, "s": 13666, "text": "The entire name space is divided into the zones" }, { "code": null, "e": 13844, "s": 13714, "text": "Zone is collection of nodes (sub domains) under the main domain. The server maintains a database called zone file for every zone." 
}, { "code": null, "e": 14018, "s": 13844, "text": "The information about the nodes in the sub domain is stored in the servers at the lower levels however; the original server keeps reference to these lower levels of servers." }, { "code": null, "e": 14113, "s": 14018, "text": "Following are the three categories of Name Servers that manages the entire Domain Name System:" }, { "code": null, "e": 14125, "s": 14113, "text": "Root Server" }, { "code": null, "e": 14137, "s": 14125, "text": "Root Server" }, { "code": null, "e": 14152, "s": 14137, "text": "Primary Server" }, { "code": null, "e": 14167, "s": 14152, "text": "Primary Server" }, { "code": null, "e": 14184, "s": 14167, "text": "Secondary Server" }, { "code": null, "e": 14201, "s": 14184, "text": "Secondary Server" }, { "code": null, "e": 14374, "s": 14201, "text": "Root Server is the top level server which consists of the entire DNS tree. It does not contain the information about domains but delegates the authority to the other server" }, { "code": null, "e": 14483, "s": 14374, "text": "Primary Server stores a file about its zone. It has authority to create, maintain, and update the zone file." }, { "code": null, "e": 14684, "s": 14483, "text": "Secondary Server transfers complete information about a zone from another server which may be primary or secondary server. The secondary server does not have authority to create or update a zone file." }, { "code": null, "e": 14833, "s": 14684, "text": "DNS translates the domain name into IP address automatically. Following steps will take you through the steps included in domain resolution process:" }, { "code": null, "e": 14936, "s": 14833, "text": "When we type www.tutorialspoint.com into the browser, it asks the local DNS Server for its IP address." }, { "code": null, "e": 15039, "s": 14936, "text": "When we type www.tutorialspoint.com into the browser, it asks the local DNS Server for its IP address." }, { "code": null, "e": 15203, "s": 15039, "text": "When the local DNS does not find the IP address of requested domain name, it forwards the request to the root DNS server and again enquires about IP address of it." }, { "code": null, "e": 15367, "s": 15203, "text": "When the local DNS does not find the IP address of requested domain name, it forwards the request to the root DNS server and again enquires about IP address of it." }, { "code": null, "e": 15510, "s": 15367, "text": "The root DNS server replies with delegation that I do not know the IP address of www.tutorialspoint.com but know the IP address of DNS Server." }, { "code": null, "e": 15653, "s": 15510, "text": "The root DNS server replies with delegation that I do not know the IP address of www.tutorialspoint.com but know the IP address of DNS Server." }, { "code": null, "e": 15722, "s": 15653, "text": "The local DNS server then asks the com DNS Server the same question." }, { "code": null, "e": 15791, "s": 15722, "text": "The local DNS server then asks the com DNS Server the same question." }, { "code": null, "e": 15934, "s": 15791, "text": "The com DNS Server replies the same that it does not know the IP address of www.tutorialspont.com but knows the address of tutorialspoint.com." }, { "code": null, "e": 16077, "s": 15934, "text": "The com DNS Server replies the same that it does not know the IP address of www.tutorialspont.com but knows the address of tutorialspoint.com." }, { "code": null, "e": 16154, "s": 16077, "text": "Then the local DNS asks the tutorialspoint.com DNS server the same question." 
}, { "code": null, "e": 16231, "s": 16154, "text": "Then the local DNS asks the tutorialspoint.com DNS server the same question." }, { "code": null, "e": 16317, "s": 16231, "text": "Then tutorialspoint.com DNS server replies with IP address of www.tutorialspoint.com." }, { "code": null, "e": 16403, "s": 16317, "text": "Then tutorialspoint.com DNS server replies with IP address of www.tutorialspoint.com." }, { "code": null, "e": 16509, "s": 16403, "text": "Now, the local DNS sends the IP address of www.tutorialspoint.com to the computer that sends the request." }, { "code": null, "e": 16615, "s": 16509, "text": "Now, the local DNS sends the IP address of www.tutorialspoint.com to the computer that sends the request." }, { "code": null, "e": 16795, "s": 16615, "text": "There are various Communication Services available that offer exchange of information with individuals or groups. The following table gives a brief introduction to these services:" }, { "code": null, "e": 16973, "s": 16795, "text": "There exist several Information retrieval services offering easy access to information present on the internet. The following table gives a brief introduction to these services:" }, { "code": null, "e": 17119, "s": 16973, "text": "Web services allow exchange of information between applications on the web. Using web services, applications can easily interact with each other." }, { "code": null, "e": 17369, "s": 17119, "text": "WWW is also known as W3. It offers a way to access documents spread over the several servers over the internet. These documents may contain texts, graphics, audio, video, hyperlinks. The hyperlinks allow the users to navigate between the documents." }, { "code": null, "e": 17528, "s": 17369, "text": "Video conferencing or Video teleconferencing is a method of communicating by two-way video and audio transmission with help of telecommunication technologies." }, { "code": null, "e": 17583, "s": 17528, "text": "This mode of conferencing connects two locations only." }, { "code": null, "e": 17682, "s": 17583, "text": "This mode of conferencing connects more than two locations through Multi-point Control Unit (MCU)." }, { "code": null, "e": 17767, "s": 17682, "text": "Transmission Control Protocol (TCP) corresponds to the Transport Layer of OSI Model." }, { "code": null, "e": 17852, "s": 17767, "text": "Transmission Control Protocol (TCP) corresponds to the Transport Layer of OSI Model." }, { "code": null, "e": 17904, "s": 17852, "text": "TCP is a reliable and connection oriented protocol." }, { "code": null, "e": 17956, "s": 17904, "text": "TCP is a reliable and connection oriented protocol." }, { "code": null, "e": 17968, "s": 17956, "text": "TCP offers:" }, { "code": null, "e": 17980, "s": 17968, "text": "TCP offers:" }, { "code": null, "e": 18002, "s": 17980, "text": "Stream Data Transfer." }, { "code": null, "e": 18024, "s": 18002, "text": "Stream Data Transfer." }, { "code": null, "e": 18037, "s": 18024, "text": "Reliability." }, { "code": null, "e": 18050, "s": 18037, "text": "Reliability." }, { "code": null, "e": 18073, "s": 18050, "text": "Efficient Flow Control" }, { "code": null, "e": 18096, "s": 18073, "text": "Efficient Flow Control" }, { "code": null, "e": 18119, "s": 18096, "text": "Full-duplex operation." }, { "code": null, "e": 18142, "s": 18119, "text": "Full-duplex operation." }, { "code": null, "e": 18156, "s": 18142, "text": "Multiplexing." }, { "code": null, "e": 18170, "s": 18156, "text": "Multiplexing." 
}, { "code": null, "e": 18229, "s": 18170, "text": "TCP offers connection oriented end-to-end packet delivery." }, { "code": null, "e": 18288, "s": 18229, "text": "TCP offers connection oriented end-to-end packet delivery." }, { "code": null, "e": 18451, "s": 18288, "text": "TCP ensures reliability by sequencing bytes with a forwarding acknowledgement number that indicates to the destination the next byte the source expect to receive." }, { "code": null, "e": 18614, "s": 18451, "text": "TCP ensures reliability by sequencing bytes with a forwarding acknowledgement number that indicates to the destination the next byte the source expect to receive." }, { "code": null, "e": 18687, "s": 18614, "text": "It retransmits the bytes not acknowledged with in specified time period." }, { "code": null, "e": 18760, "s": 18687, "text": "It retransmits the bytes not acknowledged with in specified time period." }, { "code": null, "e": 18883, "s": 18760, "text": "Internet Protocol is connectionless and unreliable protocol. It ensures no guarantee of successfully transmission of data." }, { "code": null, "e": 18990, "s": 18883, "text": "In order to make it reliable, it must be paired with reliable protocol such as TCP at the transport layer." }, { "code": null, "e": 19084, "s": 18990, "text": "Internet protocol transmits the data in form of a datagram as shown in the following diagram:" }, { "code": null, "e": 19310, "s": 19084, "text": "Like IP, UDP is connectionless and unreliable protocol. It doesn’t require making a connection with the host to exchange data. Since UDP is unreliable protocol, there is no mechanism for ensuring that data sent is received. " }, { "code": null, "e": 19431, "s": 19310, "text": "UDP transmits the data in form of a datagram. The UDP datagram consists of five parts as shown in the following diagram:" }, { "code": null, "e": 19542, "s": 19431, "text": "FTP is used to copy files from one host to another. FTP offers the mechanism for the same in following manner:" }, { "code": null, "e": 19668, "s": 19542, "text": "FTP creates two processes such as Control Process and Data Transfer Process at both ends i.e. at client as well as at server." }, { "code": null, "e": 19794, "s": 19668, "text": "FTP creates two processes such as Control Process and Data Transfer Process at both ends i.e. at client as well as at server." }, { "code": null, "e": 19900, "s": 19794, "text": "FTP establishes two different connections: one is for data transfer and other is for control information." }, { "code": null, "e": 20006, "s": 19900, "text": "FTP establishes two different connections: one is for data transfer and other is for control information." }, { "code": null, "e": 20098, "s": 20006, "text": "Control connection is made between control processes while Data Connection is made between " }, { "code": null, "e": 20190, "s": 20098, "text": "Control connection is made between control processes while Data Connection is made between " }, { "code": null, "e": 20271, "s": 20190, "text": "FTP uses port 21 for the control connection and Port 20 for the data connection." }, { "code": null, "e": 20352, "s": 20271, "text": "FTP uses port 21 for the control connection and Port 20 for the data connection." }, { "code": null, "e": 20653, "s": 20352, "text": "Trivial File Transfer Protocol is also used to transfer the files but it transfers the files without authentication. Unlike FTP, TFTP does not separate control and data information. 
Since there is no authentication exists, TFTP lacks in security features therefore it is not recommended to use TFTP. " }, { "code": null, "e": 20664, "s": 20653, "text": "Key points" }, { "code": null, "e": 20761, "s": 20664, "text": "TFTP makes use of UDP for data transport. Each TFTP message is carried in separate UDP datagram." }, { "code": null, "e": 20858, "s": 20761, "text": "TFTP makes use of UDP for data transport. Each TFTP message is carried in separate UDP datagram." }, { "code": null, "e": 20926, "s": 20858, "text": "The first two bytes of a TFTP message specify the type of message. " }, { "code": null, "e": 20994, "s": 20926, "text": "The first two bytes of a TFTP message specify the type of message. " }, { "code": null, "e": 21089, "s": 20994, "text": "The TFTP session is initiated when a TFTP client sends a request to upload or download a file." }, { "code": null, "e": 21184, "s": 21089, "text": "The TFTP session is initiated when a TFTP client sends a request to upload or download a file." }, { "code": null, "e": 21269, "s": 21184, "text": "The request is sent from an ephemeral UDP port to the UDP port 69 of an TFTP server." }, { "code": null, "e": 21354, "s": 21269, "text": "The request is sent from an ephemeral UDP port to the UDP port 69 of an TFTP server." }, { "code": null, "e": 21615, "s": 21354, "text": "Telnet is a protocol used to log in to remote computer on the internet. There are a number of Telnet clients having user friendly user interface. The following diagram shows a person is logged in to computer A, and from there, he remote logged into computer B." }, { "code": null, "e": 21868, "s": 21615, "text": "HTTP is a communication protocol. It defines mechanism for communication between browser and the web server. It is also called request and response protocol because the communication between browser and server takes place in request and response pairs." }, { "code": null, "e": 21916, "s": 21868, "text": "HTTP request comprises of lines which contains:" }, { "code": null, "e": 21929, "s": 21916, "text": "Request line" }, { "code": null, "e": 21942, "s": 21929, "text": "Request line" }, { "code": null, "e": 21956, "s": 21942, "text": "Header Fields" }, { "code": null, "e": 21970, "s": 21956, "text": "Header Fields" }, { "code": null, "e": 21983, "s": 21970, "text": "Message body" }, { "code": null, "e": 21996, "s": 21983, "text": "Message body" }, { "code": null, "e": 22007, "s": 21996, "text": "Key Points" }, { "code": null, "e": 22091, "s": 22007, "text": "The first line i.e. the Request line specifies the request method i.e. Get or Post." }, { "code": null, "e": 22175, "s": 22091, "text": "The first line i.e. the Request line specifies the request method i.e. Get or Post." }, { "code": null, "e": 22293, "s": 22175, "text": "The second line specifies the header which indicates the domain name of the server from where index.htm is retrieved." }, { "code": null, "e": 22411, "s": 22293, "text": "The second line specifies the header which indicates the domain name of the server from where index.htm is retrieved." }, { "code": null, "e": 22496, "s": 22411, "text": "Like HTTP request, HTTP response also has certain structure. 
HTTP response contains:" }, { "code": null, "e": 22508, "s": 22496, "text": "Status line" }, { "code": null, "e": 22520, "s": 22508, "text": "Status line" }, { "code": null, "e": 22528, "s": 22520, "text": "Headers" }, { "code": null, "e": 22536, "s": 22528, "text": "Headers" }, { "code": null, "e": 22549, "s": 22536, "text": "Message body" }, { "code": null, "e": 22562, "s": 22549, "text": "Message body" }, { "code": null, "e": 22752, "s": 22562, "text": "Email is a service which allows us to send the message in electronic mode over the internet. It offers an efficient, inexpensive and real time mean of distributing information among people." }, { "code": null, "e": 22925, "s": 22752, "text": "SMTP stands for Simple Mail Transfer Protocol. It was first proposed in 1982. It is a standard protocol used for sending e-mail efficiently and reliably over the internet." }, { "code": null, "e": 22937, "s": 22925, "text": "Key Points:" }, { "code": null, "e": 22973, "s": 22937, "text": "SMTP is application level protocol." }, { "code": null, "e": 23009, "s": 22973, "text": "SMTP is application level protocol." }, { "code": null, "e": 23047, "s": 23009, "text": "SMTP is connection oriented protocol." }, { "code": null, "e": 23085, "s": 23047, "text": "SMTP is connection oriented protocol." }, { "code": null, "e": 23114, "s": 23085, "text": "SMTP is text based protocol." }, { "code": null, "e": 23143, "s": 23114, "text": "SMTP is text based protocol." }, { "code": null, "e": 23219, "s": 23143, "text": "It handles exchange of messages between e-mail servers over TCP/IP network." }, { "code": null, "e": 23295, "s": 23219, "text": "It handles exchange of messages between e-mail servers over TCP/IP network." }, { "code": null, "e": 23384, "s": 23295, "text": "Apart from transferring e-mail, SMPT also provides notification regarding incoming mail." }, { "code": null, "e": 23473, "s": 23384, "text": "Apart from transferring e-mail, SMPT also provides notification regarding incoming mail." }, { "code": null, "e": 23613, "s": 23473, "text": "When you send e-mail, your e-mail client sends it to your e-mail server which further contacts the recipient mail server using SMTP client." }, { "code": null, "e": 23753, "s": 23613, "text": "When you send e-mail, your e-mail client sends it to your e-mail server which further contacts the recipient mail server using SMTP client." }, { "code": null, "e": 23860, "s": 23753, "text": "These SMTP commands specify the sender’s and receiver’s e-mail address, along with the message to be send." }, { "code": null, "e": 23967, "s": 23860, "text": "These SMTP commands specify the sender’s and receiver’s e-mail address, along with the message to be send." }, { "code": null, "e": 24057, "s": 23967, "text": "The exchange of commands between servers is carried out without intervention of any user." }, { "code": null, "e": 24147, "s": 24057, "text": "The exchange of commands between servers is carried out without intervention of any user." }, { "code": null, "e": 24261, "s": 24147, "text": "In case, message cannot be delivered, an error report is sent to the sender which makes SMTP a reliable protocol." }, { "code": null, "e": 24375, "s": 24261, "text": "In case, message cannot be delivered, an error report is sent to the sender which makes SMTP a reliable protocol." }, { "code": null, "e": 24503, "s": 24375, "text": "IMAP stands for Internet Message Access Protocol. It was first proposed in 1986. 
There exist five versions of IMAP as follows:" }, { "code": null, "e": 24517, "s": 24503, "text": "Original IMAP" }, { "code": null, "e": 24531, "s": 24517, "text": "Original IMAP" }, { "code": null, "e": 24537, "s": 24531, "text": "IMAP2" }, { "code": null, "e": 24543, "s": 24537, "text": "IMAP2" }, { "code": null, "e": 24549, "s": 24543, "text": "IMAP3" }, { "code": null, "e": 24555, "s": 24549, "text": "IMAP3" }, { "code": null, "e": 24564, "s": 24555, "text": "IMAP2bis" }, { "code": null, "e": 24573, "s": 24564, "text": "IMAP2bis" }, { "code": null, "e": 24579, "s": 24573, "text": "IMAP4" }, { "code": null, "e": 24585, "s": 24579, "text": "IMAP4" }, { "code": null, "e": 24597, "s": 24585, "text": "Key Points:" }, { "code": null, "e": 24723, "s": 24597, "text": "IMAP allows the client program to manipulate the e-mail message on the server without downloading them on the local computer." }, { "code": null, "e": 24849, "s": 24723, "text": "IMAP allows the client program to manipulate the e-mail message on the server without downloading them on the local computer." }, { "code": null, "e": 24905, "s": 24849, "text": "The e-mail is hold and maintained by the remote server." }, { "code": null, "e": 24961, "s": 24905, "text": "The e-mail is hold and maintained by the remote server." }, { "code": null, "e": 25145, "s": 24961, "text": "It enables us to take any action such as downloading, delete the mail without reading the mail.It enables us to create, manipulate and delete remote message folders called mail boxes." }, { "code": null, "e": 25329, "s": 25145, "text": "It enables us to take any action such as downloading, delete the mail without reading the mail.It enables us to create, manipulate and delete remote message folders called mail boxes." }, { "code": null, "e": 25375, "s": 25329, "text": "IMAP enables the users to search the e-mails." }, { "code": null, "e": 25421, "s": 25375, "text": "IMAP enables the users to search the e-mails." }, { "code": null, "e": 25497, "s": 25421, "text": "It allows concurrent access to multiple mailboxes on multiple mail servers." }, { "code": null, "e": 25573, "s": 25497, "text": "It allows concurrent access to multiple mailboxes on multiple mail servers." }, { "code": null, "e": 25732, "s": 25573, "text": "POP stands for Post Office Protocol. It is generally used to support a single client. There are several versions of POP but the POP 3 is the current standard." }, { "code": null, "e": 25743, "s": 25732, "text": "Key Points" }, { "code": null, "e": 25799, "s": 25743, "text": "POP is an application layer internet standard protocol." }, { "code": null, "e": 25855, "s": 25799, "text": "POP is an application layer internet standard protocol." }, { "code": null, "e": 25946, "s": 25855, "text": "Since POP supports offline access to the messages, thus requires less internet usage time." }, { "code": null, "e": 26037, "s": 25946, "text": "Since POP supports offline access to the messages, thus requires less internet usage time." }, { "code": null, "e": 26073, "s": 26037, "text": "POP does not allow search facility." }, { "code": null, "e": 26109, "s": 26073, "text": "POP does not allow search facility." }, { "code": null, "e": 26176, "s": 26109, "text": "In order to access the messaged, it is necessary to download them." }, { "code": null, "e": 26243, "s": 26176, "text": "In order to access the messaged, it is necessary to download them." }, { "code": null, "e": 26295, "s": 26243, "text": "It allows only one mailbox to be created on server." 
}, { "code": null, "e": 26347, "s": 26295, "text": "It allows only one mailbox to be created on server." }, { "code": null, "e": 26395, "s": 26347, "text": "It is not suitable for accessing non mail data." }, { "code": null, "e": 26443, "s": 26395, "text": "It is not suitable for accessing non mail data." }, { "code": null, "e": 26529, "s": 26443, "text": "POP commands are generally abbreviated into codes of three or four letters. Eg. STAT." }, { "code": null, "e": 26615, "s": 26529, "text": "POP commands are generally abbreviated into codes of three or four letters. Eg. STAT." }, { "code": null, "e": 26780, "s": 26615, "text": "Email working follows the client server approach. In this client is the mailer i.e. the mail application or mail program and server is a device that manages emails." }, { "code": null, "e": 26946, "s": 26780, "text": "Following example will take you through the basic steps involved in sending and receiving emails and will give you a better understanding of working of email system:" }, { "code": null, "e": 27008, "s": 26946, "text": "Suppose person A wants to send an email message to person B. " }, { "code": null, "e": 27070, "s": 27008, "text": "Suppose person A wants to send an email message to person B. " }, { "code": null, "e": 27170, "s": 27070, "text": "Person A composes the messages using a mailer program i.e. mail client and then select Send option." }, { "code": null, "e": 27270, "s": 27170, "text": "Person A composes the messages using a mailer program i.e. mail client and then select Send option." }, { "code": null, "e": 27352, "s": 27270, "text": "The message is routed to Simple Mail Transfer Protocol to person B’s mail server." }, { "code": null, "e": 27434, "s": 27352, "text": "The message is routed to Simple Mail Transfer Protocol to person B’s mail server." }, { "code": null, "e": 27519, "s": 27434, "text": "The mail server stores the email message on disk in an area designated for person B." }, { "code": null, "e": 27604, "s": 27519, "text": "The mail server stores the email message on disk in an area designated for person B." }, { "code": null, "e": 27701, "s": 27604, "text": "Now, suppose person B is running a POP client and knows how to communicate with B’s mail server." }, { "code": null, "e": 27798, "s": 27701, "text": "Now, suppose person B is running a POP client and knows how to communicate with B’s mail server." }, { "code": null, "e": 28039, "s": 27798, "text": "It will periodically poll the POP server to check if any new email has arrived for B.As in this case, person B has sent an email for person B, so email is forwarded over the network to B’s PC. This is message is now stored on person B’s PC." }, { "code": null, "e": 28280, "s": 28039, "text": "It will periodically poll the POP server to check if any new email has arrived for B.As in this case, person B has sent an email for person B, so email is forwarded over the network to B’s PC. This is message is now stored on person B’s PC." }, { "code": null, "e": 28363, "s": 28280, "text": "The following diagram gives pictorial representation of the steps discussed above:" }, { "code": null, "e": 28519, "s": 28363, "text": "There are various email service provider available such as Gmail, hotmail, ymail, rediff mail etc. Here we will learn how to create an account using Gmail." }, { "code": null, "e": 28563, "s": 28519, "text": "Open gmail.com and click create an account." }, { "code": null, "e": 28607, "s": 28563, "text": "Open gmail.com and click create an account." 
}, { "code": null, "e": 28675, "s": 28607, "text": "Now a form will appear. Fill your details here and click Next Step." }, { "code": null, "e": 28743, "s": 28675, "text": "Now a form will appear. Fill your details here and click Next Step." }, { "code": null, "e": 28856, "s": 28743, "text": "This step allows you to add your picture. If you don’t want to upload now, you can do it later. Click Next Step." }, { "code": null, "e": 28969, "s": 28856, "text": "This step allows you to add your picture. If you don’t want to upload now, you can do it later. Click Next Step." }, { "code": null, "e": 29024, "s": 28969, "text": "Now a welcome window appears. Click Continue to Gmail." }, { "code": null, "e": 29079, "s": 29024, "text": "Now a welcome window appears. Click Continue to Gmail." }, { "code": null, "e": 29169, "s": 29079, "text": "Wow!! You are done with creating your email account with Gmail. It’s that easy. Isn’t it?" }, { "code": null, "e": 29259, "s": 29169, "text": "Wow!! You are done with creating your email account with Gmail. It’s that easy. Isn’t it?" }, { "code": null, "e": 29328, "s": 29259, "text": "Now you will see your Gmail account as shown in the following image:" }, { "code": null, "e": 29397, "s": 29328, "text": "Now you will see your Gmail account as shown in the following image:" }, { "code": null, "e": 29409, "s": 29397, "text": "Key Points:" }, { "code": null, "e": 29493, "s": 29409, "text": "Gmail manages the mail into three categories namely Primary, Social and Promotions." }, { "code": null, "e": 29577, "s": 29493, "text": "Gmail manages the mail into three categories namely Primary, Social and Promotions." }, { "code": null, "e": 29643, "s": 29577, "text": "Compose option is given at the right to compose an email message." }, { "code": null, "e": 29709, "s": 29643, "text": "Compose option is given at the right to compose an email message." }, { "code": null, "e": 29829, "s": 29709, "text": "Inbox, Starred, Sent mail, Drafts options are available on the left pane which allows you to keep track of your emails." }, { "code": null, "e": 29949, "s": 29829, "text": "Inbox, Starred, Sent mail, Drafts options are available on the left pane which allows you to keep track of your emails." }, { "code": null, "e": 30077, "s": 29949, "text": "Before sending an email, we need to compose a message. When we are composing an email message, we specify the following things:" }, { "code": null, "e": 30106, "s": 30077, "text": "Sender’s address in To field" }, { "code": null, "e": 30135, "s": 30106, "text": "Sender’s address in To field" }, { "code": null, "e": 30152, "s": 30135, "text": "Cc (if required)" }, { "code": null, "e": 30169, "s": 30152, "text": "Cc (if required)" }, { "code": null, "e": 30187, "s": 30169, "text": "Bcc (if required)" }, { "code": null, "e": 30205, "s": 30187, "text": "Bcc (if required)" }, { "code": null, "e": 30230, "s": 30205, "text": "Subject of email message" }, { "code": null, "e": 30255, "s": 30230, "text": "Subject of email message" }, { "code": null, "e": 30260, "s": 30255, "text": "Text" }, { "code": null, "e": 30265, "s": 30260, "text": "Text" }, { "code": null, "e": 30275, "s": 30265, "text": "Signature" }, { "code": null, "e": 30285, "s": 30275, "text": "Signature" }, { "code": null, "e": 30532, "s": 30285, "text": "Once you have specified all the above parameters, It’s time to send the email. 
Every e-mail program offers an interface to access e-mail messages. In Gmail, for example, e-mails are stored under different tabs such as Primary, Social, and Promotions; when you click one of the tabs, it displays a list of e-mails under that tab.

In order to read an e-mail, you just have to click on it. Once you click a particular e-mail, it opens. The opened e-mail may have files attached to it; the attachments are shown at the bottom of the opened e-mail with an option to download them.

After reading an e-mail, you may want to reply to it. To reply, click the Reply option shown at the bottom of the opened e-mail. Once you click Reply, the sender's address is automatically copied into the To field. Below the To field there is a text box where you can type the message. Once you are done entering the message, click the Send button. It's that easy: your e-mail is sent.

It is also possible to send a copy of a message that you have received, along with your own comments if you want. This can be done using the Forward button available in mail client software. The difference between replying and forwarding is that a reply goes to the person who sent the mail, while a forwarded message can be sent to anyone. When you receive a forwarded message, each line of the message is marked with a > character and the Subject field is prefixed with Fw.

If you don't want to keep an e-mail in your inbox, you can delete it by simply selecting the message from the message list and clicking Delete or pressing the appropriate command. Some mail clients store deleted mail in a folder called Deleted Items or Trash, from where you can recover a deleted e-mail.

E-mail hacking can be done in any of the following ways:

- Spam
- Virus
- Phishing

E-mail spamming is the act of sending Unsolicited Bulk E-mail (UBE), which one has not asked for. E-mail spams are junk mails sent by commercial companies as advertisements of their products and services.

Some e-mails may come with attached files containing malicious scripts which, when run on your computer, may destroy your important data.

E-mail phishing is the activity of sending e-mails to a user while claiming to be a legitimate enterprise. Its main purpose is to steal sensitive information such as usernames, passwords, and credit card details. Such e-mails contain links to websites that are infected with malware and direct the user to enter details at a fake website whose look and feel is the same as the legitimate one.
Spam may cause the following problems:

- It floods your e-mail account with unwanted e-mails, which may result in loss of important e-mails if the inbox is full.
- Time and energy are wasted in reviewing and deleting junk e-mails or spam.
- It consumes bandwidth, which slows the speed with which mail is delivered.
- Some unsolicited e-mail may contain viruses that can harm your computer.

The following practices will help you to reduce spam:

- While posting to newsgroups or mailing lists, use a separate e-mail address from the one you use for your personal e-mails.
- Don't give your e-mail address out on websites, as it can easily be harvested for spam.
- Avoid replying to e-mails which you have received from unknown persons.
- Never buy anything in response to a spam that advertises a product.

In order to keep your inbox lightweight, it's good to archive it from time to time. Here are the steps to clean up and archive your Outlook inbox:

- Select the File tab on the mail pane.
- Select the Cleanup Tools button on the account information screen.
- Select Archive from the Cleanup Tools drop-down menu.
- Select the "Archive this folder and all subfolders" option, then click the folder that you want to archive. Select the date from the "Archive items older than:" list. Click Browse to create a new .pst file name and location, then click OK.
}, { "code": null, "e": 35116, "s": 35057, "text": "Select Cleanup Tools button on account information screen." }, { "code": null, "e": 35166, "s": 35116, "text": "Select Archive from cleanup tools drop down menu." }, { "code": null, "e": 35216, "s": 35166, "text": "Select Archive from cleanup tools drop down menu." }, { "code": null, "e": 35448, "s": 35216, "text": "Select Archive this folder and all subfolders option and then click on the folder that you want to archive. Select the date from the Archive items older than: list. Click Browse to create new .pst file name and location. Click \nOK." }, { "code": null, "e": 35680, "s": 35448, "text": "Select Archive this folder and all subfolders option and then click on the folder that you want to archive. Select the date from the Archive items older than: list. Click Browse to create new .pst file name and location. Click \nOK." }, { "code": null, "e": 35844, "s": 35680, "text": "There are several email service providers available in the market with their enabled features such as sending, receiving, drafting, storing an email and much more." }, { "code": null, "e": 35907, "s": 35844, "text": "The following table shows the popular email service providers:" }, { "code": null, "e": 36037, "s": 35907, "text": "Web designing has direct link to visual aspect of a web site. Effective web design is necessary to communicate ideas effectively." }, { "code": null, "e": 36048, "s": 36037, "text": "Key Points" }, { "code": null, "e": 36090, "s": 36048, "text": "Design Plan should include the following:" }, { "code": null, "e": 36130, "s": 36090, "text": "Details about information architecture." }, { "code": null, "e": 36170, "s": 36130, "text": "Details about information architecture." }, { "code": null, "e": 36197, "s": 36170, "text": "Planned structure of site." }, { "code": null, "e": 36224, "s": 36197, "text": "Planned structure of site." }, { "code": null, "e": 36244, "s": 36224, "text": "A site map of pages" }, { "code": null, "e": 36264, "s": 36244, "text": "A site map of pages" }, { "code": null, "e": 36425, "s": 36264, "text": "Wireframe refers to a visual guide to appearance of web pages. It helps to define structre of web site, linking between web pages and layout of visual elements." 
}, { "code": null, "e": 36471, "s": 36425, "text": "Following things are included in a wireframe:" }, { "code": null, "e": 36507, "s": 36471, "text": "Boxes of primary graphical elements" }, { "code": null, "e": 36543, "s": 36507, "text": "Boxes of primary graphical elements" }, { "code": null, "e": 36583, "s": 36543, "text": "Placement of headlines and sub headings" }, { "code": null, "e": 36623, "s": 36583, "text": "Placement of headlines and sub headings" }, { "code": null, "e": 36647, "s": 36623, "text": "Simple layout structure" }, { "code": null, "e": 36671, "s": 36647, "text": "Simple layout structure" }, { "code": null, "e": 36687, "s": 36671, "text": "Calls to action" }, { "code": null, "e": 36703, "s": 36687, "text": "Calls to action" }, { "code": null, "e": 36715, "s": 36703, "text": "Text blocks" }, { "code": null, "e": 36727, "s": 36715, "text": "Text blocks" }, { "code": null, "e": 36801, "s": 36727, "text": "Here is the list of tools that can be used to make effective web designs:" }, { "code": null, "e": 36814, "s": 36801, "text": "Photoshop CC" }, { "code": null, "e": 36827, "s": 36814, "text": "Photoshop CC" }, { "code": null, "e": 36842, "s": 36827, "text": "Illustrator CC" }, { "code": null, "e": 36857, "s": 36842, "text": "Illustrator CC" }, { "code": null, "e": 36864, "s": 36857, "text": "Coda 2" }, { "code": null, "e": 36871, "s": 36864, "text": "Coda 2" }, { "code": null, "e": 36883, "s": 36871, "text": "OmniGraffle" }, { "code": null, "e": 36895, "s": 36883, "text": "OmniGraffle" }, { "code": null, "e": 36908, "s": 36895, "text": "Sublime Text" }, { "code": null, "e": 36921, "s": 36908, "text": "Sublime Text" }, { "code": null, "e": 36928, "s": 36921, "text": "GitHub" }, { "code": null, "e": 36935, "s": 36928, "text": "GitHub" }, { "code": null, "e": 36950, "s": 36935, "text": "Pen and Parer" }, { "code": null, "e": 36965, "s": 36950, "text": "Pen and Parer" }, { "code": null, "e": 36969, "s": 36965, "text": "Vim" }, { "code": null, "e": 36973, "s": 36969, "text": "Vim" }, { "code": null, "e": 36984, "s": 36973, "text": "Imageoptim" }, { "code": null, "e": 36995, "s": 36984, "text": "Imageoptim" }, { "code": null, "e": 37004, "s": 36995, "text": "Sketch 3" }, { "code": null, "e": 37013, "s": 37004, "text": "Sketch 3" }, { "code": null, "e": 37020, "s": 37013, "text": "Heroku" }, { "code": null, "e": 37027, "s": 37020, "text": "Heroku" }, { "code": null, "e": 37033, "s": 37027, "text": "Axure" }, { "code": null, "e": 37039, "s": 37033, "text": "Axure" }, { "code": null, "e": 37046, "s": 37039, "text": "Hype 2" }, { "code": null, "e": 37053, "s": 37046, "text": "Hype 2" }, { "code": null, "e": 37059, "s": 37053, "text": "Slicy" }, { "code": null, "e": 37065, "s": 37059, "text": "Slicy" }, { "code": null, "e": 37075, "s": 37065, "text": "Framer.js" }, { "code": null, "e": 37085, "s": 37075, "text": "Framer.js" }, { "code": null, "e": 37097, "s": 37085, "text": "Image Alpha" }, { "code": null, "e": 37109, "s": 37097, "text": "Image Alpha" }, { "code": null, "e": 37125, "s": 37109, "text": "Emmet LiveStyle" }, { "code": null, "e": 37141, "s": 37125, "text": "Emmet LiveStyle" }, { "code": null, "e": 37148, "s": 37141, "text": "Hammer" }, { "code": null, "e": 37155, "s": 37148, "text": "Hammer" }, { "code": null, "e": 37166, "s": 37155, "text": "Icon Slate" }, { "code": null, "e": 37177, "s": 37166, "text": "Icon Slate" }, { "code": null, "e": 37191, "s": 37177, "text": "JPEGmini Lite" }, { "code": null, "e": 37205, "s": 37191, "text": "JPEGmini Lite" }, { "code": null, 
"e": 37213, "s": 37205, "text": "BugHerd" }, { "code": null, "e": 37221, "s": 37213, "text": "BugHerd" }, { "code": null, "e": 37267, "s": 37221, "text": "A web site includes the following components:" }, { "code": null, "e": 37421, "s": 37267, "text": "Container can be in the form of page’s body tag, an all containing div tag. Without container there would be no place to put the contents of a web page." }, { "code": null, "e": 37577, "s": 37421, "text": "Logo refers to the identity of a website and is used across a company’s various forms of marketing such as business cards, letterhead, brouchers and so on." }, { "code": null, "e": 37701, "s": 37577, "text": "The site’s navigation system should be easy to find and use. Oftenly the anvigation is placed rigth at the top of the page." }, { "code": null, "e": 37778, "s": 37701, "text": "The content on a web site should be relevant to the purpose of the web site." }, { "code": null, "e": 37940, "s": 37778, "text": "Footer is located at the bottom of the page. It usually contains copyright, contract and legal information as well as few links to the main sections of the site." }, { "code": null, "e": 38053, "s": 37940, "text": "It is also called as negative space and refers to any area of page that is not covered by type or illustrations." }, { "code": null, "e": 38134, "s": 38053, "text": "One should be aware of the following common mistakes should always keep in mind:" }, { "code": null, "e": 38200, "s": 38134, "text": "Website not working in any other browser other internet explorer." }, { "code": null, "e": 38266, "s": 38200, "text": "Website not working in any other browser other internet explorer." }, { "code": null, "e": 38315, "s": 38266, "text": "Using cutting edge technology for no good reason" }, { "code": null, "e": 38364, "s": 38315, "text": "Using cutting edge technology for no good reason" }, { "code": null, "e": 38405, "s": 38364, "text": "Sound or video that starts automatically" }, { "code": null, "e": 38446, "s": 38405, "text": "Sound or video that starts automatically" }, { "code": null, "e": 38477, "s": 38446, "text": "Hidden or disguised navigation" }, { "code": null, "e": 38508, "s": 38477, "text": "Hidden or disguised navigation" }, { "code": null, "e": 38528, "s": 38508, "text": "100% flash content." }, { "code": null, "e": 38548, "s": 38528, "text": "100% flash content." }, { "code": null, "e": 38718, "s": 38548, "text": "Web development refers to building website and deploying on the web. Web development requires use of scripting languages both at the server end as well as at client end." }, { "code": null, "e": 38794, "s": 38718, "text": "Before developing a web site once should keep several aspects in mind like:" }, { "code": null, "e": 38823, "s": 38794, "text": "What to put on the web site?" }, { "code": null, "e": 38852, "s": 38823, "text": "What to put on the web site?" }, { "code": null, "e": 38870, "s": 38852, "text": "Who will host it?" }, { "code": null, "e": 38888, "s": 38870, "text": "Who will host it?" }, { "code": null, "e": 38916, "s": 38888, "text": "How to make it interactive?" }, { "code": null, "e": 38944, "s": 38916, "text": "How to make it interactive?" }, { "code": null, "e": 38960, "s": 38944, "text": "How to code it?" }, { "code": null, "e": 38976, "s": 38960, "text": "How to code it?" }, { "code": null, "e": 39023, "s": 38976, "text": "How to create search engine friendly web site?" }, { "code": null, "e": 39070, "s": 39023, "text": "How to create search engine friendly web site?" 
}, { "code": null, "e": 39112, "s": 39070, "text": "How to secure the source code frequently?" }, { "code": null, "e": 39154, "s": 39112, "text": "How to secure the source code frequently?" }, { "code": null, "e": 39215, "s": 39154, "text": "Will the web site design display well in different browsers?" }, { "code": null, "e": 39276, "s": 39215, "text": "Will the web site design display well in different browsers?" }, { "code": null, "e": 39318, "s": 39276, "text": "Will the navigation menus be easy to use?" }, { "code": null, "e": 39360, "s": 39318, "text": "Will the navigation menus be easy to use?" }, { "code": null, "e": 39393, "s": 39360, "text": "Will the web site loads quickly?" }, { "code": null, "e": 39426, "s": 39393, "text": "Will the web site loads quickly?" }, { "code": null, "e": 39464, "s": 39426, "text": "How easily will the site pages print?" }, { "code": null, "e": 39502, "s": 39464, "text": "How easily will the site pages print?" }, { "code": null, "e": 39576, "s": 39502, "text": "How easily will visitors find important details specific to the web site?" }, { "code": null, "e": 39650, "s": 39576, "text": "How easily will visitors find important details specific to the web site?" }, { "code": null, "e": 39710, "s": 39650, "text": "How effectively the style sheets be used on your web sites?" }, { "code": null, "e": 39770, "s": 39710, "text": "How effectively the style sheets be used on your web sites?" }, { "code": null, "e": 39945, "s": 39770, "text": "Web development process includes all the steps that are good to take to build an attractive, effective and responsive website. These steps are shown in the following diagram:" }, { "code": null, "e": 40152, "s": 39945, "text": "Web development tools helps the developer to test and debug the web sites. Now a days the web development tooll come with the web browsers as add-ons. All web browsers have built in tools for this purpose. " }, { "code": null, "e": 40344, "s": 40152, "text": "Thsese tools allow the web developer to use HTML, CSS and JavaScript etc.. These are accessed by hovering over an item on a web page and selecting the “Inspect Element” from the context menu." }, { "code": null, "e": 40420, "s": 40344, "text": "Following are the common featuers that every web development tool exhibits:" }, { "code": null, "e": 40601, "s": 40420, "text": "HTML and DOM viewer allows you to see the DOM as it was rendered. It also allows to make changes to HTML and DOM and see the changes reflected in the page after the change is made." }, { "code": null, "e": 40706, "s": 40601, "text": "Web development tools also helps to inspect the resources that are loaded and available on the web page." }, { "code": null, "e": 40943, "s": 40706, "text": "Profiling refers to get information about the performance of a web page or web application and Auditing provides developers suggestions, after analyzing a page, for optimizations to decerease page load time and increase responsiveness." }, { "code": null, "e": 41022, "s": 40943, "text": "For being a successful web developer, one should possess the following skills:" }, { "code": null, "e": 41073, "s": 41022, "text": "Understanding of client and server side scripting." }, { "code": null, "e": 41124, "s": 41073, "text": "Understanding of client and server side scripting." }, { "code": null, "e": 41206, "s": 41124, "text": "Creating, editing and modifying templates for a CMS or web development framework." 
}, { "code": null, "e": 41288, "s": 41206, "text": "Creating, editing and modifying templates for a CMS or web development framework." }, { "code": null, "e": 41327, "s": 41288, "text": "Testing cross browser inconsistencies." }, { "code": null, "e": 41366, "s": 41327, "text": "Testing cross browser inconsistencies." }, { "code": null, "e": 41405, "s": 41366, "text": "Conducting observational user testing." }, { "code": null, "e": 41444, "s": 41405, "text": "Conducting observational user testing." }, { "code": null, "e": 41544, "s": 41444, "text": "Testing for compliance to specified standards such as accessibility standards in the client region." }, { "code": null, "e": 41644, "s": 41544, "text": "Testing for compliance to specified standards such as accessibility standards in the client region." }, { "code": null, "e": 41706, "s": 41644, "text": "Programming interaction with javaScript, PHP, and Jquery etc." }, { "code": null, "e": 41768, "s": 41706, "text": "Programming interaction with javaScript, PHP, and Jquery etc." }, { "code": null, "e": 41966, "s": 41768, "text": "Web hosting is a service of providing online space for storage of web pages. These web pages are made available via World Wide Web. The companies which offer website hosting are known as Web hosts." }, { "code": null, "e": 42267, "s": 41966, "text": "The servers on which web site is hosted remain switched on 24 x7. These servers are run by web hosting companies. Each server has its own IP address. Since IP addresses are difficult to remember therefore, webmaster points their domain name to the IP address of the server their website is stored on." }, { "code": null, "e": 42486, "s": 42267, "text": "It is not possible to host your website on your local computer, to do so you would have to leave your computer on 24 hours a day. This is not practical and cheaper as well. This is where web hosting companies comes in." }, { "code": null, "e": 42580, "s": 42486, "text": "The following table describes different types of hosting that can be availed as per the need:" }, { "code": null, "e": 42646, "s": 42580, "text": "Following are the several companies offering web hosting service:" }, { "code": null, "e": 42871, "s": 42646, "text": "Websites are always to prone to security risks. Cyber crime impacts your business by hacking your website. Your website is then used for hacking assaults that install malicious software or malware on your visitor’s computer." }, { "code": null, "e": 42969, "s": 42871, "text": "It is mandatory to keep you software updated. It plays vital role in keeping your website secure." }, { "code": null, "e": 43178, "s": 42969, "text": "It is an attempt by the hackers to manipulate your database. It is easy to insert rogue code into your query that can be used to manipulate your database such as change tables, get information or delete data." }, { "code": null, "e": 43375, "s": 43178, "text": "It allows the attackers to inject client side script into web pages. Therefore, while creating a form It is good to endure that you check the data being submitted and encode or strip out any HTML." }, { "code": null, "e": 43596, "s": 43375, "text": "You need to be careful about how much information to be given in the error messages. For example, if the user fails to log in the error message should not let the user know which field is incorrect: username or password." }, { "code": null, "e": 43668, "s": 43596, "text": "The validation should be performed on both server side and client side." 
}, { "code": null, "e": 43864, "s": 43668, "text": "It is good to enforce password requirements such as of minimum of eight characters, including upper case, lower case and special character. It will help to protect user’s information in long run." }, { "code": null, "e": 43971, "s": 43864, "text": "The file uploaded by the user may contain a script that when executed on the server opens up your website." }, { "code": null, "e": 44090, "s": 43971, "text": "It is good practice to use SSL protocol while passing personal information between website and web server or database." }, { "code": null, "e": 44239, "s": 44090, "text": "A technical definition of the World Wide Web is : all the resources and users on the Internet that are using the Hypertext Transfer Protocol (HTTP)." }, { "code": null, "e": 44373, "s": 44239, "text": "A broader definition comes from the organization that Web inventor Tim Berners-Lee helped found, the World Wide Web Consortium (W3C)." }, { "code": null, "e": 44477, "s": 44373, "text": "The World Wide Web is the universe of network-accessible information, an embodiment of human knowledge." }, { "code": null, "e": 44663, "s": 44477, "text": "In simple terms, The World Wide Web is a way of exchanging information between computers on the Internet, tying them together into a vast collection of interactive multimedia resources." }, { "code": null, "e": 44909, "s": 44663, "text": "World Wide Web was created by Timothy Berners Lee in 1989 at CERN in Geneva. World Wide Web came into existence as a proposal by him, to allow researchers to work together effectively and efficiently at CERN. Eventually it became World Wide Web." }, { "code": null, "e": 44976, "s": 44909, "text": "The following diagram briefly defines evolution of World Wide Web:" }, { "code": null, "e": 45059, "s": 44976, "text": "WWW architecture is divided into several layers as shown in the following diagram:" }, { "code": null, "e": 45239, "s": 45059, "text": "Uniform Resource Identifier (URI) is used to uniquely identify resources on the web and UNICODE makes it possible to built web pages that can be read and write in human languages." }, { "code": null, "e": 45319, "s": 45239, "text": "XML (Extensible Markup Language) helps to define common syntax in semantic web." }, { "code": null, "e": 45471, "s": 45319, "text": "Resource Description Framework (RDF) framework helps in defining core representation of data for web. RDF represents data about resource in graph form." }, { "code": null, "e": 45574, "s": 45471, "text": "RDF Schema (RDFS) allows more standardized description of taxonomies and other ontological constructs." }, { "code": null, "e": 45674, "s": 45574, "text": "Web Ontology Language (OWL) offers more constructs over RDFS. It comes in following three versions:" }, { "code": null, "e": 45722, "s": 45674, "text": "OWL Lite for taxonomies and simple constraints." }, { "code": null, "e": 45770, "s": 45722, "text": "OWL Lite for taxonomies and simple constraints." }, { "code": null, "e": 45813, "s": 45770, "text": "OWL DL for full description logic support." }, { "code": null, "e": 45856, "s": 45813, "text": "OWL DL for full description logic support." }, { "code": null, "e": 45894, "s": 45856, "text": "OWL for more syntactic freedom of RDF" }, { "code": null, "e": 45932, "s": 45894, "text": "OWL for more syntactic freedom of RDF" }, { "code": null, "e": 46134, "s": 45932, "text": "RIF and SWRL offers rules beyond the constructs that are available from RDFs and OWL. 
- RIF and SWRL offer rules beyond the constructs available from RDFS and OWL, and SPARQL (SPARQL Protocol and RDF Query Language) is an SQL-like language used for querying RDF data and OWL ontologies.
- All semantics and rules executed at the layers below feed the Proof layer, and their results are used to prove deductions.
- Cryptographic means, such as digital signatures, are used to verify the origin of sources.
- On top, the User Interface and Applications layer is built for user interaction.

The WWW works on a client-server approach. The following steps explain how the web works:

- The user enters the URL (say, http://www.tutorialspoint.com) of the web page in the address bar of the web browser.
- The browser then asks the Domain Name Server for the IP address corresponding to www.tutorialspoint.com.
- After receiving the IP address, the browser sends a request for the web page to the web server using the HTTP protocol, which specifies the way the browser and web server communicate.
- The web server receives the request, searches for the requested web page, and, if it is found, returns it to the web browser and closes the HTTP connection.
- The web browser receives the web page, interprets it, and displays its contents in the browser's window.
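These steps can be traced with Python's standard library: socket.gethostbyname performs the DNS lookup, and urllib.request issues the HTTP request and collects the response. A minimal sketch:

```python
import socket
import urllib.request

host = "www.tutorialspoint.com"
url = "http://" + host

ip_address = socket.gethostbyname(host)  # step 2: DNS lookup
print(f"{host} resolves to {ip_address}")

with urllib.request.urlopen(url) as response:  # steps 3-4: HTTP request/response
    print(response.status)                     # e.g. 200
    page = response.read()                     # step 5: content for rendering
print(f"received {len(page)} bytes of HTML")
```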
There has been rapid development in the field of the web, with an impact in almost every area: education, research, technology, commerce, marketing, and so on. The future of the web is therefore almost unpredictable. Apart from this huge development, there are also technical issues that the W3 Consortium has to cope with:

- Work on higher-quality presentation of 3-D information is under development. The W3 Consortium is also looking to enhance the web to fulfill the requirements of global communities, which would include all regional languages and writing systems.
- Work on privacy and security is under way. This includes hiding information, accounting, access control, integrity, and risk management.
- The huge growth of the web may overload the internet and degrade its performance, hence better protocols need to be developed.

A web browser is application software that allows us to view and explore information on the web. The user can request any web page by simply entering a URL into the address bar. A web browser can show text, audio, video, animation, and more, and it is the browser's responsibility to interpret the text and commands contained in the web page. Earlier web browsers were text-based, while nowadays graphical and voice-based web browsers are also available.

There are many web browsers available in the market. All of them interpret and display information on the screen, though their capabilities and structure vary depending on the implementation. The most basic components that every web browser must exhibit are:

- Controller/Dispatcher: works like the control unit in a CPU. It takes input from the keyboard or mouse, interprets it, and makes other services work on the basis of the input it receives.
- Interpreter: receives information from the controller and executes the instructions line by line. Some interpreters are mandatory and some are optional; for example, the HTML interpreter is mandatory while a Java interpreter is optional.
- Client programs: describe the specific protocols used to access particular services. Commonly used client programs cover HTTP, SMTP, FTP, NNTP, and POP.
A web server is a computer where web content is stored. Basically, a web server is used to host web sites, but other servers exist as well, such as gaming, storage, FTP, and e-mail servers.

A web server responds to a client request in either of the following two ways:

- Sending the file associated with the requested URL to the client.
- Generating a response by invoking a script and communicating with a database.

Key Points:

- When a client sends a request for a web page, the web server searches for the requested page; if the page is found, it sends it to the client with an HTTP response.
- If the requested web page is not found, the web server sends the HTTP response: Error 404 Not Found.
- If the client has requested some other resource, the web server contacts the application server and data store to construct the HTTP response.
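The file-serving case above is exactly what Python's built-in http.server module implements: it maps the requested URL to a file under the current directory and replies 404 Not Found otherwise. A minimal sketch; the port is arbitrary:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files from the current directory; unknown URLs get a 404 response.
server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("serving on http://localhost:8000 (Ctrl+C to stop)")
server.serve_forever()
```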
Web server architecture follows one of two approaches: the concurrent approach, or the single-process-event-driven approach.

The concurrent approach allows the web server to handle multiple client requests at the same time. It can be achieved by the following methods:

- Multi-process
- Multi-threaded
- Hybrid

In the multi-process method, a single parent process initiates several single-threaded child processes and distributes incoming requests among them. Each child process is responsible for handling a single request, and it is the parent process's responsibility to monitor the load and decide whether processes should be killed or forked.

In the multi-threaded method, instead of multiple processes, a single process creates multiple threads, each of which handles one connection.

The hybrid method is a combination of the above two approaches: multiple processes are created, and each process initiates multiple threads. Each of the threads handles one connection. Using multiple threads in a single process results in less load on system resources.
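Python's standard socketserver module offers these concurrency models as mixins. The sketch below uses the multi-threaded model, one thread per connection inside a single process; on Unix, socketserver.ForkingTCPServer gives the multi-process model instead. The port and the echo handler are illustrative.

```python
from socketserver import ThreadingTCPServer, BaseRequestHandler

class EchoHandler(BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)  # read one request
        self.request.sendall(data)      # echo it back

# Multi-threaded: one thread per connection inside a single process.
# For the multi-process approach, socketserver.ForkingTCPServer (Unix only)
# forks a child process per connection instead.
server = ThreadingTCPServer(("localhost", 9001), EchoHandler)
server.serve_forever()
```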
}, { "code": null, "e": 54521, "s": 54424, "text": "The proxy server architecture is divided into several modules as shown in the following diagram:" }, { "code": null, "e": 54705, "s": 54521, "text": "This module controls and manages the user interface and provides an easy to use graphical interface, window and a menu to the end user. This menu offers the following functionalities:" }, { "code": null, "e": 54717, "s": 54705, "text": "Start proxy" }, { "code": null, "e": 54729, "s": 54717, "text": "Start proxy" }, { "code": null, "e": 54740, "s": 54729, "text": "Stop proxy" }, { "code": null, "e": 54751, "s": 54740, "text": "Stop proxy" }, { "code": null, "e": 54756, "s": 54751, "text": "Exit" }, { "code": null, "e": 54761, "s": 54756, "text": "Exit" }, { "code": null, "e": 54774, "s": 54761, "text": "Blocking URL" }, { "code": null, "e": 54787, "s": 54774, "text": "Blocking URL" }, { "code": null, "e": 54803, "s": 54787, "text": "Blocking client" }, { "code": null, "e": 54819, "s": 54803, "text": "Blocking client" }, { "code": null, "e": 54830, "s": 54819, "text": "Manage log" }, { "code": null, "e": 54841, "s": 54830, "text": "Manage log" }, { "code": null, "e": 54854, "s": 54841, "text": "Manage cache" }, { "code": null, "e": 54867, "s": 54854, "text": "Manage cache" }, { "code": null, "e": 54888, "s": 54867, "text": "Modify configuration" }, { "code": null, "e": 54909, "s": 54888, "text": "Modify configuration" }, { "code": null, "e": 55058, "s": 54909, "text": "It is the port where new request from the client browser is listened. This module also performs blocking of clients from the list given by the user." }, { "code": null, "e": 55151, "s": 55058, "text": "It contains the main functionality of the proxy server. It performs the following functions:" }, { "code": null, "e": 55244, "s": 55151, "text": "It contains the main functionality of the proxy server. It performs the following functions:" }, { "code": null, "e": 55337, "s": 55244, "text": "It contains the main functionality of the proxy server. It performs the following functions:" }, { "code": null, "e": 55377, "s": 55337, "text": "Read request from header of the client." }, { "code": null, "e": 55417, "s": 55377, "text": "Read request from header of the client." }, { "code": null, "e": 55480, "s": 55417, "text": "Parse the URL and determine whether the URL is blocked or not." }, { "code": null, "e": 55543, "s": 55480, "text": "Parse the URL and determine whether the URL is blocked or not." }, { "code": null, "e": 55582, "s": 55543, "text": "Generate connection to the web server." }, { "code": null, "e": 55621, "s": 55582, "text": "Generate connection to the web server." }, { "code": null, "e": 55657, "s": 55621, "text": "Read the reply from the web server." }, { "code": null, "e": 55693, "s": 55657, "text": "Read the reply from the web server." }, { "code": null, "e": 55899, "s": 55693, "text": "If no copy of page is found in the cache then download the page from web server else will check its last modified date from the reply header and accordingly will read from the cache or server from the web." }, { "code": null, "e": 56105, "s": 55899, "text": "If no copy of page is found in the cache then download the page from web server else will check its last modified date from the reply header and accordingly will read from the cache or server from the web." }, { "code": null, "e": 56200, "s": 56105, "text": "Then it will also check whether caching is allowed or not and accordingly will cache the page." 
}, { "code": null, "e": 56295, "s": 56200, "text": "Then it will also check whether caching is allowed or not and accordingly will cache the page." }, { "code": null, "e": 56395, "s": 56295, "text": "This module is responsible for storing, deleting, clearing and searching of web pages in the cache." }, { "code": null, "e": 56467, "s": 56395, "text": "This module is responsible for viewing, clearing and updating the logs." }, { "code": null, "e": 56601, "s": 56467, "text": "This module helps to create configuration settings which in turn let other modules to perform desired configurations such as caching." }, { "code": null, "e": 56766, "s": 56601, "text": "Search Engine refers to a huge database of internet resources such as web pages, newsgroups, programs, images etc. It helps to locate information on World Wide Web." }, { "code": null, "e": 56932, "s": 56766, "text": "User can search for any information by passing query in form of keywords or phrase. It then searches for relevant information in its database and return to the user." }, { "code": null, "e": 57011, "s": 56932, "text": "Generally there are three basic components of a search engine as listed below:" }, { "code": null, "e": 57052, "s": 57011, "text": "\nWeb Crawler\nDatabase\nSearch Interfaces\n" }, { "code": null, "e": 57064, "s": 57052, "text": "Web Crawler" }, { "code": null, "e": 57076, "s": 57064, "text": "Web Crawler" }, { "code": null, "e": 57085, "s": 57076, "text": "Database" }, { "code": null, "e": 57094, "s": 57085, "text": "Database" }, { "code": null, "e": 57112, "s": 57094, "text": "Search Interfaces" }, { "code": null, "e": 57130, "s": 57112, "text": "Search Interfaces" }, { "code": null, "e": 57239, "s": 57130, "text": "It is also known as spider or bots. It is a software component that traverses the web to gather information." }, { "code": null, "e": 57328, "s": 57239, "text": "All the information on the web is stored in database. It consists of huge web resources." }, { "code": null, "e": 57440, "s": 57328, "text": "This component is an interface between user and the database. It helps the user to search through the database." }, { "code": null, "e": 57744, "s": 57440, "text": "Web crawler, database and the search interface are the major component of a search engine that actually makes search engine to work. Search engines make use of Boolean expression AND, OR, NOT to restrict and widen the results of a search. Following are the steps that are performed by the search engine:" }, { "code": null, "e": 57885, "s": 57744, "text": "The search engine looks for the keyword in the index for predefined database instead of going directly to the web to search for the keyword." }, { "code": null, "e": 58026, "s": 57885, "text": "The search engine looks for the keyword in the index for predefined database instead of going directly to the web to search for the keyword." }, { "code": null, "e": 58145, "s": 58026, "text": "It then uses software to search for the information in the database. This software component is known as web crawler. " }, { "code": null, "e": 58264, "s": 58145, "text": "It then uses software to search for the information in the database. This software component is known as web crawler. " }, { "code": null, "e": 58473, "s": 58264, "text": "Once web crawler finds the pages, the search engine then shows the relevant web pages as a result. These retrieved web pages generally include title of page, size of text portion, first several sentences etc." 
}, { "code": null, "e": 58682, "s": 58473, "text": "Once web crawler finds the pages, the search engine then shows the relevant web pages as a result. These retrieved web pages generally include title of page, size of text portion, first several sentences etc." }, { "code": null, "e": 58738, "s": 58682, "text": "User can click on any of the search results to open it." }, { "code": null, "e": 58794, "s": 58738, "text": "User can click on any of the search results to open it." }, { "code": null, "e": 58875, "s": 58794, "text": "The search engine architecture comprises of the three basic layers listed below:" }, { "code": null, "e": 58910, "s": 58875, "text": "Content collection and refinement." }, { "code": null, "e": 58945, "s": 58910, "text": "Content collection and refinement." }, { "code": null, "e": 58957, "s": 58945, "text": "Search core" }, { "code": null, "e": 58969, "s": 58957, "text": "Search core" }, { "code": null, "e": 59001, "s": 58969, "text": "User and application interfaces" }, { "code": null, "e": 59033, "s": 59001, "text": "User and application interfaces" }, { "code": null, "e": 59211, "s": 59033, "text": "Online chatting is a text-based communication between two or more people over the network. In this, the text message is delivered in real time and people get immediate response." }, { "code": null, "e": 59296, "s": 59211, "text": "Chat etiquette defines rules that are supposed to be followed while online chatting:" }, { "code": null, "e": 59313, "s": 59296, "text": "Avoid chat slang" }, { "code": null, "e": 59330, "s": 59313, "text": "Avoid chat slang" }, { "code": null, "e": 59364, "s": 59330, "text": "Try to spell all words correctly." }, { "code": null, "e": 59398, "s": 59364, "text": "Try to spell all words correctly." }, { "code": null, "e": 59436, "s": 59398, "text": "Don’t write all the words in capital." }, { "code": null, "e": 59474, "s": 59436, "text": "Don’t write all the words in capital." }, { "code": null, "e": 59540, "s": 59474, "text": "Don’t send other chat users private messages without asking them." }, { "code": null, "e": 59606, "s": 59540, "text": "Don’t send other chat users private messages without asking them." }, { "code": null, "e": 59660, "s": 59606, "text": "Abide by the rules created by those running the chat." }, { "code": null, "e": 59714, "s": 59660, "text": "Abide by the rules created by those running the chat." }, { "code": null, "e": 59784, "s": 59714, "text": "Use emoticons to let other person know your feelings and expressions." }, { "code": null, "e": 59854, "s": 59784, "text": "Use emoticons to let other person know your feelings and expressions." }, { "code": null, "e": 59910, "s": 59854, "text": "Following web sites offers browser based chat services:" }, { "code": null, "e": 60083, "s": 59910, "text": "Instant messaging is a software utility that allows IM users to communicate by sending text messages, files, and images. Some of the IMs also support voice and video calls." }, { "code": null, "e": 60295, "s": 60083, "text": "Internet Relay Chat is a protocol developed by Oikarinen in August 1988. It defines set of rules for communication between client and server by some communication mechanism such as chat rooms, over the internet." }, { "code": null, "e": 60592, "s": 60295, "text": "IRC consist of separate networks of IRC servers and machines. These allow IRC clients to connect to IRC. IRC client runs a program client to connect to a server on one of the IRC nets. 
Video conferencing, or video teleconferencing, is a method of communicating by two-way video and audio transmission with the help of telecommunication technologies. Point-to-point conferencing connects two locations only, while multipoint conferencing connects more than two locations through a Multipoint Control Unit (MCU).

Video sharing is an IP Multimedia Subsystem (IMS) service that allows users to switch voice calls to a unidirectional video streaming session. The video streaming session can be initiated by either party, and the video source can be a camera or a pre-recorded video clip.

In order to send the same e-mail to a group of people, an electronic list is created, known as a mailing list. It is the list server which receives and distributes postings and automatically manages subscriptions. A mailing list offers a forum where users from all over the globe can ask questions and have them answered by others with shared interests.

The various types of mailing lists are:

- Response list: contains the group of people who have responded to an offer in some way. These people are customers who have shown interest in a specific product or service.
- Compiled list: prepared by collecting information from various sources such as surveys, telemarketing, and so on.
- Announcement list: created for sending out coupons, new product announcements, and other offers to customers.
- Discussion list: created for sharing views on a specific topic such as computers, the environment, health, or education.

Before joining a mailing list, it is mandatory to subscribe to it. Once you are subscribed, your messages will be sent to all the persons who have subscribed to the list; similarly, if any subscriber posts a message, it will be received by all subscribers of the list.

A number of websites maintain databases of publicly accessible mailing lists. Some of these are:

- http://tile.net./lists
- http://lists.com
- http://topica.com
- http://isoft.com/lists/list-q.html

To subscribe to a list, you need to send an e-mail message to the administrative address of the mailing list containing one or more commands.
For example, if you want to subscribe to Harry Potter list in gurus.com where name of the list server us Majordomo, then you have to send email to [email protected] containing the text, Subscribe harry potter in its body. " }, { "code": null, "e": 63179, "s": 63031, "text": "There are many list servers available, each having its own commands for subscribing to the list. Some of them are described in the following table:" }, { "code": null, "e": 63411, "s": 63179, "text": "Like mailing lists Usenet is also a way of sharing information. It was started by Tom Truscott and Jim Ellis in 1979. Initially it was limited to two sites but today there are thousands of Usenet sites involving millions of people." }, { "code": null, "e": 63590, "s": 63411, "text": "Usenet is a kind of discussion group where people can share views on topic of their interest. The article posted to a newsgroup becomes available to all readers of the newsgroup." }, { "code": null, "e": 63664, "s": 63590, "text": "There are several forms of online education available as discussed below:" }, { "code": null, "e": 63892, "s": 63664, "text": "Online Training is a form of distance learning in which educational information is delivered through internet. There are many online applications. These applications vary from simple downloadable content to structured programs." }, { "code": null, "e": 64072, "s": 63892, "text": "It is also possible to do online certification on specialized courses which add value to your qualification. Many companies offer online certification on a number of technologies." }, { "code": null, "e": 64135, "s": 64072, "text": "There are three types of online certification as listed below:" }, { "code": null, "e": 64145, "s": 64135, "text": "Corporate" }, { "code": null, "e": 64155, "s": 64145, "text": "Corporate" }, { "code": null, "e": 64172, "s": 64155, "text": "Product-specific" }, { "code": null, "e": 64189, "s": 64172, "text": "Product-specific" }, { "code": null, "e": 64205, "s": 64189, "text": "Profession-wide" }, { "code": null, "e": 64221, "s": 64205, "text": "Profession-wide" }, { "code": null, "e": 64301, "s": 64221, "text": "Corporate certifications are made by small organizations for internal purposes." }, { "code": null, "e": 64415, "s": 64301, "text": "Product-specific certifications target at developing and recognizing adeptness with regard to particular product." }, { "code": null, "e": 64501, "s": 64415, "text": "Profession wide certification aims at recognizing expertise in particular profession." }, { "code": null, "e": 64652, "s": 64501, "text": "Online seminar is the one which is conducted over the internet. It is a live seminar and allows the attendees to ask questions via Q&A panel onscreen." }, { "code": null, "e": 64772, "s": 64652, "text": "Online seminar just requires a computer with internet connection, headphones, speakers, and authorization to attend it." }, { "code": null, "e": 64957, "s": 64772, "text": "Webinar is a web based seminar or workshop in which presentation is delivered over the web using conferencing software. The audio part of webinar is delivered through teleconferencing." }, { "code": null, "e": 65145, "s": 64957, "text": "Online conferencing is also a kind of online seminar in which two or more people are involved. It is also performed over the internet. It allows the business persons to do meeting online." 
}, { "code": null, "e": 65296, "s": 65145, "text": "Social Networking refers to grouping of individuals and organizations together via some medium, in order to share thoughts, interests, and activities." }, { "code": null, "e": 65638, "s": 65296, "text": "There are several web based social network services are available such as facebook, twitter, linkedin, Google+ etc. which offer easy to use and interactive interface to connect with people with in the country an overseas as well. There are also several mobile based social networking services in for of apps such as Whatsapp, hike, Line etc." }, { "code": null, "e": 65744, "s": 65638, "text": "The following table describes some of the famous social networking services provided over web and mobile:" }, { "code": null, "e": 65863, "s": 65744, "text": "Internet security refers to securing communication over the internet. It includes specific security protocols such as:" }, { "code": null, "e": 65898, "s": 65863, "text": "Internet Security Protocol (IPSec)" }, { "code": null, "e": 65933, "s": 65898, "text": "Internet Security Protocol (IPSec)" }, { "code": null, "e": 65959, "s": 65933, "text": "Secure Socket Layer (SSL)" }, { "code": null, "e": 65985, "s": 65959, "text": "Secure Socket Layer (SSL)" }, { "code": null, "e": 66212, "s": 65985, "text": "Internet security threats impact the network, data security and other internet connected systems. Cyber criminals have evolved several techniques to threat privacy and integrity of bank accounts, businesses, and organizations." }, { "code": null, "e": 66265, "s": 66212, "text": "Following are some of the internet security threats:" }, { "code": null, "e": 66278, "s": 66265, "text": "Mobile worms" }, { "code": null, "e": 66291, "s": 66278, "text": "Mobile worms" }, { "code": null, "e": 66299, "s": 66291, "text": "Malware" }, { "code": null, "e": 66307, "s": 66299, "text": "Malware" }, { "code": null, "e": 66332, "s": 66307, "text": "PC and Mobile ransomware" }, { "code": null, "e": 66357, "s": 66332, "text": "PC and Mobile ransomware" }, { "code": null, "e": 66431, "s": 66357, "text": "Large scale attacks like Stuxnet that attempts to destroy infrastructure." }, { "code": null, "e": 66505, "s": 66431, "text": "Large scale attacks like Stuxnet that attempts to destroy infrastructure." }, { "code": null, "e": 66526, "s": 66505, "text": "Hacking as a Service" }, { "code": null, "e": 66547, "s": 66526, "text": "Hacking as a Service" }, { "code": null, "e": 66552, "s": 66547, "text": "Spam" }, { "code": null, "e": 66557, "s": 66552, "text": "Spam" }, { "code": null, "e": 66566, "s": 66557, "text": "Phishing" }, { "code": null, "e": 66575, "s": 66566, "text": "Phishing" }, { "code": null, "e": 66779, "s": 66575, "text": "Email phishing is an activity of sending emails to a user claiming to be a legitimate enterprise. Its main purpose is to steal sensitive information such as usernames, passwords, and credit card details." }, { "code": null, "e": 66952, "s": 66779, "text": "Such emails contains link to websites that are infected with malware and direct the user to enter details at a fake website whose look and feels are same to legitimate one." }, { "code": null, "e": 67000, "s": 66952, "text": "Following are the symptoms of a phishing email:" }, { "code": null, "e": 67105, "s": 67000, "text": "Most often such emails contain grammatically incorrect text. Ignore such emails, since it can be a spam." }, { "code": null, "e": 67152, "s": 67105, "text": "Don’t click on any links in suspicious emails." 
}, { "code": null, "e": 67257, "s": 67152, "text": "Such emails contain threat like “your account will be closed if you didn’t respond to an email message”." }, { "code": null, "e": 67387, "s": 67257, "text": "These emails contain graphics that appear to be connected to legitimate website but they actually are connected to fake websites." }, { "code": null, "e": 67571, "s": 67387, "text": "Digital signatures allow us to verify the author, date and time of signatures, authenticate the message contents. It also includes authentication function for additional capabilities." }, { "code": null, "e": 67648, "s": 67571, "text": "There are several reasons to implement digital signatures to communications:" }, { "code": null, "e": 67977, "s": 67648, "text": "Digital signatures help to authenticate the sources of messages. For example, if a bank’s branch office sends a message to central office, requesting for change in balance of an account. If the central office could not authenticate that message is sent from an authorized source, acting of such request could be a grave mistake." }, { "code": null, "e": 68064, "s": 67977, "text": "Once the message is signed, any change in the message would invalidate the signature. " }, { "code": null, "e": 68172, "s": 68064, "text": "By this property, any entity that has signed some information cannot at a later time deny having signed it." }, { "code": null, "e": 68377, "s": 68172, "text": "Firewall is a barrier between Local Area Network (LAN) and the Internet. It allows keeping private resources confidential and minimizes the security risks. It controls network traffic, in both directions." }, { "code": null, "e": 68601, "s": 68377, "text": "The following diagram depicts a sample firewall between LAN and the internet. The connection between the two is the point of vulnerability. Both hardware and the software can be used at this point to filter network traffic." }, { "code": null, "e": 68612, "s": 68601, "text": "Key Points" }, { "code": null, "e": 68700, "s": 68612, "text": "Firewall management must be addressed by both system managers and the network managers." }, { "code": null, "e": 68788, "s": 68700, "text": "Firewall management must be addressed by both system managers and the network managers." }, { "code": null, "e": 68920, "s": 68788, "text": "The amount of filtering a firewall varies. For the same firewall, the amount of filtering may be different in different directions." }, { "code": null, "e": 69052, "s": 68920, "text": "The amount of filtering a firewall varies. For the same firewall, the amount of filtering may be different in different directions." }, { "code": null, "e": 69255, "s": 69052, "text": "HTML stands for Hyper Text Markup Language. It is a formatting language used to define the appearance and contents of a web page. It allows us to organize text, graphics, audio, and video on a web page." }, { "code": null, "e": 69268, "s": 69255, "text": "Key Points:\n" }, { "code": null, "e": 69328, "s": 69268, "text": "The word Hypertext refers to the text which acts as a link." }, { "code": null, "e": 69388, "s": 69328, "text": "The word Hypertext refers to the text which acts as a link." }, { "code": null, "e": 69561, "s": 69388, "text": "The word markup refers to the symbols that are used to define structure of the text. The markup symbols tells the browser how to display the text and are often called tags." }, { "code": null, "e": 69734, "s": 69561, "text": "The word markup refers to the symbols that are used to define structure of the text. 
The markup symbols tells the browser how to display the text and are often called tags." }, { "code": null, "e": 69812, "s": 69734, "text": "The word Language refers to the syntax that is similar to any other language." }, { "code": null, "e": 69890, "s": 69812, "text": "The word Language refers to the syntax that is similar to any other language." }, { "code": null, "e": 69946, "s": 69890, "text": "The following table shows the various versions of HTML:" }, { "code": null, "e": 70055, "s": 69946, "text": "Tag is a command that tells the web browser how to display the text, audio, graphics or video on a web page." }, { "code": null, "e": 70068, "s": 70055, "text": "Key Points:\n" }, { "code": null, "e": 70116, "s": 70068, "text": "Tags are indicated with pair of angle brackets." }, { "code": null, "e": 70164, "s": 70116, "text": "Tags are indicated with pair of angle brackets." }, { "code": null, "e": 70249, "s": 70164, "text": "They start with a less than (<) character and end with a greater than (>) character." }, { "code": null, "e": 70334, "s": 70249, "text": "They start with a less than (<) character and end with a greater than (>) character." }, { "code": null, "e": 70388, "s": 70334, "text": "The tag name is specified between the angle brackets." }, { "code": null, "e": 70442, "s": 70388, "text": "The tag name is specified between the angle brackets." }, { "code": null, "e": 70517, "s": 70442, "text": "Most of the tags usually occur in pair: the start tag and the closing tag." }, { "code": null, "e": 70592, "s": 70517, "text": "Most of the tags usually occur in pair: the start tag and the closing tag." }, { "code": null, "e": 70726, "s": 70592, "text": "The start tag is simply the tag name is enclosed in angle bracket whereas the closing tag is specified including a forward slash (/)." }, { "code": null, "e": 70860, "s": 70726, "text": "The start tag is simply the tag name is enclosed in angle bracket whereas the closing tag is specified including a forward slash (/)." }, { "code": null, "e": 70922, "s": 70860, "text": "Some tags are the empty i.e. they don’t have the closing tag." }, { "code": null, "e": 70984, "s": 70922, "text": "Some tags are the empty i.e. they don’t have the closing tag." }, { "code": null, "e": 71013, "s": 70984, "text": "Tags are not case sensitive." }, { "code": null, "e": 71042, "s": 71013, "text": "Tags are not case sensitive." }, { "code": null, "e": 71156, "s": 71042, "text": "The starting and closing tag name must be the same. For example <b> hello </i> is invalid as both are different." }, { "code": null, "e": 71270, "s": 71156, "text": "The starting and closing tag name must be the same. For example <b> hello </i> is invalid as both are different." }, { "code": null, "e": 71380, "s": 71270, "text": "If you don’t specify the angle brackets (<>) for a tag, the browser will treat the tag name as a simple text." }, { "code": null, "e": 71490, "s": 71380, "text": "If you don’t specify the angle brackets (<>) for a tag, the browser will treat the tag name as a simple text." }, { "code": null, "e": 71587, "s": 71490, "text": "The tag can also have attributes to provide additional information about the tag to the browser." }, { "code": null, "e": 71684, "s": 71587, "text": "The tag can also have attributes to provide additional information about the tag to the browser." 
}, { "code": null, "e": 71762, "s": 71684, "text": "The following table shows the Basic HTML tags that define the basic web page:" }, { "code": null, "e": 71810, "s": 71762, "text": "The following code shows how to use basic tags." }, { "code": null, "e": 71936, "s": 71810, "text": "<html>\n <head> Heading goes here...</head>\n <title> Title goes here...</title>\n <body> Body goes here...</body>\n</html>" }, { "code": null, "e": 72006, "s": 71936, "text": "The following table shows the HTML tags used for formatting the text:" }, { "code": null, "e": 72062, "s": 72006, "text": "Following table describe the commonaly used table tags:" }, { "code": null, "e": 72117, "s": 72062, "text": "Following table describe the commonaly used list tags:" }, { "code": null, "e": 72279, "s": 72117, "text": "Frames help us to divide the browser’s window into multiple rectangular regions. Each region contains separate html web page and each of them work independently." }, { "code": null, "e": 72352, "s": 72279, "text": "The following table describes the various tags used for creating frames:" }, { "code": null, "e": 72569, "s": 72352, "text": "Forms are used to input the values. These values are sent to the server for processing. Forms uses input elements such as text fields, check boxes, radio buttons, lists, submit buttons etc. to enter the data into it." }, { "code": null, "e": 72645, "s": 72569, "text": "The following table describes the commonly used tags while creating a form:" }, { "code": null, "e": 72801, "s": 72645, "text": "CSS is acronym of Cascading Style Sheets. It helps to define the presentation of HTML elements as a separate file known as CSS file having .css extension. " }, { "code": null, "e": 73010, "s": 72801, "text": "CSS helps to change formatting of any HTML element by just making changes at one place. All changes made would be reflected automatically to all of the web pages of the website in which that element appeared." }, { "code": null, "e": 73071, "s": 73010, "text": "Following are the four methods to add CSS to HTML documents." }, { "code": null, "e": 73091, "s": 73071, "text": "Inline Style Sheets" }, { "code": null, "e": 73111, "s": 73091, "text": "Inline Style Sheets" }, { "code": null, "e": 73133, "s": 73111, "text": "Embedded Style Sheets" }, { "code": null, "e": 73155, "s": 73133, "text": "Embedded Style Sheets" }, { "code": null, "e": 73177, "s": 73155, "text": "External Style Sheets" }, { "code": null, "e": 73199, "s": 73177, "text": "External Style Sheets" }, { "code": null, "e": 73221, "s": 73199, "text": "Imported Style Sheets" }, { "code": null, "e": 73243, "s": 73221, "text": "Imported Style Sheets" }, { "code": null, "e": 73431, "s": 73243, "text": "Inline Style Sheets are included with HTML element i.e. they are placed inline with the element. \nTo add inline CSS, we have to declare style attribute which can contain any CSS property." }, { "code": null, "e": 73439, "s": 73431, "text": "Syntax:" }, { "code": null, "e": 73506, "s": 73439, "text": "<Tagname STYLE = “ Declaration1 ; Declaration2 “> .... </Tagname>" }, { "code": null, "e": 73570, "s": 73506, "text": "Let’s consider the following example using Inline Style Sheets:" }, { "code": null, "e": 73821, "s": 73570, "text": "<p style=\"color: blue; text-align: left; font-size: 15pt\">\nInline Style Sheets are included with HTML element i.e. 
they are placed inline with the element.\nTo add inline CSS, we have to declare style attribute which can contain any CSS property.\n</p>" }, { "code": null, "e": 73981, "s": 73821, "text": "Embedded Style Sheets are used to apply same appearance to all occurrence of a specific element. These are defined in element by using the <style> element. \n" }, { "code": null, "e": 73988, "s": 73981, "text": "Syntax" }, { "code": null, "e": 74081, "s": 73988, "text": "<head> <title> .... </title>\n<style type =”text/css”>\n .......CSS Rules/Styles....\n</head>" }, { "code": null, "e": 74147, "s": 74081, "text": "Let’s consider the following example using Embedded Style Sheets:" }, { "code": null, "e": 74273, "s": 74147, "text": "<style type=\"text/css\">\n p {color:green; text-align: left; font-size: 10pt}\n h1 { color: red; font-weight: bold}\n</style>" }, { "code": null, "e": 74437, "s": 74273, "text": "External Style Sheets are the separate .css files that contain the CSS rules. These files can be linked to any HTML documents using <link> tag with rel attribute.\n" }, { "code": null, "e": 74445, "s": 74437, "text": "Syntax:" }, { "code": null, "e": 74526, "s": 74445, "text": "<head> <link rel= “stylesheet” type=”text/css” href= “url of css file”>\n</head>" }, { "code": null, "e": 74616, "s": 74526, "text": "In order to create external css and link it to HTML document, follow the following steps:" }, { "code": null, "e": 74737, "s": 74616, "text": "First of all create a CSS file and define all CSS rules for several HTML elements. Let’s name this file as external.css." }, { "code": null, "e": 74858, "s": 74737, "text": "First of all create a CSS file and define all CSS rules for several HTML elements. Let’s name this file as external.css." }, { "code": null, "e": 74980, "s": 74858, "text": "p { \n Color: orange; text-align: left; font-size: 10pt;\n}\nh1 { \n Color: orange; font-weight: bold;\n}" }, { "code": null, "e": 75039, "s": 74980, "text": "Now create HTML document and name it as externaldemo.html." }, { "code": null, "e": 75098, "s": 75039, "text": "Now create HTML document and name it as externaldemo.html." }, { "code": null, "e": 75401, "s": 75098, "text": "<html>\n <head>\n <title> External Style Sheets Demo </title>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"external.css\">\n </head>\n <body>\n <h1> External Style Sheets</h1>\n <p>External Style Sheets are the separate .css files that contain the CSS rules.</p>\n </body>\n</html>" }, { "code": null, "e": 75561, "s": 75401, "text": "Imported Style Sheets allow us to import style rules from other style sheets. To import CSS rules we have to use @import before all the rules in a style sheet." }, { "code": null, "e": 75569, "s": 75561, "text": "Syntax:" }, { "code": null, "e": 75712, "s": 75569, "text": "<head><title> Title Information </title>\n <style type=”text/css”>\n @import URL (cssfilepath)\n ... 
CSS rules...\n </style>\n</head>" }, { "code": null, "e": 75776, "s": 75712, "text": "Let’s consider the following example using Inline Style Sheets:" }, { "code": null, "e": 76077, "s": 75776, "text": "<html>\n <head>\n <title> External Style Sheets Demo </title>\n <style>\n @import url(external.css);\n </style>\n </head>\n <body>\n <h1> External Style Sheets</h1>\n <p>External Style Sheets are the separate .css files that contain the CSS rules.</p>\n </body>\n</html>" }, { "code": null, "e": 76246, "s": 76077, "text": "JavaScript is a lightweight, interpreted programming language with object-oriented capabilities that allows you to build interactivity into otherwise static HTML pages." }, { "code": null, "e": 76261, "s": 76246, "text": "JavaScript is:" }, { "code": null, "e": 76308, "s": 76261, "text": "Lightweight, interpreted programming language." }, { "code": null, "e": 76355, "s": 76308, "text": "Lightweight, interpreted programming language." }, { "code": null, "e": 76407, "s": 76355, "text": "Designed for creating network-centric applications." }, { "code": null, "e": 76459, "s": 76407, "text": "Designed for creating network-centric applications." }, { "code": null, "e": 76502, "s": 76459, "text": "Complementary to and integrated with Java." }, { "code": null, "e": 76545, "s": 76502, "text": "Complementary to and integrated with Java." }, { "code": null, "e": 76587, "s": 76545, "text": "Complementary to and integrated with HTML" }, { "code": null, "e": 76629, "s": 76587, "text": "Complementary to and integrated with HTML" }, { "code": null, "e": 76653, "s": 76629, "text": "Open and cross-platform" }, { "code": null, "e": 76677, "s": 76653, "text": "Open and cross-platform" }, { "code": null, "e": 76807, "s": 76677, "text": "JavaScript statements are the commands to tell the browser to what action to perform. Statements are separated by semicolon (;). " }, { "code": null, "e": 76840, "s": 76807, "text": "Example of JavaScript statement:" }, { "code": null, "e": 76895, "s": 76840, "text": "document.getElementById(\"demo\").innerHTML = \"Welcome\";" }, { "code": null, "e": 76952, "s": 76895, "text": "Following table shows the various JavaScript Statements:" }, { "code": null, "e": 77015, "s": 76952, "text": "JavaScript supports both C-style and C++-style comments, thus:" }, { "code": null, "e": 77113, "s": 77015, "text": "Any text between a // and the end of a line is treated as a comment and is ignored by JavaScript." }, { "code": null, "e": 77211, "s": 77113, "text": "Any text between a // and the end of a line is treated as a comment and is ignored by JavaScript." }, { "code": null, "e": 77308, "s": 77211, "text": "Any text between the characters /* and */ is treated as a comment. This may span multiple lines." }, { "code": null, "e": 77405, "s": 77308, "text": "Any text between the characters /* and */ is treated as a comment. This may span multiple lines." }, { "code": null, "e": 77556, "s": 77405, "text": "JavaScript also recognizes the HTML comment opening sequence <!--. JavaScript treats this as a single-line comment, just as it does the // comment.-->" }, { "code": null, "e": 77707, "s": 77556, "text": "JavaScript also recognizes the HTML comment opening sequence <!--. JavaScript treats this as a single-line comment, just as it does the // comment.-->" }, { "code": null, "e": 77811, "s": 77707, "text": "The HTML comment closing sequence --> is not recognized by JavaScript so it should be written as //-->." 
}, { "code": null, "e": 77915, "s": 77811, "text": "The HTML comment closing sequence --> is not recognized by JavaScript so it should be written as //-->." }, { "code": null, "e": 77924, "s": 77915, "text": "Example:" }, { "code": null, "e": 78197, "s": 77924, "text": "<script language=\"javascript\" type=\"text/javascript\">\n <!--\n\n // this is a comment. It is similar to comments in C++\n\n /*\n * This is a multiline comment in JavaScript\n * It is very similar to comments in C Programming\n */\n //-->\n<script>" }, { "code": null, "e": 78364, "s": 78197, "text": "Variables are referred as named containers for storing information. We can place data into these containers and then refer to the data simply by naming the container." }, { "code": null, "e": 78404, "s": 78364, "text": "Rules to declare variable in JavaScript" }, { "code": null, "e": 78481, "s": 78404, "text": "In JavaScript variable names are case sensitive i.e. a is different from A.\n" }, { "code": null, "e": 78557, "s": 78481, "text": "In JavaScript variable names are case sensitive i.e. a is different from A." }, { "code": null, "e": 78674, "s": 78557, "text": "Variable name can only be started with a underscore ( _ ) or a letter (from a to z or A to Z), or dollar ( $ ) sign." }, { "code": null, "e": 78791, "s": 78674, "text": "Variable name can only be started with a underscore ( _ ) or a letter (from a to z or A to Z), or dollar ( $ ) sign." }, { "code": null, "e": 78841, "s": 78791, "text": "Numbers (0 to 9) can only be used after a letter." }, { "code": null, "e": 78891, "s": 78841, "text": "Numbers (0 to 9) can only be used after a letter." }, { "code": null, "e": 78947, "s": 78891, "text": "No other special character is allowed in variable name." }, { "code": null, "e": 79003, "s": 78947, "text": "No other special character is allowed in variable name." }, { "code": null, "e": 79131, "s": 79003, "text": "Before you use a variable in a JavaScript program, you must declare it. Variables are declared with the var keyword as follows:" }, { "code": null, "e": 79228, "s": 79131, "text": "<script type=\"text/javascript\">\n <!--\n var money;\n var name, age;\n //-->\n</script>" }, { "code": null, "e": 79313, "s": 79228, "text": "Variables can be initialized at time of declaration or after declaration as follows:" }, { "code": null, "e": 79436, "s": 79313, "text": "<script type=\"text/javascript\">\n <!--\n var name = \"Ali\";\n var money;\n money = 2000.50;\n //-->\n</script>" }, { "code": null, "e": 79490, "s": 79436, "text": "There are two kinds of data types as mentioned below:" }, { "code": null, "e": 79510, "s": 79490, "text": "Primitive Data Type" }, { "code": null, "e": 79530, "s": 79510, "text": "Primitive Data Type" }, { "code": null, "e": 79554, "s": 79530, "text": "Non Primitive Data Type" }, { "code": null, "e": 79578, "s": 79554, "text": "Non Primitive Data Type" }, { "code": null, "e": 79633, "s": 79578, "text": "Primitive Data Types are shown in the following table:" }, { "code": null, "e": 79684, "s": 79633, "text": "Following table contains Non primitive Data Types:" }, { "code": null, "e": 79850, "s": 79684, "text": "Function is a group of reusable statements (Code) that can be called any where in a program. In javascript function keyword is used to declare or define a function. " }, { "code": null, "e": 79862, "s": 79850, "text": "Key Points:" }, { "code": null, "e": 79959, "s": 79862, "text": "To define a function use function keyword followed by functionname, followed by parentheses ()." 
}, { "code": null, "e": 80056, "s": 79959, "text": "To define a function use function keyword followed by functionname, followed by parentheses ()." }, { "code": null, "e": 80108, "s": 80056, "text": "In parenthesis, we define parameters or attributes." }, { "code": null, "e": 80160, "s": 80108, "text": "In parenthesis, we define parameters or attributes." }, { "code": null, "e": 80282, "s": 80160, "text": "The group of reusabe statements (code) is enclosed in curly braces {}. This code is executed whenever function is called." }, { "code": null, "e": 80404, "s": 80282, "text": "The group of reusabe statements (code) is enclosed in curly braces {}. This code is executed whenever function is called." }, { "code": null, "e": 80412, "s": 80404, "text": "Syntax:" }, { "code": null, "e": 80469, "s": 80412, "text": "function functionname (p1, p2) {\n function coding...\n}" }, { "code": null, "e": 80653, "s": 80469, "text": "Operators are used to perform operation on one, two or more operands. Operator is represented by a symbol such as +, =, *, % etc. Following are the operators supported by javascript:" }, { "code": null, "e": 80674, "s": 80653, "text": "Arithmetic Operators" }, { "code": null, "e": 80695, "s": 80674, "text": "Arithmetic Operators" }, { "code": null, "e": 80716, "s": 80695, "text": "Comparison Operators" }, { "code": null, "e": 80737, "s": 80716, "text": "Comparison Operators" }, { "code": null, "e": 80771, "s": 80737, "text": "Logical (or Relational) Operators" }, { "code": null, "e": 80805, "s": 80771, "text": "Logical (or Relational) Operators" }, { "code": null, "e": 80826, "s": 80805, "text": "Assignment Operators" }, { "code": null, "e": 80847, "s": 80826, "text": "Assignment Operators" }, { "code": null, "e": 80882, "s": 80847, "text": "Conditional (or ternary) Operators" }, { "code": null, "e": 80917, "s": 80882, "text": "Conditional (or ternary) Operators" }, { "code": null, "e": 80938, "s": 80917, "text": "Arithmetic Operators" }, { "code": null, "e": 80959, "s": 80938, "text": "Arithmetic Operators" }, { "code": null, "e": 81100, "s": 80959, "text": "Control structure actually controls the flow of execution of a program. Following are the several control structure supported by javascript." }, { "code": null, "e": 81160, "s": 81100, "text": "\nif ... else\nswitch case\ndo while loop\nwhile loop\nfor loop\n" }, { "code": null, "e": 81172, "s": 81160, "text": "if ... else" }, { "code": null, "e": 81184, "s": 81172, "text": "if ... else" }, { "code": null, "e": 81196, "s": 81184, "text": "switch case" }, { "code": null, "e": 81208, "s": 81196, "text": "switch case" }, { "code": null, "e": 81222, "s": 81208, "text": "do while loop" }, { "code": null, "e": 81236, "s": 81222, "text": "do while loop" }, { "code": null, "e": 81247, "s": 81236, "text": "while loop" }, { "code": null, "e": 81258, "s": 81247, "text": "while loop" }, { "code": null, "e": 81267, "s": 81258, "text": "for loop" }, { "code": null, "e": 81276, "s": 81267, "text": "for loop" }, { "code": null, "e": 81502, "s": 81276, "text": "PHP is acronym of Hypertext Preprocessor (PHP) is a programming language that allows web developers to create dynamic content that interacts with databases.PHP is basically used for developing web based software applications." }, { "code": null, "e": 81683, "s": 81502, "text": "PHP started out as a small open source project that evolved as more and more people found out how useful it was. Rasmus Lerdorf unleashed the first version of PHP way back in 1994." 
}, { "code": null, "e": 81694, "s": 81683, "text": "Key Points" }, { "code": null, "e": 81756, "s": 81694, "text": "PHP is a recursive acronym for \"PHP: Hypertext Preprocessor\"." }, { "code": null, "e": 81818, "s": 81756, "text": "PHP is a recursive acronym for \"PHP: Hypertext Preprocessor\"." }, { "code": null, "e": 81987, "s": 81818, "text": "PHP is a server side scripting language that is embedded in HTML. It is used to manage dynamic content, databases, session tracking, even build entire e-commerce sites." }, { "code": null, "e": 82156, "s": 81987, "text": "PHP is a server side scripting language that is embedded in HTML. It is used to manage dynamic content, databases, session tracking, even build entire e-commerce sites." }, { "code": null, "e": 82290, "s": 82156, "text": "It is integrated with a number of popular databases, including MySQL, PostgreSQL, Oracle, Sybase, Informix, and Microsoft SQL Server." }, { "code": null, "e": 82424, "s": 82290, "text": "It is integrated with a number of popular databases, including MySQL, PostgreSQL, Oracle, Sybase, Informix, and Microsoft SQL Server." }, { "code": null, "e": 82642, "s": 82424, "text": "PHP is pleasingly zippy in its execution, especially when compiled as an Apache module on the Unix side. The MySQL server, once started, executes even very complex queries with huge result sets in record-setting time." }, { "code": null, "e": 82860, "s": 82642, "text": "PHP is pleasingly zippy in its execution, especially when compiled as an Apache module on the Unix side. The MySQL server, once started, executes even very complex queries with huge result sets in record-setting time." }, { "code": null, "e": 83079, "s": 82860, "text": "PHP supports a large number of major protocols such as POP3, IMAP, and LDAP. PHP4 added support for Java and distributed object architectures (COM and CORBA), making n-tier development a possibility for the first time." }, { "code": null, "e": 83298, "s": 83079, "text": "PHP supports a large number of major protocols such as POP3, IMAP, and LDAP. PHP4 added support for Java and distributed object architectures (COM and CORBA), making n-tier development a possibility for the first time." }, { "code": null, "e": 83407, "s": 83298, "text": "PHP performs system functions, i.e. from files on a system it can create, open, read, write, and close them." }, { "code": null, "e": 83516, "s": 83407, "text": "PHP performs system functions, i.e. from files on a system it can create, open, read, write, and close them." }, { "code": null, "e": 83646, "s": 83516, "text": "PHP can handle forms, i.e. gather data from files, save data to a file, through email you can send data, return data to the user." }, { "code": null, "e": 83776, "s": 83646, "text": "PHP can handle forms, i.e. gather data from files, save data to a file, through email you can send data, return data to the user." }, { "code": null, "e": 83843, "s": 83776, "text": "You add, delete, modify elements within your database through PHP." }, { "code": null, "e": 83910, "s": 83843, "text": "You add, delete, modify elements within your database through PHP." }, { "code": null, "e": 83952, "s": 83910, "text": "Access cookies variables and set cookies." }, { "code": null, "e": 83994, "s": 83952, "text": "Access cookies variables and set cookies." }, { "code": null, "e": 84066, "s": 83994, "text": "Using PHP, you can restrict users to access some pages of your website." 
}, { "code": null, "e": 84138, "s": 84066, "text": "Using PHP, you can restrict users to access some pages of your website." }, { "code": null, "e": 84159, "s": 84138, "text": "It can encrypt data." }, { "code": null, "e": 84180, "s": 84159, "text": "It can encrypt data." }, { "code": null, "e": 84249, "s": 84180, "text": "Five important characteristics make PHP's practical nature possible:" }, { "code": null, "e": 84260, "s": 84249, "text": "Simplicity" }, { "code": null, "e": 84271, "s": 84260, "text": "Simplicity" }, { "code": null, "e": 84282, "s": 84271, "text": "Efficiency" }, { "code": null, "e": 84293, "s": 84282, "text": "Efficiency" }, { "code": null, "e": 84302, "s": 84293, "text": "Security" }, { "code": null, "e": 84311, "s": 84302, "text": "Security" }, { "code": null, "e": 84323, "s": 84311, "text": "Flexibility" }, { "code": null, "e": 84335, "s": 84323, "text": "Flexibility" }, { "code": null, "e": 84347, "s": 84335, "text": "Familiarity" }, { "code": null, "e": 84359, "s": 84347, "text": "Familiarity" }, { "code": null, "e": 84529, "s": 84359, "text": "To get a feel for PHP, first start with simple PHP scripts. Since \"Hello, World!\" is an essential example, first we will create a friendly little \"Hello, World!\" script." }, { "code": null, "e": 84692, "s": 84529, "text": "As mentioned earlier, PHP is embedded in HTML. That means that in amongst your normal HTML (or XHTML if you're cutting-edge) you'll have PHP statements like this:" }, { "code": null, "e": 84816, "s": 84692, "text": "<html>\n <head>\n <title>Hello World</title>\n <body>\n <?php echo \"Hello, World!\";?>\n </body>\n</html>" }, { "code": null, "e": 84850, "s": 84816, "text": "It will produce following result:" }, { "code": null, "e": 84865, "s": 84850, "text": "Hello, World!\n" }, { "code": null, "e": 85178, "s": 84865, "text": "If you examine the HTML output of the above example, you'll notice that the PHP code is not present in the file sent from the server to your Web browser. All of the PHP present in the Web page is processed and stripped from the page; the only thing returned to the client from the Web server is pure HTML output." }, { "code": null, "e": 85290, "s": 85178, "text": "All PHP code must be included inside one of the three special markup tags ate are recognised by the PHP Parser." }, { "code": null, "e": 85399, "s": 85290, "text": "<?php PHP code goes here ?>\n<?php PHP code goes here ?>\n<script language=\"php\"> PHP code goes here </script>" }, { "code": null, "e": 85432, "s": 85399, "text": "\n 61 Lectures \n 8 hours \n" }, { "code": null, "e": 85443, "s": 85432, "text": " Amit Rana" }, { "code": null, "e": 85478, "s": 85443, "text": "\n 13 Lectures \n 2.5 hours \n" }, { "code": null, "e": 85492, "s": 85478, "text": " Raghu Pandey" }, { "code": null, "e": 85523, "s": 85492, "text": "\n 5 Lectures \n 38 mins\n" }, { "code": null, "e": 85543, "s": 85523, "text": " Harshit Srivastava" }, { "code": null, "e": 85578, "s": 85543, "text": "\n 62 Lectures \n 3.5 hours \n" }, { "code": null, "e": 85588, "s": 85578, "text": " YouAccel" }, { "code": null, "e": 85619, "s": 85588, "text": "\n 9 Lectures \n 36 mins\n" }, { "code": null, "e": 85635, "s": 85619, "text": " Korey Sheppard" }, { "code": null, "e": 85667, "s": 85635, "text": "\n 10 Lectures \n 57 mins\n" }, { "code": null, "e": 85690, "s": 85667, "text": " Taurius Litvinavicius" }, { "code": null, "e": 85697, "s": 85690, "text": " Print" }, { "code": null, "e": 85708, "s": 85697, "text": " Add Notes" } ]
Differential Privacy in Deep Learning | by Neeraj Rajkumar Parmaar | Towards Data Science
I would like to thank Mr. Akshay Kulkarni for guiding me on my journey in publishing my first-ever article.

As a large number of our day-to-day activities move online, the amount of personal and sensitive data being recorded is also on the rise. This surge in data has also led to increased use of data analysis tools, in the form of machine learning and deep learning, that are permeating every possible industry. These techniques are also used on sensitive user data to derive actionable insights. The objective of these models is to discover overall patterns rather than an individual's habits.

Deep learning is evolving to become the industry standard in many automation procedures. But it is also infamous for learning the minute and fine details of the training dataset. This aggravates the privacy risk, as the model weights now encode the finer user details which, on hostile inspection, could potentially reveal user information. For example, Fredrikson et al. demonstrated a model-inversion attack that recovers images from a facial recognition system [1]. Given the abundance of freely available data, it is safe to assume that a determined adversary can get hold of the auxiliary information required to extract user information from the model weights.

Differential Privacy is a theory which provides us with certain mathematical guarantees of privacy of user information. It aims to reduce the impact of any one individual's data on the overall result. This means that one would make the same inference about an individual's data whether or not it was present in the input of the analysis. As the number of analyses on the data increases, so does the risk of exposure of user information. The results of differentially private computations are immune to a wide range of privacy attacks.

This is achieved by adding carefully tuned noise (characterized by epsilon) during the computation, making it difficult for attackers to identify any user. This addition of noise also leads to erosion in the accuracy of the computation. Hence there is a trade-off between the accuracy and the privacy protection offered. The level of privacy is measured by epsilon, which is inversely proportional to the extent of privacy protection offered: the higher the epsilon, the lesser the degree of protection of the data and the higher the chance of revealing user information. Achieving pure epsilon-differential privacy is an ideal case and is very difficult to achieve in a practical scenario, hence (ε, δ)-differential privacy is used. Under (ε, δ)-differential privacy, the algorithm is ε-differentially private with probability (1−δ). Hence, the closer δ is to 0, the better. Delta is usually set to the reciprocal of the number of training samples.

Mind you, we aim to safeguard the model, not the data itself, from hostile inspection. This is done by adding noise during the calculation of the model weights, which has the additional advantage of regularizing the model. Specifically, in deep learning we integrate differential privacy by opting for differentially private optimizers, because that is where most of the computation happens. The gradients are first calculated by taking the gradient of the loss w.r.t. the weights. These gradients are then clipped according to the l2_norm_clip parameter, and noise controlled by the noise_multiplier parameter is added to them, as implemented in the TensorFlow Privacy library.
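To make this mechanism concrete, below is a minimal NumPy sketch of one DP-SGD update, following Abadi et al. [2]: every example's gradient is clipped to the l2_norm_clip bound, Gaussian noise with standard deviation noise_multiplier * l2_norm_clip is added to the clipped sum, and the noisy average drives an ordinary descent step. The function and variable names here are illustrative only and are not part of the TensorFlow Privacy API.

import numpy as np

def dp_sgd_step(weights, per_example_grads, l2_norm_clip, noise_multiplier, lr):
    # Clip each per-example gradient so its L2 norm is at most l2_norm_clip
    clipped = [g / max(1.0, np.linalg.norm(g) / l2_norm_clip)
               for g in per_example_grads]
    # Add Gaussian noise with std = noise_multiplier * l2_norm_clip to the clipped sum
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        0.0, noise_multiplier * l2_norm_clip, size=weights.shape)
    # Average over the batch and take a plain gradient descent step
    return weights - lr * noisy_sum / len(per_example_grads)

TensorFlow Privacy performs the same clipping and noising per microbatch inside its differentially private optimizers, which is why those optimizers need the loss in per-example (vector) form.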
We aim to preserve the privacy of the model weights by applying differential privacy in the form of a Differentially Private Stochastic Gradient Descent (DP-SGD) optimizer. Here an attempt is made to apply differential privacy to a deep neural network, VGG19, on the task of image classification on the CIFAR10 dataset, to understand the impact on model performance and privacy. If such a model were trained on sensitive and private images, it would become paramount that these images are not leaked, as they could be put to malicious use.

The TensorFlow Privacy library was used to implement the DP-SGD optimizer and to compute epsilon. The hyperparameters are not very finely tuned to get the maximum accuracy for the benchmark, as we need it only for a comparative study, and excessive tuning might skew the intended comparison. The function that computes epsilon takes the number of steps as input, calculated as (epochs * number of training examples) / batch size. The number of steps is a measure of how many times the model sees the training data. As the number of steps increases, the model sees the training data more often and incorporates its finer details by overfitting, which means the model has a higher chance of revealing user information.

# import paths may vary across tensorflow_privacy versions
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import optimizers
from tensorflow_privacy.privacy.optimizers.dp_optimizer import DPGradientDescentGaussianOptimizer

num_classes = 10

# data loading
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)

# network params
batch_size = 128
dropout = 0.5
weight_decay = 0.0001
iterations = len(x_train) // batch_size

# dpsgd params
dpsgd = True
learning_rate = 0.15
noise_multiplier = 1.0
l2_norm_clip = 1.0
epochs = 100
microbatches = batch_size

if dpsgd:
    optimizer = DPGradientDescentGaussianOptimizer(
        l2_norm_clip=l2_norm_clip,
        noise_multiplier=noise_multiplier,
        num_microbatches=microbatches,
        learning_rate=learning_rate)
    # reduction is set to NONE to get the loss in a vector (per-example) form
    loss = tf.keras.losses.CategoricalCrossentropy(
        reduction=tf.compat.v1.losses.Reduction.NONE)
    print('DPSGD')
else:
    optimizer = optimizers.SGD(lr=learning_rate)
    loss = tf.keras.losses.CategoricalCrossentropy()
    print('Vanilla SGD')

# model is the neural network architecture you want to use (VGG19 here)
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])

# augmenting images
datagen = ImageDataGenerator(horizontal_flip=True,
                             width_shift_range=0.125,
                             height_shift_range=0.125,
                             fill_mode='constant', cval=0.)
datagen.fit(x_train)

# start training
model.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
          steps_per_epoch=iterations, epochs=epochs,
          validation_data=(x_test, y_test))

if dpsgd:
    eps = compute_epsilon(epochs * len(x_train) // batch_size)
    print('Delta = %f, Epsilon = %.2f' % (1 / (len(x_train) * 1.5), eps))
else:
    print('Trained with vanilla non-private SGD optimizer')

(A sketch of the compute_epsilon helper called at the end of this snippet is given after the observations below.)

The benchmark uses the SGD optimizer provided by the Keras library with the same learning rate as DP-SGD. The network parameters are kept the same to make a fair comparison.

Word of caution: since DP-SGD takes weight decay as a parameter, applying weight decay to each layer of the neural network raises an error.

A few observations:

The model training with DP-SGD is slower than the benchmark.
The model training with DP-SGD is noisier than the benchmark, which may be due to the gradient clipping and the addition of noise.
Eventually, the model with DP-SGD achieves a decent performance in comparison with the benchmark.
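The compute_epsilon helper used above is not defined in the snippet. The sketch below follows the pattern of the TensorFlow Privacy tutorials, which account for privacy with the RDP (Rényi Differential Privacy) accountant; exact module paths may differ across library versions. It reads the batch_size and noise_multiplier globals from the training script, assumes the CIFAR10 training set size of 50,000, and uses the delta of 1/(50,000 * 1.5) printed by the script.

from tensorflow_privacy.privacy.analysis.rdp_accountant import compute_rdp, get_privacy_spent

def compute_epsilon(steps):
    # With no noise there is no privacy guarantee at all
    if noise_multiplier == 0.0:
        return float('inf')
    # Renyi divergence orders over which the accountant is evaluated
    orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
    sampling_probability = batch_size / 50000  # chance of an example being in a batch
    rdp = compute_rdp(q=sampling_probability,
                      noise_multiplier=noise_multiplier,
                      steps=steps,
                      orders=orders)
    # Convert the RDP guarantee to (epsilon, delta)-DP at the chosen delta
    return get_privacy_spent(orders, rdp, target_delta=1 / (50000 * 1.5))[0]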
We try to simulate a real-world scenario by making an imbalanced dataset and then training the model. This is done by randomly assigning user-defined weights to each of the classes. We artificially create a severe imbalance that causes the model to underperform. To combat this imbalance, we employ data-level approaches instead of making changes to the model itself. The reduced number of instances causes the epsilon to increase. After oversampling with the Synthetic Minority Oversampling TEchnique (SMOTE), we see an increase in accuracy. This enables us to deal with the imbalance without making changes to the model itself.

Number of instances for each class (10 classes in total):
[3000 3500 250 2000 1000 1500 500 50 5000 2500]

import random
import numpy as np
from imblearn.over_sampling import SMOTE

def imbalance(y, weights, x_train, y_train):
    # Keep only a user-defined fraction of each class to create the imbalance
    random.shuffle(weights)
    indices = []
    for i in range(10):
        inds = np.where(y == i)[0]
        random.shuffle(inds)
        print(i, int(weights[i] * len(x_train)))
        indices += list(inds[0:int(weights[i] * len(x_train))])
    x_train = x_train[indices]
    y_train = y_train[indices]
    return x_train, y_train

weights = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1]
# y_train must hold integer class labels here; apply to_categorical after resampling
x_train, y_train = imbalance(y_train, weights, x_train, y_train)

# Implementing SMOTE
smote = SMOTE(random_state=42)
x_train = x_train.reshape(len(x_train), 32 * 32 * 3)  # image shape is (32, 32, 3)
x_train, y_train = smote.fit_resample(x_train, y_train)
x_train = x_train.reshape(len(x_train), 32, 32, 3)

We observe that:

After oversampling the data with SMOTE, the model performance improves over the model trained on the imbalanced data set.
The model trained on the original balanced data surpasses the model trained on the oversampled data.

From these experiments, we observe that data-level approaches suffice to combat the problem of imbalance even with the differentially private optimizer.

The hyperparameter in the repertoire of TensorFlow Privacy that has a direct impact on epsilon is the noise multiplier.

Noise = 100.0, Epsilon = 0.18
Noise = 10.0, Epsilon = 0.36
Noise = 1.0, Epsilon = 3.44

(These values can be recomputed with the short sweep shown after the observations below.)

We observe that:

Increasing the noise reduces the value of epsilon, which should imply better privacy protection.
It is also known that the level of privacy protection is inversely proportional to model performance.
But we observe no such relation in practice: increasing the noise multiplier has almost no impact on the performance.
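The epsilon values quoted above can be recomputed, up to version differences in the library, by rerunning the accountant for each noise setting with the compute_epsilon helper sketched earlier:

# Sweep the noise multiplier and report the resulting epsilon
steps = epochs * 50000 // batch_size  # total number of gradient updates
for noise_multiplier in [100.0, 10.0, 1.0]:
    print('Noise = %.1f, Epsilon = %.2f' % (noise_multiplier, compute_epsilon(steps)))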
The departure from theory seen when tuning the noise_multiplier may be attributed to the large depth of the model, where the gradient manipulation is rendered ineffective. From the code given in the TensorFlow Privacy source, we see that the value of epsilon is independent of the model training and depends only on the noise_multiplier, batch_size, epochs, and delta values that we select. Epsilon has good theoretical backing as a measure of the risk to privacy. It takes into account all the factors that determine the number of times the model sees the training data, and it makes intuitive sense that the more the model sees the data, the greater the risk of user information being exposed through the model weights. But it still falls short of actually gauging how safe the model weights are against hostile inspection. This is a cause for concern, because we now have no idea whether our model weights are immune to hacking. There is a need for a metric that measures how opaque the model weights are, that is, how little information they reveal about the users.

As discussed in the conclusion, coming up with a metric to judge the privacy of the model is the first and foremost task. It can even be an attack model trying to derive user information from the model weights; the number of training data points the model is able to identify correctly can be quantified as the risk to privacy. Once a reliable measure has been formalized, we can go about extending differentially private optimizers to more complex tasks in Computer Vision and Natural Language Processing, and to other architectures as well.

References

1. M. Fredrikson, S. Jha, and T. Ristenpart. "Model inversion attacks that exploit confidence information and basic countermeasures". In CCS, pages 1322–1333. ACM, 2015.
2. Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016.
3. TensorFlow Privacy, https://github.com/tensorflow/privacy
C++ iomanip Library - setw Function
The C++ function std::setw behaves as if member width were called with n as argument on the stream on which it is inserted/extracted as a manipulator (it can be inserted/extracted on input streams or output streams). It is used to set the field width to be used on output operations.

Following is the declaration for the std::setw function.

setw (int n);

n − Number of characters to be used as field width.

Its return value is unspecified. This function should only be used as a stream manipulator.

Basic guarantee − if an exception is thrown, the stream is in a valid state.

The stream object on which it is inserted/extracted is modified. Concurrent access to the same stream object may introduce data races.

The example below demonstrates the setw function.

#include <iostream>
#include <iomanip>

int main () {
   std::cout << std::setw(10);
   std::cout << 77 << std::endl;
   return 0;
}

Let us compile and run the above program; this will produce the following result, with 77 right-aligned in a field of 10 characters −

        77
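One detail the example above does not show: setw applies only to the very next insertion, whereas a manipulator such as setfill stays in effect until changed. A small illustrative sketch −

#include <iostream>
#include <iomanip>

int main () {
   // Only the first 77 is padded; setw must be repeated for each field.
   std::cout << std::setfill('*') << std::setw(10) << 77 << 77 << std::endl;
   // prints: ********7777
   return 0;
}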
[ { "code": null, "e": 2821, "s": 2603, "text": "The C++ function std::setw behaves as if member width were called with n as argument on the stream on which it is inserted/extracted as a manipulator (it can be inserted/extracted on input streams or output streams)." }, { "code": null, "e": 2889, "s": 2821, "text": "It is used to sets the field width to be used on output operations." }, { "code": null, "e": 2942, "s": 2889, "text": "Following is the declaration for std::setw function." }, { "code": null, "e": 2956, "s": 2942, "text": "setw (int n);" }, { "code": null, "e": 3008, "s": 2956, "text": "n − Number of characters to be used as field width." }, { "code": null, "e": 3091, "s": 3008, "text": "It returns unspecified. This function should only be used as a stream manipulator." }, { "code": null, "e": 3168, "s": 3091, "text": "Basic guarantee − if an exception is thrown, the stream is in a valid state." }, { "code": null, "e": 3303, "s": 3168, "text": "The stream object on which it is inserted/extracted is modified. Concurrent access to the same stream object may introduce data races." }, { "code": null, "e": 3350, "s": 3303, "text": "In below example explains about setw function." }, { "code": null, "e": 3483, "s": 3350, "text": "#include <iostream>\n#include <iomanip>\n\nint main () {\n std::cout << std::setw(10);\n std::cout << 77 << std::endl;\n return 0;\n}" }, { "code": null, "e": 3566, "s": 3483, "text": "Let us compile and run the above program, this will produce the following result −" }, { "code": null, "e": 3570, "s": 3566, "text": "77\n" }, { "code": null, "e": 3577, "s": 3570, "text": " Print" }, { "code": null, "e": 3588, "s": 3577, "text": " Add Notes" } ]
Setting up Python platform for Machine Learning projects | by Aqeel Anwar | Towards Data Science
There are a lot of in-depth tutorials on how to get started with machine learning using Python. These tutorials mainly focus on the use of deep learning frameworks (say TensorFlow, PyTorch, Keras, etc.), such as how to set up a basic supervised learning problem, or how to create a simple neural network and train it. But even before one can start experimenting with such tutorials, a working Python platform must be available on the host machine on which these hands-on exercises can be carried out.

A few years ago when I started with ML, I couldn't find one good tutorial on how to set up a working platform on my machine. There were a lot of options/configurations available and I couldn't decide on any single one. I had to go through a lot of webpages before I could decide on a configuration, and it took me about 2 weeks to finally have a working platform. A couple of months ago one of my colleagues was facing the same problem, so I assisted her with setting up the platform, which saved her quite a bit of time. Hence I decided to write a detailed article about it.

I have divided this article into two parts and will be focusing on them separately.

Part 1: Selecting and installing a Python distribution and IDE
Part 2: Steps required when creating a new project

In this article, I will use the following Python distribution and IDE.

Python Distribution: Anaconda
Python IDE: PyCharm

What is Anaconda? Anaconda is a distribution of the Python and R languages with simplified package management and deployment. Using Anaconda, it is easier to have multiple Python environments with different configurations and to switch between them. The Anaconda package manager makes it easier to resolve conflicts between multiple versions of a package required by different packages. A detailed explanation of the pros and cons of using Anaconda can be found here.

Download & Install: The Anaconda distribution can be downloaded from here. The installation instructions are pretty straightforward.

What is PyCharm? PyCharm is one of many IDEs available for Python. I prefer PyCharm because it is much more user-friendly, powerful, and configurable as compared to other IDEs. It provides integration with git, has its own terminal and Python console, provides support for various handy plugins, and offers a lot of useful keyboard shortcuts.

Download & Install: To download PyCharm, head over to this link, download the latest Community (free) version, and follow the installation instructions.

Different projects that you work on will require different resources and packages, often with conflicting version requirements. So, it is always recommended that you use a separate virtual Python environment for each project. This also makes sure that you don't accidentally overwrite an existing working version of a package with another version, rendering it useless for your current projects.

Following are the steps that you should take whenever creating a new project.
Open the Anaconda Prompt and type

conda create -n myenv python==3.5

This will create a new virtual environment named myenv that comes installed and loaded with Python version 3.5.

Once you have created the environment, you can verify it by activating it using

conda activate myenv     # For Windows
source activate myenv    # For macOS

You can find the location of the created env myenv by typing

which python
# The output will look something like this
# /Users/aqeelanwar/anaconda/envs/myenv/bin/python

We will be using this to locate our environment in step 2.

After activating the environment, you can install any required packages using

# General format:
conda install package_name
# Example:
conda install numpy    # To install numpy

Following are useful conda commands that will come in handy in managing conda environments

# Activate an environment
conda activate env_name

# Deactivate an environment
deactivate           # Windows
source deactivate    # Linux and macOS

# List all the created environments
conda env list

# List the packages installed in an environment
conda list

# Install a package
conda install package_name

# Clone a conda environment
conda create --clone name_env_to_be_cloned --name name_cloned_env

Open PyCharm and select Create New Project.

Select the location and a name for your project (medium_tutorial in this case).

Expand the Project Interpreter option and select Existing interpreter.

Locate your environment by clicking on the three dots on the extreme right under Existing interpreter.

At this point, we will locate our environment myenv (if it is not already in the Project Interpreter list: more on this later) using the location that was displayed by the which python command in step 1. Also, we want this environment to be available for other projects that we create in the future, so we will select the 'Make available to all projects' checkbox.

Hit OK and then Create.

The project will now be using the environment myenv. You can use the built-in terminal of PyCharm to install packages into this environment.

Create a new .py file (say main.py) and run it using

run >> run main.py

If, in the future, you ever want to switch between different conda environments for the same project, you can do so by following the steps below.

PyCharm can only select environments that have been included in its Project Interpreter list.

To add a newly created (say named PyCaffe) environment to the Project Interpreter list, go to

settings >> Project:project_name >> Project Interpreter

Hit the gear icon at the top right and select add.

Select

Conda Environment > Existing environment > <three dots>

and locate the newly created environment and hit OK.

Now the environment has been added to the Project Interpreter list and can be seen in the drop-down menu.

This list shows the existing environments, and whatever environment you select will be used for the project.

Note: The PyCharm terminal doesn't automatically activate the currently selected environment. If you have selected, say, the PyCaffe env from the project interpreter list and now want to install a new package into it, you will have to first activate the environment in the terminal before running conda install package_name. Otherwise, the package will be installed into the previously activated conda environment.

Now you are all set up with the platform. At this point, you can install the desired ML framework (TensorFlow, Keras, PyTorch) and start experimenting with ML tutorials.
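Two optional follow-ups that fit in here, using standard conda/pip commands (the file name environment.yml is just the usual convention):

# Snapshot the active environment so the project stays reproducible
conda env export > environment.yml

# Recreate the same environment later, or on another machine
conda env create -f environment.yml

# Install a framework into the active environment, e.g.
pip install tensorflow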
In this tutorial, we went through the pre-requisites for working on ML projects (or any Python project for that matter). We saw how, using Anaconda and PyCharm, we can have multiple Python environments and switch between them.

If you have any issues in following this tutorial, just describe the issue in a comment below and I will respond with a solution.

If this article was helpful to you, feel free to clap, share and respond to it. If you want to learn more about Machine Learning and Data Science, follow me @Aqeel Anwar or connect with me on LinkedIn.
[ { "code": null, "e": 665, "s": 172, "text": "There are a lot of in-depth tutorials on how to get started with machine learning using python. These tutorials mainly focus on the use of Deep Learning frameworks (say TensorFlow, PyTorch, Keras, etc.) such as how to set up basic supervised learning problem, or how to create a simple neural network and train it, etc. But even before one can start experimenting with such tutorials, a working python platform must be available at the host machine on which these hands-on can be carried out." }, { "code": null, "e": 1236, "s": 665, "text": "A few years ago when I started with ML, I couldn’t find one good tutorial on how to set up a working platform on my machine. There were a lot of options/configurations available and I couldn’t decide on any single one. I had to go through a lot of webpages before I could decide on a configuration and took me about 2 weeks to finally have a working platform. A couple of months ago one of my colleagues was facing the same problem, so I assisted her with setting up the platform which saved her quite a bit of time. Hence I decided to write a detailed article about it." }, { "code": null, "e": 1319, "s": 1236, "text": "I have divided this article into two parts and will be focusing on them separately" }, { "code": null, "e": 1382, "s": 1319, "text": "Part 1: Selecting and installing a python distribution and IDE" }, { "code": null, "e": 1433, "s": 1382, "text": "Part 2: Steps required when creating a new project" }, { "code": null, "e": 1504, "s": 1433, "text": "In this article, I will use the following Python distribution and IDE." }, { "code": null, "e": 1534, "s": 1504, "text": "Python Distribution: Anaconda" }, { "code": null, "e": 1554, "s": 1534, "text": "Python IDE: PyCharm" }, { "code": null, "e": 2016, "s": 1554, "text": "What is Anaconda? Anaconda is a distribution of Python and R language with simplified package management and deployment. Using anaconda, it is easier to have multiple python environments with different configurations and to switch between them. The anaconda package manager makes it easier to resolve conflicts between multiple versions of a package required by different packages. A detailed explanation of the pros and cons of using Anaconda can be found here" }, { "code": null, "e": 2145, "s": 2016, "text": "Download & Install: Anaconda distribution can be downloaded from here. The installation instructions are pretty straightforward." }, { "code": null, "e": 2481, "s": 2145, "text": "What is PyCharm? PyCharm is one of many IDEs available for python. I prefer PyCharm because it is much more user-friendly, powerful, and configurable as compared to other IDEs. It provides integration with git, has its own terminal and python console, provides support for various handy plugins, and a lot of useful keyboard shortcuts." }, { "code": null, "e": 2635, "s": 2481, "text": "Download & Install: To download PyCharm, head over to this link and download the latest Community (free) version and follow the installation instructions" }, { "code": null, "e": 3055, "s": 2635, "text": "A different project that you will be working on, will require different resources and packages with different version requirements. So, it is always recommended that you use a separate virtual python environment for each project. This also makes sure that you don’t accidentally overwrite any of the existing working versions of certain packages with other versions hence rendering it useless for your current projects." 
}, { "code": null, "e": 3133, "s": 3055, "text": "Following are the steps that you should take whenever creating a new project." }, { "code": null, "e": 3171, "s": 3133, "text": "Open Anaconda Prompt Command and type" }, { "code": null, "e": 3205, "s": 3171, "text": "conda create -n myenv python==3.5" }, { "code": null, "e": 3320, "s": 3205, "text": "This will create a new virtual environment named myenv that will come installed and loaded with python version 3.5" }, { "code": null, "e": 3416, "s": 3320, "text": "Once you have created the environment, you can verify it by activating the environment by using" }, { "code": null, "e": 3490, "s": 3416, "text": "conda activate myenv #For Windowssource activate myenv #For MAC OS" }, { "code": null, "e": 3551, "s": 3490, "text": "You can find the location of the created env myenv by typing" }, { "code": null, "e": 3656, "s": 3551, "text": "which python# The output will look something like this# /Users/aqeelanwar/anaconda/envs/myenv/bin/python" }, { "code": null, "e": 3714, "s": 3656, "text": "We will be using this to locate our environment in step 2" }, { "code": null, "e": 3792, "s": 3714, "text": "After activating the environment, you can install any required packages using" }, { "code": null, "e": 3884, "s": 3792, "text": "#General Format:conda install package_name#Example:conda install numpy #To install numpy" }, { "code": null, "e": 3979, "s": 3884, "text": "Following are the useful conda commands that will come in handy in managing conda environments" }, { "code": null, "e": 4369, "s": 3979, "text": "# Activate an environmentconda activate env_name# Deactivate an environmentdeactivate #Windowssource deactivate #Linux and macOS# Listing all the created environmentsconda env list# Listing packages install in an environmentconda list# Installing a packageconda install package_name# Cloning a conda environmentconda create --clone name_env_to_be_cloned --name name_cloned_env" }, { "code": null, "e": 4413, "s": 4369, "text": "Open PyCharm and select Create New Project." }, { "code": null, "e": 4493, "s": 4413, "text": "Select the location and a name for your project (medium_tutorial in this case)." }, { "code": null, "e": 4563, "s": 4493, "text": "Expand the Project Interpreter option and select Existing interpreter" }, { "code": null, "e": 4665, "s": 4563, "text": "Locate your environment by clicking on the three dots on the extreme right under Existing interpreter" }, { "code": null, "e": 5031, "s": 4665, "text": "At this point, we will locate our environment myenv (if it is not already in the Project Interpreter list: more on this later) using the location that was displayed from the which python command in step 1. Also, we want this environment to be available for other projects that we create in the future, so we will select the ‘Make available to all projects’ checkbox" }, { "code": null, "e": 5055, "s": 5031, "text": "Hit OK and then Create." }, { "code": null, "e": 5194, "s": 5055, "text": "The project will now be using the environment myenv. You can use the built-in terminal of PyCharm to install packages to this environment." 
}, { "code": null, "e": 5247, "s": 5194, "text": "Create a new .py file (say main.py) and run it using" }, { "code": null, "e": 5266, "s": 5247, "text": "run >> run main.py" }, { "code": null, "e": 5410, "s": 5266, "text": "If in the future, you ever want to switch between different conda environments for the same project, you can do so by following the steps below" }, { "code": null, "e": 5503, "s": 5410, "text": "PyCharm can only select environments that have been included in its Project Interpreter list" }, { "code": null, "e": 5590, "s": 5503, "text": "To add a newly created (say named PyCaffe) environment to the Project Interpreter list" }, { "code": null, "e": 5646, "s": 5590, "text": "settings >> Project:project_name >> Project Interpreter" }, { "code": null, "e": 5696, "s": 5646, "text": "Hit the gear icon at the right top and select add" }, { "code": null, "e": 5703, "s": 5696, "text": "Select" }, { "code": null, "e": 5759, "s": 5703, "text": "Conda Environment > Existing environment > <three dots>" }, { "code": null, "e": 5811, "s": 5759, "text": "and locate the newly created environment and hit OK" }, { "code": null, "e": 5916, "s": 5811, "text": "Now the environment has been added to the Project Interpreter list and can be seen in the drop-down menu" }, { "code": null, "e": 6023, "s": 5916, "text": "This list shows the existing environments and whatever environment you select will be used for the project" }, { "code": null, "e": 6430, "s": 6023, "text": "Note: PyCharm terminal doesn’t automatically activate the current selected environment. If you have selected, say PyCaffe, env from the project interpreter list, and now want to install a new package in it, you will have to first activate the environment in the terminal and then you can use conda install package_name. Otherwise, the package will be installed in the previously activated conda environment" }, { "code": null, "e": 6600, "s": 6430, "text": "Now you are all set up with the platform. At this point, you can install the desired ML framework (TensorFlow, Keras, PyTorch) and start experimenting with ML tutorials." }, { "code": null, "e": 6825, "s": 6600, "text": "In this tutorial, we went through the pre-requisites for working on ML projects (or any python project for that matter). We saw how using Anaconda and PyCharm we can have multiple python environments and switch between them." }, { "code": null, "e": 6946, "s": 6825, "text": "If you have any issues in following this tutorial, just comment on the issue below and I will respond with the solution." } ]
Principal Component Analysis (PCA) from scratch in Python | by Dario Radečić | Towards Data Science
Principal Component Analysis is a mathematical technique used for dimensionality reduction. Its goal is to reduce the number of features whilst keeping most of the original information. Today we'll implement it from scratch, using pure Numpy.

If you're wondering why PCA is useful for your average machine learning task, here's the list of top 3 benefits:

Reduces training time — due to smaller dataset

Removes noise — by keeping only what's relevant

Makes visualization possible — in cases where you have a maximum of 3 principal components

The last one is a biggie — and we'll see it in action today.

But why is it a biggie? Good question. Imagine that you have a dataset of 10 features and want to visualize it. But how? 10 features = 10 physical dimensions. We as humans kind of suck when it comes to visualizing anything above 3 dimensions — hence the need for dimensionality reduction techniques.

I want to make one important note here — principal component analysis is not a feature selection algorithm. What I mean is that principal component analysis won't give you the top N features like for example forward selection would do. Instead, it will give you N principal components, where N equals the number of original features.

If that sounds confusing, I strongly recommend you watch this video:

The video dives deep into theoretical reasoning and explains everything much better than I'm capable of.

Our agenda for today is as follows:

Load the dataset

Perform PCA

Make awesome visualizations

So without much ado, let's dive in.

I want everything to be super simple here, so I've decided to go with the well-known Iris dataset. It initially has only 4 features — still impossible to visualize. We'll address this visualization issue after applying PCA.

Here are the imports and dataset loading:

import numpy as np
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv')
df.head()

Executing the code above should result in the following data frame:

We also need the feature matrix X and the target vector y for the steps below (in this CSV the target column is named species):

X = df.drop('species', axis=1).values
y = df['species'].values

Let's continue with the PCA itself.

Here is the short summary of the required steps:

Scale the data — we don't want some feature to be voted as "more important" due to scale differences. 10m = 10000mm, but the algorithm isn't aware of meters and millimeters (sorry US readers)

Calculate the covariance matrix — a square matrix giving the covariances between each pair of elements of a random vector

Eigendecomposition — we'll get to that

So let's start with the first (and easiest) one.

I've briefly touched on the idea of why we need to scale the data, so I won't repeat myself here. Think of it as a necessary prerequisite — not only here, but for any machine learning task.

To perform the scaling we'll use the StandardScaler from Scikit-Learn:

from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)
X_scaled[:5]

And that does it for this part. Let's proceed.

Let's take a step back here and understand the difference between variance and covariance. Variance reports variation of a single random variable — let's say the weight of a person, and covariance reports how much two random variables vary — like weight and height of a person.
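For concreteness, these are the standard sample estimators (textbook material, and what np.cov computes by default):

\mathrm{Var}(X) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2

\mathrm{Cov}(X, Y) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)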
On the diagonal of the covariance matrix we have variances, and the other elements are the covariances.

Let's not dive into the math here as you have the video for that part. Here's how to obtain the covariance matrix in Numpy:

features = X_scaled.T
cov_matrix = np.cov(features)
cov_matrix[:5]

Cool. As you can see, the diagonal elements are identical, and the matrix is symmetrical. Up next, eigendecomposition.

Eigendecomposition is a process that decomposes a square matrix into eigenvectors and eigenvalues. Eigenvectors are simple unit vectors, and eigenvalues are coefficients which give the magnitude to the eigenvectors.

We know so far that our covariance matrix is symmetrical. As it turns out, eigenvectors of symmetric matrices are orthogonal. For PCA this means that we have the first principal component which explains most of the variance. Orthogonal to that is the second principal component, which explains most of the remaining variance. This is repeated for N principal components, where N equals the number of original features.

And this turns out to be neat for us — principal components are sorted by percentage of variance explained, so we can decide how many to keep. For example, if we have 100 features originally, but the first 3 principal components explain 95% of the variance, then it makes sense to keep only these 3 for visualizations and model training.

As this isn't a math lecture on eigendecomposition, I think it's time to do some practical work next. Feel free to explore the theoretical part on your own.

We can perform the eigendecomposition through Numpy, and it returns a tuple, where the first element represents eigenvalues and the second one represents eigenvectors:

values, vectors = np.linalg.eig(cov_matrix)
values[:5]

vectors[:5]

Just from this, we can calculate the percentage of explained variance per principal component:

explained_variances = []
for i in range(len(values)):
    explained_variances.append(values[i] / np.sum(values))

print(np.sum(explained_variances), '\n', explained_variances)

The first value is just the sum of explained variances — and must be equal to 1. The second value is an array, representing the explained variance percentage per principal component.

The first two principal components account for around 96% of the variance in the data. Cool.

Let's now dive into some visualizations where we can see the clear purpose of applying PCA.

Previously we came to the conclusion that we as humans can't see anything above 3 dimensions. The Iris dataset had 4 dimensions initially (4 features), but after applying PCA we've managed to explain most of the variance with only 2 principal components.

Now we'll create a Pandas DataFrame object consisting of those two components, alongside the target class. Here's the code:

projected_1 = X_scaled.dot(vectors.T[0])
projected_2 = X_scaled.dot(vectors.T[1])

res = pd.DataFrame(projected_1, columns=['PC1'])
res['PC2'] = projected_2
res['Y'] = y
res.head()

Okay, and now with the power of Python's visualization libraries, let's first visualize this dataset in 1 dimension — as a line. To do so we'll need to ditch the second principal component. The easiest way is to hardcode Y values as zeros, as the scatter plot requires values for both X and Y axis:

import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(20, 10))
sns.scatterplot(res['PC1'], [0] * len(res), hue=res['Y'], s=200)

Just look at how separable the Setosa class is. Virginica and Versicolor are tougher to classify, but we should still get most of the classifications correct with only a single principal component.
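One caveat worth adding before moving on: np.linalg.eig doesn't guarantee any particular ordering of the returned eigenvalues, so before talking about "the first two" components it's safer to sort them explicitly:

# np.linalg.eig makes no ordering promise, so sort components
# by eigenvalue, largest first. The eigenvectors are the
# *columns* of `vectors`, hence the column indexing.
order = np.argsort(values)[::-1]
values = values[order]
vectors = vectors[:, order]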
Let's now see how this looks in a 2D space, this time with the second principal component on the Y axis instead of hardcoded zeros:

plt.figure(figsize=(20, 10))
sns.scatterplot(res['PC1'], res['PC2'], hue=res['Y'], s=100)

Awesome. For fun, try to include the third principal component and plot a 3D scatter plot.

And that does it for this article. Let's wrap things up in the next section.

Until now I've seen either purely mathematical or purely library-based articles on PCA. It's easy to do it with Scikit-Learn, but I wanted to take a more manual approach here because there's a lack of articles online which do so.

I hope you've managed to follow along and that this "abstract concept" of dimensionality reduction isn't so abstract anymore.

Thanks for reading.

Loved the article? Become a Medium member to continue learning without limits. I'll receive a portion of your membership fee if you use the following link, with no extra cost to you.
[ { "code": null, "e": 415, "s": 172, "text": "Principal Component Analysis is a mathematical technique used for dimensionality reduction. Its goal is to reduce the number of features whilst keeping most of the original information. Today we’ll implement it from scratch, using pure Numpy." }, { "code": null, "e": 528, "s": 415, "text": "If you’re wondering why PCA is useful for your average machine learning task, here’s the list of top 3 benefits:" }, { "code": null, "e": 575, "s": 528, "text": "Reduces training time — due to smaller dataset" }, { "code": null, "e": 623, "s": 575, "text": "Removes noise — by keeping only what’s relevant" }, { "code": null, "e": 714, "s": 623, "text": "Makes visualization possible — in cases where you have a maximum of 3 principal components" }, { "code": null, "e": 775, "s": 714, "text": "The last one is a biggie — and we’ll see it in action today." }, { "code": null, "e": 1075, "s": 775, "text": "But why is it a biggie? Good question. Imagine that you have a dataset of 10 features and want to visualize it. But how? 10 features = 10 physical dimensions. We as humans kind of suck when it comes to visualizing anything above 3 dimensions — hence the need for dimensionality reduction techniques." }, { "code": null, "e": 1409, "s": 1075, "text": "I want to make one important note here — principal component analysis is not a feature selection algorithm. What I mean is that principal component analysis won’t give you the top N features like for example forward selection would do. Instead, it will give you N principal components, where N equals the number of original features." }, { "code": null, "e": 1478, "s": 1409, "text": "If that sounds confusing, I strongly recommend you watch this video:" }, { "code": null, "e": 1583, "s": 1478, "text": "The video dives deep into theoretical reasoning and explains everything much better than I’m capable of." }, { "code": null, "e": 1619, "s": 1583, "text": "Our agenda for today is as follows:" }, { "code": null, "e": 1636, "s": 1619, "text": "Load the dataset" }, { "code": null, "e": 1648, "s": 1636, "text": "Perform PCA" }, { "code": null, "e": 1676, "s": 1648, "text": "Make awesome visualizations" }, { "code": null, "e": 1712, "s": 1676, "text": "So without much ado, let’s dive in." }, { "code": null, "e": 1936, "s": 1712, "text": "I want everything to be super simple here, so I’ve decided to go with the well-known Iris dataset. It initially has only 4 features — still impossible to visualize. We’ll address this visualization issue after applying PCA." }, { "code": null, "e": 1978, "s": 1936, "text": "Here are the imports and dataset loading:" }, { "code": null, "e": 2121, "s": 1978, "text": "import numpy as np import pandas as pddf = pd.read_csv(‘https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv')df.head()" }, { "code": null, "e": 2191, "s": 2121, "text": "Executing the code above should result with the following data frame:" }, { "code": null, "e": 2227, "s": 2191, "text": "Let’s continue with the PCA itself." }, { "code": null, "e": 2276, "s": 2227, "text": "Here is the short summary of the required steps:" }, { "code": null, "e": 2621, "s": 2276, "text": "Scale the data — we don’t want some feature to be voted as “more important” due to scale differences. 
10m = 10000mm, but the algorithm isn’t aware of meters and millimeters (sorry US readers)Calculate covariance matrix — square matrix giving the covariances between each pair of elements of a random vectorEigendecomposition — we’ll get to that" }, { "code": null, "e": 2813, "s": 2621, "text": "Scale the data — we don’t want some feature to be voted as “more important” due to scale differences. 10m = 10000mm, but the algorithm isn’t aware of meters and millimeters (sorry US readers)" }, { "code": null, "e": 2929, "s": 2813, "text": "Calculate covariance matrix — square matrix giving the covariances between each pair of elements of a random vector" }, { "code": null, "e": 2968, "s": 2929, "text": "Eigendecomposition — we’ll get to that" }, { "code": null, "e": 3017, "s": 2968, "text": "So let’s start with the first (and easiest) one." }, { "code": null, "e": 3207, "s": 3017, "text": "I’ve briefly touched on the idea of why we need to scale the data, so I won’t repeat myself here. Think of it as a necessary prerequisite — not only here, but for any machine learning task." }, { "code": null, "e": 3278, "s": 3207, "text": "To perform the scaling we’ll use the StandardScaler from Scikit-Learn:" }, { "code": null, "e": 3383, "s": 3278, "text": "from sklearn.preprocessing import StandardScalerX_scaled = StandardScaler().fit_transform(X)X_scaled[:5]" }, { "code": null, "e": 3430, "s": 3383, "text": "And that does it for this part. Let’s proceed." }, { "code": null, "e": 3708, "s": 3430, "text": "Let’s take a step back here and understand the difference between variance and covariance. Variance reports variation of a single random variable — let’s say the weight of a person, and covariance reports how much two random variables vary — like weight and height of a person." }, { "code": null, "e": 3808, "s": 3708, "text": "On the diagonal of the covariance matrix we have variances, and other elements are the covariances." }, { "code": null, "e": 3932, "s": 3808, "text": "Let’s not dive into the math here as you have the video for that part. Here’s how to obtain the covariance matrix in Numpy:" }, { "code": null, "e": 3997, "s": 3932, "text": "features = X_scaled.Tcov_matrix = np.cov(features)cov_matrix[:5]" }, { "code": null, "e": 4116, "s": 3997, "text": "Cool. As you can see, the diagonal elements are identical, and the matrix is symmetrical. Up next, eigendecomposition." }, { "code": null, "e": 4332, "s": 4116, "text": "Eigendecomposition is a process that decomposes a square matrix into eigenvectors and eigenvalues. Eigenvectors are simple unit vectors, and eigenvalues are coefficients which give the magnitude to the eigenvectors." }, { "code": null, "e": 4760, "s": 4332, "text": "We know so far that our covariance matrix is symmetrical. As it turns out, eigenvectors of symmetric matrices are orthogonal. For PCA this means that we have the first principal component which explains most of the variance. Orthogonal to that is the second principal component, which explains most of the remaining variance. This is repeated for N number of principal components, where N equals to number of original features." }, { "code": null, "e": 5105, "s": 4760, "text": "And this turns out to be neat for us — principal components are sorted by percentage of variance explained, as we can decide how many should we keep. For example, if we have 100 features originally, but the first 3 principal components explain 95% of the variance, then it makes sense to keep only these 3 for visualizations and model training." 
}, { "code": null, "e": 5262, "s": 5105, "text": "As this isn’t a math lecture on eigendecomposition, I think it’s time to do some practical work next. Feel free to explore the theoretical part on your own." }, { "code": null, "e": 5430, "s": 5262, "text": "We can perform the eigendecomposition through Numpy, and it returns a tuple, where the first element represents eigenvalues and the second one represents eigenvectors:" }, { "code": null, "e": 5484, "s": 5430, "text": "values, vectors = np.linalg.eig(cov_matrix)values[:5]" }, { "code": null, "e": 5496, "s": 5484, "text": "vectors[:5]" }, { "code": null, "e": 5591, "s": 5496, "text": "Just from this, we can calculate the percentage of explained variance per principal component:" }, { "code": null, "e": 5764, "s": 5591, "text": "explained_variances = []for i in range(len(values)): explained_variances.append(values[i] / np.sum(values)) print(np.sum(explained_variances), ‘\\n’, explained_variances)" }, { "code": null, "e": 5947, "s": 5764, "text": "The first value is just the sum of explained variances — and must be equal to 1. The second value is an array, representing the explained variance percentage per principal component." }, { "code": null, "e": 6040, "s": 5947, "text": "The first two principal components account for around 96% of the variance in the data. Cool." }, { "code": null, "e": 6132, "s": 6040, "text": "Let’s now dive into some visualizations where we can see the clear purpose of applying PCA." }, { "code": null, "e": 6386, "s": 6132, "text": "Previously we’ve got to the conclusions that we as humans can’t see anything above 3 dimensions. Iris dataset had 4 dimensions initially (4 features), but after applying PCA we’ve managed to explain most of the variance with only 2 principal components." }, { "code": null, "e": 6510, "s": 6386, "text": "Now we’ll create a Pandas DataFrame object consisting of those two components, alongside the target class. Here’s the code:" }, { "code": null, "e": 6685, "s": 6510, "text": "projected_1 = X_scaled.dot(vectors.T[0])projected_2 = X_scaled.dot(vectors.T[1])res = pd.DataFrame(projected_1, columns=[‘PC1’])res[‘PC2’] = projected_2res[‘Y’] = yres.head()" }, { "code": null, "e": 6984, "s": 6685, "text": "Okay, and now with the power of Python’s visualization libraries, let’s first visualize this dataset in 1 dimension — as a line. To do so we’ll need to ditch the second principal component. The easiest way is to hardcode Y values as zeros, as the scatter plot requires values for both X and Y axis:" }, { "code": null, "e": 7129, "s": 6984, "text": "import matplotlib.pyplot as pltimport seaborn as snsplt.figure(figsize=(20, 10))sns.scatterplot(res[‘PC1’], [0] * len(res), hue=res[‘Y’], s=200)" }, { "code": null, "e": 7327, "s": 7129, "text": "Just look at how separable the Setosa class is. Virginica and Versicolor are tougher to classify, but we should still get most of the classifications correct only with a single principal component." }, { "code": null, "e": 7371, "s": 7327, "text": "Let’s now see how this looks in a 2D space:" }, { "code": null, "e": 7464, "s": 7371, "text": "plt.figure(figsize=(20, 10))sns.scatterplot(res[‘PC1’], [0] * len(res), hue=res[‘Y’], s=100)" }, { "code": null, "e": 7555, "s": 7464, "text": "Awesome. For fun, try to include the third principal component and plot a 3D scatter plot." }, { "code": null, "e": 7632, "s": 7555, "text": "And that does it for this article. Let’s wrap things up in the next section." 
}, { "code": null, "e": 7862, "s": 7632, "text": "Until now I’ve seen either purely mathematical or purely library-based articles on PCA. It’s easy to do it with Scikit-Learn, but I wanted to take a more manual approach here because there’s a lack of articles online which do so." }, { "code": null, "e": 7988, "s": 7862, "text": "I hope you’ve managed to follow along and that this “abstract concept” of dimensionality reduction isn’t so abstract anymore." }, { "code": null, "e": 8008, "s": 7988, "text": "Thanks for reading." } ]
Install PostgreSQL on Linux - GeeksforGeeks
24 Sep, 2021

This is a step-by-step guide to installing PostgreSQL on a Linux machine. By default, PostgreSQL is available in all Ubuntu versions as a PostgreSQL "snapshot"; other versions can be downloaded through the PostgreSQL apt repository. We will be installing PostgreSQL version 11.3 on Ubuntu in this article.

There are three crucial steps for the installation of PostgreSQL:

Check the current version of PostgreSQL on your Ubuntu
Install PostgreSQL
Verify the installation

You can check the current version of PostgreSQL on your Ubuntu device by using the below command in the terminal:

postgres -V

After checking whether you need an update of PostgreSQL, follow the below steps to install the latest PostgreSQL version:

Step 1: Add the GPG key for connecting with the official PostgreSQL repository using the below command:

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

Step 2: Add the official PostgreSQL repository to your source list, along with its certificate, using the below command:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'

Step 3: Run the below commands to install PostgreSQL:

sudo apt update
sudo apt install postgresql postgresql-contrib

There are a couple of ways to verify the installation of PostgreSQL, such as connecting to the database server using a client application like pgAdmin or psql. The quickest way, though, is to use the psql shell. For that, follow the below steps:

Step 1: Open the terminal and run the below command to log into the PostgreSQL server:

sudo su postgres

Step 2: Now use the below command to enter the PostgreSQL shell:

psql

Step 3: Now run the below command to check the PostgreSQL version:

SELECT version();
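As a quick smoke test from the same psql session, you can create and query a throwaway database (testdb and greeting below are placeholder names):

CREATE DATABASE testdb;
\c testdb
CREATE TABLE greeting (msg text);
INSERT INTO greeting VALUES ('hello');
SELECT * FROM greeting;
\q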
[ { "code": null, "e": 23747, "s": 23719, "text": "\n24 Sep, 2021" }, { "code": null, "e": 23996, "s": 23747, "text": "This is a step-by-step guide to install PostgreSQL on a Linux machine. By default, PostgreSQL is available in all Ubuntu versions as PostgreSql “Snapshot”. However other versions of the same can be downloaded through the PostgreSQL apt repository. " }, { "code": null, "e": 24070, "s": 23996, "text": "We will be installing PostgreSQL version 11.3 on Ubuntu in this article. " }, { "code": null, "e": 24148, "s": 24070, "text": "There are three crucial steps for the installation of PostgreSQL as follows: " }, { "code": null, "e": 24254, "s": 24148, "text": "Check for the current version of PostgreSQL on your Ubuntu Install PostgreSQL Verify the installation " }, { "code": null, "e": 24315, "s": 24254, "text": "Check for the current version of PostgreSQL on your Ubuntu " }, { "code": null, "e": 24336, "s": 24315, "text": "Install PostgreSQL " }, { "code": null, "e": 24362, "s": 24336, "text": "Verify the installation " }, { "code": null, "e": 24481, "s": 24362, "text": "You can check for the current version of PostgreSQL on your Ubuntu device by using the below command in the terminal: " }, { "code": null, "e": 24494, "s": 24481, "text": "postgres -V " }, { "code": null, "e": 24612, "s": 24494, "text": "After checking if you need an update of PostgreSQL, follow the below steps to install the latest PostgreSQL version: " }, { "code": null, "e": 24717, "s": 24612, "text": "Step 1: Add the GPG key for connecting with the official PostgreSQL repository using the below command: " }, { "code": null, "e": 24807, "s": 24717, "text": "wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -" }, { "code": null, "e": 24930, "s": 24807, "text": "Step 2: Add the official PostgreSQL repository in your source list and also add it’s certificate using the below command: " }, { "code": null, "e": 25060, "s": 24930, "text": "sudo sh -c 'echo \"deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main\" >> /etc/apt/sources.list.d/pgdg.list'" }, { "code": null, "e": 25114, "s": 25060, "text": "Step 3: Run the below command to install PostgreSQL: " }, { "code": null, "e": 25178, "s": 25114, "text": "sudo apt update\nsudo apt install postgresql postgresql-contrib " }, { "code": null, "e": 25420, "s": 25178, "text": "There are couple of ways to verify the installation of PostgreSQL like connecting to the database server using some client applications like pgAdmin or psql. The quickest way though is to use the psql shell. 
For that follow the below steps: " }, { "code": null, "e": 25504, "s": 25420, "text": "Step 1: Open the terminal and run the below command to log into PostgreSQL server: " }, { "code": null, "e": 25521, "s": 25504, "text": "sudo su postgres" }, { "code": null, "e": 25587, "s": 25521, "text": "Step 2: Now use the below command to enter the PostgreSQL shell: " }, { "code": null, "e": 25592, "s": 25587, "text": "psql" }, { "code": null, "e": 25664, "s": 25592, "text": "Step 3: Now run the below command to check for the PostgreSQL version: " }, { "code": null, "e": 25682, "s": 25664, "text": "SELECT version();" }, { "code": null, "e": 25694, "s": 25682, "text": "anikaseth98" }, { "code": null, "e": 25712, "s": 25694, "text": "postgreSQL-basics" }, { "code": null, "e": 25723, "s": 25712, "text": "PostgreSQL" }, { "code": null, "e": 25821, "s": 25723, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25830, "s": 25821, "text": "Comments" }, { "code": null, "e": 25843, "s": 25830, "text": "Old Comments" }, { "code": null, "e": 25872, "s": 25843, "text": "PostgreSQL - GROUP BY clause" }, { "code": null, "e": 25896, "s": 25872, "text": "PostgreSQL - DROP INDEX" }, { "code": null, "e": 25929, "s": 25896, "text": "PostgreSQL - ROW_NUMBER Function" }, { "code": null, "e": 25952, "s": 25929, "text": "PostgreSQL - LEFT JOIN" }, { "code": null, "e": 25976, "s": 25952, "text": "PostgreSQL - Copy Table" }, { "code": null, "e": 25996, "s": 25976, "text": "PostgreSQL - Cursor" }, { "code": null, "e": 26030, "s": 25996, "text": "PostgreSQL - Record type variable" }, { "code": null, "e": 26050, "s": 26030, "text": "PostgreSQL - SELECT" }, { "code": null, "e": 26075, "s": 26050, "text": "PostgreSQL - Select Into" } ]
Scala - Access Modifiers
This chapter takes you through the Scala access modifiers. Members of packages, classes or objects can be labeled with the access modifiers private and protected; if we use neither of these two keywords, access is assumed to be public. These modifiers restrict access to the members to certain regions of code. To use an access modifier, you include its keyword in the definition of members of a package, class or object, as we will see in the following sections.

A private member is visible only inside the class or object that contains the member definition.

Following is the example code snippet to explain a private member −

class Outer {
   class Inner {
      private def f() { println("f") }

      class InnerMost {
         f() // OK
      }
   }
   (new Inner).f() // Error: f is not accessible
}

In Scala, the access (new Inner).f() is illegal because f is declared private in Inner and the access is not from within class Inner. By contrast, the first access to f in class InnerMost is OK, because that access is contained in the body of class Inner. Java would permit both accesses because it lets an outer class access private members of its inner classes.

A protected member is only accessible from subclasses of the class in which the member is defined.

Following is the example code snippet to explain a protected member −

package p {
   class Super {
      protected def f() { println("f") }
   }

   class Sub extends Super {
      f()
   }

   class Other {
      (new Super).f() // Error: f is not accessible
   }
}

The access to f in class Sub is OK because f is declared protected in the ‘Super’ class and ‘Sub’ is a subclass of Super. By contrast, the access to f in the ‘Other’ class is not permitted, because ‘Other’ does not inherit from ‘Super’. In Java, the latter access would still be permitted, because ‘Other’ is in the same package as ‘Sub’.

Unlike private and protected members, it is not required to specify the public keyword for public members. There is no explicit modifier for public members. Such members can be accessed from anywhere.

Following is the example code snippet to explain a public member −

class Outer {
   class Inner {
      def f() { println("f") }

      class InnerMost {
         f() // OK
      }
   }
   (new Inner).f() // OK because now f() is public
}

Access modifiers in Scala can be augmented with qualifiers. A modifier of the form private[X] or protected[X] means that access is private or protected "up to" X, where X designates some enclosing package, class or singleton object.

Consider the following example −

package society {
   package professional {
      class Executive {
         private[professional] var workDetails = null
         private[society] var friends = null
         private[this] var secrets = null

         def help(another: Executive) {
            println(another.workDetails)
            println(another.secrets) // ERROR
         }
      }
   }
}

Note the following points from the above example −

Variable workDetails will be accessible to any class within the enclosing package professional.

Variable friends will be accessible to any class within the enclosing package society.

Variable secrets will be accessible only on the implicit object within instance methods (this).
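A class and its companion object can also access each other's private members; private[this], however, remains restricted to the same instance. The following sketch illustrates this (Account, balance, pin and audit are just illustrative names) −

class Account {
   private var balance = 0
   private[this] var pin = 1234

   def deposit(amount: Int) { balance += amount }
}

object Account {
   def audit(a: Account): Int = a.balance // OK: the companion sees private members
   // def peek(a: Account): Int = a.pin   // Error: pin is object-private
}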
[ { "code": null, "e": 2480, "s": 1998, "text": "This chapter takes you through the Scala access modifiers. Members of packages, classes or objects can be labeled with the access modifiers private and protected, and if we are not using either of these two keywords, then access will be assumed as public. These modifiers restrict accesses to the members to certain regions of code. To use an access modifier, you include its keyword in the definition of members of package, class or object as we will see in the following section." }, { "code": null, "e": 2577, "s": 2480, "text": "A private member is visible only inside the class or object that contains the member definition." }, { "code": null, "e": 2643, "s": 2577, "text": "Following is the example code snippet to explain Private member −" }, { "code": null, "e": 2827, "s": 2643, "text": "class Outer {\n class Inner {\n private def f() { println(\"f\") }\n \n class InnerMost {\n f() // OK\n }\n }\n (new Inner).f() // Error: f is not accessible\n}" }, { "code": null, "e": 3192, "s": 2827, "text": "In Scala, the access (new Inner). f() is illegal because f is declared private in Inner and the access is not from within class Inner. By contrast, the first access to f in class Innermost is OK, because that access is contained in the body of class Inner. Java would permit both accesses because it lets an outer class access private members of its inner classes." }, { "code": null, "e": 3291, "s": 3192, "text": "A protected member is only accessible from subclasses of the class in which the member is defined." }, { "code": null, "e": 3359, "s": 3291, "text": "Following is the example code snippet to explain protected member −" }, { "code": null, "e": 3562, "s": 3359, "text": "package p {\n class Super {\n protected def f() { println(\"f\") }\n }\n \n class Sub extends Super {\n f()\n }\n \n class Other {\n (new Super).f() // Error: f is not accessible\n }\n}" }, { "code": null, "e": 3921, "s": 3562, "text": "The access to f in class Sub is OK because f is declared protected in ‘Super’ class and ‘Sub’ class is a subclass of Super. By contrast the access to f in ‘Other’ class is not permitted, because class ‘Other’ does not inherit from class ‘Super’. In Java, the latter access would be still permitted because ‘Other’ class is in the same package as ‘Sub’ class." }, { "code": null, "e": 4118, "s": 3921, "text": "Unlike private and protected members, it is not required to specify Public keyword for Public members. There is no explicit modifier for public members. Such members can be accessed from anywhere." }, { "code": null, "e": 4183, "s": 4118, "text": "Following is the example code snippet to explain public member −" }, { "code": null, "e": 4361, "s": 4183, "text": "class Outer {\n class Inner {\n def f() { println(\"f\") }\n \n class InnerMost {\n f() // OK\n }\n }\n (new Inner).f() // OK because now f() is public\n}" }, { "code": null, "e": 4594, "s": 4361, "text": "Access modifiers in Scala can be augmented with qualifiers. A modifier of the form private[X] or protected[X] means that access is private or protected \"up to\" X, where X designates some enclosing package, class or singleton object." 
}, { "code": null, "e": 4627, "s": 4594, "text": "Consider the following example −" }, { "code": null, "e": 4990, "s": 4627, "text": "package society {\n package professional {\n class Executive {\n private[professional] var workDetails = null\n private[society] var friends = null\n private[this] var secrets = null\n\n def help(another : Executive) {\n println(another.workDetails)\n println(another.secrets) //ERROR\n }\n }\n }\n}" }, { "code": null, "e": 5043, "s": 4990, "text": "Note − the following points from the above example −" }, { "code": null, "e": 5139, "s": 5043, "text": "Variable workDetails will be accessible to any class within the enclosing package professional." }, { "code": null, "e": 5235, "s": 5139, "text": "Variable workDetails will be accessible to any class within the enclosing package professional." }, { "code": null, "e": 5322, "s": 5235, "text": "Variable friends will be accessible to any class within the enclosing package society." }, { "code": null, "e": 5409, "s": 5322, "text": "Variable friends will be accessible to any class within the enclosing package society." }, { "code": null, "e": 5505, "s": 5409, "text": "Variable secrets will be accessible only on the implicit object within instance methods (this)." }, { "code": null, "e": 5601, "s": 5505, "text": "Variable secrets will be accessible only on the implicit object within instance methods (this)." }, { "code": null, "e": 5634, "s": 5601, "text": "\n 82 Lectures \n 7 hours \n" }, { "code": null, "e": 5653, "s": 5634, "text": " Arnab Chakraborty" }, { "code": null, "e": 5688, "s": 5653, "text": "\n 23 Lectures \n 1.5 hours \n" }, { "code": null, "e": 5709, "s": 5688, "text": " Mukund Kumar Mishra" }, { "code": null, "e": 5744, "s": 5709, "text": "\n 52 Lectures \n 1.5 hours \n" }, { "code": null, "e": 5762, "s": 5744, "text": " Bigdata Engineer" }, { "code": null, "e": 5797, "s": 5762, "text": "\n 76 Lectures \n 5.5 hours \n" }, { "code": null, "e": 5815, "s": 5797, "text": " Bigdata Engineer" }, { "code": null, "e": 5850, "s": 5815, "text": "\n 69 Lectures \n 7.5 hours \n" }, { "code": null, "e": 5868, "s": 5850, "text": " Bigdata Engineer" }, { "code": null, "e": 5903, "s": 5868, "text": "\n 46 Lectures \n 4.5 hours \n" }, { "code": null, "e": 5926, "s": 5903, "text": " Stone River ELearning" }, { "code": null, "e": 5933, "s": 5926, "text": " Print" }, { "code": null, "e": 5944, "s": 5933, "text": " Add Notes" } ]
OpenCV - Median Blur
The Median blur operation is similar to the other averaging methods. Here, the central element of the image is replaced by the median of all the pixels in the kernel area. This operation processes the edges while removing the noise.

You can perform this operation on an image using the medianBlur() method of the Imgproc class. Following is the syntax of this method −

medianBlur(src, dst, ksize)

This method accepts the following parameters −

src − A Mat object representing the source (input image) for this operation.

dst − A Mat object representing the destination (output image) for this operation.

ksize − An integer representing the aperture size of the kernel; it must be an odd number greater than 1.

The following program demonstrates how to perform the median blur operation on an image.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class MedianBlurTest {
   public static void main(String args[]) {
      // Loading the OpenCV core library
      System.loadLibrary( Core.NATIVE_LIBRARY_NAME );

      // Reading the image from the file and storing it in to a Matrix object
      String file = "C:/EXAMPLES/OpenCV/sample.jpg";
      Mat src = Imgcodecs.imread(file);

      // Creating an empty matrix to store the result
      Mat dst = new Mat();

      // Applying MedianBlur on the image with a 15x15 aperture
      Imgproc.medianBlur(src, dst, 15);

      // Writing the blurred image back to disk
      Imgcodecs.imwrite("E:/OpenCV/chap9/median.jpg", dst);

      System.out.println("Image Processed");
   }
}

Assume that the following is the input image sample.jpg specified in the above program. On executing the program, you will get the following output −

Image Processed

If you open the specified output path, you can observe the blurred result image.
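For comparison, the same operation with the Python bindings takes only a few lines. This is a minimal sketch assuming the opencv-python package is installed; the file paths are illustrative:

import cv2

# Read the source image (path is illustrative)
src = cv2.imread("sample.jpg")

# Apply median blur; the second argument is the odd aperture size
dst = cv2.medianBlur(src, 15)

# Write the blurred result back to disk
cv2.imwrite("median.jpg", dst)
print("Image Processed")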
[ { "code": null, "e": 3237, "s": 3004, "text": "The Median blur operation is similar to the other averaging methods. Here, the central element of the image is replaced by the median of all the pixels in the kernel area. This operation processes the edges while removing the noise." }, { "code": null, "e": 3373, "s": 3237, "text": "You can perform this operation on an image using the medianBlur() method of the imgproc class. Following is the syntax of this method −" }, { "code": null, "e": 3402, "s": 3373, "text": "medianBlur(src, dst, ksize)\n" }, { "code": null, "e": 3449, "s": 3402, "text": "This method accepts the following parameters −" }, { "code": null, "e": 3526, "s": 3449, "text": "src − A Mat object representing the source (input image) for this operation." }, { "code": null, "e": 3603, "s": 3526, "text": "src − A Mat object representing the source (input image) for this operation." }, { "code": null, "e": 3686, "s": 3603, "text": "dst − A Mat object representing the destination (output image) for this operation." }, { "code": null, "e": 3769, "s": 3686, "text": "dst − A Mat object representing the destination (output image) for this operation." }, { "code": null, "e": 3828, "s": 3769, "text": "ksize − A Size object representing the size of the kernel." }, { "code": null, "e": 3887, "s": 3828, "text": "ksize − A Size object representing the size of the kernel." }, { "code": null, "e": 3976, "s": 3887, "text": "The following program demonstrates how to perform the median blur operation on an image." }, { "code": null, "e": 4754, "s": 3976, "text": "import org.opencv.core.Core;\nimport org.opencv.core.Mat;\nimport org.opencv.imgcodecs.Imgcodecs;\nimport org.opencv.imgproc.Imgproc;\n\npublic class MedianBlurTest {\n public static void main(String args[]) {\n // Loading the OpenCV core library\n System.loadLibrary( Core.NATIVE_LIBRARY_NAME );\n\n // Reading the Image from the file and storing it in to a Matrix object\n String file =\"C:/EXAMPLES/OpenCV/sample.jpg\";\n Mat src = Imgcodecs.imread(file);\n\n // Creating an empty matrix to store the result\n Mat dst = new Mat();\n\n // Applying MedianBlur on the Image\n Imgproc.medianBlur(src, dst, 15);\n\n // Writing the image\n Imgcodecs.imwrite(\"E:/OpenCV/chap9/median.jpg\", dst);\n\n System.out.println(\"Image Processed\");\n }\n}" }, { "code": null, "e": 4838, "s": 4754, "text": "Assume that following is the input image sample.jpg specified in the above program." 
}, { "code": null, "e": 4900, "s": 4838, "text": "On executing the program, you will get the following output −" }, { "code": null, "e": 4917, "s": 4900, "text": "Image Processed\n" }, { "code": null, "e": 4995, "s": 4917, "text": "If you open the specified path, you can observe the output image as follows −" }, { "code": null, "e": 5028, "s": 4995, "text": "\n 70 Lectures \n 9 hours \n" }, { "code": null, "e": 5045, "s": 5028, "text": " Abhilash Nelson" }, { "code": null, "e": 5078, "s": 5045, "text": "\n 41 Lectures \n 4 hours \n" }, { "code": null, "e": 5095, "s": 5078, "text": " Abhilash Nelson" }, { "code": null, "e": 5128, "s": 5095, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 5142, "s": 5128, "text": " Spotle Learn" }, { "code": null, "e": 5174, "s": 5142, "text": "\n 12 Lectures \n 46 mins\n" }, { "code": null, "e": 5191, "s": 5174, "text": " Srikanth Guskra" }, { "code": null, "e": 5224, "s": 5191, "text": "\n 19 Lectures \n 2 hours \n" }, { "code": null, "e": 5239, "s": 5224, "text": " Haithem Gasmi" }, { "code": null, "e": 5274, "s": 5239, "text": "\n 67 Lectures \n 6.5 hours \n" }, { "code": null, "e": 5292, "s": 5274, "text": " Gianluca Mottola" }, { "code": null, "e": 5299, "s": 5292, "text": " Print" }, { "code": null, "e": 5310, "s": 5299, "text": " Add Notes" } ]
Move columns to the right with Bootstrap
To move columns to the right, use the .col-*-offset-* classes in Bootstrap. For example, .col-md-offset-1 pushes a column one grid column to the right on medium screens, while .col-md-offset-0 applies no offset. You can try to run the following code to implement it −

<!DOCTYPE html>
<html lang = "en">
   <head>
      <title>Bootstrap Example</title>
      <meta charset = "utf-8">
      <meta name = "viewport" content = "width=device-width, initial-scale = 1">
      <link rel = "stylesheet" href = "https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
      <script src = "https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
   </head>
   <body>
      <div class = "container-fluid">
         <div class = "row">
            <div class = "col-sm-5 col-md-6" style = "background-color:blue;color: white;">This is div1.</div>
            <!-- col-md-offset-1 moves this column one column to the right on medium screens -->
            <div class = "col-sm-5 col-md-5 col-md-offset-1" style = "background-color:orange;color: white;">This is div2.</div>
         </div>
      </div>
   </body>
</html>
[ { "code": null, "e": 1136, "s": 1062, "text": "To move columns to the right, use the .col-*-offset-* class in Bootstrap." }, { "code": null, "e": 1190, "s": 1136, "text": "You can try to run the following code to implement it" }, { "code": null, "e": 1200, "s": 1190, "text": "Live Demo" }, { "code": null, "e": 2080, "s": 1200, "text": "<!DOCTYPE html>\n<html lang = \"en\">\n <head>\n <title>Bootstrap Example</title>\n <meta charset = \"utf-8\">\n <meta name = \"viewport\" content = \"width=device-width, initial-scale = 1\">\n <link rel = \"stylesheet\" href = \"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\">\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n <script src = \"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <div class = \"container-fluid\">\n <div class = \"row\">\n <div class = \"col-sm-5 col-md-6\" style = \"background-color:blue;color: white;\">This is div1.</div>\n <div class = \"col-sm-5 col-md-6 col-md-offset-0\" style = \"background-color:orange;color: white;\">This is div2.</div>\n </div>\n </div>\n </body>\n</html>" } ]
Nagios - Add-ons/Plugins
Plugins help to monitor databases, operating systems, applications, network equipment and protocols with Nagios. Plugins are compiled executables or scripts (Perl or non-Perl) that extend Nagios functionality to monitor servers and hosts. Nagios will execute a Plugin to check the status of a service or host. Nagios can be compiled with support for an embedded Perl interpreter to execute Perl plugins. Without it, Nagios executes Perl and non-Perl plugins by forking and executing the plugins as an external command.

Nagios has the following plugins available in it −

Official Nagios Plugins − There are 50 official Nagios Plugins. Official Nagios plugins are developed and maintained by the official Nagios Plugins Team.

Community Plugins − There are over 3,000 third party Nagios plugins that have been developed by hundreds of Nagios community members.

Custom Plugins − You can also write your own Custom Plugins. There are certain guidelines that must be followed to write Custom Plugins.

While writing a custom plugin in Nagios, you need to follow the guidelines given below −

Plugins should provide a "-V" command-line option (verify the configuration changes)
Print only one line of text
Print the diagnostic and only part of the help message
Network plugins use DEFAULT_SOCKET_TIMEOUT to timeout
"-v", or "--verbose" is related to verbosity level
"-t" or "--timeout" (plugin timeout)
"-w" or "--warning" (warning threshold)
"-c" or "--critical" (critical threshold)
"-H" or "--hostname" (name of the host to check)

Multiple Nagios plugins run and perform checks at the same time. For all of them to run smoothly together, Nagios plugins follow a standard set of exit status codes −

0 − OK − The plugin was able to check the service and it appeared to be functioning properly
1 − WARNING − The plugin was able to check the service, but it was above some warning threshold or did not appear to be working properly
2 − CRITICAL − The plugin detected that either the service was not running or it was above some critical threshold
3 − UNKNOWN − Invalid command line arguments were supplied to the plugin, or a low-level failure occurred internal to the plugin

Nagios plugins use options for their configuration. The following are a few important parameters accepted by Nagios plugins −

-h, --help − This provides help
-V, --version − This prints the exact version of the plugin
-v, --verbose − This makes the plugin give more detailed information on what it is doing
-t, --timeout − This provides the timeout (in seconds); after this time, the plugin will report CRITICAL status
-w, --warning − This provides the plugin-specific limits for the WARNING status
-c, --critical − This provides the plugin-specific limits for the CRITICAL status
-H, --hostname − This provides the hostname, IP address, or Unix socket to communicate with
-4, --use-ipv4 − This lets you use IPv4 for network connectivity
-6, --use-ipv6 − This lets you use IPv6 for network connectivity
-p, --port − This is used to connect to the TCP or UDP port
-s, --send − This provides the string that will be sent to the server
-e, --expect − This provides the string that should be sent back from the server
-q, --quit − This provides the string to send to the server to close the connection

The Nagios plugin package has a lot of checks available for hosts and services to monitor the infrastructure. Let us try out Nagios plugins to perform a few checks.

SMTP is a protocol that is used for sending emails. The standard Nagios plugins include a command to perform checks for SMTP. The command definition for SMTP −

define command {
   command_name check_smtp
   command_line $USER2$/check_smtp -H $HOSTADDRESS$
}

Let us use a Nagios plugin to monitor MySQL. Nagios offers 2 plugins to monitor MySQL. The first plugin checks if the mysql connection is working or not, and the second plugin is used to calculate the time taken to run a SQL query.
The command definitions for both are as follows −

define command {
   command_name check_mysql
   command_line $USER1$/check_mysql –H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -d $ARG3$ -S –w 10 –c 30
}

define command {
   command_name check_mysql_query
   command_line $USER1$/check_mysql_query –H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -d $ARG3$ -q $ARG4$ –w $ARG5$ -c $ARG6$
}

Note − Username, password, and database name are required as arguments in both the commands.

Nagios offers a plugin to check the disk space mounted on all the partitions. The command definition is as follows −

define command {
   command_name check_partition
   command_line $USER1$/check_disk –p $ARG1$ –w $ARG2$ -c $ARG3$
}

The majority of checks can be done through standard Nagios plugins. But there are applications which require special checks to monitor them, in which case you can use 3rd party Nagios plugins which will provide more sophisticated checks on the application. It is important to know about security and licensing issues when you are using a 3rd party plugin from Nagios Exchange or downloading the plugin from another website.
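If no existing plugin fits, the guidelines and exit codes above are all you need to write your own. The following is a minimal sketch of a custom plugin in Python; the disk-space check itself and the option defaults are purely illustrative:

#!/usr/bin/env python
# Minimal custom plugin sketch: prints one line of text and exits with
# the standard Nagios status codes.
import argparse
import shutil
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

parser = argparse.ArgumentParser(description="Check free disk space on a path")
parser.add_argument("-w", "--warning", type=float, default=20.0, help="warning threshold (%% free)")
parser.add_argument("-c", "--critical", type=float, default=10.0, help="critical threshold (%% free)")
parser.add_argument("-p", "--path", default="/", help="mount point to check")
args = parser.parse_args()

try:
    usage = shutil.disk_usage(args.path)
    pct_free = 100.0 * usage.free / usage.total
except Exception as exc:
    # UNKNOWN means the plugin itself failed, not the monitored resource
    print("DISK UNKNOWN - %s" % exc)
    sys.exit(UNKNOWN)

if pct_free < args.critical:
    print("DISK CRITICAL - %.1f%% free on %s" % (pct_free, args.path))
    sys.exit(CRITICAL)
elif pct_free < args.warning:
    print("DISK WARNING - %.1f%% free on %s" % (pct_free, args.path))
    sys.exit(WARNING)

print("DISK OK - %.1f%% free on %s" % (pct_free, args.path))
sys.exit(OK)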
[ { "code": null, "e": 2413, "s": 1896, "text": "Plugins helps to monitor databases, operating systems, applications, network equipment, protocols with Nagios. Plugins are compiled executables or script (Perl or non-Perl) that extends Nagios functionality to monitor servers and hosts. Nagios will execute a Plugin to check the status of a service or host. Nagios can be compiled with support for an embedded Perl interpreter to execute Perl plugins. Without it, Nagios executes Perl and non-Perl plugins by forking and executing the plugins as an external command." }, { "code": null, "e": 2464, "s": 2413, "text": "Nagios has the following plugins available in it −" }, { "code": null, "e": 2618, "s": 2464, "text": "Official Nagios Plugins − There are 50 official Nagios Plugins. Official Nagios plugins are developed and maintained by the official Nagios Plugins Team." }, { "code": null, "e": 2752, "s": 2618, "text": "Community Plugins − There are over 3,000 third party Nagios plugins that have been developed by hundreds of Nagios community members." }, { "code": null, "e": 2889, "s": 2752, "text": "Custom Plugins − You can also write your own Custom Plugins. There are certain guidelines that must be followed to write Custom Plugins." }, { "code": null, "e": 2976, "s": 2889, "text": "While writing custom plugin in Nagios, you need to follow the guidelines given below −" }, { "code": null, "e": 3061, "s": 2976, "text": "Plugins should provide a \"-V\" command-line option (verify the configuration changes)" }, { "code": null, "e": 3089, "s": 3061, "text": "Print only one line of text" }, { "code": null, "e": 3144, "s": 3089, "text": "Print the diagnostic and only part of the help message" }, { "code": null, "e": 3198, "s": 3144, "text": "Network plugins use DEFAULT_SOCKET_TIMEOUT to timeout" }, { "code": null, "e": 3249, "s": 3198, "text": "\"-v\", or \"--verbose“ is related to verbosity level" }, { "code": null, "e": 3287, "s": 3249, "text": "\"-t\" or \"--timeout\" (plugin timeout);" }, { "code": null, "e": 3328, "s": 3287, "text": "\"-w\" or \"--warning\" (warning threshold);" }, { "code": null, "e": 3371, "s": 3328, "text": "\"-c\" or \"--critical\" (critical threshold);" }, { "code": null, "e": 3420, "s": 3371, "text": "\"-H\" or \"--hostname\" (name of the host to check)" }, { "code": null, "e": 3633, "s": 3420, "text": "Multiple Nagios plugin run and perform checks at the same time, for all of them to run smoothly together, Nagios plugin follow a status code. The table given below tells the exit code status and its description −" }, { "code": null, "e": 3756, "s": 3633, "text": "Nagios plugins use options for their configuration. 
The following are few important parameters accepted by Nagios plugin −" }, { "code": null, "e": 3767, "s": 3756, "text": "-h, --help" }, { "code": null, "e": 3786, "s": 3767, "text": "This provides help" }, { "code": null, "e": 3800, "s": 3786, "text": "-V, --version" }, { "code": null, "e": 3844, "s": 3800, "text": "This prints the exact version of the plugin" }, { "code": null, "e": 3858, "s": 3844, "text": "-v, --verbose" }, { "code": null, "e": 3933, "s": 3858, "text": "This makes the plugin give a more detailed information on what it is doing" }, { "code": null, "e": 3947, "s": 3933, "text": "-t, --timeout" }, { "code": null, "e": 4043, "s": 3947, "text": "This provides the timeout (in seconds); after this time, the plugin will report CRITICAL status" }, { "code": null, "e": 4057, "s": 4043, "text": "-w, --warning" }, { "code": null, "e": 4121, "s": 4057, "text": "This provides the plugin-specific limits for the WARNING status" }, { "code": null, "e": 4136, "s": 4121, "text": "-c, --critical" }, { "code": null, "e": 4201, "s": 4136, "text": "This provides the plugin-specific limits for the CRITICAL status" }, { "code": null, "e": 4216, "s": 4201, "text": "-H, --hostname" }, { "code": null, "e": 4291, "s": 4216, "text": "This provides the hostname, IP address, or Unix socket to communicate with" }, { "code": null, "e": 4306, "s": 4291, "text": "-4, --use-ipv4" }, { "code": null, "e": 4354, "s": 4306, "text": "This lets you use IPv4 for network connectivity" }, { "code": null, "e": 4369, "s": 4354, "text": "-6, --use-ipv6" }, { "code": null, "e": 4417, "s": 4369, "text": "This lets you use IPv6 for network connectivity" }, { "code": null, "e": 4428, "s": 4417, "text": "-p, --port" }, { "code": null, "e": 4475, "s": 4428, "text": "This is used to connect to the TCP or UDP port" }, { "code": null, "e": 4487, "s": 4475, "text": "-s, -- send" }, { "code": null, "e": 4544, "s": 4487, "text": "This provides the string that will be sent to the server" }, { "code": null, "e": 4557, "s": 4544, "text": "-e, --expect" }, { "code": null, "e": 4623, "s": 4557, "text": "This provides the string that should be sent back from the server" }, { "code": null, "e": 4634, "s": 4623, "text": "-q, --quit" }, { "code": null, "e": 4705, "s": 4634, "text": "This provides the string to send to the server to close the connection" }, { "code": null, "e": 4862, "s": 4705, "text": "Nagios plugin package has lot of checks available for hosts and services to monitor the infrastructure. Let us try out Nagios plugins to perform few checks." }, { "code": null, "e": 5015, "s": 4862, "text": "SMTP is a protocol that is used for sending emails. Nagios standard plugins have commands for perform checks for SMTP. The command definition for SMTP −" }, { "code": null, "e": 5114, "s": 5015, "text": "define command {\n command_name check_smtp\n command_line $USER2$/check_smtp -H $HOSTADDRESS$\n}\n" }, { "code": null, "e": 5340, "s": 5114, "text": "Let us use Nagios plugin to monitor MySQL. Nagios offers 2 plugins to monitor MySQL. The first plugin checks if mysql connection is working or not, and the second plugin is used to calculate the time taken to run a SQL query." 
}, { "code": null, "e": 5391, "s": 5340, "text": "The commands definitions for both are as follows −" }, { "code": null, "e": 5716, "s": 5391, "text": "define command {\n command_name check_mysql\n command_line $USER1$/check_mysql –H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -d\n $ARG3$ -S –w 10 –c 30\n}\n\ndefine command {\n command_name check_mysql_query\n command_line $USER1$/check_mysql_query –H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -d\n $ARG3$ -q $ARG4$ –w $ARG5$ -c $ARG6$\n}\n" }, { "code": null, "e": 5809, "s": 5716, "text": "Note − Username, password, and database name are required as arguments in both the commands." }, { "code": null, "e": 5922, "s": 5809, "text": "Nagios offers plugin to check the disk space mounted on all the partitions. The command definition is as follows" }, { "code": null, "e": 6039, "s": 5922, "text": "define command {\n command_name check_partition\n command_line $USER1$/check_disk –p $ARG1$ –w $ARG2$ -c $ARG3$\n}\n" }, { "code": null, "e": 6459, "s": 6039, "text": "Majority of checks can be done through standard Nagios plugins. But there are applications which require special checks to monitor them, in which case you can use 3rd party Nagios plugins which will provide more sophisticated checks on the application. It is important to know about security and licensing issues when you are using a 3rd party plugin form Nagios exchange or downloading the plugin from another website." }, { "code": null, "e": 6466, "s": 6459, "text": " Print" }, { "code": null, "e": 6477, "s": 6466, "text": " Add Notes" } ]
Complete Data Science Project Template with Mlflow for Non-Dummies. | by Jan Teichmann | Towards Data Science
Data science has come a long way as a field and business function alike. There are now cross-functional teams working on algorithms all the way to full-stack data science products.

With the growing maturity of data science there is an emerging standard of best practice, platforms and toolkits which has significantly reduced the barrier of entry and price point of a data science team. This has made data science more accessible for companies and practitioners alike. For the majority of commercially applied teams, data scientists can stand on the shoulders of a quality open source community for their day-to-day work.

But the success stories are still overshadowed by the many data science projects which fail to gain business adoption. It’s commonly reported that over 80% of all data science projects still fail to deliver business impact. With all the high quality open-source toolkits, why does data science struggle to deliver business impact? It’s not a Maths problem!

The data science success is plagued by something commonly known as the “Last Mile problems”:

“Productionisation of models is the TOUGHEST problem in data science” (Schutt, R & O’Neill C. Doing data science straight from the front line, O’Reilly Press. California, 2014)

In this blog post I discuss best practices for setting up a data science project, model development and experimentation. The source code for the data science project template can be found on GitLab:

gitlab.com

You can read about Rendezvous Architecture as a great pattern to operationalise models in production in one of my earlier blog posts:

towardsdatascience.com

While engineers build the production platforms, DevOps and DataOps patterns to run models in production, many data scientists still work in a way which creates a big gap between data science and production environments. Many data scientists (without any reproach)

work locally without much consideration of the cloud environments which might host their models in production
work on small datasets in memory using Python and pandas without much consideration of how to scale workflows to the big data volumes in production
work with Jupyter Notebooks without much consideration for reproducible experiments
rarely break up projects into independent tasks to build decoupled pipelines for ETL steps, model training and scoring

And very rarely are best practices for software engineering applied to data science projects, e.g. abstracted and reusable code, unit testing, documentation, version control etc. Unfortunately, our beloved flexible Jupyter Notebooks play an important part in this.

It’s important to keep in mind that data science is a field and business function undergoing rapid innovation. If you read this a month after I published it, there might already be new tools and better ways to organise data science projects. Exciting times to be a data scientist! Let’s have a look at the details of the data science project template:

A data science project consists of many moving parts and the actual model can easily be the fewest lines of code in your project. Data is the fuel and foundation of your project and, firstly, we should aim for solid, high quality and portable foundations for our project. Data pipelines are the hidden technical debt in most data science projects and you probably have heard of the infamous 80/20 rule:

80% of Data Science is Finding, Cleaning and Preparing Data

I will explain below how Mlflow and Spark can help us to be more productive when working with data.
Creating your data science model itself is a continuous back and forth between experimentation and expanding a project code base to capture the code and logic that worked. This can get messy and Mlflow is here to make experimentation and model management significantly easier for us. It will also simplify model deployment for us. More on that later!

You probably heard of the 80/20 rule of data science: a lot of the data science work is about creating data pipelines to consume raw data, clean data and engineer features to feed our models at the end.

We want that:

Our data is immutable: we can only create new data sets but not change existing data in place.
Our pipeline is a set of decoupled steps in a DAG incrementally increasing the quality and aggregation of our data into features and scores, so that our ETL can be incorporated easily with tools like Airflow or Luigi in production.
Every data set has a defined data schema to read data safely without any surprises, which can be logged to a central data catalogue for better data governance and discovery.

The data science project template has a data folder which holds the project data and associated schemata.

Data scientists commonly work with not only big data sets but also with unstructured data. Building out the schemata for a data warehouse requires design work and a good understanding of business requirements. In data science many questions or problem statements were not known when the schemata for a DWH were created. That’s why Spark has developed into a gold standard in that space for:

Working with unstructured data
Distributed, enterprise-mature big data ETL pipelines
Data lake deployments
A unified batch as well as micro-batch streaming platform
The ability to consume and write data from and to many different systems

Projects at companies with mature infrastructure use advanced data lakes which include data catalogues for data/schema discovery and management, and scheduled tasks with Airflow etc. While a data scientist does not necessarily have to understand these parts of the production infrastructure, it’s best to create projects and artifacts with this in mind.

This data science project template uses Spark regardless of whether we run it locally on data samples or in the cloud against a data lake. On the one hand, Spark can feel like overkill when working locally on small data samples. But on the other hand, it allows for one transferable stack ready for cloud deployment without any unnecessary re-engineering work. It’s also a repetitive pattern which can be nicely automated, e.g. via the Makefile.

While version 1 of your model might use structured data from a DWH, it’s best to still use Spark and reduce the amount of technical debt in your project in anticipation of version 2 of your model which will use a wider range of data sources.

For local project development I use a simple Makefile to automate the execution of the data pipelines and project targets. However, this can easily be translated into an Airflow or Luigi pipeline for production deployment in the cloud, as the hedged sketch below illustrates.
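A minimal sketch of such a translation, assuming Airflow 1.x: the DAG id and schedule are placeholders, and each task simply calls one of the make targets introduced in the next section, so the local workflow and the scheduled pipeline stay identical.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

with DAG(dag_id="iris_pipeline",
         start_date=datetime(2019, 1, 1),
         schedule_interval="@daily") as dag:

    # Each task wraps one of the project's make targets
    raw_data = BashOperator(task_id="raw_data", bash_command="make raw-data")
    interim_data = BashOperator(task_id="interim_data", bash_command="make interim-data")
    processed_data = BashOperator(task_id="processed_data", bash_command="make processed-data")

    # Download raw data, then build features, then score in batch
    raw_data >> interim_data >> processed_data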
The end to end data flow for this project is made up of three steps:

raw-data: Download the iris raw data from the machine learning database archive
interim-data: Execute the feature engineering pipeline to materialise the iris features in batch with Spark
processed-data: Execute the classification model to materialise predictions in batch using Spark

You can transform the iris raw data into features using a Spark pipeline using the following make command:

make interim-data

It will zip the current project code base and submit the project/data/features.py script to Spark in our docker container for execution. We use a Mlflow run id to identify and load the desired Spark feature pipeline model, but more on the use of Mlflow later.

After the execution of our Spark feature pipeline we have the interim feature data materialised in our small local data lake. As you can see, we saved our feature data set with its corresponding schema. Spark makes it very easy to save and read schemata:

some_sparksql_dataframe.schema.json()
T.StructType.fromJson(json.loads(some_sparksql_schema_json_string))

Always materialise and read data with its corresponding schema! Enforcing schemata is the key to breaking the 80/20 rule in data science.

I consider writing a schema as mandatory for csv and json files, but I would also do it for any parquet or avro files which automatically preserve their schema. I really recommend reading more about Delta Lake, Apache Hudi, data catalogues and feature stores. If there is interest, I will follow up with an independent blog post on these topics.
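To make the schema convention above concrete, here is a minimal sketch of writing a data set together with its schema and reading it back; the paths, the features_df dataframe and the spark session are placeholders:

import json
from pyspark.sql import types as T

# Write the data and its schema side by side
features_df.write.mode("overwrite").csv("data/interim/iris_features.csv", header=True)
with open("data/interim/iris_features.json", "w") as f:
    f.write(features_df.schema.json())

# Read the data back, enforcing the saved schema instead of inferring it
with open("data/interim/iris_features.json") as f:
    schema = T.StructType.fromJson(json.load(f))
features_df = spark.read.csv("data/interim/iris_features.csv", header=True, schema=schema)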
Jupyter Notebooks are very convenient for experimentation and it’s unlikely data scientists will stop using them, very much to the dismay of many engineers who are asked to “productionise” models from Jupyter notebooks. The compromise is to use tools to their strengths. Experimentation in notebooks is productive and works well as long as code which proves valuable in experimentation is then added to a code base which follows software engineering best practices.

My personal workflow looks like this:

1. Experimentation: test code and ideas in a Jupyter notebook first
2. Incorporate valuable code which works into the Python project codebase
3. Delete the code cell in the Jupyter notebook
4. Replace it with an import from the project codebase
5. Restart your kernel and execute all steps in order before moving on to the next task

It’s important to isolate our data science project environment and manage requirements and dependencies of our Python project. There is no better way to do this than via Docker containers. My project template uses the jupyter all-spark-notebook Docker image from DockerHub as a convenient, all batteries included, lab setup. The project template contains a docker-compose.yml file and the Makefile automates the container setup with a simple

make jupyter

The command will spin up all the required services (Jupyter with Spark, Mlflow, Minio) for our project and installs all project requirements with pip within our experimentation environment. I use Pipenv to manage my virtual Python environments for my projects and pipenv_to_requirements to create a requirements.txt file for DevOps pipelines and Anaconda based container images. Considering the popularity of Python as a programming language, the Python tooling can sometimes feel cumbersome and complex. 😒

The following screenshot shows the example notebook environment. I use snippets to set up individual notebooks using the %load magic. It also shows how I use code from the project code base to import the raw iris data.

The Jupyter notebook demonstrates my workflow of development, expanding the project code base and model experimentation with some additional commentary: https://gitlab.com/jan-teichmann/ml-flow-ds-project/blob/master/notebooks/Iris%20Features.ipynb

Version control of Jupyter notebooks in Git is not as user friendly as I wished. The .ipynb file format is not very diff friendly. At least, GitHub and GitLab can now render Jupyter notebooks in their web interfaces, which is extremely useful.

To make version control easier on your local computer, the template also installs the nbdime tool which makes git diffs and merges of Jupyter notebooks clear and meaningful. You can use the following commands as part of your project:

nbdiff: compare notebooks in a terminal-friendly way
nbmerge: three-way merge of notebooks with automatic conflict resolution
nbdiff-web: shows you a rich rendered diff of notebooks
nbmerge-web: gives you a web-based three-way merge tool for notebooks
nbshow: present a single notebook in a terminal-friendly way

Last but not least, the project template uses the IPython hooks to extend the Jupyter notebook save button with an additional call to nbconvert, creating an additional .py script and .html version of the Jupyter notebook in a subfolder every time you save your notebook. On the one hand, this makes it easier for others to check code changes in Git by checking the diff of the pure Python script version. On the other hand, the html version allows anyone to see the rendered notebook outputs without having to start a Jupyter notebook server. Something which makes it significantly easier to email parts of a notebook output to colleagues in the wider business who do not use Jupyter. 👏

To enable the automatic conversion of notebooks on every save, simply place an empty .ipynb_saveprocress file in the current working directory of your notebooks.
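Such a hook can be registered in jupyter_notebook_config.py. The sketch below follows the post-save hook example from the Jupyter documentation; the marker-file check mirrors the convention above, and subfolder handling is omitted for brevity:

import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """Convert notebooks to .py and .html after every save."""
    if model["type"] != "notebook":
        return
    d, fname = os.path.split(os_path)
    # Only convert when the marker file is present in the working directory
    if not os.path.exists(os.path.join(d, ".ipynb_saveprocress")):
        return
    check_call(["jupyter", "nbconvert", "--to", "script", fname], cwd=d)
    check_call(["jupyter", "nbconvert", "--to", "html", fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save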
This example project trains two models: a Spark feature pipeline anda Sklearn classifier a Spark feature pipeline and a Sklearn classifier Our Spark feature pipeline uses the Spark ML StandardScaler which makes the pipeline stateful. Therefore we treat our feature engineering in exactly the same way we would treat any other data science model. The aim of this example is to use two models using different frameworks in conjuncture in both batch and real-time scoring without any re-engineering of the models themselves. This will hopefully demonstrate the power of using Mlflow to simplify the management and deployment of data science models! We simply follow the Mlflow convention of logging trained models to the central tracking server. The only gotcha is that the current boto3 client and Minio are not playing well together when you try to upload empty files with boto3 to minio. Spark serialises models with empty _SUCCESS files which cause the standard mlflow.spark.log_model() call to timeout. We circumvent this problem by saving the serialised models to disk locally and log them using the minio client instead using the project.utility.mlflow.log_artifacts_minio() function. The following code trains a new spark feature pipeline and logs the pipeline in both flavours: Spark and Mleap. More on Mleap later. Within our project the models are saved and logged in the models folder where the different docker services persist their data. We can find the data from the Mlflow tracking server in the models/mlruns subfolder and the saved artifacts in the models/s3 subfolder. The Jupyter notebooks in the example project hopefully give a good idea of how to use Mlflow to track parameters and metrics and log models. Mlflow makes serialising and loading models a dream and removed a lot of boilerplate code from my previous data science projects. Our aim is to use the very same models with their different technologies and flavours to score our data in batch as well as in real-time without any changes, re-engineering or code duplication. Our target for batch scoring is Spark. While our feature pipeline is already a Spark pipeline, our classifier is a Python sklearn model. But fear not, Mlflow makes working with models extremely easy and there is a convenient function to package a Python model into a Spark SQL UDF to distribute our classifier across a Spark cluster. Just like magic! Load the serialised Spark feature pipelineLoad the serialised Sklearn model as a Spark UDFTransform raw data with the feature pipelineTurn the Spark feature vectors into default Spark arraysCall the UDF with the expanded array items Load the serialised Spark feature pipeline Load the serialised Sklearn model as a Spark UDF Transform raw data with the feature pipeline Turn the Spark feature vectors into default Spark arrays Call the UDF with the expanded array items There are some gotchas with the correct version of PyArrow and that the UDF does not work with Spark vectors. But from an example it’s very easy to make it work. I hope this saves you the trouble of endless Spark Python Java backtraces and maybe future versions will simplify the integration even further. For now, use PyArrow 0.14 with Spark 2.4 and turn the Spark vectors into numpy arrays and then into a python list as Spark cannot yet deal with the numpy dtypes. Problems you probably encountered before working with PySpark. All the detailed code is in the Git repository. 
We would like to use our model to score requests interactively in real-time using an API and not rely on Spark to host our models. We want our scoring service to be lightning fast and consist of containerised micro services. Again, Mlflow provides us with most things we need to achieve that.

Our sklearn classifier is a simple Python model, and combining it with an API and packaging it into a container image is straightforward. It’s such a common pattern that Mlflow has a command for this:

mlflow models build-docker

That’s all that is needed to package Python flavoured models with Mlflow. No need to write the repetitive code for an API with Flask to containerise a data science model. 🙌

Unfortunately, our feature pipeline is a Spark model. However, we serialised the pipeline in the Mleap flavour, which is a project to host Spark pipelines without the need of any Spark context. Mleap is ideal for use-cases where data is small, we do not need any distributed compute, and speed instead is most important.

Packaging the Mleap model is automated in the Makefile but consists of the following steps:

1. Download the model artifacts with Mlflow into a temporary folder
2. Zip the artifact for deployment
3. Run the combustml/mleap-spring-boot docker image and mount our model artifact as a volume
4. Deploy the model artifact for serving using the mleap server API
5. Pass JSON data to the transform API endpoint of our served feature pipeline

Run the make deploy-realtime-model command and you get 2 microservices: one for creating the features using Mleap and one for classification using Sklearn. The Python script in project/model/score.py wraps the calls to these two microservices into a convenient function for easy use. Run make score-realtime-model for an example call to the scoring services. You can also call the microservices from the Jupyter notebooks. Our interactive scoring service is fast: less than 20ms combined for a call to both model APIs.
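For illustration, a call to the sklearn scoring container could look like the sketch below. The port is an assumption, so check the docker-compose file for the real value; the /invocations route and the pandas split payload are the conventions of the Mlflow 1.x scoring server, and the column names are placeholders:

import requests

url = "http://localhost:5005/invocations"  # port is an assumption

payload = {
    "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    # In this project the raw values would first pass through the Mleap feature service
    "data": [[5.1, 3.5, 1.4, 0.2]],
}

response = requests.post(url, json=payload)
print(response.json())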
The example project uses Sphinx to create the documentation. Change into the docs directory and run make html to produce html docs for your project. Simply add Sphinx RST formatted documentation to the Python doc strings and include the modules to document in the docs/source/index.rst file.

Unit tests go into the test folder and python unittest can be run with

make test

At the time of writing this blog post the data science project template has — like most data science projects — no tests 😱 I hope with some extra time and feedback this will change!

No blog post about Mlflow would be complete without a discussion of the Databricks platform. The data science community would not be the same without the continued open source contributions of Databricks and we have to thank them for the original development of Mlflow. ❤️

Therefore, an alternative approach to running MLFlow is to leverage the Platform-as-a-Service version of Apache Spark offered by Databricks. Databricks gives you the ability to run MLFlow with very little configuration, commonly referred to as a “Managed MLFlow”. You can find a feature comparison here: https://databricks.com/product/managed-mlflow

Simply install the MLFlow package in your project environment with pip and you have everything you need. It is worth noting that the version of MLFlow in Databricks is not the full version that has been described already. It is rather an optimised version which works inside the Databricks ecosystem.

When you create a new “MLFlow Experiment” you will be prompted for a project name and also an artefact location to be used as an artefact store. The location represents where you will capture data and models which have been produced during the MLFlow experimentation. The location you enter needs to follow the convention “dbfs:/<location in DBFS>”. The location in DBFS can either be in DBFS (Databricks File System) itself or a location which is mapped with an external mount point to a location such as an S3 bucket.

Once an MLFlow experiment has been configured you will be presented with the experimentation tracking screen. It is here that you can see the outputs of your models as they are trained. You have the flexibility to filter multiple runs based on parameters or metrics.

Once you have created an experiment you need to make a note of the Experiment ID. It is this which you will need to use during the configuration of MLFlow in each notebook to point back to this individual location.

To connect the experimentation tracker to your model development notebooks you need to tell MLFlow which experiment you’re using:

with mlflow.start_run(experiment_id="238439083735002")

Once MLFlow is configured to point to the experiment ID, each execution will begin to log and capture any metrics you require. As well as metrics you can also capture parameters and data. Data will be stored with the created model, which enables a nice pipeline for reusability as discussed previously.

To start logging a parameter you can simply add the following:

mlflow.log_param("numTrees", numTrees)

To log a loss metric you can do the following:

mlflow.log_metric("rmse", evaluator.evaluate(predictionsDF))

Once metrics have been captured, it is easy to see how each parameter contributes to the overall effectiveness of your model. There are various visualisations to help you explore the different combinations of parameters to decide which model and approach suits the problem you’re solving.
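Putting those pieces together, a typical tracking cell might look like this; the experiment id, parameter and metric values are placeholders:

import mlflow

num_trees = 100   # placeholder hyperparameter
rmse = 0.27       # placeholder metric from your evaluator

with mlflow.start_run(experiment_id="238439083735002"):
    mlflow.log_param("numTrees", num_trees)
    # ... train and evaluate the model here ...
    mlflow.log_metric("rmse", rmse)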
Recently MLFlow implemented an auto-logging function which currently only supports Keras. When this is turned on, all parameters and metrics will be captured automatically, which is really helpful and significantly reduces the amount of boiler-plate code you need to add.

Models can be logged as discussed earlier with:

mlflow.mleap.log_model(spark_model=model, sample_input=df, artifact_path="model")

Managed MLflow is a great option if you’re already using Databricks. Experiment capture is just one of the great features on offer. At the Spark & AI Summit, MLFlow’s functionality to support model versioning was announced. When used in combination with the tracking capability of MLFlow, moving a model from development into production is as simple as a few clicks using the new Model Registry.

In this blog post I documented my [opinionated] data science project template which has production deployment in the cloud in mind when developing locally. The entire aim of this template is to apply best practices, reduce technical debt and avoid re-engineering. It demonstrated how to use Spark to create data pipelines and log models with Mlflow for easy management of experiments and deployment of models. I hope this saves you time when data sciencing. ❤️

The Data Science Project Template can be found on GitLab:

gitlab.com

I have not always been a strong engineer and solution architect. I am standing on the shoulders of giants and special thanks goes to my friends Terry Mccann and Simon Whiteley from www.advancinganalytics.co.uk

Jan is a successful thought leader and consultant in the data transformation of companies and has a track record of bringing data science into commercial production usage at scale. He has recently been recognised by dataIQ as one of the 100 most influential data and analytics practitioners in the UK.

Connect on LinkedIn: https://www.linkedin.com/in/janteichmann/

Read other articles: https://medium.com/@jan.teichmann
[ { "code": null, "e": 353, "s": 172, "text": "Data science has come a long way as a field and business function alike. There are now cross-functional teams working on algorithms all the way to full-stack data science products." }, { "code": null, "e": 790, "s": 353, "text": "With the growing maturity of data science there is an emerging standard of best practise, platforms and toolkits which significantly reduced the barrier of entry and price point of a data science team. This has made data science more accessible for companies and practitioners alike. For the majority of commercially applied teams, data scientists can stand on the shoulders of a quality open source community for their day-to-day work." }, { "code": null, "e": 1149, "s": 790, "text": "But the success stories are still overshadowed by the many data science projects which fail to gain business adaptation. It’s commonly reported that over 80% of all data science projects still fail to deliver business impact. With all the high quality open-source toolkits, why does data science struggle to deliver business impact? It’s not a Maths problem!" }, { "code": null, "e": 1242, "s": 1149, "text": "The data science success is plagued by something commonly known as the “Last Mile problems”:" }, { "code": null, "e": 1419, "s": 1242, "text": "“Productionisation of models is the TOUGHEST problem in data science” (Schutt, R & O’Neill C. Doing data science straight from the front line, O’Reilly Press. California, 2014)" }, { "code": null, "e": 1618, "s": 1419, "text": "In this blog post I discuss best practices for setting up a data science project, model development and experimentation. The source code for the data science project template can be found on GitLab:" }, { "code": null, "e": 1629, "s": 1618, "text": "gitlab.com" }, { "code": null, "e": 1763, "s": 1629, "text": "You can read about Rendezvous Architecture as a great pattern to operationalise models in production in one of my earlier blog posts:" }, { "code": null, "e": 1786, "s": 1763, "text": "towardsdatascience.com" }, { "code": null, "e": 2050, "s": 1786, "text": "While engineers build the production platforms, DevOps and DataOps patterns to run models in production, many data scientists still work in a way which creates a big gap between data science and production environments. Many data scientists (without any reproach)" }, { "code": null, "e": 2160, "s": 2050, "text": "work locally without much consideration of the cloud environments which might host their models in production" }, { "code": null, "e": 2308, "s": 2160, "text": "work on small datasets in memory using python and pandas without much consideration of how to scale workflows to the big data volumes in production" }, { "code": null, "e": 2392, "s": 2308, "text": "work with Jupyter Notebooks without much consideration for reproducible experiments" }, { "code": null, "e": 2511, "s": 2392, "text": "rarely break up projects into independent tasks to build decoupled pipelines for ETL steps, model training and scoring" }, { "code": null, "e": 2690, "s": 2511, "text": "And very rarely are best practices for software engineering applied to data science projects, e.g. abstracted and reusable code, unit testing, documentation, version control etc." }, { "code": null, "e": 2776, "s": 2690, "text": "Unfortunately, our beloved flexible Jupyter Notebooks play an important part in this." 
}, { "code": null, "e": 3128, "s": 2776, "text": "It’s important to keep in mind that data science is a field and business function undergoing rapid innovation. If you read this a month after I published it, there might be already new tools and better ways to organise data science projects. Exciting times to be a data scientist! Let’s have a look at the details of the data science project template:" }, { "code": null, "e": 3531, "s": 3128, "text": "A data science project consists of many moving parts and the actual model can easily be the fewest lines of code in your project. Data is the fuel and foundation of your project and, firstly, we should aim for solid, high quality and portable foundations for our project. Data pipelines are the hidden technical debt in most data science projects and you probably have heard of the infamous 80/20 rule:" }, { "code": null, "e": 3591, "s": 3531, "text": "80% of Data Science is Finding, Cleaning and Preparing Data" }, { "code": null, "e": 3691, "s": 3591, "text": "I will explain below how Mlflow and Spark can help us to be more productive when working with data." }, { "code": null, "e": 4042, "s": 3691, "text": "Creating your data science model itself is a continuous back and forth between experimentation and expanding a project code base to capture the code and logic that worked. This can get messy and Mlflow is here to make experimentation and model management significantly easier for us. It will also simplify model deployment for us. More on that later!" }, { "code": null, "e": 4245, "s": 4042, "text": "You probably heard of the 80/20 rule of data science: a lot of the data science work is about creating data pipelines to consume raw data, clean data and engineer features to feed our models at the end." }, { "code": null, "e": 4258, "s": 4245, "text": "We want that" }, { "code": null, "e": 4364, "s": 4258, "text": "Our data is immutable: we can only create new data sets but not change existing data in place. Therefore," }, { "code": null, "e": 4582, "s": 4364, "text": "our pipeline is a set of decoupled steps in a DAG incrementally increasing the quality and aggregation of our data into features and scores to incorporate our ETL easily with tools like Airflow or Luigi in production." }, { "code": null, "e": 4755, "s": 4582, "text": "Every data set has a defined data schema to read data safely without any surprises which can be logged to a central data catalogue for better data governance and discovery." }, { "code": null, "e": 4861, "s": 4755, "text": "The data science project template has a data folder which holds the project data and associated schemata:" }, { "code": null, "e": 5251, "s": 4861, "text": "Data scientists commonly work with not only big data sets but also with unstructured data. Building out the schemata for a data warehouse requires design work and a good understanding of business requirements. In data science many questions or problem statements were not known when the schemata for a DWH were created. 
That’s why Spark has developed into a gold standard in that space for" }, { "code": null, "e": 5282, "s": 5251, "text": "Working with unstructured data" }, { "code": null, "e": 5335, "s": 5282, "text": "Distributed enterprise mature big data ETL pipelines" }, { "code": null, "e": 5357, "s": 5335, "text": "Data lake deployments" }, { "code": null, "e": 5413, "s": 5357, "text": "Unified batch as well as micro-batch streaming platform" }, { "code": null, "e": 5487, "s": 5413, "text": "The ability to consumer and write data from and to many different systems" }, { "code": null, "e": 5842, "s": 5487, "text": "Projects at companies with mature infrastructure use advanced data lakes which includes data catalogues for data/schema discovery and management, and scheduled tasks with Airflow etc. While a data scientist does not have to necessarily understand these parts of the production infrastructure, it’s best to create projects and artifacts with this in mind." }, { "code": null, "e": 6288, "s": 5842, "text": "This data science project template uses Spark regardless of whether we run it locally on data samples or in the cloud against a data lake. On the one hand, Spark can feel like overkill when working locally on small data samples. But on the other hand, it allows for one transferable stack ready for cloud deployment without any unnecessary re-engineering work. It’s also a repetitive pattern which can be nicely automated, e.g. via the Makefile." }, { "code": null, "e": 6529, "s": 6288, "text": "While version 1 of your model might use structured data from a DWH it’s best to still use Spark and reduce the amount of technical debt in your project in anticipation of version 2 of your model which will use a wider range of data sources." }, { "code": null, "e": 6765, "s": 6529, "text": "For local project development I use a simple Makefile to automate the execution of the data pipelines and project targets. However, this can easily be translated into an Airflow or Luigi pipeline for production deployment in the cloud." }, { "code": null, "e": 6834, "s": 6765, "text": "The end to end data flow for this project is made up of three steps:" }, { "code": null, "e": 6914, "s": 6834, "text": "raw-data: Download the iris raw data from the machine learning database archive" }, { "code": null, "e": 7022, "s": 6914, "text": "interim-data: Execute the feature engineering pipeline to materialise the iris features in batch with Spark" }, { "code": null, "e": 7119, "s": 7022, "text": "processed-data: Execute the classification model to materialise predictions in batch using Spark" }, { "code": null, "e": 7226, "s": 7119, "text": "You can transform the iris raw data into features using a Spark pipeline using the following make command:" }, { "code": null, "e": 7244, "s": 7226, "text": "make interim-data" }, { "code": null, "e": 7503, "s": 7244, "text": "It will zip the current project code base and submits the project/data/features.py script to Spark in our docker container for execution. We use a Mlflow runid to identify and load the desired Spark feature pipeline model but more on the use of Mlflow later:" }, { "code": null, "e": 7629, "s": 7503, "text": "After the execution of our Spark feature pipeline we have the interim feature data materialised in our small local data lake:" }, { "code": null, "e": 7758, "s": 7629, "text": "As you can see, we saved our feature data set with its corresponding schema. 
Spark makes it very easy to save and read schemata:" }, { "code": null, "e": 7863, "s": 7758, "text": "some_sparksql_dataframe.schema.json()T.StructType.fromJson(json.loads(some_sparksql_schema_json_string))" }, { "code": null, "e": 8001, "s": 7863, "text": "Always materialise and read data with its corresponding schema! Enforcing schemata is the key to breaking the 80/20 rule in data science." }, { "code": null, "e": 8346, "s": 8001, "text": "I consider writing a schema as mandatory for csv and json files but I would also do it for any parquet or avro files which automatically preserve their schema. I really recommend reading more about Delta Lake, Apache Hudi, data catalogues and feature stores. If there is interest, I will follow up with an independent blog post on these topics." }, { "code": null, "e": 8809, "s": 8346, "text": "Jupyter Notebooks are very convenient for experimentation and it’s unlikely data scientists will stop using them, very much to the dismay of many engineers who are asked to “productionise” models from Jupyter notebooks. The compromise is to use tools to their strengths. Experimentation in notebooks is productive and works well as long code which proves valuable in experimentation is then added to a code base which follows software engineering best practises." }, { "code": null, "e": 8847, "s": 8809, "text": "My personal workflow looks like this:" }, { "code": null, "e": 9162, "s": 8847, "text": "Experimentation: test code and ideas in a Jupyter notebook firstIncorporate valuable code which works into the Python project codebaseDelete the code cell in the Jupyter notebookReplace it with an import from the project codebaseRestart your kernel and execute all steps in order before moving on to the next task." }, { "code": null, "e": 9227, "s": 9162, "text": "Experimentation: test code and ideas in a Jupyter notebook first" }, { "code": null, "e": 9298, "s": 9227, "text": "Incorporate valuable code which works into the Python project codebase" }, { "code": null, "e": 9343, "s": 9298, "text": "Delete the code cell in the Jupyter notebook" }, { "code": null, "e": 9395, "s": 9343, "text": "Replace it with an import from the project codebase" }, { "code": null, "e": 9481, "s": 9395, "text": "Restart your kernel and execute all steps in order before moving on to the next task." }, { "code": null, "e": 9923, "s": 9481, "text": "It’s important to isolate our data science project environment and manage requirements and dependencies of our Python project. There is no better way to do this than via Docker containers. My project template uses the jupyter all-spark-notebook Docker image from DockerHub as a convenient, all batteries included, lab setup. The project template contains a docker-compose.yml file and the Makefile automates the container setup with a simple" }, { "code": null, "e": 9936, "s": 9923, "text": "make jupyter" }, { "code": null, "e": 10443, "s": 9936, "text": "The command will spin up all the required services (Jupyter with Spark, Mlflow, Minio) for our project and installs all project requirements with pip within our experimentation environment. I use Pipenv to manage my virtual Python environments for my projects and pipenv_to_requirements to create a requirements.txt file for DevOps pipelines and Anaconda based container images. Considering the popularity of Python as a programming language, the Python tooling can sometimes feel cumbersome and complex. 
😒" }, { "code": null, "e": 10661, "s": 10443, "text": "The following screenshot shows the example notebook environment. I use snippets to setup individual notebooks using the %load magic. It also shows how I use code from the project code base to import the raw iris data:" }, { "code": null, "e": 10910, "s": 10661, "text": "The Jupyter notebook demonstrates my workflow of development, expanding the project code base and model experimentation with some additional commentary. https://gitlab.com/jan-teichmann/ml-flow-ds-project/blob/master/notebooks/Iris%20Features.ipynb" }, { "code": null, "e": 11153, "s": 10910, "text": "Version control of Jupyter notebooks in Git is not as user friendly as I wished. The .ipynb file format is not very diff friendly. At least, GitHub and GitLab can now render Jupyter notebooks in their web interfaces which is extremely useful." }, { "code": null, "e": 11387, "s": 11153, "text": "To make version control easier on your local computer, the template also installs the nbdime tool which makes git diffs and merges of Jupyter notebooks clear and meaningful. You can use the following commands as part of your project:" }, { "code": null, "e": 11439, "s": 11387, "text": "nbdiff compare notebooks in a terminal-friendly way" }, { "code": null, "e": 11511, "s": 11439, "text": "nbmerge three-way merge of notebooks with automatic conflict resolution" }, { "code": null, "e": 11566, "s": 11511, "text": "nbdiff-web shows you a rich rendered diff of notebooks" }, { "code": null, "e": 11635, "s": 11566, "text": "nbmerge-web gives you a web-based three-way merge tool for notebooks" }, { "code": null, "e": 11695, "s": 11635, "text": "nbshow present a single notebook in a terminal-friendly way" }, { "code": null, "e": 12382, "s": 11695, "text": "Last but not least, the project template uses the IPython hooks to extend the Jupyter notebook save button with an additional call to nbconvert to create an additional .py script and .html version of the Jupyter notebook every time you save your notebook in a subfolder. On the one hand, this makes it easier for others to check code changes in Git by checking the diff of the pure Python script version. On the other hand, the html version allows anyone to see the rendered notebook outputs without having to start a Jupyter notebook server. Something which makes it significantly easier to email parts of a notebook output to colleagues in the wider business who do not use Jupyter. 👏" }, { "code": null, "e": 12543, "s": 12382, "text": "To enable the automatic conversion of notebooks on every save simply place an empty .ipynb_saveprocress file in the current working directory of your notebooks." }, { "code": null, "e": 12876, "s": 12543, "text": "As part of our experimentation in Jupyter we need to keep track of parameters, metrics and artifacts we create. Mlflow is a great tool to create reproducible and accountable data science projects. It provides a central tracking server with a simple UI to browse experiments and powerful tooling to package, manage and deploy models." }, { "code": null, "e": 13250, "s": 12876, "text": "In our data science project template we simulate a production Mlflow deployment with a dedicated tracking server and artifacts being stored on S3. We use Min.io locally as an open-source S3 compatible stand-in. There shouldn’t be any differences in the behaviour and experience of using Mlflow whether you use it locally, hosted in the cloud or fully managed on Databricks." 
}, { "code": null, "e": 13445, "s": 13250, "text": "The docker-compose.yml file provides the required services for our project. You can access the blob storage UI on http://localhost:9000/minio and the Mlflow tracking UI on http://localhost:5000/" }, { "code": null, "e": 13780, "s": 13445, "text": "In Mlflow we have named experiments which hold any number of runs. Each run can track parameters, metrics and artifacts and has a unique run identifier. Mlflow 1.4 also just released a Model Registry to make it easier to organise runs and models around a model lifecycle, e.g. Production and Staging deployments of different versions." }, { "code": null, "e": 13820, "s": 13780, "text": "This example project trains two models:" }, { "code": null, "e": 13869, "s": 13820, "text": "a Spark feature pipeline anda Sklearn classifier" }, { "code": null, "e": 13898, "s": 13869, "text": "a Spark feature pipeline and" }, { "code": null, "e": 13919, "s": 13898, "text": "a Sklearn classifier" }, { "code": null, "e": 14426, "s": 13919, "text": "Our Spark feature pipeline uses the Spark ML StandardScaler which makes the pipeline stateful. Therefore we treat our feature engineering in exactly the same way we would treat any other data science model. The aim of this example is to use two models using different frameworks in conjuncture in both batch and real-time scoring without any re-engineering of the models themselves. This will hopefully demonstrate the power of using Mlflow to simplify the management and deployment of data science models!" }, { "code": null, "e": 14523, "s": 14426, "text": "We simply follow the Mlflow convention of logging trained models to the central tracking server." }, { "code": null, "e": 15102, "s": 14523, "text": "The only gotcha is that the current boto3 client and Minio are not playing well together when you try to upload empty files with boto3 to minio. Spark serialises models with empty _SUCCESS files which cause the standard mlflow.spark.log_model() call to timeout. We circumvent this problem by saving the serialised models to disk locally and log them using the minio client instead using the project.utility.mlflow.log_artifacts_minio() function. The following code trains a new spark feature pipeline and logs the pipeline in both flavours: Spark and Mleap. More on Mleap later." }, { "code": null, "e": 15366, "s": 15102, "text": "Within our project the models are saved and logged in the models folder where the different docker services persist their data. We can find the data from the Mlflow tracking server in the models/mlruns subfolder and the saved artifacts in the models/s3 subfolder." }, { "code": null, "e": 15637, "s": 15366, "text": "The Jupyter notebooks in the example project hopefully give a good idea of how to use Mlflow to track parameters and metrics and log models. Mlflow makes serialising and loading models a dream and removed a lot of boilerplate code from my previous data science projects." }, { "code": null, "e": 15831, "s": 15637, "text": "Our aim is to use the very same models with their different technologies and flavours to score our data in batch as well as in real-time without any changes, re-engineering or code duplication." }, { "code": null, "e": 16182, "s": 15831, "text": "Our target for batch scoring is Spark. While our feature pipeline is already a Spark pipeline, our classifier is a Python sklearn model. 
But fear not, Mlflow makes working with models extremely easy and there is a convenient function to package a Python model into a Spark SQL UDF to distribute our classifier across a Spark cluster. Just like magic!" }, { "code": null, "e": 16415, "s": 16182, "text": "Load the serialised Spark feature pipelineLoad the serialised Sklearn model as a Spark UDFTransform raw data with the feature pipelineTurn the Spark feature vectors into default Spark arraysCall the UDF with the expanded array items" }, { "code": null, "e": 16458, "s": 16415, "text": "Load the serialised Spark feature pipeline" }, { "code": null, "e": 16507, "s": 16458, "text": "Load the serialised Sklearn model as a Spark UDF" }, { "code": null, "e": 16552, "s": 16507, "text": "Transform raw data with the feature pipeline" }, { "code": null, "e": 16609, "s": 16552, "text": "Turn the Spark feature vectors into default Spark arrays" }, { "code": null, "e": 16652, "s": 16609, "text": "Call the UDF with the expanded array items" }, { "code": null, "e": 16762, "s": 16652, "text": "There are some gotchas with the correct version of PyArrow and that the UDF does not work with Spark vectors." }, { "code": null, "e": 17183, "s": 16762, "text": "But from an example it’s very easy to make it work. I hope this saves you the trouble of endless Spark Python Java backtraces and maybe future versions will simplify the integration even further. For now, use PyArrow 0.14 with Spark 2.4 and turn the Spark vectors into numpy arrays and then into a python list as Spark cannot yet deal with the numpy dtypes. Problems you probably encountered before working with PySpark." }, { "code": null, "e": 17334, "s": 17183, "text": "All the detailed code is in the Git repository. https://gitlab.com/jan-teichmann/ml-flow-ds-project/blob/master/notebooks/Iris%20Batch%20Scoring.ipynb" }, { "code": null, "e": 17627, "s": 17334, "text": "We would like to use our model to score requests interactively in real-time using an API and not rely on Spark to host our models. We want our scoring service to be lightning fast and consist of containerised micro services. Again, Mlflow provides us with most things we need to achieve that." }, { "code": null, "e": 17827, "s": 17627, "text": "Our sklearn classifier is a simple Python model and combining this with an API and package it into a container image is straightforward. It’s such a common pattern that Mlflow has a command for this:" }, { "code": null, "e": 17854, "s": 17827, "text": "mlflow models build-docker" }, { "code": null, "e": 18027, "s": 17854, "text": "That’s all that is needed to package Python flavoured models with Mlflow. No need to write the repetitive code for an API with Flask to containerise a data science model. 🙌" }, { "code": null, "e": 18220, "s": 18027, "text": "Unfortunately, our feature pipeline is a Spark model. However, we serialised the pipeline in the Mleap flavour which is a project to host Spark pipelines without the need of any Spark context." }, { "code": null, "e": 18444, "s": 18220, "text": "Mleap is ideal for the use-cases where data is small and we do not need any distributed compute and speed instead is most important. 
Packaging the Mleap model is automated in the Makefile but consist of the following steps:" }, { "code": null, "e": 18768, "s": 18444, "text": "Download the model artifacts with Mlflow into a temporary folderZip the artifact for deploymentRun the combustml/mleap-spring-boot docker image and mount our model artifact as a volumeDeploy the model artifact for serving using the mleap server APIPass JSON data to the transform API endpoint of our served feature pipeline" }, { "code": null, "e": 18833, "s": 18768, "text": "Download the model artifacts with Mlflow into a temporary folder" }, { "code": null, "e": 18865, "s": 18833, "text": "Zip the artifact for deployment" }, { "code": null, "e": 18955, "s": 18865, "text": "Run the combustml/mleap-spring-boot docker image and mount our model artifact as a volume" }, { "code": null, "e": 19020, "s": 18955, "text": "Deploy the model artifact for serving using the mleap server API" }, { "code": null, "e": 19096, "s": 19020, "text": "Pass JSON data to the transform API endpoint of our served feature pipeline" }, { "code": null, "e": 19455, "s": 19096, "text": "Run the make deploy-realtime-model command and you get 2 microservices: one for creating the features using Mleap and one for classification using Sklearn. The Python script in project/model/score.py wraps the calls to these two microservices into a convenient function for easy use. Run make score-realtime-model for an example call to the scoring services." }, { "code": null, "e": 19649, "s": 19455, "text": "You can also call the microservices from the Jupyter notebooks. The following code shows just how fast our interactive scoring service is: less than 20ms combined for a call to both model APIs." }, { "code": null, "e": 19966, "s": 19649, "text": "The example project uses Sphinx to create the documentation. Change into the docs directory and run make html to produce html docs for your project. Simply add Sphinx RST formatted documentation to the python doc strings and include modules to include in the produced documentation in the docs/source/index.rst file." }, { "code": null, "e": 20037, "s": 19966, "text": "Unit tests go into the test folder and python unittest can be run with" }, { "code": null, "e": 20047, "s": 20037, "text": "make test" }, { "code": null, "e": 20229, "s": 20047, "text": "At the time of writing this blog post the data science project template has — like most data science projects — no tests 😱 I hope with some extra time and feedback this will change!" }, { "code": null, "e": 20502, "s": 20229, "text": "No blog post about Mlflow would be complete without a discussion of the Databricks platform. The data science community would not be the same without the continued open source contributions of Databricks and we have to thank them for the original development of Mlflow. ❤️" }, { "code": null, "e": 20852, "s": 20502, "text": "Therefore, an alternative approach to running MLFlow is to leverage the Platform-as-a-Service version of Apache Spark offered by Databricks. Databricks gives you the ability to run MLFlow with very little configuration, commonly referred to as a “Managed MLFlow”. You can find a feature comparison here: https://databricks.com/product/managed-mlflow" }, { "code": null, "e": 21149, "s": 20852, "text": "Simply install the MLFlow package in your project environment with pip and you have everything you need. It is worth noting that the version of MLFlow in Databricks is not the full version that has been described already. 
It is rather an optimised version to work inside the Databricks ecosystem." }, { "code": null, "e": 21699, "s": 21149, "text": "When you create a new “MLFlow Experiment” you will be prompted for a project name and also an artefact location to be used as an artefact store. The location represents where you will capture data and models which have been produced during the MLFlow experimentation. The location that you need to enter needs to follow the following convention “dbfs:/<location in DBFS>”. The location in DBFS can either be in DBFS (Databricks File System) or this can be a location which is mapped with an external mount point — to a location such as an S3 bucket." }, { "code": null, "e": 22181, "s": 21699, "text": "Once an MLFlow experiment has been configured you will be presented with the experimentation tracking screen. It is here that you can see the outputs of your models as they are trained. You have the flexibility to filter multiple runs based on parameters or metrics. Once you have created an experiment you need to make a note of the Experiment ID. It is this which you will need to use during the configuration of MLFlow in each notebook to point back to this individual location." }, { "code": null, "e": 22311, "s": 22181, "text": "To connect the experimentation tracker to your model development notebooks you need to tell MLFlow which experiment you’re using:" }, { "code": null, "e": 22366, "s": 22311, "text": "with mlflow.start_run(experiment_id=\"238439083735002\")" }, { "code": null, "e": 22732, "s": 22366, "text": "Once MLFlow is configured to point to the experiment ID, each execution will begin to log and capture any metrics you require. As well as metrics you can also capture parameters and data. Data will be stored with the created model, which enables a nice pipeline for reusability as discussed previously. To start logging a parameter you can simply add the following:" }, { "code": null, "e": 22771, "s": 22732, "text": "mlflow.log_param(\"numTrees\", numTrees)" }, { "code": null, "e": 22818, "s": 22771, "text": "To log a loss metric you can do the following:" }, { "code": null, "e": 22879, "s": 22818, "text": "mlflow.log_metric(\"rmse\", evaluator.evaluate(predictionsDF))" }, { "code": null, "e": 23429, "s": 22879, "text": "Once metrics have been captured, it is easy to see how each parameter contributes to the overall effectiveness of your model. There are various visualisations to help you explore the different combinations of parameters to decide which model and approach suits the problem you’re solving. Recently MLFlow implemented an auto-logging function which currently only support Keras. When this is turned on, all parameters and metrics will be auto captured, this is really helpful, it significantly reduces the amount of boiler-plate code you need to add." }, { "code": null, "e": 23477, "s": 23429, "text": "Models can be logged as discussed earlier with:" }, { "code": null, "e": 23559, "s": 23477, "text": "mlflow.mleap.log_model(spark_model=model, sample_input=df, artifact_path=\"model\")" }, { "code": null, "e": 23953, "s": 23559, "text": "Managed MLflow is a great option if you’re already using Databricks. Experiment capture is just one of the great features on offer. At the Spark & AI Summit, MLFlows functionality to support model versioning was announced. When used in combination with the tracking capability of MLFlow, moving a model from development into production is a simple as a few clicks using the new Model Registry." 
}, { "code": null, "e": 24217, "s": 23953, "text": "In this blog post I documented my [opinionated] data science project template which has production deployment in the cloud in mind when developing locally. The entire aim of this template is to apply best practices, reduce technical debt and avoid re-engineering." }, { "code": null, "e": 24363, "s": 24217, "text": "It demonstrated how to use Spark to create data pipelines and log models with Mlflow for easy management of experiments and deployment of models." }, { "code": null, "e": 24414, "s": 24363, "text": "I hope this saves you time when data sciencing. ❤️" }, { "code": null, "e": 24472, "s": 24414, "text": "The Data Science Project Template can be found on GitLab:" }, { "code": null, "e": 24483, "s": 24472, "text": "gitlab.com" }, { "code": null, "e": 24693, "s": 24483, "text": "I have not always been a strong engineer and solution architect. I am standing on the shoulders of giants and special thanks goes to my friends Terry Mccann and Simon Whiteley from www.advancinganalytics.co.uk" }, { "code": null, "e": 24995, "s": 24693, "text": "Jan is a successful thought leader and consultant in the data transformation of companies and has a track record of bringing data science into commercial production usage at scale. He has recently been recognised by dataIQ as one of the 100 most influential data and analytics practitioners in the UK." }, { "code": null, "e": 25058, "s": 24995, "text": "Connect on LinkedIn: https://www.linkedin.com/in/janteichmann/" } ]
How do I manually throw/raise an exception in Python?
We use the most specific exception constructor that fits our specific issue rather than raising a generic exception. If we raise a generic exception instead, anyone who wants to catch it precisely also has to catch every more specific exception that subclasses it. So we should raise specific exceptions and handle those same specific exceptions. To raise a specific exception we use the raise statement, as follows.

import sys
try:
   f = float('Tutorialspoint')  # raises ValueError, so the lines below never run
   print(f)
   raise ValueError
except Exception:
   print(sys.exc_info())

We get the following output

(<class 'ValueError'>, ValueError("could not convert string to float: 'Tutorialspoint'"), <traceback object at 0x0000000002E33748>)

We can also raise an exception with arguments, as in the following example

try:
   raise ValueError('foo', 23)
except ValueError as e:
   print(e.args)

We get the following output

('foo', 23)
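When none of the built-in exceptions describes the problem well, it is also common to define and raise a small exception class of our own. The ValidationError class and set_age function below are purely illustrative, not part of the standard library:

class ValidationError(Exception):
   pass

def set_age(age):
   # raise our own specific exception instead of a generic one
   if age < 0:
      raise ValidationError('age must be non-negative, got {}'.format(age))

try:
   set_age(-1)
except ValidationError as e:
   print(e)

This prints

age must be non-negative, got -1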
Get all subdirectories of a given directory in PHP
To get the subdirectories present in a directory, the below lines of code can be used. Note that the pattern '*' matches every entry in the current directory and array_filter with the is_dir callback keeps only the directories −

<?php
   $all_sub_dirs = array_filter(glob('*'), 'is_dir');
   print_r($all_sub_dirs);
?>

This will produce output of the following form, listing only the subdirectories of the current directory −

Array (
   [0] => example
   [1] => exam
   [2] => log
)

To get only the directories without the extra filtering step, glob can be called with the GLOB_ONLYDIR flag −

<?php
   $somePath = '.'; // path of the directory to inspect
   $all_dirs = glob($somePath . '/*' , GLOB_ONLYDIR);
   print_r($all_dirs);
?>

This will produce the following output −

Array (
   [0] => ./example
   [1] => ./exam
   [2] => ./log
)
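If we need the subdirectories nested at any depth rather than only the immediate children, PHP's SPL iterators can walk the whole tree. This is a minimal sketch; $path is assumed to hold the directory to scan −

<?php
   $path = '.';
   $iterator = new RecursiveIteratorIterator(
      new RecursiveDirectoryIterator($path, FilesystemIterator::SKIP_DOTS),
      RecursiveIteratorIterator::SELF_FIRST
   );
   foreach ($iterator as $item) {
      // print only the directories encountered during the walk
      if ($item->isDir()) {
         echo $item->getPathname() . PHP_EOL;
      }
   }
?>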
Deploy MLflow with docker compose | by Guillaume Androz | Towards Data Science
When in the process of building and training machine learning models, it's very important to keep track of the results of each experiment. For deep learning models, TensorBoard is a very powerful tool to log training performances, track gradients, debug a model and so on. We also need to track the associated source code, and whereas Jupyter Notebooks are hard to version, we can definitely use a VCS such as git to help us. However, we also need a tool to help us keep track of the experiment context: the choice of hyperparameters, the dataset used for an experiment, the resulting model, etc. MLflow has been explicitly developed for that purpose, as stated on their website

MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility and deployment.

— https://mlflow.org/

For that purpose, MLflow offers the component MLflow Tracking, which is a web server that allows the tracking of our experiments/runs.

In this post, I will show the steps to set up such a tracking server, progressively adding components that will eventually be gathered into a docker-compose file. The docker approach is particularly convenient if MLflow has to be deployed on a remote server, for example on EC2, without having to configure the server by hand every time we need a new one.

The first step to install an MLflow server is straightforward: we only need to install the python package. I will assume that python is installed on your machine and that you are comfortable with creating a virtual environment. For that purpose, I find conda more convenient than pipenv

$ conda create -n mlflow-env python=3.7
$ conda activate mlflow-env
(mlflow-env)$ pip install mlflow

From this very basic first step, our MLflow tracking server is ready to use; all that remains is launching it with the command:

(mlflow-env)$ mlflow server

We can also specify the host address that tells the server to listen on all public IPs. Although this is a very insecure approach (the server is unauthenticated and unencrypted), we will later run the server behind a reverse proxy such as NGINX, or in a virtual private network, to control the accesses.

(mlflow-env)$ mlflow server --host 0.0.0.0

Here the 0.0.0.0 IP tells the server to listen to all incoming IPs.

We now have a running server to track our experiments and runs, but to go further we need to tell the server where to store the artifacts. For that, MLflow offers several possibilities:

Amazon S3
Azure Blob Storage
Google Cloud Storage
FTP server
SFTP Server
NFS
HDFS

As my goal is to host an MLflow server on a cloud instance, I've chosen to use Amazon S3 as the artifacts store. All we need is to slightly modify the command that runs the server:

(mlflow-env)$ mlflow server --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

where mlflow_bucket is an S3 bucket that has been created beforehand. Here you would ask "how the hell does MLflow access my S3 bucket?". Well, simply read the documentation hehe

MLflow obtains credentials to access S3 from your machine's IAM role, a profile in ~/.aws/credentials, or the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY depending on which of these are available.

— https://www.mlflow.org/docs/latest/tracking.html

So the most practical option is to use an IAM role, especially if we want to run the server on an AWS EC2 instance. Using a profile is much the same as using environment variables, but for the illustration I'll use environment variables, as further explained with the use of docker-compose.
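Concretely, that just means the two standard AWS variables have to be present in the environment of the process running the tracking server; the values below are placeholders, not real credentials:

export AWS_ACCESS_KEY_ID=<my_access_key_id>
export AWS_SECRET_ACCESS_KEY=<my_secret_access_key>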
So our tracking server stores artifacts on S3, fine. However, the hyperparameters, comments and so on are still stored in files on the hosting machine. Files are arguably not a good backend store, and we prefer a database backend. MLflow supports various database dialects (essentially the same as SQLAlchemy): mysql, mssql, sqlite, and postgresql.

At first, I like to use SQLite as it is a compromise between files and databases: the whole database is stored in one file that can easily be moved. The syntax is the same as SQLAlchemy's:

(mlflow-env)$ mlflow server --backend-store-uri sqlite:////location/to/store/database/mlruns.db --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

Keeping in mind that we want to use docker containers, it is not a good idea to store those files locally because we'll lose the database after each restart of the containers. Of course, we could still mount a docker volume backed by an EBS volume on our EC2 instance, but it is cleaner to use a dedicated database server. For that purpose, I like to use MySQL. As we'll use docker for deployment, let's postpone the MySQL server installation (it will be a simple docker container from an official docker image) and focus on MLflow usage. First, we need to install the python driver we want to use to interact with MySQL. I like pymysql because it is very simple to install, very stable and well documented. So on the MLflow server host, run the command

(mlflow-env)$ pip install pymysql

and now we can update the command that runs the server, following the SQLAlchemy syntax

(mlflow-env)$ mlflow server --backend-store-uri mysql+pymysql://mlflow:strongpassword@db:3306/db --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

where we assume that our database server's name is db and that it listens on port 3306. We also use the user mlflow with the very strong password strongpassword. Here again, this is not very secure in a production context, but when deploying with docker-compose, we can use environment variables.

As mentioned earlier, we'll run the MLflow tracking server behind a reverse proxy, NGINX. For that, here again, we'll use an official docker image and simply replace the default configuration /etc/nginx/nginx.conf by a minimal one, sketched below. You can play with this basic configuration file if you need further customization. And finally, we'll make a configuration for the MLflow server itself, stored in /etc/nginx/sites-enabled/mlflow.conf and also sketched below. Notice the URL used to refer to the MLflow application, http://web:5000: the MLflow server uses port 5000, and the app will run in a docker-compose service whose name will be web.

As stated before, we want to run all of this in docker containers. The architecture is simple and is composed of three containers:

A MySQL database server,
A MLflow server,
A reverse proxy NGINX

For the database server, we'll use the official MySQL image without any modification. For the MLflow server, we can build a container on a Debian slim image; the Dockerfile is very simple and is sketched below as well. And finally, the NGINX reverse proxy is also based on the official image, plus the two configuration files described above.

Now we have all of this set up, it's time to gather it all in a docker-compose file. It will then be possible to launch our MLflow tracking server with only one command, which is very convenient. Our docker-compose file is composed of three services: one for the backend, i.e. a MySQL database, one for the reverse proxy and one for the MLflow server itself.
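What follows are minimal sketches of the four files discussed above; the exact contents (base images, pinned versions, service and variable names) are assumptions kept consistent with the text, not a definitive setup. First, the replacement /etc/nginx/nginx.conf, which mainly loads the site configurations:

user  nginx;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    # pick up the MLflow site configuration defined below
    include /etc/nginx/sites-enabled/*.conf;
}

Next, /etc/nginx/sites-enabled/mlflow.conf, which proxies incoming traffic to the MLflow service:

server {
    listen 80;
    server_name _;

    location / {
        # "web" is the docker-compose service name of the MLflow server
        proxy_pass http://web:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Then the Dockerfile for the MLflow server, on a Debian slim base:

FROM python:3.7-slim-buster
# mlflow itself, the S3 client and the MySQL driver
RUN pip install mlflow boto3 pymysql
EXPOSE 5000

And finally the docker-compose.yml tying the three services together; the ${...} variables are resolved from the host environment or an .env file:

version: "3.7"

services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - backend

  web:
    build: ./mlflow   # the folder holding the Dockerfile above
    restart: always
    depends_on:
      - db
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${MYSQL_USER}:${MYSQL_PASSWORD}@db:3306/${MYSQL_DATABASE}
      --default-artifact-root s3://mlflow_bucket/mlflow/
      --host 0.0.0.0
    networks:
      - frontend
      - backend

  nginx:
    build: ./nginx    # official image plus the two configuration files above
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web
    networks:
      - frontend

networks:
  frontend:
  backend:

volumes:
  dbdata: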
First thing to notice: we have built two custom networks to isolate the frontend (MLflow UI) from the backend (MySQL database). Only the web service, i.e. the MLflow server, can talk to both. Secondly, as we don't want to lose all the data when the containers go down, the content of the MySQL database is persisted in a mounted volume named dbdata. Lastly, this docker-compose file will be launched on an EC2 instance, but as we do not want to hard-code AWS keys or the database connection string, we use environment variables. Those environment variables can be located directly on the host machine or inside an .env file in the same directory as the docker-compose file. All that remains is building and running the containers

$ docker-compose up -d --build

The -d option tells docker-compose to execute the containers in detached mode, and the --build option indicates that we want to build the docker images if needed before running them.

And that's all! We now have a perfectly running remote MLflow tracking server we can share within our team. This server can easily be deployed anywhere with only one command thanks to docker-compose.

Happy machine learning!
Sklearn pipeline tutorial | Towards Data Science
Most data science projects (as keen as I am to say all of them) require a certain level of data cleaning and preprocessing to make the most of the machine learning models. Some common preprocessing steps or transformations are:

a. Imputing missing values
b. Removing outliers
c. Normalising or standardising numerical features
d. Encoding categorical features

Scikit-learn has a bunch of functions that support this kind of transformation, such as StandardScaler, SimpleImputer, etc., under the preprocessing package.

A typical and simplified data science workflow would look like:

Get the training data
Clean/preprocess/transform the data
Train a machine learning model
Evaluate and optimise the model
Clean/preprocess/transform new data
Fit the model on new data to make predictions.

You may notice that data preprocessing has to be done at least twice in the workflow. As tedious and time-consuming as this step is, how nice it would be if we could automate the process and apply it to all future new datasets.

The good news is: YES, WE ABSOLUTELY CAN! With the scikit-learn pipeline, we can easily systemise the process and therefore make it extremely reproducible. In the following, I'll walk you through the process of using the scikit-learn pipeline to make your life easier.

**Disclaimer: the purpose of this piece is to understand the usage of the scikit-learn pipeline, not to train a perfect machine learning model.

We'll be using the 'daily-bike-share' data from Microsoft's fantastic machine learning studying material.

import pandas as pd
import numpy as np
data = pd.read_csv('https://raw.githubusercontent.com/MicrosoftDocs/ml-basics/master/data/daily-bike-share.csv')
data.dtypes

data.isnull().sum()

Luckily this dataset doesn't have missing values. Although it seems all features are numeric, there are actually some categorical features we need to identify. Those are ['season', 'mnth', 'holiday', 'weekday', 'workingday', 'weathersit']. For the sake of illustration, I'll still treat the dataset as having missing values. Let's filter out some obviously useless features first.

data = data[['season', 'mnth', 'holiday', 'weekday', 'workingday',
             'weathersit', 'temp', 'atemp', 'hum', 'windspeed', 'rentals']]

Before creating the pipeline, we need to split the data into a training set and a testing set first.

from sklearn.model_selection import train_test_split
X = data.drop('rentals', axis=1)
y = data['rentals']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)

The main parameter of a pipeline we'll be working on is 'steps'. From the documentation, it is a 'list of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator.'

It's easier to just have a glance at what the pipeline should look like:

Pipeline(steps=[('name_of_preprocessor', preprocessor),
                ('name_of_ml_model', ml_model())])

The 'preprocessor' is the complex bit; we have to create that ourselves. Let's crack on!

Preprocessor

The packages we need are as follows:

from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

Firstly, we need to define the transformers for both numeric and categorical features.
A transforming step is represented by a tuple. In that tuple, you first define the name of the transformer, and then the function you want to apply. The order of the tuples is the order in which the pipeline applies the transforms. Here, we first deal with missing values, then standardise numeric features and encode categorical features.

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='mean'))
    ,('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant'))
    ,('encoder', OrdinalEncoder())
])

The next thing we need to do is to specify which columns are numeric and which are categorical, so we can apply the transformers accordingly. We apply the transformers to features by using ColumnTransformer; applying the transformers to features gives us our preprocessor. Similar to a pipeline, we pass a list of tuples, each composed of ('name', 'transformer', 'features'), to the parameter 'transformers'.

numeric_features = ['temp', 'atemp', 'hum', 'windspeed']
categorical_features = ['season', 'mnth', 'holiday', 'weekday', 'workingday', 'weathersit']
preprocessor = ColumnTransformer(
    transformers=[
        ('numeric', numeric_transformer, numeric_features)
        ,('categorical', categorical_transformer, categorical_features)
])

Some people would create the list of numeric/categorical features based on the data type, like the following:

numeric_features = data.select_dtypes(include=['int64', 'float64']).columns
categorical_features = data.select_dtypes(include=['object']).drop(['Loan_Status'], axis=1).columns

I personally don't recommend this, because if you have categorical features disguised as a numeric data type, e.g. this dataset, you won't be able to identify them. Only use this method when you're 100% sure that only numeric features are of numeric data types.

Estimator

After assembling our preprocessor, we can then add in the estimator, which is the machine learning algorithm you'd like to apply, to complete our preprocessing and training pipeline. Since in this case the target variable is continuous, I'll apply a Random Forest Regression model here.

from sklearn.ensemble import RandomForestRegressor
pipeline = Pipeline(steps = [
    ('preprocessor', preprocessor)
    ,('regressor', RandomForestRegressor())
])

To create the model, similar to what we used to do with a machine learning algorithm, we use the 'fit' function of the pipeline.

rf_model = pipeline.fit(X_train, y_train)
print(rf_model)

Use the normal methods to evaluate the model.

from sklearn.metrics import r2_score
predictions = rf_model.predict(X_test)
print(r2_score(y_test, predictions))
>> 0.7355156699663605

To maximise reproducibility, we'd like to use this model repeatedly for our new incoming data. Let's save the model using the 'joblib' package, which stores it as a pickle file.

import joblib
joblib.dump(rf_model, './rf_model.pkl')

Now we can call this pipeline, which includes all sorts of data preprocessing we need and the trained model, whenever we need it.

# In other notebooks
rf_model = joblib.load('PATH/TO/rf_model.pkl')
new_prediction = rf_model.predict(new_data)

Before knowing the scikit-learn pipeline, I always had to redo the whole data preprocessing and transformation whenever I wanted to apply the same model to different datasets. It was a really tedious process. I tried to write a function to do all of it, but the result wasn't really satisfactory and it didn't save me much work.

The scikit-learn pipeline really makes my workflows smoother and more flexible.
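One example of that flexibility: because every step in the pipeline has a name, the whole pipeline plugs straight into scikit-learn's model selection tools, with parameters addressed as step__parameter. The following is a minimal sketch (the grid values are illustrative, not tuned choices) that searches over an imputation strategy and the regressor's hyperparameters at the same time:

from sklearn.model_selection import GridSearchCV

param_grid = {
    # reach into the ColumnTransformer, then into the numeric sub-pipeline
    'preprocessor__numeric__imputer__strategy': ['mean', 'median'],
    # and tune the estimator itself
    'regressor__n_estimators': [100, 300],
    'regressor__max_depth': [5, 10],
}

grid_search = GridSearchCV(pipeline, param_grid, cv=5, scoring='r2')
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)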
You can also easily compare the performance of a number of algorithms:

regressors = [
    regressor_1()
    ,regressor_2()
    ,regressor_3()
    ....
]
for regressor in regressors:
    pipeline = Pipeline(steps = [
        ('preprocessor', preprocessor)
        ,('regressor', regressor)
    ])
    model = pipeline.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(regressor)
    print(f'Model r2 score: {r2_score(y_test, predictions)}')

, or adjust the preprocessing/transforming methods. For instance, use the 'median' value to fill missing values, use a different scaler for numeric features, change to one-hot encoding instead of ordinal encoding to handle categorical features, tune the hyperparameters, etc.

from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median'))
    ,('scaler', MinMaxScaler())
])
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant'))
    ,('encoder', OneHotEncoder())
])
pipeline = Pipeline(steps = [
    ('preprocessor', preprocessor)
    ,('regressor', RandomForestRegressor(n_estimators=300, max_depth=10))
])

All of the above adjustments can now be done as simply as changing a parameter in the functions. I hope you find this helpful and any comments or advice are welcome!
[ { "code": null, "e": 400, "s": 171, "text": "Most of the data science projects (as keen as I am to say all of them) require a certain level of data cleaning and preprocessing to make the most of the machine learning models. Some common preprocessing or transformations are:" }, { "code": null, "e": 427, "s": 400, "text": "a. Imputing missing values" }, { "code": null, "e": 448, "s": 427, "text": "b. Removing outliers" }, { "code": null, "e": 499, "s": 448, "text": "c. Normalising or standardising numerical features" }, { "code": null, "e": 532, "s": 499, "text": "d. Encoding categorical features" }, { "code": null, "e": 691, "s": 532, "text": "Sci-kit learn has a bunch of functions that support this kind of transformation, such as StandardScaler, SimpleImputer...etc, under the preprocessing package." }, { "code": null, "e": 749, "s": 691, "text": "A typical and simplified data science workflow would like" }, { "code": null, "e": 948, "s": 749, "text": "Get the training dataClean/preprocess/transform the dataTrain a machine learning modelEvaluate and optimise the modelClean/preprocess/transform new dataFit the model on new data to make predictions." }, { "code": null, "e": 970, "s": 948, "text": "Get the training data" }, { "code": null, "e": 1006, "s": 970, "text": "Clean/preprocess/transform the data" }, { "code": null, "e": 1037, "s": 1006, "text": "Train a machine learning model" }, { "code": null, "e": 1069, "s": 1037, "text": "Evaluate and optimise the model" }, { "code": null, "e": 1105, "s": 1069, "text": "Clean/preprocess/transform new data" }, { "code": null, "e": 1152, "s": 1105, "text": "Fit the model on new data to make predictions." }, { "code": null, "e": 1393, "s": 1152, "text": "You may notice that data preprocessing has to be done at least twice in the workflow. As tedious and time-consuming as this step is, how nice it would be if only we could automate this process and apply it to all of the future new datasets." }, { "code": null, "e": 1650, "s": 1393, "text": "The good news is: YES, WE ABSOLUTELY CAN! With the scikit learn pipeline, we can easily systemise the process and therefore make it extremely reproducible. Following I’ll walk you through the process of using scikit learn pipeline to make your life easier." }, { "code": null, "e": 1790, "s": 1650, "text": "**Disclaimer: the purpose of this piece is to understand the usage of scikit learn pipeline, not to train a perfect machine learning model." }, { "code": null, "e": 1896, "s": 1790, "text": "We’ll be using the ‘daily-bike-share’ data from Microsoft’s fantastic machine learning studying material." }, { "code": null, "e": 2057, "s": 1896, "text": "import pandas as pdimport numpy as npdata = pd.read_csv(‘https://raw.githubusercontent.com/MicrosoftDocs/ml-basics/master/data/daily-bike-share.csv')data.dtypes" }, { "code": null, "e": 2077, "s": 2057, "text": "data.isnull().sum()" }, { "code": null, "e": 2449, "s": 2077, "text": "Luckily this dataset doesn’t have missing values. Although it seems all features are numeric, there are actually some categorical features we need to identify. Those are [‘season’, ‘mnth’, ‘holiday’, ‘weekday’, ‘workingday’, ‘weathersit’]. For the sake of illustration, I’ll still treat it as having missing values. Let’s filter out some obviously useless features first." 
}, { "code": null, "e": 2709, "s": 2449, "text": "data = data[['season' , 'mnth' , 'holiday' , 'weekday' , 'workingday' , 'weathersit' , 'temp' , 'atemp' , 'hum' , 'windspeed' , 'rentals']]" }, { "code": null, "e": 2805, "s": 2709, "text": "Before creating the pipline, we need to split the data into training set and testing set first." }, { "code": null, "e": 2998, "s": 2805, "text": "from sklearn.model_selection import train_test_splitX = data.drop('rentals',axis=1)y = data['rentals']X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)" }, { "code": null, "e": 3252, "s": 2998, "text": "The main parameter of a pipeline we’ll be working on is ‘steps’. From the documentation, it is a ‘list of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator.’" }, { "code": null, "e": 3325, "s": 3252, "text": "It’s easier to just have a glance at what the pipeline should look like:" }, { "code": null, "e": 3431, "s": 3325, "text": "Pipeline(steps=[('name_of_preprocessor', preprocessor), ('name_of_ml_model', ml_model())])" }, { "code": null, "e": 3520, "s": 3431, "text": "The ‘preprocessor’ is the complex bit, we have to create that ourselves. Let’s crack on!" }, { "code": null, "e": 3533, "s": 3520, "text": "Preprocessor" }, { "code": null, "e": 3569, "s": 3533, "text": "The packages we need are as follow:" }, { "code": null, "e": 3756, "s": 3569, "text": "from sklearn.preprocessing import StandardScaler, OrdinalEncoderfrom sklearn.impute import SimpleImputerfrom sklearn.compose import ColumnTransformerfrom sklearn.pipeline import Pipeline" }, { "code": null, "e": 4183, "s": 3756, "text": "Firstly, we need to define the transformers for both numeric and categorical features. A transforming step is represented by a tuple. In that tuple, you first define the name of the transformer, and then the function you want to apply. The order of the tuple will be the order that the pipeline applies the transforms. Here, we first deal with missing values, then standardise numeric features and encode categorical features." }, { "code": null, "e": 4443, "s": 4183, "text": "numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='mean')) ,('scaler', StandardScaler())])categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')) ,('encoder', OrdinalEncoder())])" }, { "code": null, "e": 4848, "s": 4443, "text": "The next thing we need to do is to specify which columns are numeric and which are categorical, so we can apply the transformers accordingly. We apply the transformers to features by using ColumnTransformer. Applying the transformers to features is our preprocessor. Similar to pipeline, we pass a list of tuples, which is composed of (‘name’, ‘transformer’, ‘features’), to the parameter ‘transformers’." 
}, { "code": null, "e": 5169, "s": 4848, "text": "numeric_features = ['temp', 'atemp', 'hum', 'windspeed']categorical_features = ['season', 'mnth', 'holiday', 'weekday', 'workingday', 'weathersit']preprocessor = ColumnTransformer( transformers=[ ('numeric', numeric_transformer, numeric_features) ,('categorical', categorical_transformer, categorical_features)]) " }, { "code": null, "e": 5279, "s": 5169, "text": "Some people would create the list of numeric/categorical features based on the data type, like the following:" }, { "code": null, "e": 5454, "s": 5279, "text": "numeric_features = data.select_dtypes(include=['int64', 'float64']).columnscategorical_features = data.select_dtypes(include=['object']).drop(['Loan_Status'], axis=1).columns" }, { "code": null, "e": 5711, "s": 5454, "text": "I personally don’t recommend this, because if you have categorical features disguised as numeric data type, e.g. this dataset, you won’t be able to identify them. Only use this method when you’re 100% sure that only numeric features are numeric data types." }, { "code": null, "e": 5721, "s": 5711, "text": "Estimator" }, { "code": null, "e": 6007, "s": 5721, "text": "After assembling our preprocessor, we can then add in the estimator, which is the machine learning algorithm you’d like to apply, to complete our preprocessing and training pipeline. Since in this case, the target variable is continuous, I’ll apply Random Forest Regression model here." }, { "code": null, "e": 6197, "s": 6007, "text": "from sklearn.ensemble import RandomForestRegressorpipeline = Pipeline(steps = [ ('preprocessor', preprocessor) ,('regressor',RandomForestRegressor()) ])" }, { "code": null, "e": 6322, "s": 6197, "text": "To create the model, similar to what we used to do with a machine learning algorithm, we use the ‘fit’ function of pipeline." }, { "code": null, "e": 6380, "s": 6322, "text": "rf_model = pipeline.fit(X_train, y_train)print (rf_model)" }, { "code": null, "e": 6426, "s": 6380, "text": "Use the normal methods to evaluate the model." }, { "code": null, "e": 6559, "s": 6426, "text": "from sklearn.metrics import r2_scorepredictions = rf_model.predict(X_test)print (r2_score(y_test, predictions))>> 0.7355156699663605" }, { "code": null, "e": 6730, "s": 6559, "text": "To maximise reproducibility, we‘d like to use this model repeatedly for our new incoming data. Let’s save the model by using ‘joblib’ package to save it as a pickle file." }, { "code": null, "e": 6783, "s": 6730, "text": "import joblibjoblib.dump(rf_model, './rf_model.pkl')" }, { "code": null, "e": 6914, "s": 6783, "text": "Now we can call this pipeline, which includes all sorts of data preprocessing we need and the training model, whenever we need it." }, { "code": null, "e": 7025, "s": 6914, "text": "# In other notebooks rf_model = joblib.load('PATH/TO/rf_model.pkl')new_prediction = rf_model.predict(new_data)" }, { "code": null, "e": 7364, "s": 7025, "text": "Before knowing scikit learn pipeline, I always had to redo the whole data preprocessing and transformation stuff whenever I wanted to apply the same model to different datasets. It was a really tedious process. I tried to write a function to do all of them, but the result wasn’t really satisfactory and didn’t save me a lot of workloads." }, { "code": null, "e": 7524, "s": 7364, "text": "Scikit learn pipeline really makes my workflows smoother and more flexible. 
For example, you can easily compare the performance of a number of algorithms like:" }, { "code": null, "e": 7919, "s": 7524, "text": "regressors = [\n    regressor_1(),\n    regressor_2(),\n    regressor_3(),\n    ...]\nfor regressor in regressors:\n    pipeline = Pipeline(steps = [\n        ('preprocessor', preprocessor),\n        ('regressor', regressor)])\n    model = pipeline.fit(X_train, y_train)\n    predictions = model.predict(X_test)\n    print(regressor)\n    print(f'Model r2 score: {r2_score(y_test, predictions)}')" }, { "code": null, "e": 8188, "s": 7919, "text": ", or adjust the preprocessing/transforming methods. For instance, use the 'median' value to fill missing values, use a different scaler for numeric features, change to one-hot encoding instead of ordinal encoding to handle categorical features, tune hyperparameters, etc." }, { "code": null, "e": 8664, "s": 8188, "text": "from sklearn.preprocessing import MinMaxScaler, OneHotEncoder\nnumeric_transformer = Pipeline(steps=[\n    ('imputer', SimpleImputer(strategy='median')),\n    ('scaler', MinMaxScaler())])\ncategorical_transformer = Pipeline(steps=[\n    ('imputer', SimpleImputer(strategy='constant')),\n    ('encoder', OneHotEncoder())])\npipeline = Pipeline(steps = [\n    ('preprocessor', preprocessor),\n    ('regressor', RandomForestRegressor(n_estimators=300, max_depth=10))])" }, { "code": null, "e": 8761, "s": 8664, "text": "All of the above adjustments can now be done as simply as changing a parameter in the functions." } ]
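The article mentions hyperparameter tuning but does not show it, so here is a minimal illustrative sketch of tuning a pipeline with scikit-learn's GridSearchCV. This is an editorial addition rather than part of the original write-up; it assumes the pipeline, X_train and y_train defined above, and the candidate values are arbitrary examples. A parameter of a pipeline step is addressed as 'stepname__parameter'.

from sklearn.model_selection import GridSearchCV

# the 'regressor__' prefix routes each parameter to the
# pipeline step that was named 'regressor'
param_grid = {
    'regressor__n_estimators': [100, 300],
    'regressor__max_depth': [5, 10]
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring='r2')
search.fit(X_train, y_train)
print(search.best_params_)
print(search.best_score_)

Because the preprocessing lives inside the pipeline, each cross-validation split refits the imputer and the scaler on its own training fold, so no information leaks from the validation fold.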
Add Index ID to DataFrame in R - GeeksforGeeks
13 Aug, 2021 In this article, we will discuss how an index ID can be added to dataframes in the R programming language.

The nrow() method in R is used to compute the number of rows in the dataframe that is specified as an argument of this method. The cbind() method in R is used to append a vector to a dataframe. The vector is appended to the dataframe in the order in which it is specified during the function call. To make the ID vector the leading column of the dataframe, the following syntax is used:

cbind(vec , data_frame)

The vector length should be equal to the number of rows in the data frame.

Example:

R

# declaring a data frame in R
data_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6,
                         row.names = c('I','II','III','IV','V','VI'))

print("Original Data Frame")
print(data_frame)

# number of rows in data frame
num_rows = nrow(data_frame)

# creating ID column vector
ID <- c(1:num_rows)

# binding id column to the data frame
data_frame1 <- cbind(ID , data_frame)

print("Modified Data Frame")
print(data_frame1)

Output

[1] "Original Data Frame"
    x1 x2 x3
I    2  a  6
II   3  b  6
III  4  c  6
IV   5  d  6
V    6  e  6
VI   7  f  6
[1] "Modified Data Frame"
    ID x1 x2 x3
I    1  2  a  6
II   2  3  b  6
III  3  4  c  6
IV   4  5  d  6
V    5  6  e  6
VI   6  7  f  6

In order to lead a dataframe with the index ID column, we can also reassign the row names of the dataframe to the increasing integer values starting from 1 up to the number of rows in the data frame. The rownames(df) method is used to assign the row names. All the changes are reflected in the original dataframe.

Example:

R

# declaring a data frame in R
data_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6,
                         row.names = c('I','II','III','IV','V','VI'))

print("Original Data Frame")
print(data_frame)

# number of rows in data frame
num_rows = nrow(data_frame)

# changing row names of the data frame
rownames(data_frame) <- c(1:num_rows)

print("Modified Data Frame")
print(data_frame)

Output

[1] "Original Data Frame"
    x1 x2 x3
I    2  a  6
II   3  b  6
III  4  c  6
IV   5  d  6
V    6  e  6
VI   7  f  6
[1] "Modified Data Frame"
  x1 x2 x3
1  2  a  6
2  3  b  6
3  4  c  6
4  5  d  6
5  6  e  6
6  7  f  6

The seq.int() method in R is used to generate an integer sequence from 1 to the number x specified as an argument of the function. The row names are retained. The newly added column is appended at the end of the data frame.

Syntax: seq.int(x)

Example:

R

# declaring a data frame in R
data_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6,
                         row.names = c('I','II','III','IV','V','VI'))

print("Original Data Frame")
print(data_frame)

# number of rows in data frame
num_rows = nrow(data_frame)

# creating ID column vector
data_frame$ID <- seq.int(num_rows)

print("Modified Data Frame")
print(data_frame)

Output

[1] "Original Data Frame"
    x1 x2 x3
I    2  a  6
II   3  b  6
III  4  c  6
IV   5  d  6
V    6  e  6
VI   7  f  6
[1] "Modified Data Frame"
    x1 x2 x3 ID
I    2  a  6  1
II   3  b  6  2
III  4  c  6  3
IV   5  d  6  4
V    6  e  6  5
VI   7  f  6  6

The mutate method of the dplyr package can be used to add, remove, and modify columns of a data frame. In order to add a new column, the following variant of the mutate method can be used:

Syntax: mutate(new-col-name = logic)

where logic specifies how the values of the new column are computed.

Here, the row_number() method is used to provide an increasing sequence of integers as row numbers. The newly added column is appended at the end of the existing data object.
Example:

R

library(dplyr)

data_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6)

print("Original Data Frame")
print(data_frame)

data_frame <- data_frame %>% mutate(ID = row_number())

print("Modified Data Frame")
print(data_frame)

Output

[1] "Original Data Frame"
  x1 x2 x3
1  2  a  6
2  3  b  6
3  4  c  6
4  5  d  6
5  6  e  6
6  7  f  6
[1] "Modified Data Frame"
  x1 x2 x3 ID
1  2  a  6  1
2  3  b  6  2
3  4  c  6  3
4  5  d  6  4
5  6  e  6  5
6  7  f  6  6
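If the goal is instead to have the ID as the first column of the dataframe, the same approach extends with one extra argument. This snippet is an illustrative addition, not part of the original article, and the .before argument assumes dplyr version 1.0.0 or later:

# place the generated ID column before the first existing column
data_frame <- data_frame %>% mutate(ID = row_number(), .before = 1)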
[ { "code": null, "e": 25162, "s": 25134, "text": "\n13 Aug, 2021" }, { "code": null, "e": 25266, "s": 25162, "text": "In this article, we will discuss how index ID can be added to dataframes in the R programming language." }, { "code": null, "e": 25678, "s": 25266, "text": "The nrow() method in R Programming language is used to compute the number of rows in the dataframe that is specified as an argument of this method. cbind() method in R language is used to append a vector to the dataframe. The vector is appended to the dataframe in the order in which it is specified during the function call. In order, to lead the dataframe with an id vector, the following syntax will be used:" }, { "code": null, "e": 25702, "s": 25678, "text": "cbind(vec , data_frame)" }, { "code": null, "e": 25783, "s": 25702, "text": "The vector length should be equivalent to the number of rows in the data frame. " }, { "code": null, "e": 25792, "s": 25783, "text": "Example:" }, { "code": null, "e": 25794, "s": 25792, "text": "R" }, { "code": "# declaring a data frame in Rdata_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6, row.names = c('I','II','III','IV','V','VI')) print(\"Original Data Frame\") print(data_frame) # number of rows in data framenum_rows = nrow(data_frame) # creating ID column vectorID <- c(1:num_rows) # binding id column to the data framedata_frame1 <- cbind(ID , data_frame) print(\"Modified Data Frame\")print (data_frame1)", "e": 26289, "s": 25794, "text": null }, { "code": null, "e": 26296, "s": 26289, "text": "Output" }, { "code": null, "e": 26549, "s": 26296, "text": "[1] \"Original Data Frame\"\n x1 x2 x3\nI 2 a 6\nII 3 b 6\nIII 4 c 6\nIV 5 d 6\nV 6 e 6\nVI 7 f 6\n[1] \"Modified Data Frame\"\n ID x1 x2 x3\nI 1 2 a 6\nII 2 3 b 6\nIII 3 4 c 6\nIV 4 5 d 6\nV 5 6 e 6\nVI 6 7 f 6" }, { "code": null, "e": 26869, "s": 26549, "text": "In order to lead a dataframe with the index ID column, we can also reassign the row names of the dataframe to reflect the increasing integer values starting from 1 to the number of rows in the data frame. The rownames(df) method is used to assign the row names. All the changes are reflected in the original dataframe. " }, { "code": null, "e": 26878, "s": 26869, "text": "Example:" }, { "code": null, "e": 26880, "s": 26878, "text": "R" }, { "code": "# declaring a data frame in Rdata_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6, row.names = c('I','II','III','IV','V','VI') ) print(\"Original Data Frame\") print(data_frame) # number of rows in data framenum_rows = nrow(data_frame) # changing row names of the data framerownames(data_frame) <- c(1:num_rows) print(\"Modified Data Frame\")print (data_frame)", "e": 27345, "s": 26880, "text": null }, { "code": null, "e": 27352, "s": 27345, "text": "Output" }, { "code": null, "e": 27570, "s": 27352, "text": "[1] \"Original Data Frame\"\n x1 x2 x3\nI 2 a 6\nII 3 b 6\nIII 4 c 6\nIV 5 d 6\nV 6 e 6\nVI 7 f 6\n[1] \"Modified Data Frame\"\n x1 x2 x3\n1 2 a 6\n2 3 b 6\n3 4 c 6\n4 5 d 6\n5 6 e 6\n6 7 f 6" }, { "code": null, "e": 27801, "s": 27570, "text": "seq.int() method in R is used to generate integer sequences beginning from 1 to the number x specified as an argument of the function. The row names have pertained. The newly added column is appended at the end of the data frame. 
" }, { "code": null, "e": 27809, "s": 27801, "text": "Syntax:" }, { "code": null, "e": 27820, "s": 27809, "text": "seq.int(x)" }, { "code": null, "e": 27829, "s": 27820, "text": "Example:" }, { "code": null, "e": 27831, "s": 27829, "text": "R" }, { "code": "# declaring a data frame in Rdata_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6, row.names = c('I','II','III','IV','V','VI') ) print(\"Original Data Frame\") print(data_frame) # number of rows in data framenum_rows = nrow(data_frame) # creating ID column vectordata_frame$ID <- seq.int(num_rows) print(\"Modified Data Frame\")print (data_frame)", "e": 28282, "s": 27831, "text": null }, { "code": null, "e": 28289, "s": 28282, "text": "Output" }, { "code": null, "e": 28542, "s": 28289, "text": "[1] \"Original Data Frame\"\n x1 x2 x3\nI 2 a 6\nII 3 b 6\nIII 4 c 6\nIV 5 d 6\nV 6 e 6\nVI 7 f 6\n[1] \"Modified Data Frame\"\n x1 x2 x3 ID\nI 2 a 6 1\nII 3 b 6 2\nIII 4 c 6 3\nIV 5 d 6 4\nV 6 e 6 5\nVI 7 f 6 6" }, { "code": null, "e": 28750, "s": 28542, "text": "The mutate method of the dplyr package can be used to add, remove and modify more data to the included data frame object. In order to add a new column, the following variant of mutating method can be used : " }, { "code": null, "e": 28758, "s": 28750, "text": "Syntax:" }, { "code": null, "e": 28787, "s": 28758, "text": "mutate(new-col-name = logic)" }, { "code": null, "e": 28866, "s": 28787, "text": "where the logic specifies the condition upon which data addition is based upon" }, { "code": null, "e": 29047, "s": 28866, "text": "Here, the row_number() method is used to provide an increasing sequence of integers to store row numbers. The newly added column is appended at the end of the existing data object." }, { "code": null, "e": 29056, "s": 29047, "text": "Example:" }, { "code": null, "e": 29058, "s": 29056, "text": "R" }, { "code": "library(dplyr) data_frame <- data.frame(x1 = 2:7, x2 = letters[1:6], x3 = 6 ) print(\"Original Data Frame\") print(data_frame) data_frame <- data_frame %>% mutate(ID = row_number()) print(\"Modified Data Frame\") print(data_frame)", "e": 29403, "s": 29058, "text": null }, { "code": null, "e": 29410, "s": 29403, "text": "Output" }, { "code": null, "e": 29654, "s": 29410, "text": "[1] \"Original Data Frame\" \n x1 x2 x3 \n1 2 a 6 \n2 3 b 6 \n3 4 c 6 \n4 5 d 6 \n5 6 e 6 \n6 7 f 6 \n[1] \"Modified Data Frame\" \n x1 x2 x3 ID \n1 2 a 6 1 \n2 3 b 6 2 \n3 4 c 6 3 \n4 5 d 6 4 \n5 6 e 6 5 \n6 7 f 6 6" }, { "code": null, "e": 29673, "s": 29654, "text": "surindertarika1234" }, { "code": null, "e": 29682, "s": 29673, "text": "sweetyty" }, { "code": null, "e": 29689, "s": 29682, "text": "Picked" }, { "code": null, "e": 29710, "s": 29689, "text": "R DataFrame-Programs" }, { "code": null, "e": 29722, "s": 29710, "text": "R-DataFrame" }, { "code": null, "e": 29733, "s": 29722, "text": "R Language" }, { "code": null, "e": 29744, "s": 29733, "text": "R Programs" }, { "code": null, "e": 29842, "s": 29744, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29851, "s": 29842, "text": "Comments" }, { "code": null, "e": 29864, "s": 29851, "text": "Old Comments" }, { "code": null, "e": 29916, "s": 29864, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 29940, "s": 29916, "text": "Data Visualization in R" }, { "code": null, "e": 29978, "s": 29940, "text": "How to Change Axis Scales in R Plots?" 
}, { "code": null, "e": 30013, "s": 29978, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 30050, "s": 30013, "text": "Logistic Regression in R Programming" }, { "code": null, "e": 30108, "s": 30050, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 30151, "s": 30108, "text": "Replace Specific Characters in String in R" }, { "code": null, "e": 30200, "s": 30151, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 30250, "s": 30200, "text": "How to filter R dataframe by multiple conditions?" } ]
Dart Programming - while Loop
The while loop executes the instructions each time the condition specified evaluates to true. In other words, the loop evaluates the condition before the block of code is executed.

The following illustration shows the flowchart of the while loop −

Following is the syntax for the while loop.

while (expression) {
   Statement(s) to be executed if expression is true
}

void main() {
   var num = 5;
   var factorial = 1;

   while(num >= 1) {
      factorial = factorial * num;
      num--;
   }
   print("The factorial is ${factorial}");
}

The above code uses a while loop to calculate the factorial of the value in the variable num.

The following output is displayed on successful execution of the code.

The factorial is 120
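As a small illustrative addition (not part of the original lesson): because the condition is checked before the body runs, a while loop whose condition is false at the start never executes its body even once.

void main() {
   var num = 0;

   // the condition is false on the very first check,
   // so the loop body is skipped entirely
   while (num >= 1) {
      print("This line is never reached");
      num--;
   }
   print("Done, num is still ${num}");
}

Running this prints only "Done, num is still 0".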
[ { "code": null, "e": 2706, "s": 2525, "text": "The while loop executes the instructions each time the condition specified evaluates to true. In other words, the loop evaluates the condition before the block of code is executed." }, { "code": null, "e": 2773, "s": 2706, "text": "The following illustration shows the flowchart of the while loop −" }, { "code": null, "e": 2818, "s": 2773, "text": "Following is the syntax for the while loop. " }, { "code": null, "e": 2897, "s": 2818, "text": "while (expression) {\n Statement(s) to be executed if expression is true \n}\n" }, { "code": null, "e": 3082, "s": 2897, "text": "void main() { \n var num = 5; \n var factorial = 1; \n \n while(num >=1) { \n factorial = factorial * num; \n num--; \n } \n print(\"The factorial is ${factorial}\"); \n} " }, { "code": null, "e": 3176, "s": 3082, "text": "The above code uses a while loop to calculate the factorial of the value in the variable num." }, { "code": null, "e": 3247, "s": 3176, "text": "The following output is displayed on successful execution of the code." }, { "code": null, "e": 3270, "s": 3247, "text": "The factorial is 120 \n" }, { "code": null, "e": 3305, "s": 3270, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3325, "s": 3305, "text": " Sriyank Siddhartha" }, { "code": null, "e": 3358, "s": 3325, "text": "\n 34 Lectures \n 4 hours \n" }, { "code": null, "e": 3378, "s": 3358, "text": " Sriyank Siddhartha" }, { "code": null, "e": 3411, "s": 3378, "text": "\n 69 Lectures \n 4 hours \n" }, { "code": null, "e": 3428, "s": 3411, "text": " Frahaan Hussain" }, { "code": null, "e": 3463, "s": 3428, "text": "\n 117 Lectures \n 10 hours \n" }, { "code": null, "e": 3480, "s": 3463, "text": " Frahaan Hussain" }, { "code": null, "e": 3515, "s": 3480, "text": "\n 22 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3535, "s": 3515, "text": " Pranjal Srivastava" }, { "code": null, "e": 3568, "s": 3535, "text": "\n 34 Lectures \n 3 hours \n" }, { "code": null, "e": 3588, "s": 3568, "text": " Pranjal Srivastava" }, { "code": null, "e": 3595, "s": 3588, "text": " Print" }, { "code": null, "e": 3606, "s": 3595, "text": " Add Notes" } ]
How to create table rows & columns in HTML?
To create table rows and columns in HTML, use the <table> tag. A table consists of rows and columns, which can be set using one or more <tr>, <th>, and <td> elements. A table row is defined by the <tr> tag, a header cell by the <th> tag, and a standard data cell by the <td> tag.

Just keep in mind, a lot of table attributes such as align, bgcolor, border, cellpadding, cellspacing aren’t supported by HTML5. Do not use them. Use the style attribute to add CSS properties for adding a border to the table. We’re also using the <style> tag to style the table borders.

You can try to run the following code to create table rows and columns in HTML.

<!DOCTYPE html>
<html>
   <head>
      <style>
         table, th, td {
            border: 1px solid black;
         }
      </style>
   </head>

   <body>
      <h2>Understanding Tables</h2>
      <table>
         <tr>
            <th>Header: This is row1 column1</th>
            <th>Header: This is row1 column2</th>
         </tr>
         <tr>
            <td>This is row2 column1</td>
            <td>This is row2 column2</td>
         </tr>
      </table>
   </body>
</html>
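One illustrative tweak, added here rather than taken from the original example: with per-cell borders as above, adjacent borders render as double lines. The standard CSS border-collapse property, set on the table, merges them into single lines.

<style>
   table {
      /* merge adjacent cell borders into single lines */
      border-collapse: collapse;
   }
   table, th, td {
      border: 1px solid black;
   }
</style>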
[ { "code": null, "e": 1353, "s": 1062, "text": "To create table rows and columns in HTML, use the <table> tag. A table consist of rows and columns, which can be set using one or more <tr>, <th>, and <td> elements. A table row is defined by the <tr> tag. For table rows and columns, <tr> tag and <td> tag is used respectively. The <td> tag" }, { "code": null, "e": 1640, "s": 1353, "text": "Just keep in mind, a lot of table attributes such as align, bgcolor, border, cellpadding, cellspacing aren’t supported by HTML5. Do not use them. Use the style attribute to add CSS properties for adding a border to the table. We’re also using the <style> tag to style the table borders." }, { "code": null, "e": 1720, "s": 1640, "text": "You can try to run the following code to create table rows and columns in HTML." }, { "code": null, "e": 2203, "s": 1720, "text": "<!DOCTYPE html>\n<html>\n <head>\n <style>\n table, th, td {\n border: 1px solid black;\n }\n </style>\n </head>\n\n <body>\n <h2>Understanding Tables</h2>\n <table>\n <tr>\n <th>Header: This is row1 column1</th>\n <th>Header: This is row1 column2</th>\n </tr>\n <tr>\n <td>This is row2 column1</td>\n <td>This is row2 column2</td>\n </tr>\n </table>\n </body>\n</html>" } ]
How to parse HTML in Android using Kotlin?
This example demonstrates how to parse HTML in Android using Kotlin.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:padding="8dp"
   tools:context=".MainActivity">
   <Button
      android:id="@+id/btnParseHTML"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_centerHorizontal="true"
      android:layout_marginTop="30dp"
      android:text="Get website" />
   <TextView
      android:textColor="@android:color/background_dark"
      android:id="@+id/textView"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_below="@id/btnParseHTML"
      android:layout_centerHorizontal="true"
      android:text="Result"
      android:textSize="12sp"
      android:textStyle="bold" />
</RelativeLayout>

Step 3 − Add the given dependency to the build.gradle (Module: app)

implementation 'org.jsoup:jsoup:1.11.2'

Step 4 − Add the following code to src/MainActivity.kt

import android.os.Bundle
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
import org.jsoup.Jsoup
import org.jsoup.nodes.Document
import org.jsoup.select.Elements
import java.io.IOException

class MainActivity : AppCompatActivity() {
   lateinit var button: Button
   lateinit var textView: TextView
   override fun onCreate(savedInstanceState: Bundle?) {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_main)
      title = "KotlinApp"
      textView = findViewById(R.id.textView)
      button = findViewById(R.id.btnParseHTML)
      button.setOnClickListener {
         getHtmlFromWeb()
      }
   }
   private fun getHtmlFromWeb() {
      Thread(Runnable {
         val stringBuilder = StringBuilder()
         try {
            val doc: Document = Jsoup.connect("http://www.tutorialspoint.com/").get()
            val title: String = doc.title()
            val links: Elements = doc.select("a[href]")
            stringBuilder.append(title).append("\n")
            for (link in links) {
               stringBuilder.append("\n").append("Link : ").append(link.attr("href"))
                  .append("\n").append("Text : ").append(link.text())
            }
         } catch (e: IOException) {
            stringBuilder.append("Error : ").append(e.message).append("\n")
         }
         runOnUiThread { textView.text = stringBuilder.toString() }
      }).start()
   }
}

Step 5 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.q11">
   <uses-permission android:name="android.permission.INTERNET"/>
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
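As an illustrative aside (not part of the original example), jsoup can also parse HTML from a string you already hold in memory, with no network call, using Jsoup.parse(). The markup and the CSS class below are made-up sample values.

val html = "<html><body><p class='intro'>Hello, jsoup!</p></body></html>"
// parse the in-memory string instead of fetching a URL
val doc = Jsoup.parse(html)
// select the paragraph by its CSS class and read its text
println(doc.select("p.intro").text()) // prints: Hello, jsoup!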
[ { "code": null, "e": 1131, "s": 1062, "text": "This example demonstrates how to parse HTML in Android using Kotlin." }, { "code": null, "e": 1260, "s": 1131, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1325, "s": 1260, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2268, "s": 1325, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"8dp\"\n tools:context=\".MainActivity\">\n <Button\n android:id=\"@+id/btnParseHTML\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"30dp\"\n android:text=\"Get website\" />\n <TextView\n android:textColor=\"@android:color/background_dark\"\n android:id=\"@+id/textView\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_below=\"@id/btnParseHTML\"\n android:layout_centerHorizontal=\"true\"\n android:text=\"Result\"\n android:textSize=\"12sp\"\n android:textStyle=\"bold\" />\n</RelativeLayout>" }, { "code": null, "e": 2336, "s": 2268, "text": "Step 3 − Add the given dependency in the build.gradle (Module: app)" }, { "code": null, "e": 2376, "s": 2336, "text": "implementation 'org.jsoup:jsoup:1.11.2'" }, { "code": null, "e": 2431, "s": 2376, "text": "Step 4 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 3893, "s": 2431, "text": "import android.os.Bundle\nimport android.widget.Button\nimport android.widget.TextView\nimport androidx.appcompat.app.AppCompatActivity\nimport org.jsoup.Jsoup\nimport org.jsoup.nodes.Document\nimport org.jsoup.select.Elements\nimport java.io.IOException\nclass MainActivity : AppCompatActivity() {\n lateinit var button: Button\n lateinit var textView: TextView\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n textView = findViewById(R.id.textView)\n button = findViewById(R.id.btnParseHTML)\n button.setOnClickListener {\n getHtmlFromWeb()\n }\n }\n private fun getHtmlFromWeb() {\n Thread(Runnable {\n val stringBuilder = StringBuilder()\n try {\n val doc: Document = Jsoup.connect(\"http://www.tutorialspoint.com/\").get()\n val title: String = doc.title()\n val links: Elements = doc.select(\"a[href]\")\n stringBuilder.append(title).append(\"\\n\")\n for (link in links) {\n stringBuilder.append(\"\\n\").append(\"Link :\n \").append(link.attr(\"href\")).append(\"\\n\").append(\"Text : \").append(link.text())\n }\n } catch (e: IOException) {\n stringBuilder.append(\"Error : \").append(e.message).append(\"\\n\")\n }\n runOnUiThread { textView.text = stringBuilder.toString() }\n }).start()\n }\n}" }, { "code": null, "e": 3948, "s": 3893, "text": "Step 5 − Add the following code to androidManifest.xml" }, { "code": null, "e": 4680, "s": 3948, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.q11\">\n <uses-permission android:name=\"android.permission.INTERNET\"/>\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 5028, "s": 4680, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen" } ]
SkipWhile method in C#
SkipWhile skips elements from the start of a sequence for as long as the specified condition holds; it stops skipping at the first element for which the condition is false, and returns that element and everything after it.

For example, use the following predicate if you want to skip the leading even elements −

ele => ele % 2 == 0

The following is an example wherein the leading even element is skipped and the remaining elements are displayed −

using System.IO;
using System;
using System.Linq;
public class Demo {
   public static void Main() {
      int[] arr = { 20, 35, 55 };
      Console.WriteLine("Initial array...");
      foreach (int value in arr) {
         Console.WriteLine(value);
      }
      // skipping the leading even elements
      var res = arr.SkipWhile(ele => ele % 2 == 0);
      Console.WriteLine("New array after skipping the leading even elements...");
      foreach (int val in res) {
         Console.WriteLine(val);
      }
   }
}

Initial array...
20
35
55
New array after skipping the leading even elements...
35
55
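To make the difference from plain filtering explicit, here is a small illustrative sketch, added by the editor rather than taken from the original article: once SkipWhile reaches an element that fails the predicate, later matching elements are no longer skipped, unlike Where.

int[] arr = { 20, 22, 35, 40, 55 };
// stops skipping at 35, so the even 40 is kept
var skipped = arr.SkipWhile(ele => ele % 2 == 0); // 35, 40, 55
// tests every element, so the even 40 is removed as well
var filtered = arr.Where(ele => ele % 2 != 0); // 35, 55
Console.WriteLine(string.Join(", ", skipped));
Console.WriteLine(string.Join(", ", filtered));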
[ { "code": null, "e": 1118, "s": 1062, "text": "SkipWhile skips an element when a condition is matched." }, { "code": null, "e": 1189, "s": 1118, "text": "For example, use the following if you want to skip all even elements −" }, { "code": null, "e": 1208, "s": 1189, "text": "ele => ele %2 == 0" }, { "code": null, "e": 1320, "s": 1208, "text": "The following is an example wherein all the even elements are skipped and only the odd elements are displayed −" }, { "code": null, "e": 1331, "s": 1320, "text": " Live Demo" }, { "code": null, "e": 1824, "s": 1331, "text": "using System.IO;\nusing System;\nusing System.Linq;\npublic class Demo {\n public static void Main() {\n int[] arr = { 20, 35, 55 };\n Console.WriteLine(\"Initial array...\");\n foreach (int value in arr) {\n Console.WriteLine(value);\n }\n // skipping even elements\n var res = arr.SkipWhile(ele => ele % 2 == 0);\n Console.WriteLine(\"New array after skipping even elements...\");\n foreach (int val in res) {\n Console.WriteLine(val);\n }\n }\n}" }, { "code": null, "e": 1898, "s": 1824, "text": "Initial array...\n20\n35\n55\nNew array after skipping even elements...\n35\n55" } ]
Java Connection getMetaData() method with example
Generally, data about data is known as metadata. The DatabaseMetaData interface provides methods to get information about the database you have connected with, such as the database name, database driver version, maximum column length, etc.

The getMetaData() method of the Connection interface retrieves and returns the DatabaseMetaData object. This contains information about the database you have connected to. You can get information about the database, such as the name of the database, version, driver name, user name, and URL, by invoking the methods of the DatabaseMetaData interface using the obtained object.

This method returns the DatabaseMetaData object, which holds information about the underlying database.

To get the DatabaseMetaData object for the underlying database:

Register the driver using the registerDriver() method of the DriverManager class as −

//Registering the Driver
DriverManager.registerDriver(new com.mysql.jdbc.Driver());

Get the connection using the getConnection() method of the DriverManager class as −

//Getting the connection
String url = "jdbc:mysql://localhost/mydatabase";
Connection con = DriverManager.getConnection(url, "root", "password");

Get the metadata object using the getMetaData() method as −

DatabaseMetaData dbMetadata = con.getMetaData();

The following JDBC program establishes a connection with the database and retrieves information about the underlying database, such as the name of the database, driver name, URL, etc.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.SQLException;
public class Connection_getMetaData {
   public static void main(String args[]) throws SQLException {
      //Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      //Getting the connection
      String url = "jdbc:mysql://localhost/mydatabase";
      Connection con = DriverManager.getConnection(url, "root", "password");
      System.out.println("Connection established......");
      //Creating the DatabaseMetaData object
      DatabaseMetaData dbMetadata = con.getMetaData();
      //invoke the supportsBatchUpdates() method.
      boolean bool = dbMetadata.supportsBatchUpdates();
      if(bool) {
         System.out.println("Underlying database supports batch updates");
      } else {
         System.out.println("Underlying database doesn’t support batch updates");
      }
      //Retrieving the driver name
      System.out.println("Driver name: "+dbMetadata.getDriverName());
      //Retrieving the driver version
      System.out.println("Database version: "+dbMetadata.getDriverVersion());
      //Retrieving the user name
      System.out.println("User name: "+dbMetadata.getUserName());
      //Retrieving the URL
      System.out.println("URL for this database: "+dbMetadata.getURL());
   }
}

Connection established......
Underlying database supports batch updates
Driver name: MySQL-AB JDBC Driver
Database version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} )
User name: root@localhost
URL for this database: jdbc:mysql://localhost/mydatabase
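As an illustrative extension (an addition, not part of the original program), the same DatabaseMetaData object can also list the tables of the database through its getTables() method, which returns a ResultSet; this assumes java.sql.ResultSet is imported as well.

//Retrieving the names of all tables in the current database
ResultSet tables = dbMetadata.getTables(null, null, "%", new String[] {"TABLE"});
while (tables.next()) {
   //each row of the result set describes one table
   System.out.println("Table: " + tables.getString("TABLE_NAME"));
}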
[ { "code": null, "e": 1296, "s": 1062, "text": "Generally, Data about data is known as metadata. The DatabaseMetaData interface provides methods to get information about the database you have connected with like, database name, database driver version, maximum column length etc..." }, { "code": null, "e": 1674, "s": 1296, "text": "The getMetaData() method of the Connection interface retrieves and returns the DatabaseMetaData object. This contains information about the database you have connected to. You can get information about the database such as name of the database, version, driver name, user name and, url etc... by invoking the methods of the DatabaseMetaData interface using the obtained object." }, { "code": null, "e": 1777, "s": 1674, "text": "This method returns the DatabaseMetaData object which holds information about the underlying database." }, { "code": null, "e": 1841, "s": 1777, "text": "To get the DatabaseMetaData object for the underlying database." }, { "code": null, "e": 1927, "s": 1841, "text": "Register the driver using the registerDriver() method of the DriverManager class as −" }, { "code": null, "e": 2011, "s": 1927, "text": "//Registering the Driver\nDriverManager.registerDriver(new com.mysql.jdbc.Driver());" }, { "code": null, "e": 2095, "s": 2011, "text": "Get the connection using the getConnection() method of the DriverManager class as −" }, { "code": null, "e": 2241, "s": 2095, "text": "//Getting the connection\nString url = \"jdbc:mysql://localhost/mydatabase\";\nConnection con = DriverManager.getConnection(url, \"root\", \"password\");" }, { "code": null, "e": 2301, "s": 2241, "text": "Get the metadata object using the getMetaData() method as −" }, { "code": null, "e": 2350, "s": 2301, "text": "DatabaseMetaData dbMetadata = con.getMetaData();" }, { "code": null, "e": 2526, "s": 2350, "text": "Following JDBC program establishes a connection with the database and retrieves information about the underlying database such as name of the database, driver name, URL etc..." 
}, { "code": null, "e": 3899, "s": 2526, "text": "import java.sql.Connection;\nimport java.sql.DatabaseMetaData;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\npublic class Connection_getMetaData {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Getting the connection\n String url = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(url, \"root\", \"password\");\n System.out.println(\"Connection established......\");\n //Creating the DatabaseMetaData object\n DatabaseMetaData dbMetadata = con.getMetaData();\n //invoke the supportsBatchUpdates() method.\n boolean bool = dbMetadata.supportsBatchUpdates();\n if(bool) {\n System.out.println(\"Underlying database supports batch updates\");\n } else {\n System.out.println(\"Underlying database doesn’t support batch updates\");\n }\n //Retrieving the driver name\n System.out.println(\"Driver name: \"+dbMetadata.getDriverName());\n //Retrieving the driver version\n System.out.println(\"Database version: \"+dbMetadata.getDriverVersion());\n //Retrieving the user name\n System.out.println(\"User name: \"+dbMetadata.getUserName());\n //Retrieving the URL\n System.out.println(\"URL for this database: \"+dbMetadata.getURL());\n }\n}" }, { "code": null, "e": 4167, "s": 3899, "text": "Connection established......\nUnderlying database supports batch updates\nDriver name: MySQL-AB JDBC Driver\nDatabase version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} )\nUser name: root@localhost\nURL for this database: jdbc:mysql://localhost/mydatabase" } ]
Flutter - Liquid Swipe Animation - GeeksforGeeks
15 Jan, 2021 Liquid Swipe animation is used to slide the page like water, showing different designs and patterns on the screen. It creates a floating state. Liquid Swipe animation is a significantly trending design technique. Motion can help keep users interested in your UI design longer and more motivated to interact with content. This method gives the app a smooth look in a new way.

Follow the below steps to implement the Liquid Swipe Animation:

Step 1: Create a Flutter App using the command:

flutter create liquid_swipe

Note: You can give any name to your app.

Step 2: Create the files main.dart and home.dart to write the code.

Step 3: Import the liquid_swipe dependency into the main.dart file using the below code:

import 'package:liquid_swipe/liquid_swipe.dart';

Step 4: Add the dependency to your pubspec.yaml file as shown below:

Add dependencies:

liquid_swipe: ^1.5.0

Get the package from Pub:

flutter packages get

Step 5: In the LiquidSwipe() constructor we need to set pages, fullTransitionValue, waveType, positionSlideIcon, and enableSlideIcon, which are attributes of LiquidSwipe, as shown below:

body: LiquidSwipe(
   pages: page,
   enableLoop: true,
   fullTransitionValue: 300,
   enableSlideIcon: true,
   waveType: WaveType.liquidReveal,
   positionSlideIcon: 0.5,
),

In the main.dart file we have a main() function which calls runApp(), taking any widget as an argument to create the layout. We have the home as MyliquidSwipe(), a widget defined as shown below:

Dart

import 'package:flutter/material.dart';
import './liquid_swipe.dart';

void main() {
   runApp(
      MaterialApp(
         title: "My Ani",
         home: MyliquidSwipe(),
      ),
   );
}

Step 6: We will add images in the assets folder. All the images you need on-screen you can add there. Activate assets in the pubspec.yaml file as shown below:

assets:
 - assets/

Complete Source Code:

Dart

import 'package:flutter/material.dart';
import 'package:liquid_swipe/liquid_swipe.dart';

class MyliquidSwipe extends StatelessWidget {
   final page = [
      Container(
         color: Colors.brown,
         child: Padding(
            padding: const EdgeInsets.all(100.0),
            child: Center(
               child: Column(
                  children: <Widget>[
                     Text("Welcome To GeeksforGeeks",
                        style: TextStyle(
                           fontSize: 30,
                           color: Colors.green[600],
                        ),
                     ),
                  ]
               ),
            ),
         ),
      ),
      Container(
         color: Colors.yellow[100],
         child: Padding(
            padding: const EdgeInsets.all(120.0),
            child: Center(
               child: Column(
                  children: <Widget>[
                     Image.asset("assets/save.png"),
                     Text("",
                        style: TextStyle(
                           fontSize: 20,
                           color: Colors.green,
                        ),
                     )
                  ]
               ),
            ),
         ),
      ),
      Container(
         color: Colors.blue[100],
         child: Padding(
            padding: const EdgeInsets.all(100.0),
            child: Center(
               child: Column(
                  children: <Widget>[
                     Text(" GeeksforGeeks A Computer Science portal for geeks",
                        style: TextStyle(
                           fontSize: 30,
                           color: Colors.green[600],
                        ),
                     ),
                  ]
               ),
            ),
         )
      )
   ];

   @override
   Widget build(BuildContext context) {
      return Scaffold(
         body: LiquidSwipe(
            pages: page,
            enableSlideIcon: true,
         ),
      );
   }
}

Output:
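A small illustrative variation, added editorially rather than taken from the original article: since the pages parameter is just a List<Widget>, the pages can also be generated from data instead of being written out one by one. The colors below are arbitrary sample values.

// build one plain page per color
final colors = [Colors.brown, Colors.orange, Colors.teal];
final pages = colors.map((c) => Container(color: c)).toList();

// then, inside build():
// body: LiquidSwipe(pages: pages),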
[ { "code": null, "e": 23930, "s": 23902, "text": "\n15 Jan, 2021" }, { "code": null, "e": 24319, "s": 23930, "text": "Liquid Swipe animation is used to slide the page like water which show different design and pattern on the screen. It creates a floating state. Liquid Swipe animation is a significantly trending design procedure. Movement can help keep clients inspired by your UI design longer and more motivated to collaborate with content. This method provides the app with a smooth look in a new way. " }, { "code": null, "e": 24383, "s": 24319, "text": "Follow the below steps to implement the Liquid Swipe Animation:" }, { "code": null, "e": 24431, "s": 24383, "text": "Step 1: Create a Flutter App using the command:" }, { "code": null, "e": 24459, "s": 24431, "text": "flutter create liquid_swipe" }, { "code": null, "e": 24500, "s": 24459, "text": "Note: You can give any name to your app." }, { "code": null, "e": 24564, "s": 24500, "text": "Step 2: Create a file in main.dart and home.dart to write code." }, { "code": null, "e": 24653, "s": 24564, "text": "Step 3: Import the liquid_swipe dependency into the main.dart file using the below code:" }, { "code": null, "e": 24702, "s": 24653, "text": "import 'package:liquid_swipe/liquid_swipe.dart';" }, { "code": null, "e": 24771, "s": 24702, "text": "Step 4: Add the dependency to your pubspec.yaml file as shown below:" }, { "code": null, "e": 24790, "s": 24771, "text": " Add dependencies:" }, { "code": null, "e": 24811, "s": 24790, "text": "liquid_swipe: ^1.5.0" }, { "code": null, "e": 24837, "s": 24811, "text": "Get the package from Pub:" }, { "code": null, "e": 24858, "s": 24837, "text": "flutter packages get" }, { "code": null, "e": 25051, "s": 24858, "text": "Step 5: In the LiquidSwipe() method we need to add pages, fullTransitionValue, waveType, positionSlideIcon, enableSlideIcon which are the attributes of the LiquidSwipe() method as shown below:" }, { "code": null, "e": 25217, "s": 25051, "text": "body:LiquidSwipe(\n pages: page,\n enableLoop: true,\n fullTransitionValue: 300,\n enableSlideIcon: true,\n waveType: WaveType.liquidReveal,\n positionSlideIcon: 0.5,\n)," }, { "code": null, "e": 25437, "s": 25217, "text": "In the main.dart file we have a main() function which calls runApp() by taking any widget as an argument to create the layout. We have the home as MyliquidSwipe() which is a stateful class(Mutable class) as shown below:" }, { "code": null, "e": 25442, "s": 25437, "text": "Dart" }, { "code": "import 'package:flutter/material.dart';import './liquid_swipe.dart'; void main() { runApp( MaterialApp( title: \"My Ani\", home: MyliquidSwipe(), ), ); }", "e": 25654, "s": 25442, "text": null }, { "code": null, "e": 25810, "s": 25654, "text": "Step 6: We will add images in the assets folder. All the images you need on-screen you can add there. 
Activate assets in pubspec .yaml file as shown below:" }, { "code": null, "e": 25829, "s": 25810, "text": "assets:\n - assets/" }, { "code": null, "e": 25851, "s": 25829, "text": "Complete Source Code:" }, { "code": null, "e": 25856, "s": 25851, "text": "Dart" }, { "code": "import 'package:flutter/material.dart';import 'package:liquid_swipe/liquid_swipe.dart';class MyliquidSwipe extends StatelessWidget { final page = [ Container( color:Colors.brown, child: Padding( padding: const EdgeInsets.all(100.0), child: Center( child: Column( children:<Widget>[ Text(\"Welcome To GeeksforGeeks\",style:TextStyle( fontSize: 30, color:Colors.green[600], ), ), ] ), ), ),), Container(color:Colors.yellow[100], child: Padding( padding: const EdgeInsets.all(120.0), child: Center( child: Column( children:<Widget>[ Image.asset(\"assets/save.png\"), Text(\"\",style:TextStyle( fontSize: 20, color:Colors.green, ), ) ] ),), ),), Container(color: Colors.blue[100], child: Padding( padding: const EdgeInsets.all(100.0), child: Center( child: Column( children:<Widget>[ Text(\" GeeksforGeeks A Computer Science portal for geeks\", style:TextStyle( fontSize:30 , color:Colors.green[600], ), ), ]),), )) ]; @override Widget build(BuildContext context) { return Scaffold( body: LiquidSwipe( pages:page, enableSlideIcon: true, ),); }}", "e": 27374, "s": 25856, "text": null }, { "code": null, "e": 27382, "s": 27374, "text": "Output:" }, { "code": null, "e": 27390, "s": 27382, "text": "android" }, { "code": null, "e": 27398, "s": 27390, "text": "Flutter" }, { "code": null, "e": 27422, "s": 27398, "text": "Technical Scripter 2020" }, { "code": null, "e": 27427, "s": 27422, "text": "Dart" }, { "code": null, "e": 27435, "s": 27427, "text": "Flutter" }, { "code": null, "e": 27454, "s": 27435, "text": "Technical Scripter" }, { "code": null, "e": 27552, "s": 27454, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27561, "s": 27552, "text": "Comments" }, { "code": null, "e": 27574, "s": 27561, "text": "Old Comments" }, { "code": null, "e": 27613, "s": 27574, "text": "Flutter - Custom Bottom Navigation Bar" }, { "code": null, "e": 27639, "s": 27613, "text": "ListView Class in Flutter" }, { "code": null, "e": 27669, "s": 27639, "text": "Flutter - BorderRadius Widget" }, { "code": null, "e": 27692, "s": 27669, "text": "Flutter - Stack Widget" }, { "code": null, "e": 27737, "s": 27692, "text": "Android Studio Setup for Flutter Development" }, { "code": null, "e": 27776, "s": 27737, "text": "Flutter - Custom Bottom Navigation Bar" }, { "code": null, "e": 27793, "s": 27776, "text": "Flutter Tutorial" }, { "code": null, "e": 27823, "s": 27793, "text": "Flutter - BorderRadius Widget" }, { "code": null, "e": 27849, "s": 27823, "text": "Flutter - Flexible Widget" } ]
How to move or position a legend in ggplot2? - GeeksforGeeks
06 Jun, 2021 In this article, we will discuss how to control the legend position in ggplot using the R programming language. To draw a legend within ggplot, the parameter col is used; it basically adds colors to the plot, and these colors are used to differentiate between different plots. To depict what each color represents, a legend is produced by ggplot. The col attribute can be specified in two places.

Simply specifying, within ggplot(), the column on the basis of which colors should be differentiated will get the job done.

Syntax: ggplot(df, aes(x, y, col="name of the column to differentiate on the basis of"))

Code:

R

library("ggplot2")

function1 <- function(x){x**2}
function2 <- function(x){x**3}
function3 <- function(x){x/2}
function4 <- function(x){2*(x**3)+(x**2)-(x/2)}

df = data.frame(x = -2:2,
                values = c(function1(-2 : 2), function2(-2 : 2),
                           function3(-2 : 2), function4(-2 : 2)),
                fun = rep(c("function1", "function2",
                            "function3", "function4"), each = 5))

plot = ggplot(df, aes(x, values, col = fun)) + geom_line()
plot

Output:

Now, we will see how to move or change the position of the ggplot2 legend, for example to the top, bottom, or left. For moving the ggplot2 legend to any side of the plot, we simply add the theme() function to the plot.

Syntax: theme(legend.position)

Parameter: In general, the theme() function has many parameters to specify the theme of the plot, but here we use only the legend.position parameter, which specifies the position of the legend.

Return: Theme of the plot.

We can specify the value of the legend.position parameter as left, right, top, or bottom to draw the legend at the left, right, top, or bottom side of the plot respectively.

Here we will change the position of the legend to the bottom of the plot diagram.

Syntax: theme(legend.position = "bottom")

Code:

R

# Bottom -> legend below the plot
plot + theme(legend.position = "bottom")

Output:

Here we will change the position of the legend to the top of the plot diagram.

Syntax: theme(legend.position = "top")

Code:

R

# Top -> legend above the plot
plot + theme(legend.position = "top")

Output:

Here we will change the position of the legend to the right of the plot diagram.

Syntax: theme(legend.position = "right")

Code:

R

# Right -> legend to the right of the plot
plot + theme(legend.position = "right")

Output:

Here we will change the position of the legend to the left of the plot diagram.

Syntax: theme(legend.position = "left")

Code:

R

# Left -> legend to the left of the plot
plot + theme(legend.position = "left")

Output:

Here we can use a numeric vector to position the legend inside the plot. It works on x, y coordinates, whose values should be between 0 and 1.

Syntax: theme(legend.position = c(x, y))

Code:

R

# legend inside the plot area
plot + theme(legend.position = c(1, 0.2))

Output
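One more accepted value worth knowing, included here as an illustrative addition rather than from the original article: legend.position also accepts "none", which removes the legend entirely.

R

# none -> hide the legend completely
plot + theme(legend.position = "none")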
[ { "code": null, "e": 26487, "s": 26459, "text": "\n06 Jun, 2021" }, { "code": null, "e": 26872, "s": 26487, "text": "In this article, we will discuss how to control a legend position in ggplot using an R programming language. To draw a legend within ggplot the parameter col is used, it basically adds colors to the plot and these colors are used to differentiate between different plots. To depict what each color represents a legend is produced by ggplot. col attribute can be specified in 2 places." }, { "code": null, "e": 27010, "s": 26872, "text": "Simply specifying on basis of which attribute colors should be differentiated to the col attribute within ggplot() will get the job done." }, { "code": null, "e": 27099, "s": 27010, "text": "Syntax: ggplot(df, aes(x, y, col=”name of the column to differentiate on the basis of”))" }, { "code": null, "e": 27105, "s": 27099, "text": "Code:" }, { "code": null, "e": 27107, "s": 27105, "text": "R" }, { "code": "library(\"ggplot2\") function1 <- function(x){x**2}function2 <- function(x){x**3}function3 <- function(x){x/2}function4 <- function(x){2*(x**3)+(x**2)-(x/2)} df=data.frame(x = -2:2, values=c(function1(-2 : 2), function2(-2 : 2), function3(-2 : 2), function4(-2 : 2)), fun=rep(c(\"function1\", \"function2\", \"function3\",\"function4\"))) plot = ggplot(df,aes(x,values,col=fun))+geom_line()plot", "e": 27610, "s": 27107, "text": null }, { "code": null, "e": 27618, "s": 27610, "text": "Output:" }, { "code": null, "e": 27842, "s": 27618, "text": "Now, we will see How to move or change position of ggplot2 Legend like at Top, Bottom and left. For Moving the position of ggplot2 legend at any side of the plot, we simply add the theme() function to geom_point() function." }, { "code": null, "e": 27874, "s": 27842, "text": "Syntax : theme(legend.position)" }, { "code": null, "e": 28053, "s": 27874, "text": "Parameter : In General, theme() function has many parameters to specify the theme of the plot but here we use only legend.position parameter which specify the position of Legend." }, { "code": null, "e": 28081, "s": 28053, "text": "Return : Theme of the plot." }, { "code": null, "e": 28239, "s": 28081, "text": "We can specify the value of legend.position parameter as left, right, top and bottom to draw legend at left, right, top and bottom side of plot respectively." }, { "code": null, "e": 28321, "s": 28239, "text": "Here we will change the position of the legend at the bottom of the plot diagram." }, { "code": null, "e": 28363, "s": 28321, "text": "Syntax: theme(legend.position = “bottom”)" }, { "code": null, "e": 28369, "s": 28363, "text": "Code:" }, { "code": null, "e": 28377, "s": 28369, "text": "Python3" }, { "code": "# Bottom -> legend around the plotplot + theme(legend.position = \"bottom\")", "e": 28452, "s": 28377, "text": null }, { "code": null, "e": 28460, "s": 28452, "text": "Output:" }, { "code": null, "e": 28539, "s": 28460, "text": "Here we will change the position of the legend at the top of the plot diagram." }, { "code": null, "e": 28578, "s": 28539, "text": "Syntax: theme(legend.position = “top”)" }, { "code": null, "e": 28584, "s": 28578, "text": "Code:" }, { "code": null, "e": 28592, "s": 28584, "text": "Python3" }, { "code": "# top -> legend around the plotplot + theme(legend.position = \"top\")", "e": 28661, "s": 28592, "text": null }, { "code": null, "e": 28669, "s": 28661, "text": "Output:" }, { "code": null, "e": 28750, "s": 28669, "text": "Here we will change the position of the legend to the right of the plot diagram." 
}, { "code": null, "e": 28791, "s": 28750, "text": "Syntax: theme(legend.position = “right”)" }, { "code": null, "e": 28797, "s": 28791, "text": "Code:" }, { "code": null, "e": 28805, "s": 28797, "text": "Python3" }, { "code": "# Right -> legend around the plotplot + theme(legend.position = \"right\")", "e": 28878, "s": 28805, "text": null }, { "code": null, "e": 28886, "s": 28878, "text": "Output:" }, { "code": null, "e": 28966, "s": 28886, "text": "Here we will change the position of the legend to the left of the plot diagram." }, { "code": null, "e": 29006, "s": 28966, "text": "Syntax: theme(legend.position = “left”)" }, { "code": null, "e": 29012, "s": 29006, "text": "Code:" }, { "code": null, "e": 29020, "s": 29012, "text": "Python3" }, { "code": "# Left -> legend around the plotplot + theme(legend.position = \"left\")", "e": 29091, "s": 29020, "text": null }, { "code": null, "e": 29099, "s": 29091, "text": "Output:" }, { "code": null, "e": 29224, "s": 29099, "text": "Here we can use a numeric vector to plotting the legend. It basically works on X, Y coordinates, the value should be 0 to 1." }, { "code": null, "e": 29265, "s": 29224, "text": "Syntax: theme(legend.position = c(x, y))" }, { "code": null, "e": 29271, "s": 29265, "text": "Code:" }, { "code": null, "e": 29279, "s": 29271, "text": "Python3" }, { "code": "# legend around the plotplot + theme(legend.position = c(1, 0.2))", "e": 29345, "s": 29279, "text": null }, { "code": null, "e": 29352, "s": 29345, "text": "Output" }, { "code": null, "e": 29359, "s": 29352, "text": "Picked" }, { "code": null, "e": 29368, "s": 29359, "text": "R-ggplot" }, { "code": null, "e": 29379, "s": 29368, "text": "R Language" }, { "code": null, "e": 29477, "s": 29379, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29529, "s": 29477, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 29564, "s": 29529, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 29602, "s": 29564, "text": "How to Change Axis Scales in R Plots?" }, { "code": null, "e": 29660, "s": 29602, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 29703, "s": 29660, "text": "Replace Specific Characters in String in R" }, { "code": null, "e": 29752, "s": 29703, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 29769, "s": 29752, "text": "R - if statement" }, { "code": null, "e": 29819, "s": 29769, "text": "How to filter R dataframe by multiple conditions?" }, { "code": null, "e": 29871, "s": 29819, "text": "Plot mean and standard deviation using ggplot2 in R" } ]
Kotlin functions - GeeksforGeeks
28 Mar, 2022 A function is a unit of code that performs a specific task. In programming, functions are used to break the code into smaller modules, which makes the program more manageable. For example, if we have to compute the sum of two numbers, we define a function sum():

fun sum(a: Int, b: Int): Int {
   return a + b
}

We can call sum(x, y) any number of times and it will return the sum of the two numbers. So, functions avoid the repetition of code and make the code more reusable.

In Kotlin, there are two types of functions:

Standard library functions
User-defined functions

In Kotlin, a number of built-in functions are already defined in the standard library and available for use. We can call them by passing arguments according to the requirement. In the below program, we will use the built-in functions arrayOf(), sum() and println(). The function arrayOf() requires some arguments, like integers or doubles, to create an array, and we can find the sum of all elements using sum(), which does not require any argument.

Kotlin

fun main(args: Array<String>) {
   var sum = arrayOf(1,2,3,4,5,6,7,8,9,10).sum()

   println("The sum of all the elements of an array is: $sum")
}

Output:

The sum of all the elements of an array is: 55

In the below program, we will use rem() to find the remainder.

Kotlin

fun main(args: Array<String>) {
   var num1 = 26
   var num2 = 3

   var result = num1.rem(num2)
   println("The remainder when $num1 is divided by $num2 is: $result")
}

Output:

The remainder when 26 is divided by 3 is: 2

A list of different standard library functions and their uses –

sqrt() – Used to calculate the square root of a number.
print() – Used to print a message to standard output.
rem() – To find the remainder of one number when divided by another.
toInt() – To convert a number to an integer value.
readLine() – Used for standard input.
compareTo() – To compare two numbers and return a boolean value.

A function which is defined by the user is called a user-defined function. As we know, to divide a large program into small modules, we need to define functions. Each defined function has its own properties, like the name of the function, the return type of the function, the number of parameters passed to the function, etc.

In Kotlin, functions can be declared at the top level, so there is no need to create a class to hold a function, as we are used to doing in other languages such as Java or Scala. Generally we define a function as:

fun fun_name(a: data_type, b: data_type, ......): return_type {
   // other codes
   return
}

fun – Keyword to define a function.
fun_name – Name of the function, which is later used to call the function.
a: data_type – Here, a is an argument passed and data_type specifies the data type of the argument, like integer or string.
return_type – Specifies the type of data value returned by the function.
{....} – Curly braces represent the block of the function.

Kotlin function mul() to multiply two numbers having the same type of parameters –

Kotlin

fun mul(num1: Int, num2: Int): Int {
   var number = num1.times(num2)
   return number
}

Explanation: We have defined a function above starting with the fun keyword, whose return type is an Integer.

>> mul() is the name of the function
>> num1 and num2 are names of the parameters being accepted by the function, and both are of Integer type.
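As an illustrative aside (not part of the original article), Kotlin also allows a function whose body is a single expression to be written in a compact form, with the return type inferred automatically:

Kotlin

// single-expression form of mul(); the return type Int is inferred
fun mul(num1: Int, num2: Int) = num1 * num2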
Kotlin function student() having parameters of different types:

Kotlin

fun student(name: String, roll_no: Int, grade: Char) {
    println("Name of the student is : $name")
    println("Roll no of the student is: $roll_no")
    println("Grade of the student is: $grade")
}

Explanation: We have defined a function using the fun keyword whose return type is Unit by default.

>> student is the name of the function.
>> name is a parameter of type String.
>> roll_no is a parameter of type Int.
>> grade is a parameter of type Char.

We create a function to perform a specific task. Whenever a function is called, the program leaves the current section of code and begins to execute the function. The flow of control through a function is:

The program comes to the line containing a function call.
When the function is called, control transfers to that function.
All the instructions of the function execute one by one.
Control is transferred back only when the function reaches its closing brace or a return statement.
Any data returned by the function is used in place of the function call in the original line of code.

Kotlin program calling the mul() function with two arguments:

Kotlin

fun mul(a: Int, b: Int): Int {
    var number = a.times(b)
    return number
}

fun main(args: Array<String>) {
    var result = mul(3, 5)
    println("The multiplication of two numbers is: $result")
}

Output:

The multiplication of two numbers is: 15

Explanation: In the above program, we call mul(3, 5) with two arguments. When the function is called, control transfers to mul() and the statements in its block begin to execute. Using the built-in times(), it calculates the product of the two numbers and stores it in the variable number. The function then returns that integer value, and control transfers back to main(), where mul() was called. We store the value returned by the function in the mutable variable result, and println() prints it to standard output.

Kotlin program calling the student() function with all arguments:

Kotlin

fun student(name: String, grade: Char, roll_no: Int) {
    println("Name of the student is : $name")
    println("Grade of the student is: $grade")
    println("Roll no of the student is: $roll_no")
}

fun main(args: Array<String>) {
    val name = "Praveen"
    val rollno = 25
    val grade = 'A'
    student(name, grade, rollno)
    student("Gaurav", 'B', 30)
}

Output:

Name of the student is : Praveen
Grade of the student is: A
Roll no of the student is: 25
Name of the student is : Gaurav
Grade of the student is: B
Roll no of the student is: 30

Explanation: In the above program, we call the student() function by passing the arguments in the same order as required; if we jumble the arguments, we get a type-mismatch error. In the first call we pass the arguments via variables, and in the second call we pass the values directly without storing them in variables. Both ways of calling a function are correct.
[ { "code": null, "e": 25523, "s": 25495, "text": "\n28 Mar, 2022" }, { "code": null, "e": 25774, "s": 25523, "text": "A function is a unit of code that performs a special task. In programming, function is used to break the code into smaller modules which makes the program more manageable. For example: If we have to compute sum of two numbers then define a fun sum()." }, { "code": null, "e": 25824, "s": 25774, "text": "fun sum(a: Int, b: Int): Int {\n return a + b\n}" }, { "code": null, "e": 26030, "s": 25824, "text": "We can call sum(x, y) at any number of times and it will return sum of two numbers. So, function avoids the repetition of code and makes the code more reusable. In Kotlin, there are two types of function- " }, { "code": null, "e": 26056, "s": 26030, "text": "Standard library function" }, { "code": null, "e": 26078, "s": 26056, "text": "User defined function" }, { "code": null, "e": 26523, "s": 26078, "text": "In Kotlin, there are different number of built-in functions already defined in standard library and available for use. We can call them by passing arguments according to requirement. In below program, we will use built-in functions arrayOf(), sum() and println(). The function arrayOf() require some arguments like integers, double etc to create an array and we can find the sum of all elements using sum() which does not require any argument. " }, { "code": null, "e": 26530, "s": 26523, "text": "Kotlin" }, { "code": "fun main(args: Array<String>) { var sum = arrayOf(1,2,3,4,5,6,7,8,9,10).sum() println(\"The sum of all the elements of an array is: $sum\")}", "e": 26676, "s": 26530, "text": null }, { "code": null, "e": 26684, "s": 26676, "text": "Output:" }, { "code": null, "e": 26731, "s": 26684, "text": "The sum of all the elements of an array is: 55" }, { "code": null, "e": 26791, "s": 26731, "text": "In below program, we will use rem() to find the remainder. " }, { "code": null, "e": 26798, "s": 26791, "text": "Kotlin" }, { "code": "fun main(args: Array<String>) { var num1 = 26 var num2 = 3 var result = num1.rem(num2) println(\"The remainder when $num1 is divided by $num2 is: $result\")}", "e": 26967, "s": 26798, "text": null }, { "code": null, "e": 26975, "s": 26967, "text": "Output:" }, { "code": null, "e": 27019, "s": 26975, "text": "The remainder when 26 is divided by 3 is: 2" }, { "code": null, "e": 27084, "s": 27019, "text": "The list of different standard library functions and their use –" }, { "code": null, "e": 27140, "s": 27084, "text": "sqrt() – Used to calculate the square root of a number." }, { "code": null, "e": 27194, "s": 27140, "text": "print() – Used to print a message to standard output." }, { "code": null, "e": 27263, "s": 27194, "text": "rem() – To find the remainder of one number when divided by another." }, { "code": null, "e": 27311, "s": 27263, "text": "toInt() – To convert a number to integer value." }, { "code": null, "e": 27349, "s": 27311, "text": "readline() – Used for standard input." }, { "code": null, "e": 27412, "s": 27349, "text": "compareTo() – To compare two numbers and return boolean value." }, { "code": null, "e": 27712, "s": 27412, "text": "A function which is defined by the user is called user-defined function. As we know, to divide a large program in small modules we need to define function. Each defined function has its own properties like name of function, return type of a function, number of parameters passed to the function etc." 
}, { "code": null, "e": 27911, "s": 27712, "text": "In Kotlin, function can be declared at the top, and no need to create a class to hold a function, which we are used to do in other languages such as Java or Scala. Generally we define a function as:" }, { "code": null, "e": 28008, "s": 27911, "text": "fun fun_name(a: data_type, b: data_type, ......): return_type {\n // other codes\n return\n}" }, { "code": null, "e": 28043, "s": 28008, "text": "fun– Keyword to define a function." }, { "code": null, "e": 28114, "s": 28043, "text": "fun_name – Name of the function which later used to call the function." }, { "code": null, "e": 28231, "s": 28114, "text": "a: data_type – Here, a is an argument passed and data_type specify the data type of argument like integer or string." }, { "code": null, "e": 28300, "s": 28231, "text": "return_type – Specify the type of data value return by the function." }, { "code": null, "e": 28355, "s": 28300, "text": "{....} – Curly braces represent the block of function." }, { "code": null, "e": 28436, "s": 28355, "text": " Kotlin function mul() to multiply two numbers having same type of parameters- " }, { "code": null, "e": 28443, "s": 28436, "text": "Kotlin" }, { "code": "fun mul(num1: Int, num2: Int): Int { var number = num1.times(num2) return number}", "e": 28531, "s": 28443, "text": null }, { "code": null, "e": 28636, "s": 28531, "text": "Explanation: We have defined a function above starting with fun keyword whose return type is an Integer." }, { "code": null, "e": 28778, "s": 28636, "text": ">> mul() is the name of the function\n>> num1 and num2 are names of the parameters being accepted by the function\n and both are Integer type." }, { "code": null, "e": 28845, "s": 28778, "text": " Kotlin function student() having different types of parameters- " }, { "code": null, "e": 28852, "s": 28845, "text": "Kotlin" }, { "code": "fun student(name: String , roll_no: Int , grade: Char) { println(\"Name of the student is : $name\") println(\"Roll no of the student is: $roll_no\") println(\"Grade of the student is: $grade\")}", "e": 29051, "s": 28852, "text": null }, { "code": null, "e": 29147, "s": 29051, "text": "Explanation- We have defined a function using fun keyword whose return type in Unit by default." }, { "code": null, "e": 29331, "s": 29147, "text": ">> student is the name of the function.\n>> name is the parameter of String data type.\n>> roll_no is the parameter of Integer data type\n>> grade is the parameter of Character data type" }, { "code": null, "e": 29524, "s": 29331, "text": "We create a function to assign a specific task. Whenever a function is called the program leaves the current section of code and begins to execute the function. The flow-control of a function-" }, { "code": null, "e": 29894, "s": 29524, "text": "The program comes to the line containing a function call.When function is called, control transfers to that function.Executes all the instruction of function one by one.Control is transferred back only when the function reaches closing braces or there any return statement.Any data returned by the function is used in place of the function in the original line of code." }, { "code": null, "e": 29952, "s": 29894, "text": "The program comes to the line containing a function call." }, { "code": null, "e": 30013, "s": 29952, "text": "When function is called, control transfers to that function." }, { "code": null, "e": 30066, "s": 30013, "text": "Executes all the instruction of function one by one." 
}, { "code": null, "e": 30171, "s": 30066, "text": "Control is transferred back only when the function reaches closing braces or there any return statement." }, { "code": null, "e": 30268, "s": 30171, "text": "Any data returned by the function is used in place of the function in the original line of code." }, { "code": null, "e": 30337, "s": 30268, "text": "Kotlin program to call the mul() function by passing two arguments- " }, { "code": null, "e": 30344, "s": 30337, "text": "Kotlin" }, { "code": "fun mul(a: Int, b: Int): Int { var number = a.times(b) return number}fun main(args: Array<String>) { var result = mul(3,5) println(\"The multiplication of two numbers is: $result\")}", "e": 30537, "s": 30344, "text": null }, { "code": null, "e": 30545, "s": 30537, "text": "Output:" }, { "code": null, "e": 30587, "s": 30545, "text": "The multiplication of two numbers is: 15 " }, { "code": null, "e": 31221, "s": 30587, "text": "Explanation- In the above program, we are calling the mul(3, 5) function by passing two arguments. When the function is called the control transfers to the mul() and starts execution of the statements in the block. Using in-built times() it calculates the multiple of two numbers and store in a variable number. Then it exits the function with returning the integer value and controls transfer back to the main() where it calls mul(). Then we store the value returned by the function into mutable variable result and println() prints it to the standard output. Kotlin program to call the student() function by passing all arguments- " }, { "code": null, "e": 31228, "s": 31221, "text": "Kotlin" }, { "code": "fun student( name: String , grade: Char , roll_no: Int) { println(\"Name of the student is : $name\") println(\"Grade of the student is: $grade\") println(\"Roll no of the student is: $roll_no\") }fun main(args: Array<String>) { val name = \"Praveen\" val rollno = 25 val grade = 'A' student(name,grade,rollno) student(\"Gaurav\",'B',30)}", "e": 31581, "s": 31228, "text": null }, { "code": null, "e": 31589, "s": 31581, "text": "Output:" }, { "code": null, "e": 31768, "s": 31589, "text": "Name of the student is : Praveen\nGrade of the student is: A\nRoll no of the student is: 25\nName of the student is : Gaurav\nGrade of the student is: B\nRoll no of the student is: 30" }, { "code": null, "e": 32158, "s": 31768, "text": "Explanation- In the above program, we are calling the student() function by passing the arguments in the same order as required. If we try to jumble the arguments then it gives the type mismatch error. In the first call, we pass the argument using variables and in the second call, we pass the arguments values without storing in variables. So, both methods are correct to call a function." }, { "code": null, "e": 32175, "s": 32158, "text": "ayushpandey3july" }, { "code": null, "e": 32192, "s": 32175, "text": "Kotlin Functions" }, { "code": null, "e": 32199, "s": 32192, "text": "Kotlin" }, { "code": null, "e": 32297, "s": 32199, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 32328, "s": 32297, "text": "Android RecyclerView in Kotlin" }, { "code": null, "e": 32370, "s": 32328, "text": "Retrofit with Kotlin Coroutine in Android" }, { "code": null, "e": 32394, "s": 32370, "text": "Kotlin Android Tutorial" }, { "code": null, "e": 32417, "s": 32394, "text": "Kotlin when expression" }, { "code": null, "e": 32457, "s": 32417, "text": "How to Get Current Location in Android?" 
}, { "code": null, "e": 32471, "s": 32457, "text": "Android Menus" }, { "code": null, "e": 32505, "s": 32471, "text": "ImageView in Android with Example" }, { "code": null, "e": 32524, "s": 32505, "text": "Kotlin constructor" }, { "code": null, "e": 32558, "s": 32524, "text": "Android SQLite Database in Kotlin" } ]
GATE | GATE-CS-2005 | Question 85 - GeeksforGeeks
12 Jul, 2021

Consider the following expression grammar. The semantic rules for expression calculation are stated next to each grammar production.

E → number      E.val = number.val
  | E '+' E     E(1).val = E(2).val + E(3).val
  | E '×' E     E(1).val = E(2).val × E(3).val

The above grammar and semantic rules are fed to a yacc tool (which is an LALR(1) parser generator) for parsing and evaluating arithmetic expressions. Which one of the following is true about the action of yacc for the given grammar?

(A) It detects recursion and eliminates recursion
(B) It detects a reduce-reduce conflict, and resolves it
(C) It detects a shift-reduce conflict, and resolves the conflict in favor of a shift over a reduce action
(D) It detects a shift-reduce conflict, and resolves the conflict in favor of a reduce over a shift action

Answer: (C)

Explanation:

Background: yacc resolves conflicts using the following rules:
shift is preferred over reduce in a shift/reduce conflict;
the first reduce is preferred over the others in a reduce/reduce conflict.

You can answer this question directly by constructing the LALR(1) parse table, though that is a time-consuming process. To answer it faster, note that this ambiguous grammar (no precedence or associativity is declared for '+' and '×') is certain to produce a shift-reduce conflict. Given that this is a single-choice question, option (C) must then be the right answer.

A fool-proof justification is to generate the LALR(1) parse table, which is a lengthy process. Once we have the parse table, we can clearly see that:
i. no reduce/reduce conflict arises in the given grammar;
ii. every shift/reduce conflict is resolved in favor of the shift, which makes the expression calculator right-associative.

According to the above conclusions, the only correct option is (C).

Reference: http://dinosaur.compilertools.net/yacc/

This solution is contributed by Vineet Purswani.
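To see concretely what conclusion (ii) means for evaluation, here is a small Python sketch (an illustration of the resulting parse, not yacc itself): with no precedence declared and shift preferred at every conflict, both operators end up right-associative with equal precedence, so the right-hand side of every operator groups first.

# Minimal sketch: evaluate a flat token list the way the
# shift-preferring (right-associative) parser would group it.
def evaluate(tokens):
    # tokens is a flat list such as [2, '+', 3, 'x', 4]
    if len(tokens) == 1:
        return tokens[0]
    head, op, rest = tokens[0], tokens[1], tokens[2:]
    right = evaluate(rest)  # shift wins: the right side groups first
    return head + right if op == '+' else head * right

print(evaluate([2, 'x', 3, '+', 4]))  # 14, parsed as 2 x (3 + 4)
# a reduce-preferring (left-associative) parse would give (2 x 3) + 4 = 10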
[ { "code": null, "e": 25755, "s": 25727, "text": "\n12 Jul, 2021" }, { "code": null, "e": 25889, "s": 25755, "text": "Consider the following expression grammar. The seman­tic rules for expression calculation are stated next to each grammar production." }, { "code": null, "e": 26016, "s": 25889, "text": " E → number \t E.val = number. val\n | E '+' E \t E(1).val = E(2).val + E(3).val\n | E '×' E\t E(1).val = E(2).val × E(3).val" }, { "code": null, "e": 26587, "s": 26016, "text": "The above grammar and the semantic rules are fed to a yacc tool (which is an LALR (1) parser generator) for parsing and evaluating arithmetic expressions. Which one of the following is true about the action of yacc for the given grammar?(A) It detects recursion and eliminates recursion(B) It detects reduce-reduce conflict, and resolves(C) It detects shift-reduce conflict, and resolves the conflict in favor of a shift over a reduce action(D) It detects shift-reduce conflict, and resolves the conflict in favor of a reduce over a shift actionAnswer: (C)Explanation: " }, { "code": null, "e": 26779, "s": 26587, "text": "Backgroundyacc conflict resolution is done using following rules:shift is preferred over reduce while shift/reduce conflict.first reduce is preferred over others while reduce/reduce conflict." }, { "code": null, "e": 27098, "s": 26781, "text": "You can answer to this question straightforward by constructing LALR(1) parse table, though its a time taking process. To answer it faster, one can see intuitively that this grammar will have a shift-reduce conflict for sure. In that case, given this is a single choice question, (C) option will be the right answer." }, { "code": null, "e": 27451, "s": 27098, "text": "Fool-proof explanation would be to generate LALR(1) parse table, which is a lengthy process. Once we have the parse table with us, we can clearly see thati. reduce/reduce conflict will not arise in the above given grammarii. shift/reduce conflict will be resolved by giving preference to shift, hence making the expression calculator right associative." }, { "code": null, "e": 27524, "s": 27451, "text": "According to the above conclusions, only correct option seems to be (C)." }, { "code": null, "e": 27535, "s": 27524, "text": "Reference:" }, { "code": null, "e": 27575, "s": 27535, "text": "http://dinosaur.compilertools.net/yacc/" }, { "code": null, "e": 27645, "s": 27575, "text": "This solution is contributed by Vineet Purswani.Quiz of this Question" }, { "code": null, "e": 27653, "s": 27645, "text": "clintra" }, { "code": null, "e": 27666, "s": 27653, "text": "GATE-CS-2005" }, { "code": null, "e": 27684, "s": 27666, "text": "GATE-GATE-CS-2005" }, { "code": null, "e": 27689, "s": 27684, "text": "GATE" }, { "code": null, "e": 27787, "s": 27689, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27821, "s": 27787, "text": "GATE | Gate IT 2007 | Question 25" }, { "code": null, "e": 27855, "s": 27821, "text": "GATE | GATE-CS-2000 | Question 41" }, { "code": null, "e": 27889, "s": 27855, "text": "GATE | GATE-CS-2001 | Question 39" }, { "code": null, "e": 27922, "s": 27889, "text": "GATE | GATE-CS-2005 | Question 6" }, { "code": null, "e": 27958, "s": 27922, "text": "GATE | GATE MOCK 2017 | Question 21" }, { "code": null, "e": 27992, "s": 27958, "text": "GATE | GATE-CS-2006 | Question 47" }, { "code": null, "e": 28028, "s": 27992, "text": "GATE | GATE MOCK 2017 | Question 24" }, { "code": null, "e": 28062, "s": 28028, "text": "GATE | Gate IT 2008 | Question 43" }, { "code": null, "e": 28096, "s": 28062, "text": "GATE | GATE-CS-2009 | Question 38" } ]
Find the count of distinct numbers in a range - GeeksforGeeks
07 Sep, 2021

Given an array of size N containing numbers only from 0 to 63, you are asked Q queries about it. The queries are as follows:

1 X Y i.e. change the element at index X to Y
2 L R i.e. print the count of distinct elements present between L and R inclusive

Examples:

Input:
N = 7
ar = {1, 2, 1, 3, 1, 2, 1}
Q = 5
{ {2, 1, 4},
  {1, 4, 2},
  {1, 5, 2},
  {2, 4, 6},
  {2, 1, 7} }
Output:
3
1
2

Input:
N = 15
ar = {4, 6, 3, 2, 2, 3, 6, 5, 5, 5, 4, 2, 1, 5, 1}
Q = 5
{ {1, 6, 5},
  {1, 4, 2},
  {2, 6, 14},
  {2, 6, 8},
  {2, 1, 6} }
Output:
5
2
5

Pre-requisites: Segment Tree and Bit Manipulation

Approach:

The bitmask denotes the presence or absence of each number: the ith bit of the mask is set exactly when the value i is present, so we can test for any value by checking the corresponding bit.
Build a classical segment tree in which each node holds a bitmask encoding the set of distinct elements in the segment covered by that node.
Build the segment tree bottom-up, so the bitmask of the current node can be computed by a bitwise OR of the bitmasks of its left and right children. Leaf nodes are handled separately. Updates are done in the same manner.

Below is the implementation of the above approach:

C++

// C++ Program to find the distinct
// elements in a range
#include <bits/stdc++.h>
using namespace std;

// Function to perform queries in a range
long long query(int start, int end, int left, int right,
                int node, long long seg[])
{
    // No overlap
    if (end < left || start > right) {
        return 0;
    }

    // Totally Overlap
    else if (start >= left && end <= right) {
        return seg[node];
    }

    // Partial Overlap
    else {
        int mid = (start + end) / 2;

        // Finding the Answer for the left Child
        long long leftChild = query(start, mid, left, right,
                                    2 * node, seg);

        // Finding the Answer for the right Child
        long long rightChild = query(mid + 1, end, left, right,
                                     2 * node + 1, seg);

        // Combining the BitMasks
        return (leftChild | rightChild);
    }
}

// Function to perform update operation
// in the Segment seg
void update(int left, int right, int index, int Value,
            int node, int ar[], long long seg[])
{
    if (left == right) {
        ar[index] = Value;

        // Forming the BitMask
        seg[node] = (1LL << Value);
        return;
    }

    int mid = (left + right) / 2;

    if (index > mid) {
        // Updating the right Child
        update(mid + 1, right, index, Value,
               2 * node + 1, ar, seg);
    }
    else {
        // Updating the left Child
        update(left, mid, index, Value,
               2 * node, ar, seg);
    }

    // Updating the BitMask
    seg[node] = (seg[2 * node] | seg[2 * node + 1]);
}

// Building the Segment Tree
void build(int left, int right, int node,
           int ar[], long long seg[])
{
    if (left == right) {
        // Building the Initial BitMask
        seg[node] = (1LL << ar[left]);
        return;
    }

    int mid = (left + right) / 2;

    // Building the left seg tree
    build(left, mid, 2 * node, ar, seg);

    // Building the right seg tree
    build(mid + 1, right, 2 * node + 1, ar, seg);

    // Forming the BitMask
    seg[node] = (seg[2 * node] | seg[2 * node + 1]);
}

// Utility Function to answer the queries
void getDistinctCount(vector<vector<int> >& queries, int ar[],
                      long long seg[], int n)
{
    for (int i = 0; i < (int)queries.size(); i++) {
        int op = queries[i][0];

        if (op == 2) {
            int l = queries[i][1], r = queries[i][2];
            long long tempMask = query(0, n - 1, l - 1,
                                       r - 1, 1, seg);
            int countOfBits = 0;

            // Counting the set bits which denote the
            // distinct elements
            for (int b = 63; b >= 0; b--) {
                if (tempMask & (1LL << b)) {
                    countOfBits++;
                }
            }
            cout << countOfBits << '\n';
        }
        else {
            int index = queries[i][1];
            int val = queries[i][2];

            // Updating the value
            update(0, n - 1, index - 1, val, 1, ar, seg);
        }
    }
}

// Driver Code
int main()
{
    const int n = 7;
    int ar[] = { 1, 2, 1, 3, 1, 2, 1 };
    long long seg[4 * n] = { 0 };
    build(0, n - 1, 1, ar, seg);

    vector<vector<int> > queries = { { 2, 1, 4 },
                                     { 1, 4, 2 },
                                     { 1, 5, 2 },
                                     { 2, 4, 6 },
                                     { 2, 1, 7 } };
    getDistinctCount(queries, ar, seg, n);
    return 0;
}

Java

// Java Program to find the distinct
// elements in a range
import java.util.*;

class GFG {

    // Function to perform queries in a range
    static long query(int start, int end, int left, int right,
                      int node, long seg[])
    {
        // No overlap
        if (end < left || start > right) {
            return 0;
        }

        // Totally Overlap
        else if (start >= left && end <= right) {
            return seg[node];
        }

        // Partial Overlap
        else {
            int mid = (start + end) / 2;

            // Finding the Answer for the left Child
            long leftChild = query(start, mid, left, right,
                                   2 * node, seg);

            // Finding the Answer for the right Child
            long rightChild = query(mid + 1, end, left, right,
                                    2 * node + 1, seg);

            // Combining the BitMasks
            return (leftChild | rightChild);
        }
    }

    // Function to perform update operation
    // in the Segment seg
    static void update(int left, int right, int index, int Value,
                       int node, int ar[], long seg[])
    {
        if (left == right) {
            ar[index] = Value;

            // Forming the BitMask
            seg[node] = (1L << Value);
            return;
        }

        int mid = (left + right) / 2;

        if (index > mid) {
            // Updating the right Child
            update(mid + 1, right, index, Value,
                   2 * node + 1, ar, seg);
        }
        else {
            // Updating the left Child
            update(left, mid, index, Value,
                   2 * node, ar, seg);
        }

        // Updating the BitMask
        seg[node] = (seg[2 * node] | seg[2 * node + 1]);
    }

    // Building the Segment Tree
    static void build(int left, int right, int node,
                      int ar[], long seg[])
    {
        if (left == right) {
            // Building the Initial BitMask
            seg[node] = (1L << ar[left]);
            return;
        }

        int mid = (left + right) / 2;

        // Building the left seg tree
        build(left, mid, 2 * node, ar, seg);

        // Building the right seg tree
        build(mid + 1, right, 2 * node + 1, ar, seg);

        // Forming the BitMask
        seg[node] = (seg[2 * node] | seg[2 * node + 1]);
    }

    // Utility Function to answer the queries
    static void getDistinctCount(int[][] queries, int ar[],
                                 long seg[], int n)
    {
        for (int i = 0; i < queries.length; i++) {
            int op = queries[i][0];

            if (op == 2) {
                int l = queries[i][1], r = queries[i][2];
                long tempMask = query(0, n - 1, l - 1,
                                      r - 1, 1, seg);
                int countOfBits = 0;

                // Counting the set bits which denote the
                // distinct elements
                for (int s = 63; s >= 0; s--) {
                    if ((tempMask & (1L << s)) > 0) {
                        countOfBits++;
                    }
                }
                System.out.println(countOfBits);
            }
            else {
                int index = queries[i][1];
                int val = queries[i][2];

                // Updating the value
                update(0, n - 1, index - 1, val, 1, ar, seg);
            }
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 7;
        int ar[] = { 1, 2, 1, 3, 1, 2, 1 };
        long seg[] = new long[4 * n];
        build(0, n - 1, 1, ar, seg);

        int[][] queries = { { 2, 1, 4 }, { 1, 4, 2 },
                            { 1, 5, 2 }, { 2, 4, 6 },
                            { 2, 1, 7 } };
        getDistinctCount(queries, ar, seg, n);
    }
}

// This code is contributed by PrinciRaj1992

Python3

# Python3 Program to find the distinct
# elements in a range

# Function to perform queries in a range
def query(start, end, left, right, node, seg):

    # No overlap
    if (end < left or start > right):
        return 0

    # Totally Overlap
    elif (start >= left and end <= right):
        return seg[node]

    # Partial Overlap
    else:
        mid = (start + end) // 2

        # Finding the Answer for the left Child
        leftChild = query(start, mid, left, right,
                          2 * node, seg)

        # Finding the Answer for the right Child
        rightChild = query(mid + 1, end, left, right,
                           2 * node + 1, seg)

        # Combining the BitMasks
        return (leftChild | rightChild)

# Function to perform update operation
# in the Segment seg
def update(left, right, index, Value, node, ar, seg):

    if (left == right):
        ar[index] = Value

        # Forming the BitMask
        seg[node] = (1 << Value)
        return

    mid = (left + right) // 2

    if (index > mid):

        # Updating the right Child
        update(mid + 1, right, index, Value,
               2 * node + 1, ar, seg)
    else:

        # Updating the left Child
        update(left, mid, index, Value,
               2 * node, ar, seg)

    # Updating the BitMask
    seg[node] = (seg[2 * node] | seg[2 * node + 1])

# Building the Segment Tree
def build(left, right, node, ar, seg):

    if (left == right):

        # Building the Initial BitMask
        seg[node] = (1 << ar[left])
        return

    mid = (left + right) // 2

    # Building the left seg tree
    build(left, mid, 2 * node, ar, seg)

    # Building the right seg tree
    build(mid + 1, right, 2 * node + 1, ar, seg)

    # Forming the BitMask
    seg[node] = (seg[2 * node] | seg[2 * node + 1])

# Utility Function to answer the queries
def getDistinctCount(queries, ar, seg, n):

    for i in range(len(queries)):
        op = queries[i][0]

        if (op == 2):
            l = queries[i][1]
            r = queries[i][2]
            tempMask = query(0, n - 1, l - 1,
                             r - 1, 1, seg)
            countOfBits = 0

            # Counting the set bits which denote
            # the distinct elements
            for j in range(63, -1, -1):
                if (tempMask & (1 << j)):
                    countOfBits += 1
            print(countOfBits)
        else:
            index = queries[i][1]
            val = queries[i][2]

            # Updating the value
            update(0, n - 1, index - 1, val, 1, ar, seg)

# Driver Code
if __name__ == '__main__':

    n = 7
    ar = [1, 2, 1, 3, 1, 2, 1]
    seg = [0] * (4 * n)
    build(0, n - 1, 1, ar, seg)

    queries = [[2, 1, 4], [1, 4, 2],
               [1, 5, 2], [2, 4, 6],
               [2, 1, 7]]
    getDistinctCount(queries, ar, seg, n)

# This code is contributed by Mohit Kumar

C#

// C# Program to find the distinct
// elements in a range
using System;

class GFG {

    // Function to perform queries in a range
    static long query(int start, int end, int left, int right,
                      int node, long[] seg)
    {
        // No overlap
        if (end < left || start > right) {
            return 0;
        }

        // Totally Overlap
        else if (start >= left && end <= right) {
            return seg[node];
        }

        // Partial Overlap
        else {
            int mid = (start + end) / 2;

            // Finding the Answer for the left Child
            long leftChild = query(start, mid, left, right,
                                   2 * node, seg);

            // Finding the Answer for the right Child
            long rightChild = query(mid + 1, end, left, right,
                                    2 * node + 1, seg);

            // Combining the BitMasks
            return (leftChild | rightChild);
        }
    }

    // Function to perform update operation
    // in the Segment seg
    static void update(int left, int right, int index, int Value,
                       int node, int[] ar, long[] seg)
    {
        if (left == right) {
            ar[index] = Value;

            // Forming the BitMask
            seg[node] = (1L << Value);
            return;
        }

        int mid = (left + right) / 2;

        if (index > mid) {
            // Updating the right Child
            update(mid + 1, right, index, Value,
                   2 * node + 1, ar, seg);
        }
        else {
            // Updating the left Child
            update(left, mid, index, Value,
                   2 * node, ar, seg);
        }

        // Updating the BitMask
        seg[node] = (seg[2 * node] | seg[2 * node + 1]);
    }

    // Building the Segment Tree
    static void build(int left, int right, int node,
                      int[] ar, long[] seg)
    {
        if (left == right) {
            // Building the Initial BitMask
            seg[node] = (1L << ar[left]);
            return;
        }

        int mid = (left + right) / 2;

        // Building the left seg tree
        build(left, mid, 2 * node, ar, seg);

        // Building the right seg tree
        build(mid + 1, right, 2 * node + 1, ar, seg);

        // Forming the BitMask
        seg[node] = (seg[2 * node] | seg[2 * node + 1]);
    }

    // Utility Function to answer the queries
    static void getDistinctCount(int[,] queries, int[] ar,
                                 long[] seg, int n)
    {
        for (int i = 0; i < queries.GetLength(0); i++) {
            int op = queries[i, 0];

            if (op == 2) {
                int l = queries[i, 1], r = queries[i, 2];
                long tempMask = query(0, n - 1, l - 1,
                                      r - 1, 1, seg);
                int countOfBits = 0;

                // Counting the set bits which denote the
                // distinct elements
                for (int s = 63; s >= 0; s--) {
                    if ((tempMask & (1L << s)) > 0) {
                        countOfBits++;
                    }
                }
                Console.WriteLine(countOfBits);
            }
            else {
                int index = queries[i, 1];
                int val = queries[i, 2];

                // Updating the value
                update(0, n - 1, index - 1, val, 1, ar, seg);
            }
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int n = 7;
        int[] ar = { 1, 2, 1, 3, 1, 2, 1 };
        long[] seg = new long[4 * n];
        build(0, n - 1, 1, ar, seg);

        int[,] queries = { { 2, 1, 4 }, { 1, 4, 2 },
                           { 1, 5, 2 }, { 2, 4, 6 },
                           { 2, 1, 7 } };
        getDistinctCount(queries, ar, seg, n);
    }
}

// This code is contributed by 29AjayKumar

Output:

3
1
2

Time complexity: O(N + Q*Log(N))
}, { "code": null, "e": 27376, "s": 27323, "text": "Below is the implementation of the above approach: " }, { "code": null, "e": 27380, "s": 27376, "text": "C++" }, { "code": null, "e": 27385, "s": 27380, "text": "Java" }, { "code": null, "e": 27393, "s": 27385, "text": "Python3" }, { "code": null, "e": 27396, "s": 27393, "text": "C#" }, { "code": "// C++ Program to find the distinct// elements in a range#include <bits/stdc++.h>using namespace std; // Function to perform queries in a rangelong long query(int start, int end, int left, int right, int node, long long seg[]){ // No overlap if (end < left || start > right) { return 0; } // Totally Overlap else if (start >= left && end <= right) { return seg[node]; } // Partial Overlap else { int mid = (start + end) / 2; // Finding the Answer // for the left Child long long leftChild = query(start, mid, left, right, 2 * node, seg); // Finding the Answer // for the right Child long long rightChild = query(mid + 1, end, left, right, 2 * node + 1, seg); // Combining the BitMasks return (leftChild | rightChild); }} // Function to perform update operation// in the Segment segvoid update(int left, int right, int index, int Value, int node, int ar[], long long seg[]){ if (left == right) { ar[index] = Value; // Forming the BitMask seg[node] = (1LL << Value); return; } int mid = (left + right) / 2; if (index > mid) { // Updating the left Child update(mid + 1, right, index, Value, 2 * node + 1, ar, seg); } else { // Updating the right Child update(left, mid, index, Value, 2 * node, ar, seg); } // Updating the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Building the Segment Treevoid build(int left, int right, int node, int ar[], long long seg[]){ if (left == right) { // Building the Initial BitMask seg[node] = (1LL << ar[left]); return; } int mid = (left + right) / 2; // Building the left seg tree build(left, mid, 2 * node, ar, seg); // Building the right seg tree build(mid + 1, right, 2 * node + 1, ar, seg); // Forming the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Utility Function to answer the queriesvoid getDistinctCount(vector<vector<int> >& queries, int ar[], long long seg[], int n){ for (int i = 0; i < queries.size(); i++) { int op = queries[i][0]; if (op == 2) { int l = queries[i][1], r = queries[i][2]; long long tempMask = query(0, n - 1, l - 1, r - 1, 1, seg); int countOfBits = 0; // Counting the set bits which denote the // distinct elements for (int i = 63; i >= 0; i--) { if (tempMask & (1LL << i)) { countOfBits++; } } cout << countOfBits << '\\n'; } else { int index = queries[i][1]; int val = queries[i][2]; // Updating the value update(0, n - 1, index - 1, val, 1, ar, seg); } }} // Driver Codeint main(){ int n = 7; int ar[] = { 1, 2, 1, 3, 1, 2, 1 }; long long seg[4 * n] = { 0 }; build(0, n - 1, 1, ar, seg); int q = 5; vector<vector<int> > queries = { { 2, 1, 4 }, { 1, 4, 2 }, { 1, 5, 2 }, { 2, 4, 6 }, { 2, 1, 7 } }; getDistinctCount(queries, ar, seg, n); return 0;}", "e": 30762, "s": 27396, "text": null }, { "code": "// Java Program to find the distinct// elements in a rangeimport java.util.*; class GFG{ // Function to perform queries in a rangestatic long query(int start, int end, int left, int right, int node, long seg[]){ // No overlap if (end < left || start > right) { return 0; } // Totally Overlap else if (start >= left && end <= right) { return seg[node]; } // Partial Overlap else { int mid = (start + end) / 2; // Finding the Answer // for the left Child long leftChild = query(start, mid, left, right, 
2 * node, seg); // Finding the Answer // for the right Child long rightChild = query(mid + 1, end, left, right, 2 * node + 1, seg); // Combining the BitMasks return (leftChild | rightChild); }} // Function to perform update operation// in the Segment segstatic void update(int left, int right, int index, int Value, int node, int ar[], long seg[]){ if (left == right) { ar[index] = Value; // Forming the BitMask seg[node] = (1L << Value); return; } int mid = (left + right) / 2; if (index > mid) { // Updating the left Child update(mid + 1, right, index, Value, 2 * node + 1, ar, seg); } else { // Updating the right Child update(left, mid, index, Value, 2 * node, ar, seg); } // Updating the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Building the Segment Treestatic void build(int left, int right, int node, int ar[], long seg[]){ if (left == right) { // Building the Initial BitMask seg[node] = (1L << ar[left]); return; } int mid = (left + right) / 2; // Building the left seg tree build(left, mid, 2 * node, ar, seg); // Building the right seg tree build(mid + 1, right, 2 * node + 1, ar, seg); // Forming the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Utility Function to answer the queriesstatic void getDistinctCount(int [][]queries, int ar[], long seg[], int n){ for (int i = 0; i < queries.length; i++) { int op = queries[i][0]; if (op == 2) { int l = queries[i][1], r = queries[i][2]; long tempMask = query(0, n - 1, l - 1, r - 1, 1, seg); int countOfBits = 0; // Counting the set bits which denote the // distinct elements for (int s = 63; s >= 0; s--) { if ((tempMask & (1L << s))>0) { countOfBits++; } } System.out.println(countOfBits); } else { int index = queries[i][1]; int val = queries[i][2]; // Updating the value update(0, n - 1, index - 1, val, 1, ar, seg); } }} // Driver Codepublic static void main(String[] args){ int n = 7; int ar[] = { 1, 2, 1, 3, 1, 2, 1 }; long seg[] = new long[4 * n]; build(0, n - 1, 1, ar, seg); int [][]queries = { { 2, 1, 4 }, { 1, 4, 2 }, { 1, 5, 2 }, { 2, 4, 6 }, { 2, 1, 7 } }; getDistinctCount(queries, ar, seg, n); }} // This code is contributed by PrinciRaj1992", "e": 34166, "s": 30762, "text": null }, { "code": "# Python3 Program to find the distinct# elements in a range # Function to perform queries in a rangedef query(start, end, left, right, node, seg): # No overlap if (end < left or start > right): return 0 # Totally Overlap elif (start >= left and end <= right): return seg[node] # Partial Overlap else: mid = (start + end) // 2 # Finding the Answer # for the left Child leftChild = query(start, mid, left, right, 2 * node, seg) # Finding the Answer # for the right Child rightChild = query(mid + 1, end, left, right, 2 * node + 1, seg) # Combining the BitMasks return (leftChild | rightChild) # Function to perform update operation# in the Segment segdef update(left, right, index, Value, node, ar, seg): if (left == right): ar[index] = Value # Forming the BitMask seg[node] = (1 << Value) return mid = (left + right) // 2 if (index > mid): # Updating the left Child update(mid + 1, right, index, Value, 2 * node + 1, ar, seg) else: # Updating the right Child update(left, mid, index, Value, 2 * node, ar, seg) # Updating the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]) # Building the Segment Treedef build(left, right, node, ar, eg): if (left == right): # Building the Initial BitMask seg[node] = (1 << ar[left]) return mid = (left + right) // 2 # Building the left seg tree build(left, mid, 2 * node, ar, seg) # Building the right seg 
tree build(mid + 1, right, 2 * node + 1, ar, seg) # Forming the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]) # Utility Function to answer the queriesdef getDistinctCount(queries, ar, seg, n): for i in range(len(queries)): op = queries[i][0] if (op == 2): l = queries[i][1] r = queries[i][2] tempMask = query(0, n - 1, l - 1, r - 1, 1, seg) countOfBits = 0 # Counting the set bits which denote # the distinct elements for i in range(63, -1, -1): if (tempMask & (1 << i)): countOfBits += 1 print(countOfBits) else: index = queries[i][1] val = queries[i][2] # Updating the value update(0, n - 1, index - 1, val, 1, ar, seg) # Driver Codeif __name__ == '__main__': n = 7 ar = [1, 2, 1, 3, 1, 2, 1] seg = [0] * 4 * n build(0, n - 1, 1, ar, seg) q = 5 queries = [[ 2, 1, 4 ], [ 1, 4, 2 ], [ 1, 5, 2 ], [ 2, 4, 6 ], [ 2, 1, 7 ]] getDistinctCount(queries, ar, seg, n) # This code is contributed by Mohit Kumar", "e": 37150, "s": 34166, "text": null }, { "code": "// C# Program to find the distinct// elements in a rangeusing System; class GFG{ // Function to perform queries in a rangestatic long query(int start, int end, int left, int right, int node, long []seg){ // No overlap if (end < left || start > right) { return 0; } // Totally Overlap else if (start >= left && end <= right) { return seg[node]; } // Partial Overlap else { int mid = (start + end) / 2; // Finding the Answer // for the left Child long leftChild = query(start, mid, left, right, 2 * node, seg); // Finding the Answer // for the right Child long rightChild = query(mid + 1, end, left, right, 2 * node + 1, seg); // Combining the BitMasks return (leftChild | rightChild); }} // Function to perform update operation// in the Segment segstatic void update(int left, int right, int index, int Value, int node, int []ar, long []seg){ if (left == right) { ar[index] = Value; // Forming the BitMask seg[node] = (1L << Value); return; } int mid = (left + right) / 2; if (index > mid) { // Updating the left Child update(mid + 1, right, index, Value, 2 * node + 1, ar, seg); } else { // Updating the right Child update(left, mid, index, Value, 2 * node, ar, seg); } // Updating the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Building the Segment Treestatic void build(int left, int right, int node, int []ar, long []seg){ if (left == right) { // Building the Initial BitMask seg[node] = (1L << ar[left]); return; } int mid = (left + right) / 2; // Building the left seg tree build(left, mid, 2 * node, ar, seg); // Building the right seg tree build(mid + 1, right, 2 * node + 1, ar, seg); // Forming the BitMask seg[node] = (seg[2 * node] | seg[2 * node + 1]);} // Utility Function to answer the queriesstatic void getDistinctCount(int [,]queries, int []ar, long []seg, int n){ for (int i = 0; i < queries.GetLength(0); i++) { int op = queries[i,0]; if (op == 2) { int l = queries[i,1], r = queries[i,2]; long tempMask = query(0, n - 1, l - 1, r - 1, 1, seg); int countOfBits = 0; // Counting the set bits which denote the // distinct elements for (int s = 63; s >= 0; s--) { if ((tempMask & (1L << s))>0) { countOfBits++; } } Console.WriteLine(countOfBits); } else { int index = queries[i,1]; int val = queries[i,2]; // Updating the value update(0, n - 1, index - 1, val, 1, ar, seg); } }} // Driver Codepublic static void Main(String[] args){ int n = 7; int []ar = { 1, 2, 1, 3, 1, 2, 1 }; long []seg = new long[4 * n]; build(0, n - 1, 1, ar, seg); int [,]queries = { { 2, 1, 4 }, { 1, 4, 2 }, { 1, 5, 2 }, { 2, 4, 6 }, { 2, 1, 7 } }; getDistinctCount(queries, ar, 
seg, n); }} // This code is contributed by 29AjayKumar", "e": 40579, "s": 37150, "text": null }, { "code": null, "e": 40585, "s": 40579, "text": "3\n1\n2" }, { "code": null, "e": 40621, "s": 40587, "text": "Time complexity: O(N + Q*Log(N)) " }, { "code": null, "e": 40636, "s": 40621, "text": "mohit kumar 29" }, { "code": null, "e": 40650, "s": 40636, "text": "princiraj1992" }, { "code": null, "e": 40662, "s": 40650, "text": "29AjayKumar" }, { "code": null, "e": 40674, "s": 40662, "text": "kk773572498" }, { "code": null, "e": 40694, "s": 40674, "text": "array-range-queries" }, { "code": null, "e": 40707, "s": 40694, "text": "Segment-Tree" }, { "code": null, "e": 40714, "s": 40707, "text": "Arrays" }, { "code": null, "e": 40724, "s": 40714, "text": "Bit Magic" }, { "code": null, "e": 40748, "s": 40724, "text": "Competitive Programming" }, { "code": null, "e": 40767, "s": 40748, "text": "Divide and Conquer" }, { "code": null, "e": 40774, "s": 40767, "text": "Arrays" }, { "code": null, "e": 40793, "s": 40774, "text": "Divide and Conquer" }, { "code": null, "e": 40803, "s": 40793, "text": "Bit Magic" }, { "code": null, "e": 40816, "s": 40803, "text": "Segment-Tree" }, { "code": null, "e": 40914, "s": 40816, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 40982, "s": 40914, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 41026, "s": 40982, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 41074, "s": 41026, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 41097, "s": 41074, "text": "Introduction to Arrays" }, { "code": null, "e": 41129, "s": 41097, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 41156, "s": 41129, "text": "Bitwise Operators in C/C++" }, { "code": null, "e": 41202, "s": 41156, "text": "Left Shift and Right Shift Operators in C/C++" }, { "code": null, "e": 41270, "s": 41202, "text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)" }, { "code": null, "e": 41299, "s": 41270, "text": "Count set bits in an integer" } ]
How do I create a date picker in tkinter?
Tkcalendar is a Python package that provides DateEntry and Calendar widgets for tkinter applications. In this article, we will create a date picker with the help of the DateEntry widget.

A DateEntry widget contains three fields that follow the general date format MM/DD/YY. By creating a DateEntry object, we can choose a specific date in the application.

#Import tkinter library
from tkinter import *
from tkcalendar import Calendar, DateEntry

#Create an instance of tkinter frame
win = Tk()

#Set the Geometry
win.geometry("750x250")
win.title("Date Picker")

#Create a Label
Label(win, text="Choose a Date", background='gray61',
      foreground="white").pack(padx=20, pady=20)

#Create a Calendar using DateEntry
cal = DateEntry(win, width=16, background="magenta3",
                foreground="white", bd=2)
cal.pack(pady=20)

win.mainloop()

Execute the above code snippet to display a date picker in the window. Now pick any date from the DateEntry widget; the widget is set to that date and reflects it in the output.
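To make use of the picked date in a program, the widget's value can be read back out. The sketch below assumes tkcalendar is installed (pip install tkcalendar) and relies on DateEntry's get_date() method, which returns a datetime.date:

#Read the selected date back out of the DateEntry widget
from tkinter import Tk, Button
from tkcalendar import DateEntry

win = Tk()
win.title("Date Picker")
cal = DateEntry(win, width=16)
cal.pack(pady=20)
#get_date() returns the currently selected date as a datetime.date
Button(win, text="Print selected date",
       command=lambda: print(cal.get_date())).pack(pady=10)
win.mainloop()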
[ { "code": null, "e": 1246, "s": 1062, "text": "Tkcalendar is a Python package which provides DateEntry and Calendar widgets for tkinter applications. In this article, we will create a date picker with the help of DateEntry Widget." }, { "code": null, "e": 1434, "s": 1246, "text": "A DateEntry widget contains three fields that refer to the general format of Date as MM/DD/YY. By creating an object of DateEntry widget, we can choose a specific Date in the application." }, { "code": null, "e": 1901, "s": 1434, "text": "#Import tkinter library\nfrom tkinter import *\nfrom tkcalendar import Calendar, DateEntry\n#Create an instance of tkinter frame\nwin= Tk()\n#Set the Geometry\nwin.geometry(\"750x250\")\nwin.title(\"Date Picker\")\n#Create a Label\nLabel(win, text= \"Choose a Date\", background= 'gray61', foreground=\"white\").pack(padx=20,pady=20)\n#Create a Calendar using DateEntry\ncal = DateEntry(win, width= 16, background= \"magenta3\", foreground= \"white\",bd=2)\ncal.pack(pady=20)\nwin.mainloop()" }, { "code": null, "e": 1972, "s": 1901, "text": "Execute the above code snippet to display a date picker in the window." }, { "code": null, "e": 2047, "s": 1972, "text": "Now pick any date from the DateEntry Widget to set and reflect the output." } ]
Find path to the given file using Python - GeeksforGeeks
13 Jan, 2021

We can get the location (path) of the running script file (.py) with __file__. __file__ is useful for reading other files relative to the running script, and it gives the current location of the running file. Its behaviour differs across versions. In Python 3.8 and earlier, __file__ returns the path specified when executing the python (or python3) command: we get a relative path if a relative path was specified, and an absolute path if an absolute path was specified. In Python 3.9 and later, __file__ always returns an absolute path. The os module provides various related utilities:

os.getcwd(): returns the absolute path of the current working directory.

So, depending on the version used, either a relative or an absolute path is retrieved from __file__.

Example 1:

Python3

import os

print('Get current working directory : ', os.getcwd())
print('Get current file name : ', __file__)

Output:

Example 2: We can get the file name and the directory name of the running file as shown below.

Python3

import os

print('File name : ', os.path.basename(__file__))
print('Directory Name: ', os.path.dirname(__file__))

Output:

Way to find file name and directory name

Example 3: To get the absolute path of the running file:

Python3

import os

print('Absolute path of file: ', os.path.abspath(__file__))
print('Absolute directory name: ', os.path.dirname(os.path.abspath(__file__)))

Output:

Absolute way to find file and directory name

Example 4: If we pass an absolute path to os.path.abspath(), it is returned as-is. So if __file__ is already an absolute path, no error occurs even if we write os.path.abspath(__file__).

Python3

import os

pythonfile = 'pathfinding.py'

# if the file is present in the current directory,
# there is no need to specify its whole location
print("Path of the file..", os.path.abspath(pythonfile))

for root, dirs, files in os.walk(r'E:\geeksforgeeks\path_of_given_file'):
    for name in files:
        # compare each file name against the provided python file
        if name == pythonfile:
            print(os.path.abspath(os.path.join(root, name)))

Output:
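As a side note, the same information can be obtained with the pathlib module (part of the standard library since Python 3.4), which wraps these os.path calls in an object-oriented API. A minimal sketch:

Python3

from pathlib import Path

p = Path(__file__).resolve()  # resolve() always yields an absolute path

print('File name : ', p.name)        # like os.path.basename(__file__)
print('Directory Name: ', p.parent)  # like os.path.dirname(__file__)
print('Current working directory: ', Path.cwd())  # like os.getcwd()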
[ { "code": null, "e": 25691, "s": 25663, "text": "\n13 Jan, 2021" }, { "code": null, "e": 26239, "s": 25691, "text": "We can get the location (path) of the running script file .py with __file__. __file__ is useful for reading other files and it gives the current location of the running file. It differs in versions. In Python 3.8 and earlier, __file__ returns the path specified when executing the python (or python3) command. We can get a relative path if a relative path is specified. If we specify an absolute path, an absolute path is returned. But in Python 3.9 and later, __file__ always returns an absolute path, the “os” module provides various utilities. " }, { "code": null, "e": 26405, "s": 26239, "text": "os.getcwd(): We can get the absolute path of the current working directory. So depending upon the version used, either a relative path or absolute path is retrieved." }, { "code": null, "e": 26416, "s": 26405, "text": "Example 1:" }, { "code": null, "e": 26424, "s": 26416, "text": "Python3" }, { "code": "import osprint('Get current working directory : ', os.getcwd())print('Get current file name : ', __file__)", "e": 26539, "s": 26424, "text": null }, { "code": null, "e": 26547, "s": 26539, "text": "Output:" }, { "code": null, "e": 26644, "s": 26547, "text": "Example 2: We can get the file name and the directory name of the running file in the below way." }, { "code": null, "e": 26652, "s": 26644, "text": "Python3" }, { "code": "import os print('File name : ', os.path.basename(__file__))print('Directory Name: ', os.path.dirname(__file__))", "e": 26772, "s": 26652, "text": null }, { "code": null, "e": 26780, "s": 26772, "text": "Output:" }, { "code": null, "e": 26821, "s": 26780, "text": "Way to find File name and directory name" }, { "code": null, "e": 26878, "s": 26821, "text": "Example 3: To get the absolute path of the running file." 
}, { "code": null, "e": 26886, "s": 26878, "text": "Python3" }, { "code": "import os print('Absolute path of file: ', os.path.abspath(__file__))print('Absolute directoryname: ', os.path.dirname(os.path.abspath(__file__)))", "e": 27050, "s": 26886, "text": null }, { "code": null, "e": 27058, "s": 27050, "text": "Output:" }, { "code": null, "e": 27103, "s": 27058, "text": "Absolute way to find file and directory name" }, { "code": null, "e": 27294, "s": 27103, "text": "Example 4: If we specify an absolute path in os.path.abspath(), it will be returned as it is, so if __file__ is an absolute path, no error will occur even if we set os.path.abspath(__file__)" }, { "code": null, "e": 27302, "s": 27294, "text": "Python3" }, { "code": "import ospythonfile = 'pathfinding.py' # if the file is present in current directory,# then no need to specify the whole locationprint(\"Path of the file..\", os.path.abspath(pythonfile)) for root, dirs, files in os.walk(r'E:\\geeksforgeeks\\path_of_given_file'): for name in files: # As we need to get the provided python file, # comparing here like this if name == pythonfile: print(os.path.abspath(os.path.join(root, name)))", "e": 27776, "s": 27302, "text": null }, { "code": null, "e": 27784, "s": 27776, "text": "Output:" }, { "code": null, "e": 27791, "s": 27784, "text": "Picked" }, { "code": null, "e": 27808, "s": 27791, "text": "python-os-module" }, { "code": null, "e": 27832, "s": 27808, "text": "Technical Scripter 2020" }, { "code": null, "e": 27839, "s": 27832, "text": "Python" }, { "code": null, "e": 27858, "s": 27839, "text": "Technical Scripter" }, { "code": null, "e": 27956, "s": 27858, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27988, "s": 27956, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 28030, "s": 27988, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28072, "s": 28030, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28128, "s": 28072, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28155, "s": 28128, "text": "Python Classes and Objects" }, { "code": null, "e": 28186, "s": 28155, "text": "Python | os.path.join() method" }, { "code": null, "e": 28215, "s": 28186, "text": "Create a directory in Python" }, { "code": null, "e": 28237, "s": 28215, "text": "Defaultdict in Python" }, { "code": null, "e": 28276, "s": 28237, "text": "Python | Get unique values from a list" } ]
Python for Data Science - A Guide to Pandas | by Nicholas Leong | Towards Data Science
Hello my fellow data practitioners. It's been a busy week for me but I'm back.

First of all, I want to personally say thanks to those who had decided to follow me since day 1. I recently hit the 100 follower mark. It is not much, but it truly means the world to me when my content proves to provide value to some random guy/girl out there. I started writing to help people in solving some of their problems because when I faced them, I didn't have much resource to refer to. Thank you for following, I will continue to work hard to provide value to you, just like any data scientist would do.

But enough, you're not here for that, you're here for that data science nitty-gritty. You want to be the guy who can perform amazing techniques on data to get what you need. You want to be the guy who provides incredible insights that influence product managers to make critical decisions.

Well, you've come to the right place. Step right in.

We will be talking about the most important Python for Data Science library today.

Which is ...

*Drum Roll

Pandas. Yes. I believe any experienced Data Scientist would agree with me. Before all of your fancy machine learning, you will have to explore, clean and process your data thoroughly. And for that, you need Pandas.

This guide assumes you have some really basic Python knowledge and Python installed. If you don't, you can install it here. We are also running pandas on Jupyter Notebook. Make sure you have it. I'm going to run you through the basics of Pandas real quick. Try to keep up.

Pandas officially stands for 'Python Data Analysis Library', THE most important Python tool used by Data Scientists today.

Pandas is an open source Python library that allows users to explore, manipulate and visualise data in an extremely efficient manner. It is literally Microsoft Excel in Python.

It is easy to read and learn
It is extremely fast and powerful
It integrates well with other visualisation libraries
It is black, white and Asian at the same time

Pandas can take in a huge variety of data, the most common ones being csv, excel, sql or even a webpage.

If you have Anaconda, then

conda install pandas

should do the trick. If you don't, use

pip install pandas

Now that you know what you're getting yourself into, let's jump right into it.

import pandas as pd
import numpy as np

Series are like columns while Dataframes are your full blown tables in Pandas. You will be dealing with these 2 components a lot. So get used to them.

Creating your own Series from a:

list

test_list = [100, 200, 300]
pd.Series(data=test_list)

dictionary

dictionary = {'a': 100, 'b': 200, 'c': 300}
pd.Series(data=dictionary)

Creating your own Dataframes from a:

list

data = [['thomas', 100], ['nicholas', 200], ['danson', 300]]
df = pd.DataFrame(data, columns=['Name', 'Age'])

dictionary

data = {'Name': ['thomas', 'nicholas', 'danson', 'jack'], 'Age': [100, 200, 300, 400]}
df = pd.DataFrame(data)

Usually, we don't create our own dataframes. Instead, we read, explore, manipulate and visualise data in Pandas by importing data into a dataframe. Pandas can read from multiple formats, but the usual one is csv. Here's the official list of file types pandas can read from.

We will import the Titanic Dataset here.

df = pd.read_csv(filepath)

The main objective of this dataset is to study what factors affect the survivability of a person onboard the Titanic. Let's explore the data to find out exactly that.

After importing your data, you want to get a feel for it.
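A quick first peek never hurts (both calls are standard pandas API; the exact output depends on your copy of the dataset): df.head() prints the first five rows and df.shape gives the (rows, columns) count.

df.head()    # first 5 rows of the dataset
df.shape     # (891, 12) for the standard Kaggle Titanic training set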
Here are some basic operations:

df.info()

By this command alone we can already tell the number of rows and columns, the data types of the columns and whether null values exist in them.

df.nunique()

The df.nunique() command allows us to better understand what each column means by counting the unique values in each one. Looking at the Survived and Sex columns, they only have 2 unique values. It usually means that they are categorical columns; in this case, it should be True or False for Survived and Male or Female for Sex.

We can also observe other categorical columns like Embarked, Pclass and more. We can't really tell what Pclass stands for, so let's explore more.

It is extremely easy to select the data you want in Pandas. Let's say we want to only look at the Pclass column now.

df['Pclass']

We observe that the 3 unique values are literally 1, 2 and 3, which stand for 1st class, 2nd class and 3rd class. We understand that now.

We can also select multiple columns at once.

df[['Pclass', 'Sex']]

This command is extremely useful for including/excluding columns you need/don't need.

Since we don't need the name, passenger_id and ticket, because they play no role at all in deciding if the passenger survives or not, we are excluding these columns from the dataset, and it can be done in 2 ways.

df = df[['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Embarked']]

or

df.drop(['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True)

The inplace=True parameter tells Pandas to apply what you intend to do to the original variable itself, in this case df.

We can also select data by rows. Let's say that we want to investigate the 500th to 510th rows of the dataset.

df.iloc[500:511]

It works just like a Python list, where the first number (500) of the interval is inclusive and the last number (511) of the interval is exclusive.

We can also filter through the data. Let's say we want to only observe the data for male passengers.

df[df['Sex'] == 'male']

The command df['Sex'] == 'male' will return a Boolean for each row. Nesting a df[] over it will return the whole dataset for male passengers. That's how Pandas works.

This command is extremely powerful for visualising data in the future. Let's combine what we have learnt so far.

df[['Pclass', 'Sex']][df['Sex'] == 'male'].iloc[500:511]

With this command, we are displaying only the 500th to 510th rows of the Pclass and Sex columns for male passengers. Play around with the commands to get comfortable using them.

Now that we know how to navigate through data, it is time to do some aggregation upon it. To start off, we can use the describe function to find out the distribution, max, minimum and other useful statistics of our dataset.

df.describe()

Note that only numerical columns will be included in these mathematical commands. Other columns will be automatically excluded.

We can also run the following commands to return the respective aggregations; note that the aggregations can be run on conditional selections as well.

df.max()
df['Pclass'].min()
df.iloc[100:151].mean()
df.median()
df.count()
df.std()

We can also build a correlation matrix of the columns to find out their relationship with each other.

df.corr()

It looks like the top 3 numerical columns contributing to the survivability of the passengers are Fare, Pclass and Parch, because they hold the highest absolute values against the Survived column.
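To rank those relationships at a glance, a small extra sketch (using only standard pandas calls) sorts the absolute correlations against the target column:

df.corr()['Survived'].abs().sort_values(ascending=False)

The columns at the top of this output are the strongest predictors of survival, whether positively or negatively correlated.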
If you’ve kept up, you’ve got a pretty good feel of our data right now.You want to do some sort of aggregation/visualisation to represent your findings. Before that, you often have to clean up your dataset before you can do anything. Your dataset can often include dirty data like: null values empty values incorrect timestamp many many more We could have already tell if there’s any null values when we used the command df.info(). Here’s another one: df.isnull().sum() We can see that there are null values in the Age, Cabin and Embarked columns. You can see what the rows look like when the values are null in those specific columns. df[df[‘Age’].isnull()] For some reason, all the other data except for Age and Cabin are present here.It totally depends on you to deal with null values. For simplicity’s sake, we will remove all rows with null values in them. df.dropna(inplace=True)or#df.dropna(axis=1, inplace=True) to drop columns with null values We could’ve also replaced all our null values with a value if we wanted to. df[‘Age’].fillna(df[‘Age’].mean()) This command replaces all the null values in the Age column with the mean value of the Age column. In some cases, this is a good way to handle null values as it doesn’t mess with the skewness of the values. So now your data is clean. We can start exploring the data through groups. Let’s say we want to find out the average/max/min (insert numerical column here) for a passenger in each Pclass. df.groupby(‘Pclass’).mean()ordf.groupby('Pclass')['Age'].mean() This tells us many things. For starters, it tells us the average ages and fares for each Pclass. It looks like the 1st class has the most expensive fares, and they are the oldest among the classes. Makes sense. We can also layer the groups. Let’s say now we also want to know for each Pclass, the average ages and fares for males and females. df.groupby([‘Pclass’,’Sex’]).mean() We do that by simply passing the list of groups through the GroupBy Function. We can start to analyse and make certain hypothesis based on these complex GroupBy. One observation I can make is that the Average Fare for Females in 1st Class is the highest among all the passengers, and they have the highest survivability as well. On the flip side, the Average Fare for Males in 3rd Class is the lowest and has the lowest survivability. I guess money do save lives. After some analysis, we figured that we need to add some row/columns to our data. We can easily do that with concatenation and merging dataframes. We add rows by: first_5 = df.head()last_5 = df[178:]combined = pd.concat([first_5,last_5], axis = 0) Note that when you concatenate, the columns have to match between dataframes. It works just like a SQL Union. Now what if we want to add columns by joining. We merge. Say we had an external dataset that contained the weight and height of the passengers onboard. For this, I’m going to create my own dataset for 3 passengers. data = [[‘Braund, Mr. Owen Harris’, 80, 177.0], [‘Heikkinen, Miss. Laina’, 78, 180.0], [‘Montvila, Rev. Juozas’, 87, 165.0]] df2 = pd.DataFrame(data, columns = [‘Name’, ‘weight’, ‘height’]) Now we want to add the weight and height data into our titanic dataset. df3 = pd.merge(df,df2, how=’right’, on=’Name’) The merge works exactly like SQL joins, with methods of left, right, outer and inner. I used the right join here to display only the passengers who had weight and height recorded. If we had used a left join, the result will show the full titanic dataset with nulls where height and weight are unavailable. 
We now move into more advanced territory where we want to heavily manipulate the values in our data. We may want to add our own custom columns, change the type of values or even implement complex functions on our data.

This is a fairly straightforward manipulation. Sometimes we may want to change the type of data to open up other functions. A very good example is changing '4,000', which is a str, into the int 4000.

Let's change our weight data, which is in float right now, to integer.

df2['weight'] = df2['weight'].astype(int)

Note that the astype() function goes through every row of the data and tries to change the type of the data into whatever you mentioned. Hence, there are certain rules to follow for each type of data. For example:

data can't be null or empty
when changing float to integer, the decimals have to be in .0 form

Hence, the astype() function can be tricky at times.

The apply function is one of the most powerful functions in pandas. It lets you manipulate the data in whatever form you want to. Here's how.

Step 1: Define a function
Assume we want to rename the Pclass values to their actual names.
1 = 1st Class
2 = 2nd Class
3 = 3rd Class

We first define a function.

def pclass_name(x):
    if x == 1:
        x = '1st Class'
    if x == 2:
        x = '2nd Class'
    if x == 3:
        x = '3rd Class'
    return x

Step 2: Apply the function
We can then apply the function to each of the values of the Pclass column. The process will pass every record of the Pclass column through the function.

df3['Pclass'] = df3['Pclass'].apply(lambda x: pclass_name(x))

The records of the Pclass column have now changed. It really is up to your imagination how you want to manipulate your data. Here's where you can be creative.

Sometimes, we want to create custom columns to allow us to better interpret our data. For example, let's create a column with the BMI of the passengers.

The formula for BMI is:

weight(kg) / height(m)^2

Hence, let's first convert our height values, which are in cm, to m and square the values. We shall use the apply function we had just learnt to achieve that.

def convert_to_bmi(x):
    x = (x/100)**2
    return x

df3['height'] = df3['height'].apply(lambda x: convert_to_bmi(x))

We then create a new column where the values are weight divided by height.

df3['BMI'] = df3['weight']/df3['height']

If we had all the BMIs for the passengers, we could then make further conclusions with the BMI, but I'm not going into that. You get the point.

After everything's done, you may want to sort and rename the columns.

df3.rename(columns={'BMI': 'Body_Mass_Index', 'PassengerId': 'PassengerNo'}, inplace=True)
df3 = df3.sort_values('Body_Mass_Index')

We have renamed BMI to its full name and sorted the rows by BMI here. Note that after sorting, the index does not change, so that information is not lost.

Last but not least, we can export our data into multiple formats (refer to the list on top). Let's export our data as a csv file.

df3.to_csv(filepath)
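One detail worth knowing (index is a documented parameter of to_csv): by default, pandas also writes the DataFrame index as an extra first column. Pass index=False if you don't want it in the file:

df3.to_csv(filepath, index=False)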
Give yourself a pat on the back if you've made it this far. You have officially mastered the basics of Pandas, a necessary tool for every data scientist.

What you had learnt today:

Creating and Importing Dataframes
Summary Functions of Dataframes
Navigating through Dataframes
Aggregation of Values
Cleaning Up Data
GroupBy
Concatenation and Merging
Data Manipulation (*** apply function ***)
Finalising and Exporting

Keep it seared in your brain, as I swear you will use it often in the future. As always, I end with a quote.

You can have data without information, but you can't have information without data — Daniel Keys Moran

You can also support me by signing up for a medium membership through my link. You will be able to read an unlimited amount of stories from me and other incredible writers!

I am working on more stories, writings, and guides in the data industry. You can absolutely expect more posts like this. In the meantime, feel free to check out my other articles to temporarily fill your hunger for data.

Thanks for reading! If you want to get in touch with me, feel free to reach me at [email protected] or my LinkedIn Profile. You can also view the code for previous write-ups in my Github.
[ { "code": null, "e": 251, "s": 172, "text": "Hello my fellow data practitioners. It’s been a busy week for me but I’m back." }, { "code": null, "e": 765, "s": 251, "text": "First of all, I want to personally say thanks to those who had decided to follow me since day 1. I recently hit the 100 follower mark. It is not much, but it truly means the world to me when my content proves to provide value to some random guy/girl out there. I started writing to help people in solving some of their problems because when I faced them, I didn’t have much resource to refer to. Thank you for following, I will continue to work hard to provide value to you, just like any data scientist would do." }, { "code": null, "e": 1055, "s": 765, "text": "But enough, you’re not here for that, you’re here for that data science nitty gritty. You want to be the guy who can perform amazing techniques on data to get what you need. You want to be the guy who provides incredible insights that influencesproduct managers to make critical decisions." }, { "code": null, "e": 1108, "s": 1055, "text": "Well, you’ve come to the right place. Step right in." }, { "code": null, "e": 1191, "s": 1108, "text": "We will be talking about the most important Python for Data Science library today." }, { "code": null, "e": 1204, "s": 1191, "text": "Which is ..." }, { "code": null, "e": 1215, "s": 1204, "text": "*Drum Roll" }, { "code": null, "e": 1429, "s": 1215, "text": "Pandas. Yes.I believe any experienced Data Scientist would agree with me. Before all of your fancy machine learning, you will have to explore, clean and process your data thoroughly. And for that, you need Pandas." }, { "code": null, "e": 1698, "s": 1429, "text": "This guide assumes you have some really basic python knowledge and python installed. If you don’t, you install it here. We are also running pandas on Jupyter Notebook. Make sure you have it. I’m going to run you through the basics of Pandas real quick. Try to keep up." }, { "code": null, "e": 1821, "s": 1698, "text": "Pandas officially stands for ‘Python Data Analysis Library’, THE most important Python tool used by Data Scientists today." }, { "code": null, "e": 1998, "s": 1821, "text": "Pandas is an open source Python library that allows users to explore, manipulate and visualise data in an extremely efficient manner. It is literally Microsoft Excel in Python." }, { "code": null, "e": 2027, "s": 1998, "text": "It is easy to read and learn" }, { "code": null, "e": 2061, "s": 2027, "text": "It is extremely fast and powerful" }, { "code": null, "e": 2115, "s": 2061, "text": "It integrates well with other visualisation libraries" }, { "code": null, "e": 2161, "s": 2115, "text": "It is black, white and asian at the same time" }, { "code": null, "e": 2264, "s": 2161, "text": "Pandas can take in a huge variety of data, the most common ones are csv, excel, sql or even a webpage." }, { "code": null, "e": 2291, "s": 2264, "text": "If you have Anaconda, then" }, { "code": null, "e": 2312, "s": 2291, "text": "conda install pandas" }, { "code": null, "e": 2351, "s": 2312, "text": "should do the trick. If you don’t, use" }, { "code": null, "e": 2370, "s": 2351, "text": "pip install pandas" }, { "code": null, "e": 2448, "s": 2370, "text": "Now that you know what you’re getting yourself into.Let’s jump right into it." 
}, { "code": null, "e": 2486, "s": 2448, "text": "import pandas as pdimport numpy as np" }, { "code": null, "e": 2637, "s": 2486, "text": "Series are like columns while Dataframes are your full blown tables in Pandas. You will be dealing with these 2 components a lot. So get used to them." }, { "code": null, "e": 2670, "s": 2637, "text": "Creating your own Series from a:" }, { "code": null, "e": 2675, "s": 2670, "text": "list" }, { "code": null, "e": 2726, "s": 2675, "text": "test_list = [100,200,300]pd.Series(data=test_list)" }, { "code": null, "e": 2737, "s": 2726, "text": "dictionary" }, { "code": null, "e": 2802, "s": 2737, "text": "dictionary = {‘a’:100,’b’:200,’c’:300}pd.Series(data=dictionary)" }, { "code": null, "e": 2839, "s": 2802, "text": "Creating your own Dataframes from a:" }, { "code": null, "e": 2844, "s": 2839, "text": "list" }, { "code": null, "e": 2956, "s": 2844, "text": "data = [[‘thomas’, 100], [‘nicholas’, 200], [‘danson’, 300]] df = pd.DataFrame(data, columns = [‘Name’, ‘Age’])" }, { "code": null, "e": 2967, "s": 2956, "text": "dictionary" }, { "code": null, "e": 3076, "s": 2967, "text": "data = {‘Name’:[‘thomas’, ‘nicholas’, ‘danson’, ‘jack’], ‘Age’:[100, 200, 300, 400]} df = pd.DataFrame(data)" }, { "code": null, "e": 3346, "s": 3076, "text": "Usually, we don’t create our own dataframes. Instead, we read explore, manipulate and visualise data in Pandas by importing data to a dataframe.Pandas can read from multiple formats, but the usual one is csv. Here’s the official list of file types pandas can read from." }, { "code": null, "e": 3387, "s": 3346, "text": "We will import the Titanic Dataset here." }, { "code": null, "e": 3414, "s": 3387, "text": "df = pd.read_csv(filepath)" }, { "code": null, "e": 3594, "s": 3414, "text": "The main objective of this dataset is to study what are the factors that affect the survivability of a person onboard the titanic. Let’s explore the data to find out exactly that." }, { "code": null, "e": 3684, "s": 3594, "text": "After importing your data, you want to get a feel for it. Here are some basic operations:" }, { "code": null, "e": 3694, "s": 3684, "text": "df.info()" }, { "code": null, "e": 3828, "s": 3694, "text": "By this command alone we can already tell the number of rows and columns, data types of the columns and if null values exist in them." }, { "code": null, "e": 3840, "s": 3828, "text": "df.unique()" }, { "code": null, "e": 4130, "s": 3840, "text": "The df.unique() command allows us to better understand what does each column mean. Looking at the Survived and Sex columns, they only have 2 unique values. It usually means that they are categorical columns, in this case, it should be True or False for Survived and Male or Female for Sex." }, { "code": null, "e": 4277, "s": 4130, "text": "We can also observe other categorical columns like Embarked, Pclass and more. We can’t really tell what does Pclass stand for, let’s explore more." }, { "code": null, "e": 4394, "s": 4277, "text": "It is extremely easy to select the data you want in Pandas. Let’s say we want to only look at the Pclass column now." }, { "code": null, "e": 4407, "s": 4394, "text": "df[‘Pclass’]" }, { "code": null, "e": 4544, "s": 4407, "text": "We observe that the 3 unique values are literally 1,2 and 3 which stands for 1st class, 2nd class and 3rd class. We understand that now." }, { "code": null, "e": 4589, "s": 4544, "text": "We can also select multiple columns at once." 
}, { "code": null, "e": 4610, "s": 4589, "text": "df[[‘Pclass’,’Sex’]]" }, { "code": null, "e": 4696, "s": 4610, "text": "This command is extremely useful for including/excluding columns you need/don’t need." }, { "code": null, "e": 4905, "s": 4696, "text": "Since we don’t need the name, passenger_id and ticket because they play no role at all in deciding if the passenger survives or not, we are excluding these columns in the dataset and it can be done in 2 ways." }, { "code": null, "e": 5057, "s": 4905, "text": "df= df[[‘Survived’,’Pclass’,’Sex’,’Age’,’SibSp’,’Parch’,’Fare’,’Cabin’,’Embarked’]]ordf.drop(['PassengerId', 'Name', 'Ticket'], axis = 1, inplace=True)" }, { "code": null, "e": 5190, "s": 5057, "text": "The inplace=True parameter tells Pandas to auto assign what you intend to do to the original variable itself, in this case it is df." }, { "code": null, "e": 5301, "s": 5190, "text": "We can also select data by rows. Let’s say that we want to investigate the 300th to 310th rows of the dataset." }, { "code": null, "e": 5318, "s": 5301, "text": "df.iloc[500:511]" }, { "code": null, "e": 5464, "s": 5318, "text": "It works just like a python list, where the first number(500) of the interval is inclusive and the last number(511) of the interval is exclusive." }, { "code": null, "e": 5565, "s": 5464, "text": "We can also filter through the data. Let’s say we want to only observe the data for male passengers." }, { "code": null, "e": 5589, "s": 5565, "text": "df[df['Sex'] == 'male']" }, { "code": null, "e": 5755, "s": 5589, "text": "The command df[‘Sex’] == ‘male’ will return a Boolean for each row. Nesting a df[] over it will return the whole dataset for male passengers. That’s how Pandas work." }, { "code": null, "e": 5867, "s": 5755, "text": "This command is extremely powerful for visualising data in the future. Let’s combine what we had learnt so far." }, { "code": null, "e": 5923, "s": 5867, "text": "df[[‘Pclass’,’Sex’]][df[‘Sex’] == ‘male’].iloc[500:511]" }, { "code": null, "e": 6095, "s": 5923, "text": "With this command, we are displaying only 500th to 510th row of the Pclass and Sex Column for Male Passengers. Play around with the commands to get comfortable using them." }, { "code": null, "e": 6313, "s": 6095, "text": "Now that we know how to navigate through data, it is time to do some aggregation upon it. To start off, we can use the describe function to find out the distribution, max, minimum and useful statistics of our dataset." }, { "code": null, "e": 6327, "s": 6313, "text": "df.describe()" }, { "code": null, "e": 6455, "s": 6327, "text": "Note that only numerical columns will be included in these mathematical commands. Other columns will be automatically excluded." }, { "code": null, "e": 6605, "s": 6455, "text": "We can also run the following commands to return the respective aggregations, note that the aggregations can be run by conditional selection as well." }, { "code": null, "e": 6684, "s": 6605, "text": "df.max()df[‘Pclass’].min()df.iloc[100:151].mean()df.median()df.count()df.std()" }, { "code": null, "e": 6788, "s": 6684, "text": "We can also do a correlation matrix against the columns to find out their relationship with each other." }, { "code": null, "e": 6798, "s": 6788, "text": "df.corr()" }, { "code": null, "e": 6999, "s": 6798, "text": "It looks like top 3 numerical columns that are contributing to the survivability of the passengers are Fare, Pclass and Parch because they hold the highest absolute values against the survived column." 
}, { "code": null, "e": 7134, "s": 6999, "text": "We can also check the distribution of each column. In a table format, it is easier to interpret distributions for categorical columns." }, { "code": null, "e": 7159, "s": 7134, "text": "df[‘Sex’].value_counts()" }, { "code": null, "e": 7204, "s": 7159, "text": "Looks like our passengers are male dominant." }, { "code": null, "e": 7438, "s": 7204, "text": "If you’ve kept up, you’ve got a pretty good feel of our data right now.You want to do some sort of aggregation/visualisation to represent your findings. Before that, you often have to clean up your dataset before you can do anything." }, { "code": null, "e": 7486, "s": 7438, "text": "Your dataset can often include dirty data like:" }, { "code": null, "e": 7498, "s": 7486, "text": "null values" }, { "code": null, "e": 7511, "s": 7498, "text": "empty values" }, { "code": null, "e": 7531, "s": 7511, "text": "incorrect timestamp" }, { "code": null, "e": 7546, "s": 7531, "text": "many many more" }, { "code": null, "e": 7656, "s": 7546, "text": "We could have already tell if there’s any null values when we used the command df.info(). Here’s another one:" }, { "code": null, "e": 7674, "s": 7656, "text": "df.isnull().sum()" }, { "code": null, "e": 7840, "s": 7674, "text": "We can see that there are null values in the Age, Cabin and Embarked columns. You can see what the rows look like when the values are null in those specific columns." }, { "code": null, "e": 7863, "s": 7840, "text": "df[df[‘Age’].isnull()]" }, { "code": null, "e": 8066, "s": 7863, "text": "For some reason, all the other data except for Age and Cabin are present here.It totally depends on you to deal with null values. For simplicity’s sake, we will remove all rows with null values in them." }, { "code": null, "e": 8157, "s": 8066, "text": "df.dropna(inplace=True)or#df.dropna(axis=1, inplace=True) to drop columns with null values" }, { "code": null, "e": 8233, "s": 8157, "text": "We could’ve also replaced all our null values with a value if we wanted to." }, { "code": null, "e": 8268, "s": 8233, "text": "df[‘Age’].fillna(df[‘Age’].mean())" }, { "code": null, "e": 8475, "s": 8268, "text": "This command replaces all the null values in the Age column with the mean value of the Age column. In some cases, this is a good way to handle null values as it doesn’t mess with the skewness of the values." }, { "code": null, "e": 8663, "s": 8475, "text": "So now your data is clean. We can start exploring the data through groups. Let’s say we want to find out the average/max/min (insert numerical column here) for a passenger in each Pclass." }, { "code": null, "e": 8727, "s": 8663, "text": "df.groupby(‘Pclass’).mean()ordf.groupby('Pclass')['Age'].mean()" }, { "code": null, "e": 8938, "s": 8727, "text": "This tells us many things. For starters, it tells us the average ages and fares for each Pclass. It looks like the 1st class has the most expensive fares, and they are the oldest among the classes. Makes sense." }, { "code": null, "e": 9070, "s": 8938, "text": "We can also layer the groups. Let’s say now we also want to know for each Pclass, the average ages and fares for males and females." }, { "code": null, "e": 9106, "s": 9070, "text": "df.groupby([‘Pclass’,’Sex’]).mean()" }, { "code": null, "e": 9541, "s": 9106, "text": "We do that by simply passing the list of groups through the GroupBy Function. We can start to analyse and make certain hypothesis based on these complex GroupBy. 
One observation I can make is that the Average Fare for Females in 1st Class is the highest among all the passengers, and they have the highest survivability as well. On the flip side, the Average Fare for Males in 3rd Class is the lowest and has the lowest survivability." }, { "code": null, "e": 9570, "s": 9541, "text": "I guess money do save lives." }, { "code": null, "e": 9733, "s": 9570, "text": "After some analysis, we figured that we need to add some row/columns to our data. We can easily do that with concatenation and merging dataframes. We add rows by:" }, { "code": null, "e": 9818, "s": 9733, "text": "first_5 = df.head()last_5 = df[178:]combined = pd.concat([first_5,last_5], axis = 0)" }, { "code": null, "e": 9928, "s": 9818, "text": "Note that when you concatenate, the columns have to match between dataframes. It works just like a SQL Union." }, { "code": null, "e": 9985, "s": 9928, "text": "Now what if we want to add columns by joining. We merge." }, { "code": null, "e": 10143, "s": 9985, "text": "Say we had an external dataset that contained the weight and height of the passengers onboard. For this, I’m going to create my own dataset for 3 passengers." }, { "code": null, "e": 10333, "s": 10143, "text": "data = [[‘Braund, Mr. Owen Harris’, 80, 177.0], [‘Heikkinen, Miss. Laina’, 78, 180.0], [‘Montvila, Rev. Juozas’, 87, 165.0]] df2 = pd.DataFrame(data, columns = [‘Name’, ‘weight’, ‘height’])" }, { "code": null, "e": 10405, "s": 10333, "text": "Now we want to add the weight and height data into our titanic dataset." }, { "code": null, "e": 10452, "s": 10405, "text": "df3 = pd.merge(df,df2, how=’right’, on=’Name’)" }, { "code": null, "e": 10758, "s": 10452, "text": "The merge works exactly like SQL joins, with methods of left, right, outer and inner. I used the right join here to display only the passengers who had weight and height recorded. If we had used a left join, the result will show the full titanic dataset with nulls where height and weight are unavailable." }, { "code": null, "e": 10804, "s": 10758, "text": "df3 = pd.merge(df,df2, how=’left’, on=’Name’)" }, { "code": null, "e": 11025, "s": 10804, "text": "We now move into more advanced territory where we want to heavily manipulate the values in our data. We may want to add our own custom columns, change the type of values or even implement complex functions into our data." }, { "code": null, "e": 11239, "s": 11025, "text": "This is a fairly straight forward manipulation. Sometimes we may want to change the type of data to open possibilities to other functions. A very good example is changing ‘4,000’ which is a str into an int ‘4000’." }, { "code": null, "e": 11308, "s": 11239, "text": "Let’s change our weight data which is in float right now to integer." }, { "code": null, "e": 11334, "s": 11308, "text": "df2[‘weight’].astype(int)" }, { "code": null, "e": 11531, "s": 11334, "text": "Note that the astype() function goes through every row of the data and tries to change the type of data into whatever you mentioned. Hence, there are certain rules to follow for each type of data." }, { "code": null, "e": 11544, "s": 11531, "text": "For example:" }, { "code": null, "e": 11572, "s": 11544, "text": "data can’t be null or empty" }, { "code": null, "e": 11639, "s": 11572, "text": "when changing float to integer, the decimals have to be in .0 form" }, { "code": null, "e": 11692, "s": 11639, "text": "Hence, the astype() function can be tricky at times." 
}, { "code": null, "e": 11834, "s": 11692, "text": "The apply function is one of the most powerful functions in pandas. It lets you manipulate the data in whatever form you want to. Here’s how." }, { "code": null, "e": 11957, "s": 11834, "text": "Step 1: Define a functionAssume we want to rename the Pclass to their actual names.1 = 1st Class2 = 2nd Class3 = 3rd Class" }, { "code": null, "e": 11985, "s": 11957, "text": "We first define a function." }, { "code": null, "e": 12128, "s": 11985, "text": "def pclass_name(x): if x == 1: x = '1st Class' if x == 2: x = '2nd Class' if x == 3: x = '3rd Class' return x" }, { "code": null, "e": 12303, "s": 12128, "text": "Step 2: Apply the functionWe can then apply the function to each of the values of Pclass column. The process will pass every record of the Pclass column through the function." }, { "code": null, "e": 12365, "s": 12303, "text": "df3[‘Pclass’] = df3[‘Pclass’].apply(lambda x: pclass_name(x))" }, { "code": null, "e": 12522, "s": 12365, "text": "The records for Pclass columns has now changed.It really is up to your imagination on how you want to manipulate your data.Here’s where you can be creative." }, { "code": null, "e": 12675, "s": 12522, "text": "Sometimes, we want to create custom columns to allow us to better interpret our data. For example, let’s create a column with the BMI of the passengers." }, { "code": null, "e": 12699, "s": 12675, "text": "The formula for BMI is:" }, { "code": null, "e": 12723, "s": 12699, "text": "weight(kg) / height2(m)" }, { "code": null, "e": 12879, "s": 12723, "text": "Hence, let’s first convert our height values which is in cm to m and square the values. We shall use the apply function we had just learnt to achieve that." }, { "code": null, "e": 12990, "s": 12879, "text": "def convert_to_bmi(x): x = (x/100)**2 return xdf4[‘height’] = df4[‘height’].apply(lambda x: convert_to_bmi(x))" }, { "code": null, "e": 13065, "s": 12990, "text": "We then create a new column where the values are weight divided by height." }, { "code": null, "e": 13106, "s": 13065, "text": "df4[‘BMI’] = df4[‘weight’]/df4[‘height’]" }, { "code": null, "e": 13248, "s": 13106, "text": "If we had all the BMIs for the passengers, we can then make further conclusions with the BMI, but I’m not going into that. You get the point." }, { "code": null, "e": 13318, "s": 13248, "text": "After everything’s done, you may want to sort and rename the columns." }, { "code": null, "e": 13448, "s": 13318, "text": "df3.rename(columns={'BMI':'Body_Mass_Index','PassengerId':'PassengerNo'}, inplace = True)df3 = df3.sort_values('Body_Mass_Index')" }, { "code": null, "e": 13596, "s": 13448, "text": "We had renamed the BMI to its full name and sort rows by BMI here.Note that after sorting, the index do not change so that information is not lost." }, { "code": null, "e": 13721, "s": 13596, "text": "Last but not least, we can export our data into multiple formats (refer to list on top).Let's export our data as a csv file." }, { "code": null, "e": 13742, "s": 13721, "text": "df3.to_csv(filepath)" }, { "code": null, "e": 13895, "s": 13742, "text": "Give yourself a pat on the back if you’ve made it this far.You have officially mastered the basics of Pandas, a necessary tool for every data scientist." 
}, { "code": null, "e": 13922, "s": 13895, "text": "What you had learnt today:" }, { "code": null, "e": 13956, "s": 13922, "text": "Creating and Importing Dataframes" }, { "code": null, "e": 13988, "s": 13956, "text": "Summary Functions of Dataframes" }, { "code": null, "e": 14018, "s": 13988, "text": "Navigating through Dataframes" }, { "code": null, "e": 14040, "s": 14018, "text": "Aggregation of Values" }, { "code": null, "e": 14057, "s": 14040, "text": "Cleaning Up Data" }, { "code": null, "e": 14065, "s": 14057, "text": "GroupBy" }, { "code": null, "e": 14091, "s": 14065, "text": "Concatenation and Merging" }, { "code": null, "e": 14134, "s": 14091, "text": "Data Manipulation (*** apply function ***)" }, { "code": null, "e": 14159, "s": 14134, "text": "Finalising and Exporting" }, { "code": null, "e": 14267, "s": 14159, "text": "Keep it scarred in your brain as I swear you will use it often in the future.As always, I end with a quote." }, { "code": null, "e": 14370, "s": 14267, "text": "You can have data without information, but you can’t have information without data — Daniel Keys Moran" }, { "code": null, "e": 14543, "s": 14370, "text": "You can also support me by signing up for a medium membership through my link. You will be able to read an unlimited amount of stories from me and other incredible writers!" }, { "code": null, "e": 14764, "s": 14543, "text": "I am working on more stories, writings, and guides in the data industry. You can absolutely expect more posts like this. In the meantime, feel free to check out my other articles to temporarily fill your hunger for data." } ]
Inplace vs Standard Operators in Python
An inplace operation is an operation which directly changes the content of a given linear algebra object, vector or matrix without making a copy. The operators which help to do this kind of operation are called in-place operators.

Let's understand it with a simple example -

a = 9
a += 2
print(a)

11

Above, += is the in-place operator. Here, 2 is added to a and a is updated with the new value, replacing the previous one. The same principle applies to the other operators. Common in-place operators are -

+=
-=
*=
/=
%=

The above principle applies to types other than numbers as well, for example -

language = "Python"
language += "3"
print(language)

Python3

The above example of x += y is equivalent to x = operator.iadd(x, y).

There are multiple functions in the operator module which are used for in-place operations.

This function assigns the current value and adds to it. It performs the x += y operation.

import operator

x = operator.iadd(9, 18)
print("Result after adding: ", end="")
print(x)

Result after adding: 27

This function assigns the current value and subtracts from it. The isub() function performs the x -= y operation.

x = operator.isub(9, 18)
print("Result after subtraction: ", end="")
print(x)

Result after subtraction: -9

This function assigns the current value and multiplies it. It performs the x *= y operation.

x = operator.imul(9, 18)
print("Result after multiplying: ", end="")
print(x)

Result after multiplying: 162

This function assigns the current value and divides it. It performs the x /= y operation.

x = operator.itruediv(9, 18)
print("Result after dividing: ", end="")
print(x)

Result after dividing: 0.5

This function assigns the current value and takes the modulo. It performs the x %= y operation.

x = operator.imod(9, 18)
print("Result after modulus: ", end="")
print(x)

Result after modulus: 9

This function is used to concatenate two strings.
[ { "code": null, "e": 1291, "s": 1062, "text": "Inplace operation is an operation which directly changes the content of a given linear algebra or vector or metrices without making a copy. Now the operators, which helps to do this kind of operation is called in-place operator." }, { "code": null, "e": 1338, "s": 1291, "text": "Let’s understand it with an a simple example -" }, { "code": null, "e": 1358, "s": 1338, "text": "a=9\na += 2\nprint(a)" }, { "code": null, "e": 1361, "s": 1358, "text": "11" }, { "code": null, "e": 1463, "s": 1361, "text": "Above the += tie input operator. Here first, a add 2 with that a value is updated the previous value." }, { "code": null, "e": 1544, "s": 1463, "text": "Above principle applies to other operators also. Common in place operators are -" }, { "code": null, "e": 1547, "s": 1544, "text": "+=" }, { "code": null, "e": 1550, "s": 1547, "text": "-=" }, { "code": null, "e": 1553, "s": 1550, "text": "*=" }, { "code": null, "e": 1556, "s": 1553, "text": "/=" }, { "code": null, "e": 1559, "s": 1556, "text": "%=" }, { "code": null, "e": 1632, "s": 1559, "text": "Above principle applies to other types apart from numbers, for example -" }, { "code": null, "e": 1683, "s": 1632, "text": "language = \"Python\"\nlanguage +=\"3\"\nprint(language)" }, { "code": null, "e": 1691, "s": 1683, "text": "Python3" }, { "code": null, "e": 1753, "s": 1691, "text": "Above example of x+=y is equivalent to x = operator.iadd(x,y)" }, { "code": null, "e": 1821, "s": 1753, "text": "There are multiple operators which are used for inplace operations." }, { "code": null, "e": 1920, "s": 1821, "text": "This function is used to assign the current value and add them. This operator does x+=y operation." }, { "code": null, "e": 1991, "s": 1920, "text": "x =operator.iadd(9,18)\nprint(\"Result after adding: \", end=\"\")\nprint(x)" }, { "code": null, "e": 2015, "s": 1991, "text": "Result after adding: 27" }, { "code": null, "e": 2121, "s": 2015, "text": "This function is used to assign the current value and subtract them. Isub() function does x-=y operation." }, { "code": null, "e": 2197, "s": 2121, "text": "x =operator.isub(9,18)\nprint(\"Result after subtraction: \", end=\"\")\nprint(x)" }, { "code": null, "e": 2226, "s": 2197, "text": "Result after subtraction: -9" }, { "code": null, "e": 2330, "s": 2226, "text": "This function is used to assign the current value and multiply them. This operator does x*=y operation." }, { "code": null, "e": 2406, "s": 2330, "text": "x =operator.imul(9,18)\nprint(\"Result after multiplying: \", end=\"\")\nprint(x)" }, { "code": null, "e": 2436, "s": 2406, "text": "Result after multiplying: 162" }, { "code": null, "e": 2538, "s": 2436, "text": "This function is used to assign the current value and divide them. This operator does x/=y operation." }, { "code": null, "e": 2615, "s": 2538, "text": "x =operator.itruediv(9,18)\nprint(\"Result after dividing: \", end=\"\")\nprint(x)" }, { "code": null, "e": 2642, "s": 2615, "text": "Result after dividing: 0.5" }, { "code": null, "e": 2745, "s": 2642, "text": "this function is used to assign the current value and divide them. This operator does x %=y operation." }, { "code": null, "e": 2816, "s": 2745, "text": "x =operator.imod(9,18)\nprint(\"Result after moduls: \", end=\"\")\nprint(x)" }, { "code": null, "e": 2839, "s": 2816, "text": "Result after moduls: 9" }, { "code": null, "e": 2889, "s": 2839, "text": "This function is used to concatenate two strings." 
}, { "code": null, "e": 3000, "s": 2889, "text": "x = \"Tutorials\"\ny = \"Point\"\n\nstr1 = operator.iconcat(x,y)\nprint(\" After concatenation : \", end=\"\")\nprint(str1)" }, { "code": null, "e": 3037, "s": 3000, "text": "After concatenation : TutorialsPoint" }, { "code": null, "e": 3076, "s": 3037, "text": "This function is equivalent to x **=y." }, { "code": null, "e": 3148, "s": 3076, "text": "x =operator.ipow(2,6)\nprint(\"Result after exponent: \", end=\"\")\nprint(x)" }, { "code": null, "e": 3174, "s": 3148, "text": "Result after exponent: 64" }, { "code": null, "e": 3247, "s": 3174, "text": "Operators are the constructs which can manipulate the value of operands." }, { "code": null, "e": 3342, "s": 3247, "text": "For example in the expression- 9+18 = 27, here 9 and 18 are operands and + is called operator." }, { "code": null, "e": 3360, "s": 3342, "text": "Types of Operator" }, { "code": null, "e": 3420, "s": 3360, "text": "Python language supports the following types of operators -" }, { "code": null, "e": 3479, "s": 3420, "text": "Arithmetic Operators: (for example: +, -, *, /, %, **, //)" }, { "code": null, "e": 3538, "s": 3479, "text": "Arithmetic Operators: (for example: +, -, *, /, %, **, //)" }, { "code": null, "e": 3615, "s": 3538, "text": "Comparision Operators: (for example: “==”, “!=”, “<>”, “>”, “<”, “>=”, “<=”)" }, { "code": null, "e": 3692, "s": 3615, "text": "Comparision Operators: (for example: “==”, “!=”, “<>”, “>”, “<”, “>=”, “<=”)" }, { "code": null, "e": 3777, "s": 3692, "text": "Assignment Operators: (for example: “=”, “+=”, “-=”, “*=”, “/=”, “%=”, “**=”, “//=”)" }, { "code": null, "e": 3862, "s": 3777, "text": "Assignment Operators: (for example: “=”, “+=”, “-=”, “*=”, “/=”, “%=”, “**=”, “//=”)" }, { "code": null, "e": 3939, "s": 3862, "text": "Logical Operators: (for example: “Logical AND”, “Logical OR”, “Logical NOT”)" }, { "code": null, "e": 4016, "s": 3939, "text": "Logical Operators: (for example: “Logical AND”, “Logical OR”, “Logical NOT”)" }, { "code": null, "e": 4081, "s": 4016, "text": "Bitwise Operators: (for example: “|”, “&”, “^”, “~”, “<<”, “>>”)" }, { "code": null, "e": 4129, "s": 4081, "text": "Membership Operators: (for example: in, not in)" }, { "code": null, "e": 4177, "s": 4129, "text": "Membership Operators: (for example: in, not in)" }, { "code": null, "e": 4223, "s": 4177, "text": "Identity Operators: (for example: is, is not)" }, { "code": null, "e": 4269, "s": 4223, "text": "Identity Operators: (for example: is, is not)" }, { "code": null, "e": 4412, "s": 4269, "text": "Below is a table showing how abstract operations correspond to operator symbols in the Python syntax and the functions in the operator module." } ]
GATE | GATE-CS-2014-(Set-1) | Question 65 - GeeksforGeeks
30 Sep, 2021

Consider a 6-stage instruction pipeline, where all stages are perfectly balanced. Assume that there is no cycle-time overhead of pipelining. When an application is executing on this 6-stage pipeline, the speedup achieved with respect to non-pipelined execution if 25% of the instructions incur 2 pipeline stall cycles is

(A) 4
(B) 8
(C) 6
(D) 7

Answer: (A)

Explanation: It was a numerical answer type question, so the answer must be 4.

With 6 stages, non-pipelined execution takes 6 cycles per instruction.

There are 2 stall cycles in the pipeline for 25% of the instructions.

So pipelined time per instruction = 1 + (25/100)*2 = 1.5 cycles

Speedup = non-pipelined time / pipelined time = 6/1.5 = 4
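The same arithmetic as a small sketch (the formula generalizes to any stage count k, stall fraction f and stall penalty s; the variable names are mine, not from the question):

def pipeline_speedup(k, f, s):
    # non-pipelined: k cycles per instruction
    # pipelined: 1 cycle per instruction plus the average stall penalty
    return k / (1 + f * s)

print(pipeline_speedup(k=6, f=0.25, s=2))   # 4.0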
[ { "code": null, "e": 24528, "s": 24500, "text": "\n30 Sep, 2021" }, { "code": null, "e": 24892, "s": 24528, "text": "Consider a 6-stage instruction pipeline, where all stages are perfectly balanced. Assume that there is no cycle-time overhead of pipelining. When an application is executing on this 6-stage pipeline, the speedup achieved with respect to non-pipelined execution if 25% of the instructions incur 2 pipeline stall cycles is(A) 4(B) 8(C) 6(D) 7Answer: (A)Explanation:" }, { "code": null, "e": 25169, "s": 24892, "text": "It was a numerical digit type question so answer must be 4.\n\nAs for 6 stages, non-pipelining takes 6 cycles.\n\nThere were 2 stall cycles for pipelining for 25% of the instructions\n\nSo pipe line time = (1+(25/100)*2) = 1.5\n\nSpeed up = Non pipeline time/Pipeline time = 6/1.5 = 4" }, { "code": null, "e": 26083, "s": 25169, "text": "YouTubeGeeksforGeeks GATE Computer Science16.2K subscribersPipelining: Previous Year Question part-III | COA | GeeksforGeeks GATE | Harshit NigamWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:005:45 / 49:32•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=BVk8ZlNukmg\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>Quiz of this Question" }, { "code": null, "e": 26104, "s": 26083, "text": "GATE-CS-2014-(Set-1)" }, { "code": null, "e": 26130, "s": 26104, "text": "GATE-GATE-CS-2014-(Set-1)" }, { "code": null, "e": 26135, "s": 26130, "text": "GATE" }, { "code": null, "e": 26233, "s": 26135, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26267, "s": 26233, "text": "GATE | GATE-IT-2004 | Question 66" }, { "code": null, "e": 26309, "s": 26267, "text": "GATE | GATE-CS-2016 (Set 2) | Question 48" }, { "code": null, "e": 26351, "s": 26309, "text": "GATE | GATE-CS-2014-(Set-3) | Question 65" }, { "code": null, "e": 26385, "s": 26351, "text": "GATE | GATE-CS-2006 | Question 49" }, { "code": null, "e": 26418, "s": 26385, "text": "GATE | GATE-CS-2004 | Question 3" }, { "code": null, "e": 26452, "s": 26418, "text": "GATE | GATE CS 2010 | Question 24" }, { "code": null, "e": 26486, "s": 26452, "text": "GATE | GATE CS 2011 | Question 65" }, { "code": null, "e": 26520, "s": 26486, "text": "GATE | GATE CS 2019 | Question 27" }, { "code": null, "e": 26562, "s": 26520, "text": "GATE | GATE CS 2021 | Set 1 | Question 47" } ]
Computer Graphics | The RGB color model - GeeksforGeeks
19 Mar, 2019

The RGB color model is one of the most widely used color representation methods in computer graphics. It uses a color coordinate system with three primary colors:

R(red), G(green), B(blue)

Each primary color can take an intensity value ranging from 0 (lowest) to 1 (highest). Mixing these three primary colors at different intensity levels produces a variety of colors. The collection of all the colors obtained by such a linear combination of red, green and blue forms the cube-shaped RGB color space.

The corner of the RGB color cube that is at the origin of the coordinate system corresponds to black, whereas the corner of the cube that is diagonally opposite to the origin represents white. The diagonal line connecting black and white corresponds to all the gray colors between black and white, which is also known as the gray axis.

In the RGB color model, an arbitrary color within the cubic color space can be specified by its color coordinates: (r, g, b).

Example:

(0, 0, 0) for black, (1, 1, 1) for white,
(1, 1, 0) for yellow, (0.7, 0.7, 0.7) for gray

Color specification using the RGB model is an additive process. We begin with black and add on the appropriate primary components to yield a desired color. The RGB color model is used in display monitors. On the other hand, there is a complementary color model known as the CMY color model. The CMY color model uses a subtractive process, and this concept is used in printers.

In the CMY model, we begin with white and take away the appropriate primary components to yield a desired color.

Example: If we subtract red from white, what remains consists of green and blue, which is cyan. The coordinate system of the CMY model uses the three primaries' complementary colors:

C(cyan), M(magenta) and Y(yellow)

The corner of the CMY color cube that is at (0, 0, 0) corresponds to white, whereas the corner of the cube that is at (1, 1, 1) represents black. The following formulas summarize the conversion between the two color models:

C = 1 - R
M = 1 - G
Y = 1 - B

and, conversely,

R = 1 - C
G = 1 - M
B = 1 - Y
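A minimal sketch of these conversions in code (plain Python; components are assumed to be normalized floats in [0, 1], and the function names are mine):

def rgb_to_cmy(r, g, b):
    # subtractive complement: take each primary away from white
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1, 1, 0))   # yellow -> (0, 0, 1)
print(cmy_to_rgb(1, 0, 0))   # cyan -> (0, 1, 1), i.e. green + blue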
[ { "code": null, "e": 24931, "s": 24903, "text": "\n19 Mar, 2019" }, { "code": null, "e": 25092, "s": 24931, "text": "The RGB color model is one of the most widely used color representation method in computer graphics. It use a color coordinate system with three primary colors:" }, { "code": null, "e": 25119, "s": 25092, "text": "R(red), G(green), B(blue) " }, { "code": null, "e": 25431, "s": 25119, "text": "Each primary color can take an intensity value ranging from 0(lowest) to 1(highest). Mixing these three primary colors at different intensity levels produces a variety of colors. The collection of all the colors obtained by such a linear combination of red, green and blue forms the cube shaped RGB color space." }, { "code": null, "e": 25759, "s": 25431, "text": "The corner of RGB color cube that is at the origin of the coordinate system corresponds to black, whereas the corner of the cube that is diagonally opposite to the origin represents white. The diagonal line connecting black and white corresponds to all the gray colors between black and white, which is also known as gray axis." }, { "code": null, "e": 25884, "s": 25759, "text": "In the RGB color model, an arbitrary color within the cubic color space can be specified by its color coordinates: (r, g.b)." }, { "code": null, "e": 25893, "s": 25884, "text": "Example:" }, { "code": null, "e": 25984, "s": 25893, "text": "(0, 0, 0) for black, (1, 1, 1) for white, \n(1, 1, 0) for yellow, (0.7, 0.7, 0.7) for gray " }, { "code": null, "e": 26365, "s": 25984, "text": "Color specification using the RGB model is an additive process. We begin with black and add on the appropriate primary components to yield a desired color. The concept RGB color model is used in Display monitor. On the other hand, there is a complementary color model known as CMY color model. The CMY color model use a subtraction process and this concept is used in the printer." }, { "code": null, "e": 26474, "s": 26365, "text": "In CMY model, we begin with white and take away the appropriate primary components to yield a desired color." }, { "code": null, "e": 26650, "s": 26474, "text": "Example:If we subtract red from white, what remains consists of green and blue which is cyan. The coordinate system of CMY model use the three primaries’ complementary colors:" }, { "code": null, "e": 26685, "s": 26650, "text": "C(cray), M(magenta) and Y(yellow) " }, { "code": null, "e": 26909, "s": 26685, "text": "The corner of the CMY color cube that is at (0, 0, 0) corresponds to white, whereas the corner of the cube that is at (1, 1, 1) represents black. The following formulas summarize the conversion between the two color models:" }, { "code": null, "e": 26927, "s": 26909, "text": "computer-graphics" }, { "code": null, "e": 26933, "s": 26927, "text": "GBlog" }, { "code": null, "e": 27031, "s": 26933, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27073, "s": 27031, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 27098, "s": 27073, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 27133, "s": 27098, "text": "GET and POST requests using Python" }, { "code": null, "e": 27195, "s": 27133, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 27228, "s": 27195, "text": "Working with csv files in Python" }, { "code": null, "e": 27254, "s": 27228, "text": "Types of Software Testing" }, { "code": null, "e": 27287, "s": 27254, "text": "Working with PDF files in Python" }, { "code": null, "e": 27326, "s": 27287, "text": "10 Best IDE For Web Developers in 2022" } ]
How to Install Git on Linux
Git is a popular open source version control system like CVS or SVN. This article is for those who are not familiar with Git. Here, we provide the basic steps for installing Git, creating a new project, and committing changes to the Git repository.

Most other version control systems store data as a list of files and the changes made to each file over time. Git instead thinks of its data more like a set of snapshots of a file system. Every time you commit, Git takes a snapshot of what all your files look like at that moment and stores a reference to that snapshot. If files have not changed, Git does not store new snapshots for them; it just links to the previous snapshot of your file system.

Git is available with all the major Linux distributions, so the easiest way to install Git is by using a Linux package manager. Use the following command to install git on Linux –

$ sudo apt-get install git

The output should be like this –

tp@linux:~$ sudo apt-get install git
[sudo] password for tp:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
git-man liberror-perl
Suggested packages:
git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk
gitweb git-arch git-bzr git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
git git-man liberror-perl
0 upgraded, 3 newly installed, 0 to remove and 286 not upgraded.
Need to get 3,421 kB of archives.
After this operation, 21.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
......

An alternative is to install Git from source; first install the build dependencies –

$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x

The output should be like this –

tp@linux:~$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'zlib1g-dev' instead of 'libz-dev'
gettext is already the newest version.
gettext set to manually installed.
The following extra packages will be installed:
comerr-dev dblatex docbook-dsssl docbook-utils docbook-xml docbook-xsl
fonts-lmodern fonts-texgyre jadetex krb5-multidev latex-beamer latex-xcolor
libcomerr2 libcurl3-gnutls libencode-locale-perl libexpat1
libfile-listing-perl libfont-afm-perl libgcrypt11-dev libgnutls-dev
libgnutls-openssl27 libgnutls26 libgnutlsxx27 libgpg-error-dev
libgssapi-krb5-2 libgssrpc4 libhtml-form-perl libhtml-format-perl
libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl
libhttp-message-perl libhttp-negotiate-perl libidn11-dev libintl-perl
libio-html-perl libk5crypto3 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-7
libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev
.......

Git is installed under the /usr/bin/git directory by default on recent Linux systems.
Once the installation is done, verify it by using the following command –

$ whereis git

The output should be like this –

git: /usr/bin/git /usr/bin/X11/git /usr/share/man/man1/git.1.gz

To get the version number of Git, you can use the following command –

$ git --version

The output will be like this –

git version 1.9.1

To set the user identity (the email address recorded with your commits) for the Git repository, use the following command –

$ git config --global user.email [email protected]

For verifying the Git configuration, use the following command –

git config --list

The output should be like this –

[email protected]

The above information is stored in the .gitconfig file under the home directory. To verify, use the following command –

cat ~/.gitconfig

The output should be like this –

[user]
email = [email protected]

To create a Git repository project, we should attach it to a local directory. Suppose the project directory is located under the /home/tp/projects path; first go into that directory using the cd command and then execute git init as shown below –

$ cd /home/tp/projects

~/projects$ git init

The output should be like this –

Initialized empty Git repository in /home/tp/projects/.git/

The above command creates a .git directory under the projects folder. To verify, use the following command –

~/projects$ ls -altr .git

The output should be like this –

tp@linux:~/projects$ ls -altr .git
total 40
drwxrwxr-x 4 tp tp 4096 Feb 11 14:03 refs
drwxrwxr-x 2 tp tp 4096 Feb 11 14:03 info
drwxrwxr-x 2 tp tp 4096 Feb 11 14:03 hooks
-rw-rw-r-- 1 tp tp 23 Feb 11 14:03 HEAD
-rw-rw-r-- 1 tp tp 73 Feb 11 14:03 description
drwxrwxr-x 2 tp tp 4096 Feb 11 14:03 branches
drwxrwxr-x 3 tp tp 4096 Feb 11 14:03 ..
drwxrwxr-x 4 tp tp 4096 Feb 11 14:03 objects
-rw-rw-r-- 1 tp tp 92 Feb 11 14:03 config
drwxrwxr-x 7 tp tp 4096 Feb 11 14:03 .

Now that the project is initialized with "git init", add your files to the project directory. For adding the .txt files to the Git repository, use the following command –

projects$ git add *.txt

Once the files are added to the repository, commit them with the following command –

projects$ git commit -m 'Initial upload of the project'

The sample output should be like this –

[master (root-commit) 261b452] Initial upload of the project
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 tp.txt

Congratulations! Now you know how to set up Git on Linux. We'll learn more about these types of commands in our next Linux post. Keep reading!
[ { "code": null, "e": 1330, "s": 1062, "text": "Git is a popular open source version control system like CVS or SVN. This article is for those, who are not familiar with Git. Here, we are providing you with basic steps of installing Git from source, Creating a new project, and Commit changes to the Git repository." }, { "code": null, "e": 1797, "s": 1330, "text": "Most of the other version control systems, store the data as a list of files and changes are made to each file over time. Instead, Git thinks of its data more like a set of snapshots in a file system. Every time, it takes a snapshot of all your files (which look alike at that particular moment), it will be stored as a reference. If files are not changed, Git does not store the new snapshots. In this case, it just links to a previous snapshot of your file system." }, { "code": null, "e": 1980, "s": 1797, "text": "Git is available with all the major Linux distributions. Thus, the easiest way to install Git is by using a Linux package manager. Use the following command to install git on Linux –" }, { "code": null, "e": 2032, "s": 1980, "text": "Use the following command to install git on Linux –" }, { "code": null, "e": 2059, "s": 2032, "text": "$ sudo apt-get install git" }, { "code": null, "e": 2092, "s": 2059, "text": "The output should be like this –" }, { "code": null, "e": 2739, "s": 2092, "text": "tp@linux:~$ sudo apt-get install git\n[sudo] password for tp:\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nThe following extra packages will be installed:\ngit-man liberror-perl\nSuggested packages:\ngit-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk\ngitweb git-arch git-bzr git-cvs git-mediawiki git-svn\nThe following NEW packages will be installed:\ngit git-man liberror-perl\n0 upgraded, 3 newly installed, 0 to remove and 286 not upgraded.\nNeed to get 3,421 kB of archives.\nAfter this operation, 21.9 MB of additional disk space will be used.\nDo you want to continue? [Y/n] y\n......" }, { "code": null, "e": 2814, "s": 2739, "text": "An alternate way is to install Git from source which should be like this –" }, { "code": null, "e": 2924, "s": 2814, "text": "$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x" }, { "code": null, "e": 2957, "s": 2924, "text": "The output should be like this –" }, { "code": null, "e": 4076, "s": 2957, "text": "tp@linux:~$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x\nReading package lists... Done\nBuilding dependency tree\nReading state information... 
Done\nNote, selecting 'zlib1g-dev' instead of 'libz-dev'\ngettext is already the newest version.\ngettext set to manually installed.\nThe following extra packages will be installed:\ncomerr-dev dblatex docbook-dsssl docbook-utils docbook-xml docbook-xsl\nfonts-lmodern fonts-texgyre jadetex krb5-multidev latex-beamer latex-xcolor\nlibcomerr2 libcurl3-gnutls libencode-locale-perl libexpat1\nlibfile-listing-perl libfont-afm-perl libgcrypt11-dev libgnutls-dev\nlibgnutls-openssl27 libgnutls26 libgnutlsxx27 libgpg-error-dev\nlibgssapi-krb5-2 libgssrpc4 libhtml-form-perl libhtml-format-perl\nlibhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl\nlibhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl\nlibhttp-message-perl libhttp-negotiate-perl libidn11-dev libintl-perl\nlibio-html-perl libk5crypto3 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-7\nlibkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev\n......." }, { "code": null, "e": 4158, "s": 4076, "text": "Git is by default installed under /usr/bin/git directory on recent Linux systems." }, { "code": null, "e": 4232, "s": 4158, "text": "Once the installation is done, verify it by using the following command –" }, { "code": null, "e": 4246, "s": 4232, "text": "$ whereis git" }, { "code": null, "e": 4279, "s": 4246, "text": "The output should be like this –" }, { "code": null, "e": 4343, "s": 4279, "text": "git: /usr/bin/git /usr/bin/X11/git /usr/share/man/man1/git.1.gz" }, { "code": null, "e": 4413, "s": 4343, "text": "To get the version number of Git, you can use the following command –" }, { "code": null, "e": 4429, "s": 4413, "text": "$ git --version" }, { "code": null, "e": 4460, "s": 4429, "text": "The output will be like this –" }, { "code": null, "e": 4478, "s": 4460, "text": "git version 1.9.1" }, { "code": null, "e": 4585, "s": 4478, "text": "If you want to specify a User and Password information to Git repository, then use the following command –" }, { "code": null, "e": 4651, "s": 4585, "text": "$ git config --global user.email [email protected]" }, { "code": null, "e": 4712, "s": 4651, "text": "For verifying Git configuration, use the following command –" }, { "code": null, "e": 4730, "s": 4712, "text": "git config --list" }, { "code": null, "e": 4763, "s": 4730, "text": "The output should be like this –" }, { "code": null, "e": 4807, "s": 4763, "text": "[email protected]" }, { "code": null, "e": 4927, "s": 4807, "text": "The above information is stored in the .gitconfig file under the home directory. To verify, use the following command –" }, { "code": null, "e": 4944, "s": 4927, "text": "cat ~/.gitconfig" }, { "code": null, "e": 4977, "s": 4944, "text": "The output should be like this –" }, { "code": null, "e": 5025, "s": 4977, "text": "[user]\nemail = [email protected]" }, { "code": null, "e": 5262, "s": 5025, "text": "To create a Git repository project, we should attach any local directory. Suppose, if project directory is located under /home/tp/projects path, first go into that directory using CD command and execute git init command as shown below –" }, { "code": null, "e": 5307, "s": 5262, "text": "$ cd /home/tp/projects\n\n~/projects$ git init" }, { "code": null, "e": 5340, "s": 5307, "text": "The output should be like this –" }, { "code": null, "e": 5400, "s": 5340, "text": "Initialized empty Git repository in /home/tp/projects/.git/" }, { "code": null, "e": 5504, "s": 5400, "text": "The above command creates a .git directory under projects folder. 
To verify, use the following command-" }, { "code": null, "e": 5530, "s": 5504, "text": "~/projects$ ls -altr .git" }, { "code": null, "e": 5563, "s": 5530, "text": "The output should be like this –" }, { "code": null, "e": 6033, "s": 5563, "text": "tp@linux:~/projects$ ls -altr .git\ntotal 40\ndrwxrwxr-x 4 tp tp 4096 Feb 11 14:03 refs\ndrwxrwxr-x 2 tp tp 4096 Feb 11 14:03 info\ndrwxrwxr-x 2 tp tp 4096 Feb 11 14:03 hooks\n-rw-rw-r-- 1 tp tp 23 Feb 11 14:03 HEAD\n-rw-rw-r-- 1 tp tp 73 Feb 11 14:03 description\ndrwxrwxr-x 2 tp tp 4096 Feb 11 14:03 branches\ndrwxrwxr-x 3 tp tp 4096 Feb 11 14:03 ..\ndrwxrwxr-x 4 tp tp 4096 Feb 11 14:03 objects\n-rw-rw-r-- 1 tp tp 92 Feb 11 14:03 config\ndrwxrwxr-x 7 tp tp 4096 Feb 11 14:03 ." }, { "code": null, "e": 6225, "s": 6033, "text": "Once a project is created, it will initialize the project using “git init”. Now, add your files to your project directory. For adding .txt files to Git repository, use the following command –" }, { "code": null, "e": 6249, "s": 6225, "text": "projects$ git add *.txt" }, { "code": null, "e": 6351, "s": 6249, "text": "Once adding process is done to the repository, you should commit these files as shown below command –" }, { "code": null, "e": 6407, "s": 6351, "text": "projects$ git commit -m 'Initial upload of the project'" }, { "code": null, "e": 6447, "s": 6407, "text": "The sample output should be like this –" }, { "code": null, "e": 6582, "s": 6447, "text": "[master (root-commit) 261b452] Initial upload of the project\n1 file changed, 0 insertions(+), 0 deletions(-)\ncreate mode 100644 tp.txt" }, { "code": null, "e": 6728, "s": 6582, "text": "Congratulations! Now, you know “How to setup git on Linux” . We’ll learn more about these types of commands in our next Linux post. Keep reading!" } ]
How to extract a data.table row as a vector in R?
A data.table object is similar to a data frame object, but there are a few things that apply specifically to a data.table, because the data.table package functions are defined for data.table objects only. If we want to extract a data.table row as a vector, we can use the as.vector function along with as.matrix so that as.vector can read the row properly.

Loading the data.table package:

> library(data.table)

Consider the below vectors and create a data.table object:

> x1<-sample(LETTERS[1:3],20,replace=TRUE)
> x2<-sample(LETTERS[1:4],20,replace=TRUE)
> x3<-sample(LETTERS[1:5],20,replace=TRUE)
> x4<-sample(LETTERS[2:5],20,replace=TRUE)
> x5<-sample(LETTERS[3:5],20,replace=TRUE)
> DT1<-data.table(x1,x2,x3,x4,x5)
> DT1

    x1 x2 x3 x4 x5
 1:  B  C  C  D  E
 2:  B  C  D  B  E
 3:  B  C  C  B  E
 4:  C  D  A  C  E
 5:  C  C  D  E  D
 6:  A  D  D  E  E
 7:  A  C  A  D  E
 8:  B  B  E  D  E
 9:  C  D  B  D  C
10:  A  B  E  E  E
11:  C  A  D  B  C
12:  C  D  A  B  E
13:  C  A  E  C  C
14:  A  A  A  B  C
15:  C  C  E  D  D
16:  B  B  C  D  D
17:  A  A  D  D  E
18:  B  C  D  C  E
19:  C  B  D  D  E
20:  C  B  C  C  D

Extracting rows as vectors:

> Row1<-as.vector(as.matrix(DT1[1]))
> Row1
[1] "B" "C" "C" "D" "E"
> is.vector(Row1)
[1] TRUE

> Row2<-as.vector(as.matrix(DT1[2]))
> Row2
[1] "B" "C" "D" "B" "E"
> is.vector(Row2)
[1] TRUE

> Row5<-as.vector(as.matrix(DT1[5]))
> Row5
[1] "C" "C" "D" "E" "D"
> is.vector(Row5)
[1] TRUE

> Row10<-as.vector(as.matrix(DT1[10]))
> Row10
[1] "A" "B" "E" "E" "E"
> is.vector(Row10)
[1] TRUE

> Row18<-as.vector(as.matrix(DT1[18]))
> Row18
[1] "B" "C" "D" "C" "E"
> is.vector(Row18)
[1] TRUE

Let's have a look at another example:

> v1<-rnorm(20,1,2.3)
> v2<-rnorm(20,1,0.5)
> v3<-rnorm(20,1,0.275)
> v4<-rnorm(20,1,1.05)
> v5<-rnorm(20,1,0.80)
> v6<-rnorm(20,1,0.007)
> DT2<-data.table(v1,v2,v3,v4,v5)
> DT2

            v1          v2        v3            v4         v5
 1:  3.0343715  1.50341176 0.7697336  1.1133711891 -0.6121658
 2: -0.9852696  1.67132579 0.6735064 -0.4264159308  2.0648977
 3: -0.1772886  0.92605020 1.6350086 -0.1306917763  0.2362759
 4: -1.5987827  1.21532168 0.8452163  1.1970737419  1.3881985
 5:  6.6638926  0.99153886 0.9312445  0.6709141801  0.1946506
 6: -1.1456574  1.17003856 0.9168460  0.2278830477  0.2643338
 7:  0.9558536  0.66266842 0.8268585 -0.0526142041  1.8055215
 8:  0.3400104  1.26874416 0.5101432 -0.0008413192  1.1666347
 9:  1.0587648 -0.02557012 1.2874365  2.6673114134  1.3532956
10: -1.4587479  2.04442403 0.6002114  0.6081632615  3.0083355
11:  5.5237850  0.89874276 1.3020908  2.0082148902  1.3465480
12:  2.2144533  1.13244217 1.0143068 -0.7514250499  1.4106594
13: -0.4355997  0.99994424 1.3501380  2.2319042291  0.9450547
14:  1.2946799  0.26674188 0.7165047  0.5596915326  0.2925462
15:  0.2576993  0.68885846 1.4244761  2.2760543161 -0.4573567
16:  1.2538944  1.82497873 0.6419318  0.0770117052  1.7090877
17: -1.3401457  0.58804734 1.1919157  2.9846478595  0.2938903
18:  4.6633082  0.50940850 1.5545408  1.5738985335  0.3506627
19: -2.0482789  0.82047093 1.1950368 -0.9818883889  0.6690500
20: -4.4615852  1.23548986 0.6505639  1.1960534288  2.3088988

> Row1<-as.vector(as.matrix(DT2[1]))
> Row1
[1] 3.0343715 1.5034118 0.7697336 1.1133712 -0.6121658
> is.vector(Row1)
[1] TRUE

> Row12<-as.vector(as.matrix(DT2[12]))
> Row12
[1] 2.214453 1.132442 1.014307 -0.751425 1.410659
> is.vector(Row12)
[1] TRUE

> Row20<-as.vector(as.matrix(DT2[20]))
> Row20
[1] -4.4615852 1.2354899 0.6505639 1.1960534 2.3088988
> is.vector(Row20)
[1] TRUE

> Row15<-as.vector(as.matrix(DT2[15]))
> Row15
[1] 0.2576993 0.6888585 1.4244761 2.2760543 -0.4573567
> is.vector(Row15)
[1] TRUE

> Row19<-as.vector(as.matrix(DT2[19]))
> Row19
[1] -2.0482789 0.8204709 1.1950368 -0.9818884 0.6690500
> is.vector(Row19)
[1] TRUE
[ { "code": null, "e": 1425, "s": 1062, "text": "A data.table object is similar to a data frame object but there are few things that can be specifically applied to a data.table because data.table package functions was defined for a data.table object only. If we want to extract a data.table row as a vector then we can use as.vector function along with as.matrix so that the as.vector can read the row properly." }, { "code": null, "e": 1453, "s": 1425, "text": "Loading data.table package:" }, { "code": null, "e": 1475, "s": 1453, "text": "> library(data.table)" }, { "code": null, "e": 1534, "s": 1475, "text": "Consider the below vectors and create a data.table object:" }, { "code": null, "e": 1789, "s": 1534, "text": "> x1<-sample(LETTERS[1:3],20,replace=TRUE)\n> x2<-sample(LETTERS[1:4],20,replace=TRUE)\n> x3<-sample(LETTERS[1:5],20,replace=TRUE)\n> x4<-sample(LETTERS[2:5],20,replace=TRUE)\n> x5<-sample(LETTERS[3:5],20,replace=TRUE)\n> DT1<-data.table(x1,x2,x3,x4,x5)\n> DT1" }, { "code": null, "e": 2075, "s": 1789, "text": "x1 x2 x3 x4 x5\n1: B C C D E\n2: B C D B E\n3: B C C B E\n4: C D A C E\n5: C C D E D\n6: A D D E E\n7: A C A D E\n8: B B E D E\n9: C D B D C\n10: A B E E E\n11: C A D B C\n12: C D A B E\n13: C A E C C\n14: A A A B C\n15: C C E D D\n16: B B C D D\n17: A A D D E\n18: B C D C E\n19: C B D D E\n20: C B C C D" }, { "code": null, "e": 2103, "s": 2075, "text": "Extracting rows as vectors:" }, { "code": null, "e": 2618, "s": 2103, "text": "> Row1<-as.vector(as.matrix(DT1[1]))\n> Row1\n[1] \"B\" \"C\" \"C\" \"D\" \"E\"\n> is.vector(Row1)\n[1] TRUE\n\n> Row2<-as.vector(as.matrix(DT1[2]))\n> Row2\n[1] \"B\" \"C\" \"D\" \"B\" \"E\"\n> is.vector(Row2)\n[1] TRUE\n\n> Row5<-as.vector(as.matrix(DT1[5]))\n> Row5\n[1] \"C\" \"C\" \"D\" \"E\" \"D\"\n> is.vector(Row5)\n[1] TRUE\n\n> Row10<-as.vector(as.matrix(DT1[10]))\n> Row10\n[1] \"A\" \"B\" \"E\" \"E\" \"E\"\n> is.vector(Row10)\n[1] TRUE\n> is.vector(Row10)\n[1] TRUE\n\n> Row18<-as.vector(as.matrix(DT1[18]))\n> Row18\n[1] \"B\" \"C\" \"D\" \"C\" \"E\"\n> is.vector(Row18)\n[1] TRUE" }, { "code": null, "e": 2656, "s": 2618, "text": "Let’s have a look at another example:" }, { "code": null, "e": 2834, "s": 2656, "text": "> v1<-rnorm(20,1,2.3)\n> v2<-rnorm(20,1,0.5)\n> v3<-rnorm(20,1,0.275)\n> v4<-rnorm(20,1,1.05)\n> v5<-rnorm(20,1,0.80)\n> v6<-rnorm(20,1,0.007)\n> DT2<-data.table(v1,v2,v3,v4,v5)\n> DT2" }, { "code": null, "e": 4018, "s": 2834, "text": "v1 v2 v3 v4 v5\n1: 3.0343715 1.50341176 0.7697336 1.1133711891 -0.6121658\n2: -0.9852696 1.67132579 0.6735064 -0.4264159308 2.0648977\n3: -0.1772886 0.92605020 1.6350086 -0.1306917763 0.2362759\n4: -1.5987827 1.21532168 0.8452163 1.1970737419 1.3881985\n5: 6.6638926 0.99153886 0.9312445 0.6709141801 0.1946506\n6: -1.1456574 1.17003856 0.9168460 0.2278830477 0.2643338\n7: 0.9558536 0.66266842 0.8268585 -0.0526142041 1.8055215\n8: 0.3400104 1.26874416 0.5101432 -0.0008413192 1.1666347\n9: 1.0587648 -0.02557012 1.2874365 2.6673114134 1.3532956\n10: -1.4587479 2.04442403 0.6002114 0.6081632615 3.0083355\n11: 5.5237850 0.89874276 1.3020908 2.0082148902 1.3465480\n12: 2.2144533 1.13244217 1.0143068 -0.7514250499 1.4106594\n13: -0.4355997 0.99994424 1.3501380 2.2319042291 0.9450547\n14: 1.2946799 0.26674188 0.7165047 0.5596915326 0.2925462\n15: 0.2576993 0.68885846 1.4244761 2.2760543161 -0.4573567\n16: 1.2538944 1.82497873 0.6419318 0.0770117052 1.7090877\n17: -1.3401457 0.58804734 1.1919157 2.9846478595 0.2938903\n18: 4.6633082 0.50940850 1.5545408 1.5738985335 0.3506627\n19: -2.0482789 0.82047093 
1.1950368 -0.9818883889 0.6690500\n20: -4.4615852 1.23548986 0.6505639 1.1960534288 2.3088988" }, { "code": null, "e": 4664, "s": 4018, "text": "> Row1<-as.vector(as.matrix(DT2[1]))\n> Row1\n[1] 3.0343715 1.5034118 0.7697336 1.1133712 -0.6121658\n> is.vector(Row1)\n[1] TRUE\n\n> Row12<-as.vector(as.matrix(DT2[12]))\n> Row12\n[1] 2.214453 1.132442 1.014307 -0.751425 1.410659\n> is.vector(Row12)\n[1] TRUE\n\n> Row20<-as.vector(as.matrix(DT2[20]))\n> Row20\n[1] -4.4615852 1.2354899 0.6505639 1.1960534 2.3088988\n> is.vector(Row20)\n[1] TRUE\n\n> Row15<-as.vector(as.matrix(DT2[15]))\n> Row15\n[1] 0.2576993 0.6888585 1.4244761 2.2760543 -0.4573567\n> is.vector(Row15)\n[1] TRUE\n\n> Row19<-as.vector(as.matrix(DT2[19]))\n> Row19\n[1] -2.0482789 0.8204709 1.1950368 -0.9818884 0.6690500\n> is.vector(Row19)\n[1] TRUE" } ]
Critical Connections in a Network in C++
Suppose there are n servers, numbered from 0 to n-1, connected by undirected server-to-server connections forming a network, where connections[i] = [a, b] represents a connection between servers a and b. All servers are connected directly or through some other servers. A critical connection is a connection that, if removed, will make some server unable to reach some other server. We have to find all critical connections.

So, if the input is like n = 4 and connections = [[0,1],[1,2],[2,0],[1,3]], then the output will be [[1,3]].

To solve this, we will follow these steps −

Define one set visited
Define an array disc
Define an array low
Define one 2D array ret
Define a function dfs(), this will take node, par, one 2D array graph −
   if node is in visited, then return
   insert node into visited
   disc[node] := time
   low[node] := time
   (increase time by 1)
   for all elements x in graph[node] −
      if x is same as par, then skip to the next iteration
      if x is not in visited, then −
         dfs(x, node, graph)
         low[node] := minimum of low[node] and low[x]
         if disc[node] < low[x], then insert { node, x } at the end of ret
      Otherwise, low[node] := minimum of low[node] and disc[x]
From the main method, do the following −
   set size of disc as n + 1
   set size of low as n + 1
   time := 0
   Define an array of lists, graph, of size n + 1
   for initialize i := 0, when i < size of v, update (increase i by 1), do −
      u := v[i, 0]
      w := v[i, 1]
      insert w at the end of graph[u]
      insert u at the end of graph[w]
   dfs(0, -1, graph)
   return ret

Let us see the following implementation to get a better understanding −

#include <bits/stdc++.h>
using namespace std;

void print_vector(vector<vector<int> > v){
    cout << "[";
    for(int i = 0; i < v.size(); i++){
        cout << "[";
        for(int j = 0; j < v[i].size(); j++){
            cout << v[i][j] << ", ";
        }
        cout << "],";
    }
    cout << "]" << endl;
}
class Solution {
public:
    set<int> visited;
    vector<int> disc;
    vector<int> low;
    int time;
    vector<vector<int> > ret;
    // Standard bridge-finding DFS: disc[] holds discovery times, and low[]
    // holds the earliest discovery time reachable from a node's subtree.
    void dfs(int node, int par, vector<int> graph[]) {
        if (visited.count(node))
            return;
        visited.insert(node);
        disc[node] = low[node] = time;
        time++;
        for (int x : graph[node]) {
            if (x == par)
                continue;
            if (!visited.count(x)) {
                dfs(x, node, graph);
                low[node] = min(low[node], low[x]);
                // No back edge from x's subtree reaches above node,
                // so (node, x) is a bridge, i.e. a critical connection.
                if (disc[node] < low[x]) {
                    ret.push_back({ node, x });
                }
            } else {
                low[node] = min(low[node], disc[x]);
            }
        }
    }
    vector<vector<int> > criticalConnections(int n, vector<vector<int> >& v) {
        disc.resize(n + 1);
        low.resize(n + 1);
        time = 0;
        vector<int> graph[n + 1];
        for (int i = 0; i < v.size(); i++) {
            int u = v[i][0];
            int w = v[i][1];
            graph[u].push_back(w);
            graph[w].push_back(u);
        }
        dfs(0, -1, graph);
        return ret;
    }
};
int main(){
    Solution ob;
    vector<vector<int>> v = {{0,1},{1,2},{2,0},{1,3}};
    print_vector(ob.criticalConnections(4, v));
}

Input:
4, {{0,1},{1,2},{2,0},{1,3}}

Output:
[[1, 3, ],]
[ { "code": null, "e": 1517, "s": 1062, "text": "Suppose there are n servers. And these are numbered from 0 to n-1 connected by an\nundirected server-to-server connections forming a network where connections[i] = [a,b]\nrepresents a connection between servers a and b. All servers are connected directly or through some other servers. Now, a critical connection is a connection that, if that is removed, it will make some server unable to reach some other server. We have to find all critical connections." }, { "code": null, "e": 1592, "s": 1517, "text": "So, if the input is like n = 4 and connection = [[0,1],[1,2],[2,0],[1,3]]," }, { "code": null, "e": 1624, "s": 1592, "text": "then the output will be [[1,3]]" }, { "code": null, "e": 1668, "s": 1624, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 1683, "s": 1668, "text": "Define one set" }, { "code": null, "e": 1698, "s": 1683, "text": "Define one set" }, { "code": null, "e": 1719, "s": 1698, "text": "Define an array disc" }, { "code": null, "e": 1740, "s": 1719, "text": "Define an array disc" }, { "code": null, "e": 1760, "s": 1740, "text": "Define an array low" }, { "code": null, "e": 1780, "s": 1760, "text": "Define an array low" }, { "code": null, "e": 1804, "s": 1780, "text": "Define one 2D array ret" }, { "code": null, "e": 1828, "s": 1804, "text": "Define one 2D array ret" }, { "code": null, "e": 1899, "s": 1828, "text": "Define a function dfs(), this will take node, par, one 2D array graph," }, { "code": null, "e": 1970, "s": 1899, "text": "Define a function dfs(), this will take node, par, one 2D array graph," }, { "code": null, "e": 2009, "s": 1970, "text": "if node is in of visited, then −return" }, { "code": null, "e": 2042, "s": 2009, "text": "if node is in of visited, then −" }, { "code": null, "e": 2049, "s": 2042, "text": "return" }, { "code": null, "e": 2056, "s": 2049, "text": "return" }, { "code": null, "e": 2081, "s": 2056, "text": "insert node into visited" }, { "code": null, "e": 2106, "s": 2081, "text": "insert node into visited" }, { "code": null, "e": 2125, "s": 2106, "text": "disc[node] := time" }, { "code": null, "e": 2144, "s": 2125, "text": "disc[node] := time" }, { "code": null, "e": 2162, "s": 2144, "text": "low[node] := time" }, { "code": null, "e": 2180, "s": 2162, "text": "low[node] := time" }, { "code": null, "e": 2201, "s": 2180, "text": "(increase time by 1)" }, { "code": null, "e": 2222, "s": 2201, "text": "(increase time by 1)" }, { "code": null, "e": 2545, "s": 2222, "text": "for all elements x in graph[node]if x is same as par, then −Ignore following part, skip to the next iterationif x is not in visited, then −dfs(x, node, graph)low[node] := minimum of low[node] and low[x]if disc[node] < low[x], then −insert { node, x } at the end of retOtherwiselow[node] := minimum of low[node] and disc[x]" }, { "code": null, "e": 2579, "s": 2545, "text": "for all elements x in graph[node]" }, { "code": null, "e": 2656, "s": 2579, "text": "if x is same as par, then −Ignore following part, skip to the next iteration" }, { "code": null, "e": 2684, "s": 2656, "text": "if x is same as par, then −" }, { "code": null, "e": 2734, "s": 2684, "text": "Ignore following part, skip to the next iteration" }, { "code": null, "e": 2784, "s": 2734, "text": "Ignore following part, skip to the next iteration" }, { "code": null, "e": 2944, "s": 2784, "text": "if x is not in visited, then −dfs(x, node, graph)low[node] := minimum of low[node] and low[x]if disc[node] < low[x], then −insert { node, x } at the 
end of ret" }, { "code": null, "e": 2975, "s": 2944, "text": "if x is not in visited, then −" }, { "code": null, "e": 2995, "s": 2975, "text": "dfs(x, node, graph)" }, { "code": null, "e": 3015, "s": 2995, "text": "dfs(x, node, graph)" }, { "code": null, "e": 3060, "s": 3015, "text": "low[node] := minimum of low[node] and low[x]" }, { "code": null, "e": 3105, "s": 3060, "text": "low[node] := minimum of low[node] and low[x]" }, { "code": null, "e": 3172, "s": 3105, "text": "if disc[node] < low[x], then −insert { node, x } at the end of ret" }, { "code": null, "e": 3203, "s": 3172, "text": "if disc[node] < low[x], then −" }, { "code": null, "e": 3240, "s": 3203, "text": "insert { node, x } at the end of ret" }, { "code": null, "e": 3277, "s": 3240, "text": "insert { node, x } at the end of ret" }, { "code": null, "e": 3332, "s": 3277, "text": "Otherwiselow[node] := minimum of low[node] and disc[x]" }, { "code": null, "e": 3342, "s": 3332, "text": "Otherwise" }, { "code": null, "e": 3388, "s": 3342, "text": "low[node] := minimum of low[node] and disc[x]" }, { "code": null, "e": 3434, "s": 3388, "text": "low[node] := minimum of low[node] and disc[x]" }, { "code": null, "e": 3763, "s": 3434, "text": "From the main method, do the following −set size of disc as n + 1set size of low as n + 1time := 0Define an array of lists of size graph n + 1for initialize i := 0, when i < size of v, update (increase i by 1), do −u := v[i, 0]w := v[i, 1]insert w at the end of graph[u]insert u at the end of graph[w]dfs(0, -1, graph)return ret" }, { "code": null, "e": 3804, "s": 3763, "text": "From the main method, do the following −" }, { "code": null, "e": 3830, "s": 3804, "text": "set size of disc as n + 1" }, { "code": null, "e": 3856, "s": 3830, "text": "set size of disc as n + 1" }, { "code": null, "e": 3881, "s": 3856, "text": "set size of low as n + 1" }, { "code": null, "e": 3906, "s": 3881, "text": "set size of low as n + 1" }, { "code": null, "e": 3916, "s": 3906, "text": "time := 0" }, { "code": null, "e": 3926, "s": 3916, "text": "time := 0" }, { "code": null, "e": 3971, "s": 3926, "text": "Define an array of lists of size graph n + 1" }, { "code": null, "e": 4016, "s": 3971, "text": "Define an array of lists of size graph n + 1" }, { "code": null, "e": 4176, "s": 4016, "text": "for initialize i := 0, when i < size of v, update (increase i by 1), do −u := v[i, 0]w := v[i, 1]insert w at the end of graph[u]insert u at the end of graph[w]" }, { "code": null, "e": 4250, "s": 4176, "text": "for initialize i := 0, when i < size of v, update (increase i by 1), do −" }, { "code": null, "e": 4263, "s": 4250, "text": "u := v[i, 0]" }, { "code": null, "e": 4276, "s": 4263, "text": "u := v[i, 0]" }, { "code": null, "e": 4289, "s": 4276, "text": "w := v[i, 1]" }, { "code": null, "e": 4302, "s": 4289, "text": "w := v[i, 1]" }, { "code": null, "e": 4334, "s": 4302, "text": "insert w at the end of graph[u]" }, { "code": null, "e": 4366, "s": 4334, "text": "insert w at the end of graph[u]" }, { "code": null, "e": 4398, "s": 4366, "text": "insert u at the end of graph[w]" }, { "code": null, "e": 4430, "s": 4398, "text": "insert u at the end of graph[w]" }, { "code": null, "e": 4448, "s": 4430, "text": "dfs(0, -1, graph)" }, { "code": null, "e": 4466, "s": 4448, "text": "dfs(0, -1, graph)" }, { "code": null, "e": 4477, "s": 4466, "text": "return ret" }, { "code": null, "e": 4488, "s": 4477, "text": "return ret" }, { "code": null, "e": 4558, "s": 4488, "text": "Let us see the following implementation to get better 
understanding −" }, { "code": null, "e": 4569, "s": 4558, "text": " Live Demo" }, { "code": null, "e": 6072, "s": 4569, "text": "#include <bits/stdc++.h>\nusing namespace std;\nvoid print_vector(vector<vector<auto> > v){\n cout << \"[\";\n for(int i = 0; i<v.size(); i++){\n cout << \"[\";\n for(int j = 0; j <v[i].size(); j++){\n cout << v[i][j] << \", \";\n }\n cout << \"],\";\n }\n cout << \"]\"<<endl;\n}\nclass Solution {\n public:\n set<int> visited;\n vector<int> disc;\n vector<int> low;\n int time;\n vector<vector<int> > ret;\n void dfs(int node, int par, vector<int> graph[]) {\n if (visited.count(node))\n return;\n visited.insert(node);\n disc[node] = low[node] = time;\n time++;\n for (int x : graph[node]) {\n if (x == par)\n continue;\n if (!visited.count(x)) {\n dfs(x, node, graph);\n low[node] = min(low[node], low[x]);\n if (disc[node] < low[x]) {\n ret.push_back({ node, x });\n }\n } else{\n low[node] = min(low[node], disc[x]);\n }\n }\n }\n vector<vector<int> > criticalConnections(int n, vector<vector<int> >& v) {\n disc.resize(n + 1);\n low.resize(n + 1);\n time = 0;\n vector<int> graph[n + 1];\n for (int i = 0; i < v.size(); i++) {\n int u = v[i][0];\n int w = v[i][1];\n graph[u].push_back(w);\n graph[w].push_back(u);\n }\n dfs(0, -1, graph);\n return ret;\n }\n};\nmain(){\n Solution ob;\n vector<vector<int>> v = {{0,1},{1,2},{2,0},{1,3}};\n print_vector(ob.criticalConnections(4,v));\n}" }, { "code": null, "e": 6101, "s": 6072, "text": "4, {{0,1},{1,2},{2,0},{1,3}}" }, { "code": null, "e": 6113, "s": 6101, "text": "[[1, 3, ],]" } ]
reflect.StructOf() Function in Golang with Examples
28 Apr, 2020

The Go language provides inbuilt support for run-time reflection, allowing a program to manipulate objects of arbitrary types, with the help of the reflect package. The reflect.StructOf() function in Golang is used to get the struct type containing the given fields. To access this function, one needs to import the reflect package in the program.

Syntax:

func StructOf(fields []StructField) Type

Parameters: This function takes only one parameter, a slice of StructField values (fields).

Return Value: This function returns the struct type containing the given fields.

Below examples illustrate the use of the above method in Golang:

Example 1:

// Golang program to illustrate
// the reflect.StructOf() function
package main

import (
    "fmt"
    "reflect"
)

// Main function
func main() {

    // use of the StructOf method
    typ := reflect.StructOf([]reflect.StructField{
        {
            Name: "Height",
            Type: reflect.TypeOf(float64(0)),
        },
        {
            Name: "Name",
            Type: reflect.TypeOf("abc"),
        },
    })

    fmt.Println(typ)
}

Output:

struct { Height float64; Name string }

Example 2:

// Golang program to illustrate
// the reflect.StructOf() function
package main

import (
    "fmt"
    "reflect"
)

// Main function
func main() {

    // use of the StructOf method, this time with struct tags
    tt := reflect.StructOf([]reflect.StructField{
        {
            Name: "Height",
            Type: reflect.TypeOf(0.0),
            Tag:  `json:"height"`,
        },
        {
            Name: "Name",
            Type: reflect.TypeOf("abc"),
            Tag:  `json:"name"`,
        },
    })

    fmt.Println(tt.NumField())
    fmt.Println(tt.Field(0))
    fmt.Println(tt.Field(1))
}

Output:

2
{Height float64 json:"height" 0 [0] false}
{Name string json:"name" 8 [1] false}
[ { "code": null, "e": 25813, "s": 25785, "text": "\n28 Apr, 2020" }, { "code": null, "e": 26162, "s": 25813, "text": "Go language provides inbuilt support implementation of run-time reflection and allowing a program to manipulate objects with arbitrary types with the help of reflect package. The reflect.StructOf() Function in Golang is used to get the struct type containing fields. To access this function, one needs to imports the reflect package in the program." }, { "code": null, "e": 26170, "s": 26162, "text": "Syntax:" }, { "code": null, "e": 26212, "s": 26170, "text": "func StructOf(fields []StructField) Type\n" }, { "code": null, "e": 26291, "s": 26212, "text": "Parameters: This function takes only one parameters of StructFields( fields )." }, { "code": null, "e": 26362, "s": 26291, "text": "Return Value: This function returns the struct type containing fields." }, { "code": null, "e": 26423, "s": 26362, "text": "Below examples illustrate the use of above method in Golang:" }, { "code": null, "e": 26434, "s": 26423, "text": "Example 1:" }, { "code": "// Golang program to illustrate// reflect.SliceOf() Function package main import ( \"fmt\" \"reflect\") // Main functionfunc main() { // use of StructOf method typ := reflect.StructOf([]reflect.StructField{ { Name: \"Height\", Type: reflect.TypeOf(float64(0)), }, { Name: \"Name\", Type: reflect.TypeOf(\"abc\"), }, }) fmt.Println(typ) }", "e": 26861, "s": 26434, "text": null }, { "code": null, "e": 26869, "s": 26861, "text": "Output:" }, { "code": null, "e": 26909, "s": 26869, "text": "struct { Height float64; Name string }\n" }, { "code": null, "e": 26920, "s": 26909, "text": "Example 2:" }, { "code": "// Golang program to illustrate// reflect.SliceOf() Function package main import ( \"fmt\" \"reflect\") // Main functionfunc main() { // use of StructOf method tt:= reflect.StructOf([]reflect.StructField{ { Name: \"Height\", Type: reflect.TypeOf(0.0), Tag: `json:\"height\"`, }, { Name: \"Name\", Type: reflect.TypeOf(\"abc\"), Tag: `json:\"name\"`, }, }) fmt.Println(tt.NumField()) fmt.Println(tt.Field(0)) fmt.Println(tt.Field(1)) }", "e": 27471, "s": 26920, "text": null }, { "code": null, "e": 27479, "s": 27471, "text": "Output:" }, { "code": null, "e": 27565, "s": 27479, "text": "2\n{Height float64 json:\"height\" 0 [0] false}\n{Name string json:\"name\" 8 [1] false}\n" }, { "code": null, "e": 27580, "s": 27565, "text": "Golang-reflect" }, { "code": null, "e": 27592, "s": 27580, "text": "Go Language" }, { "code": null, "e": 27690, "s": 27592, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27708, "s": 27690, "text": "Strings in Golang" }, { "code": null, "e": 27733, "s": 27708, "text": "Time Durations in Golang" }, { "code": null, "e": 27762, "s": 27733, "text": "How to Parse JSON in Golang?" }, { "code": null, "e": 27783, "s": 27762, "text": "Structures in Golang" }, { "code": null, "e": 27807, "s": 27783, "text": "Defer Keyword in Golang" }, { "code": null, "e": 27862, "s": 27807, "text": "How to iterate over an Array using for loop in Golang?" }, { "code": null, "e": 27889, "s": 27862, "text": "Class and Object in Golang" }, { "code": null, "e": 27910, "s": 27889, "text": "Loops in Go Language" }, { "code": null, "e": 27956, "s": 27910, "text": "6 Best Books to Learn Go Programming Language" } ]
C | Macro & Preprocessor | Question 4 - GeeksforGeeks
04 Feb, 2013

#include <stdio.h>
#define X 3
#if !X
    printf("Geeks");
#else
    printf("Quiz");
#endif
int main()
{
    return 0;
}

(A) Geeks
(B) Quiz
(C) Compiler Error
(D) Runtime Error

Answer: (C)

Explanation: A program is converted to an executable using the following steps:

1) Preprocessing
2) C code to object code conversion (compilation)
3) Linking

The first step processes macros, so the code is converted to the following after the preprocessing step:

printf("Quiz");
int main()
{
    return 0;
}

The above code produces an error because printf() is called outside main. The following program works fine and prints “Quiz”:

#include <stdio.h>
#define X 3

int main()
{
#if !X
    printf("Geeks");
#else
    printf("Quiz");
#endif
    return 0;
}
[ { "code": null, "e": 25693, "s": 25665, "text": "\n04 Feb, 2013" }, { "code": "#include <stdio.h>#define X 3#if !X printf(\"Geeks\");#else printf(\"Quiz\"); #endifint main(){ return 0;}", "e": 25811, "s": 25693, "text": null }, { "code": null, "e": 25946, "s": 25811, "text": "(A) Geeks(B) Quiz(C) Compiler Error(D) Runtime ErrorAnswer: (C)Explanation: A program is converted to executable using following steps" }, { "code": null, "e": 25963, "s": 25946, "text": "1) Preprocessing" }, { "code": null, "e": 25999, "s": 25963, "text": "2) C code to object code conversion" }, { "code": null, "e": 26010, "s": 25999, "text": "3) Linking" }, { "code": null, "e": 26111, "s": 26010, "text": "The first step processes macros. So the code is converted to following after the preprocessing step." }, { "code": null, "e": 26161, "s": 26111, "text": "printf(\"Quiz\");\nint main()\n{\n return 0;\n}\n" }, { "code": null, "e": 26283, "s": 26161, "text": "The above code produces error because printf() is called outside main. The following program works fine and prints “Quiz”" }, { "code": null, "e": 26398, "s": 26283, "text": "#include \n#define X 3\n\nint main()\n{\n#if !X\n printf(\"Geeks\");\n#else\n printf(\"Quiz\");\n\n#endif\n return 0;\n}\n" }, { "code": null, "e": 26421, "s": 26398, "text": "C-Macro & Preprocessor" }, { "code": null, "e": 26442, "s": 26421, "text": "Macro & Preprocessor" }, { "code": null, "e": 26453, "s": 26442, "text": "C Language" }, { "code": null, "e": 26460, "s": 26453, "text": "C Quiz" }, { "code": null, "e": 26558, "s": 26460, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26593, "s": 26558, "text": "Multidimensional Arrays in C / C++" }, { "code": null, "e": 26639, "s": 26593, "text": "Left Shift and Right Shift Operators in C/C++" }, { "code": null, "e": 26661, "s": 26639, "text": "Function Pointer in C" }, { "code": null, "e": 26701, "s": 26661, "text": "Core Dump (Segmentation fault) in C/C++" }, { "code": null, "e": 26729, "s": 26701, "text": "rand() and srand() in C/C++" }, { "code": null, "e": 26772, "s": 26729, "text": "Operator Precedence and Associativity in C" }, { "code": null, "e": 26803, "s": 26772, "text": "C | File Handling | Question 1" }, { "code": null, "e": 26827, "s": 26803, "text": "C | Arrays | Question 7" }, { "code": null, "e": 26849, "s": 26827, "text": "C | Misc | Question 7" } ]
C Program for Sum the digits of a given number
19 Jul, 2021

Given a number, find the sum of its digits.

Example:

Input : n = 687
Output : 21

Input : n = 12
Output : 3

1. Iterative:

// C program to compute the sum of digits in a number
#include <stdio.h>

/* Function to get sum of digits */
int getSum(int n)
{
    int sum = 0;
    while (n != 0) {
        sum = sum + n % 10;   // add the last digit
        n = n / 10;           // drop the last digit
    }
    return sum;
}

int main()
{
    int n = 687;
    printf(" %d ", getSum(n));
    return 0;
}

Time Complexity: O(log10 n)
Auxiliary Space: O(1)

How to compute it in a single line?

#include <stdio.h>

/* Function to get sum of digits */
int getSum(int n)
{
    int sum;
    /* Single line that calculates the sum */
    for (sum = 0; n > 0; sum += n % 10, n /= 10);
    return sum;
}

int main()
{
    int n = 687;
    printf(" %d ", getSum(n));
    return 0;
}

2. Recursive:

#include <stdio.h>

int sumDigits(int no)
{
    // Peel off the last digit and recurse on the rest
    return no == 0 ? 0 : no % 10 + sumDigits(no / 10);
}

int main(void)
{
    printf("%d", sumDigits(687));
    return 0;
}

Please refer to the complete article on Program for Sum the digits of a given number for more details!
[ { "code": null, "e": 26200, "s": 26172, "text": "\n19 Jul, 2021" }, { "code": null, "e": 26251, "s": 26200, "text": "Given a number, find sum of its digits.Example : " }, { "code": null, "e": 26306, "s": 26251, "text": "Input : n = 687\nOutput : 21\n\nInput : n = 12\nOutput : 3" }, { "code": null, "e": 26321, "s": 26306, "text": "1. Iterative: " }, { "code": null, "e": 26323, "s": 26321, "text": "C" }, { "code": null, "e": 26325, "s": 26323, "text": "C" }, { "code": "// C program to compute sum of digits in// number.# include<stdio.h> /* Function to get sum of digits */int getSum(int n){ int sum = 0; while (n != 0) { sum = sum + n % 10; n = n/10; } return sum;} int main(){ int n = 687; printf(\" %d \", getSum(n)); return 0;}", "e": 26611, "s": 26325, "text": null }, { "code": null, "e": 26638, "s": 26611, "text": "Time Complexity: O(log10n)" }, { "code": null, "e": 26660, "s": 26638, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 26691, "s": 26660, "text": "How to compute in single line?" }, { "code": null, "e": 26693, "s": 26691, "text": "C" }, { "code": null, "e": 26695, "s": 26693, "text": "c" }, { "code": "# include<stdio.h>/* Function to get sum of digits */int getSum(int n){ int sum; /* Single line that calculates sum */ for (sum=0; n > 0; sum+=n%10,n/=10); return sum;} int main(){ int n = 687; printf(\" %d \", getSum(n)); return 0;}", "e": 26944, "s": 26695, "text": null }, { "code": null, "e": 26957, "s": 26944, "text": "2. Recursive" }, { "code": null, "e": 26959, "s": 26957, "text": "C" }, { "code": null, "e": 26961, "s": 26959, "text": "c" }, { "code": "int sumDigits(int no){ return no == 0 ? 0 : no%10 + sumDigits(no/10) ;} int main(void){ printf(\"%d\", sumDigits(687)); return 0;}", "e": 27098, "s": 26961, "text": null }, { "code": null, "e": 27194, "s": 27098, "text": "Please refer complete article on Program for Sum the digits of a given number for more details!" }, { "code": null, "e": 27210, "s": 27194, "text": "souravmahato348" }, { "code": null, "e": 27221, "s": 27210, "text": "C Programs" }, { "code": null, "e": 27319, "s": 27221, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27360, "s": 27319, "text": "C Program to read contents of Whole File" }, { "code": null, "e": 27391, "s": 27360, "text": "Producer Consumer Problem in C" }, { "code": null, "e": 27432, "s": 27391, "text": "C program to find the length of a string" }, { "code": null, "e": 27466, "s": 27432, "text": "Exit codes in C/C++ with Examples" }, { "code": null, "e": 27556, "s": 27466, "text": "Handling multiple clients on server with multithreading using Socket Programming in C/C++" }, { "code": null, "e": 27581, "s": 27556, "text": "Regular expressions in C" }, { "code": null, "e": 27652, "s": 27581, "text": "C / C++ Program for Dijkstra's shortest path algorithm | Greedy Algo-7" }, { "code": null, "e": 27718, "s": 27652, "text": "Create n-child process from same parent process using fork() in C" }, { "code": null, "e": 27765, "s": 27718, "text": "Conditional wait and signal in multi-threading" } ]
Triplets with sum with given range
Given an array Arr[] of N distinct integers and a range from L to R, the task is to count the number of triplets having a sum in the range [L, R].

Example 1:

Input:
N = 4
Arr = {8, 3, 5, 2}
L = 7, R = 11
Output: 1
Explanation: There is only one triplet {2, 3, 5} having sum 10 in range [7, 11].

Example 2:

Input:
N = 5
Arr = {5, 1, 4, 3, 2}
L = 2, R = 7
Output: 2
Explanation: There are two triplets having sum in range [2, 7]: {1,4,2} and {1,3,2}.

Your Task:
You don't need to read input or print anything. Your task is to complete the function countTriplets() which takes the array Arr[] and its size N and L and R as input parameters and returns the count.

Expected Time Complexity: O(N^2)
Expected Auxiliary Space: O(1)

Constraints:
1 ≤ N ≤ 10^3
1 ≤ Arr[i] ≤ 10^3
1 ≤ L ≤ R ≤ 10^9

+3 kaustubhdwivedi1729, 5 months ago

Approach (after sorting the array):

Step 1: Find the number of triplets having sum < L, call it C1.
Step 2: Find the number of triplets having sum > R, call it C2.
Step 3: Find the total number of possible triplets = N*(N-1)*(N-2)/6.
Step 4: The answer = Total - (C1 + C2).

Calculating C1:

int findSmallerthan(int arr[], int n, int L){
    int count = 0;
    for(int i = 0; i < n; i++){
        int j = i+1, k = n-1;
        while(j < k){
            int sum = arr[i] + arr[j] + arr[k];
            if(sum < L){
                count += k - j;
                j++;
            }
            if(sum >= L){
                k--;
            }
        }
    }
    return count;
}

Calculating C2:

int findGreaterthan(int arr[], int n, int R){
    int count = 0;
    for(int i = 0; i < n; i++){
        int j = i+1, k = n-1;
        while(j < k){
            int sum = arr[i] + arr[j] + arr[k];
            if(sum > R){
                count += k - j;
                k--;
            }
            if(sum <= R){
                j++;
            }
        }
    }
    return count;
}

0 sourabhch1331, 5 months ago

A similar Codeforces question: https://codeforces.com/problemset/problem/1538/C

+2 mastermind_, 5 months ago

(C++) Two-pointers shorter solution, ~O(N*N) time, O(1) space:

int countTriplets(int arr[], int n, int L, int R) {
    sort(arr, arr + n);
    auto func = [&](int k) {
        int cnt = 0;
        for (int i = 0; i < n; i++) {
            int l = i + 1, r = n - 1;
            while (l < r) {
                if (arr[i] + arr[l] + arr[r] > k) r--;
                else cnt += r - l, l++;
            }
        }
        return cnt;
    };
    return func(R) - func(L - 1);
}

+8 abhishekshyam, 5 months ago

Idea behind the solution:

1. Find the triplets with sum lesser than or equal to R.
2. Find the triplets with sum lesser than or equal to L-1.
3. Subtract the two counts to get the number of triplets desired.

Approach:

Make a function to find all triplets with sum lesser than or equal to a certain target, say tar. Sorting is necessary for this. Solve the "triplets having a particular sum" problem and alter it. One thing that is different: when sum ≤ tar, we increase the count by r-l. Take an example, 1 2 3 4, with l pointing to 2 and r pointing to 4, and the first element fixed at 1. Once 1 2 4 qualifies as a triplet within the limit, notice that 1 2 3 also makes a triplet within the limit, which we want. r-l => 4-2 = 2 counts both at once without redoing that part, and we can move l by +1. If the sum is greater than the given value, just do r--.

Intuition:

The trick return countLesserEqual(Arr, N, R) - countLesserEqual(Arr, N, L-1); must be known. Finding triplets for a given sum must be known.

Complexity: O(n^2) time, O(1) space

class Solution {
public:
    int countLesserEqual(int Arr[], int N, int tar)
    {
        int c = 0;
        for(int i = 0; i < N-1; i++)
        {
            int l = i+1;
            int r = N-1;
            while(l < r)
            {
                int sum = Arr[i] + Arr[l] + Arr[r];
                if(sum <= tar)
                {
                    c += r-l;
                    l++;
                }
                else
                {
                    r--;
                }
            }
        }
        return c;
    }
    int countTriplets(int Arr[], int N, int L, int R)
    {
        sort(Arr, Arr+N);
        return countLesserEqual(Arr,N,R) - countLesserEqual(Arr,N,L-1);
    }
};

0 soumyaranjansatapathy, 5 months ago

This is a JAVA solution:

class Solution {
    public static int countTripletsLessThan(int[] arr, int n, int val)
    {
        Arrays.sort(arr);
        int ans = 0;
        int j, k;
        int sum;
        for (int i = 0; i < n - 2; i++) {
            j = i + 1;
            k = n - 1;
            while (j != k) {
                sum = arr[i] + arr[j] + arr[k];
                if (sum > val)
                    k--;
                else {
                    ans += (k - j);
                    j++;
                }
            }
        }
        return ans;
    }
    public static int countTriplets(int arr[], int n, int a, int b)
    {
        int res;
        res = countTripletsLessThan(arr, n, b) - countTripletsLessThan(arr, n, a - 1);
        return res;
    }
}

+5 udaytyagi0123, 5 months ago

int count(int Arr[], int N, int value)
{
    if(N < 2)
        return 0;
    sort(Arr, Arr+N);
    int count = 0;
    for(int i = 0; i < N-2; i++)
    {
        int low = i+1;
        int high = N-1;
        while(low < high)
        {
            int target = Arr[i] + Arr[low] + Arr[high];
            if(target > value)
            {
                high--;
            }
            else
            {
                count = count + (high - low);
                low++;
            }
        }
    }
    return count;
}
int countTriplets(int Arr[], int N, int L, int R)
{
    int left = count(Arr, N, L-1);
    int right = count(Arr, N, R);
    return right - left;
}

+1 manojkanakala444, 5 months ago

C++ code:

int count(int arr[], int n, int val)
{
    sort(arr, arr+n);
    int ans = 0;
    for(int i = 0; i < n-2; i++)
    {
        int j = i+1;
        int k = n-1;
        while(j != k)
        {
            int sum = arr[i] + arr[j] + arr[k];
            if(sum > val)
                k--;
            else
            {
                ans += (k-j);
                j++;
            }
        }
    }
    return ans;
}
int countTriplets(int arr[], int n, int L, int R)
{
    // code here
    int x = count(arr, n, R);
    int y = count(arr, n, L-1);
    return x - y;
}

0 rajn14964, 6 months ago

int helper(int arr[], int n, int x)
{
    sort(arr, arr+n);
    int count = 0;
    for(int i = n-1; i >= 2; i--)
    {
        int a = 0, b = i-1;
        while(a < b)
        {
            if(arr[a] + arr[b] + arr[i] > x)
                b--;
            else
            {
                count = count + b - a;
                a++;
            }
        }
    }
    return count;
}
int countTriplets(int arr[], int N, int L, int R)
{
    return helper(arr, N, R) - helper(arr, N, L-1);
}

0 Himanshu Sahu, 9 months ago

Java solution:

class Solution {
    static int countTriplets(int arr[], int n, int L, int R) {
        Arrays.sort(arr);
        return countSum(arr, n, R) - countSum(arr, n, L - 1);
    }
    static int countSum(int[] arr, int n, int final_sum) {
        int count = 0;
        for (int i = 0; i < n - 2; i++) {
            int left = i + 1;
            int right = n - 1;
            while (left < right) {
                int sum = arr[i] + arr[left] + arr[right];
                if (sum > final_sum) {
                    right--;
                } else {
                    count += (right - left);
                    left++;
                }
            }
        }
        return count;
    }
}
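For reference, here is a minimal self-contained sketch of the counting idea the top comments describe — sort the array, count triplets with sum ≤ R, and subtract those with sum ≤ L − 1. The helper name countAtMost is our own, and the long long widening is a precaution rather than something the comments use:

#include <algorithm>
#include <iostream>

// Count triplets (i < l < r) whose sum is <= limit, assuming arr is sorted.
int countAtMost(int arr[], int n, long long limit) {
    int count = 0;
    for (int i = 0; i < n - 2; i++) {
        int l = i + 1, r = n - 1;
        while (l < r) {
            long long sum = (long long)arr[i] + arr[l] + arr[r];
            if (sum > limit) {
                r--;             // sum too large: shrink from the right
            } else {
                count += r - l;  // arr[i]+arr[l]+arr[k] <= limit for every k in (l, r]
                l++;
            }
        }
    }
    return count;
}

int countTriplets(int arr[], int n, int L, int R) {
    std::sort(arr, arr + n);
    return countAtMost(arr, n, R) - countAtMost(arr, n, L - 1);
}

int main() {
    int arr[] = {5, 1, 4, 3, 2};
    std::cout << countTriplets(arr, 5, 2, 7) << "\n";  // expected 2
    return 0;
}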
[ { "code": null, "e": 385, "s": 238, "text": "Given an array Arr[] of N distinct integers and a range from L to R, the task is to count the number of triplets having a sum in the range [L, R]." }, { "code": null, "e": 397, "s": 385, "text": "\nExample 1:" }, { "code": null, "e": 536, "s": 397, "text": "Input:\nN = 4\nArr = {8 , 3, 5, 2}\nL = 7, R = 11\nOutput: 1\nExplaination: There is only one triplet {2, 3, 5}\nhaving sum 10 in range [7, 11]." }, { "code": null, "e": 548, "s": 536, "text": "\nExample 2:" }, { "code": null, "e": 692, "s": 548, "text": "Input:\nN = 5\nArr = {5, 1, 4, 3, 2}\nL = 2, R = 7\nOutput: 2\nExplaination: There two triplets having \nsum in range [2, 7] are {1,4,2} and {1,3,2}." }, { "code": null, "e": 904, "s": 692, "text": "\nYour Task:\nYou don't need to read input or print anything. Your task is to complete the function countTriplets() which takes the array Arr[] and its size N and L and R as input parameters and returns the count." }, { "code": null, "e": 968, "s": 904, "text": "\nExpected Time Complexity: O(N2)\nExpected Auxiliary Space: O(1)" }, { "code": null, "e": 1027, "s": 968, "text": "\nConstraints:\n1 ≤ N ≤ 103\n1 ≤ Arr[i] ≤ 103\n1 ≤ L ≤ R ≤ 109" }, { "code": null, "e": 1030, "s": 1027, "text": "+3" }, { "code": null, "e": 1062, "s": 1030, "text": "kaustubhdwivedi17295 months ago" }, { "code": null, "e": 1073, "s": 1062, "text": "Approach :" }, { "code": null, "e": 1119, "s": 1073, "text": "Step1 : Find triplets having sum < L = C1" }, { "code": null, "e": 1164, "s": 1119, "text": "Step2 : Find triplets having sum > R = C2" }, { "code": null, "e": 1225, "s": 1164, "text": "Step3 : Find Total Possible triplets = N*(N-1)*(N-2) / 6" }, { "code": null, "e": 1268, "s": 1225, "text": "Step4 : Find answer = Total - (C1+ C2) " }, { "code": null, "e": 1283, "s": 1268, "text": "Calculating C1" }, { "code": null, "e": 1738, "s": 1283, "text": "\n int findSmallerthan(int arr[], int n, int L){\n int count = 0;\n for(int i = 0; i < n; i++){\n int j = i+1, k = n-1;\n while(j < k){\n int sum = arr[i] + arr[j] + arr[k];\n if(sum < L){\n count += k - j;\n j++;\n }\n if(sum >= L){\n k--;\n }\n }\n }\n return count;\n }" }, { "code": null, "e": 1757, "s": 1742, "text": "Calculating C2" }, { "code": null, "e": 2207, "s": 1757, "text": "int findGreaterthan(int arr[], int n, int R){\n int count = 0;\n for(int i = 0; i < n; i++){\n int j = i+1, k = n-1;\n while(j < k){\n int sum = arr[i] + arr[j] + arr[k];\n if(sum > R){\n count += k - j;\n k--;\n }\n if(sum <= R){\n j++;\n }\n }\n }\n return count;\n }" }, { "code": null, "e": 2209, "s": 2207, "text": "0" }, { "code": null, "e": 2235, "s": 2209, "text": "sourabhch13315 months ago" }, { "code": null, "e": 2308, "s": 2235, "text": "similar codeforces que: https://codeforces.com/problemset/problem/1538/C" }, { "code": null, "e": 2310, "s": 2308, "text": "0" }, { "code": null, "e": 2317, "s": 2310, "text": "ududu7" }, { "code": null, "e": 2343, "s": 2317, "text": "This comment was deleted." 
}, { "code": null, "e": 2346, "s": 2343, "text": "+2" }, { "code": null, "e": 2370, "s": 2346, "text": "mastermind_5 months ago" }, { "code": null, "e": 2434, "s": 2370, "text": "( C++ ) 2 Pointers Shorter Solution, ~ O(N*N) Time , O(1) Space" }, { "code": null, "e": 2845, "s": 2434, "text": "int countTriplets(int arr[], int n, int L, int R) {\n sort(arr, arr + n);\n auto func = [&](int k) {\n int cnt = 0;\n for (int i = 0; i < n; i++) {\n int l = i + 1, r = n - 1;\n while (l < r) {\n if (arr[i] + arr[l] + arr[r] > k)r--;\n else cnt += r - l, l++;\n }\n }\n return cnt;\n };\n return func(R) - func(L - 1);\n}" }, { "code": null, "e": 2848, "s": 2845, "text": "+8" }, { "code": null, "e": 2874, "s": 2848, "text": "abhishekshyam5 months ago" }, { "code": null, "e": 2898, "s": 2874, "text": "Idea behind solutions: " }, { "code": null, "e": 3028, "s": 2898, "text": "Find triplets lesser than or equal to R Find triplets lesser than or equal to L-1Subtract both to get number of triplets desired " }, { "code": null, "e": 3069, "s": 3028, "text": "Find triplets lesser than or equal to R " }, { "code": null, "e": 3111, "s": 3069, "text": "Find triplets lesser than or equal to L-1" }, { "code": null, "e": 3160, "s": 3111, "text": "Subtract both to get number of triplets desired " }, { "code": null, "e": 3171, "s": 3160, "text": "Approach: " }, { "code": null, "e": 3810, "s": 3171, "text": "Make function to find all triplets lesser than or equal to certain target say tar. Sorting is necessary for the same. Solve triplets having particular sum problem, and alter the same. One thing that is different is when sum≤tar , we increase count by r-l. Take an example, 1 2 3 4 , l points to = 2, r points to = 4. For this pair say we are finding lesser than 6, Now sum 2+4=> 6, so we reach conclusion that 1 2 4 makes a triplet. But notice that 1 2 3 also makes a triplet lesser than given value, which we want. r-l => 4-2 = 2 solves the issue of redoing that part and we can move l by +1. If sum greater than given values, just r--." }, { "code": null, "e": 3929, "s": 3810, "text": "Make function to find all triplets lesser than or equal to certain target say tar. Sorting is necessary for the same. " }, { "code": null, "e": 4407, "s": 3929, "text": "Solve triplets having particular sum problem, and alter the same. One thing that is different is when sum≤tar , we increase count by r-l. Take an example, 1 2 3 4 , l points to = 2, r points to = 4. For this pair say we are finding lesser than 6, Now sum 2+4=> 6, so we reach conclusion that 1 2 4 makes a triplet. But notice that 1 2 3 also makes a triplet lesser than given value, which we want. r-l => 4-2 = 2 solves the issue of redoing that part and we can move l by +1. " }, { "code": null, "e": 4451, "s": 4407, "text": "If sum greater than given values, just r--." }, { "code": null, "e": 4464, "s": 4451, "text": "Intuition: " }, { "code": null, "e": 4596, "s": 4464, "text": "The trick for return countLesserEqual(Arr,N,R) - countLesserEqual(Arr,N,L-1); must be known.Finding triplets for sum must be known." }, { "code": null, "e": 4689, "s": 4596, "text": "The trick for return countLesserEqual(Arr,N,R) - countLesserEqual(Arr,N,L-1); must be known." }, { "code": null, "e": 4729, "s": 4689, "text": "Finding triplets for sum must be known." 
}, { "code": null, "e": 4767, "s": 4731, "text": "Complexity: O(n^2) time, O(1) space" }, { "code": null, "e": 5502, "s": 4767, "text": "class Solution {\n public:\n int countLesserEqual(int Arr[], int N, int tar)\n {\n int c=0;\n for(int i=0;i<N-1;i++)\n {\n int l=i+1;\n int r=N-1;\n while(l<r)\n {\n int sum = Arr[i]+Arr[l]+Arr[r];\n if(sum <= tar)\n {\n c+=r-l;\n l++;\n }\n else\n {\n r--;\n }\n }\n }\n // cout<<c<<endl;\n return c;\n }\n \n int countTriplets(int Arr[], int N, int L, int R) {\n \n sort(Arr,Arr+N);\n return countLesserEqual(Arr,N,R) - countLesserEqual(Arr,N,L-1);\n \n }\n};" }, { "code": null, "e": 5506, "s": 5504, "text": "0" }, { "code": null, "e": 5540, "s": 5506, "text": "soumyaranjansatapathy5 months ago" }, { "code": null, "e": 5562, "s": 5540, "text": "this is JAVA solution" }, { "code": null, "e": 5682, "s": 5564, "text": "class Solution { public static int countTripletsLessThan(int[] arr, int n, int val) { Arrays.sort(arr);" }, { "code": null, "e": 5709, "s": 5682, "text": " int ans = 0;" }, { "code": null, "e": 5726, "s": 5709, "text": " int j, k;" }, { "code": null, "e": 5746, "s": 5726, "text": " int sum;" }, { "code": null, "e": 5794, "s": 5746, "text": " for (int i = 0; i < n - 2; i++) {" }, { "code": null, "e": 5837, "s": 5794, "text": " j = i + 1; k = n - 1;" }, { "code": null, "e": 5920, "s": 5837, "text": " while (j != k) { sum = arr[i] + arr[j] + arr[k];" }, { "code": null, "e": 5970, "s": 5920, "text": " if (sum > val) k--;" }, { "code": null, "e": 6099, "s": 5970, "text": " else { ans += (k - j); j++; } } }" }, { "code": null, "e": 6122, "s": 6099, "text": " return ans; }" }, { "code": null, "e": 6194, "s": 6122, "text": " public static int countTriplets(int arr[], int n, int a, int b) {" }, { "code": null, "e": 6216, "s": 6194, "text": " int res;" }, { "code": null, "e": 6321, "s": 6216, "text": " res = countTripletsLessThan(arr, n, b) - countTripletsLessThan(arr, n, a - 1);" }, { "code": null, "e": 6346, "s": 6321, "text": " return res; }} " }, { "code": null, "e": 6349, "s": 6346, "text": "+5" }, { "code": null, "e": 6375, "s": 6349, "text": "udaytyagi01235 months ago" }, { "code": null, "e": 7064, "s": 6375, "text": "int count(int Arr[],int N,int value) { if(N<2) return 0; sort(Arr,Arr+N); int count=0; for(int i=0;i<N-2;i++) { int low=i+1; int high=N-1; while(low<high) { int target=Arr[i]+Arr[low]+Arr[high]; if(target>value) { high--; } else { count=count+(high-low); low++; } } } return count; } int countTriplets(int Arr[], int N, int L, int R) { int left=count(Arr,N,L-1); int right=count(Arr,N,R); return right-left; }" }, { "code": null, "e": 7067, "s": 7064, "text": "+1" }, { "code": null, "e": 7096, "s": 7067, "text": "manojkanakala4445 months ago" }, { "code": null, "e": 7106, "s": 7096, "text": "C++ code " }, { "code": null, "e": 7748, "s": 7106, "text": "int count(int arr[],int n,int val)\n {\n sort(arr,arr+n);\n int ans=0;\n for(int i=0;i<n-2;i++)\n {\n int j=i+1;\n int k=n-1;\n while(j!=k)\n {\n int sum = arr[i]+arr[j]+arr[k];\n if(sum>val)\n k--;\n else\n {\n ans += (k-j);\n j++;\n }\n }\n }\n return ans;\n }\n int countTriplets(int arr[], int n, int L, int R) {\n // code here\n int x = count(arr,n,R);\n int y = count(arr,n,L-1);\n return x-y;\n }" }, { "code": null, "e": 7750, "s": 7748, "text": "0" }, { "code": null, "e": 7772, "s": 7750, "text": "rajn149646 months ago" }, { "code": null, "e": 8289, "s": 7772, "text": "int helper(int arr[],int n,int x) { sort(arr,arr+n); int count=0; for(int i=n-1;i>=2;i--) { int a=0,b=i-1; while(a<b) { 
if(arr[a]+arr[b]+arr[i]>x) b--; else { count=count+b-a; a++; } } } return count; } int countTriplets(int arr[], int N, int L, int R) { // code here fact(N); return(helper(arr,N,R)-helper(arr,N,L-1)); fact(N); }" }, { "code": null, "e": 8291, "s": 8289, "text": "0" }, { "code": null, "e": 8317, "s": 8291, "text": "Himanshu Sahu9 months ago" }, { "code": null, "e": 8331, "s": 8317, "text": "Himanshu Sahu" }, { "code": null, "e": 8345, "s": 8331, "text": "Java Solution" }, { "code": null, "e": 8975, "s": 8345, "text": "class Solution { static int countTriplets(int arr[], int n, int L, int R) { Arrays.sort(arr); return countSum(arr,n,R) - countSum(arr,n,L-1); } static int countSum(int[] arr,int n,int final_sum){ int count = 0; for(int i =0;i<n-2;i++){ int=\"\" left=\"i+1;\" int=\"\" right=\"n-1;\" while(left=\"\" <=\"\" right)=\"\" {=\"\" int=\"\" sum=\"arr[i]\" +=\"\" arr[left]=\"\" +=\"\" arr[right];=\"\" if(sum=\"\"> final_sum){ right--; }else { count += (right-left); left++; } } } return count; }}" }, { "code": null, "e": 9121, "s": 8975, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 9157, "s": 9121, "text": " Login to access your submissions. " }, { "code": null, "e": 9167, "s": 9157, "text": "\nProblem\n" }, { "code": null, "e": 9177, "s": 9167, "text": "\nContest\n" }, { "code": null, "e": 9240, "s": 9177, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 9388, "s": 9240, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 9596, "s": 9388, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 9702, "s": 9596, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
ASP.NET - Managing State
Hyper Text Transfer Protocol (HTTP) is a stateless protocol. When the client disconnects from the server, the ASP.NET engine discards the page objects. This way, each web application can scale up to serve numerous requests simultaneously without running out of server memory.

However, there needs to be some technique to store the information between requests and to retrieve it when required. This information, i.e., the current value of all the controls and variables for the current user in the current session, is called the State.

ASP.NET manages four types of states:

View State
Control State
Session State
Application State

The view state is the state of the page and all its controls. It is automatically maintained across posts by the ASP.NET framework.

When a page is sent back to the client, the changes in the properties of the page and its controls are determined, and stored in the value of a hidden input field named _VIEWSTATE. When the page is again posted back, the _VIEWSTATE field is sent to the server with the HTTP request.

The view state could be enabled or disabled for:

The entire application, by setting the EnableViewState property in the <pages> section of the web.config file.

A page, by setting the EnableViewState attribute of the Page directive, as <%@ Page Language="C#" EnableViewState="false" %>

A control, by setting the Control.EnableViewState property.

It is implemented using a view state object defined by the StateBag class, which defines a collection of view state items. The state bag is a data structure containing attribute value pairs, stored as strings associated with objects. The StateBag class provides properties and methods for adding, reading, and removing these items.

The following example demonstrates the concept of storing view state. Let us keep a counter, which is incremented each time the page is posted back by clicking a button on the page. A label control shows the value in the counter.

The markup file code is as follows:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="statedemo._Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >

   <head runat="server">
      <title>Untitled Page</title>
   </head>

   <body>
      <form id="form1" runat="server">
         <div>
            <h3>View State demo</h3>
            Page Counter:
            <asp:Label ID="lblCounter" runat="server" />
            <asp:Button ID="btnIncrement" runat="server" Text="Add Count" onclick="btnIncrement_Click" />
         </div>
      </form>
   </body>

</html>

The code behind file for the example is shown here:

public partial class _Default : System.Web.UI.Page
{
   public int counter
   {
      get
      {
         if (ViewState["pcounter"] != null)
         {
            return ((int)ViewState["pcounter"]);
         }
         else
         {
            return 0;
         }
      }
      set
      {
         ViewState["pcounter"] = value;
      }
   }

   protected void Page_Load(object sender, EventArgs e)
   {
      lblCounter.Text = counter.ToString();
      counter++;
   }
}

It would produce a page whose counter label increases by one on every click of the button.

Control state cannot be modified, accessed directly, or disabled.

When a user connects to an ASP.NET website, a new session object is created. When session state is turned on, a new session state object is created for each new request.
This session state object becomes part of the context and it is available through the page.

Session state is generally used for storing application data such as inventory, supplier list, customer record, or shopping cart. It can also keep information about the user and his preferences, and keep track of pending operations.

Sessions are identified and tracked with a 120-bit SessionID, which is passed from client to server and back as a cookie or a modified URL. The SessionID is globally unique and random.

The session state object is created from the HttpSessionState class, which defines a collection of session state items along with properties and methods for working with them.

The session state object stores information as name-value pairs. You could use the following code to store and retrieve a value:

void StoreSessionInfo()
{
   String fromuser = TextBox1.Text;
   Session["fromuser"] = fromuser;
}

void RetrieveSessionInfo()
{
   String fromuser = Session["fromuser"];
   Label1.Text = fromuser;
}

The above code stores only strings in the Session dictionary object; however, it can store all the primitive data types and arrays composed of primitive data types, the DataSet, DataTable, HashTable, and Image objects, as well as any user-defined class that implements the ISerializable interface.

The following example demonstrates the concept of storing session state. There are two buttons on the page, a text box to enter a string, and a label to display the text stored from the last session.

The markup file code is as follows:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >

   <head runat="server">
      <title>Untitled Page</title>
   </head>

   <body>
      <form id="form1" runat="server">
         <div>
            <table style="width: 568px; height: 103px">
               <tr>
                  <td style="width: 209px">
                     <asp:Label ID="lblstr" runat="server" Text="Enter a String" style="width:94px"></asp:Label>
                  </td>
                  <td style="width: 317px">
                     <asp:TextBox ID="txtstr" runat="server" style="width:227px"></asp:TextBox>
                  </td>
               </tr>
               <tr>
                  <td style="width: 209px"></td>
                  <td style="width: 317px"></td>
               </tr>
               <tr>
                  <td style="width: 209px">
                     <asp:Button ID="btnnrm" runat="server" Text="No action button" style="width:128px" />
                  </td>
                  <td style="width: 317px">
                     <asp:Button ID="btnstr" runat="server" OnClick="btnstr_Click" Text="Submit the String" />
                  </td>
               </tr>
               <tr>
                  <td style="width: 209px"></td>
                  <td style="width: 317px"></td>
               </tr>
               <tr>
                  <td style="width: 209px">
                     <asp:Label ID="lblsession" runat="server" style="width:231px"></asp:Label>
                  </td>
                  <td style="width: 317px"></td>
               </tr>
               <tr>
                  <td style="width: 209px">
                     <asp:Label ID="lblshstr" runat="server"></asp:Label>
                  </td>
                  <td style="width: 317px"></td>
               </tr>
            </table>
         </div>
      </form>
   </body>

</html>

The code behind file is given here:

public partial class _Default : System.Web.UI.Page
{
   String mystr;

   protected void Page_Load(object sender, EventArgs e)
   {
      this.lblshstr.Text = this.mystr;
      this.lblsession.Text = (String)this.Session["str"];
   }

   protected void btnstr_Click(object sender, EventArgs e)
   {
      this.mystr = this.txtstr.Text;
      this.Session["str"] = this.txtstr.Text;
      this.lblshstr.Text = this.mystr;
      this.lblsession.Text = (String)this.Session["str"];
   }
}
Execute the file and observe how it works: the text you submit is saved in session state and shown by the label on subsequent requests.

The ASP.NET application is the collection of all web pages, code and other files within a single virtual directory on a web server. When information is stored in application state, it is available to all the users.

To provide for the use of application state, ASP.NET creates an application state object for each application from the HttpApplicationState class and stores this object in server memory. This object is represented by the class file global.asax.

Application State is mostly used to store hit counters and other statistical data, global application data like tax rate, discount rate, etc., and to keep track of the users visiting the site.

The HttpApplicationState class likewise provides properties and methods for working with application state items.

Application state data is generally maintained by writing handlers for the events:

Application_Start
Application_End
Application_Error
Session_Start
Session_End

The following code snippet shows the basic syntax for storing application state information:

void Application_Start(object sender, EventArgs e)
{
   Application["startMessage"] = "The application has started.";
}

void Application_End(object sender, EventArgs e)
{
   Application["endMessage"] = "The application has ended.";
}
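To make the difference between session state and application state concrete, here is a toy sketch in Python of the bookkeeping involved. This is only an analogy with invented names (sessions, application, get_session), not how ASP.NET itself is implemented:

sessions = {}                          # session state: one dictionary per SessionID

def get_session(session_id):
    # A first request with a new SessionID gets a fresh session object.
    return sessions.setdefault(session_id, {})

application = {"hit_counter": 0}       # application state: shared by all users

s = get_session("user-123")
s["fromuser"] = "some text"            # visible only to this session
application["hit_counter"] += 1        # visible to every user of the app

print(s, application)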
[ { "code": null, "e": 2623, "s": 2347, "text": "Hyper Text Transfer Protocol (HTTP) is a stateless protocol. When the client disconnects from the server, the ASP.NET engine discards the page objects. This way, each web application can scale up to serve numerous requests simultaneously without running out of server memory." }, { "code": null, "e": 2881, "s": 2623, "text": "However, there needs to be some technique to store the information between requests and to retrieve it when required. This information i.e., the current value of all the controls and variables for the current user in the current session is called the State." }, { "code": null, "e": 2919, "s": 2881, "text": "ASP.NET manages four types of states:" }, { "code": null, "e": 2930, "s": 2919, "text": "View State" }, { "code": null, "e": 2944, "s": 2930, "text": "Control State" }, { "code": null, "e": 2958, "s": 2944, "text": "Session State" }, { "code": null, "e": 2976, "s": 2958, "text": "Application State" }, { "code": null, "e": 3108, "s": 2976, "text": "The view state is the state of the page and all its controls. It is automatically maintained across posts by the ASP.NET framework." }, { "code": null, "e": 3391, "s": 3108, "text": "When a page is sent back to the client, the changes in the properties of the page and its controls are determined, and stored in the value of a hidden input field named _VIEWSTATE. When the page is again posted back, the _VIEWSTATE field is sent to the server with the HTTP request." }, { "code": null, "e": 3440, "s": 3391, "text": "The view state could be enabled or disabled for:" }, { "code": null, "e": 3546, "s": 3440, "text": "The entire application by setting the EnableViewState property in the <pages> section of web.config file." }, { "code": null, "e": 3652, "s": 3546, "text": "The entire application by setting the EnableViewState property in the <pages> section of web.config file." }, { "code": null, "e": 3779, "s": 3652, "text": "A page by setting the EnableViewState attribute of the Page directive, as <%@ Page Language=\"C#\" EnableViewState=\"false\" %>" }, { "code": null, "e": 3906, "s": 3779, "text": "A page by setting the EnableViewState attribute of the Page directive, as <%@ Page Language=\"C#\" EnableViewState=\"false\" %>" }, { "code": null, "e": 3965, "s": 3906, "text": "A control by setting the Control.EnableViewState property." }, { "code": null, "e": 4024, "s": 3965, "text": "A control by setting the Control.EnableViewState property." }, { "code": null, "e": 4257, "s": 4024, "text": "It is implemented using a view state object defined by the StateBag class which defines a collection of view state items. The state bag is a data structure containing attribute value pairs, stored as strings associated with objects." }, { "code": null, "e": 4306, "s": 4257, "text": "The StateBag class has the following properties:" }, { "code": null, "e": 4352, "s": 4306, "text": "The StateBag class has the following methods:" }, { "code": null, "e": 4582, "s": 4352, "text": "The following example demonstrates the concept of storing view state. Let us keep a counter, which is incremented each time the page is posted back by clicking a button on the page. A label control shows the value in the counter." 
}, { "code": null, "e": 4618, "s": 4582, "text": "The markup file code is as follows:" }, { "code": null, "e": 5371, "s": 4618, "text": "<%@ Page Language=\"C#\" AutoEventWireup=\"true\" CodeBehind=\"Default.aspx.cs\" Inherits=\"statedemo._Default\" %>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\" >\n\n <head runat=\"server\">\n <title>\n Untitled Page\n </title>\n </head>\n \n <body>\n <form id=\"form1\" runat=\"server\">\n \n <div>\n <h3>View State demo</h3>\n \n Page Counter:\n \n <asp:Label ID=\"lblCounter\" runat=\"server\" />\n <asp:Button ID=\"btnIncrement\" runat=\"server\" Text=\"Add Count\" onclick=\"btnIncrement_Click\" />\n </div>\n \n </form>\n </body>\n \n</html>" }, { "code": null, "e": 5423, "s": 5371, "text": "The code behind file for the example is shown here:" }, { "code": null, "e": 5918, "s": 5423, "text": "public partial class _Default : System.Web.UI.Page\n{\n public int counter\n {\n get\n {\n if (ViewState[\"pcounter\"] != null)\n {\n return ((int)ViewState[\"pcounter\"]);\n }\n else\n {\n return 0;\n }\n }\n \n set\n {\n ViewState[\"pcounter\"] = value;\n }\n }\n \n protected void Page_Load(object sender, EventArgs e)\n {\n lblCounter.Text = counter.ToString();\n counter++;\n }\n}" }, { "code": null, "e": 5957, "s": 5918, "text": "It would produce the following result:" }, { "code": null, "e": 6023, "s": 5957, "text": "Control state cannot be modified, accessed directly, or disabled." }, { "code": null, "e": 6285, "s": 6023, "text": "When a user connects to an ASP.NET website, a new session object is created. When session state is turned on, a new session state object is created for each new request. This session state object becomes part of the context and it is available through the page." }, { "code": null, "e": 6522, "s": 6285, "text": "Session state is generally used for storing application data such as inventory, supplier list, customer record, or shopping cart. It can also keep information about the user and his preferences, and keep the track of pending operations." }, { "code": null, "e": 6705, "s": 6522, "text": "Sessions are identified and tracked with a 120-bit SessionID, which is passed from client to server and back as cookie or a modified URL. The SessionID is globally unique and random." }, { "code": null, "e": 6825, "s": 6705, "text": "The session state object is created from the HttpSessionState class, which defines a collection of session state items." }, { "code": null, "e": 6882, "s": 6825, "text": "The HttpSessionState class has the following properties:" }, { "code": null, "e": 6936, "s": 6882, "text": "The HttpSessionState class has the following methods:" }, { "code": null, "e": 7099, "s": 6936, "text": "The session state object is a name-value pair to store and retrieve some information from the session state object. 
You could use the following code for the same:" }, { "code": null, "e": 7299, "s": 7099, "text": "void StoreSessionInfo()\n{\n String fromuser = TextBox1.Text;\n Session[\"fromuser\"] = fromuser;\n}\n\nvoid RetrieveSessionInfo()\n{\n String fromuser = Session[\"fromuser\"];\n Label1.Text = fromuser;\n}" }, { "code": null, "e": 7608, "s": 7299, "text": "The above code stores only strings in the Session dictionary object, however, it can store all the primitive data types and arrays composed of primitive data types, as well as the DataSet, DataTable, HashTable, and Image objects, as well as any user-defined class that inherits from the ISerializable object." }, { "code": null, "e": 7801, "s": 7608, "text": "The following example demonstrates the concept of storing session state. There are two buttons on the page, a text box to enter string and a label to display the text stored from last session." }, { "code": null, "e": 7838, "s": 7801, "text": "The mark up file code is as follows:" }, { "code": null, "e": 10200, "s": 7838, "text": "<%@ Page Language=\"C#\" AutoEventWireup=\"true\" CodeFile=\"Default.aspx.cs\" Inherits=\"_Default\" %>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\" >\n\n <head runat=\"server\">\n <title>\n Untitled Page\n </title>\n </head>\n \n <body>\n <form id=\"form1\" runat=\"server\">\n <div>\n &nbsp; &nbsp; &nbsp;\n \n <table style=\"width: 568px; height: 103px\">\n \n <tr>\n <td style=\"width: 209px\">\n <asp:Label ID=\"lblstr\" runat=\"server\" Text=\"Enter a String\" style=\"width:94px\">\n </asp:Label>\n </td>\n\t\t\t\t\t\n <td style=\"width: 317px\">\n <asp:TextBox ID=\"txtstr\" runat=\"server\" style=\"width:227px\">\n </asp:TextBox>\n </td>\n </tr>\n\t\n <tr>\n <td style=\"width: 209px\"> </td>\n <td style=\"width: 317px\"> </td>\n </tr>\n\t\n <tr>\n <td style=\"width: 209px\">\n <asp:Button ID=\"btnnrm\" runat=\"server\" \n Text=\"No action button\" style=\"width:128px\" />\n </td>\n\t\n <td style=\"width: 317px\">\n <asp:Button ID=\"btnstr\" runat=\"server\" \n OnClick=\"btnstr_Click\" Text=\"Submit the String\" />\n </td> \n </tr>\n\t\n <tr>\n <td style=\"width: 209px\"> </td>\n\t\n <td style=\"width: 317px\"> </td> \n </tr>\n\t\n <tr>\n <td style=\"width: 209px\">\n <asp:Label ID=\"lblsession\" runat=\"server\" style=\"width:231px\" >\n </asp:Label>\n </td>\n\t\n <td style=\"width: 317px\"> </td>\n </tr>\n\t\n <tr>\n <td style=\"width: 209px\">\n <asp:Label ID=\"lblshstr\" runat=\"server\">\n </asp:Label>\n </td>\n\t\n <td style=\"width: 317px\"> </td>\n </tr>\n \n </table>\n \n </div>\n </form>\n </body>\n</html>" }, { "code": null, "e": 10250, "s": 10200, "text": "It should look like the following in design view:" }, { "code": null, "e": 10286, "s": 10250, "text": "The code behind file is given here:" }, { "code": null, "e": 10779, "s": 10286, "text": "public partial class _Default : System.Web.UI.Page \n{\n String mystr;\n \n protected void Page_Load(object sender, EventArgs e)\n {\n this.lblshstr.Text = this.mystr;\n this.lblsession.Text = (String)this.Session[\"str\"];\n }\n \n protected void btnstr_Click(object sender, EventArgs e)\n {\n this.mystr = this.txtstr.Text;\n this.Session[\"str\"] = this.txtstr.Text;\n this.lblshstr.Text = this.mystr;\n this.lblsession.Text = (String)this.Session[\"str\"];\n }\n}" }, { "code": null, "e": 10822, "s": 10779, "text": "Execute the file and observe how it works:" }, { "code": null, 
"e": 11037, "s": 10822, "text": "The ASP.NET application is the collection of all web pages, code and other files within a single virtual directory on a web server. When information is stored in application state, it is available to all the users." }, { "code": null, "e": 11278, "s": 11037, "text": "To provide for the use of application state, ASP.NET creates an application state object for each application from the HTTPApplicationState class and stores this object in server memory. This object is represented by class file global.asax." }, { "code": null, "e": 11469, "s": 11278, "text": "Application State is mostly used to store hit counters and other statistical data, global application data like tax rate, discount rate etc. and to keep the track of users visiting the site." }, { "code": null, "e": 11530, "s": 11469, "text": "The HttpApplicationState class has the following properties:" }, { "code": null, "e": 11588, "s": 11530, "text": "The HttpApplicationState class has the following methods:" }, { "code": null, "e": 11671, "s": 11588, "text": "Application state data is generally maintained by writing handlers for the events:" }, { "code": null, "e": 11689, "s": 11671, "text": "Application_Start" }, { "code": null, "e": 11705, "s": 11689, "text": "Application_End" }, { "code": null, "e": 11723, "s": 11705, "text": "Application_Error" }, { "code": null, "e": 11737, "s": 11723, "text": "Session_Start" }, { "code": null, "e": 11749, "s": 11737, "text": "Session_End" }, { "code": null, "e": 11842, "s": 11749, "text": "The following code snippet shows the basic syntax for storing application state information:" }, { "code": null, "e": 12078, "s": 11842, "text": "Void Application_Start(object sender, EventArgs e)\n{\n Application[\"startMessage\"] = \"The application has started.\";\n}\n\nVoid Application_End(object sender, EventArgs e)\n{\n Application[\"endtMessage\"] = \"The application has ended.\";\n}" }, { "code": null, "e": 12113, "s": 12078, "text": "\n 51 Lectures \n 5.5 hours \n" }, { "code": null, "e": 12127, "s": 12113, "text": " Anadi Sharma" }, { "code": null, "e": 12162, "s": 12127, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 12185, "s": 12162, "text": " Kaushik Roy Chowdhury" }, { "code": null, "e": 12219, "s": 12185, "text": "\n 42 Lectures \n 18 hours \n" }, { "code": null, "e": 12239, "s": 12219, "text": " SHIVPRASAD KOIRALA" }, { "code": null, "e": 12274, "s": 12239, "text": "\n 57 Lectures \n 3.5 hours \n" }, { "code": null, "e": 12291, "s": 12274, "text": " University Code" }, { "code": null, "e": 12326, "s": 12291, "text": "\n 40 Lectures \n 2.5 hours \n" }, { "code": null, "e": 12343, "s": 12326, "text": " University Code" }, { "code": null, "e": 12377, "s": 12343, "text": "\n 138 Lectures \n 9 hours \n" }, { "code": null, "e": 12392, "s": 12377, "text": " Bhrugen Patel" }, { "code": null, "e": 12399, "s": 12392, "text": " Print" }, { "code": null, "e": 12410, "s": 12399, "text": " Add Notes" } ]
Algorithms Quiz | Sudo Placement : Set 1 | Question 5 - GeeksforGeeks
28 Dec, 2020

What will be the output of the following C program?

main()
{
    char g[] = "geeksforgeeks";
    printf("%s", g + g[6] - g[8]);
}

(A) geeks
(B) rgeeks
(C) geeksforgeeks
(D) forgeeks

Answer: (A)

Explanation:

char g[] = "geeksforgeeks";

// g holds the base address of the string "geeksforgeeks"

// g[6] is 'o' and g[8] is 'g'.

// g[6] - g[8] = ASCII value of 'o' (111) - ASCII value of 'g' (103) = 8

// So the expression g + g[6] - g[8] becomes g + 8, which is the
// address of the second "geeks" inside the string

printf("%s", g + g[6] - g[8]); // prints geeks

Hence, option (A) is correct.
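As a quick cross-check of the pointer arithmetic, the same computation can be replayed in Python (an illustration only; the quiz itself is about C):

g = "geeksforgeeks"
print(ord(g[6]) - ord(g[8]))   # ord('o') - ord('g') = 111 - 103 = 8
print(g[8:])                   # the suffix starting at offset 8: "geeks"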
[ { "code": null, "e": 24700, "s": 24672, "text": "\n28 Dec, 2020" }, { "code": null, "e": 24748, "s": 24700, "text": "What will be the output of following C program?" }, { "code": null, "e": 24818, "s": 24748, "text": "main()\n{\nchar g[] = \"geeksforgeeks\";\nprintf(\"%s\", g + g[6] - g[8]);\n}" }, { "code": null, "e": 24890, "s": 24818, "text": "(A) geeks(B) rgeeks(C) geeksforgeeks(D) forgeeksAnswer: (A)Explanation:" }, { "code": null, "e": 25224, "s": 24890, "text": "char g[] = “geeksforgeeks”; \n\n// g now has the base address string “geeksforgeeks” \n\n// g[6] is ‘o’ and g[1] is ‘e’. \n\n// g[6] – g[1] = ASCII value of ‘o’ – ASCII value of ‘e’ = 8\n\n// So the expression g + g[6] – g[8] becomes g + 8 which is \n\n// base address of string “geeks” \n\nprintf(“%s”, g + g[6] – g[8]); // prints geeks " }, { "code": null, "e": 25274, "s": 25224, "text": "Hence, option (A) is correctQuiz of this Question" }, { "code": null, "e": 25285, "s": 25274, "text": "Saurabh17a" }, { "code": null, "e": 25295, "s": 25285, "text": "ApurvaRaj" }, { "code": null, "e": 25311, "s": 25295, "text": "Algorithms Quiz" }, { "code": null, "e": 25409, "s": 25311, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25418, "s": 25409, "text": "Comments" }, { "code": null, "e": 25431, "s": 25418, "text": "Old Comments" }, { "code": null, "e": 25477, "s": 25431, "text": "Algorithms | Dynamic Programming | Question 2" }, { "code": null, "e": 25523, "s": 25477, "text": "Algorithms | Dynamic Programming | Question 3" }, { "code": null, "e": 25574, "s": 25523, "text": "Algorithms Quiz | Dynamic Programming | Question 8" }, { "code": null, "e": 25615, "s": 25574, "text": "Algorithms | Bit Algorithms | Question 3" }, { "code": null, "e": 25656, "s": 25615, "text": "Algorithms | Bit Algorithms | Question 2" }, { "code": null, "e": 25696, "s": 25656, "text": "Data Structures and Algorithms | Set 38" }, { "code": null, "e": 25741, "s": 25696, "text": "Algorithms | Divide and Conquer | Question 2" }, { "code": null, "e": 25782, "s": 25741, "text": "Algorithms | Bit Algorithms | Question 1" }, { "code": null, "e": 25828, "s": 25782, "text": "Algorithms | Dynamic Programming | Question 7" } ]
C++ | Function Overloading and Default Arguments | Question 3 - GeeksforGeeks
28 Jun, 2021

Which of the following overloaded functions are NOT allowed in C++?

1) Function declarations that differ only in the return type

int fun(int x, int y);
void fun(int x, int y);

2) Functions that differ only by the static keyword

int fun(int x, int y);
static int fun(int x, int y);

3) Parameter declarations that differ only in a pointer * versus an array []

int fun(int *ptr, int n);
int fun(int ptr[], int n);

4) Two parameter declarations that differ only in their default arguments

int fun(int x, int y);
int fun(int x, int y = 10);

(A) All of the above
(B) All except 2
(C) All except 1
(D) All except 2 and 4

Answer: (A)

Explanation: None of these differences contributes to the function signature used for overload resolution: the return type and the static keyword are not part of the signature, a pointer parameter and an array parameter declare the same type, and default arguments do not change the parameter list. In each case the two declarations refer to the same function, so they cannot coexist as overloads. See Function overloading in C++.
[ { "code": null, "e": 25799, "s": 25771, "text": "\n28 Jun, 2021" }, { "code": null, "e": 25867, "s": 25799, "text": "Which of the following overloaded functions are NOT allowed in C++?" }, { "code": null, "e": 25928, "s": 25867, "text": "1) Function declarations that differ only in the return type" }, { "code": null, "e": 25989, "s": 25928, "text": " int fun(int x, int y);\n void fun(int x, int y); " }, { "code": null, "e": 26052, "s": 25989, "text": "2) Functions that differ only by static keyword in return type" }, { "code": null, "e": 26119, "s": 26052, "text": " int fun(int x, int y);\n static int fun(int x, int y); " }, { "code": null, "e": 26195, "s": 26119, "text": "3)Parameter declarations that differ only in a pointer * versus an array []" }, { "code": null, "e": 26249, "s": 26195, "text": "int fun(int *ptr, int n);\nint fun(int ptr[], int n); " }, { "code": null, "e": 26323, "s": 26249, "text": "4) Two parameter declarations that differ only in their default arguments" }, { "code": null, "e": 26378, "s": 26323, "text": "int fun( int x, int y); \nint fun( int x, int y = 10); " }, { "code": null, "e": 26399, "s": 26378, "text": "(A) All of the above" }, { "code": null, "e": 26532, "s": 26399, "text": "(B) All except 2)(C) All except 1)(D) All except 2 and 4Answer: (A)Explanation: See Function overloading in C++Quiz of this Question" }, { "code": null, "e": 26579, "s": 26532, "text": "C++-Function Overloading and Default Arguments" }, { "code": null, "e": 26622, "s": 26579, "text": "Function Overloading and Default Arguments" }, { "code": null, "e": 26633, "s": 26622, "text": "C Language" }, { "code": null, "e": 26642, "s": 26633, "text": "C++ Quiz" }, { "code": null, "e": 26740, "s": 26642, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26786, "s": 26740, "text": "Left Shift and Right Shift Operators in C/C++" }, { "code": null, "e": 26808, "s": 26786, "text": "Function Pointer in C" }, { "code": null, "e": 26825, "s": 26808, "text": "Substring in C++" }, { "code": null, "e": 26853, "s": 26825, "text": "rand() and srand() in C/C++" }, { "code": null, "e": 26865, "s": 26853, "text": "fork() in C" }, { "code": null, "e": 26899, "s": 26865, "text": "C++ | new and delete | Question 4" }, { "code": null, "e": 26933, "s": 26899, "text": "C++ | new and delete | Question 1" }, { "code": null, "e": 26963, "s": 26933, "text": "C++ | References | Question 4" }, { "code": null, "e": 26994, "s": 26963, "text": "C++ | Inheritance | Question 3" } ]
Data Structure and Algorithms - Stack
A stack is an Abstract Data Type (ADT), commonly used in most programming languages. It is named stack as it behaves like a real-world stack, for example – a deck of cards or a pile of plates, etc.

A real-world stack allows operations at one end only. For example, we can place or remove a card or plate from the top of the stack only. Likewise, Stack ADT allows all data operations at one end only. At any given time, we can only access the top element of a stack.

This feature makes it a LIFO data structure. LIFO stands for Last-in-first-out. Here, the element which is placed (inserted or added) last, is accessed first. In stack terminology, insertion operation is called PUSH operation and removal operation is called POP operation.

The following diagram depicts a stack and its operations −

A stack can be implemented by means of Array, Structure, Pointer, and Linked List. Stack can either be a fixed size one or it may have a sense of dynamic resizing. Here, we are going to implement stack using arrays, which makes it a fixed size stack implementation.

Stack operations may involve initializing the stack, using it and then de-initializing it. Apart from these basics, a stack is used for the following two primary operations −

push() − Pushing (storing) an element on the stack.

pop() − Removing (accessing) an element from the stack.

To use a stack efficiently, we need to check the status of stack as well. For the same purpose, the following functionality is added to stacks −

peek() − get the top data element of the stack, without removing it.

isFull() − check if stack is full.

isEmpty() − check if stack is empty.

At all times, we maintain a pointer to the last PUSHed data on the stack. As this pointer always represents the top of the stack, it is named top. The top pointer provides the top value of the stack without actually removing it.

First we should learn about procedures to support stack functions −

Algorithm of peek() function −

begin procedure peek
   return stack[top]
end procedure

Implementation of peek() function in C programming language −

Example

int peek() {
   return stack[top];
}

Algorithm of isfull() function −

begin procedure isfull

   if top equals to MAXSIZE
      return true
   else
      return false
   endif

end procedure

Implementation of isfull() function in C programming language −

Example

bool isfull() {
   if(top == MAXSIZE)
      return true;
   else
      return false;
}

Algorithm of isempty() function −

begin procedure isempty

   if top less than 1
      return true
   else
      return false
   endif

end procedure

Implementation of isempty() function in C programming language is slightly different. We initialize top at -1, as the index in array starts from 0. So we check if the top is below zero or -1 to determine if the stack is empty. Here's the code −

Example

bool isempty() {
   if(top == -1)
      return true;
   else
      return false;
}

The process of putting a new data element onto stack is known as a Push Operation. Push operation involves a series of steps −

Step 1 − Checks if the stack is full.

Step 2 − If the stack is full, produces an error and exit.

Step 3 − If the stack is not full, increments top to point next empty space.
Step 4 − Adds data element to the stack location, where top is pointing.

Step 5 − Returns success.

If the linked list is used to implement the stack, then in step 3, we need to allocate space dynamically.

A simple algorithm for Push operation can be derived as follows −

begin procedure push: stack, data

   if stack is full
      return null
   endif

   top ← top + 1
   stack[top] ← data

end procedure

Implementation of this algorithm in C, is very easy. See the following code −

Example

void push(int data) {
   if(!isFull()) {
      top = top + 1;
      stack[top] = data;
   } else {
      printf("Could not insert data, Stack is full.\n");
   }
}

Accessing the content while removing it from the stack, is known as a Pop Operation. In an array implementation of pop() operation, the data element is not actually removed, instead top is decremented to a lower position in the stack to point to the next value. But in linked-list implementation, pop() actually removes data element and deallocates memory space.

A Pop operation may involve the following steps −

Step 1 − Checks if the stack is empty.

Step 2 − If the stack is empty, produces an error and exit.

Step 3 − If the stack is not empty, accesses the data element at which top is pointing.

Step 4 − Decreases the value of top by 1.

Step 5 − Returns success.

A simple algorithm for Pop operation can be derived as follows −

begin procedure pop: stack

   if stack is empty
      return null
   endif

   data ← stack[top]
   top ← top - 1
   return data

end procedure

Implementation of this algorithm in C, is as follows −

Example

int pop() {
   int data;
   if(!isempty()) {
      data = stack[top];
      top = top - 1;
      return data;
   } else {
      printf("Could not retrieve data, Stack is empty.\n");
      return -1;   /* sentinel value for an empty stack */
   }
}

For a complete stack program in C programming language, please click here.
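For comparison, here is a compact sketch of the same fixed-size stack in Python, mirroring the C routines above. This sketch is ours, not the tutorial's linked C program; note that the 0-based array makes the "full" check top == MAXSIZE - 1.

MAXSIZE = 8
stack = [0] * MAXSIZE
top = -1                       # -1 means the stack is empty

def isempty():
    return top == -1

def isfull():
    return top == MAXSIZE - 1  # last valid index of a 0-based array

def peek():
    return stack[top]

def push(data):
    global top
    if isfull():
        print("Could not insert data, Stack is full.")
    else:
        top += 1
        stack[top] = data

def pop():
    global top
    if isempty():
        print("Could not retrieve data, Stack is empty.")
        return None
    data = stack[top]
    top -= 1
    return data

push(3); push(5); push(9)
print(pop(), pop(), peek())    # 9 5 3, showing the LIFO order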
[ { "code": null, "e": 2778, "s": 2580, "text": "A stack is an Abstract Data Type (ADT), commonly used in most programming languages. It is named stack as it behaves like a real-world stack, for example – a deck of cards or a pile of plates, etc." }, { "code": null, "e": 3046, "s": 2778, "text": "A real-world stack allows operations at one end only. For example, we can place or remove a card or plate from the top of the stack only. Likewise, Stack ADT allows all data operations at one end only. At any given time, we can only access the top element of a stack." }, { "code": null, "e": 3317, "s": 3046, "text": "This feature makes it LIFO data structure. LIFO stands for Last-in-first-out. Here, the element which is placed (inserted or added) last, is accessed first. In stack terminology, insertion operation is called PUSH operation and removal operation is called POP operation." }, { "code": null, "e": 3376, "s": 3317, "text": "The following diagram depicts a stack and its operations −" }, { "code": null, "e": 3642, "s": 3376, "text": "A stack can be implemented by means of Array, Structure, Pointer, and Linked List. Stack can either be a fixed size one or it may have a sense of dynamic resizing. Here, we are going to implement stack using arrays, which makes it a fixed size stack implementation." }, { "code": null, "e": 3823, "s": 3642, "text": "Stack operations may involve initializing the stack, using it and then de-initializing it. Apart from these basic stuffs, a stack is used for the following two primary operations −" }, { "code": null, "e": 3875, "s": 3823, "text": "push() − Pushing (storing) an element on the stack." }, { "code": null, "e": 3927, "s": 3875, "text": "push() − Pushing (storing) an element on the stack." }, { "code": null, "e": 3983, "s": 3927, "text": "pop() − Removing (accessing) an element from the stack." }, { "code": null, "e": 4039, "s": 3983, "text": "pop() − Removing (accessing) an element from the stack." }, { "code": null, "e": 4071, "s": 4039, "text": "When data is PUSHed onto stack." }, { "code": null, "e": 4216, "s": 4071, "text": "To use a stack efficiently, we need to check the status of stack as well. For the same purpose, the following functionality is added to stacks −" }, { "code": null, "e": 4285, "s": 4216, "text": "peek() − get the top data element of the stack, without removing it." }, { "code": null, "e": 4354, "s": 4285, "text": "peek() − get the top data element of the stack, without removing it." }, { "code": null, "e": 4389, "s": 4354, "text": "isFull() − check if stack is full." }, { "code": null, "e": 4424, "s": 4389, "text": "isFull() − check if stack is full." }, { "code": null, "e": 4461, "s": 4424, "text": "isEmpty() − check if stack is empty." }, { "code": null, "e": 4498, "s": 4461, "text": "isEmpty() − check if stack is empty." }, { "code": null, "e": 4723, "s": 4498, "text": "At all times, we maintain a pointer to the last PUSHed data on the stack. As this pointer always represents the top of the stack, hence named top. The top pointer provides top value of the stack without actually removing it." 
}, { "code": null, "e": 4791, "s": 4723, "text": "First we should learn about procedures to support stack functions −" }, { "code": null, "e": 4822, "s": 4791, "text": "Algorithm of peek() function −" }, { "code": null, "e": 4879, "s": 4822, "text": "begin procedure peek\n return stack[top]\nend procedure\n" }, { "code": null, "e": 4941, "s": 4879, "text": "Implementation of peek() function in C programming language −" }, { "code": null, "e": 4949, "s": 4941, "text": "Example" }, { "code": null, "e": 4986, "s": 4949, "text": "int peek() {\n return stack[top];\n}" }, { "code": null, "e": 5019, "s": 4986, "text": "Algorithm of isfull() function −" }, { "code": null, "e": 5143, "s": 5019, "text": "begin procedure isfull\n\n if top equals to MAXSIZE\n return true\n else\n return false\n endif\n \nend procedure" }, { "code": null, "e": 5207, "s": 5143, "text": "Implementation of isfull() function in C programming language −" }, { "code": null, "e": 5215, "s": 5207, "text": "Example" }, { "code": null, "e": 5302, "s": 5215, "text": "bool isfull() {\n if(top == MAXSIZE)\n return true;\n else\n return false;\n}" }, { "code": null, "e": 5336, "s": 5302, "text": "Algorithm of isempty() function −" }, { "code": null, "e": 5455, "s": 5336, "text": "begin procedure isempty\n\n if top less than 1\n return true\n else\n return false\n endif\n \nend procedure" }, { "code": null, "e": 5700, "s": 5455, "text": "Implementation of isempty() function in C programming language is slightly different. We initialize top at -1, as the index in array starts from 0. So we check if the top is below zero or -1 to determine if the stack is empty. Here's the code −" }, { "code": null, "e": 5708, "s": 5700, "text": "Example" }, { "code": null, "e": 5791, "s": 5708, "text": "bool isempty() {\n if(top == -1)\n return true;\n else\n return false;\n}" }, { "code": null, "e": 5918, "s": 5791, "text": "The process of putting a new data element onto stack is known as a Push Operation. Push operation involves a series of steps −" }, { "code": null, "e": 5956, "s": 5918, "text": "Step 1 − Checks if the stack is full." }, { "code": null, "e": 5994, "s": 5956, "text": "Step 1 − Checks if the stack is full." }, { "code": null, "e": 6053, "s": 5994, "text": "Step 2 − If the stack is full, produces an error and exit." }, { "code": null, "e": 6112, "s": 6053, "text": "Step 2 − If the stack is full, produces an error and exit." }, { "code": null, "e": 6189, "s": 6112, "text": "Step 3 − If the stack is not full, increments top to point next empty space." }, { "code": null, "e": 6266, "s": 6189, "text": "Step 3 − If the stack is not full, increments top to point next empty space." }, { "code": null, "e": 6339, "s": 6266, "text": "Step 4 − Adds data element to the stack location, where top is pointing." }, { "code": null, "e": 6412, "s": 6339, "text": "Step 4 − Adds data element to the stack location, where top is pointing." }, { "code": null, "e": 6438, "s": 6412, "text": "Step 5 − Returns success." }, { "code": null, "e": 6464, "s": 6438, "text": "Step 5 − Returns success." }, { "code": null, "e": 6570, "s": 6464, "text": "If the linked list is used to implement the stack, then in step 3, we need to allocate space dynamically." 
}, { "code": null, "e": 6636, "s": 6570, "text": "A simple algorithm for Push operation can be derived as follows −" }, { "code": null, "e": 6775, "s": 6636, "text": "begin procedure push: stack, data\n\n if stack is full\n return null\n endif\n \n top ← top + 1\n stack[top] ← data\n\nend procedure" }, { "code": null, "e": 6853, "s": 6775, "text": "Implementation of this algorithm in C, is very easy. See the following code −" }, { "code": null, "e": 6861, "s": 6853, "text": "Example" }, { "code": null, "e": 7027, "s": 6861, "text": "void push(int data) {\n if(!isFull()) {\n top = top + 1; \n stack[top] = data;\n } else {\n printf(\"Could not insert data, Stack is full.\\n\");\n }\n}" }, { "code": null, "e": 7390, "s": 7027, "text": "Accessing the content while removing it from the stack, is known as a Pop Operation. In an array implementation of pop() operation, the data element is not actually removed, instead top is decremented to a lower position in the stack to point to the next value. But in linked-list implementation, pop() actually removes data element and deallocates memory space." }, { "code": null, "e": 7440, "s": 7390, "text": "A Pop operation may involve the following steps −" }, { "code": null, "e": 7479, "s": 7440, "text": "Step 1 − Checks if the stack is empty." }, { "code": null, "e": 7518, "s": 7479, "text": "Step 1 − Checks if the stack is empty." }, { "code": null, "e": 7578, "s": 7518, "text": "Step 2 − If the stack is empty, produces an error and exit." }, { "code": null, "e": 7638, "s": 7578, "text": "Step 2 − If the stack is empty, produces an error and exit." }, { "code": null, "e": 7726, "s": 7638, "text": "Step 3 − If the stack is not empty, accesses the data element at which top is pointing." }, { "code": null, "e": 7814, "s": 7726, "text": "Step 3 − If the stack is not empty, accesses the data element at which top is pointing." }, { "code": null, "e": 7856, "s": 7814, "text": "Step 4 − Decreases the value of top by 1." }, { "code": null, "e": 7898, "s": 7856, "text": "Step 4 − Decreases the value of top by 1." }, { "code": null, "e": 7924, "s": 7898, "text": "Step 5 − Returns success." }, { "code": null, "e": 7950, "s": 7924, "text": "Step 5 − Returns success." }, { "code": null, "e": 8015, "s": 7950, "text": "A simple algorithm for Pop operation can be derived as follows −" }, { "code": null, "e": 8163, "s": 8015, "text": "begin procedure pop: stack\n\n if stack is empty\n return null\n endif\n \n data ← stack[top]\n top ← top - 1\n return data\n\nend procedure" }, { "code": null, "e": 8218, "s": 8163, "text": "Implementation of this algorithm in C, is as follows −" }, { "code": null, "e": 8226, "s": 8218, "text": "Example" }, { "code": null, "e": 8414, "s": 8226, "text": "int pop(int data) {\n\n if(!isempty()) {\n data = stack[top];\n top = top - 1; \n return data;\n } else {\n printf(\"Could not retrieve data, Stack is empty.\\n\");\n }\n}" }, { "code": null, "e": 8489, "s": 8414, "text": "For a complete stack program in C programming language, please click here." 
}, { "code": null, "e": 8524, "s": 8489, "text": "\n 42 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8536, "s": 8524, "text": " Ravi Kiran" }, { "code": null, "e": 8571, "s": 8536, "text": "\n 141 Lectures \n 13 hours \n" }, { "code": null, "e": 8590, "s": 8571, "text": " Arnab Chakraborty" }, { "code": null, "e": 8625, "s": 8590, "text": "\n 26 Lectures \n 8.5 hours \n" }, { "code": null, "e": 8640, "s": 8625, "text": " Parth Panjabi" }, { "code": null, "e": 8673, "s": 8640, "text": "\n 65 Lectures \n 6 hours \n" }, { "code": null, "e": 8692, "s": 8673, "text": " Arnab Chakraborty" }, { "code": null, "e": 8726, "s": 8692, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 8754, "s": 8726, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 8790, "s": 8754, "text": "\n 64 Lectures \n 10.5 hours \n" }, { "code": null, "e": 8818, "s": 8790, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 8825, "s": 8818, "text": " Print" }, { "code": null, "e": 8836, "s": 8825, "text": " Add Notes" } ]
K Means clustering with python code explained | by Yogesh Chauhan | Towards Data Science
K-means clustering is one of the simpler algorithms in machine learning. It is categorized as unsupervised learning because we do not know the result in advance (we have no idea which clusters will be formed). The algorithm is used for vector quantization of data and was taken from signal-processing methodology. The data is divided into several groups, and the data points in each group have similar characteristics. These clusters are decided by calculating the distance between data points; this distance is a measure of the relationship among the many unlabeled data points.

K-means should not be confused with the KNN algorithm, although both use the same distance-measurement technique. There is a basic difference between the two popular machine learning algorithms: K-means works on a dataset and divides it into various clusters/groups, whereas KNN works on new data points and places them into groups by finding their nearest neighbors. A data point moves to the cluster that contains the maximum number of its neighbors.

K-means clustering algorithm steps:

1. Choose the number of centroids, e.g. k = 3.
2. Choose the same number of random points on the 2D canvas as initial centroids.
3. Calculate the distance of each data point from the centroids.
4. Allocate each data point to the cluster whose centroid is at minimum distance from it.
5. Recalculate the new centroids.
6. Recalculate the distance from each data point to the new centroids.
7. Repeat from step 3 until no data point changes its cluster.

K-means divides the data into a number of clusters equal to the value of k, i.e. if k = 3 then the data will be divided into 3 clusters. Each value of k corresponds to a centroid around which the data points gather.

Distance calculation can be done by any of four methods: Euclidean, Manhattan, Correlation, and Eisen. Here we use the Euclidean method, i.e. the distance between two points (x1, y1) and (x2, y2) is

d = sqrt((x2 - x1)^2 + (y2 - y1)^2)

Here are some pros and cons of using the k-means clustering algorithm in machine learning.

Pros:

- A relatively simple algorithm to apply
- Flexible; works well with large data
- Convergence is guaranteed

Cons:

- We have to manually define the number of centroids
- Not immune to outliers
- Depends on the initial values of the centroids chosen

Now we will implement the algorithm in Python. First, we import some basic and important libraries:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

sklearn is one of the most important packages in machine learning, and it provides a large number of functions and algorithms. To use k-means clustering, we call it from the sklearn package.
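As a quick aside before we generate data: the distance-and-assignment step described above can be vectorized with NumPy. This snippet is my illustration, not from the original article, and the toy points and centroids are made up:

```python
import numpy as np

# Hypothetical toy data: 5 points and 2 centroids in 2D
points = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [8.0, 8.0], [1.0, 0.6]])
centroids = np.array([[1.0, 1.0], [6.0, 8.0]])

# Euclidean distance of every point to every centroid, shape (5, 2)
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)

# Each point joins the cluster whose centroid is nearest
labels = dists.argmin(axis=1)
print(labels)  # [0 0 1 1 0]
```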
To get a sample dataset, we can generate a random sequence using NumPy:

x1 = 10 * np.random.rand(100, 2)

The line above gives us 100 random points in an array of shape (100, 2), which we can check with:

x1.shape

Now we will train our algorithm by processing all the data. Here the number of clusters will be 3. This number is chosen arbitrarily by us; we can choose any number of clusters.

kmean = KMeans(n_clusters=3)
kmean.fit(x1)

We can see our three centers by using the following command:

kmean.cluster_centers_

To check the labels created for our data, we can use the following command:

kmean.labels_

We can see that, because we chose the number of centroids manually, our clusters are not very well segregated. A cluster holds a similar set of information, and our aim is to make each cluster as distinct as possible, which helps us extract more information from the given dataset. We can therefore plot an elbow curve, which clearly depicts the trade-off between the number of centroids and information gain.

wcss = []
for i in range(1, 20):
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)
    kmeans.fit(x1)
    wcss.append(kmeans.inertia_)
    print("Cluster", i, "Inertia", kmeans.inertia_)

plt.plot(range(1, 20), wcss)
plt.title('The Elbow Curve')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')  # WCSS stands for total within-cluster sum of squares
plt.show()

Note that we use k-means++ rather than conventional k-means initialization. The former overcomes the disadvantage of a wrong selection of centroids, which usually happens with manual selection; sometimes the chosen centroids are so far away from the points that no data points fall in their cluster.

The output graph helps us determine the number of centroids to choose for better clustering. The curve clearly shows that if we choose the number of centroids as 7, 8, 9, or 10, we have a better chance of fine clustering. The data is so scattered that we could even select 14 centroids. This curve helps in deciding between computational expense and the knowledge gained from a dataset. Now let us select 10 centroids so that we have 10 separate clusters.

Conclusion: We successfully created three clusters from our random dataset, with each cluster shown in a different color. We can use the same code to make clusters on other data and can even change the number of clusters in the algorithm. Then, using the elbow method, we predicted that more centroids could improve the clustering; after choosing more clusters, we get better clusters with improved information gain.
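The article's plotting code for the final 10-cluster fit did not survive extraction, so here is a minimal sketch (mine, not the author's original) of what that step could look like:

```python
kmeans10 = KMeans(n_clusters=10, init='k-means++', random_state=0)
labels10 = kmeans10.fit_predict(x1)

# Color each point by its cluster and mark the centroids
plt.scatter(x1[:, 0], x1[:, 1], c=labels10, cmap='tab10', s=30)
plt.scatter(kmeans10.cluster_centers_[:, 0],
            kmeans10.cluster_centers_[:, 1],
            c='black', marker='x', s=100)
plt.title('K-means with 10 clusters')
plt.show()
```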
[ { "code": null, "e": 764, "s": 172, "text": "K means clustering is another simplified algorithm in machine learning. It is categorized into unsupervised learning because here we don’t know the result already (no idea about which cluster will be formed). This algorithm is used for vector quantization of the data and has been taken from signal processing methodology. Here the data is divided into several groups, data points in each group have similar characteristics. These clusters are decided by calculating the distance between data points. This distance is a measure of the relationship among numerous data points lying unclaimed." }, { "code": null, "e": 1199, "s": 764, "text": "K means should not be confused with KNN algorithm as both use the same distance measurement technique. There is a basic difference between the two popular machine learning algorithms. K means works on data and divides it into various clusters/groups whereas KNN works on new data points and places them into the groups by calculating the nearest neighbor method. Data point will move to a cluster having a maximum number of neighbors." }, { "code": null, "e": 1234, "s": 1199, "text": "K means clustering algorithm steps" }, { "code": null, "e": 1670, "s": 1234, "text": "Choose a random number of centroids in the data. i.e k=3.Choose the same number of random points on the 2D canvas as centroids.Calculate the distance of each data point from the centroids.Allocate the data point to a cluster where its distance from the centroid is minimum.Recalculate the new centroids.Recalculate the distance from each data point to new centroids.Repeat the steps from point 3, till no data point change its cluster." }, { "code": null, "e": 1728, "s": 1670, "text": "Choose a random number of centroids in the data. i.e k=3." }, { "code": null, "e": 1799, "s": 1728, "text": "Choose the same number of random points on the 2D canvas as centroids." }, { "code": null, "e": 1861, "s": 1799, "text": "Calculate the distance of each data point from the centroids." }, { "code": null, "e": 1947, "s": 1861, "text": "Allocate the data point to a cluster where its distance from the centroid is minimum." }, { "code": null, "e": 1978, "s": 1947, "text": "Recalculate the new centroids." }, { "code": null, "e": 2042, "s": 1978, "text": "Recalculate the distance from each data point to new centroids." }, { "code": null, "e": 2112, "s": 2042, "text": "Repeat the steps from point 3, till no data point change its cluster." }, { "code": null, "e": 2344, "s": 2112, "text": "K means divides the data into various clusters and the number of clusters is equal to the value of k i.e. if k=3 then the data will be divided into 3 clusters. each value of k is a centroid around which the data points will gather." }, { "code": null, "e": 2580, "s": 2344, "text": "Distance calculation can be done by any of the four methods i.e. Euclidean, Manhattan, Correlation, and Eisen. Here we are using the Euclidean method for distance measurement i.e. 
distance between two points (x1,y1) and (x2,y2) will be" }, { "code": null, "e": 2667, "s": 2580, "text": "Here are some pros and cons for using k means clustering algorithm in machine learning" }, { "code": null, "e": 2673, "s": 2667, "text": "Pros:" }, { "code": null, "e": 2775, "s": 2673, "text": "A relatively simple algorithm to applyFlexible and work well with large dataConvergence is guaranteed" }, { "code": null, "e": 2814, "s": 2775, "text": "A relatively simple algorithm to apply" }, { "code": null, "e": 2853, "s": 2814, "text": "Flexible and work well with large data" }, { "code": null, "e": 2879, "s": 2853, "text": "Convergence is guaranteed" }, { "code": null, "e": 2885, "s": 2879, "text": "Cons:" }, { "code": null, "e": 3002, "s": 2885, "text": "We have to manually define the number of centroidsNot immune to outliersDepends on initial values of centroid chosen" }, { "code": null, "e": 3053, "s": 3002, "text": "We have to manually define the number of centroids" }, { "code": null, "e": 3076, "s": 3053, "text": "Not immune to outliers" }, { "code": null, "e": 3121, "s": 3076, "text": "Depends on initial values of centroid chosen" }, { "code": null, "e": 3244, "s": 3121, "text": "Now, we will try to create an algorithm in python language. Here, we will call some basic and important libraries to work." }, { "code": null, "e": 3348, "s": 3244, "text": "import pandas as pdimport numpy as np import matplotlib.pyplot as pltfrom sklearn.cluster import KMeans" }, { "code": null, "e": 3545, "s": 3348, "text": "sklearn is one of the most important packages in machine learning and it provides the maximum number of functions and algorithms. To use k means clustering we need to call it from sklearn package." }, { "code": null, "e": 3619, "s": 3545, "text": "To get a sample dataset, we can generate a random sequence by using numpy" }, { "code": null, "e": 3647, "s": 3619, "text": "x1=10*np.random.rand(100,2)" }, { "code": null, "e": 3788, "s": 3647, "text": "By the above line, we get a random code having 100 points and they are into an array of shape (100,2), we can check it by using this command" }, { "code": null, "e": 3797, "s": 3788, "text": "x1.shape" }, { "code": null, "e": 3995, "s": 3797, "text": "Now, we will train our algorithm by processing all the data. Here the number of clusters will be 3. This number is given arbitrarily by us. we can choose any number to define the number of clusters" }, { "code": null, "e": 4035, "s": 3995, "text": "kmean=KMeans(n_clusters=3)kmean.fit(x1)" }, { "code": null, "e": 4095, "s": 4035, "text": "we can see our three centers by using the following command" }, { "code": null, "e": 4118, "s": 4095, "text": "kmean.cluster_centers_" }, { "code": null, "e": 4222, "s": 4118, "text": "To check the labels created, we can use the following command. It gives the labels created for our data" }, { "code": null, "e": 4236, "s": 4222, "text": "kmean.labels_" }, { "code": null, "e": 4639, "s": 4236, "text": "We can see that because of manually choosing the number of centroids our clusters are not very much segregated. A cluster have a similar set of information and our aim is to make the cluster as unique as they could. It helps in extracting more information from our given dataset. Thus we can plot an elbow curve which can clearly depict a trade-off between the number of centroids and information gain." 
}, { "code": null, "e": 5014, "s": 4639, "text": "wcss = []for i in range(1,20): kmeans = KMeans(n_clusters=i,init=’k-means++’,max_iter=300,n_init=10,random_state=0) kmeans.fit(x1) wcss.append(kmeans.inertia_) print(“Cluster”, i, “Inertia”, kmeans.inertia_)plt.plot(range(1,20),wcss)plt.title(‘The Elbow Curve’)plt.xlabel(‘Number of clusters’)plt.ylabel(‘WCSS’) ##WCSS stands for total within-cluster sum of squareplt.show()" }, { "code": null, "e": 5333, "s": 5014, "text": "You can see there is K-means++ as the method than conventional k-means. The former method overcomes the disadvantage of wrong selection of centroids which usually happens because of manual selection. Sometimes the centroids chosen are too far away from the points that they don’t have any data points in their cluster." }, { "code": null, "e": 5439, "s": 5333, "text": "The output graph can help us in determining the number of centroids to be chosen for a better clustering." }, { "code": null, "e": 5824, "s": 5439, "text": "The curve clearly states that if we choose the number of centroids as 7, 8, 9, or 10 then we have a better chance of fine clustering. The data is so scattered that we can even select 14 as the number of centroids. This curve helps in deciding between computational expense and knowledge gain from a dataset. Now, let us select the centroids as 10 so that we have 10 separate clusters." }, { "code": null, "e": 5836, "s": 5824, "text": "Conclusion:" } ]
CSS | blur() Function - GeeksforGeeks
07 Aug, 2019

The blur() function is an inbuilt function that is used to apply a blur-effect filter to an image.

Syntax:

blur( radius )

Parameters: This function accepts a single parameter, radius, which holds the blur radius in the form of a length. This parameter defines the value of the standard deviation of the Gaussian function.

The example below illustrates the blur() function in CSS.

Example:

<!DOCTYPE html>
<html>
<head>
    <title>CSS blur() Function</title>
    <style>
        h1 {
            color: green;
        }
        body {
            text-align: center;
        }
        .blur_effect {
            filter: blur(5px);
        }
    </style>
</head>
<body>
    <h1>GeeksforGeeks</h1>
    <h2>CSS blur() function</h2>
    <img class="blur_effect" src=
"https://media.geeksforgeeks.org/wp-content/cdn-uploads/20190710102234/download3.png"
         alt="GeeksforGeeks logo">
</body>
</html>

Output: the logo image is rendered with a 5px Gaussian blur.

Supported Browsers: The browsers supported by the blur() function are listed below:

- Google Chrome
- Internet Explorer
- Firefox
- Safari
- Opera
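As a small extra illustration (mine, not part of the original article), the radius can be any non-negative length, and the filter can be transitioned like most other CSS properties. The class names below are hypothetical:

```css
.soft-blur  { filter: blur(2px); }
.heavy-blur { filter: blur(10px); }

/* Blur an image smoothly on hover */
.blur-on-hover {
    transition: filter 0.3s ease;
}
.blur-on-hover:hover {
    filter: blur(5px);
}
```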
[ { "code": null, "e": 25296, "s": 25268, "text": "\n07 Aug, 2019" }, { "code": null, "e": 25400, "s": 25296, "text": "The blur() function is an inbuilt function which is used to apply a blurred effect filter on the image." }, { "code": null, "e": 25408, "s": 25400, "text": "Syntax:" }, { "code": null, "e": 25423, "s": 25408, "text": "blur( radius )" }, { "code": null, "e": 25613, "s": 25423, "text": "Parameters: This function accepts single parameter radius which holds the blur radius in form of length. This parameter defines the value of the standard deviation to the Gaussian function." }, { "code": null, "e": 25667, "s": 25613, "text": "Below example illustrates the blur() function in CSS:" }, { "code": null, "e": 25676, "s": 25667, "text": "Example:" }, { "code": "<!DOCTYPE html> <html> <head> <title>CSS blur() Function</title> <style> h1 { color:green; } body { text-align:center; } .blur_effect { filter: blur(5px); } </style></head> <body> <h1>GeeksforGeeks</h1> <h2>CSS blur() function</h2> <img class=\"blur_effect\" src= \"https://media.geeksforgeeks.org/wp-content/cdn-uploads/20190710102234/download3.png\" alt=\"GeeksforGeeks logo\"> </body> </html> ", "e": 26195, "s": 25676, "text": null }, { "code": null, "e": 26203, "s": 26195, "text": "Output:" }, { "code": null, "e": 26283, "s": 26203, "text": "Supported Browsers: The browsers supported by blur() function are listed below:" }, { "code": null, "e": 26297, "s": 26283, "text": "Google Chrome" }, { "code": null, "e": 26315, "s": 26297, "text": "Internet Explorer" }, { "code": null, "e": 26323, "s": 26315, "text": "Firefox" }, { "code": null, "e": 26330, "s": 26323, "text": "Safari" }, { "code": null, "e": 26336, "s": 26330, "text": "Opera" }, { "code": null, "e": 26350, "s": 26336, "text": "CSS-Functions" }, { "code": null, "e": 26354, "s": 26350, "text": "CSS" }, { "code": null, "e": 26371, "s": 26354, "text": "Web Technologies" }, { "code": null, "e": 26469, "s": 26371, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26478, "s": 26469, "text": "Comments" }, { "code": null, "e": 26491, "s": 26478, "text": "Old Comments" }, { "code": null, "e": 26528, "s": 26491, "text": "Design a web page using HTML and CSS" }, { "code": null, "e": 26557, "s": 26528, "text": "Form validation using jQuery" }, { "code": null, "e": 26596, "s": 26557, "text": "How to set space between the flexbox ?" }, { "code": null, "e": 26638, "s": 26596, "text": "Search Bar using HTML, CSS and JavaScript" }, { "code": null, "e": 26673, "s": 26638, "text": "How to style a checkbox using CSS?" }, { "code": null, "e": 26715, "s": 26673, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 26748, "s": 26715, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 26791, "s": 26748, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 26836, "s": 26791, "text": "Convert a string to an integer in JavaScript" } ]
Sum triangle from array - GeeksforGeeks
10 Jun, 2021

Given an array of integers, print a sum triangle from it such that the first level has all array elements. From then on, each level has one element fewer than the previous level, and each element at a level is the sum of two consecutive elements in the previous level.

Example:

Input : A = {1, 2, 3, 4, 5}
Output : [48]
         [20, 28]
         [8, 12, 16]
         [3, 5, 7, 9]
         [1, 2, 3, 4, 5]

Explanation :
Here, [48]
      [20, 28]        --> (20 + 28 = 48)
      [8, 12, 16]     --> (8 + 12 = 20, 12 + 16 = 28)
      [3, 5, 7, 9]    --> (3 + 5 = 8, 5 + 7 = 12, 7 + 9 = 16)
      [1, 2, 3, 4, 5] --> (1 + 2 = 3, 2 + 3 = 5, 3 + 4 = 7, 4 + 5 = 9)

Approach: Recursion is the key.

1. At each iteration, create a new array which contains the sum of consecutive elements in the array passed as a parameter.
2. Make a recursive call and pass the newly created array.
3. While backtracking, print the array (for printing in reverse order).

Below is the implementation of the above approach.

C++:

// C++ program to create Special triangle.
#include <bits/stdc++.h>
using namespace std;

// Function to generate Special Triangle
void printTriangle(int A[], int n)
{
    // Base case
    if (n < 1)
        return;

    // Creating new array which contains the
    // sum of consecutive elements in
    // the array passed as parameter.
    int temp[n - 1];
    for (int i = 0; i < n - 1; i++) {
        int x = A[i] + A[i + 1];
        temp[i] = x;
    }

    // Make a recursive call and pass
    // the newly created array
    printTriangle(temp, n - 1);

    // Print current array in the end so
    // that smaller arrays are printed first
    for (int i = 0; i < n; i++) {
        if (i == n - 1)
            cout << A[i] << " ";
        else
            cout << A[i] << ", ";
    }
    cout << endl;
}

// Driver function
int main()
{
    int A[] = { 1, 2, 3, 4, 5 };
    int n = sizeof(A) / sizeof(A[0]);
    printTriangle(A, n);
}
// This code is contributed by Smitha Dinesh Semwal

Java:

// Java program to create Special triangle.
import java.util.*;
import java.lang.*;

public class ConstructTriangle {
    // Function to generate Special Triangle.
    public static void printTriangle(int[] A)
    {
        // Base case
        if (A.length < 1)
            return;

        // Creating new array which contains the
        // sum of consecutive elements in
        // the array passed as parameter.
        int[] temp = new int[A.length - 1];
        for (int i = 0; i < A.length - 1; i++) {
            int x = A[i] + A[i + 1];
            temp[i] = x;
        }

        // Make a recursive call and pass
        // the newly created array
        printTriangle(temp);

        // Print current array in the end so
        // that smaller arrays are printed first
        System.out.println(Arrays.toString(A));
    }

    // Driver function
    public static void main(String[] args)
    {
        int[] A = { 1, 2, 3, 4, 5 };
        printTriangle(A);
    }
}

Python3:

# Python3 program to create Special triangle.
# Function to generate Special Triangle.
def printTriangle(A):

    # Base case
    if (len(A) < 1):
        return

    # Creating new array which contains the
    # sum of consecutive elements in
    # the array passed as parameter.
    temp = [0] * (len(A) - 1)
    for i in range(0, len(A) - 1):
        x = A[i] + A[i + 1]
        temp[i] = x

    # Make a recursive call and pass
    # the newly created array
    printTriangle(temp)

    # Print current array in the end so
    # that smaller arrays are printed first
    print(A)

# Driver function
A = [1, 2, 3, 4, 5]
printTriangle(A)
# This code is contributed by Smitha Dinesh Semwal

C#:

// C# program to create Special triangle.
using System;

public class ConstructTriangle {
    // Function to generate Special Triangle
    static void printTriangle(int[] A, int n)
    {
        // Base case
        if (n < 1)
            return;

        // Creating new array which contains the
        // sum of consecutive elements in
        // the array passed as parameter.
        int[] temp = new int[n - 1];
        for (int i = 0; i < n - 1; i++) {
            int x = A[i] + A[i + 1];
            temp[i] = x;
        }

        // Make a recursive call and pass
        // the newly created array
        printTriangle(temp, n - 1);

        // Print current array in the end so
        // that smaller arrays are printed first
        for (int i = 0; i < n; i++) {
            if (i == n - 1)
                Console.Write(A[i] + " ");
            else
                Console.Write(A[i] + ", ");
        }
        Console.WriteLine();
    }

    // Driver function
    public static void Main()
    {
        int[] A = { 1, 2, 3, 4, 5 };
        int n = A.Length;
        printTriangle(A, n);
    }
}
// This code is contributed by 29AjayKumar

PHP:

<?php
// PHP program to create
// Special triangle.

// Function to generate
// Special Triangle
function printTriangle($A, $n)
{
    // Base case
    if ($n < 1)
        return;

    // Creating new array which
    // contains the sum of
    // consecutive elements in
    // the array passed as parameter.
    $temp[$n - 1] = 0;
    for ($i = 0; $i < $n - 1; $i++) {
        $x = $A[$i] + $A[$i + 1];
        $temp[$i] = $x;
    }

    // Make a recursive call and
    // pass the newly created array
    printTriangle($temp, $n - 1);

    // Print current array in the
    // end so that smaller arrays
    // are printed first
    for ($i = 0; $i < $n; $i++) {
        if ($i == $n - 1)
            echo $A[$i], " ";
        else
            echo $A[$i], ", ";
    }
    echo "\n";
}

// Driver Code
$A = array(1, 2, 3, 4, 5);
$n = sizeof($A);
printTriangle($A, $n);

// This code is contributed
// by nitin mittal.
?>

Javascript:

<script>
// JavaScript program to create Special triangle.

// Function to generate Special Triangle
function printTriangle(A, n)
{
    // Base case
    if (n < 1)
        return;

    // Creating new array which contains the
    // sum of consecutive elements in
    // the array passed as parameter.
    var temp = new Array(n - 1);
    for (var i = 0; i < n - 1; i++) {
        var x = A[i] + A[i + 1];
        temp[i] = x;
    }

    // Make a recursive call and pass
    // the newly created array
    printTriangle(temp, n - 1);

    // Print current array in the end so
    // that smaller arrays are printed first
    for (var i = 0; i < n; i++) {
        if (i == n - 1)
            document.write(A[i] + " ");
        else
            document.write(A[i] + ", ");
    }
    document.write("<br>");
}

// Driver function
var A = [1, 2, 3, 4, 5];
var n = A.length;
printTriangle(A, n);
</script>

Output:

48
20, 28
8, 12, 16
3, 5, 7, 9
1, 2, 3, 4, 5
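As a quick sanity check (my example, not from the original article), calling the Python3 function above on a different input behaves as expected:

```python
printTriangle([2, 4, 6])
# Prints, largest level first:
# [16]
# [6, 10]
# [2, 4, 6]
```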
[ { "code": null, "e": 26175, "s": 26147, "text": "\n10 Jun, 2021" }, { "code": null, "e": 26464, "s": 26175, "text": "Given an array of integers, print a sum triangle from it such that the first level has all array elements. From then, at each level number of elements is one less than the previous level and elements at the level is be the Sum of consecutive two elements in the previous level. Example : " }, { "code": null, "e": 26842, "s": 26464, "text": "Input : A = {1, 2, 3, 4, 5}\nOutput : [48]\n [20, 28] \n [8, 12, 16] \n [3, 5, 7, 9] \n [1, 2, 3, 4, 5] \n\nExplanation :\nHere, [48]\n [20, 28] -->(20 + 28 = 48)\n [8, 12, 16] -->(8 + 12 = 20, 12 + 16 = 28)\n [3, 5, 7, 9] -->(3 + 5 = 8, 5 + 7 = 12, 7 + 9 = 16)\n [1, 2, 3, 4, 5] -->(1 + 2 = 3, 2 + 3 = 5, 3 + 4 = 7, 4 + 5 = 9)" }, { "code": null, "e": 26857, "s": 26844, "text": "Approach : " }, { "code": null, "e": 27141, "s": 26857, "text": "Recursion is the key. At each iteration create a new array which contains the Sum of consecutive elements in the array passes as parameter.Make a recursive call and pass the newly created array in the previous step.While back tracking print the array (for printing in reverse order)." }, { "code": null, "e": 27281, "s": 27141, "text": "Recursion is the key. At each iteration create a new array which contains the Sum of consecutive elements in the array passes as parameter." }, { "code": null, "e": 27358, "s": 27281, "text": "Make a recursive call and pass the newly created array in the previous step." }, { "code": null, "e": 27427, "s": 27358, "text": "While back tracking print the array (for printing in reverse order)." }, { "code": null, "e": 27479, "s": 27427, "text": "Below is the implementation of the above approach :" }, { "code": null, "e": 27483, "s": 27479, "text": "C++" }, { "code": null, "e": 27488, "s": 27483, "text": "Java" }, { "code": null, "e": 27496, "s": 27488, "text": "Python3" }, { "code": null, "e": 27499, "s": 27496, "text": "C#" }, { "code": null, "e": 27503, "s": 27499, "text": "PHP" }, { "code": null, "e": 27514, "s": 27503, "text": "Javascript" }, { "code": "// C++ program to create Special triangle.#include<bits/stdc++.h>using namespace std; // Function to generate Special Trianglevoid printTriangle(int A[] , int n) { // Base case if (n < 1) return; // Creating new array which contains the // Sum of consecutive elements in // the array passes as parameter. int temp[n - 1]; for (int i = 0; i < n - 1; i++) { int x = A[i] + A[i + 1]; temp[i] = x; } // Make a recursive call and pass // the newly created array printTriangle(temp, n - 1); // Print current array in the end so // that smaller arrays are printed first for (int i = 0; i < n ; i++) { if(i == n - 1) cout << A[i] << \" \"; else cout << A[i] << \", \"; } cout << endl; } // Driver function int main() { int A[] = { 1, 2, 3, 4, 5 }; int n = sizeof(A) / sizeof(A[0]); printTriangle(A, n); } // This code is contributed by Smitha Dinesh Semwal", "e": 28655, "s": 27514, "text": null }, { "code": "// Java program to create Special triangle.import java.util.*;import java.lang.*; public class ConstructTriangle{ // Function to generate Special Triangle. public static void printTriangle(int[] A) { // Base case if (A.length < 1) return; // Creating new array which contains the // Sum of consecutive elements in // the array passes as parameter. 
int[] temp = new int[A.length - 1]; for (int i = 0; i < A.length - 1; i++) { int x = A[i] + A[i + 1]; temp[i] = x; } // Make a recursive call and pass // the newly created array printTriangle(temp); // Print current array in the end so // that smaller arrays are printed first System.out.println(Arrays.toString(A)); } // Driver function public static void main(String[] args) { int[] A = { 1, 2, 3, 4, 5 }; printTriangle(A); }}", "e": 29622, "s": 28655, "text": null }, { "code": "# Python3 program to create Special triangle.# Function to generate Special Triangle.def printTriangle(A): # Base case if (len(A) < 1): return # Creating new array which contains the # Sum of consecutive elements in # the array passes as parameter. temp = [0] * (len(A) - 1) for i in range( 0, len(A) - 1): x = A[i] + A[i + 1] temp[i] = x # Make a recursive call and pass # the newly created array printTriangle(temp) # Print current array in the end so # that smaller arrays are printed first print(A) # Driver functionA = [ 1, 2, 3, 4, 5 ]printTriangle(A) # This code is contributed by Smitha Dinesh Semwal", "e": 30416, "s": 29622, "text": null }, { "code": "// C# program to create Special triangle. using System; public class ConstructTriangle{// Function to generate Special Trianglestatic void printTriangle(int []A, int n) { // Base case if (n < 1) return; // Creating new array which contains the // Sum of consecutive elements in // the array passes as parameter. int []temp = new int[n - 1]; for (int i = 0; i < n - 1; i++) { int x = A[i] + A[i + 1]; temp[i] = x; } // Make a recursive call and pass // the newly created array printTriangle(temp, n - 1); // Print current array in the end so // that smaller arrays are printed first for (int i = 0; i < n ; i++) { if(i == n - 1) Console.Write(A[i] + \" \"); else Console.Write(A[i] + \", \"); } Console.WriteLine(); } // Driver function public static void Main() { int[] A = { 1, 2, 3, 4, 5 }; int n = A.Length; printTriangle(A,n); }} //This code contributed by 29AjayKumar", "e": 31594, "s": 30416, "text": null }, { "code": "<?php// PHP program to create// Special triangle. // Function to generate// Special Trianglefunction printTriangle($A , $n) { // Base case if ($n < 1) return; // Creating new array which // contains the Sum of // consecutive elements in // the array passes as parameter. $temp[$n - 1] = 0; for ($i = 0; $i < $n - 1; $i++) { $x = $A[$i] + $A[$i + 1]; $temp[$i] = $x; } // Make a recursive call and // pass the newly created array printTriangle($temp, $n - 1); // Print current array in the // end so that smaller arrays // are printed first for ($i = 0; $i < $n ; $i++) { if($i == $n - 1) echo $A[$i] , \" \"; else echo $A[$i] , \", \"; } echo \"\\n\"; } // Driver Code$A = array( 1, 2, 3, 4, 5 );$n = sizeof($A); printTriangle($A, $n); // This code is contributed// by nitin mittal.?>", "e": 32642, "s": 31594, "text": null }, { "code": "<script> // JavaScript program to create Special triangle. // Function to generate Special Triangle function printTriangle(A, n) { // Base case if (n < 1) return; // Creating new array which contains the // Sum of consecutive elements in // the array passes as parameter. 
var temp = new Array(n - 1); for (var i = 0; i < n - 1; i++) { var x = A[i] + A[i + 1]; temp[i] = x; } // Make a recursive call and pass // the newly created array printTriangle(temp, n - 1); // Print current array in the end so // that smaller arrays are printed first for (var i = 0; i < n ; i++) { if(i == n - 1) document.write( A[i] + \" \"); else document.write( A[i] + \", \"); } document.write(\"<br>\"); } // Driver function var A = [ 1, 2, 3, 4, 5 ]; var n = A.length; printTriangle(A,n); </script>", "e": 33694, "s": 32642, "text": null }, { "code": null, "e": 33702, "s": 33694, "text": "Output:" }, { "code": null, "e": 33747, "s": 33702, "text": "48\n20, 28\n8, 12, 16\n3, 5, 7, 9\n1, 2, 3, 4, 5" }, { "code": null, "e": 33760, "s": 33747, "text": "nitin mittal" }, { "code": null, "e": 33772, "s": 33760, "text": "29AjayKumar" }, { "code": null, "e": 33783, "s": 33772, "text": "bunnyram19" }, { "code": null, "e": 33790, "s": 33783, "text": "Arrays" }, { "code": null, "e": 33800, "s": 33790, "text": "Recursion" }, { "code": null, "e": 33807, "s": 33800, "text": "Arrays" }, { "code": null, "e": 33817, "s": 33807, "text": "Recursion" }, { "code": null, "e": 33915, "s": 33817, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 33946, "s": 33915, "text": "Chocolate Distribution Problem" }, { "code": null, "e": 33984, "s": 33946, "text": "Reversal algorithm for array rotation" }, { "code": null, "e": 34009, "s": 33984, "text": "Window Sliding Technique" }, { "code": null, "e": 34030, "s": 34009, "text": "Next Greater Element" }, { "code": null, "e": 34088, "s": 34030, "text": "Find duplicates in O(n) time and O(1) extra space | Set 1" }, { "code": null, "e": 34148, "s": 34088, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 34158, "s": 34148, "text": "Recursion" }, { "code": null, "e": 34185, "s": 34158, "text": "Program for Tower of Hanoi" }, { "code": null, "e": 34213, "s": 34185, "text": "Backtracking | Introduction" } ]
How to get only files but not the folders with Get-ChildItem using PowerShell?
To get only files, use the NOT operator (!) on the Directory attribute parameter.

Get-ChildItem D:\Temp\ -Attributes !Directory

Directory: D:\Temp

Mode   LastWriteTime      Length Name
----   -------------      ------ ----
-a---- 07-05-2018 23:00      301 cars.xml
-a---- 29-12-2017 15:16     4526 healthcheck.htm
-a---- 29-12-2017 15:16     4526 healthcheck1.htm
-ar--- 13-01-2020 18:19        0 Readonlyfile.txt
-a---- 08-12-2017 10:24    48362 servicereport.htm
-a---- 08-12-2017 10:24    48362 servicereport1.htm
-a---- 08-12-2017 10:16      393 style.css
-a---- 12-12-2017 23:04     1034 tes.htmoutput.htm
-a---- 08-12-2017 11:29     7974 Test.xlsx
-a---- 25-10-2017 08:13      104 testcsv.csv

You can combine different options to get the desired result. To get only hidden files (but not folders):

Get-ChildItem D:\Temp\ -Attributes !Directory -Hidden

To get only read-only system files:

Get-ChildItem D:\Temp\ -Attributes !Directory -System -Readonly
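On PowerShell 3.0 and later, the -File switch is a simpler way to express the same filter; this small sketch is my addition, not from the original article:

```powershell
# Equivalent to -Attributes !Directory
Get-ChildItem D:\Temp\ -File

# Recurse into subfolders, still returning only files
Get-ChildItem D:\Temp\ -File -Recurse
```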
[ { "code": null, "e": 1144, "s": 1062, "text": "To get only files, you need to use NOT operator in Directory attribute parameter." }, { "code": null, "e": 1190, "s": 1144, "text": "Get-ChildItem D:\\Temp\\ -Attributes !Directory" }, { "code": null, "e": 1952, "s": 1190, "text": "irectory: D:\\Temp\nMode LastWriteTime Length Name\n---- ------------- ------ ----\n-a---- 07-05-2018 23:00 301 cars.xml\n-a---- 29-12-2017 15:16 4526 healthcheck.htm\n-a---- 29-12-2017 15:16 4526 healthcheck1.htm\n-ar--- 13-01-2020 18:19 0 Readonlyfile.txt\n-a---- 08-12-2017 10:24 48362 servicereport.htm\n-a---- 08-12-2017 10:24 48362 servicereport1.htm\n-a---- 08-12-2017 10:16 393 style.css\n-a---- 12-12-2017 23:04 1034 tes.htmoutput.htm\n-a---- 08-12-2017 11:29 7974 Test.xlsx\n-a---- 25-10-2017 08:13 104 testcsv.csv" }, { "code": null, "e": 2068, "s": 1952, "text": "You can combine different options together to get the desired result. To get only hidden files but not the folders." }, { "code": null, "e": 2123, "s": 2068, "text": "Get-ChildItem D:\\Temp\\ -Attributes !Directory -Hidden\n" }, { "code": null, "e": 2159, "s": 2123, "text": "To get only, system Readonly files," }, { "code": null, "e": 2223, "s": 2159, "text": "Get-ChildItem D:\\Temp\\ -Attributes !Directory –System -Readonly" } ]
Google SWE Internship 2021 Interview Experience - GeeksforGeeks
25 Aug, 2020

Hi Geeks, I applied for the Google SWE Internship 2021 (India), was selected, and was invited to Google's Online Challenge round.

Application: I applied through LinkedIn, which is really a great platform for opportunities, and I received the mail from Google on 12 Aug 2020. It was a great experience for me. I am here to share the questions that were asked in the coding challenge. I hope this helps you.

Round 1:

Question 1: Array queries. You are given an array of integers of length N, and you must perform the following five types of query on the given array:

1. Left: Perform one cyclic left rotation.
2. Right: Perform one cyclic right rotation.
3. Update Pos Val: Update the value at index Pos of the array to Val.
4. Increment Pos: Increment the value at index Pos of the array by 1.
5. ? Pos: Print the current value at index Pos.

All the queries are performed considering 1-based indexing.

Note:

- One cyclic left rotation changes (arr1, arr2, arr3, ..., arrN-1, arrN) to (arr2, arr3, ..., arrN-1, arrN, arr1).
- One cyclic right rotation changes (arr1, arr2, arr3, ..., arrN-1, arrN) to (arrN, arr1, arr2, arr3, ..., arrN-1).

Input format:

- The first line contains an integer N denoting the length of the array.
- The second line contains N space-separated integers denoting the elements of the array.
- The third line contains an integer Q denoting the number of queries.
- The next Q lines each contain one query of the described types.

Output format: For each query of type 5, print the output on a new line.

Constraints:

2 ≤ N ≤ 5 x 10^5
2 ≤ Q ≤ 5 x 10^5
1 ≤ Pos ≤ N
0 ≤ arri, Val ≤ 10^5

It is guaranteed that at least one query is of type 5.

Sample Input 1:

10
0 3 3 8 0 6 9 3 2 8
10
Increment 3
Increment 1
Left
Increment 5
Left
? 9
Sample Output 2 2 4 2 Google Marketing Internship Interview Experiences Google Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. JPMorgan Chase & Co. Code for Good Internship Interview Experience 2021 Freshworks/Freshdesk Interview Experience for Software Developer (On-Campus) Zoho Interview Experience (Off-Campus ) 2022 Resume Writing For Internship Difference Between ON Page and OFF Page SEO Amazon Interview Questions Commonly Asked Java Programming Interview Questions | Set 2 Amazon Interview Experience for SDE-1 (Off-Campus) Amazon AWS Interview Experience for SDE-1 Zoho Interview | Set 3 (Off-Campus)
[ { "code": null, "e": 26301, "s": 26273, "text": "\n25 Aug, 2020" }, { "code": null, "e": 26440, "s": 26301, "text": "Hi Geeks, I have applied for Google SWE Internship 2021 (India) and I have been selected and invited for Google’s Online Challenge Round " }, { "code": null, "e": 26620, "s": 26440, "text": "Application: I have applied through LinkedIn, it is really a great platform for opportunities and I received mail from Google on 12 Aug 2020 and it was a great experience for me. " }, { "code": null, "e": 26717, "s": 26620, "text": "I am here to share questions that have been asked in coding challenges. I hope I will help you. " }, { "code": null, "e": 26727, "s": 26717, "text": "Round 1: " }, { "code": null, "e": 26881, "s": 26727, "text": "Question 1: Array queries: You are given an array of integers whose length is N, you must perform the following five types of query on the given array : " }, { "code": null, "e": 27134, "s": 26881, "text": "Left: Perform one cyclic left rotation.Right: Perform one cyclic right rotation.Update Pos Value: Update the value at index Pos of the array by Val.Increment Pos: Increment value at index Pos of the array by 1.Pos: Print the current value at index Pos." }, { "code": null, "e": 27174, "s": 27134, "text": "Left: Perform one cyclic left rotation." }, { "code": null, "e": 27216, "s": 27174, "text": "Right: Perform one cyclic right rotation." }, { "code": null, "e": 27285, "s": 27216, "text": "Update Pos Value: Update the value at index Pos of the array by Val." }, { "code": null, "e": 27348, "s": 27285, "text": "Increment Pos: Increment value at index Pos of the array by 1." }, { "code": null, "e": 27391, "s": 27348, "text": "Pos: Print the current value at index Pos." }, { "code": null, "e": 27452, "s": 27391, "text": "All the queries are performed considering 1-based indexing. " }, { "code": null, "e": 27460, "s": 27452, "text": "Note: " }, { "code": null, "e": 27576, "s": 27460, "text": "One cyclic left rotation changes (arr1, arr2, arr3, . . . , arrN-1, arrN) to (arr2, arr3, . . .arrN-1, arrN, arr1)." }, { "code": null, "e": 27693, "s": 27576, "text": "One cyclic right rotation changes (arr1, arr2, arr3, . . . , arrN-1, arrN) to (arrN, arr1, arr2, arr3, . . .arrN-1)." }, { "code": null, "e": 27707, "s": 27693, "text": "Input format " }, { "code": null, "e": 27778, "s": 27707, "text": "The first line contains an integer N denoting the length of the array." }, { "code": null, "e": 27866, "s": 27778, "text": "The second line contains N space-separated integers denoting the elements of the array." }, { "code": null, "e": 27935, "s": 27866, "text": "The third line contains an integer Q denoting the number of queries." }, { "code": null, "e": 27986, "s": 27935, "text": "Next, Q lines contain the described type of query." }, { "code": null, "e": 28059, "s": 27986, "text": "Output format: For each query of type 5, print the output in a new line." }, { "code": null, "e": 28071, "s": 28059, "text": "Constraints" }, { "code": null, "e": 28137, "s": 28071, "text": "2 ≤ N ≤ 5 x 105\n2 ≤ Q ≤ 5 x 105\n1 ≤ Pos ≤ N\n0 ≤ arri , Val ≤ 105\n" }, { "code": null, "e": 28192, "s": 28137, "text": "It is guaranteed that at least one query is of type 5." }, { "code": null, "e": 28207, "s": 28192, "text": "Sample Input 1" }, { "code": null, "e": 28291, "s": 28207, "text": "10\n0 3 3 8 0 6 9 3 2 8\n10\nIncrement 3\nIncrement 1\nLeft\nIncrement 5\nLeft\n? 
9\nRight\n\n" }, { "code": null, "e": 28307, "s": 28291, "text": "Sample Output 1" }, { "code": null, "e": 28313, "s": 28307, "text": "1\n9\n\n" }, { "code": null, "e": 28480, "s": 28313, "text": "Question 2:There are N-words in a dictionary such that each word is of fixed length M and consists of only lowercase English letters that are (‘a’, ‘b’, ....... ‘z’)." }, { "code": null, "e": 28726, "s": 28480, "text": "A query word denoted by Q. The length of query word in M. These words contain lowercase English letters but at some places instead of a letter between ‘a’, ‘b’, ....... ‘z’ there is ‘?’ .Refer to the Sample input section to understand this case." }, { "code": null, "e": 28984, "s": 28726, "text": "A match count of Q, denoted by match_count(Q), is the count of words that are is the dictionary and contain the same English letters (excluding a letter that can be in the position of ?) in the same position as the letters are there are in the query word Q." }, { "code": null, "e": 29129, "s": 28984, "text": "In other words, a word in the dictionary can contain any letters at the position ‘?’ but the remaining alphabets must match with the query word." }, { "code": null, "e": 29206, "s": 29129, "text": "You are given a query word Q and you are required to compute match_count(Q)." }, { "code": null, "e": 29219, "s": 29206, "text": "Input format" }, { "code": null, "e": 29362, "s": 29219, "text": "First-line contains two space-separated integers M and N denoting the number of words in the dictionary and length of each word respectively." }, { "code": null, "e": 29422, "s": 29362, "text": "The next N lines contain one word each from the dictionary." }, { "code": null, "e": 29535, "s": 29422, "text": "The next line contains an integer Q denoting the number of query words for which u have to compute match_count()" }, { "code": null, "e": 29581, "s": 29535, "text": "The next Q lines contain one query word each." }, { "code": null, "e": 29595, "s": 29581, "text": "Output format" }, { "code": null, "e": 29668, "s": 29595, "text": "For each query word, print match_count for specific words in a new line." }, { "code": null, "e": 29680, "s": 29668, "text": "Constraints" }, { "code": null, "e": 29718, "s": 29680, "text": "1 ≤ N ≤ 5 x 104\n1 ≤ M ≤ 7\n1 ≤ Q ≤ 105" }, { "code": null, "e": 29732, "s": 29718, "text": "Sample Input " }, { "code": null, "e": 29770, "s": 29732, "text": "5 3\ncat\nmap\nbat\nman\npen\n4\n?at\nma?\n?a?" }, { "code": null, "e": 29784, "s": 29770, "text": "Sample Output" }, { "code": null, "e": 29792, "s": 29784, "text": "2\n2\n4\n2" }, { "code": null, "e": 29799, "s": 29792, "text": "Google" }, { "code": null, "e": 29809, "s": 29799, "text": "Marketing" }, { "code": null, "e": 29820, "s": 29809, "text": "Internship" }, { "code": null, "e": 29842, "s": 29820, "text": "Interview Experiences" }, { "code": null, "e": 29849, "s": 29842, "text": "Google" }, { "code": null, "e": 29947, "s": 29849, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30019, "s": 29947, "text": "JPMorgan Chase & Co. 
Code for Good Internship Interview Experience 2021" }, { "code": null, "e": 30096, "s": 30019, "text": "Freshworks/Freshdesk Interview Experience for Software Developer (On-Campus)" }, { "code": null, "e": 30141, "s": 30096, "text": "Zoho Interview Experience (Off-Campus ) 2022" }, { "code": null, "e": 30171, "s": 30141, "text": "Resume Writing For Internship" }, { "code": null, "e": 30215, "s": 30171, "text": "Difference Between ON Page and OFF Page SEO" }, { "code": null, "e": 30242, "s": 30215, "text": "Amazon Interview Questions" }, { "code": null, "e": 30302, "s": 30242, "text": "Commonly Asked Java Programming Interview Questions | Set 2" }, { "code": null, "e": 30353, "s": 30302, "text": "Amazon Interview Experience for SDE-1 (Off-Campus)" }, { "code": null, "e": 30395, "s": 30353, "text": "Amazon AWS Interview Experience for SDE-1" } ]
MCMC Intuition for Everyone. Easy? I tried. | by Rahul Agarwal | Towards Data Science
All of us have heard about Markov Chain Monte Carlo (MCMC) at some time or other, sometimes while reading about Bayesian statistics, sometimes while working with tools like Prophet.

But MCMC is hard to understand. Whenever I read about it, I noticed that the crux is typically hidden in deep layers of mathematical noise and is not easy to decipher. I had to spend many hours to get a working understanding of the concept.

This blog post is intended to explain MCMC methods simply and to show what they are useful for. I will delve into some more applications in my next post. So let us get started.

MCMC is made up of two terms, Monte Carlo and Markov chains. Let us talk about the individual terms one by one.

In simple terms, we can think of Monte Carlo methods as simple simulation.

Monte Carlo methods derive their name from the Monte Carlo Casino in Monaco. Many card games need the probability of winning against the dealer. Sometimes calculating this probability can be mathematically complex or highly intractable. But we can always run a computer simulation that plays the whole game many times and estimates the probability as the number of wins divided by the number of games played.

So that is all you need to know about Monte Carlo methods. Yes, it is just a simple simulation technique with a fancy name.

As we have got the first part of MCMC, we also need to understand what Markov chains are. But before jumping onto Markov chains, let us learn a little bit about the Markov property.

Suppose you have a system of M possible states, and you are hopping from one state to another. Don't get confused yet. A concrete example of such a system is the weather, which jumps between hot, cold, and moderate states. Another example is the stock market, which jumps between bear, bull, and stagnant states.

The Markov property says that, given a process which is at a state Xn at a particular point in time, the probability of Xn+1 = k, where k is any of the M states the process can jump to, depends only on the state it is in at the given moment, and not on how it reached the current state. Mathematically speaking:

P(Xn+1 = k | Xn = kn, Xn-1 = kn-1, ..., X0 = k0) = P(Xn+1 = k | Xn = kn)

Intuitively, you don't care about the sequence of states the market took to reach the bull market. The probability that the next state is going to be the "bear" state is determined just by the fact that the market is currently in the "bull" state. It makes sense in the practical scheme of things too.

If a process exhibits the Markov property, then it is known as a Markov process.

Now, why is a Markov chain important? It is important because of its stationary distribution.

So what is a stationary distribution? I will try to explain a stationary distribution by actually calculating it for the example below. Assume you have a Markov process like the one below for a stock market, with a matrix Q of transition probabilities, where the entry in row Xi and column Xj defines the probability of going from state Xi to state Xj (rows and columns ordered as [bull, bear, stagnant]):

Q = | 0.90   0.075  0.025 |
    | 0.15   0.80   0.05  |
    | 0.25   0.25   0.50  |

In the above transition matrix Q, the probability that the next state will be "bull" given that the current state is "bull" is 0.9, the probability that the next state will be "bear" given that the current state is "bull" is 0.075, and so on.

Now, we start at a particular state. Let us begin at the bear state. We can define a state in vector form over [bull, bear, stagnant], so our starting state is s0 = [0, 1, 0].

We can calculate the probability distribution of the next state by multiplying the current state vector by the transition matrix:

s1 = s0 Q = [0.15, 0.8, 0.05]

See how the probabilities add up to 1. The next state distribution can be found by

s2 = s1 Q = s0 Q^2

and so on.
Eventually, you will reach a stationary state s where the chain converges, i.e.

s Q = s

For the above transition matrix Q, the stationary distribution s is

s = [0.625, 0.3125, 0.0625]

You can get the stationary distribution programmatically as:

Q = np.matrix([[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]])
init_s = np.matrix([[0, 1, 0]])
epsilon = 1
while epsilon > 10e-9:
    next_s = np.dot(init_s, Q)
    epsilon = np.sqrt(np.sum(np.square(next_s - init_s)))
    init_s = next_s
print(init_s)
------------------------------------------------------------------
matrix([[0.62499998, 0.31250002, 0.0625 ]])

You can start with any other state too; you will reach the same stationary distribution. Change the initial state in the code if you want to see that.

Now we can answer the question: why is a stationary distribution important?

The stationary distribution is important because it lets you define the probability of every state of a system at a random time. For this particular example, you can say that 62.5% of the time the market will be in a bull state, 31.25% of the time it will be a bear market, and 6.25% of the time it will be stagnant.

Intuitively, you can think of it as a random walk on a chain. You are at a state, and you decide on the next state by looking at the probability distribution of the next state given the current state. We might visit some nodes more often than others based on the node probabilities.

This is how Google solved the search problem in the early internet era. The problem was to sort pages based on page importance, and Google solved it using the PageRank algorithm. In the Google PageRank algorithm, you can think of a state as a page and of the probability of a page in the stationary distribution as its relative importance.

Woah! That was a lot of information, and we have not yet started talking about the MCMC methods. Well, if you are with me till now, we can get on to the real topic.

Before answering this question about MCMC, let me ask one question. We all know about the beta distribution, and we know its PDF. But can we draw a sample from this distribution? Can you think of a way?

MCMC provides us with ways to sample from any probability distribution. This is mostly needed when we want to sample from a posterior distribution. Bayes' theorem:

p(H|D) = p(D|H) p(H) / p(D)

Sometimes we need to sample from the posterior. But is it easy to calculate the posterior along with the normalizing constant (also called the evidence)? In most cases, we are able to find the functional form of likelihood x prior, but we are not able to calculate the evidence p(D). Why? Let us expand the evidence. If H only took 3 values:

p(D) = p(H=H1).p(D|H=H1) + p(H=H2).p(D|H=H2) + p(H=H3).p(D|H=H3)

This p(D) is easy to calculate. But what if the value of H is continuous? Would one be able to write it as simply, now that H can take infinitely many values? It would be a difficult integral to solve:

p(D) = ∫ p(H) p(D|H) dH

We want to sample from the posterior, and we want to treat p(D) as a constant.

According to Wikipedia: Markov chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability distribution based on constructing a Markov chain that has the desired distribution as its stationary distribution. The state of the chain after a number of steps is then used as a sample of the desired distribution. The quality of the sample improves as a function of the number of steps.

So let's explain this with an example. Assume that we want to sample from a Beta distribution, whose PDF is

f(x) = C x^(α-1) (1-x)^(β-1), for x in [0, 1]

where C is the normalizing constant.
C is actually some function of α and β, but I want to show that we don't really need it to sample from a beta distribution, so I am treating it as a constant.

Sampling is a somewhat tricky problem for the Beta distribution, if not intractable. In reality, you might need to work with much harder distribution functions, and sometimes you won't know the normalizing constants.

MCMC methods make life easier for us by providing algorithms that can create a Markov chain which has the Beta distribution as its stationary distribution, given that we can sample from a uniform distribution (which is relatively easy). If we start from a random state and repeatedly traverse to the next state based on some algorithm, we will end up creating a Markov chain which has the Beta distribution as its stationary distribution, and the states we are at after a long time can be used as samples from the Beta distribution.

One such MCMC algorithm is the Metropolis-Hastings algorithm.

First, what's the goal? Intuitively, what we want to do is walk around on some (lumpy) surface (our Markov chain) in such a way that the amount of time we spend in each location is proportional to the height of the surface at that location (our desired PDF from which we need to sample). So, e.g., we'd like to spend twice as much time on a hilltop that's at an altitude of 100m as we do on a nearby hill that's at an altitude of 50m. The nice thing is that we can do this even if we don't know the absolute heights of points on the surface: all we have to know are the relative heights. E.g., if one hilltop A is twice as high as hilltop B, then we'd like to spend twice as much time at A as we spend at B.

There are more complicated schemes for proposing new locations and rules for accepting them, but the basic idea is still: (1) pick a new "proposed" location; (2) figure out how much higher or lower that location is compared to your current location; (3) probabilistically stay put or move to that location in a way that respects the overall goal of spending time proportional to the height of the location.

The goal of MCMC is to draw samples from some probability distribution without having to know its exact height at any point (we don't need to know C). If the "wandering around" process is set up correctly, you can make sure that this proportionality (between time spent and the height of the distribution) is achieved.

Let us define the problem more formally now. Let s = (s1, s2, ..., sM) be the desired stationary distribution. We want to create a Markov chain that has this stationary distribution. We start with an arbitrary Markov chain with M states and transition matrix P, so that pij represents the probability of going from state i to j. Intuitively we know how to wander around this Markov chain, but it does not have the required stationary distribution (it does have some stationary distribution, which is of no use to us).

Our goal is to change the way we wander on this Markov chain so that the chain has the desired stationary distribution. To do this, we:
As we can see, our sampled beta values closely resemble the Beta distribution, and thus our MCMC chain has reached its stationary state. We created a Beta sampler in the above code, but the same concept is universally applicable to any other distribution we want to sample from.

That was a big post. Congrats if you reached the end.

In essence, MCMC methods might be complicated, but they provide us with a lot of flexibility. You can sample any distribution function using MCMC sampling. These methods are usually used to sample posterior distributions at inference time.

You can also use MCMC to solve problems with a large state space, for example the knapsack problem or decryption. You can take a look at some fun examples in my next blog post. Stay tuned.

One of the newest and best resources that you can keep an eye on is the Bayesian Statistics Specialization from the University of California.

I am going to be writing more such posts in the future too. Let me know what you think about the series. Follow me on Medium or subscribe to my blog to be informed about them. As always, I welcome feedback and constructive criticism and can be reached on Twitter @mlwhiz.

References:
Introduction to Probability, Joseph K. Blitzstein and Jessica Hwang
Wikipedia
StackExchange
[ { "code": null, "e": 346, "s": 171, "text": "All of us have heard about the Monte Carlo Markov Chain sometime or other. Sometimes while reading about Bayesian statistics. Sometimes while working with tools like Prophet." }, { "code": null, "e": 511, "s": 346, "text": "But MCMC is hard to understand. Whenever I read about it, I noticed that the crux is typically hidden in deep layers of Mathematical noise and not easy to decipher." }, { "code": null, "e": 584, "s": 511, "text": "I had to spend many hours to get a working understanding of the concept." }, { "code": null, "e": 738, "s": 584, "text": "This blog post is intended to explain MCMC methods simply and knowing what they are useful for. I will delve upon some more applications in my next post." }, { "code": null, "e": 761, "s": 738, "text": "So let us get started." }, { "code": null, "e": 872, "s": 761, "text": "MCMC is made up of two terms Monte Carlo and Markov Chains. Let us talk about the individual terms one by one." }, { "code": null, "e": 947, "s": 872, "text": "In simple terms, we can think of Monte Carlo methods as simple simulation." }, { "code": null, "e": 1088, "s": 947, "text": "Monte Carlo methods derive their name from Monte Carlo Casino in Monaco. Many card games need the probability of winning against the dealer." }, { "code": null, "e": 1347, "s": 1088, "text": "Sometimes calculating this probability can be mathematically complex or highly intractable. But we can always run a computer simulation to simulate the whole game many times and see the probability as the number of wins divided by the number of games played." }, { "code": null, "e": 1406, "s": 1347, "text": "So that is all you need to know about Monte Carlo Methods." }, { "code": null, "e": 1471, "s": 1406, "text": "Yes, it is just a simple simulation technique with a Fancy Name." }, { "code": null, "e": 1652, "s": 1471, "text": "So as we have got the first part of MCMC, we also need to understand what are Markov Chains. But, before Jumping onto Markov Chains let us learn a little bit about Markov Property." }, { "code": null, "e": 1747, "s": 1652, "text": "Suppose you have a system of M possible states, and you are hopping from one state to another." }, { "code": null, "e": 1960, "s": 1747, "text": "Don’t get confused yet. A concrete example of a system is the weather which jumps from hot to cold to moderate states. Or another system could be the stock market which jumps from Bear to Bull to stagnant states." }, { "code": null, "e": 2249, "s": 1960, "text": "Markov Property says that given a process which is at a state Xn at a particular point of time, the probability of Xn+1=k, where k is any of the M states the process can jump to, will only be dependent on which state it is at the given moment. And not on how it reached the current state." }, { "code": null, "e": 2274, "s": 2249, "text": "Mathematically speaking:" }, { "code": null, "e": 2514, "s": 2274, "text": "Intuitively, you don’t care about the sequence of states the market took to reach the bull market. The probability that the next state is going to be “bear” state is determined just by the fact that the market is currently at “bull” state." }, { "code": null, "e": 2568, "s": 2514, "text": "It makes sense too in the practical scheme of things." }, { "code": null, "e": 2649, "s": 2568, "text": "If a process exhibits the Markov Property, then it is known as a Markov Process." }, { "code": null, "e": 2687, "s": 2649, "text": "Now, Why is a Markov Chain important?" 
}, { "code": null, "e": 2743, "s": 2687, "text": "It is important because of its Stationary Distribution." }, { "code": null, "e": 2781, "s": 2743, "text": "So what is a Stationary Distribution?" }, { "code": null, "e": 2941, "s": 2781, "text": "I will try to explain stationary distribution by actually calculating it for the below example. Assume you have a Markov process like below for a stock market." }, { "code": null, "e": 2991, "s": 2941, "text": "You have a matrix of the transition probabilities" }, { "code": null, "e": 3087, "s": 2991, "text": "Which defines the probability of going from a state Xi to Xj. In the above Transition Matrix Q," }, { "code": null, "e": 3176, "s": 3087, "text": "the probability that the next state will be “bull” given the current state is “bull”=0.9" }, { "code": null, "e": 3267, "s": 3176, "text": "the probability that the next state will be “bear” given the current state is “bull”=0.075" }, { "code": null, "e": 3278, "s": 3267, "text": "And so on." }, { "code": null, "e": 3444, "s": 3278, "text": "Now, we start at a particular state. Let us begin at bear state. We can define our state as [bull, bear, stagnant] in a vector form. So our starting state is [0,1,0]" }, { "code": null, "e": 3577, "s": 3444, "text": "We can calculate the Probability distribution for the next state by multiplying the current state vector with the transition matrix." }, { "code": null, "e": 3670, "s": 3577, "text": "See how the probabilities add up to 1. And the next state distribution could be found out by" }, { "code": null, "e": 3757, "s": 3670, "text": "and so on. Eventually, you will reach a stationary state s where we will converge and:" }, { "code": null, "e": 3825, "s": 3757, "text": "For the above transition matrix Q the Stationary distribution s is:" }, { "code": null, "e": 3886, "s": 3825, "text": "You can get the stationary distribution programmatically as:" }, { "code": null, "e": 4245, "s": 3886, "text": "Q = np.matrix([[0.9,0.075,0.025],[0.15,0.8,0.05],[0.25,0.25,0.5]])init_s = np.matrix([[0, 1 , 0]])epsilon =1while epsilon>10e-9: next_s = np.dot(init_s,Q) epsilon = np.sqrt(np.sum(np.square(next_s - init_s))) init_s = next_sprint(init_s)------------------------------------------------------------------matrix([[0.62499998, 0.31250002, 0.0625 ]])" }, { "code": null, "e": 4396, "s": 4245, "text": "You can start with any other state too; you will reach the same stationary distribution. Change the initial state in the code if you want to see that." }, { "code": null, "e": 4472, "s": 4396, "text": "Now we can answer the question- why is a stationary distribution important?" }, { "code": null, "e": 4608, "s": 4472, "text": "The stationary state distribution is important because it lets you define the probability for every state of a system at a random time." }, { "code": null, "e": 4786, "s": 4608, "text": "For this particular example, you can say that 62.5% of the times market will be in a bull market state, 31.25% times it will be a bear market and 6.25% time it will be stagnant." }, { "code": null, "e": 5056, "s": 4786, "text": "Intuitively you can think of it as a random walk on a chain. You are at a state, and you decide on the next state by seeing the probability distribution of next state given the current state. We might visit some nodes more often than others based on node probabilities." }, { "code": null, "e": 5231, "s": 5056, "text": "This is how Google solved the search problem in the early internet era. 
The problem was to sort the pages based on page importance. Google solved it using Pagerank Algorithm." }, { "code": null, "e": 5391, "s": 5231, "text": "In the Google Pagerank algorithm, you might think of a state as a page and the probability of a page in the stationary distribution as its relative importance." }, { "code": null, "e": 5567, "s": 5391, "text": "Woah! That was a lot of information, and we have yet not started talking about the MCMC Methods yet. Well if you are with me till now, we can now get on to the real topic now." }, { "code": null, "e": 5771, "s": 5567, "text": "Before answering this question about MCMC, let me ask one question. We all know about beta distribution. We know its pdf function. But can we draw a sample from this distribution? Can you think of a way?" }, { "code": null, "e": 5919, "s": 5771, "text": "MCMC provides us with ways to sample from any probability distribution. This is mostly needed when we want to sample from a posterior distribution." }, { "code": null, "e": 6181, "s": 5919, "text": "The above is Bayes theorem. Sometimes we need to sample from the posterior. But is it easy to calculate the posterior along with the normalizing constant(also called evidence)? In most of the cases, we are able to find the functional form of likelihood x prior." }, { "code": null, "e": 6239, "s": 6181, "text": "But we are not able to calculate the evidence(p(D)). Why?" }, { "code": null, "e": 6267, "s": 6239, "text": "Let us expand the evidence." }, { "code": null, "e": 6292, "s": 6267, "text": "If H only took 3 values:" }, { "code": null, "e": 6357, "s": 6292, "text": "p(D) = p(H=H1).p(D|H=H1) + p(H=H2).p(D|H=H2) + p(H=H3).p(D|H=H3)" }, { "code": null, "e": 6547, "s": 6357, "text": "The P(D) was easy to calculate. What if the value of H is continuous? Would one be able to write it as simply as now H could take infinite values? It would be a difficult integral to solve." }, { "code": null, "e": 6625, "s": 6547, "text": "We want to sample from the posterior but we want to treat p(D) as a constant." }, { "code": null, "e": 6649, "s": 6625, "text": "According to Wikipedia:" }, { "code": null, "e": 7037, "s": 6649, "text": "Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability distribution based on constructing a Markov chain that has the desired distribution as its stationary distribution. The state of the chain after a number of steps is then used as a sample of the desired distribution. The quality of the sample improves as a function of the number of steps." }, { "code": null, "e": 7076, "s": 7037, "text": "So let’s explain this with an example:" }, { "code": null, "e": 7157, "s": 7076, "text": "Assume that we want to sample from a Beta distribution. The PDF for the beta is:" }, { "code": null, "e": 7351, "s": 7157, "text": "where C is the normalizing constant. It is actually some function of α and β but I want to show how we don’t really need it to sample from a beta distribution so I am treating it as a constant." }, { "code": null, "e": 7432, "s": 7351, "text": "This is a somewhat tricky problem with the Beta Distribution if not intractable." }, { "code": null, "e": 7565, "s": 7432, "text": "In reality, you might need to work with a lot harder Distribution Functions, and sometimes you won’t know the normalizing constants." 
}, { "code": null, "e": 7810, "s": 7565, "text": "MCMC methods make life easier for us by providing us with algorithms that could create a Markov Chain which has the Beta distribution as its stationary distribution given that we can sample from a uniform distribution(which is relatively easy)." }, { "code": null, "e": 8108, "s": 7810, "text": "If we start from a random state and traverse to the next state based on some algorithm repeatedly, we will end up creating a Markov Chain which has the Beta distribution as its stationary distribution and the states we are at after a long time could be used as a sample from the Beta Distribution." }, { "code": null, "e": 8169, "s": 8108, "text": "One such MCMC Algorithm is the Metropolis-Hastings Algorithm" }, { "code": null, "e": 8193, "s": 8169, "text": "First, what’s the goal?" }, { "code": null, "e": 8458, "s": 8193, "text": "Intuitively, what we want to do is to walk around on some (lumpy) surface(our Markov chain) in such a way that the amount of time we spend in each location is proportional to the height of the surface at that location(our desired pdf from which we need to sample)." }, { "code": null, "e": 8878, "s": 8458, "text": "So, e.g., we’d like to spend twice as much time on a hilltop that’s at an altitude of 100m as we do on a nearby hill that’s at an altitude of 50m. The nice thing is that we can do this even if we don’t know the absolute heights of points on the surface: all we have to know are the relative heights. e.g., if one hilltop A is twice as high as hilltop B, then we’d like to spend twice as much time at A as we spend at B." }, { "code": null, "e": 9004, "s": 8878, "text": "There are more complicated schemes for proposing new locations and the rules for accepting them, but the basic idea is still:" }, { "code": null, "e": 9040, "s": 9004, "text": "(1) pick a new “proposed” location;" }, { "code": null, "e": 9132, "s": 9040, "text": "(2) figure out how much higher or lower that location is compared to your current location;" }, { "code": null, "e": 9289, "s": 9132, "text": "(3) probabilistically stay put or move to that location in a way that respects the overall goal of spending time proportional to the height of the location." }, { "code": null, "e": 9439, "s": 9289, "text": "The goal of MCMC is to draw samples from some probability distribution without having to know its exact height at any point(We don’t need to know C)." }, { "code": null, "e": 9607, "s": 9439, "text": "If the “wandering around” process is set up correctly, you can make sure that this proportionality (between time spent and the height of the distribution) is achieved." }, { "code": null, "e": 9652, "s": 9607, "text": "Let us define the problem more formally now." }, { "code": null, "e": 9933, "s": 9652, "text": "Let s=(s1,s2,....,sM) be the desired stationary distribution. We want to create a Markov Chain that has this stationary distribution. We start with an arbitrary Markov Chain with M states with transition matrix P, so that pij represents the probability of going from state i to j." }, { "code": null, "e": 10067, "s": 9933, "text": "Intuitively we know how to wander around this Markov Chain, but this Markov Chain does not have the required Stationary Distribution." 
}, { "code": null, "e": 10142, "s": 10067, "text": "This chain does have some stationary distribution(which is not of our use)" }, { "code": null, "e": 10267, "s": 10142, "text": "Our Goal is to change the way we wander on the this Markov Chain so that this chain has the desired Stationary distribution." }, { "code": null, "e": 10283, "s": 10267, "text": "To do this, we:" }, { "code": null, "e": 10742, "s": 10283, "text": "Start at a random initial State i.Randomly pick a new Proposal State by looking at the transition probabilities in the ith row of the transition matrix P.Compute a measure called the Acceptance Probability which is defined as: aij=min(sj.pji/si.pij,1).Now Flip a coin that lands head with probability aij. If the coin comes up heads, accept the proposal, i.e. move to next state else reject the proposal, i.e. stay at the current state.Repeat for a long time" }, { "code": null, "e": 10777, "s": 10742, "text": "Start at a random initial State i." }, { "code": null, "e": 10898, "s": 10777, "text": "Randomly pick a new Proposal State by looking at the transition probabilities in the ith row of the transition matrix P." }, { "code": null, "e": 10997, "s": 10898, "text": "Compute a measure called the Acceptance Probability which is defined as: aij=min(sj.pji/si.pij,1)." }, { "code": null, "e": 11182, "s": 10997, "text": "Now Flip a coin that lands head with probability aij. If the coin comes up heads, accept the proposal, i.e. move to next state else reject the proposal, i.e. stay at the current state." }, { "code": null, "e": 11205, "s": 11182, "text": "Repeat for a long time" }, { "code": null, "e": 11369, "s": 11205, "text": "After a long time, this chain will converge and will have a stationary distribution s. We can then use the states of the chain as the sample from any distribution." }, { "code": null, "e": 11574, "s": 11369, "text": "While doing this to sample the Beta Distribution, the only time we are using the PDF is to find the acceptance probability, and in that, we divide sj by si, i.e., the normalizing constant C gets canceled." }, { "code": null, "e": 11644, "s": 11574, "text": "Now let us move on to the problem of sampling from Beta Distribution." }, { "code": null, "e": 11742, "s": 11644, "text": "Beta Distribution is a continuous Distribution on [0,1] and it can have infinite states on [0,1]." }, { "code": null, "e": 11882, "s": 11742, "text": "Let us assume an arbitrary Markov Chain P with infinite states on [0,1] having transition Matrix P such that pij=pji=All entries in Matrix." }, { "code": null, "e": 12014, "s": 11882, "text": "We don’t need the Matrix P as we will see later, But I want to keep the problem description as close to the algorithm we suggested." }, { "code": null, "e": 12068, "s": 12014, "text": "Start at a random initial State i given by Unif(0,1)." }, { "code": null, "e": 12258, "s": 12068, "text": "Randomly pick a new Proposal State by looking at the transition probabilities in the ith row of the transition matrix P. Let us say we pick up another Unif(0,1) state as a proposal state j." }, { "code": null, "e": 12312, "s": 12258, "text": "Compute a measure called the Acceptance Probability :" }, { "code": null, "e": 12333, "s": 12312, "text": "Which simplifies to:" }, { "code": null, "e": 12358, "s": 12333, "text": "since pji=pij, and where" }, { "code": null, "e": 12543, "s": 12358, "text": "Now Flip a coin that lands head with probability aij. If the coin comes up heads, accept the proposal, i.e. 
move to next state else reject the proposal, i.e. stay at the current state." }, { "code": null, "e": 12566, "s": 12543, "text": "Repeat for a long time" }, { "code": null, "e": 12646, "s": 12566, "text": "So enough with the theory, let us Move on to python to create our Beta Sampler." }, { "code": null, "e": 12747, "s": 12646, "text": "Let us check our results of the MCMC Sampled Beta distribution against the actual beta distribution." }, { "code": null, "e": 12884, "s": 12747, "text": "As we can see, our sampled beta values closely resemble the beta distribution. And thus our MCMC Chain has reached the stationary state." }, { "code": null, "e": 13029, "s": 12884, "text": "We did create a beta sampler in the above code, but the same concept is universally applicable to any other distribution we want to sample from." }, { "code": null, "e": 13083, "s": 13029, "text": "That was a big post. Congrats if you reached the end." }, { "code": null, "e": 13322, "s": 13083, "text": "In Essence, MCMC Methods might be complicated, but they provide us with a lot of flexibility. You can sample any distribution function using MCMC Sampling. They usually are used to sample the posterior distributions at the inference time." }, { "code": null, "e": 13508, "s": 13322, "text": "You can also use MCMC to Solve problems with a large state space. For Example, Knapsack Problem Or decryption. You can take a look at some fun examples in my next blog post. Keep tuned." }, { "code": null, "e": 13646, "s": 13508, "text": "One of the newest and best resources that you can keep an eye on is the Bayesian Statistics Specialization from University of California." }, { "code": null, "e": 13924, "s": 13646, "text": "I am going to be writing more of such posts in the future too. Let me know what you think about the series. Follow me up at Medium or Subscribe to my blog to be informed about them. As always, I welcome feedback and constructive criticism and can be reached on Twitter @mlwhiz." }, { "code": null, "e": 14009, "s": 13924, "text": "Introduction to Probability Joseph K Blitzstein, Jessica HwangWikipediaStackExchange" } ]
Neo4j - Match Clause
In this chapter, we will learn about the MATCH clause and all the operations that can be performed using this clause.

Using the MATCH clause of Neo4j, you can retrieve all the nodes in the Neo4j database.

Before proceeding with the example, create 5 nodes and 4 relationships as shown below.

CREATE (Dhoni:player {name: "MahendraSingh Dhoni", YOB: 1981, POB: "Ranchi"}) 
CREATE (Ind:Country {name: "India", result: "Winners"}) 
CREATE (CT2013:Tornament {name: "ICC Champions Trophy 2013"}) 
CREATE (Ind)-[r1:WINNERS_OF {NRR:0.938 ,pts:6}]->(CT2013) 

CREATE(Dhoni)-[r2:CAPTAIN_OF]->(Ind) 
CREATE (Dhawan:player{name: "shikar Dhawan", YOB: 1995, POB: "Delhi"}) 
CREATE (Jadeja:player {name: "Ravindra Jadeja", YOB: 1988, POB: "NavagamGhed"}) 

CREATE (Dhawan)-[:TOP_SCORER_OF {Runs:363}]->(Ind) 
CREATE (Jadeja)-[:HIGHEST_WICKET_TAKER_OF {Wickets:12}]->(Ind)

Following is the query which returns all the nodes in the Neo4j database.

MATCH (n) RETURN n

To execute the above query, carry out the following steps −

Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot.

Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.

On executing, you will get the following result.

Using the MATCH clause, you can get all the nodes under a specific label.

Following is the syntax to get all the nodes under a specific label.

MATCH (node:label) 
RETURN node

Following is a sample Cypher query, which returns all the nodes in the database under the label player.

MATCH (n:player) 
RETURN n

To execute the above query, carry out the following steps −

Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot.

Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.

On executing, you will get the following result.

You can retrieve nodes based on a relationship using the MATCH clause.

Following is the syntax for retrieving nodes based on a relationship using the MATCH clause.

MATCH (node:label)<-[:Relationship]-(n) 
RETURN n

Following is a sample Cypher query to retrieve nodes based on a relationship using the MATCH clause.

MATCH (Ind:Country {name: "India", result: "Winners"})<-[:TOP_SCORER_OF]-(n) 
RETURN n.name

To execute the above query, carry out the following steps −

Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot.

Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.

On executing, you will get the following result.

You can delete all the nodes using the MATCH clause.

Following is the query to delete all the nodes in Neo4j.

MATCH (n) DETACH DELETE n

To execute the above query, carry out the following steps −

Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot.

Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.

On executing, you will get the following result.
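If you prefer to run these MATCH queries from code instead of the browser, the following sketch uses the official Neo4j Python driver. The connection URI, username, and password shown here are placeholder assumptions for a default local installation; replace them with your own.

from neo4j import GraphDatabase

# Placeholder credentials for a local server; adjust to your setup
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Run the same MATCH query shown above and print each player's name
    result = session.run("MATCH (n:player) RETURN n.name AS name")
    for record in result:
        print(record["name"])

driver.close()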
[ { "code": null, "e": 2452, "s": 2339, "text": "In this chapter, we will learn about Match Clause and all the functions that can be performed using this clause." }, { "code": null, "e": 2534, "s": 2452, "text": "Using the MATCH clause of Neo4j you can retrieve all nodes in the Neo4j database." }, { "code": null, "e": 2621, "s": 2534, "text": "Before proceeding with the example, create 3 nodes and 2 relationships as shown below." }, { "code": null, "e": 3190, "s": 2621, "text": "CREATE (Dhoni:player {name: \"MahendraSingh Dhoni\", YOB: 1981, POB: \"Ranchi\"}) \nCREATE (Ind:Country {name: \"India\", result: \"Winners\"}) \nCREATE (CT2013:Tornament {name: \"ICC Champions Trophy 2013\"}) \nCREATE (Ind)-[r1:WINNERS_OF {NRR:0.938 ,pts:6}]->(CT2013) \n\nCREATE(Dhoni)-[r2:CAPTAIN_OF]->(Ind) \nCREATE (Dhawan:player{name: \"shikar Dhawan\", YOB: 1995, POB: \"Delhi\"}) \nCREATE (Jadeja:player {name: \"Ravindra Jadeja\", YOB: 1988, POB: \"NavagamGhed\"}) \n\nCREATE (Dhawan)-[:TOP_SCORER_OF {Runs:363}]->(Ind) \nCREATE (Jadeja)-[:HIGHEST_WICKET_TAKER_OF {Wickets:12}]->(Ind) " }, { "code": null, "e": 3260, "s": 3190, "text": "Following is the query which returns all the nodes in Neo4j database." }, { "code": null, "e": 3281, "s": 3260, "text": "MATCH (n) RETURN n \n" }, { "code": null, "e": 3341, "s": 3281, "text": "To execute the above query, carry out the following steps −" }, { "code": null, "e": 3519, "s": 3341, "text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot." }, { "code": null, "e": 3672, "s": 3519, "text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot." }, { "code": null, "e": 3721, "s": 3672, "text": "On executing, you will get the following result." }, { "code": null, "e": 3791, "s": 3721, "text": "Using match clause, you can get all the nodes under a specific label." }, { "code": null, "e": 3860, "s": 3791, "text": "Following is the syntax to get all the nodes under a specific label." }, { "code": null, "e": 3894, "s": 3860, "text": "MATCH (node:label) \nRETURN node \n" }, { "code": null, "e": 3998, "s": 3894, "text": "Following is a sample Cypher Query, which returns all the nodes in the database under the label player." }, { "code": null, "e": 4027, "s": 3998, "text": "MATCH (n:player) \nRETURN n \n" }, { "code": null, "e": 4087, "s": 4027, "text": "To execute the above query, carry out the following steps −" }, { "code": null, "e": 4265, "s": 4087, "text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot." }, { "code": null, "e": 4418, "s": 4265, "text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot." }, { "code": null, "e": 4467, "s": 4418, "text": "On executing, you will get the following result." }, { "code": null, "e": 4536, "s": 4467, "text": "You can retrieve nodes based on relationship using the MATCH clause." }, { "code": null, "e": 4630, "s": 4536, "text": "Following is the syntax of retrieving nodes based on the relationship using the MATCH clause." 
}, { "code": null, "e": 4683, "s": 4630, "text": "MATCH (node:label)<-[: Relationship]-(n) \nRETURN n \n" }, { "code": null, "e": 4782, "s": 4683, "text": "Following is a sample Cypher Query to retrieve nodes based on relationship using the MATCH clause." }, { "code": null, "e": 4876, "s": 4782, "text": "MATCH (Ind:Country {name: \"India\", result: \"Winners\"})<-[: TOP_SCORER_OF]-(n) \nRETURN n.name " }, { "code": null, "e": 4936, "s": 4876, "text": "To execute the above query, carry out the following steps −" }, { "code": null, "e": 5114, "s": 4936, "text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot." }, { "code": null, "e": 5267, "s": 5114, "text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot." }, { "code": null, "e": 5316, "s": 5267, "text": "On executing, you will get the following result." }, { "code": null, "e": 5369, "s": 5316, "text": "You can delete all the nodes using the MATCH clause." }, { "code": null, "e": 5426, "s": 5369, "text": "Following is the query to delete all the nodes in Neo4j." }, { "code": null, "e": 5454, "s": 5426, "text": "MATCH (n) detach delete n \n" }, { "code": null, "e": 5514, "s": 5454, "text": "To execute the above query, carry out the following steps −" }, { "code": null, "e": 5692, "s": 5514, "text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot." }, { "code": null, "e": 5845, "s": 5692, "text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot." }, { "code": null, "e": 5894, "s": 5845, "text": "On executing, you will get the following result." }, { "code": null, "e": 5901, "s": 5894, "text": " Print" }, { "code": null, "e": 5912, "s": 5901, "text": " Add Notes" } ]
Find smallest and largest elements in singly linked list in C++
In this problem, we are given a singly linked list. Our task is to find the smallest and largest elements in the singly linked list.

Let's take an example to understand the problem,

linked List : 5 -> 2 -> 7 -> 3 -> 9 -> 1 -> 4

Smallest element = 1
Largest element = 9

A simple solution to the problem is to traverse the linked list node by node. Before the traversal, we initialise maxElement to INT_MIN and minElement to INT_MAX. Then we traverse the linked list element by element, compare the value of the current node with maxElement, and store the greater value in the maxElement variable. We do the same to keep the smaller value in minElement. When the traversal is done, we print both values.

Program to illustrate the working of our solution,

#include <bits/stdc++.h>
using namespace std;

// Node of the singly linked list
struct Node {
    int data;
    struct Node* next;
};

// Traverse the list once, tracking the smallest and largest values
void printLargestSmallestLinkedList(struct Node* head) {
    int maxElement = INT_MIN;
    int minElement = INT_MAX;
    while (head != NULL) {
        if (minElement > head->data)
            minElement = head->data;
        if (maxElement < head->data)
            maxElement = head->data;
        head = head->next;
    }
    cout << "Smallest element in the linked list is : " << minElement << endl;
    cout << "Largest element in the linked list is : " << maxElement << endl;
}

// Insert a new node at the beginning of the list
void push(struct Node** head, int data) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = (*head);
    (*head) = newNode;
}

int main() {
    struct Node* head = NULL;
    push(&head, 5);
    push(&head, 2);
    push(&head, 7);
    push(&head, 3);
    push(&head, 9);
    push(&head, 1);
    push(&head, 4);
    printLargestSmallestLinkedList(head);
    return 0;
}

Smallest element in the linked list is : 1
Largest element in the linked list is : 9
[ { "code": null, "e": 1191, "s": 1062, "text": "In this problem, we are given a singly linked list. Our task is to find the\nsmallest and largest elements in single linked list." }, { "code": null, "e": 1240, "s": 1191, "text": "Let’s take an example to understand the problem," }, { "code": null, "e": 1285, "s": 1240, "text": "linked List : 5 -> 2 -> 7 -> 3 ->9 -> 1 -> 4" }, { "code": null, "e": 1326, "s": 1285, "text": "Smallest element = 1\nLargest element = 9" }, { "code": null, "e": 1804, "s": 1326, "text": "A simple solution to the problem is using by traversing the linked list node\nby node. Prior to this, we will initialise maxElement and minElement to the\nvalue of the first element i.e. head -> data. Then we will traverse the linked\nlist element by element. And then compare the value of the current node with maxElement and store the greater value in maxElement variable. Perform the same to store smaller values in minElement. When the traversal is done print both the values." }, { "code": null, "e": 1855, "s": 1804, "text": "Program to illustrate the working of our solution," }, { "code": null, "e": 1866, "s": 1855, "text": " Live Demo" }, { "code": null, "e": 2837, "s": 1866, "text": "#include <bits/stdc++.h>\nusing namespace std;\nstruct Node {\n int data;\n struct Node* next;\n};\nvoid printLargestSmallestLinkedList(struct Node* head) {\n int maxElement = INT_MIN;\n int minElement = INT_MAX;\n while (head != NULL) {\n if (minElement > head->data)\n minElement = head->data;\n if (maxElement < head->data)\n maxElement = head->data;\n head = head->next;\n }\n cout<<\"Smallest element in the linked list is : \"<<minElement<<endl;\n cout<<\"Largest element in the linked list is : \"<<maxElement<<endl;\n}\nvoid push(struct Node** head, int data) {\n struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));\n newNode->data = data;\n newNode->next = (*head);\n (*head) = newNode;\n}\nint main() {\n struct Node* head = NULL;\n push(&head, 5);\n push(&head, 2);\n push(&head, 7);\n push(&head, 3);\n push(&head, 9);\n push(&head, 1);\n push(&head, 4);\n printLargestSmallestLinkedList(head);\n return 0;\n}" }, { "code": null, "e": 2922, "s": 2837, "text": "Smallest element in the linked list is : 1\nLargest element in the linked list is : 9" } ]
Collections in C#
22 Apr, 2020

Collections standardize the way in which objects are handled by your program. In other words, the collections framework contains a set of classes that hold elements in a generalized manner. With the help of collections, the user can perform several operations on objects such as store, update, delete, retrieve, search, and sort.

C# divides collections into several classes; some of the common classes are described below:

Generic collections in C# are defined in the System.Collections.Generic namespace. This namespace provides generic implementations of standard data structures like linked lists, stacks, queues, and dictionaries. These collections are type-safe because they are generic, meaning only those items that are type-compatible with the type of the collection can be stored in it; this eliminates accidental type mismatches. Generic collections are defined by a set of interfaces and classes. The frequently used classes of the System.Collections.Generic namespace include List<T>, Dictionary<TKey, TValue>, SortedList<TKey, TValue>, Queue<T>, Stack<T>, LinkedList<T>, and HashSet<T>.

Example:

// C# program to illustrate the concept
// of generic collection using List<T>
using System;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List of integers
        List<int> mylist = new List<int>();

        // adding items in mylist
        for (int j = 5; j < 10; j++) {
            mylist.Add(j * 3);
        }

        // Displaying items of mylist
        // by using foreach loop
        foreach(int items in mylist)
        {
            Console.WriteLine(items);
        }
    }
}

15
18
21
24
27

Non-generic collections in C# are defined in the System.Collections namespace. A non-generic collection is a general-purpose data structure that works on object references, so it can handle any type of object, but not in a type-safe manner. Non-generic collections are defined by a set of interfaces and classes. The frequently used classes of the System.Collections namespace include ArrayList, Hashtable, Queue, Stack, SortedList, and BitArray.

Example:

// C# to illustrate the concept
// of non-generic collection using Queue
using System;
using System.Collections;

class GFG {

    // Driver code
    public static void Main()
    {
        // Creating a Queue
        Queue myQueue = new Queue();

        // Inserting the elements into the Queue
        myQueue.Enqueue("C#");
        myQueue.Enqueue("PHP");
        myQueue.Enqueue("Perl");
        myQueue.Enqueue("Java");
        myQueue.Enqueue("C");

        // Displaying the count of elements
        // contained in the Queue
        Console.Write("Total number of elements present in the Queue are: ");
        Console.WriteLine(myQueue.Count);

        // Displaying the beginning element of Queue
        Console.WriteLine("Beginning Item is: " + myQueue.Peek());
    }
}

Total number of elements present in the Queue are: 5
Beginning Item is: C#

Note: C# also provides some specialized collections that are optimized to work on specific types of data; these specialized collections are found in the System.Collections.Specialized namespace.

Concurrent collections came in .NET Framework Version 4 and onwards, in the System.Collections.Concurrent namespace. This namespace provides various thread-safe collection classes that are used in place of the corresponding types in the System.Collections and System.Collections.Generic namespaces when multiple threads are accessing the collection simultaneously. The classes present in this namespace include ConcurrentDictionary<TKey, TValue>, ConcurrentQueue<T>, ConcurrentStack<T>, ConcurrentBag<T>, and BlockingCollection<T>.
[ { "code": null, "e": 25306, "s": 25278, "text": "\n22 Apr, 2020" }, { "code": null, "e": 25619, "s": 25306, "text": "Collections standardize the way of which the objects are handled by your program. In other words, it contains a set of classes to contain elements in a generalized manner. With the help of collections, the user can perform several operations on objects like the store, update, delete, retrieve, search, sort etc." }, { "code": null, "e": 25704, "s": 25619, "text": "C# divide collection in several classes, some of the common classes are shown below:" }, { "code": null, "e": 26279, "s": 25704, "text": "Generic collection in C# is defined in System.Collection.Generic namespace. It provides a generic implementation of standard data structure like linked lists, stacks, queues, and dictionaries. These collections are type-safe because they are generic means only those items that are type-compatible with the type of the collection can be stored in a generic collection, it eliminates accidental type mismatches. Generic collections are defined by the set of interfaces and classes. Below table contains the frequently used classes of the System.Collections.Generic namespace:" }, { "code": null, "e": 26288, "s": 26279, "text": "Example:" }, { "code": "// C# program to illustrate the concept // of generic collection using List<T>using System;using System.Collections.Generic; class Geeks { // Main Method public static void Main(String[] args) { // Creating a List of integers List<int> mylist = new List<int>(); // adding items in mylist for (int j = 5; j < 10; j++) { mylist.Add(j * 3); } // Displaying items of mylist // by using foreach loop foreach(int items in mylist) { Console.WriteLine(items); } }}", "e": 26858, "s": 26288, "text": null }, { "code": null, "e": 26874, "s": 26858, "text": "15\n18\n21\n24\n27\n" }, { "code": null, "e": 27247, "s": 26874, "text": "Non-Generic collection in C# is defined in System.Collections namespace. It is a general-purpose data structure that works on object references, so it can handle any type of object, but not in a safe-type manner. Non-generic collections are defined by the set of interfaces and classes. Below table contains the frequently used classes of the System.Collections namespace:" }, { "code": null, "e": 27256, "s": 27247, "text": "Example:" }, { "code": "// C# to illustrate the concept// of non-generic collection using Queueusing System;using System.Collections; class GFG { // Driver code public static void Main() { // Creating a Queue Queue myQueue = new Queue(); // Inserting the elements into the Queue myQueue.Enqueue(\"C#\"); myQueue.Enqueue(\"PHP\"); myQueue.Enqueue(\"Perl\"); myQueue.Enqueue(\"Java\"); myQueue.Enqueue(\"C\"); // Displaying the count of elements // contained in the Queue Console.Write(\"Total number of elements present in the Queue are: \"); Console.WriteLine(myQueue.Count); // Displaying the beginning element of Queue Console.WriteLine(\"Beginning Item is: \" + myQueue.Peek()); }}", "e": 28026, "s": 27256, "text": null }, { "code": null, "e": 28102, "s": 28026, "text": "Total number of elements present in the Queue are: 5\nBeginning Item is: C#\n" }, { "code": null, "e": 28297, "s": 28102, "text": "Note: C# also provides some specialized collection that is optimized to work on a specific type of data type and the specialized collection are found in System.Collections.Specialized namespace." }, { "code": null, "e": 28632, "s": 28297, "text": "It came in .NET Framework Version 4 and onwards. 
It provides various threads-safe collection classes that are used in the place of the corresponding types in the System.Collections and System.Collections.Generic namespaces, when multiple threads are accessing the collection simultaneously. The classes present in this collection are:" }, { "code": null, "e": 28643, "s": 28632, "text": "sknagadiya" }, { "code": null, "e": 28662, "s": 28643, "text": "CSharp-Collections" }, { "code": null, "e": 28665, "s": 28662, "text": "C#" }, { "code": null, "e": 28763, "s": 28665, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28772, "s": 28763, "text": "Comments" }, { "code": null, "e": 28785, "s": 28772, "text": "Old Comments" }, { "code": null, "e": 28839, "s": 28785, "text": "Difference between Abstract Class and Interface in C#" }, { "code": null, "e": 28867, "s": 28839, "text": "C# | IsNullOrEmpty() Method" }, { "code": null, "e": 28929, "s": 28867, "text": "C# | How to check whether a List contains a specified element" }, { "code": null, "e": 28971, "s": 28929, "text": "String.Split() Method in C# with Examples" }, { "code": null, "e": 28994, "s": 28971, "text": "C# | Arrays of Strings" }, { "code": null, "e": 29040, "s": 28994, "text": "Difference between Ref and Out keywords in C#" }, { "code": null, "e": 29063, "s": 29040, "text": "Extension Method in C#" }, { "code": null, "e": 29103, "s": 29063, "text": "C# | String.IndexOf( ) Method | Set - 1" }, { "code": null, "e": 29125, "s": 29103, "text": "C# | Abstract Classes" } ]
List containsAll() method in Java with Examples
11 Dec, 2018

The containsAll() method of the List interface in Java is used to check if this List contains all of the elements in the specified Collection. So basically, it is used to check if a List contains a set of elements or not.

Syntax:

boolean containsAll(Collection col)

Parameters: This method accepts a mandatory parameter col of type Collection. This is the collection whose elements are to be checked for presence in the List.

Return Value: The method returns True if all elements in the collection col are present in the List; otherwise it returns False.

Exception: The method throws NullPointerException if the specified collection is NULL.

Below programs illustrate the containsAll() method:

Program 1:

// Java code to illustrate containsAll() method
import java.util.*;

public class ListDemo {
    public static void main(String args[])
    {
        // Creating an empty list
        List<String> list = new ArrayList<String>();

        // Use add() method to add elements
        // into the List
        list.add("Welcome");
        list.add("To");
        list.add("Geeks");
        list.add("4");
        list.add("Geeks");

        // Displaying the List
        System.out.println("List: " + list);

        // Creating another empty List
        List<String> listTemp = new ArrayList<String>();

        listTemp.add("Geeks");
        listTemp.add("4");
        listTemp.add("Geeks");

        System.out.println("Are all the contents equal? "
                           + list.containsAll(listTemp));
    }
}

List: [Welcome, To, Geeks, 4, Geeks]
Are all the contents equal? true

Program 2:

// Java code to illustrate containsAll() method
import java.util.*;

public class ListDemo {
    public static void main(String args[])
    {
        // Creating an empty list
        List<Integer> list = new ArrayList<Integer>();

        // Use add() method to add elements
        // into the List
        list.add(10);
        list.add(15);
        list.add(30);
        list.add(20);
        list.add(5);

        // Displaying the List
        System.out.println("List: " + list);

        // Creating another empty List
        List<Integer> listTemp = new ArrayList<Integer>();

        listTemp.add(30);
        listTemp.add(15);
        listTemp.add(5);

        System.out.println("Are all the contents equal? "
                           + list.containsAll(listTemp));
    }
}

List: [10, 15, 30, 20, 5]
Are all the contents equal? true

Reference: https://docs.oracle.com/javase/7/docs/api/java/util/List.html#containsAll(java.util.Collection)
[ { "code": null, "e": 24493, "s": 24465, "text": "\n11 Dec, 2018" }, { "code": null, "e": 24710, "s": 24493, "text": "The containsAll() method of List interface in Java is used to check if this List contains all of the elements in the specified Collection. So basically it is used to check if a List contains a set of elements or not." }, { "code": null, "e": 24718, "s": 24710, "text": "Syntax:" }, { "code": null, "e": 24754, "s": 24718, "text": "boolean containsAll(Collection col)" }, { "code": null, "e": 24948, "s": 24754, "text": "Parameters: This method accepts a mandatory parameter col which is of the type of collection. This is the collection whose elements are needed to be checked if it is present in the List or not." }, { "code": null, "e": 25076, "s": 24948, "text": "Return Value: The method returns True if all elements in the collection col are present in the List otherwise it returns False." }, { "code": null, "e": 25163, "s": 25076, "text": "Exception: The method throws NullPointerException if the specified collection is NULL." }, { "code": null, "e": 25216, "s": 25163, "text": "Below programs illustrates the containsAll() method:" }, { "code": null, "e": 25227, "s": 25216, "text": "Program 1:" }, { "code": "// Java code to illustrate containsAll() methodimport java.util.*; public class ListDemo { public static void main(String args[]) { // Creating an empty list List<String> list = new ArrayList<String>(); // Use add() method to add elements // into the List list.add(\"Welcome\"); list.add(\"To\"); list.add(\"Geeks\"); list.add(\"4\"); list.add(\"Geeks\"); // Displaying the List System.out.println(\"List: \" + list); // Creating another empty List List<String> listTemp = new ArrayList<String>(); listTemp.add(\"Geeks\"); listTemp.add(\"4\"); listTemp.add(\"Geeks\"); System.out.println(\"Are all the contents equal? \" + list.containsAll(listTemp)); }}", "e": 26027, "s": 25227, "text": null }, { "code": null, "e": 26098, "s": 26027, "text": "List: [Welcome, To, Geeks, 4, Geeks]\nAre all the contents equal? true\n" }, { "code": null, "e": 26109, "s": 26098, "text": "Program 2:" }, { "code": "// Java code to illustrate containsAll() methodimport java.util.*; public class ListDemo { public static void main(String args[]) { // Creating an empty list List<Integer> list = new ArrayList<Integer>(); // Use add() method to add elements // into the List list.add(10); list.add(15); list.add(30); list.add(20); list.add(5); // Displaying the List System.out.println(\"List: \" + list); // Creating another empty List List<Integer> listTemp = new ArrayList<Integer>(); listTemp.add(30); listTemp.add(15); listTemp.add(5); System.out.println(\"Are all the contents equal? \" + list.containsAll(listTemp)); }}", "e": 26880, "s": 26109, "text": null }, { "code": null, "e": 26940, "s": 26880, "text": "List: [10, 15, 30, 20, 5]\nAre all the contents equal? 
true\n" }, { "code": null, "e": 27047, "s": 26940, "text": "Reference: https://docs.oracle.com/javase/7/docs/api/java/util/List.html#containsAll(java.util.Collection)" }, { "code": null, "e": 27064, "s": 27047, "text": "Java-Collections" }, { "code": null, "e": 27079, "s": 27064, "text": "Java-Functions" }, { "code": null, "e": 27089, "s": 27079, "text": "java-list" }, { "code": null, "e": 27094, "s": 27089, "text": "Java" }, { "code": null, "e": 27099, "s": 27094, "text": "Java" }, { "code": null, "e": 27116, "s": 27099, "text": "Java-Collections" }, { "code": null, "e": 27214, "s": 27116, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27223, "s": 27214, "text": "Comments" }, { "code": null, "e": 27236, "s": 27223, "text": "Old Comments" }, { "code": null, "e": 27266, "s": 27236, "text": "HashMap in Java with Examples" }, { "code": null, "e": 27298, "s": 27266, "text": "Initialize an ArrayList in Java" }, { "code": null, "e": 27317, "s": 27298, "text": "Interfaces in Java" }, { "code": null, "e": 27335, "s": 27317, "text": "ArrayList in Java" }, { "code": null, "e": 27367, "s": 27335, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 27387, "s": 27367, "text": "Stack Class in Java" }, { "code": null, "e": 27411, "s": 27387, "text": "Singleton Class in Java" }, { "code": null, "e": 27430, "s": 27411, "text": "LinkedList in Java" }, { "code": null, "e": 27450, "s": 27430, "text": "Collections in Java" } ]
Create Toast Message in Java Swing
A toast message is an alert which disappears automatically after some time. With JDK 7, we can create a toast message similar to an Android toast very easily. Following are the steps needed to make a toast message.

Make a rounded-rectangle-shaped frame. Add a component listener to the frame and override componentResized() to change the shape of the frame. This method recalculates the shape of the frame correctly whenever the window size is changed.

frame.addComponentListener(new ComponentAdapter() {
   @Override
   public void componentResized(ComponentEvent e) {
      frame.setShape(new RoundRectangle2D.Double(0,0,frame.getWidth(),
         frame.getHeight(), 20, 20));
   }
});

During display, initially show the frame, then hide it slowly by moving the opacity towards 0.

for (double d = 1.0; d > 0.2; d -= 0.1) {
   Thread.sleep(100);
   setOpacity((float)d);
}

See the example below of a toast message window.

import java.awt.Color;
import java.awt.GridBagLayout;
import java.awt.event.ComponentAdapter;
import java.awt.event.ComponentEvent;
import java.awt.geom.RoundRectangle2D;

import javax.swing.JFrame;
import javax.swing.JLabel;

public class Tester {
   public static void main(String[] args) {
      ToastMessage message = new ToastMessage("Welcome to TutorialsPoint.Com");
      message.display();
   }
}

class ToastMessage extends JFrame {
   public ToastMessage(final String message) {
      setUndecorated(true);
      setLayout(new GridBagLayout());
      setBackground(new Color(240,240,240,250));
      setLocationRelativeTo(null);
      setSize(300, 50);
      add(new JLabel(message));

      addComponentListener(new ComponentAdapter() {
         @Override
         public void componentResized(ComponentEvent e) {
            setShape(new RoundRectangle2D.Double(0,0,getWidth(),
               getHeight(), 20, 20));
         }
      });
   }

   public void display() {
      try {
         setOpacity(1);
         setVisible(true);
         Thread.sleep(2000);

         //hide the toast message in slow motion
         for (double d = 1.0; d > 0.2; d -= 0.1) {
            Thread.sleep(100);
            setOpacity((float)d);
         }

         // set the visibility to false
         setVisible(false);
      } catch (Exception e) {
         System.out.println(e.getMessage());
      }
   }
}
[ { "code": null, "e": 1274, "s": 1062, "text": "A toast message is an alert which disappears with time automatically. With JDK 7, we can create a toast message similar to an alert on android very easily. Following are the steps needed to make a toast message." }, { "code": null, "e": 1512, "s": 1274, "text": "Make a rounded rectangle shaped frame. Add a component listener to frame and override the componentResized() to change the shape of the frame. This method recalculates the shape of the frame correctly whenever the window size is changed." }, { "code": null, "e": 1745, "s": 1512, "text": "frame.addComponentListener(new ComponentAdapter() {\n @Override\n public void componentResized(ComponentEvent e) {\n frame.setShape(new RoundRectangle2D.Double(0,0,frame.getWidth(),\n frame.getHeight(), 20, 20));\n }\n});" }, { "code": null, "e": 1839, "s": 1745, "text": "During display, initially show the frame then hide it slowly by making the opacity towards 0." }, { "code": null, "e": 1930, "s": 1839, "text": "for (double d = 1.0; d > 0.2; d -= 0.1) {\n Thread.sleep(100);\n setOpacity((float)d);\n}" }, { "code": null, "e": 1979, "s": 1930, "text": "See the example below of a toast message window." }, { "code": null, "e": 3423, "s": 1979, "text": "import java.awt.Color;\nimport java.awt.GridBagLayout;\nimport java.awt.event.ComponentAdapter;\nimport java.awt.event.ComponentEvent;\nimport java.awt.geom.RoundRectangle2D;\n\nimport javax.swing.JFrame;\nimport javax.swing.JLabel;\n\npublic class Tester {\n public static void main(String[] args) {\n ToastMessage message = new ToastMessage(\"Welcome to TutorialsPoint.Com\");\n message.display();\n }\n}\n\nclass ToastMessage extends JFrame {\n public ToastMessage(final String message) {\n setUndecorated(true);\n setLayout(new GridBagLayout());\n setBackground(new Color(240,240,240,250));\n setLocationRelativeTo(null);\n setSize(300, 50);\n add(new JLabel(message));\n \n addComponentListener(new ComponentAdapter() {\n @Override\n public void componentResized(ComponentEvent e) {\n setShape(new RoundRectangle2D.Double(0,0,getWidth(),\n getHeight(), 20, 20)); \n }\n }); \n }\n\n public void display() {\n try {\n setOpacity(1);\n setVisible(true);\n Thread.sleep(2000);\n\n //hide the toast message in slow motion\n for (double d = 1.0; d > 0.2; d -= 0.1) {\n Thread.sleep(100);\n setOpacity((float)d);\n }\n\n // set the visibility to false\n setVisible(false);\n }catch (Exception e) {\n System.out.println(e.getMessage());\n }\n }\n}" } ]
Can we override final methods in Java?
Overriding is one of the mechanisms to achieve polymorphism. This is the case when we have two classes, one of which inherits the properties of the other using the extends keyword, and both classes define the same method, with the same parameters and return type (say, sample).

Since this is inheritance, if we instantiate the subclass, a copy of the superclass's members is created in the subclass object, and thus both methods are available to the subclass.

When we invoke this method (sample), the JVM calls the respective method based on the object used to call the method.

No, you cannot override a final method in Java. If you try to do so, it generates a compile-time error. For example:

class Super{
   public final void demo() {
      System.out.println("This is the method of the superclass");
   }
}
class Sub extends Super{
   public final void demo() {
      System.out.println("This is the method of the subclass");
   }
}

Sub.java:7: error: demo() in Sub cannot override demo() in Super
   public final void demo() {
   ^
   overridden method is final
1 error

If you try to compile the same program in Eclipse, you will get the following error −
[ { "code": null, "e": 1326, "s": 1062, "text": "Overriding is a one of the mechanisms to achieve polymorphism. This is the case when we have two classes where, one inherits the properties of another using the extends keyword and, these two classes same method including parameters and return type (say, sample)." }, { "code": null, "e": 1501, "s": 1326, "text": "Since it is inheritance. If we instantiate the subclass a copy of superclass’s members is created in the subclass object and, thus both methods are available to the subclass." }, { "code": null, "e": 1614, "s": 1501, "text": "When we invoke this method (sample) JVM calls the respective method based on the object used to call the method." }, { "code": null, "e": 1722, "s": 1614, "text": "No, you cannot override final method in java. If you try to do so, it generates a compile time error saying" }, { "code": null, "e": 1964, "s": 1722, "text": "class Super{\n public final void demo() {\n System.out.println(\"This is the method of the superclass\");\n }\n}\nclass Sub extends Super{\n public final void demo() {\n System.out.println(\"This is the method of the subclass\");\n }\n}" }, { "code": null, "e": 2109, "s": 1964, "text": "Sub.java:7: error: demo() in Sub cannot override demo() in Super\n public final void demo() {\n ^\n overridden method is final\n1 error" }, { "code": null, "e": 2194, "s": 2109, "text": "If you try to compile the same program in eclipse you will get the following error −" } ]
Short-circuit evaluation in JavaScript
Short-circuit evaluation works with the && and || logical operators. The expressions are evaluated left to right.

For the && operator − Short-circuit evaluation with the && (AND) logical operator means that if the first expression evaluates to false, the whole expression will be false and the rest of the expressions will not be evaluated.

For the || operator − Short-circuit evaluation with the || (OR) logical operator means that if the first expression evaluates to true, the whole expression will be true and the rest of the expressions will not be evaluated.

Following is the code for short-circuit evaluation in JavaScript −

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
   body {
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
   }
   .result {
      font-size: 20px;
      font-weight: 500;
      color: blueviolet;
   }
</style>
</head>
<body>
<h1>Short-circuit evaluation</h1>
<div class="result"></div>
<br />
<button class="Btn">&& short circuit evaluation</button>
<button class="Btn">|| short circuit evaluation</button>
<h3>Click on the above buttons to see the short circuit evaluation</h3>
<script>
   let resEle = document.querySelector(".result");
   let BtnEle = document.querySelectorAll(".Btn");
   function retTrue() {
      resEle.innerHTML += "True <br>";
      return true;
   }
   function retFalse() {
      resEle.innerHTML += "False <br>";
      return false;
   }
   BtnEle[0].addEventListener("click", () => {
      resEle.innerHTML = "&& evaluation<br>";
      retFalse() && retTrue();
   });
   BtnEle[1].addEventListener("click", () => {
      resEle.innerHTML = "|| evaluation <br>";
      retTrue() || retFalse();
   });
</script>
</body>
</html>

On clicking the "&& short circuit evaluation" button −

On clicking the "|| short circuit evaluation" button −
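The demo above shows the effect inside a page; stripped of the DOM wiring, the same behavior can be seen in a few lines. This is a minimal sketch, and the function names t and f are made up for illustration:

function t() { console.log("t called"); return true; }
function f() { console.log("f called"); return false; }

f() && t();   // logs only "f called" -- t() is never evaluated
t() || f();   // logs only "t called" -- f() is never evaluated

// Short-circuiting is also commonly used to supply default values:
const name = "" || "anonymous";   // "anonymous", since "" is falsy
console.log(name);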
[ { "code": null, "e": 1186, "s": 1062, "text": "Short circuit evaluation basically works with the && and || logical operators. The expressions\nare evaluated left to right." }, { "code": null, "e": 1399, "s": 1186, "text": "For && operator − Short circuit evaluation with &&(AND) logical operator means if the first expression evaluates to false then whole expression will be false and the rest of the expressions will not be evaluated." }, { "code": null, "e": 1613, "s": 1399, "text": "For || operator − Short circuit evaluation with ||(OR) logical operator means if the first expression evaluates to true then the whole expression will be true and the rest of the expressions will not be evaluated." }, { "code": null, "e": 1680, "s": 1613, "text": "Following is the code for short circuit evaluation in JavaScript −" }, { "code": null, "e": 1691, "s": 1680, "text": " Live Demo" }, { "code": null, "e": 2892, "s": 1691, "text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>Document</title>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .result {\n font-size: 20px;\n font-weight: 500;\n color: blueviolet;\n }\n</style>\n</head>\n<body>\n<h1>Short-circuit evaluation</h1>\n<div class=\"result\"></div>\n<br />\n<button class=\"Btn\">&& short circuit evaluation</button>\n<button class=\"Btn\">|| short circuit evaluation</button>\n<h3>Click on the above button to see the the short circuit evaluation</h3>\n<script>\n let resEle = document.querySelector(\".result\");\n let BtnEle = document.querySelectorAll(\".Btn\");\n function retTrue() {\n resEle.innerHTML += \"True <br>\";\n return true;\n }\n function retFalse() {\n resEle.innerHTML += \"False <br>\";\n return false;\n }\n BtnEle[0].addEventListener(\"click\", () => {\n resEle.innerHTML = \"&& evaluation<br>\";\n retFalse() && retTrue();\n });\n BtnEle[1].addEventListener(\"click\", () => {\n resEle.innerHTML = \"|| evaluation <br>\";\n retTrue() || retFalse();\n });\n</script>\n</body>\n</html>" }, { "code": null, "e": 2947, "s": 2892, "text": "On clicking the “&& short circuit evaluation” button −" }, { "code": null, "e": 3002, "s": 2947, "text": "On clicking the ‘|| short circuit evaluation” button −" } ]
C library function - vfprintf()
The C library function int vfprintf(FILE *stream, const char *format, va_list arg) sends formatted output to a stream using an argument list passed to it.

Following is the declaration for the vfprintf() function.

int vfprintf(FILE *stream, const char *format, va_list arg)

stream − This is the pointer to a FILE object that identifies the stream.

format − This is the C string that contains the text to be written to the stream. It can optionally contain embedded format tags that are replaced by the values specified in subsequent additional arguments and formatted as requested. Format tags prototype: %[flags][width][.precision][length]specifier, as explained below −

specifier:

c − Character
d or i − Signed decimal integer
e − Scientific notation (mantissa/exponent) using the e character
E − Scientific notation (mantissa/exponent) using the E character
f − Decimal floating point
g − Uses the shorter of %e or %f
G − Uses the shorter of %E or %F
o − Unsigned octal
s − String of characters
u − Unsigned decimal integer
x − Unsigned hexadecimal integer
X − Unsigned hexadecimal integer (capital letters)
p − Pointer address
n − Nothing printed; the number of characters written so far is stored into the int pointed to by the argument
% − A %% in the format string writes a single % character

flags:

- − Left-justify within the given field width; right justification is the default (see width sub-specifier).
+ − Forces to precede the result with a plus or minus sign (+ or -) even for positive numbers. By default, only negative numbers are preceded with a - sign.
(space) − If no sign is going to be written, a blank space is inserted before the value.
# − Used with o, x or X specifiers, the value is preceded with 0, 0x or 0X respectively for values different than zero. Used with e, E and f, it forces the written output to contain a decimal point even if no digits would follow; by default, if no digits follow, no decimal point is written. Used with g or G, the result is the same as with e or E but trailing zeros are not removed.
0 − Left-pads the number with zeroes (0) instead of spaces, where padding is specified (see width sub-specifier).

width:

(number) − Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger.
* − The width is not specified in the format string, but as an additional integer value argument preceding the argument that has to be formatted.

.precision:

.number − For integer specifiers (d, i, o, u, x, X), precision specifies the minimum number of digits to be written; if the value to be written is shorter than this number, the result is padded with leading zeros, and the value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0. For the e, E and f specifiers, this is the number of digits to be printed after the decimal point. For the g and G specifiers, this is the maximum number of significant digits to be printed. For s, this is the maximum number of characters to be printed; by default all characters are printed until the ending null character is encountered. For the c type, it has no effect. When no precision is specified, the default is 1. If the period is specified without an explicit value for precision, 0 is assumed.
.* − The precision is not specified in the format string, but as an additional integer value argument preceding the argument that has to be formatted.

length:

h − The argument is interpreted as a short int or unsigned short int (only applies to integer specifiers − i, d, o, u, x and X).
l − The argument is interpreted as a long int or unsigned long int for integer specifiers (i, d, o, u, x and X), and as a wide character or wide character string for specifiers c and s.
L − The argument is interpreted as a long double (only applies to floating point specifiers − e, E, f, g and G).

arg − An object representing the variable arguments list. This should be initialized by the va_start macro defined in <stdarg.h>.

If successful, the total number of characters written is returned; otherwise, a negative number is returned.

The following example shows the usage of the vfprintf() function.

#include <stdio.h>
#include <stdarg.h>

void WriteFrmtd(FILE *stream, char *format, ...) {
   va_list args;

   va_start(args, format);
   vfprintf(stream, format, args);
   va_end(args);
}

int main () {
   FILE *fp;

   fp = fopen("file.txt","w");

   WriteFrmtd(fp, "This is just one argument %d \n", 10);

   fclose(fp);

   return(0);
}

Let us compile and run the above program. It will open a file file.txt for writing in the current directory and will write the following content −

This is just one argument 10

Now let's see the content of the above file using the following program −

#include <stdio.h>

int main () {
   FILE *fp;
   int c;

   fp = fopen("file.txt","r");
   while(1) {
      c = fgetc(fp);
      if( feof(fp) ) {
         break;
      }
      printf("%c", c);
   }
   fclose(fp);
   return(0);
}

Let us compile and run the above program to produce the following result.

This is just one argument 10
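The same format tags apply to vfprintf() as to printf(), since both consume the same format string. Here is a small, self-contained sketch exercising a few of the flag, width, and precision tags described above (the values are arbitrary examples):

#include <stdio.h>

int main(void) {
   printf("[%10d]\n", 42);         /* width 10, right-justified:  [        42] */
   printf("[%-10d]\n", 42);        /* '-' flag, left-justified:   [42        ] */
   printf("[%+d]\n", 42);          /* '+' flag forces the sign:   [+42]        */
   printf("[%08.3f]\n", 3.14159);  /* zero-padded, 3 decimals:    [0003.142]   */
   printf("[%.*f]\n", 2, 3.14159); /* precision from an argument: [3.14]       */
   return 0;
}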
[ { "code": null, "e": 2162, "s": 2007, "text": "The C library function int vfprintf(FILE *stream, const char *format, va_list arg) sends formatted output to a stream using an argument list passed to it." }, { "code": null, "e": 2216, "s": 2162, "text": "Following is the declaration for vfprintf() function." }, { "code": null, "e": 2276, "s": 2216, "text": "int vfprintf(FILE *stream, const char *format, va_list arg)" }, { "code": null, "e": 2350, "s": 2276, "text": "stream − This is the pointer to a FILE object that identifies the stream." }, { "code": null, "e": 2424, "s": 2350, "text": "stream − This is the pointer to a FILE object that identifies the stream." }, { "code": null, "e": 2748, "s": 2424, "text": "format − This is the C string that contains the text to be written to the stream. It can optionally contain embedded format tags that are replaced by the values specified in subsequent additional arguments and formatted as requested. Format tags prototype: %[flags][width][.precision][length]specifier, as explained below −" }, { "code": null, "e": 3072, "s": 2748, "text": "format − This is the C string that contains the text to be written to the stream. It can optionally contain embedded format tags that are replaced by the values specified in subsequent additional arguments and formatted as requested. Format tags prototype: %[flags][width][.precision][length]specifier, as explained below −" }, { "code": null, "e": 3074, "s": 3072, "text": "c" }, { "code": null, "e": 3084, "s": 3074, "text": "Character" }, { "code": null, "e": 3091, "s": 3084, "text": "d or i" }, { "code": null, "e": 3114, "s": 3091, "text": "Signed decimal integer" }, { "code": null, "e": 3116, "s": 3114, "text": "e" }, { "code": null, "e": 3174, "s": 3116, "text": "Scientific notation (mantissa/exponent) using e character" }, { "code": null, "e": 3176, "s": 3174, "text": "E" }, { "code": null, "e": 3234, "s": 3176, "text": "Scientific notation (mantissa/exponent) using E character" }, { "code": null, "e": 3236, "s": 3234, "text": "f" }, { "code": null, "e": 3259, "s": 3236, "text": "Decimal floating point" }, { "code": null, "e": 3261, "s": 3259, "text": "g" }, { "code": null, "e": 3290, "s": 3261, "text": "Uses the shorter of %e or %f" }, { "code": null, "e": 3292, "s": 3290, "text": "G" }, { "code": null, "e": 3321, "s": 3292, "text": "Uses the shorter of %E or %f" }, { "code": null, "e": 3323, "s": 3321, "text": "o" }, { "code": null, "e": 3336, "s": 3323, "text": "Signed octal" }, { "code": null, "e": 3338, "s": 3336, "text": "s" }, { "code": null, "e": 3359, "s": 3338, "text": "String of characters" }, { "code": null, "e": 3361, "s": 3359, "text": "u" }, { "code": null, "e": 3386, "s": 3361, "text": "Unsigned decimal integer" }, { "code": null, "e": 3388, "s": 3386, "text": "x" }, { "code": null, "e": 3417, "s": 3388, "text": "Unsigned hexadecimal integer" }, { "code": null, "e": 3419, "s": 3417, "text": "X" }, { "code": null, "e": 3466, "s": 3419, "text": "Unsigned hexadecimal integer (capital letters)" }, { "code": null, "e": 3468, "s": 3466, "text": "p" }, { "code": null, "e": 3484, "s": 3468, "text": "Pointer address" }, { "code": null, "e": 3486, "s": 3484, "text": "n" }, { "code": null, "e": 3502, "s": 3486, "text": "Nothing printed" }, { "code": null, "e": 3504, "s": 3502, "text": "%" }, { "code": null, "e": 3514, "s": 3504, "text": "Character" }, { "code": null, "e": 3516, "s": 3514, "text": "-" }, { "code": null, "e": 3621, "s": 3516, "text": "Left-justify within the given field width; Right 
justification is the default (see width sub-specifier)." }, { "code": null, "e": 3623, "s": 3621, "text": "+" }, { "code": null, "e": 3778, "s": 3623, "text": "Forces to precede the result with a plus or minus sign (+ or -) even for positive numbers. By default, only negative numbers are preceded with a -ve sign." }, { "code": null, "e": 3786, "s": 3778, "text": "(space)" }, { "code": null, "e": 3865, "s": 3786, "text": "If no sign is going to be written, a blank space is inserted before the value." }, { "code": null, "e": 3867, "s": 3865, "text": "#" }, { "code": null, "e": 4245, "s": 3867, "text": "Used with o, x or X specifiers the value is preceded with 0, 0x or 0X respectively for values different than zero. Used with e, E and f, it forces the written output to contain a decimal point even if no digits would follow. By default, if no digits follow, no decimal point is written. Used with g or G the result is the same as with e or E but trailing zeros are not removed." }, { "code": null, "e": 4247, "s": 4245, "text": "0" }, { "code": null, "e": 4357, "s": 4247, "text": "Left-pads the number with zeroes (0) instead of spaces, where padding is specified (see width sub-specifier)." }, { "code": null, "e": 4366, "s": 4357, "text": "(number)" }, { "code": null, "e": 4563, "s": 4366, "text": "Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger." }, { "code": null, "e": 4565, "s": 4563, "text": "*" }, { "code": null, "e": 4707, "s": 4565, "text": "The width is not specified in the format string, but as an additional integer value argument preceding the argument that has to be formatted." }, { "code": null, "e": 4715, "s": 4707, "text": ".number" }, { "code": null, "e": 5544, "s": 4715, "text": "For integer specifiers (d, i, o, u, x, X) − precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros. The value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0. For e, E and f specifiers − this is the number of digits to be printed after the decimal point. For g and G specifiers − This is the maximum number of significant digits to be printed. For s − this is the maximum number of characters to be printed. By default all characters are printed until the ending null character is encountered. For c type − it has no effect. When no precision is specified, the default is 1. If the period is specified without an explicit value for precision, 0 is assumed." }, { "code": null, "e": 5547, "s": 5544, "text": ".*" }, { "code": null, "e": 5693, "s": 5547, "text": "The precision is not specified in the format string, but as an additional integer value argument preceding the argument that has to be formatted." }, { "code": null, "e": 5695, "s": 5693, "text": "h" }, { "code": null, "e": 5820, "s": 5695, "text": "The argument is interpreted as a short int or unsigned short int (only applies to integer specifiers − i, d, o, u, x and X)." }, { "code": null, "e": 5822, "s": 5820, "text": "l" }, { "code": null, "e": 6004, "s": 5822, "text": "The argument is interpreted as a long int or unsigned long int for integer specifiers (i, d, o, u, x and X), and as a wide character or wide character string for specifiers c and s." 
}, { "code": null, "e": 6006, "s": 6004, "text": "L" }, { "code": null, "e": 6115, "s": 6006, "text": "The argument is interpreted as a long double (only applies to floating point specifiers − e, E, f, g and G)." }, { "code": null, "e": 6243, "s": 6115, "text": "arg − An object representing the variable arguments list. This should be initialized by the va_start macro defined in <stdarg>." }, { "code": null, "e": 6371, "s": 6243, "text": "arg − An object representing the variable arguments list. This should be initialized by the va_start macro defined in <stdarg>." }, { "code": null, "e": 6479, "s": 6371, "text": "If successful, the total number of characters written is returned otherwise, a negative number is returned." }, { "code": null, "e": 6541, "s": 6479, "text": "The following example shows the usage of vfprintf() function." }, { "code": null, "e": 6886, "s": 6541, "text": "#include <stdio.h>\n#include <stdarg.h>\n\nvoid WriteFrmtd(FILE *stream, char *format, ...) {\n va_list args;\n\n va_start(args, format);\n vfprintf(stream, format, args);\n va_end(args);\n}\n\nint main () {\n FILE *fp;\n\n fp = fopen(\"file.txt\",\"w\");\n\n WriteFrmtd(fp, \"This is just one argument %d \\n\", 10);\n\n fclose(fp);\n \n return(0);\n}" }, { "code": null, "e": 7034, "s": 6886, "text": "Let us compile and run the above program that will open a file file.txt for writing in the current directory and will write the following content −" }, { "code": null, "e": 7064, "s": 7034, "text": "This is just one argument 10\n" }, { "code": null, "e": 7138, "s": 7064, "text": "Now let's see the content of the above file using the following program −" }, { "code": null, "e": 7368, "s": 7138, "text": "#include <stdio.h>\n\nint main () {\n FILE *fp;\n int c;\n\n fp = fopen(\"file.txt\",\"r\");\n while(1) {\n c = fgetc(fp);\n if( feof(fp) ) {\n break;\n }\n printf(\"%c\", c);\n }\n fclose(fp);\n return(0);\n}" }, { "code": null, "e": 7438, "s": 7368, "text": "Let us compile and run above program to produce the following result." }, { "code": null, "e": 7468, "s": 7438, "text": "This is just one argument 10\n" }, { "code": null, "e": 7501, "s": 7468, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 7516, "s": 7501, "text": " Nishant Malik" }, { "code": null, "e": 7551, "s": 7516, "text": "\n 12 Lectures \n 2.5 hours \n" }, { "code": null, "e": 7566, "s": 7551, "text": " Nishant Malik" }, { "code": null, "e": 7601, "s": 7566, "text": "\n 48 Lectures \n 6.5 hours \n" }, { "code": null, "e": 7615, "s": 7601, "text": " Asif Hussain" }, { "code": null, "e": 7648, "s": 7615, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 7666, "s": 7648, "text": " Richa Maheshwari" }, { "code": null, "e": 7701, "s": 7666, "text": "\n 20 Lectures \n 3.5 hours \n" }, { "code": null, "e": 7720, "s": 7701, "text": " Vandana Annavaram" }, { "code": null, "e": 7753, "s": 7720, "text": "\n 44 Lectures \n 1 hours \n" }, { "code": null, "e": 7765, "s": 7753, "text": " Amit Diwan" }, { "code": null, "e": 7772, "s": 7765, "text": " Print" }, { "code": null, "e": 7783, "s": 7772, "text": " Add Notes" } ]
Groovy - toLowerCase()
Converts all of the characters in this String to lower case.

Syntax

String toLowerCase()

Parameters − None

Return Value − The modified string in lower case.

Following is an example of the usage of this method −

class Example {
   static void main(String[] args) {
      String a = "HelloWorld";
      println(a.toLowerCase());
   }
}

When we run the above program, we will get the following result −

helloworld
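A couple of additional usages, as a quick sketch (the variable name is arbitrary):

def s = "Hello World 123"
println(s.toLowerCase())                    // hello world 123 -- digits are unchanged
println("ABC".toLowerCase() == "abc")       // true
println(s.toLowerCase().contains("world"))  // true -- handy for case-insensitive checks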
[ { "code": null, "e": 2299, "s": 2238, "text": "Converts all of the characters in this String to lower case." }, { "code": null, "e": 2321, "s": 2299, "text": "String toLowerCase()\n" }, { "code": null, "e": 2326, "s": 2321, "text": "None" }, { "code": null, "e": 2361, "s": 2326, "text": "The modified string in lower case." }, { "code": null, "e": 2415, "s": 2361, "text": "Following is an example of the usage of this method −" }, { "code": null, "e": 2543, "s": 2415, "text": "class Example { \n static void main(String[] args) { \n String a = \"HelloWorld\"; \n println(a.toLowerCase()); \n } \n}" }, { "code": null, "e": 2609, "s": 2543, "text": "When we run the above program, we will get the following result −" }, { "code": null, "e": 2621, "s": 2609, "text": "helloworld\n" }, { "code": null, "e": 2654, "s": 2621, "text": "\n 52 Lectures \n 8 hours \n" }, { "code": null, "e": 2672, "s": 2654, "text": " Krishna Sakinala" }, { "code": null, "e": 2707, "s": 2672, "text": "\n 49 Lectures \n 2.5 hours \n" }, { "code": null, "e": 2725, "s": 2707, "text": " Packt Publishing" }, { "code": null, "e": 2732, "s": 2725, "text": " Print" }, { "code": null, "e": 2743, "s": 2732, "text": " Add Notes" } ]
sciPy stats.nanmedian() function | Python - GeeksforGeeks
11 Feb, 2019

scipy.stats.nanmedian(array, axis=0) function calculates the median by ignoring the NaN (not a number) values of the array elements along the specified axis of the array.

Parameters:
array : Input array or object having the elements, including NaN values, to calculate the median.
axis : Axis along which the median is to be computed. By default axis = 0.

Returns : median of the array elements (ignoring the NaN values) based on the set parameters.

Code #1:

# median
import scipy
import numpy as np

arr1 = [1, 3, np.nan, 27, 2, 5]

print("median using nanmedian :", scipy.nanmedian(arr1))

print("median without handling nan value :", scipy.median(arr1))

Output:

median using nanmedian : 3.0
median without handling nan value : nan

Code #2: With multi-dimensional data

# median
from scipy import median
from scipy import nanmedian
import numpy as np

arr1 = [[1, 3, 27],
        [3, np.nan, 6],
        [np.nan, 6, 3],
        [3, 6, np.nan]]

print("median is :", median(arr1))

print("median handling nan :", nanmedian(arr1))

# using axis = 0
print("\nmedian is with default axis = 0 : \n", median(arr1, axis = 0))
print("\nmedian handling nan with default axis = 0 : \n", nanmedian(arr1, axis = 0))

# using axis = 1
print("\nmedian is with default axis = 1 : \n", median(arr1, axis = 1))
print("\nmedian handling nan with default axis = 1 : \n", nanmedian(arr1, axis = 1))

Output:

median is : nan
median handling nan : 3.0

median is with default axis = 0 : 
 [ nan nan nan]

median handling nan with default axis = 0 : 
 [ 3. 6. 6.]

median is with default axis = 1 : 
 [ 3. nan nan nan]

median handling nan with default axis = 1 : 
 [ 3. 4.5 4.5 4.5]
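Note that scipy.nanmedian and scipy.stats.nanmedian were deprecated and have since been removed from recent SciPy releases, so the snippets above only run on older environments. On current versions, the equivalent result comes from numpy.nanmedian; a sketch of the replacement:

# NaN-ignoring median with NumPy (works on current environments)
import numpy as np

arr1 = [1, 3, np.nan, 27, 2, 5]
print(np.nanmedian(arr1))            # 3.0 -- NaN is ignored

arr2 = [[1, 3, 27], [3, np.nan, 6]]
print(np.nanmedian(arr2, axis = 0))  # [ 2.   3.  16.5] -- column-wise, NaN ignored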
[ { "code": null, "e": 24234, "s": 24206, "text": "\n11 Feb, 2019" }, { "code": null, "e": 24405, "s": 24234, "text": "scipy.stats.nanmedian(array, axis=0) function calculates the median by ignoring the Nan (not a number) values of the array elements along the specified axis of the array." }, { "code": null, "e": 24588, "s": 24405, "text": "Parameters :array : Input array or object having the elements, including Nan values, to calculate the median.axis : Axis along which the median is to be computed. By default axis = 0" }, { "code": null, "e": 24682, "s": 24588, "text": "Returns : median of the array elements (ignoring the Nan values) based on the set parameters." }, { "code": null, "e": 24691, "s": 24682, "text": "Code #1:" }, { "code": "# median import scipyimport numpy as np arr1 = [1, 3, np.nan, 27, 2, 5] print(\"median using nanmedian :\", scipy.nanmedian(arr1)) print(\"median without handling nan value :\", scipy.median(arr1)) ", "e": 24891, "s": 24691, "text": null }, { "code": null, "e": 24899, "s": 24891, "text": "Output:" }, { "code": null, "e": 24968, "s": 24899, "text": "median using nanmedian : 3.0\nmedian without handling nan value : nan" }, { "code": null, "e": 25006, "s": 24968, "text": " Code #2: With multi-dimensional data" }, { "code": "# median from scipy import medianfrom scipy import nanmedianimport numpy as np arr1 = [[1, 3, 27], [3, np.nan, 6], [np.nan, 6, 3], [3, 6, np.nan]] print(\"median is :\", median(arr1)) print(\"median handling nan :\", nanmedian(arr1)) # using axis = 0print(\"\\nmedian is with default axis = 0 : \\n\", median(arr1, axis = 0))print(\"\\nmedian handling nan with default axis = 0 : \\n\", nanmedian(arr1, axis = 0)) # using axis = 1print(\"\\nmedian is with default axis = 1 : \\n\", median(arr1, axis = 1)) print(\"\\nmedian handling nan with default axis = 1 : \\n\", nanmedian(arr1, axis = 1)) ", "e": 25639, "s": 25006, "text": null }, { "code": null, "e": 25647, "s": 25639, "text": "Output:" }, { "code": null, "e": 25932, "s": 25647, "text": "median is : nan\nmedian handling nan : 3.0\n\nmedian is with default axis = 0 : \n [ nan nan nan]\n\nmedian handling nan with default axis = 0 : \n [ 3. 6. 6.]\n\nmedian is with default axis = 1 : \n [ 3. nan nan nan]\n\nmedian handling nan with default axis = 1 : \n [ 3. 4.5 4.5 4.5]" }, { "code": null, "e": 25961, "s": 25932, "text": "Python scipy-stats-functions" }, { "code": null, "e": 25974, "s": 25961, "text": "Python-scipy" }, { "code": null, "e": 25981, "s": 25974, "text": "Python" }, { "code": null, "e": 26079, "s": 25981, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26088, "s": 26079, "text": "Comments" }, { "code": null, "e": 26101, "s": 26088, "text": "Old Comments" }, { "code": null, "e": 26119, "s": 26101, "text": "Python Dictionary" }, { "code": null, "e": 26154, "s": 26119, "text": "Read a file line by line in Python" }, { "code": null, "e": 26176, "s": 26154, "text": "Enumerate() in Python" }, { "code": null, "e": 26206, "s": 26176, "text": "Iterate over a list in Python" }, { "code": null, "e": 26238, "s": 26206, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 26280, "s": 26238, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 26306, "s": 26280, "text": "Python String | replace()" }, { "code": null, "e": 26343, "s": 26306, "text": "Create a Pandas DataFrame from Lists" }, { "code": null, "e": 26386, "s": 26343, "text": "Python program to convert a list to string" } ]
How to get battery level and state in Android?
This example demonstrates how to get the battery level and state in Android.

Step 1 - Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 - Add the following code to res/layout/activity_main.xml.

<?xml version = "1.0" encoding = "utf-8"?>
<LinearLayout xmlns:android = "http://schemas.android.com/apk/res/android"
   android:id = "@+id/parent"
   xmlns:tools = "http://schemas.android.com/tools"
   android:layout_width = "match_parent"
   android:layout_height = "match_parent"
   tools:context = ".MainActivity"
   android:gravity = "center"
   android:orientation = "vertical">
   <TextView
      android:id = "@+id/text"
      android:textSize = "18sp"
      android:textAlignment = "center"
      android:text = "battery percentage"
      android:layout_width = "match_parent"
      android:layout_height = "wrap_content" />
</LinearLayout>

In the above code, we have taken a text view. It displays the battery percentage.

Step 3 - Add the following code to src/MainActivity.java

package com.example.andy.myapplication;

import android.os.BatteryManager;
import android.os.Build;
import android.os.Bundle;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {
   int view = R.layout.activity_main;
   TextView text;
   @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(view);
      text = findViewById(R.id.text);
      BatteryManager bm = (BatteryManager) getSystemService(BATTERY_SERVICE);
      if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
         int percentage = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);
         text.setText("Battery Percentage is " + percentage + " %");
      }
   }
}

In the above code, we have used the BatteryManager system service. To get the battery percentage, use the following code -

BatteryManager bm = (BatteryManager) getSystemService(BATTERY_SERVICE);
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
   int percentage = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);
   text.setText("Battery Percentage is " + percentage + " %");
}

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen -

The above result shows the initial screen, with the battery percentage at 100%.

Click here to download the project code
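The title also promises the battery state, which the code above does not report. As a sketch of one common approach (using the standard BatteryManager extras; this fragment is meant to be dropped into the same Activity), the sticky ACTION_BATTERY_CHANGED broadcast exposes both the charging state and the level, and it also works below API level 21:

import android.content.Intent;
import android.content.IntentFilter;

// ACTION_BATTERY_CHANGED is a sticky broadcast, so passing a null
// receiver simply returns the most recent battery intent.
IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
Intent batteryStatus = registerReceiver(null, filter);

// State: charging, discharging, full, ...
int status = batteryStatus.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
boolean isCharging = status == BatteryManager.BATTERY_STATUS_CHARGING
      || status == BatteryManager.BATTERY_STATUS_FULL;

// The level can be derived from the same intent:
int level = batteryStatus.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
int scale = batteryStatus.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
float percent = 100 * level / (float) scale;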
[ { "code": null, "e": 1135, "s": 1062, "text": "This example demonstrates how to get battery level and state in Android." }, { "code": null, "e": 1264, "s": 1135, "text": "Step 1 - Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1329, "s": 1264, "text": "Step 2 - Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1978, "s": 1329, "text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<LinearLayout xmlns:android = \"http://schemas.android.com/apk/res/android\"\n android:id = \"@+id/parent\"\n xmlns:tools = \"http://schemas.android.com/tools\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\"\n tools:context = \".MainActivity\"\n android:gravity = \"center\"\n android:orientation = \"vertical\">\n <TextView\n android:id = \"@+id/text\"\n android:textSize = \"18sp\"\n android:textAlignment = \"center\"\n android:text = \"batter percentage\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\" />\n</LinearLayout>" }, { "code": null, "e": 2056, "s": 1978, "text": "In the above code, we have taken a text view. it contains battery percentage." }, { "code": null, "e": 2113, "s": 2056, "text": "Step 3 - Add the following code to src/MainActivity.java" }, { "code": null, "e": 3041, "s": 2113, "text": "package com.example.andy.myapplication;\nimport android.os.BatteryManager;\nimport android.os.Build;\nimport android.os.Bundle;\nimport android.support.annotation.RequiresApi;\nimport android.support.v7.app.AppCompatActivity;\nimport android.widget.TextView;\npublic class MainActivity extends AppCompatActivity {\n int view = R.layout.activity_main;\n TextView text;\n @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(view);\n text = findViewById(R.id.text);\n BatteryManager bm = (BatteryManager)getSystemService(BATTERY_SERVICE);\n if (android.os.Build.VERSION.SDK_INT > = android.os.Build.VERSION_CODES.LOLLIPOP) {\n int percentage = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);\n text.setText(\"Battery Percentage is \"+percentage+\" %\");\n }\n }\n}" }, { "code": null, "e": 3164, "s": 3041, "text": "In the above code, we have used the battery manager system service. To get the battery percentage use the following code -" }, { "code": null, "e": 3461, "s": 3164, "text": "BatteryManager bm = (BatteryManager)getSystemService(BATTERY_SERVICE);\nif (android.os.Build.VERSION.SDK_INT > = android.os.Build.VERSION_CODES.LOLLIPOP) {\n int percentage = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);\n text.setText(\"Battery Percentage is \"+percentage+\" %\");\n}" }, { "code": null, "e": 3808, "s": 3461, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen -" }, { "code": null, "e": 3896, "s": 3808, "text": "In the above result shows the initial screen and it is showing 100% battery percentage." }, { "code": null, "e": 3936, "s": 3896, "text": "Click here to download the project code" } ]
Scale, Standardize, or Normalize with Scikit-Learn | by Jeff Hale | Towards Data Science
Many machine learning algorithms work better when features are on a relatively similar scale and close to normally distributed. MinMaxScaler, RobustScaler, StandardScaler, and Normalizer are scikit-learn methods to preprocess data for machine learning. Which method you need, if any, depends on your model type and your feature values.

This guide will highlight the differences and similarities among these methods and help you learn when to reach for which tool.

As often as these methods appear in machine learning workflows, I found it difficult to find information about which of them to use when. Commentators often use the terms scale, standardize, and normalize interchangeably. However, there are some differences, and the four scikit-learn functions we will examine do different things.

First, a few housekeeping notes:

The Jupyter Notebook on which this article is based can be found here.

In this article, we aren’t looking at log transformations or other transformations aimed at reducing the heteroskedasticity of the errors.

This guide is current as of scikit-learn v0.20.3.

Scale generally means to change the range of the values. The shape of the distribution doesn’t change. Think about how a scale model of a building has the same proportions as the original, just smaller. That’s why we say it is drawn to scale. The range is often set at 0 to 1.

Standardize generally means changing the values so that the distribution’s standard deviation equals one. Scaling is often implied.

Normalize can be used to mean either of the above things (and more!). I suggest you avoid the term normalize, because it has many definitions and is prone to creating confusion.

If you use any of these terms in your communication, I strongly suggest you define them.

Many machine learning algorithms perform better or converge faster when features are on a relatively similar scale and/or close to normally distributed. Examples of such algorithm families include:

linear and logistic regression
nearest neighbors
neural networks
support vector machines with radial basis kernel functions
principal components analysis
linear discriminant analysis

Scaling and standardizing can help features arrive in more digestible form for these algorithms.

The four scikit-learn preprocessing methods we are examining follow the API shown below. X_train and X_test are the usual numpy ndarrays or pandas DataFrames.

from sklearn import preprocessing

mm_scaler = preprocessing.MinMaxScaler()
X_train_minmax = mm_scaler.fit_transform(X_train)
mm_scaler.transform(X_test)

We’ll look at a number of distributions and apply each of the four scikit-learn methods to them.

I created five distributions with different characteristics. The distributions are:

beta — with negative skew
exponential — with positive skew
leptokurtic — normal, leptokurtic
platykurtic — normal, platykurtic
bimodal — bimodal

The values are all of relatively similar scale, as can be seen on the x-axis of the Kernel Density Estimate plot (kdeplot) below.

Then I added a sixth distribution with much larger values (normally distributed) — normal.

Now our kdeplot looks like this:

Squint hard at the monitor and you might notice the tiny green bar of big values to the right. Here are the descriptive statistics for our features.

Alright, let’s start scaling!

For each value in a feature, MinMaxScaler subtracts the minimum value in the feature and then divides by the range. The range is the difference between the original maximum and original minimum.
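In symbols, that is x_scaled = (x − x_min) / (x_max − x_min), a direct restatement of the sentence above. For example, in a feature whose minimum is 10 and maximum is 60, the value 20 maps to (20 − 10) / (60 − 10) = 0.2.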
MinMaxScaler preserves the shape of the original distribution. It doesn’t meaningfully change the information embedded in the original data.

Note that MinMaxScaler doesn’t reduce the importance of outliers.

The default range for the feature returned by MinMaxScaler is 0 to 1.

Here’s the kdeplot after MinMaxScaler has been applied.

Notice how the features are all on the same relative scale. The relative spaces between each feature’s values have been maintained.

MinMaxScaler isn’t a bad place to start, unless you know you want your feature to have a normal distribution or you have outliers and you want them to have reduced influence.

RobustScaler transforms the feature vector by subtracting the median and then dividing by the interquartile range (75% value — 25% value).

Like MinMaxScaler, our feature with large values — normal-big — is now of similar scale to the other features. Note that RobustScaler does not scale the data into a predetermined interval like MinMaxScaler. It does not meet the strict definition of scale I introduced earlier.

Note that the range for each feature after RobustScaler is applied is larger than it was for MinMaxScaler.

Use RobustScaler if you want to reduce the effects of outliers, relative to MinMaxScaler.

Now let’s turn to StandardScaler.

StandardScaler is the industry’s go-to algorithm. 🙂

StandardScaler standardizes a feature by subtracting the mean and then scaling to unit variance. Unit variance means dividing all the values by the standard deviation. StandardScaler does not meet the strict definition of scale I introduced earlier.

StandardScaler results in a distribution with a standard deviation equal to 1. The variance is equal to 1 also, because variance = standard deviation squared. And 1 squared = 1.

StandardScaler makes the mean of the distribution approximately 0.

In the plot above, you can see that all of the distributions have a mean close to zero and unit variance. The values are on a similar scale, but the range is larger than after MinMaxScaler.

Deep learning algorithms often call for zero mean and unit variance. Regression-type algorithms also benefit from normally distributed data with small sample sizes.

Now let’s have a look at Normalizer.

Normalizer works on the rows, not the columns! I find that very unintuitive. It’s easy to miss this information in the docs.

By default, L2 normalization is applied to each observation so that the values in a row have a unit norm. Unit norm with L2 means that if each element were squared and summed, the total would equal 1. Alternatively, L1 (aka taxicab or Manhattan) normalization can be applied instead of L2 normalization.

Normalizer does transform all the features to values between -1 and 1 (this text updated July 2019). In our example, normal_big ends up with all its values transformed to .9999999.

Have you found good use cases for Normalizer? If so, please let me know on Twitter @discdiver.

In most cases one of the other preprocessing tools above will be more helpful.

Again, scikit-learn’s Normalizer works on the rows, not the columns.

Here are plots of the original distributions before and after MinMaxScaler, RobustScaler, and StandardScaler have been applied.

Note that after any of these three transformations the values are on a similar scale. 🎉

Scaling and standardizing your data is often a good idea. I highly recommend using a StandardScaler prior to feeding the data to a deep learning algorithm, one that depends upon the relative distance of the observations, or one that uses L2 regularization.
Note that standard scaling can make interpretation of regression coefficients a bit trickier. 😉

Use StandardScaler if you want each feature to have zero mean and unit standard deviation. If you want more normally distributed data and are okay with transforming your data, check out scikit-learn’s QuantileTransformer(output_distribution='normal').

Use MinMaxScaler if you want to have a light touch. It’s non-distorting.

You could use RobustScaler if you have outliers and want to reduce their influence.

Use Normalizer sparingly — it normalizes sample rows, not feature columns. It can use L2 or L1 normalization.

In this article you’ve seen how scikit-learn can help you scale, standardize, and normalize your data. ***I updated the images and some text in this article in August 2021. Thank you to readers for feedback!***

Resources to go deeper:

Here’s a scikit-learn doc on preprocessing data.

Here’s another doc about the effects of scikit-learn scalers on outliers.

Here’s a nice guide to probability distributions by Sean Owen.

Here’s another guide comparing these functions by Ben Alex Keen.

I hope you found this guide helpful. If you did, please share it on your favorite social media channel. 👏

I write about Python, Docker, SQL, pandas, and other data science topics. If any of those topics interest you, read more here and follow me on Medium. 😃
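As a companion to the summary above, here is a minimal sketch (toy data with made-up values) showing all four transformers side by side, including Normalizer's row-wise behavior:

import numpy as np
from sklearn.preprocessing import (MinMaxScaler, Normalizer,
                                   RobustScaler, StandardScaler)

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0],
              [50.0, 500.0]])  # note the outlier (50.0) in the first column

for scaler in (MinMaxScaler(), RobustScaler(), StandardScaler()):
    print(type(scaler).__name__)
    print(scaler.fit_transform(X))  # column-wise transformations

# Normalizer rescales each *row* to unit norm (L2 by default):
print(Normalizer().fit_transform(X))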
[ { "code": null, "e": 507, "s": 171, "text": "Many machine learning algorithms work better when features are on a relatively similar scale and close to normally distributed. MinMaxScaler, RobustScaler, StandardScaler, and Normalizer are scikit-learn methods to preprocess data for machine learning. Which method you need, if any, depends on your model type and your feature values." }, { "code": null, "e": 635, "s": 507, "text": "This guide will highlight the differences and similarities among these methods and help you learn when to reach for which tool." }, { "code": null, "e": 966, "s": 635, "text": "As often as these methods appear in machine learning workflows, I found it difficult to find information about which of them to use when. Commentators often use the terms scale, standardize, and normalize interchangeably. However, their are some differences and the four scikit-learn functions we will examine do different things." }, { "code": null, "e": 999, "s": 966, "text": "First, a few housekeeping notes:" }, { "code": null, "e": 1070, "s": 999, "text": "The Jupyter Notebook on which this article is based can be found here." }, { "code": null, "e": 1209, "s": 1070, "text": "In this article, we aren’t looking at log transformations or other transformations aimed at reducing the heteroskedasticity of the errors." }, { "code": null, "e": 1259, "s": 1209, "text": "This guide is current as of scikit-learn v0.20.3." }, { "code": null, "e": 1536, "s": 1259, "text": "Scale generally means to change the range of the values. The shape of the distribution doesn’t change. Think about how a scale model of a building has the same proportions as the original, just smaller. That’s why we say it is drawn to scale. The range is often set at 0 to 1." }, { "code": null, "e": 1668, "s": 1536, "text": "Standardize generally means changing the values so that the distribution’s standard deviation equals one. Scaling is often implied." }, { "code": null, "e": 1846, "s": 1668, "text": "Normalize can be used to mean either of the above things (and more!). I suggest you avoid the term normalize, because it has many definitions and is prone to creating confusion." }, { "code": null, "e": 1935, "s": 1846, "text": "If you use any of these terms in your communication, I strongly suggest you define them." }, { "code": null, "e": 2133, "s": 1935, "text": "Many machine learning algorithms perform better or converge faster when features are on a relatively similar scale and/or close to normally distributed. Examples of such algorithm families include:" }, { "code": null, "e": 2164, "s": 2133, "text": "linear and logistic regression" }, { "code": null, "e": 2182, "s": 2164, "text": "nearest neighbors" }, { "code": null, "e": 2198, "s": 2182, "text": "neural networks" }, { "code": null, "e": 2256, "s": 2198, "text": "support vector machines with radial bias kernel functions" }, { "code": null, "e": 2286, "s": 2256, "text": "principal components analysis" }, { "code": null, "e": 2315, "s": 2286, "text": "linear discriminant analysis" }, { "code": null, "e": 2412, "s": 2315, "text": "Scaling and standardizing can help features arrive in more digestible form for these algorithms." }, { "code": null, "e": 2571, "s": 2412, "text": "The four scikit-learn preprocessing methods we are examining follow the API shown below. X_train and X_test are the usual numpy ndarrays or pandas DataFrames." 
}, { "code": null, "e": 2721, "s": 2571, "text": "from sklearn import preprocessingmm_scaler = preprocessing.MinMaxScaler()X_train_minmax = mm_scaler.fit_transform(X_train)mm_scaler.transform(X_test)" }, { "code": null, "e": 2818, "s": 2721, "text": "We’ll look at a number of distributions and apply each of the four scikit-learn methods to them." }, { "code": null, "e": 2902, "s": 2818, "text": "I created four distributions with different characteristics. The distributions are:" }, { "code": null, "e": 2928, "s": 2902, "text": "beta — with negative skew" }, { "code": null, "e": 2961, "s": 2928, "text": "exponential — with positive skew" }, { "code": null, "e": 2995, "s": 2961, "text": "leptokurtic — normal, leptokurtic" }, { "code": null, "e": 3028, "s": 2995, "text": "platykurtic— normal, platykurtic" }, { "code": null, "e": 3046, "s": 3028, "text": "bimodal — bimodal" }, { "code": null, "e": 3176, "s": 3046, "text": "The values all are of relatively similar scale, as can be seen on the x-axis of the Kernel Density Estimate plot (kdeplot) below." }, { "code": null, "e": 3267, "s": 3176, "text": "Then I added a sixth distribution with much larger values (normally distributed) — normal." }, { "code": null, "e": 3300, "s": 3267, "text": "Now our kdeplot looks like this:" }, { "code": null, "e": 3449, "s": 3300, "text": "Squint hard at the monitor and you might notice the tiny green bar of big values to the right. Here are the descriptive statistics for our features." }, { "code": null, "e": 3479, "s": 3449, "text": "Alright, let’s start scaling!" }, { "code": null, "e": 3674, "s": 3479, "text": "For each value in a feature, MinMaxScaler subtracts the minimum value in the feature and then divides by the range. The range is the difference between the original maximum and original minimum." }, { "code": null, "e": 3815, "s": 3674, "text": "MinMaxScaler preserves the shape of the original distribution. It doesn’t meaningfully change the information embedded in the original data." }, { "code": null, "e": 3881, "s": 3815, "text": "Note that MinMaxScaler doesn’t reduce the importance of outliers." }, { "code": null, "e": 3951, "s": 3881, "text": "The default range for the feature returned by MinMaxScaler is 0 to 1." }, { "code": null, "e": 4007, "s": 3951, "text": "Here’s the kdeplot after MinMaxScaler has been applied." }, { "code": null, "e": 4139, "s": 4007, "text": "Notice how the features are all on the same relative scale. The relative spaces between each feature’s values have been maintained." }, { "code": null, "e": 4314, "s": 4139, "text": "MinMaxScaler isn’t a bad place to start, unless you know you want your feature to have a normal distribution or you have outliers and you want them to have reduced influence." }, { "code": null, "e": 4453, "s": 4314, "text": "RobustScaler transforms the feature vector by subtracting the median and then dividing by the interquartile range (75% value — 25% value)." }, { "code": null, "e": 4730, "s": 4453, "text": "Like MinMaxScaler, our feature with large values — normal-big — is now of similar scale to the other features. Note that RobustScaler does not scale the data into a predetermined interval like MinMaxScaler. It does not meet the strict definition of scale I introduced earlier." }, { "code": null, "e": 4837, "s": 4730, "text": "Note that the range for each feature after RobustScaler is applied is larger than it was for MinMaxScaler." 
}, { "code": null, "e": 4927, "s": 4837, "text": "Use RobustScaler if you want to reduce the effects of outliers, relative to MinMaxScaler." }, { "code": null, "e": 4961, "s": 4927, "text": "Now let’s turn to StandardScaler." }, { "code": null, "e": 5013, "s": 4961, "text": "StandardScaler is the industry’s go-to algorithm. 🙂" }, { "code": null, "e": 5263, "s": 5013, "text": "StandardScaler standardizes a feature by subtracting the mean and then scaling to unit variance. Unit variance means dividing all the values by the standard deviation. StandardScaler does not meet the strict definition of scale I introduced earlier." }, { "code": null, "e": 5441, "s": 5263, "text": "StandardScaler results in a distribution with a standard deviation equal to 1. The variance is equal to 1 also, because variance = standard deviation squared. And 1 squared = 1." }, { "code": null, "e": 5508, "s": 5441, "text": "StandardScaler makes the mean of the distribution approximately 0." }, { "code": null, "e": 5696, "s": 5508, "text": "In the plot above, you can see that all four distributions have a mean close to zero and unit variance. The values are on a similar scale, but the range is larger than after MinMaxScaler." }, { "code": null, "e": 5861, "s": 5696, "text": "Deep learning algorithms often call for zero mean and unit variance. Regression-type algorithms also benefit from normally distributed data with small sample sizes." }, { "code": null, "e": 5898, "s": 5861, "text": "Now let’s have a look at Normalizer." }, { "code": null, "e": 6023, "s": 5898, "text": "Normalizer works on the rows, not the columns! I find that very unintuitive. It’s easy to miss this information in the docs." }, { "code": null, "e": 6331, "s": 6023, "text": "By default, L2 normalization is applied to each observation so the that the values in a row have a unit norm. Unit norm with L2 means that if each element were squared and summed, the total would equal 1. Alternatively, L1 (aka taxicab or Manhattan) normalization can be applied instead of L2 normalization." }, { "code": null, "e": 6512, "s": 6331, "text": "Normalizer does transform all the features to values between -1 and 1 (this text updated July 2019). In our example, normal_big ends up with all its values transformed to .9999999." }, { "code": null, "e": 6607, "s": 6512, "text": "Have you found good use cases for Normalizer? If so, please let me know on Twitter @discdiver." }, { "code": null, "e": 6686, "s": 6607, "text": "In most cases one of the other preprocessing tools above will be more helpful." }, { "code": null, "e": 6755, "s": 6686, "text": "Again, scikit-learn’s Normalizer works on the rows, not the columns." }, { "code": null, "e": 6883, "s": 6755, "text": "Here are plots of the original distributions before and after MinMaxScaler, RobustScaler, and StandardScaler have been applied." }, { "code": null, "e": 6971, "s": 6883, "text": "Note that after any of these three transformations the values are on a similar scale. 🎉" }, { "code": null, "e": 7321, "s": 6971, "text": "Scaling and standardizing your data is often a good idea. I highly recommend using a StandardScaler prior to feeding the data to a deep learning algorithm or one that depends upon the relative distance of the observations, or that uses L2 regularization. Note that StandardScaling can make interpretation of regression coefficients a bit trickier. 😉" }, { "code": null, "e": 7571, "s": 7321, "text": "Use StandardScaler if you want each feature to have zero-mean, unit standard-deviation. 
If you want more normally distributed data, and are okay with transforming your data. Check out scikit-learn’s QuantileTransformer(output_distribution='normal')." }, { "code": null, "e": 7644, "s": 7571, "text": "Use MinMaxScaler if you want to have a light touch. It’s non-distorting." }, { "code": null, "e": 7728, "s": 7644, "text": "You could use RobustScaler if you have outliers and want to reduce their influence." }, { "code": null, "e": 7838, "s": 7728, "text": "Use Normalizer sparingly — it normalizes sample rows, not feature columns. It can use l2 or l1 normalization." }, { "code": null, "e": 8046, "s": 7838, "text": "In this article you’ve seen how scikit-learn can help you scale, standardize, and normalize your data. ***I updated the images and some text this article in August 2021. Thank you to readers for feedback!***" }, { "code": null, "e": 8070, "s": 8046, "text": "Resources to go deeper:" }, { "code": null, "e": 8119, "s": 8070, "text": "Here’s a scikit-learn doc on preprocessing data." }, { "code": null, "e": 8193, "s": 8119, "text": "Here’s another doc about the effects of scikit-learn scalers on outliers." }, { "code": null, "e": 8256, "s": 8193, "text": "Here’s a nice guide to probability distributions by Sean Owen." }, { "code": null, "e": 8321, "s": 8256, "text": "Here’s another guide comparing these functions by Ben Alex Keen." }, { "code": null, "e": 8427, "s": 8321, "text": "I hope you found this guide helpful. If you did, please share it on your favorite social media channel. 👏" } ]
Java labelled statement
Yes, Java supports labeled statements. You can put a label before a for statement and use break or continue with that label to control the labeled loop. See the example below.

public class Tester {
   public static void main(String args[]) {

      first:
      for (int i = 0; i < 3; i++) {
         for (int j = 0; j < 3; j++) {
            if (i == 1) {
               continue first;
            }
            System.out.print(" [i = " + i + ", j = " + j + "] ");
         }
      }
      System.out.println();

      second:
      for (int i = 0; i < 3; i++) {
         for (int j = 0; j < 3; j++) {
            if (i == 1) {
               break second;
            }
            System.out.print(" [i = " + i + ", j = " + j + "] ");
         }
      }
   }
}

first is the label for the first outer for loop; continue first causes execution to skip the print statement and move on to the next iteration of the outer loop whenever i == 1.

second is the label for the second outer for loop; break second terminates that outer loop entirely when i == 1.
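For reference, the program above produces output along these lines (the continue variant resumes with i = 2, while the break variant stops for good):

 [i = 0, j = 0]  [i = 0, j = 1]  [i = 0, j = 2]  [i = 2, j = 0]  [i = 2, j = 1]  [i = 2, j = 2]
 [i = 0, j = 0]  [i = 0, j = 1]  [i = 0, j = 2]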
[ { "code": null, "e": 1204, "s": 1062, "text": "Yes. Java supports labeled statements. You can put a label before a for statement and use the break/continue controls to jump to that label. " }, { "code": null, "e": 1227, "s": 1204, "text": "See the example below." }, { "code": null, "e": 1237, "s": 1227, "text": "Live Demo" }, { "code": null, "e": 1877, "s": 1237, "text": "public class Tester {\n public static void main(String args[]) {\n\n first:\n for (int i = 0; i < 3; i++) {\n for (int j = 0; j < 3; j++){\n if(i == 1){\n continue first;\n }\n System.out.print(\" [i = \" + i + \", j = \" + j + \"] \");\n }\n }\n System.out.println();\n \n second:\n for (int i = 0; i < 3; i++) {\n for (int j = 0; j < 3; j++){\n if(i == 1){\n break second;\n }\n System.out.print(\" [i = \" + i + \", j = \" + j + \"] \");\n }\n }\n }\n}" }, { "code": null, "e": 1993, "s": 1877, "text": "first is the label for first outermost for loop and continue first cause the loop to skip print statement if i = 1;" }, { "code": null, "e": 2097, "s": 1993, "text": "second is the label for second outermost for loop and continue second cause the loop to break the loop." } ]
break, continue and pass in Python - GeeksforGeeks
Using loops in Python automates and repeats tasks in an efficient manner. But sometimes a condition may arise where you want to exit the loop completely, skip an iteration, or ignore that condition. These cases are handled by loop control statements, which change execution from its normal sequence. Python supports the following control statements:

Break statement
Continue statement
Pass statement

The break statement is used to terminate the loop or statement in which it is present. After that, control passes to the statements that follow the break statement, if any are available. If the break statement is inside a nested loop, it terminates only the innermost loop that contains it.

Syntax:

break

Example: Consider a situation where you want to iterate over a string and print all the characters until a letter 'e' or 's' is encountered, and it is specified that you have to do this using a single loop. This is where the break statement comes in. We iterate over the string using either a while loop or a for loop, and every time we compare the current character with 'e' and 's'. If it matches, we use the break statement to exit the loop.

Below is the implementation.

# Python program to
# demonstrate the break statement

s = 'geeksforgeeks'

# Using a for loop
for letter in s:
    print(letter)
    # break the loop as soon as it sees 'e' or 's'
    if letter == 'e' or letter == 's':
        break

print("Out of for loop")
print()

i = 0

# Using a while loop
while True:
    print(s[i])
    # break the loop as soon as it sees 'e' or 's'
    if s[i] == 'e' or s[i] == 's':
        break
    i += 1

print("Out of while loop")

Output:

g
e
Out of for loop

g
e
Out of while loop

Continue is also a loop control statement, just like the break statement, but it does the opposite: instead of terminating the loop, it forces the next iteration of the loop to execute. When the continue statement runs inside a loop, the code following it within the loop body is skipped and the next iteration begins.

Syntax:

continue

Example: Consider a situation where you need to write a program that prints the numbers from 1 to 10, but not 6, and it is specified that you have to do this using a single loop. This is where the continue statement comes in. We run a loop from 1 to 10, and every time we compare the value of the iterator with 6. If it is equal to 6, we use the continue statement to move on to the next iteration without printing anything; otherwise, we print the value.

Below is the implementation of this idea:

# Python program to
# demonstrate the continue
# statement

# loop from 1 to 10
for i in range(1, 11):
    # If i is equal to 6,
    # continue to the next iteration
    # without printing
    if i == 6:
        continue
    else:
        # otherwise print the value of i
        print(i, end = " ")

Output:

1 2 3 4 5 7 8 9 10
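The same skip can be written with a while loop. One pitfall worth noting in this variant: the counter must be incremented before the continue statement, otherwise the loop would spin forever at i == 6.

i = 0
while i < 10:
    i += 1           # increment first; continue would otherwise loop forever
    if i == 6:
        continue
    print(i, end = " ")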
As the name suggests, the pass statement simply does nothing. It is used when a statement is required syntactically but you do not want any command or code to execute. It is like a null operation: nothing happens when it is executed. The pass statement can also be used for writing empty loops, and for empty control statements, functions, and classes.

Syntax:

pass

Example:

# Python program to demonstrate
# the pass statement

s = "geeks"

# Empty loop
for i in s:
    # No error will be raised
    pass

# Empty function
def fun():
    pass

# No error will be raised
fun()

# Pass statement
for i in s:
    if i == 'k':
        print('Pass executed')
        pass
    print(i)

Output:

g
e
e
Pass executed
k
s

In the above example, when the value of i becomes equal to 'k', the pass statement does nothing, and hence the letter 'k' is also printed.
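The example above covers empty loops and an empty function; the same placeholder trick applies to classes. A minimal sketch (the class name is made up for illustration):

# An empty class body would be a syntax error; pass fills the gap
class EmptyClass:
    pass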
Python Trading Toolbox: step up your charts with indicator subplots | by Stefano Basurto | Towards Data Science
After a hiatus of several months, I can finally resume posting to the Trading Toolbox series. We started this series by learning how to plot indicators (specifically: moving averages) on top of a price chart. Moving averages belong to a wide group of indicators, called overlay indicators, that share the same scale as the price and can, therefore, be plotted on the same chart. Other technical indicators, however, do not have this advantage, and we need to plot them in a separate area, sometimes called a subplot. Here is an example of a stock chart with an indicator on a separate pane, taken from Yahoo! Finance.

With this post, we want to explore how to create similar charts using Matplotlib. We also want to explore how to harness Matplotlib's potential for customization and create original, publication-quality charts. To start with, we will learn how to obtain that kind of subplot using Matplotlib. Then, we will apply that to plot our first technical indicator below the price, the Rate of Change (or ROC). We will then have a look at how to do the same on an OHLC bar or candlestick chart instead of a line chart.

At this stage, we need to delve into some technical aspects of how Matplotlib works, namely how to harness its multiple-plot capabilities and craft publication-quality charts. All the code provided assumes that you are using Jupyter Notebook. If instead you are using a more conventional text editor or the command line, you will need to add:

plt.show()

each time a chart is created in order to make it visible.

In the first two posts of this series, we created our first financial price plots using the format:

plt.plot(dates, price, <additional parameters>)

You can see the first article on moving averages or the second on weighted and exponential moving averages.

When calling that method, Matplotlib does a few things behind the scenes in order to create charts:

First, it creates an object called figure: this is the container where all of our charts are stored. A figure is created automatically and quietly; however, we can create it explicitly and access it when we need to pass some parameters, e.g. with the instruction: fig = plt.figure(figsize=(12,6))

On top of that, Matplotlib creates an object called axes (not to be confused with axis): this object corresponds to a single subplot contained within the figure. Again, this action usually happens behind the scenes. Within any figure, we can have multiple subplots (axes) arranged in a matrix.

When it comes to charts with multiple subplots, there are enough ways and methods available to make our head spin. We are going to pick just one: the .subplot() method will serve our purposes well. Through other tutorials, you may come across a method called .add_subplot(): the .subplot() method is a wrapper for .add_subplot(), which means it should make its use simpler. With the exception of a few details, their use is actually very similar.

Whenever we add a subplot to a figure, we need to supply three parameters:

The number of rows in the chart matrix.
The number of columns.
The number of the specific subplot: the axes objects are numbered going left to right, then top to bottom.

Let us try to build a practical example of a generic 2x2 multiple-plot chart.
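A minimal sketch of such a layout follows. The data here is made up purely for illustration, and the four plots have no meaning beyond showing where each numbered subplot lands:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)

fig = plt.figure(figsize=(12, 6))

# Subplots are numbered left to right, then top to bottom:
ax1 = plt.subplot(2, 2, 1)
ax1.plot(x, x)

ax2 = plt.subplot(2, 2, 2)
ax2.plot(x, x ** 2)

ax3 = plt.subplot(2, 2, 3)
ax3.plot(x, np.sqrt(x))

ax4 = plt.subplot(2, 2, 4)
ax4.plot(x, -x)

Running this produces a 2x2 grid with one small line plot in each cell.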
Now that we know how to create charts with multiple plots, we can apply our skills to plot an indicator at the bottom of a price chart. For this task, we are going to use an indicator known as Rate of Change (ROC). There are actually a few different definitions of ROC, and the one that we are going to employ for our example is based on the formula:

ROC = [Price(t) / Price(t - lag) - 1] * 100

where lag can be any whole number greater than zero and represents the number of periods (on a daily chart: days) we are looking back to compare our price. E.g., when we compute the ROC of the daily price with a 9-day lag, we are simply looking at how much, in percentage, the price has gone up (or down) compared to 9 days ago. In this article, we are not going to discuss how to interpret the ROC chart and use it for investment decisions: that deserves a dedicated article, and we are going to write one in a future post.

We start by preparing our environment:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Required by pandas: registering matplotlib date converters
pd.plotting.register_matplotlib_converters()

# If you are using Jupyter, use this to show the output images within the Notebook:
%matplotlib inline

For this exercise, I downloaded from Yahoo! Finance daily prices for the Invesco QQQ Trust, an ETF that tracks the performance of the Nasdaq 100 Index. You can find here the CSV file that I am using. We can load our data and have a glimpse at it:

datafile = 'QQQ.csv'
data = pd.read_csv(datafile, index_col = 'Date')

# Converting the dates from string to datetime format:
data.index = pd.to_datetime(data.index)
data

This displays a table of the daily open, high, low, close, adjusted close, and volume figures. We can then compute a ROC series with a 9-day lag and add it as a column to our data frame:

lag = 9
data['ROC'] = ( data['Adj Close'] / data['Adj Close'].shift(lag) - 1 ) * 100
data[['Adj Close', 'ROC']]

which shows the adjusted close alongside the new ROC column (the first nine ROC values are NaN, since there is no price nine days before them to compare with).

To make our example charts easier to read, we use only a selection of our available data. Here is how we select the last 100 rows, corresponding to the 100 most recent trading days:

data_sel = data[-100:]
dates = data_sel.index
price = data_sel['Adj Close']
roc = data_sel['ROC']

We are now ready to create our first multiple-plot chart, with the price at the top and the ROC indicator at the bottom. We can note that, compared to a generic chart with subplots, our indicator shares its date axis (the horizontal axis) with the price chart:

fig = plt.figure(figsize=(12,10))

# The price subplot:
price_ax = plt.subplot(2,1,1)
price_ax.plot(dates, price)

# The ROC subplot shares the date axis with the price plot:
roc_ax = plt.subplot(2,1,2, sharex=price_ax)
roc_ax.plot(roc)

# We can add titles to each of the subplots:
price_ax.set_title("QQQ - Adjusted Closing Price")
roc_ax.set_title("9-Day ROC")

We have just plotted our first chart with price and ROC in separate areas. This chart does its job in the sense that it makes both price and indicator visible, but it does not do it in a very visually appealing way. To start with, price and ROC share the same time axis, so there is no need to repeat the date labels on both charts. We can remove them from the top chart by using:

price_ax.get_xaxis().set_visible(False)

We can also remove the gap between the two subplots with:

fig.subplots_adjust(hspace=0)

It is also a good idea to add a horizontal line at the zero level of the ROC to make it more readable, as well as to add labels to both vertical axes.
fig = plt.figure(figsize=(12,10))

price_ax = plt.subplot(2,1,1)
price_ax.plot(dates, price, label="Adj Closing Price")
price_ax.legend(loc="upper left")

roc_ax = plt.subplot(2,1,2, sharex=price_ax)
roc_ax.plot(roc, label="9-Day ROC", color="red")
roc_ax.legend(loc="upper left")

price_ax.set_title("QQQ Daily Price")

# Removing the date labels and ticks from the price subplot:
price_ax.get_xaxis().set_visible(False)

# Removing the gap between the plots:
fig.subplots_adjust(hspace=0)

# Adding a horizontal line at the zero level in the ROC subplot:
roc_ax.axhline(0, color = (.5, .5, .5), linestyle = '--', alpha = 0.5)

# We can add labels to both vertical axes:
price_ax.set_ylabel("Price ($)")
roc_ax.set_ylabel("% ROC")

This chart already looks better. However, Matplotlib offers a much greater potential when it comes to creating professional-looking charts. Here are some examples of what we can do:

To enhance the readability of the ROC indicator, we can fill the areas between the plot and the horizontal line. The .fill_between() method serves this purpose.
We can format the date labels to show, for example, only the name of the month in short form (e.g., Jan, Feb, ...).
We can use a percent format for the labels on the ROC vertical axis.
We can add a grid to both subplots and set a background color.
We can increase the margins (padding) between the plot and the borders.
In order to maximize the chart's data-ink ratio, we can remove all the spines (the borders around the subplots) and the tick marks on the horizontal and vertical axes of both subplots.
We can also set the default size for all the fonts to a larger number, say 14.

That is quite a long list of improvements. Without getting lost too much in the details, applying all of those tweaks at once (filling the ROC area, reformatting the date and percent labels, adding grids and padding, stripping spines and ticks, and raising the default font size to 14) turns the chart into a much more polished one. This provides a taster of what we can achieve by manipulating the default Matplotlib parameters. Of course, we can always achieve some visual improvements by applying an existing style sheet, as we did in the first article of this series, e.g. with:

plt.style.use('fivethirtyeight')

When using indicator subplots, most of the time we want the price section to take a larger area than the indicator section of the chart. To achieve this, we need to manipulate the invisible grid that Matplotlib uses to place the subplots. We can do so using the GridSpec function, another very powerful feature of Matplotlib. I will just provide a brief example of how it can be used to control the height ratio between our two subplots.
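A minimal sketch of that idea follows. The 3:1 height ratio is an arbitrary choice for illustration, and dates, price and roc are the series defined earlier:

import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(12, 10))

# Two rows, one column; the first row three times as tall as the second
gs = gridspec.GridSpec(nrows=2, ncols=1, height_ratios=[3, 1])

price_ax = plt.subplot(gs[0])
price_ax.plot(dates, price, label="Adj Closing Price")
price_ax.legend(loc="upper left")
price_ax.set_ylabel("Price ($)")

roc_ax = plt.subplot(gs[1], sharex=price_ax)
roc_ax.plot(roc, label="9-Day ROC", color="red")
roc_ax.legend(loc="upper left")
roc_ax.set_ylabel("% ROC")

price_ax.get_xaxis().set_visible(False)
fig.subplots_adjust(hspace=0)

Running it reproduces the two-pane chart, with the price pane now three times as tall as the ROC pane.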
As a side note, you may notice that such a chart retains 14 as the default font size set for the previous chart. This happens because any changes to the rcParams Matplotlib parameters are permanent until we restart the system. If we need to reset them, we can use:

plt.style.use('default')

and add %matplotlib inline if we are using Jupyter Notebook.

In all of the previous examples, we charted the price as a line plot. Line plots are a good way to visualize prices when we have only one data point (in this case, the closing price) for each period of trading. Quite often, with financial price series, we want to use OHLC bar charts or candlestick charts: those charts can show all the prices that summarize the daily trading activity (Open, High, Low, Close) instead of just the close.

To plot OHLC bar charts and candlestick charts in Matplotlib, we need to use the mplfinance library. As I mentioned in the previous post of this series, mplfinance development has gathered new momentum and things are evolving rapidly. Therefore, we will deal only cursorily with how to use it to create our charts.

Mplfinance offers two methods to create subplots within charts and add an indicator:

With the External Axes Method, creating charts is more or less similar to what we have done so far. Mplfinance takes care of drawing the OHLC or candlestick chart, and we can then pass an Axes object with our indicator as a separate subplot.
The Panels Method is even easier to use than pure Matplotlib code: mplfinance takes control of all the plotting and styling operations for us. To manipulate the visual aspects of the chart, we can apply existing styles or create our own.

The External Axes Method was released less than a week before the time of writing, and I am still looking forward to exploring what potential it can offer.

As a taster of what we can do using the mplfinance Panels Method, we can plot a candlestick chart with the volume in a separate pane:

import mplfinance as mpf
mpf.plot(data_sel, type='candle', style='yahoo', title="QQQ Daily Price", volume=True)

We can add our ROC indicator in a separate subplot too:

# We create an additional plot, placing it on the third panel
roc_plot = mpf.make_addplot(roc, panel=2, ylabel='ROC')

# We pass the additional plot using the addplot parameter
mpf.plot(data_sel, type='candle', style='yahoo', addplot=roc_plot, title="QQQ Daily Price", volume=True)

There are several packages out there that make it possible to create financial charts using Python and pandas. In particular, plotly stands out for its capability to create good-looking interactive charts. Matplotlib, on the other hand, may not produce the best charts straight out of the box (look at seaborn if you need that), but it has a huge customization potential that makes it possible to create static charts that can stand out in a professional publication. That is why I believe it is well worth getting to grips with tweaking Matplotlib's properties. The future developments of mplfinance will make those possibilities even more appealing.
How to plot a single data point in Matplotlib?
To plot a single data point in Matplotlib, we can take the following steps:

Initialize a list for x and a list for y, each with a single value.
Limit the x and y axis ranges to 0 to 5.
Lay out a grid in the current line style.
Plot the given x and y using the plot() method, with marker="o", markeredgecolor="red", markerfacecolor="green".
To display the figure, use the show() method.

from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True

# A single data point at (4, 3)
x = [4]
y = [3]

plt.xlim(0, 5)
plt.ylim(0, 5)
plt.grid()

# A large circular marker makes the lone point easy to see
plt.plot(x, y, marker="o", markersize=20, markeredgecolor="red", markerfacecolor="green")
plt.show()
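As an aside, scatter() does the same job for an isolated point and needs no line-oriented marker settings. A minimal sketch (the size and colors are arbitrary choices for illustration):

from matplotlib import pyplot as plt

plt.xlim(0, 5)
plt.ylim(0, 5)
plt.grid()

# s is the marker area in points squared; c sets the fill, edgecolors the outline
plt.scatter([4], [3], s=400, c="green", edgecolors="red")
plt.show()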
Elixir - Basic Syntax
We will start with the customary "Hello World" program.

To start the Elixir interactive shell, enter the following command:

iex

After the shell starts, use the IO.puts function to "put" the string on the console output. Enter the following in your Elixir shell:

IO.puts "Hello world"

In this tutorial, we will use the Elixir script mode, where we keep the Elixir code in a file with the extension .ex. Let us now put the above code in the file test.ex. In the succeeding step, we will execute it using elixirc:

IO.puts "Hello world"

Let us now try to run the above program as follows:

$ elixirc test.ex

The above program generates the following result:

Hello World

(As an aside, Elixir script files are conventionally given the .exs extension and run with the elixir command; elixirc is primarily the compiler for .ex files, though it also executes top-level code such as the IO.puts call above, which is why the message is printed.)

Here we are calling a function, IO.puts, to generate a string on our console as output. This function can also be called the way we do in C, C++, Java, etc., providing the arguments in parentheses following the function name:

IO.puts("Hello world")

Single-line comments start with a '#' symbol. There are no multi-line comments, but you can stack multiple single-line comments. For example:

#This is a comment in Elixir

There are no required line endings like ';' in Elixir. However, we can have multiple statements on the same line, separated by ';'. For example,

IO.puts("Hello"); IO.puts("World!")

The above program generates the following result:

Hello
World!

Identifiers such as variable and function names are used to identify a variable, function, etc. In Elixir, you can name your identifiers starting with a lowercase letter, with numbers, underscores, and uppercase letters thereafter. This naming convention is commonly known as snake_case. For example, the following are some valid identifiers in Elixir:

var1       variable_2      one_M0r3_variable

Please note that variables can also be named with a leading underscore. A value that is not meant to be used must be assigned to _ or to a variable starting with an underscore:

_some_random_value = 42

Also, Elixir relies on underscores to make functions private to modules. If you name a function with a leading underscore in a module and import that module, this function will not be imported by default.

There are many more intricacies related to function naming in Elixir which we will discuss in coming chapters.

The following words are reserved and cannot be used as variable, module, or function names:

after and catch do inbits inlist nil else end
not or false fn in rescue true when xor
__MODULE__ __FILE__ __DIR__ __ENV__ __CALLER__
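To tie the naming rules together, here is a minimal sketch; the module and function names are made up for illustration. Save it as, say, naming.exs and run it with elixir naming.exs:

defmodule Naming do
  # snake_case is the convention for function names
  def double_value(n), do: n * 2

  # the leading underscore keeps this out of a plain `import Naming`
  def _internal_helper, do: 42
end

import Naming
# double_value/1 can now be called without the module prefix:
IO.puts double_value(21)   # prints 42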
JUnit - Basic Usage
Let us now have a basic example to demonstrate the step-by-step process of using JUnit.

Create a Java class to be tested, say, MessageUtil.java, in C:\>JUNIT_WORKSPACE:

/*
 * This class prints the given message on console.
 */
public class MessageUtil {

   private String message;

   // Constructor
   // @param message to be printed
   public MessageUtil(String message){
      this.message = message;
   }

   // prints the message
   public String printMessage(){
      System.out.println(message);
      return message;
   }
}

Create a Java test class, say, TestJunit.java.
Add a test method testPrintMessage() to your test class.
Add an @Test annotation to the method testPrintMessage().
Implement the test condition and check the condition using the assertEquals API of JUnit.

Create a Java class file named TestJunit.java in C:\>JUNIT_WORKSPACE:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestJunit {

   String message = "Hello World";
   MessageUtil messageUtil = new MessageUtil(message);

   @Test
   public void testPrintMessage() {
      assertEquals(message, messageUtil.printMessage());
   }
}

Create a TestRunner Java class.
Use the runClasses method of JUnit's JUnitCore class to run the test case of the test class created above.
Get the result of the test cases run in a Result object.
Get failure(s) using the getFailures() method of the Result object.
Get the success result using the wasSuccessful() method of the Result object.

Create a Java class file named TestRunner.java in C:\>JUNIT_WORKSPACE to execute the test case(s):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {
   public static void main(String[] args) {
      Result result = JUnitCore.runClasses(TestJunit.class);

      for (Failure failure : result.getFailures()) {
         System.out.println(failure.toString());
      }

      System.out.println(result.wasSuccessful());
   }
}

Compile the MessageUtil, test case, and test runner classes using javac.

C:\JUNIT_WORKSPACE>javac MessageUtil.java TestJunit.java TestRunner.java

Now run the Test Runner, which will run the test case defined in the provided test case class.

C:\JUNIT_WORKSPACE>java TestRunner

Verify the output.

Hello World
true

Now update TestJunit in C:\>JUNIT_WORKSPACE so that the test fails. Change the message string.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestJunit {

   String message = "Hello World";
   MessageUtil messageUtil = new MessageUtil(message);

   @Test
   public void testPrintMessage() {
      message = "New Word";
      assertEquals(message, messageUtil.printMessage());
   }
}

Let's keep the rest of the classes, including TestRunner, exactly as they are, and try to run the same Test Runner.

C:\JUNIT_WORKSPACE>java TestRunner

Verify the output.

Hello World
testPrintMessage(TestJunit): expected:<[New Wor]d> but was:<[Hello Worl]d>
false
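One practical note: the javac and java commands above assume the JUnit classes are already visible, so depending on your setup you may need to put the JUnit and Hamcrest jars on the classpath explicitly. The jar names below are just an assumption for a JUnit 4 installation; adjust them to the files you actually have:

C:\JUNIT_WORKSPACE>javac -cp junit-4.12.jar;hamcrest-core-1.3.jar;. MessageUtil.java TestJunit.java TestRunner.java
C:\JUNIT_WORKSPACE>java -cp junit-4.12.jar;hamcrest-core-1.3.jar;. TestRunner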
Working with datetime in Pandas DataFrame | by B. Chen | Towards Data Science
Datetime is a common data type in data science projects. Often, you'll work with it and run into problems. I found Pandas is an amazing library that contains extensive capabilities and features for working with date and time.

In this article, we will cover the following common datetime problems, which should help you get started with data analysis:

Convert strings to datetime
Assemble a datetime from multiple columns
Get year, month and day
Get the week of year, the day of week, and leap year
Get the age from the date of birth
Improve performance by setting the date column as the index
Select data with a specific year and perform aggregation
Select data with a specific month and a specific day of the month
Select data between two dates
Handle missing values

Please check out my Github repo for the source code.

Pandas has a built-in function called to_datetime() that can be used to convert strings to datetime. Let's take a look at some examples.

Pandas to_datetime() is able to parse any valid date string to datetime without any additional arguments. For example:

df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'],
                   'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'])
df

By default, to_datetime() will parse strings with the month first (MM/DD, MM DD, or MM-DD) format, and this arrangement is relatively unique to the United States.

In most of the rest of the world, the day is written first (DD/MM, DD MM, or DD-MM). If you would like Pandas to consider the day first instead of the month, you can set the argument dayfirst to True:

df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'],
                   'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'], dayfirst=True)
df

Alternatively, you can pass a custom format to the argument format.

By default, strings are parsed using the Pandas built-in parser from dateutil.parser.parse. Sometimes, your strings might be in a custom format, for example, YYYY-DD-MM HH:MM:SS. Pandas to_datetime() has an argument called format that allows you to pass a custom format:

df = pd.DataFrame({'date': ['2016-6-10 20:30:0',
                            '2016-7-1 19:45:30',
                            '2013-10-12 4:5:1'],
                   'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'], format="%Y-%d-%m %H:%M:%S")
df

Passing infer_datetime_format=True can often speed up parsing when the strings are not exactly in ISO 8601 format but are in a regular format. According to [1], in some cases this can increase the parsing speed by 5-10x.

# Make up 3000 rows
df = pd.DataFrame({'date': ['3/11/2000', '3/12/2000', '3/13/2000'] * 1000})

%timeit pd.to_datetime(df['date'], infer_datetime_format=True)
100 loops, best of 3: 10.4 ms per loop

%timeit pd.to_datetime(df['date'], infer_datetime_format=False)
1 loop, best of 3: 471 ms per loop

You will end up with an error if a date string does not match a recognizable timestamp format:

df = pd.DataFrame({'date': ['3/10/2000', 'a/11/2000', '3/12/2000'],
                   'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'])

to_datetime() has an argument called errors that allows you to ignore the error or force an invalid value to NaT:
df['date'] = pd.to_datetime(df['date'], errors='ignore')
df

df['date'] = pd.to_datetime(df['date'], errors='coerce')
df

In addition, if you would like to parse date columns when reading data from a CSV file, please check out the following article:

towardsdatascience.com

to_datetime() can be used to assemble a datetime from multiple columns as well. The keys (column labels) can be common abbreviations like ['year', 'month', 'day', 'minute', 'second', 'ms', 'us', 'ns'] or plurals of the same:

df = pd.DataFrame({'year': [2015, 2016],
                   'month': [2, 3],
                   'day': [4, 5]})
df['date'] = pd.to_datetime(df)
df

dt.year, dt.month and dt.day are the inbuilt attributes to get the year, month, and day from a Pandas datetime object.

First, let's create a dummy DataFrame and parse DoB to datetime:

df = pd.DataFrame({'name': ['Tom', 'Andy', 'Lucas'],
                   'DoB': ['08-05-1997', '04-28-1996', '12-16-1995']})
df['DoB'] = pd.to_datetime(df['DoB'])

And to get the year, month, and day:

df['year'] = df['DoB'].dt.year
df['month'] = df['DoB'].dt.month
df['day'] = df['DoB'].dt.day
df

Similarly, dt.week, dt.dayofweek, and dt.is_leap_year are the inbuilt attributes to get the week of year, the day of week, and leap year:

df['week_of_year'] = df['DoB'].dt.week
df['day_of_week'] = df['DoB'].dt.dayofweek
df['is_leap_year'] = df['DoB'].dt.is_leap_year
df

Note that the Pandas dt.dayofweek attribute returns the day of the week, where the week is assumed to start on Monday, denoted by 0, and to end on Sunday, denoted by 6. To replace the number with the full name, we can create a mapping and pass it to map():

dw_mapping = {
    0: 'Monday',
    1: 'Tuesday',
    2: 'Wednesday',
    3: 'Thursday',
    4: 'Friday',
    5: 'Saturday',
    6: 'Sunday'
}
df['day_of_week_name'] = df['DoB'].dt.weekday.map(dw_mapping)
df

The simplest solution to get the age is by subtracting the years:

today = pd.to_datetime('today')
df['age'] = today.year - df['DoB'].dt.year
df

However, this is not accurate, as people might not have had their birthday yet this year. A more accurate solution takes the birthday into account (a per-row alternative appears just below):

# Year difference
today = pd.to_datetime('today')
diff_y = today.year - df['DoB'].dt.year

# Haven't had their birthday yet this year
b_md = df['DoB'].apply(lambda x: (x.month, x.day))
no_birthday = b_md > (today.month, today.day)

df['age'] = diff_y - no_birthday
df

A common solution to select data by date is using a boolean mask. For example:

condition = (df['date'] > start_date) & (df['date'] <= end_date)
df.loc[condition]

This solution normally requires start_date, end_date, and the date column to be in datetime format. In fact, this solution is slow when you are doing a lot of selections by date in a large dataset.

If you are going to do a lot of selections by date, it is faster to set the date column as the index first, so that you take advantage of the Pandas built-in optimization. Then, you can select data by date using df.loc[start_date:end_date].
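An aside before the indexing example, returning to the age computation above: the same result can be spelled out row by row with dateutil's relativedelta (python-dateutil ships as a pandas dependency). This is only a sketch, and it trades the vectorized arithmetic above for a slower Python-level loop:

from dateutil.relativedelta import relativedelta

today = pd.to_datetime('today')
# relativedelta(later, earlier).years is the number of whole years between two dates
df['age'] = df['DoB'].apply(lambda dob: relativedelta(today, dob).years)
df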
Back to indexing: let's take a look at an example dataset, city_sales.csv, which has 1,795,144 rows of data:

df = pd.read_csv('data/city_sales.csv', parse_dates=['date'])
df.info()

RangeIndex: 1795144 entries, 0 to 1795143
Data columns (total 3 columns):
 #   Column  Dtype
---  ------  -----
 0   date    datetime64[ns]
 1   num     int64
 2   city    object
dtypes: datetime64[ns](1), int64(1), object(1)
memory usage: 41.1+ MB

To set the date column as the index:

df = df.set_index(['date'])
df

Let's say we would like to select all data in the year 2018:

df.loc['2018']

And to perform aggregation on the selection, for example:

Get the total num in 2018

df.loc['2018', 'num'].sum()
1231190

Get the total num for each city in 2018

df['2018'].groupby('city').sum()

To select data with a specific month, for example, May 2018:

df.loc['2018-5']

Similarly, to select data with a specific day of the month, for example, 1st May 2018:

df.loc['2018-5-1']

To select data between two dates, you can use df.loc[start_date:end_date]. For example:

Select data between 2016 and 2018

df.loc['2016' : '2018']

Select data between 10 and 11 o'clock on the 2nd May 2018

df.loc['2018-5-2 10' : '2018-5-2 11']

Select data between 10:30 and 10:45 on the 2nd May 2018

df.loc['2018-5-2 10:30' : '2018-5-2 10:45']

And to select data between two times of day, regardless of the date, we should use between_time(), for example, 10:30 and 10:45:

df.between_time('10:30', '10:45')

We often need to compute window statistics such as a rolling mean or a rolling sum.

Let's compute the rolling sum over a 3-period window and then have a look at the top 5 rows:

df['rolling_sum'] = df.rolling(3).sum()
df.head()

We can see that it only starts having valid values when there are 3 periods over which to look back. One solution to handle this is by backfilling the data:

df['rolling_sum_backfilled'] = df['rolling_sum'].fillna(method='backfill')
df.head()

For more details about backfilling, please check out the following article:

towardsdatascience.com

Thanks for reading. Please check out the notebook on my Github for the source code.

Stay tuned if you are interested in the practical aspects of machine learning.

Here are some picked articles for you:

Working with missing values in Pandas
4 tricks to parse date columns with Pandas read_csv()
Pandas read_csv() tricks you should know to speed up your data analysis
6 Pandas tricks you should know to speed up your data analysis
7 setups you should include at the beginning of a data science project.

[1] Pandas to_datetime official document
[ { "code": null, "e": 398, "s": 172, "text": "Datetime is a common data type in data science projects. Often, you’ll work with it and run into problems. I found Pandas is an amazing library that contains extensive capabilities and features for working with date and time." }, { "code": null, "e": 520, "s": 398, "text": "In this article, we will cover the following common datetime problems and should help you get started with data analysis." }, { "code": null, "e": 924, "s": 520, "text": "Convert strings to datetimeAssemble a datetime from multiple columnsGet year, month and dayGet the week of year, the day of week, and leap yearGet the age from the date of birthImprove performance by setting date column as the indexSelect data with a specific year and perform aggregationSelect data with a specific month and a specific day of the monthSelect data between two datesHandle missing values" }, { "code": null, "e": 952, "s": 924, "text": "Convert strings to datetime" }, { "code": null, "e": 994, "s": 952, "text": "Assemble a datetime from multiple columns" }, { "code": null, "e": 1018, "s": 994, "text": "Get year, month and day" }, { "code": null, "e": 1071, "s": 1018, "text": "Get the week of year, the day of week, and leap year" }, { "code": null, "e": 1106, "s": 1071, "text": "Get the age from the date of birth" }, { "code": null, "e": 1162, "s": 1106, "text": "Improve performance by setting date column as the index" }, { "code": null, "e": 1219, "s": 1162, "text": "Select data with a specific year and perform aggregation" }, { "code": null, "e": 1285, "s": 1219, "text": "Select data with a specific month and a specific day of the month" }, { "code": null, "e": 1315, "s": 1285, "text": "Select data between two dates" }, { "code": null, "e": 1337, "s": 1315, "text": "Handle missing values" }, { "code": null, "e": 1390, "s": 1337, "text": "Please check out my Github repo for the source code." }, { "code": null, "e": 1526, "s": 1390, "text": "Pandas has a built-in function called to_datetime() that can be used to convert strings to datetime. Let’s take a look at some examples" }, { "code": null, "e": 1645, "s": 1526, "text": "Pandas to_datetime() is able to parse any valid date string to datetime without any additional arguments. For example:" }, { "code": null, "e": 1793, "s": 1645, "text": "df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'], 'value': [2, 3, 4]})df['date'] = pd.to_datetime(df['date'])df" }, { "code": null, "e": 1950, "s": 1793, "text": "By default, to_datetime() will parse string with month first (MM/DD, MM DD, or MM-DD) format, and this arrangement is relatively unique in the United State." }, { "code": null, "e": 2143, "s": 1950, "text": "In most of the rest of the world, the day is written first (DD/MM, DD MM, or DD-MM). If you would like Pandas to consider day first instead of month, you can set the argument dayfirst to True." }, { "code": null, "e": 2306, "s": 2143, "text": "df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'], 'value': [2, 3, 4]})df['date'] = pd.to_datetime(df['date'], dayfirst=True)df" }, { "code": null, "e": 2371, "s": 2306, "text": "Alternatively, you pass a custom format to the argument format ." }, { "code": null, "e": 2642, "s": 2371, "text": "By default, strings are parsed using the Pandas built-in parser from dateutil.parser.parse. Sometimes, your strings might be in a custom format, for example, YYYY-DD-MM HH:MM:SS. 
Pandas to_datetime() has an argument called format that allows you to pass a custom format:" }, { "code": null, "e": 2897, "s": 2642, "text": "df = pd.DataFrame({'date': ['2016-6-10 20:30:0', '2016-7-1 19:45:30', '2013-10-12 4:5:1'], 'value': [2, 3, 4]})df['date'] = pd.to_datetime(df['date'], format=\"%Y-%d-%m %H:%M:%S\")df" }, { "code": null, "e": 3102, "s": 2897, "text": "Passing infer_datetime_format=True can often speed up a parsing if its not an ISO8601 format exactly but in a regular format. According to [1], in some cases, this can increase the parsing speed by 5–10x." }, { "code": null, "e": 3395, "s": 3102, "text": "# Make up 3000 rowsdf = pd.DataFrame({'date': ['3/11/2000', '3/12/2000', '3/13/2000'] * 1000 })%timeit pd.to_datetime(df['date'], infer_datetime_format=True)100 loops, best of 3: 10.4 ms per loop%timeit pd.to_datetime(df['date'], infer_datetime_format=False)1 loop, best of 3: 471 ms per loop" }, { "code": null, "e": 3483, "s": 3395, "text": "You will end up with a TypeError if the date string does not meet the timestamp format." }, { "code": null, "e": 3629, "s": 3483, "text": "df = pd.DataFrame({'date': ['3/10/2000', 'a/11/2000', '3/12/2000'], 'value': [2, 3, 4]})df['date'] = pd.to_datetime(df['date'])" }, { "code": null, "e": 3743, "s": 3629, "text": "to_datetime() has an argument called errors that allows you to ignore the error or force an invalid value to NaT." }, { "code": null, "e": 3802, "s": 3743, "text": "df['date'] = pd.to_datetime(df['date'], errors='ignore')df" }, { "code": null, "e": 3861, "s": 3802, "text": "df['date'] = pd.to_datetime(df['date'], errors='coerce')df" }, { "code": null, "e": 3988, "s": 3861, "text": "In addition, if you would like to parse date columns when reading data from a CSV file, please check out the following article" }, { "code": null, "e": 4011, "s": 3988, "text": "towardsdatascience.com" }, { "code": null, "e": 4237, "s": 4011, "text": "to_datetime() can be used to assemble a datetime from multiple columns as well. The keys (columns label) can be common abbreviations like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’]) or plurals of the same." }, { "code": null, "e": 4380, "s": 4237, "text": "df = pd.DataFrame({'year': [2015, 2016], 'month': [2, 3], 'day': [4, 5]})df['date'] = pd.to_datetime(df)df" }, { "code": null, "e": 4494, "s": 4380, "text": "dt.year, dt.month and dt.day are the inbuilt attributes to get year, month , and day from Pandas datetime object." }, { "code": null, "e": 4559, "s": 4494, "text": "First, let’s create a dummy DateFrame and parse DoB to datetime." }, { "code": null, "e": 4717, "s": 4559, "text": "df = pd.DataFrame({'name': ['Tom', 'Andy', 'Lucas'], 'DoB': ['08-05-1997', '04-28-1996', '12-16-1995']})df['DoB'] = pd.to_datetime(df['DoB'])" }, { "code": null, "e": 4749, "s": 4717, "text": "And to get year, month, and day" }, { "code": null, "e": 4839, "s": 4749, "text": "df['year']= df['DoB'].dt.yeardf['month']= df['DoB'].dt.monthdf['day']= df['DoB'].dt.daydf" }, { "code": null, "e": 4977, "s": 4839, "text": "Similarly, dt.week, dt.dayofweek, and dt.is_leap_year are the inbuilt attributes to get the week of year, the day of week, and leap year." 
}, { "code": null, "e": 5106, "s": 4977, "text": "df['week_of_year'] = df['DoB'].dt.weekdf['day_of_week'] = df['DoB'].dt.dayofweekdf['is_leap_year'] = df['DoB'].dt.is_leap_yeardf" }, { "code": null, "e": 5368, "s": 5106, "text": "Note that Pandas dt.dayofweek attribute returns the day of the week and it is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. To replace the number with full name, we can create a mapping and pass it to map() :" }, { "code": null, "e": 5568, "s": 5368, "text": "dw_mapping={ 0: 'Monday', 1: 'Tuesday', 2: 'Wednesday', 3: 'Thursday', 4: 'Friday', 5: 'Saturday', 6: 'Sunday'} df['day_of_week_name']=df['DoB'].dt.weekday.map(dw_mapping)df" }, { "code": null, "e": 5625, "s": 5568, "text": "The simplest solution to get age is by subtracting year:" }, { "code": null, "e": 5701, "s": 5625, "text": "today = pd.to_datetime('today')df['age'] = today.year - df['DoB'].dt.yeardf" }, { "code": null, "e": 5844, "s": 5701, "text": "However, this is not accurate as people might haven't had their birthday this year. A more accurate solution would be to consider the birthday" }, { "code": null, "e": 6082, "s": 5844, "text": "# Year differencetoday = pd.to_datetime('today')diff_y = today.year - df['DoB'].dt.year# Haven't had birthdayb_md = df['DoB'].apply(lambda x: (x.month,x.day) )no_birthday = b_md > (today.month,today.day)df['age'] = diff_y - no_birthdaydf" }, { "code": null, "e": 6160, "s": 6082, "text": "A common solution to select data by date is using a boolean maks. For example" }, { "code": null, "e": 6242, "s": 6160, "text": "condition = (df['date'] > start_date) & (df['date'] <= end_date)df.loc[condition]" }, { "code": null, "e": 6436, "s": 6242, "text": "This solution normally requires start_date, end_date and date column to be datetime format. And in fact, this solution is slow when you are doing a lot of selections by date in a large dataset." }, { "code": null, "e": 6758, "s": 6436, "text": "If you are going to do a lot of selections by date, it would be faster to set date column as the index first so you take advantage of the Pandas built-in optimization. Then, you can select data by date using df.loc[start_date:end_date] . 
Let take a look at an example dataset city_sales.csv, which has 1,795,144 rows data" }, { "code": null, "e": 7103, "s": 6758, "text": "df = pd.read_csv('data/city_sales.csv',parse_dates=['date'])df.info()RangeIndex: 1795144 entries, 0 to 1795143Data columns (total 3 columns): # Column Dtype --- ------ ----- 0 date datetime64[ns] 1 num int64 2 city object dtypes: datetime64[ns](1), int64(1), object(1)memory usage: 41.1+ MB" }, { "code": null, "e": 7139, "s": 7103, "text": "To set the date column as the index" }, { "code": null, "e": 7169, "s": 7139, "text": "df = df.set_index(['date'])df" }, { "code": null, "e": 7229, "s": 7169, "text": "Let’s say we would like to select all data in the year 2018" }, { "code": null, "e": 7244, "s": 7229, "text": "df.loc['2018']" }, { "code": null, "e": 7301, "s": 7244, "text": "And to perform aggregation on the selection for example:" }, { "code": null, "e": 7327, "s": 7301, "text": "Get the total num in 2018" }, { "code": null, "e": 7361, "s": 7327, "text": "df.loc['2018','num'].sum()1231190" }, { "code": null, "e": 7401, "s": 7361, "text": "Get the total num for each city in 2018" }, { "code": null, "e": 7434, "s": 7401, "text": "df['2018'].groupby('city').sum()" }, { "code": null, "e": 7494, "s": 7434, "text": "To select data with a specific month, for example, May 2018" }, { "code": null, "e": 7511, "s": 7494, "text": "df.loc['2018-5']" }, { "code": null, "e": 7597, "s": 7511, "text": "Similarly, to select data with a specific day of the month, for example, 1st May 2018" }, { "code": null, "e": 7616, "s": 7597, "text": "df.loc['2018-5-1']" }, { "code": null, "e": 7702, "s": 7616, "text": "To select data between two dates, you can usedf.loc[start_date:end_date] For example:" }, { "code": null, "e": 7736, "s": 7702, "text": "Select data between 2016 and 2018" }, { "code": null, "e": 7760, "s": 7736, "text": "df.loc['2016' : '2018']" }, { "code": null, "e": 7818, "s": 7760, "text": "Select data between 10 and 11 o'clock on the 2nd May 2018" }, { "code": null, "e": 7857, "s": 7818, "text": "df.loc['2018-5-2 10' : '2018-5-2 11' ]" }, { "code": null, "e": 7913, "s": 7857, "text": "Select data between 10:30 and 10:45 on the 2nd May 2018" }, { "code": null, "e": 7958, "s": 7913, "text": "df.loc['2018-5-2 10:30' : '2018-5-2 10:45' ]" }, { "code": null, "e": 8050, "s": 7958, "text": "And to select data between time, we should use between_time(), for example, 10:30 and 10:45" }, { "code": null, "e": 8083, "s": 8050, "text": "df.between_time('10:30','10:45')" }, { "code": null, "e": 8167, "s": 8083, "text": "We often need to compute window statistics such as a rolling mean or a rolling sum." }, { "code": null, "e": 8260, "s": 8167, "text": "Let’s compute the rolling sum over a 3 window period and then have a look at the top 5 rows." }, { "code": null, "e": 8309, "s": 8260, "text": "df['rolling_sum'] = df.rolling(3).sum()df.head()" }, { "code": null, "e": 8465, "s": 8309, "text": "We can see that it only starts having valid values when there are 3 periods over which to look back. One solution to handle this is by backfilling of data." }, { "code": null, "e": 8549, "s": 8465, "text": "df['rolling_sum_backfilled'] = df['rolling_sum'].fillna(method='backfill')df.head()" }, { "code": null, "e": 8624, "s": 8549, "text": "For more details about backfilling, please check out the following article" }, { "code": null, "e": 8647, "s": 8624, "text": "towardsdatascience.com" }, { "code": null, "e": 8667, "s": 8647, "text": "Thanks for reading." 
}, { "code": null, "e": 8730, "s": 8667, "text": "Please checkout the notebook on my Github for the source code." }, { "code": null, "e": 8808, "s": 8730, "text": "Stay tuned if you are interested in the practical aspect of machine learning." }, { "code": null, "e": 8847, "s": 8808, "text": "Here are some picked articles for you:" }, { "code": null, "e": 8885, "s": 8847, "text": "Working with missing values in Pandas" }, { "code": null, "e": 8939, "s": 8885, "text": "4 tricks to parse date columns with Pandas read_csv()" }, { "code": null, "e": 9011, "s": 8939, "text": "Pandas read_csv() tricks you should know to speed up your data analysis" }, { "code": null, "e": 9074, "s": 9011, "text": "6 Pandas tricks you should know to speed up your data analysis" }, { "code": null, "e": 9146, "s": 9074, "text": "7 setups you should include at the beginning of a data science project." } ]
Create new linked list from two given linked list with greater element at each node - GeeksforGeeks
28 Mar, 2022

Given two linked lists of the same size, the task is to create a new linked list using those linked lists. The condition is that the greater node among both linked lists will be added to the new linked list. If it is not possible, print -1.

Examples:

Input: list1 = 5->2->3->8
       list2 = 1->7->4->5
Output: New list = 5->7->4->8

Input: list1 = 2->8->9->3
       list2 = 5->3->6->4
Output: New list = 5->8->9->4

Approach: We traverse both linked lists at the same time and compare the nodes of both lists. The node which is greater among them is added to the new linked list. We do this for each pair of nodes.

Below is the implementation of the above approach:

C++

// C++ program to create a new linked list
// from two given linked lists of the same size
// with the greater element among the two at each node

#include <iostream>
using namespace std;

// Representation of node
struct Node {
    int data;
    Node* next;
};

// Function to insert node in a linked list
void insert(Node** root, int item)
{
    Node *ptr, *temp;
    temp = new Node;
    temp->data = item;
    temp->next = NULL;

    if (*root == NULL)
        *root = temp;
    else {
        ptr = *root;
        while (ptr->next != NULL)
            ptr = ptr->next;
        ptr->next = temp;
    }
}

// Function which returns new linked list
Node* newList(Node* root1, Node* root2)
{
    Node *ptr1 = root1, *ptr2 = root2, *ptr;
    Node *root = NULL, *temp;

    while (ptr1 != NULL) {
        temp = new Node;
        temp->next = NULL;

        // Compare for greater node
        if (ptr1->data < ptr2->data)
            temp->data = ptr2->data;
        else
            temp->data = ptr1->data;

        if (root == NULL)
            root = temp;
        else {
            ptr = root;
            while (ptr->next != NULL)
                ptr = ptr->next;
            ptr->next = temp;
        }
        ptr1 = ptr1->next;
        ptr2 = ptr2->next;
    }
    return root;
}

void display(Node* root)
{
    while (root != NULL) {
        cout << root->data << "->";
        root = root->next;
    }
    cout << endl;
}

// Driver code
int main()
{
    Node *root1 = NULL, *root2 = NULL, *root = NULL;

    // First linked list
    insert(&root1, 5);
    insert(&root1, 2);
    insert(&root1, 3);
    insert(&root1, 8);
    cout << "First List: ";
    display(root1);

    // Second linked list
    insert(&root2, 1);
    insert(&root2, 7);
    insert(&root2, 4);
    insert(&root2, 5);
    cout << "Second List: ";
    display(root2);

    root = newList(root1, root2);
    cout << "New List: ";
    display(root);

    return 0;
}

Java

// Java program to create a new linked list
// from two given linked lists of the same size
// with the greater element among the two at each node
import java.util.*;

class GFG {

    // Representation of node
    static class Node {
        int data;
        Node next;
    };

    // Function to insert node in a linked list
    static Node insert(Node root, int item)
    {
        Node ptr, temp;
        temp = new Node();
        temp.data = item;
        temp.next = null;

        if (root == null)
            root = temp;
        else {
            ptr = root;
            while (ptr.next != null)
                ptr = ptr.next;
            ptr.next = temp;
        }
        return root;
    }

    // Function which returns new linked list
    static Node newList(Node root1, Node root2)
    {
        Node ptr1 = root1, ptr2 = root2, ptr;
        Node root = null, temp;

        while (ptr1 != null) {
            temp = new Node();
            temp.next = null;

            // Compare for greater node
            if (ptr1.data < ptr2.data)
                temp.data = ptr2.data;
            else
                temp.data = ptr1.data;

            if (root == null)
                root = temp;
            else {
                ptr = root;
                while (ptr.next != null)
                    ptr = ptr.next;
                ptr.next = temp;
            }
            ptr1 = ptr1.next;
            ptr2 = ptr2.next;
        }
        return root;
    }

    static void display(Node root)
    {
        while (root != null) {
            System.out.print(root.data + "->");
            root = root.next;
        }
        System.out.println();
    }

    // Driver code
    public static void main(String args[])
    {
        Node root1 = null, root2 = null, root = null;

        // First linked list
        root1 = insert(root1, 5);
        root1 = insert(root1, 2);
        root1 = insert(root1, 3);
        root1 = insert(root1, 8);
        System.out.print("First List: ");
        display(root1);

        // Second linked list
        root2 = insert(root2, 1);
        root2 = insert(root2, 7);
        root2 = insert(root2, 4);
        root2 = insert(root2, 5);
        System.out.print("Second List: ");
        display(root2);

        root = newList(root1, root2);
        System.out.print("New List: ");
        display(root);
    }
}

// This code is contributed by Arnab Kundu

Python3

# Python3 program to create a new linked list
# from two given linked lists of the same size
# with the greater element among the two at each node

# Node class
class Node:

    # Function to initialise the node object
    def __init__(self, data):
        self.data = data
        self.next = None

# Function to insert node in a linked list
def insert(root, item):
    temp = Node(item)

    if root is None:
        root = temp
    else:
        ptr = root
        while ptr.next is not None:
            ptr = ptr.next
        ptr.next = temp
    return root

# Function which returns new linked list
def newList(root1, root2):
    ptr1 = root1
    ptr2 = root2
    root = None

    while ptr1 is not None:
        temp = Node(0)
        temp.next = None

        # Compare for greater node
        if ptr1.data < ptr2.data:
            temp.data = ptr2.data
        else:
            temp.data = ptr1.data

        if root is None:
            root = temp
        else:
            ptr = root
            while ptr.next is not None:
                ptr = ptr.next
            ptr.next = temp

        ptr1 = ptr1.next
        ptr2 = ptr2.next
    return root

def display(root):
    while root is not None:
        print(root.data, "->", end=" ")
        root = root.next
    print()

# Driver Code
if __name__ == '__main__':

    root1 = None
    root2 = None

    # First linked list
    root1 = insert(root1, 5)
    root1 = insert(root1, 2)
    root1 = insert(root1, 3)
    root1 = insert(root1, 8)
    print("First List: ", end=" ")
    display(root1)

    # Second linked list
    root2 = insert(root2, 1)
    root2 = insert(root2, 7)
    root2 = insert(root2, 4)
    root2 = insert(root2, 5)
    print("Second List: ", end=" ")
    display(root2)

    root = newList(root1, root2)
    print("New List: ", end=" ")
    display(root)

# This code is contributed by Arnab Kundu

C#

// C# program to create a new linked list
// from two given linked lists of the same size
// with the greater element among the two at each node
using System;

class GFG {

    // Representation of node
    public class Node {
        public int data;
        public Node next;
    };

    // Function to insert node in a linked list
    static Node insert(Node root, int item)
    {
        Node ptr, temp;
        temp = new Node();
        temp.data = item;
        temp.next = null;

        if (root == null)
            root = temp;
        else {
            ptr = root;
            while (ptr.next != null)
                ptr = ptr.next;
            ptr.next = temp;
        }
        return root;
    }

    // Function which returns new linked list
    static Node newList(Node root1, Node root2)
    {
        Node ptr1 = root1, ptr2 = root2, ptr;
        Node root = null, temp;

        while (ptr1 != null) {
            temp = new Node();
            temp.next = null;

            // Compare for greater node
            if (ptr1.data < ptr2.data)
                temp.data = ptr2.data;
            else
                temp.data = ptr1.data;

            if (root == null)
                root = temp;
            else {
                ptr = root;
                while (ptr.next != null)
                    ptr = ptr.next;
                ptr.next = temp;
            }
            ptr1 = ptr1.next;
            ptr2 = ptr2.next;
        }
        return root;
    }

    static void display(Node root)
    {
        while (root != null) {
            Console.Write(root.data + "->");
            root = root.next;
        }
        Console.WriteLine();
    }

    // Driver code
    public static void Main(String[] args)
    {
        Node root1 = null, root2 = null, root = null;

        // First linked list
        root1 = insert(root1, 5);
        root1 = insert(root1, 2);
        root1 = insert(root1, 3);
        root1 = insert(root1, 8);
        Console.Write("First List: ");
        display(root1);

        // Second linked list
        root2 = insert(root2, 1);
        root2 = insert(root2, 7);
        root2 = insert(root2, 4);
        root2 = insert(root2, 5);
        Console.Write("Second List: ");
        display(root2);

        root = newList(root1, root2);
        Console.Write("New List: ");
        display(root);
    }
}

// This code has been contributed by 29AjayKumar

Javascript

<script>
// JavaScript program to create a new linked list
// from two given linked lists of the same size
// with the greater element among the two at each node

// Representation of node
class Node {
    constructor(val) {
        this.data = val;
        this.next = null;
    }
}

// Function to insert node in a linked list
function insert(root, item)
{
    var ptr, temp;
    temp = new Node();
    temp.data = item;
    temp.next = null;

    if (root == null)
        root = temp;
    else {
        ptr = root;
        while (ptr.next != null)
            ptr = ptr.next;
        ptr.next = temp;
    }
    return root;
}

// Function which returns new linked list
function newList(root1, root2)
{
    var ptr1 = root1, ptr2 = root2, ptr;
    var root = null, temp;

    while (ptr1 != null) {
        temp = new Node();
        temp.next = null;

        // Compare for greater node
        if (ptr1.data < ptr2.data)
            temp.data = ptr2.data;
        else
            temp.data = ptr1.data;

        if (root == null)
            root = temp;
        else {
            ptr = root;
            while (ptr.next != null)
                ptr = ptr.next;
            ptr.next = temp;
        }
        ptr1 = ptr1.next;
        ptr2 = ptr2.next;
    }
    return root;
}

function display(root)
{
    while (root != null) {
        document.write(root.data + "->");
        root = root.next;
    }
    document.write("<br/>");
}

// Driver code
var root1 = null, root2 = null, root = null;

// First linked list
root1 = insert(root1, 5);
root1 = insert(root1, 2);
root1 = insert(root1, 3);
root1 = insert(root1, 8);
document.write("First List: ");
display(root1);

// Second linked list
root2 = insert(root2, 1);
root2 = insert(root2, 7);
root2 = insert(root2, 4);
root2 = insert(root2, 5);
document.write("Second List: ");
display(root2);

root = newList(root1, root2);
document.write("New List: ");
display(root);

// This code is contributed by gauravrajput1
</script>

Output:

First List: 5->2->3->8->
Second List: 1->7->4->5->
New List: 5->7->4->8->
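One note on efficiency: every implementation above re-walks the result list to find its tail before each append, which makes construction O(n^2) overall. A minimal sketch of an O(n) variant, written here in Python and assuming the Node class from the Python version above (the function name is my own), simply keeps an explicit tail pointer:

# Build the result in O(n) by remembering the tail of the new list
def newListLinear(root1, root2):
    head = tail = None
    ptr1, ptr2 = root1, root2
    while ptr1 is not None and ptr2 is not None:
        # keep the greater of the two current nodes
        temp = Node(max(ptr1.data, ptr2.data))
        if head is None:
            head = tail = temp
        else:
            tail.next = temp  # append in O(1), no re-traversal
            tail = temp
        ptr1, ptr2 = ptr1.next, ptr2.next
    return head

For the same driver data, the output is identical to the versions above.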
[ { "code": null, "e": 24656, "s": 24628, "text": "\n28 Mar, 2022" }, { "code": null, "e": 24873, "s": 24656, "text": "Given two linked list of the same size, the task is to create a new linked list using those linked lists. The condition is that the greater node among both linked list will be added to the new linked list.Examples: " }, { "code": null, "e": 25027, "s": 24873, "text": "Input: list1 = 5->2->3->8\nlist2 = 1->7->4->5\nOutput: New list = 5->7->4->8\n\nInput: list1 = 2->8->9->3\nlist2 = 5->3->6->4\nOutput: New list = 5->8->9->4" }, { "code": null, "e": 25222, "s": 25027, "text": "Approach: We traverse both the linked list at the same time and compare node of both lists. The node which is greater among them will be added to the new linked list. We do this for each node. " }, { "code": null, "e": 25226, "s": 25222, "text": "C++" }, { "code": null, "e": 25231, "s": 25226, "text": "Java" }, { "code": null, "e": 25239, "s": 25231, "text": "Python3" }, { "code": null, "e": 25242, "s": 25239, "text": "C#" }, { "code": null, "e": 25253, "s": 25242, "text": "Javascript" }, { "code": "// C++ program to create a new linked list// from two given linked list// of the same size with// the greater element among the two at each node #include <iostream>using namespace std; // Representation of nodestruct Node { int data; Node* next;}; // Function to insert node in a linked listvoid insert(Node** root, int item){ Node *ptr, *temp; temp = new Node; temp->data = item; temp->next = NULL; if (*root == NULL) *root = temp; else { ptr = *root; while (ptr->next != NULL) ptr = ptr->next; ptr->next = temp; }} // Function which returns new linked listNode* newList(Node* root1, Node* root2){ Node *ptr1 = root1, *ptr2 = root2, *ptr; Node *root = NULL, *temp; while (ptr1 != NULL) { temp = new Node; temp->next = NULL; // Compare for greater node if (ptr1->data < ptr2->data) temp->data = ptr2->data; else temp->data = ptr1->data; if (root == NULL) root = temp; else { ptr = root; while (ptr->next != NULL) ptr = ptr->next; ptr->next = temp; } ptr1 = ptr1->next; ptr2 = ptr2->next; } return root;} void display(Node* root){ while (root != NULL) { cout << root->data << \"->\"; root = root->next; } cout << endl;} // Driver codeint main(){ Node *root1 = NULL, *root2 = NULL, *root = NULL; // First linked list insert(&root1, 5); insert(&root1, 2); insert(&root1, 3); insert(&root1, 8); cout << \"First List: \"; display(root1); // Second linked list insert(&root2, 1); insert(&root2, 7); insert(&root2, 4); insert(&root2, 5); cout << \"Second List: \"; display(root2); root = newList(root1, root2); cout << \"New List: \"; display(root); return 0;}", "e": 27127, "s": 25253, "text": null }, { "code": "// Java program to create a new linked list// from two given linked list// of the same size with// the greater element among the two at each nodeimport java.util.*; class GFG{ // Representation of nodestatic class Node{ int data; Node next;}; // Function to insert node in a linked liststatic Node insert(Node root, int item){ Node ptr, temp; temp = new Node(); temp.data = item; temp.next = null; if (root == null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } return root;} // Function which returns new linked liststatic Node newList(Node root1, Node root2){ Node ptr1 = root1, ptr2 = root2, ptr; Node root = null, temp; while (ptr1 != null) { temp = new Node(); temp.next = null; // Compare for greater node if (ptr1.data < ptr2.data) temp.data = ptr2.data; else temp.data = ptr1.data; if (root 
== null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } ptr1 = ptr1.next; ptr2 = ptr2.next; } return root;} static void display(Node root){ while (root != null) { System.out.print( root.data + \"->\"); root = root.next; } System.out.println();} // Driver codepublic static void main(String args[]){ Node root1 = null, root2 = null, root = null; // First linked list root1=insert(root1, 5); root1=insert(root1, 2); root1=insert(root1, 3); root1=insert(root1, 8); System.out.print(\"First List: \"); display(root1); // Second linked list root2=insert(root2, 1); root2=insert(root2, 7); root2=insert(root2, 4); root2=insert(root2, 5); System.out.print( \"Second List: \"); display(root2); root = newList(root1, root2); System.out.print(\"New List: \"); display(root);}} // This code is contributed by Arnab Kundu", "e": 29145, "s": 27127, "text": null }, { "code": "# Python3 program to create a# new linked list from two given# linked list of the same size with# the greater element among the two# at each node # Node classclass Node: # Function to initialise the node object def __init__(self, data): self.data = data self.next = None # Function to insert node in a linked listdef insert(root, item): temp = Node(0) temp.data = item temp.next = None if (root == None): root = temp else : ptr = root while (ptr.next != None): ptr = ptr.next ptr.next = temp return root # Function which returns new linked listdef newList(root1, root2): ptr1 = root1 ptr2 = root2 root = None while (ptr1 != None) : temp = Node(0) temp.next = None # Compare for greater node if (ptr1.data < ptr2.data): temp.data = ptr2.data else: temp.data = ptr1.data if (root == None): root = temp else : ptr = root while (ptr.next != None): ptr = ptr.next ptr.next = temp ptr1 = ptr1.next ptr2 = ptr2.next return root def display(root): while (root != None) : print(root.data, \"->\", end = \" \") root = root.next print(\" \"); # Driver Codeif __name__=='__main__': root1 = None root2 = None root = None # First linked list root1 = insert(root1, 5) root1 = insert(root1, 2) root1 = insert(root1, 3) root1 = insert(root1, 8) print(\"First List: \", end = \" \") display(root1) # Second linked list root2 = insert(root2, 1) root2 = insert(root2, 7) root2 = insert(root2, 4) root2 = insert(root2, 5) print(\"Second List: \", end = \" \") display(root2) root = newList(root1, root2) print(\"New List: \", end = \" \") display(root) # This code is contributed by Arnab Kundu", "e": 31065, "s": 29145, "text": null }, { "code": "// C# program to create a new linked list// from two given linked list// of the same size with// the greater element among the two at each nodeusing System; class GFG{ // Representation of nodepublic class Node{ public int data; public Node next;}; // Function to insert node in a linked liststatic Node insert(Node root, int item){ Node ptr, temp; temp = new Node(); temp.data = item; temp.next = null; if (root == null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } return root;} // Function which returns new linked liststatic Node newList(Node root1, Node root2){ Node ptr1 = root1, ptr2 = root2, ptr; Node root = null, temp; while (ptr1 != null) { temp = new Node(); temp.next = null; // Compare for greater node if (ptr1.data < ptr2.data) temp.data = ptr2.data; else temp.data = ptr1.data; if (root == null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } ptr1 = ptr1.next; ptr2 = ptr2.next; } return root;} static void 
display(Node root){ while (root != null) { Console.Write( root.data + \"->\"); root = root.next; } Console.WriteLine();} // Driver codepublic static void Main(String []args){ Node root1 = null, root2 = null, root = null; // First linked list root1 = insert(root1, 5); root1 = insert(root1, 2); root1 = insert(root1, 3); root1 = insert(root1, 8); Console.Write(\"First List: \"); display(root1); // Second linked list root2 = insert(root2, 1); root2 = insert(root2, 7); root2 = insert(root2, 4); root2 = insert(root2, 5); Console.Write( \"Second List: \"); display(root2); root = newList(root1, root2); Console.Write(\"New List: \"); display(root);}} // This code has been contributed by 29AjayKumar", "e": 33115, "s": 31065, "text": null }, { "code": "<script>// javascript program to create a new linked list// from two given linked list// of the same size with// the greater element among the two at each node // Representation of nodeclass Node { constructor(val) { this.data = val; this.next = null; }} // Function to insert node in a linked list function insert( root , item) { var ptr, temp; temp = new Node(); temp.data = item; temp.next = null; if (root == null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } return root; } // Function which returns new linked list function newList( root1, root2) { var ptr1 = root1, ptr2 = root2, ptr; var root = null, temp; while (ptr1 != null) { temp = new Node(); temp.next = null; // Compare for greater node if (ptr1.data < ptr2.data) temp.data = ptr2.data; else temp.data = ptr1.data; if (root == null) root = temp; else { ptr = root; while (ptr.next != null) ptr = ptr.next; ptr.next = temp; } ptr1 = ptr1.next; ptr2 = ptr2.next; } return root; } function display( root) { while (root != null) { document.write(root.data + \"->\"); root = root.next; } document.write(\"<br/>\"); } // Driver code root1 = null, root2 = null, root = null; // First linked list root1 = insert(root1, 5); root1 = insert(root1, 2); root1 = insert(root1, 3); root1 = insert(root1, 8); document.write(\"First List: \"); display(root1); // Second linked list root2 = insert(root2, 1); root2 = insert(root2, 7); root2 = insert(root2, 4); root2 = insert(root2, 5); document.write(\"Second List: \"); display(root2); root = newList(root1, root2); document.write(\"New List: \"); display(root); // This code is contributed by gauravrajput1</script>", "e": 35379, "s": 33115, "text": null }, { "code": null, "e": 35457, "s": 35379, "text": "First List: 5->2->3->8->\nSecond List: 1->7->4->5->\nNew List: 5->7->4->8->" }, { "code": null, "e": 35470, "s": 35459, "text": "andrew1234" }, { "code": null, "e": 35482, "s": 35470, "text": "29AjayKumar" }, { "code": null, "e": 35496, "s": 35482, "text": "GauravRajput1" }, { "code": null, "e": 35511, "s": 35496, "text": "varshagumber28" }, { "code": null, "e": 35521, "s": 35511, "text": "Traversal" }, { "code": null, "e": 35533, "s": 35521, "text": "Linked List" }, { "code": null, "e": 35546, "s": 35533, "text": "Mathematical" }, { "code": null, "e": 35558, "s": 35546, "text": "Linked List" }, { "code": null, "e": 35571, "s": 35558, "text": "Mathematical" }, { "code": null, "e": 35581, "s": 35571, "text": "Traversal" }, { "code": null, "e": 35679, "s": 35581, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 35698, "s": 35679, "text": "LinkedList in Java" }, { "code": null, "e": 35754, "s": 35698, "text": "Doubly Linked List | Set 1 (Introduction and Insertion)" }, { "code": null, "e": 35775, "s": 35754, "text": "Linked List vs Array" }, { "code": null, "e": 35805, "s": 35775, "text": "Merge two sorted linked lists" }, { "code": null, "e": 35852, "s": 35805, "text": "Implementing a Linked List in Java using Class" }, { "code": null, "e": 35882, "s": 35852, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 35942, "s": 35882, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 35957, "s": 35942, "text": "C++ Data Types" }, { "code": null, "e": 36000, "s": 35957, "text": "Set in C++ Standard Template Library (STL)" } ]
Longest Arithmetic Progression | Practice | GeeksforGeeks
Given an array called A[] of sorted integers having no duplicates, find the length of the Longest Arithmetic Progression (LLAP) in it.

Example 1:

Input:
N = 6
set[] = {1, 7, 10, 13, 14, 19}
Output: 4
Explanation: The longest arithmetic
progression is {1, 7, 13, 19}.

Example 2:

Input:
N = 5
A[] = {2, 4, 6, 8, 10}
Output: 5
Explanation: The whole set is in AP.

Your Task:
You don't need to read input or print anything. Your task is to complete the function lengthOfLongestAP() which takes the array of integers called set[] and n as input parameters and returns the length of LLAP.

Expected Time Complexity: O(N^2)
Expected Auxiliary Space: O(N^2)

Constraints:
1 <= N <= 1000
1 <= set[i] <= 10^4

paawansingal · 3 days ago

int lengthOfLongestAP(int a[], int n)
{
    // code here
    if (n == 1)
        return 1;
    vector<vector<int>> dp(n + 1, vector<int>(1e4 + 3, 1));
    int ans = 0;
    for (int i = 1; i <= n; ++i) {
        for (int j = i + 1; j <= n; ++j) {
            int df = a[j - 1] - a[i - 1];
            dp[j][df] = 1 + dp[i][df];
            ans = max(ans, dp[j][df]);
        }
    }
    return ans;
}

ksridharan829 · 5 days ago

class Solution {
public:
    int lengthOfLongestAP(int A[], int n)
    {
        vector<vector<int>> dp(1001, vector<int>(10005, 0));
        int mx = 0;
        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                dp[i][A[i] - A[j]] = max(dp[i][A[i] - A[j]],
                                         1 + dp[j][A[i] - A[j]]);
                mx = max(dp[i][A[i] - A[j]], mx);
            }
        }
        return mx + 1;
    }
};

imranwahid · 2 weeks ago

Easy C++ solution

geminicode · 1 month ago

why are we putting set[i][j] = set[j][k] + 1??? can anyone explain plz..

mocambo · 1 month ago

simplest one

int lengthOfLongestAP(int A[], int n)
{
    // using a map to search for elements
    unordered_map<int, int> mep;
    for (int i = 0; i < n; i++) {
        mep[A[i]]++;
    }

    int ans = 1; // for the corner case when n = 1

    for (int i = 0; i < n - 1; i++) {
        int count;
        for (int j = i + 1; j < n; j++) {
            int cd = A[j] - A[i]; // common difference
            count = 2;
            int curr = A[j];
            /* searching for successive elements of the
               progression in the map with the current cd */
            while (mep[cd + curr] != 0) {
                count++;
                curr += cd;
            }
            ans = max(ans, count);
        }
    }
    return ans;
}

ruchitchudasama123 · 2 months ago

public:
    int lengthOfLongestAP(int A[], int n)
    {
        if (n <= 2)
            return n;
        int maxLen = 2;
        vector<unordered_map<int, int>> dp(n);

        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                int diff = A[j] - A[i];
                if (dp[i].find(diff) != dp[i].end()) {
                    dp[j][diff] = dp[i][diff] + 1;
                }
                else {
                    dp[j][diff] = 2;
                }
                maxLen = max(maxLen, dp[j][diff]);
            }
        }
        return maxLen;
    }

ezio · 3 months ago

class Solution {
    int lengthOfLongestAP(int[] nums, int n)
    {
        Map<Integer, Integer>[] dp = new HashMap[n];
        for (int i = 0; i < n; i++)
            dp[i] = new HashMap<>();

        int maxLen = 1;
        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                int cd = nums[i] - nums[j];
                if (dp[j].containsKey(cd)) {
                    dp[i].put(cd, dp[j].get(cd) + 1);
                } else {
                    dp[i].put(cd, 2);
                }
                maxLen = Math.max(maxLen, dp[i].get(cd));
            }
        }
        return maxLen;
    }
}

harshitrepa · 5 months ago

if (n == 1)
    return 1;
if (n == 2)
    return 2;
vector<unordered_map<int, int>> dp(n);
int ans = 0;
for (int i = 1; i < n; i++) {
    for (int j = 0; j < i; j++) {
        int diff = arr[i] - arr[j];
        if (dp[j].find(diff) == dp[j].end()) {
            dp[i][diff] = 2;
        } else {
            dp[i][diff] = dp[j][diff] + 1;
        }
        ans = max(ans, dp[i][diff]);
    }
}
return ans;
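To the question above about set[i][j] = set[j][k] + 1: with the array sorted, fix a[j] as the middle element and search for i < j < k with a[i] + a[k] = 2 * a[j]. Any AP whose first two elements are a[i] and a[j] continues exactly like the AP whose first two elements are a[j] and a[k], hence the +1. A minimal Python sketch of that classic O(N^2)-time, O(N^2)-space table approach (the function and variable names here are my own, not the judge's signature):

def length_of_longest_ap(a):
    n = len(a)
    if n <= 2:
        return n
    # dp[i][j] = length of the longest AP whose first two elements are a[i], a[j]
    dp = [[2] * n for _ in range(n)]
    best = 2
    # fix the middle element a[j], scanning middles right to left
    for j in range(n - 2, 0, -1):
        i, k = j - 1, j + 1
        while i >= 0 and k < n:
            if a[i] + a[k] == 2 * a[j]:
                # (a[i], a[j]) extends the AP that starts with (a[j], a[k])
                dp[i][j] = dp[j][k] + 1
                best = max(best, dp[i][j])
                i -= 1
                k += 1
            elif a[i] + a[k] < 2 * a[j]:
                k += 1
            else:
                i -= 1
    return best

print(length_of_longest_ap([1, 7, 10, 13, 14, 19]))  # 4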
[ { "code": null, "e": 373, "s": 238, "text": "Given an array called A[] of sorted integers having no duplicates, find the length of the Longest Arithmetic Progression (LLAP) in it." }, { "code": null, "e": 385, "s": 373, "text": "\nExample 1:" }, { "code": null, "e": 508, "s": 385, "text": "Input:\nN = 6\nset[] = {1, 7, 10, 13, 14, 19}\nOutput: 4\nExplanation: The longest arithmetic \nprogression is {1, 7, 13, 19}.\n" }, { "code": null, "e": 519, "s": 508, "text": "Example 2:" }, { "code": null, "e": 602, "s": 519, "text": "Input:\nN = 5\nA[] = {2, 4, 6, 8, 10}\nOutput: 5\nExplanation: The whole set is in AP." }, { "code": null, "e": 934, "s": 602, "text": "\nYour Task:\nYou don't need to read input or print anything. Your task is to complete the function lenghtOfLongestAP() which takes the array of integers called set[] and n as input parameters and returns the length of LLAP.\n\nExpected Time Complexity: O(N2)\nExpected Auxiliary Space: O(N2)\n\nConstraints:\n1 ≤ N ≤ 1000\n1 ≤ set[i] ≤ 104" }, { "code": null, "e": 936, "s": 934, "text": "0" }, { "code": null, "e": 959, "s": 936, "text": "paawansingal3 days ago" }, { "code": null, "e": 1426, "s": 959, "text": "int lengthOfLongestAP(int a[], int n) { // code here if(n == 1) return 1; vector<vector<int >> dp(n + 1, vector<int>(1e4 + 3, 1)); int ans = 0; for(int i = 1; i <= n; ++i){ for(int j = i + 1; j <= n; ++j){ int df = a[j - 1] - a[i - 1]; dp[j][df] = 1 + dp[i][df]; ans = max(ans, dp[j][df]); } } return ans; }" }, { "code": null, "e": 1428, "s": 1426, "text": "0" }, { "code": null, "e": 1452, "s": 1428, "text": "ksridharan8295 days ago" }, { "code": null, "e": 1835, "s": 1452, "text": "class Solution{ \npublic:\n int lengthOfLongestAP(int A[], int n) {\n \n vector<vector<int>> dp(1001 , vector<int>(10005 , 0));\n int mx = 0;\n for (int i = 1 ; i < n ; i++){\n for (int j = 0 ; j<i ; j++){\n dp[i][A[i]-A[j]] = max(dp[i][A[i]-A[j]] , 1 + dp[j][A[i]-A[j]]);\n mx = max(dp[i][A[i]-A[j]] , mx);\n }\n }\n return mx+1;\n }\n};" }, { "code": null, "e": 1837, "s": 1835, "text": "0" }, { "code": null, "e": 1859, "s": 1837, "text": "imranwahid2 weeks ago" }, { "code": null, "e": 1877, "s": 1859, "text": "Easy C++ solution" }, { "code": null, "e": 1879, "s": 1877, "text": "0" }, { "code": null, "e": 1901, "s": 1879, "text": "geminicode1 month ago" }, { "code": null, "e": 1974, "s": 1901, "text": "why are we putting set[i][j] = set[j][k] + 1??? can anyone explain plz.." 
}, { "code": null, "e": 1976, "s": 1974, "text": "0" }, { "code": null, "e": 1995, "s": 1976, "text": "mocambo1 month ago" }, { "code": null, "e": 2008, "s": 1995, "text": "simplest one" }, { "code": null, "e": 2689, "s": 2008, "text": " int lengthOfLongestAP(int A[], int n) {\n //using map to serach element;\n unordered_map<int ,int> mep;\n for(int i=0; i<n; i++){\n mep[A[i]]++;\n }\n \n int ans=1;//for corner case:when n=1\n \n for(int i=0; i<n-1; i++){\n int count;\n for(int j=i+1; j<n; j++){\n int cd=A[j]-A[i];//common diffrence\n count=2;\n int curr=A[j];\n /*serching for succesive element of \t progression in the map with current cd*/\n while(mep[cd+curr]!=0){\n count++;\n curr+=cd;\n }\n ans=max(ans,count);\n }\n }\n return ans;\n }" }, { "code": null, "e": 2692, "s": 2689, "text": "+1" }, { "code": null, "e": 2723, "s": 2692, "text": "ruchitchudasama1232 months ago" }, { "code": null, "e": 3254, "s": 2723, "text": "public:\n int lengthOfLongestAP(int A[], int n) {\n if(n<=2) return n;\n int maxLen=2;\n vector<unordered_map<int,int>> dp(n);\n \n for(int i=0;i<n;i++){\n for(int j=i+1;j<n;j++){\n int diff=A[j]-A[i];\n if(dp[i].find(diff)!=dp[i].end()){\n dp[j][diff]=dp[i][diff]+1;\n }\n else{\n dp[j][diff]=2;\n }\n maxLen=max(maxLen,dp[j][diff]);\n }\n }\n return maxLen;\n }" }, { "code": null, "e": 3256, "s": 3254, "text": "0" }, { "code": null, "e": 3273, "s": 3256, "text": "ezio3 months ago" }, { "code": null, "e": 3906, "s": 3273, "text": "class Solution {\n int lengthOfLongestAP(int[] nums, int n) {\n Map<Integer, Integer>[] dp = new HashMap[n];\n for (int i = 0; i < n; i++)\n dp[i] = new HashMap<>();\n\n int maxLen = 1;\n for (int i = 1; i < n; i++) {\n for (int j = 0; j < i; j++) {\n int cd = nums[i] - nums[j];\n if (dp[j].containsKey(cd)) {\n dp[i].put(cd, dp[j].get(cd) + 1);\n } else {\n dp[i].put(cd, 2);\n }\n maxLen = Math.max(maxLen, dp[i].get(cd));\n }\n }\n return maxLen;\n }\n}\n" }, { "code": null, "e": 3908, "s": 3906, "text": "0" }, { "code": null, "e": 3913, "s": 3908, "text": "ezio" }, { "code": null, "e": 3939, "s": 3913, "text": "This comment was deleted." }, { "code": null, "e": 3942, "s": 3939, "text": "+2" }, { "code": null, "e": 3966, "s": 3942, "text": "harshitrepa5 months ago" }, { "code": null, "e": 4540, "s": 3966, "text": " if(n==1) return 1; if(n==2) return 2; vector<unordered_map<int,int>> dp(n); int ans = 0; for(int i=1;i<n;i++) { for(int j=0;j<i;j++) { int diff = arr[i] - arr[j]; if(dp[j].find(diff)==dp[j].end()) { dp[i][diff] = 2; }else { dp[i][diff] = dp[j][diff]+1; } ans = max(ans,dp[i][diff]); } } return ans;" }, { "code": null, "e": 4542, "s": 4540, "text": "0" }, { "code": null, "e": 4555, "s": 4542, "text": "siddhantp619" }, { "code": null, "e": 4581, "s": 4555, "text": "This comment was deleted." }, { "code": null, "e": 4727, "s": 4581, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4763, "s": 4727, "text": " Login to access your submissions. " }, { "code": null, "e": 4773, "s": 4763, "text": "\nProblem\n" }, { "code": null, "e": 4783, "s": 4773, "text": "\nContest\n" }, { "code": null, "e": 4846, "s": 4783, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 4994, "s": 4846, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." 
}, { "code": null, "e": 5202, "s": 4994, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5308, "s": 5202, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
Install Python package using Jupyter Notebook - GeeksforGeeks
06 Mar, 2020

Jupyter Notebook is an open-source web application that is used to create and share documents that contain data in different formats, including live code, equations, visualizations, and text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.

Jupyter has support for over 40 different programming languages, and Python is one of them. Python (Python 3.3 or greater, or Python 2.7) is a requirement for installing the Jupyter Notebook itself.

Refer to the following articles for the installation of the Jupyter Notebook.

How to install Jupyter Notebook in Linux?
How to install Jupyter Notebook in Windows?

In Jupyter, everything runs in cells. It gives options to change the cell type to markup, text, Python console, etc. Within a Python (IPython console) cell, Jupyter allows Python code to be executed.

To install Python libraries, we use the pip command on the command-line console of the operating system. The OS keeps a set of paths to executable programs in its so-called environment variables, through which it identifies directly what exactly the pip command means. This is the reason the pip command can be run directly on the console.

In Jupyter, console commands can be executed by placing a '!' sign before the command within the cell. For example, if the following code is written in a Jupyter cell, it will execute as a command in CMD:

! echo GeeksforGeeks

Output

Similarly, we can install any package via Jupyter in the same way, and it will run directly in the OS shell.

Syntax:

! pip install [package_name]

Example: Let's install NumPy using Jupyter.

But using this method is not recommended because of the OS behavior. This command runs against whichever Python version currently comes first in the $PATH variable of the OS. So in the case of multiple Python versions, this might not install the package into the same Python version that Jupyter is running on. In the simplest case it may work.

To solve the above-mentioned problem, it is recommended to use the sys library in Python, which gives the pip of the exact interpreter Jupyter is running on: sys.executable returns the path of the Python executable of the version on which the current Jupyter instance is running.

Syntax:

import sys
!{sys.executable} -m pip install [package_name]

Example:

By the above code, the package will be installed in the same Python version on which the Jupyter notebook is running.
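As a side note beyond the steps above: IPython 7.3 and later also ship a %pip line magic that always targets the environment of the running kernel, which achieves the same goal without referencing sys.executable. A minimal sketch:

# installs into the environment of the kernel executing this cell
%pip install numpy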
[ { "code": null, "e": 25665, "s": 25637, "text": "\n06 Mar, 2020" }, { "code": null, "e": 26005, "s": 25665, "text": "Jupyter Notebook is an open-source web application that is used to create and share documents that contain data in different formats which includes live code, equations, visualizations, and text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more." }, { "code": null, "e": 26203, "s": 26005, "text": "Jupyter has support for over 40 different programming languages and Python is one of them. Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the Jupyter Notebook itself." }, { "code": null, "e": 26281, "s": 26203, "text": "Refer to the following articles for the installation of the Jupyter Notebook." }, { "code": null, "e": 26323, "s": 26281, "text": "How to install Jupyter Notebook in Linux?" }, { "code": null, "e": 26367, "s": 26323, "text": "How to install Jupyter Notebook in Windows?" }, { "code": null, "e": 26566, "s": 26367, "text": "In Jupyter everything runs in cells. It gives options to change the cell type to markup, text, Python console, etc. Within the Python IPython console cell, jupyter allows Python code to be executed." }, { "code": null, "e": 26901, "s": 26566, "text": "To install Python libraries, we use pip command on the command line console of the Operating System. The OS has a set of paths to executable programs in its so-called environment variables through which it identifies directly what exactly the pip means. This is the reason that whenever pip command can directly be run on the console." }, { "code": null, "e": 27106, "s": 26901, "text": "In Jupyter, the console commands can be executed by the ‘!’ sign before the command within the cell. For example, If the following code is written in the Jupyter cell, it will execute as a command in CMD." }, { "code": null, "e": 27128, "s": 27106, "text": "! echo GeeksforGeeks\n" }, { "code": null, "e": 27135, "s": 27128, "text": "Output" }, { "code": null, "e": 27246, "s": 27135, "text": "Similarly we can install any package via jupyter in the same way, and it will run it directly in the OS shell." }, { "code": null, "e": 27254, "s": 27246, "text": "Syntax:" }, { "code": null, "e": 27284, "s": 27254, "text": "! pip install [package_name] " }, { "code": null, "e": 27328, "s": 27284, "text": "Example: Let’s install NumPy using Jupyter." }, { "code": null, "e": 27626, "s": 27328, "text": "But using this method is not recommended because of the OS behavior. This command executed on the current version in the $PATH variable of the OS. So in the case of multiple Python versions, this might not install the same package in the jupyter’s Python version. In the simplest case it may work." }, { "code": null, "e": 27910, "s": 27626, "text": "To solve the above-mentioned problem, it is recommended to use sys library in Python which will return the path of the current version’s pip on which the jupyter is running. 
sys.executable will return the path of the Python.exe of the version on which the current Jupyter instance is" }, { "code": null, "e": 27918, "s": 27910, "text": "Syntax:" }, { "code": null, "e": 27978, "s": 27918, "text": "import sys\n!{sys.executable} -m pip install [package_name]\n" }, { "code": null, "e": 27987, "s": 27978, "text": "Example:" }, { "code": null, "e": 28105, "s": 27987, "text": "By the above code, the package will be installed in the same Python version on which the jupyter notebook is running." }, { "code": null, "e": 28120, "s": 28105, "text": "python-utility" }, { "code": null, "e": 28127, "s": 28120, "text": "Python" }, { "code": null, "e": 28225, "s": 28127, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28257, "s": 28225, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 28299, "s": 28257, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28341, "s": 28299, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28397, "s": 28341, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28424, "s": 28397, "text": "Python Classes and Objects" }, { "code": null, "e": 28455, "s": 28424, "text": "Python | os.path.join() method" }, { "code": null, "e": 28484, "s": 28455, "text": "Create a directory in Python" }, { "code": null, "e": 28506, "s": 28484, "text": "Defaultdict in Python" }, { "code": null, "e": 28545, "s": 28506, "text": "Python | Get unique values from a list" } ]
Array Manipulation and Sum - GeeksforGeeks
02 Aug, 2021

Given an array arr[] of N integers and an integer S. The task is to find an element K in the array such that if all the elements from the array > K are made equal to K, then the sum of all the elements of the resultant array becomes equal to S. If it is not possible to find such an element, then print -1.

Examples:

Input: S = 15, arr[] = {12, 3, 6, 7, 8}
Output: 3
Resultant array = {3, 3, 3, 3, 3}
Sum of the array elements = 15 = S

Input: S = 5, arr[] = {1, 3, 2, 5, 8}
Output: 1

Approach: Sort the array. Traverse the array considering that the value of K is equal to the current element, and then check whether the sum of the previous elements + (K * number of remaining elements) = S. If yes, then print the value of K; if no such element is found, then print -1 in the end.

Below is the implementation of the above approach:

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function to return the required element
int getElement(int a[], int n, int S)
{
    // Sort the array
    sort(a, a + n);

    int sum = 0;
    for (int i = 0; i < n; i++) {

        // If current element
        // satisfies the condition
        if (sum + (a[i] * (n - i)) == S)
            return a[i];
        sum += a[i];
    }

    // No element found
    return -1;
}

// Driver code
int main()
{
    int S = 5;
    int a[] = { 1, 3, 2, 5, 8 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << getElement(a, n, S);
    return 0;
}

// Java implementation of the approach
import java.util.Arrays;

class GFG {

    // Function to return the required element
    static int getElement(int a[], int n, int S)
    {
        // Sort the array
        Arrays.sort(a);

        int sum = 0;
        for (int i = 0; i < n; i++) {

            // If current element
            // satisfies the condition
            if (sum + (a[i] * (n - i)) == S)
                return a[i];
            sum += a[i];
        }

        // No element found
        return -1;
    }

    // Driver code
    public static void main(String[] args)
    {
        int S = 5;
        int a[] = { 1, 3, 2, 5, 8 };
        int n = a.length;
        System.out.println(getElement(a, n, S));
    }
}

// This code is contributed by Mukul singh.

# Python3 implementation of the approach

# Function to return the required element
def getElement(a, n, S):

    # Sort the array
    a.sort()

    sum = 0
    for i in range(n):

        # If current element
        # satisfies the condition
        if (sum + (a[i] * (n - i)) == S):
            return a[i]
        sum += a[i]

    # No element found
    return -1

# Driver Code
if __name__ == "__main__":

    S = 5
    a = [1, 3, 2, 5, 8]
    n = len(a)
    print(getElement(a, n, S))

# This code is contributed by Ryuga

// C# implementation of the approach
using System;

class GFG {

    // Function to return the required element
    static int getElement(int[] a, int n, int S)
    {
        // Sort the array
        Array.Sort(a);

        int sum = 0;
        for (int i = 0; i < n; i++) {

            // If current element
            // satisfies the condition
            if (sum + (a[i] * (n - i)) == S)
                return a[i];
            sum += a[i];
        }

        // No element found
        return -1;
    }

    // Driver code
    public static void Main()
    {
        int S = 5;
        int[] a = { 1, 3, 2, 5, 8 };
        int n = a.Length;
        Console.WriteLine(getElement(a, n, S));
    }
}

// This code is contributed by Mukul singh.

<?php
// PHP implementation of the approach

// Function to return the required element
function getElement($a, $n, $S)
{
    // Sort the array
    sort($a, 0);

    $sum = 0;
    for ($i = 0; $i < $n; $i++) {

        // If current element
        // satisfies the condition
        if ($sum + ($a[$i] * ($n - $i)) == $S)
            return $a[$i];
        $sum += $a[$i];
    }

    // No element found
    return -1;
}

// Driver code
$S = 5;
$a = array(1, 3, 2, 5, 8);
$n = sizeof($a);

echo getElement($a, $n, $S);

// This code is contributed
// by Akanksha Rai
?>

<script>
// Javascript implementation of the approach

// Function to return the required element
function getElement(a, n, S)
{
    // Sort the array
    a.sort();

    var sum = 0;
    for (var i = 0; i < n; i++) {

        // If current element
        // satisfies the condition
        if (sum + (a[i] * (n - i)) == S)
            return a[i];
        sum += a[i];
    }

    // No element found
    return -1;
}

var S = 5;
var a = [ 1, 3, 2, 5, 8 ];
var n = a.length;
document.write(getElement(a, n, S));

// This code is contributed by SoumikMondal
</script>

Output:
1

Time Complexity: O(N*logN)
Auxiliary Space: O(1)
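As a quick sanity check on the examples above, the following short Python snippet (a hypothetical helper, not part of the original article) applies the capping operation the problem statement describes and confirms that the returned K values reproduce S:

# Cap every element greater than K at K and return the resulting sum.
def capped_sum(arr, K):
    return sum(min(x, K) for x in arr)

print(capped_sum([12, 3, 6, 7, 8], 3))  # 15, which equals S in the first example
print(capped_sum([1, 3, 2, 5, 8], 1))   # 5, which equals S in the second example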
GATE | GATE-CS-2015 (Set 1) | Question 65 - GeeksforGeeks
28 Jun, 2021

Which one of the following combinations is incorrect?
(A) Acquiescence – Submission
(B) Wheedle – Roundabout
(C) Flippancy – Lightness
(D) Profligate – Extravagant

Answer: (B)

Explanation:
Flippancy ---> lack of respect or seriousness.
Acquiescence ---> the reluctant acceptance of something without protest.
Wheedle ---> use endearments or flattery to persuade someone to do something.
Profligate ---> recklessly extravagant.
Including an external stylesheet file in your HTML document
The <link> element can be used to include an external style sheet file in your HTML document.

An external style sheet is a separate text file with a .css extension. You define all the style rules within this text file and then you can include this file in any HTML document using the <link> element.

Consider a simple style sheet file named new.css having the following rules:

h1, h2, h3 {
   color: #36C;
   font-weight: normal;
   letter-spacing: .4em;
   margin-bottom: 1em;
   text-transform: lowercase;
}

Now you can include this file new.css in any HTML document as follows (note that the rel = "stylesheet" attribute is required for the browser to actually apply the style sheet):

<head>
   <link rel = "stylesheet" type = "text/css" href = "new.css" media = "all" />
</head>
How to preserve leading 0 with JavaScript numbers?
There's no way to preserve a leading 0 on a JavaScript number, since numeric values drop leading zeros. However, you can treat such an id as a string and pass it as a string argument instead.

You can try to run the following code, which preserves the leading 0 by making the id a string:

<html>
   <head>
      <script>
         function sayHello(name, age, id) {
            document.write (name + " is " + age + " years old, with id = " + id);
         }
      </script>
   </head>
   <body>
      <p>Click the following button to call the function</p>
      <form>
         <input type="button" onclick="sayHello('John', 26, '0678')"
            value="Display Employee Info">
      </form>
   </body>
</html>
The Correct Way to Measure Inference Time of Deep Neural Networks | by Amnon Geifman | Towards Data Science
The network latency is one of the more crucial aspects of deploying a deep network into a production environment. Most real-world applications require blazingly fast inference time, varying anywhere from a few milliseconds to one second. But the task of correctly and meaningfully measuring the inference time, or latency, of a neural network requires profound understanding. Even experienced programmers often make common mistakes that lead to inaccurate latency measurements. The impact of these mistakes has the potential to trigger bad decisions and unnecessary expenditures.

In this post, we review some of the main issues that should be addressed to measure latency time correctly. We review the main processes that make GPU execution unique, including asynchronous execution and GPU warm-up. We then share code samples for measuring time correctly on a GPU. Finally, we review some of the common mistakes people make when quantifying inference time on GPUs.

We begin by discussing the GPU execution mechanism. In multithreaded or multi-device programming, two blocks of code that are independent can be executed in parallel; this means that the second block may be executed before the first is finished. This process is referred to as asynchronous execution. In the deep learning context, we often use this execution because the GPU operations are asynchronous by default. More specifically, when calling a function using a GPU, the operations are enqueued to the specific device, but not necessarily to other devices. This allows us to execute computations in parallel on the CPU or another GPU.

Asynchronous execution offers huge advantages for deep learning, such as the ability to decrease run-time by a large factor. For example, at inference of multiple batches, the second batch can be preprocessed on the CPU while the first batch is fed forward through the network on the GPU. Clearly, it would be beneficial to use asynchronism whenever possible at inference time.

The effect of asynchronous execution is invisible to the user; but, when it comes to time measurements, it can be the cause of many headaches. When you calculate time with the "time" library in Python, the measurements are performed on the CPU device. Due to the asynchronous nature of the GPU, the line of code that stops the timing will be executed before the GPU process finishes. As a result, the timing will be inaccurate or irrelevant to the actual inference time. Keeping in mind that we want to use asynchronism, later in this post we explain how to correctly measure time despite the asynchronous processes.

A modern GPU device can exist in one of several different power states. When the GPU is not being used for any purpose and persistence mode (i.e., which keeps the GPU on) is not enabled, the GPU will automatically reduce its power state to a very low level, sometimes even a complete shutdown. In a lower power state, the GPU shuts down different pieces of hardware, including memory subsystems, internal subsystems, or even compute cores and caches.

The invocation of any program that attempts to interact with the GPU will cause the driver to load and/or initialize the GPU. This driver load behavior is noteworthy. Applications that trigger GPU initialization can incur up to 3 seconds of latency, due to the scrubbing behavior of the error-correcting code. For instance, if we measure time for a network that takes 10 milliseconds for one example, running over 1000 examples may result in most of our running time being wasted on initializing the GPU.
Naturally, we don't want to measure such side effects because the timing is not accurate. Nor does it reflect a production environment where usually the GPU is already initialized or working in persistence mode.

Since we want to enable the GPU power-saving mode whenever possible, let's look at how to overcome the initialization of the GPU while measuring time.

The PyTorch code snippet below shows how to measure time correctly. Here we use EfficientNet-b0, but you can use any other network. In the code, we deal with the two caveats described above. Before we make any time measurements, we run some dummy examples through the network to do a 'GPU warm-up.' This will automatically initialize the GPU and prevent it from going into power-saving mode when we measure time. Next, we use torch.cuda.Event to measure time on the GPU. It is crucial here to use torch.cuda.synchronize(). This line of code performs synchronization between the host and device (i.e., GPU and CPU), so the time recording takes place only after the process running on the GPU is finished. This overcomes the issue of unsynchronized execution.

import numpy as np
import torch
# EfficientNet is assumed to come from the efficientnet_pytorch package
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
device = torch.device("cuda")
model.to(device)
dummy_input = torch.randn(1, 3, 224, 224, dtype=torch.float).to(device)

starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
repetitions = 300
timings = np.zeros((repetitions, 1))

# GPU-WARM-UP
for _ in range(10):
    _ = model(dummy_input)

# MEASURE PERFORMANCE
with torch.no_grad():
    for rep in range(repetitions):
        starter.record()
        _ = model(dummy_input)
        ender.record()
        # WAIT FOR GPU SYNC
        torch.cuda.synchronize()
        curr_time = starter.elapsed_time(ender)
        timings[rep] = curr_time

mean_syn = np.sum(timings) / repetitions
std_syn = np.std(timings)
print(mean_syn)

When we measure the latency of a network, our goal is to measure only the feed-forward of the network, not more and not less. Often, even experts will make certain common mistakes in their measurements. Here are some of them, along with their consequences:

1. Transferring data between the host and the device. The point of view of this post is to measure only the inference time of a neural network. Under this point of view, one of the most common mistakes involves the transfer of data between the CPU and GPU while taking time measurements. This is usually done unintentionally when a tensor is created on the CPU and inference is then performed on the GPU. This memory allocation takes a considerable amount of time, which subsequently enlarges the time for inference. The effect of this mistake over the mean and variance of the measurements can be seen below:

2. Not using GPU warm-up. As mentioned above, the first run on the GPU prompts its initialization. GPU initialization can take up to 3 seconds, which makes a huge difference when the timing is in terms of milliseconds.

3. Using standard CPU timing. The most common mistake made is to measure time without synchronization. Even experienced programmers have been known to use the following piece of code.

import time

s = time.time()
_ = model(dummy_input)
curr_time = (time.time() - s) * 1000

This of course completely ignores the asynchronous execution mentioned earlier and hence outputs incorrect times. The impact of this mistake on the mean and variance of the measurements is shown below:

4. Taking one sample. Like many processes in computer science, feed forward of the neural network has a (small) stochastic component.
The variance of the run-time can be significant, especially when measuring a low latency network. To this end, it is essential to run the network over several examples and then average the results (300 examples can be a good number). A common mistake is to use one sample and refer to it as the run-time. This, of course, won't represent the true run-time.

The throughput of a neural network is defined as the maximal number of input instances the network can process in a time unit (e.g., a second). Unlike latency, which involves the processing of a single instance, to achieve maximal throughput we would like to process in parallel as many instances as possible. The effective parallelism is obviously data-, model-, and device-dependent. Thus, to correctly measure throughput we perform the following two steps: (1) we estimate the optimal batch size that allows for maximum parallelism; and (2), given this optimal batch size, we measure the number of instances the network can process in one second.

To find the optimal batch size, a good rule of thumb is to reach the memory limit of our GPU for the given data type. This size of course depends on the hardware type and the size of the network. The quickest way to find this maximal batch size is by performing a binary search. When time is of no concern, a simple sequential search is sufficient. To this end, using a for loop we increase the batch size by one until a RuntimeError is raised; this identifies the largest batch size the GPU can process for our neural network model and the input data it processes (a short sketch of this sequential search is appended after the conclusion below).

After finding the optimal batch size, we calculate the actual throughput. To this end, we would like to process many batches (100 batches will be a sufficient number) and then use the following formula:

(number of batches X batch size) / (total time in seconds).

This formula gives the number of examples our network can process in one second. The code below provides a simple way to perform the above calculation (given the optimal batch size):

import torch
# EfficientNet is assumed to come from the efficientnet_pytorch package
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
device = torch.device("cuda")
model.to(device)
dummy_input = torch.randn(optimal_batch_size, 3, 224, 224, dtype=torch.float).to(device)

repetitions = 100
total_time = 0
with torch.no_grad():
    for rep in range(repetitions):
        starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)
        starter.record()
        _ = model(dummy_input)
        ender.record()
        torch.cuda.synchronize()
        curr_time = starter.elapsed_time(ender) / 1000
        total_time += curr_time

Throughput = (repetitions * optimal_batch_size) / total_time
print('Final Throughput:', Throughput)

Accurately measuring the inference time of neural networks is not as trivial as it sounds. We detailed several issues that deep learning practitioners should be aware of, such as asynchronous execution and GPU power-saving modes. The PyTorch code presented here demonstrates how to correctly measure the timing in neural networks, despite the aforementioned caveats. Finally, we mentioned some common mistakes that cause people to measure time incorrectly. In future posts we will dive even deeper into this topic and explain existing deep learning profilers which enable us to achieve even more accurate time measurements of networks. If you are interested in how to reduce the latency of the network without compromising its accuracy, you are invited to read more about this topic in Deci's white paper.

Originally posted in https://deci.ai/the-correct-way-to-measure-inference-time-of-deep-neural-networks/
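As promised above, here is a minimal sketch of the sequential batch-size search described in the throughput section. This is not the author's original code: it assumes that exhausting GPU memory surfaces as a RuntimeError, and the helper name find_max_batch_size is hypothetical.

import torch
# EfficientNet is assumed to come from the efficientnet_pytorch package
from efficientnet_pytorch import EfficientNet

def find_max_batch_size(model, device, upper_limit=1024):
    # Grow the batch size one step at a time until the GPU raises a
    # RuntimeError (typically "CUDA out of memory"), then report the
    # last batch size that ran successfully.
    last_ok = 0
    for batch_size in range(1, upper_limit + 1):
        try:
            dummy = torch.randn(batch_size, 3, 224, 224, dtype=torch.float).to(device)
            with torch.no_grad():
                _ = model(dummy)
            last_ok = batch_size
        except RuntimeError:
            torch.cuda.empty_cache()  # release the failed allocation
            break
    return last_ok

device = torch.device("cuda")
model = EfficientNet.from_pretrained('efficientnet-b0').to(device)
optimal_batch_size = find_max_batch_size(model, device)
print('Largest batch size that fits:', optimal_batch_size)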
Espresso Testing Framework - Test Recorder
Writing test cases is a tedious job. Even though Espresso provides a very easy and flexible API, writing test cases can still be a tiresome and time-consuming task. To overcome this, Android Studio provides a feature to record and generate Espresso test cases. Record Espresso Test is available under the Run menu.

Let us record a simple test case in our HelloWorldApp by following the steps described below,

Open Android Studio followed by the HelloWorldApp application.

Click Run → Record Espresso test and select MainActivity.

The Recorder screenshot is as follows,

Click Add Assertion. It will open the application screen as shown below,

Click Hello World!. The Recorder screen to select the text view is as follows,

Again click Save Assertion. This will save the assertion and show it as follows,

Click OK. It will open a new window and ask the name of the test case. The default name is MainActivityTest.

Change the test case name, if necessary.

Again, click OK. This will generate a file, MainActivityTest, with our recorded test case. The complete coding is as follows,

package com.tutorialspoint.espressosamples.helloworldapp;

import android.view.View;
import android.view.ViewGroup;
import android.view.ViewParent;

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import androidx.test.espresso.ViewInteraction;
import androidx.test.filters.LargeTest;
import androidx.test.rule.ActivityTestRule;
import androidx.test.runner.AndroidJUnit4;

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;
import static org.hamcrest.Matchers.allOf;

@LargeTest
@RunWith(AndroidJUnit4.class)
public class MainActivityTest {
   @Rule
   public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class);

   @Test
   public void mainActivityTest() {
      ViewInteraction textView = onView(
         allOf(withId(R.id.textView_hello), withText("Hello World!"),
            childAtPosition(childAtPosition(withId(android.R.id.content),
            0), 0), isDisplayed()));
      textView.check(matches(withText("Hello World!")));
   }

   private static Matcher<View> childAtPosition(
      final Matcher<View> parentMatcher, final int position) {
      return new TypeSafeMatcher<View>() {
         @Override
         public void describeTo(Description description) {
            description.appendText("Child at position " + position + " in parent ");
            parentMatcher.describeTo(description);
         }

         @Override
         public boolean matchesSafely(View view) {
            ViewParent parent = view.getParent();
            return parent instanceof ViewGroup && parentMatcher.matches(parent)
               && view.equals(((ViewGroup) parent).getChildAt(position));
         }
      };
   }
}

Finally, run the test using the context menu and check whether the test case runs.
How to Create Animate Graphs in Python | Towards Data Science
As data is forever expanding at unprecedented rates, data scientists are called upon to analyse it and make sense of it. Once that happens, there comes the need for effectively communicating the results. Communicating the results of data analysis, however, can often be tricky. To effectively communicate, a popular and very potent technique is that of storytelling.

Having all the information in the world at our fingertips doesn't make it easier to communicate: it makes it harder. ― Cole Nussbaumer Knaflic

To assist with storytelling and communication, data visualisation is critical. Animated data visualisation takes things up a notch and adds a wow factor.

In this blog, we are going to examine how you can animate your charts. We will learn how to add a new dimension to line plots, bar charts and pie charts. This blog will get you started, but the possibilities are endless.

In typical fashion, as you've come to expect from Python, there exists a very easy-to-use package that enables us to add an extra dimension to our data visualisation. The package in question is the FuncAnimation extension method, part of the Animation class in Python's matplotlib library. We will look at multiple examples of how to use it, but for now, you can think of this function as a while loop which keeps redrawing our figure on the canvas.

What helped me to understand how to animate graphs was to start from the end. The animation magic will happen from the following two lines:

import matplotlib.animation as ani
animator = ani.FuncAnimation(fig, chartfunc, interval = 100)

Let us look at the above inputs of FuncAnimation:

fig is the figure object we will use to "draw our graph" on
chartfunc is a function which takes a numeric input, which signifies the time on the time-series (as the number increases, we move along the timeline)
interval is the delay between frames in milliseconds. Defaults to 200.

For further optional inputs, check out the docs.

Delving into the above a bit further, all one needs to do is parameterise their graph into a function which takes as an input the point in the time-series, and it's game on!

To ensure we have all bases covered, we are going to build upon our previous examples of data visualisation, which we've covered in: towardsdatascience.com

In other words, we are going to use data from the pandemic (deaths per day), and we will use the final dataset given from the below code. Should you wish to learn more about the below data transformations and how to get started with simple data visualisation, simply read the above post first.
import matplotlib.animation as ani
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv'
df = pd.read_csv(url, delimiter=',', header='infer')

df_interest = df.loc[
    df['Country/Region'].isin(['United Kingdom', 'US', 'Italy', 'Germany'])
    & df['Province/State'].isna()]
df_interest.rename(
    index=lambda x: df_interest.at[x, 'Country/Region'], inplace=True)
df1 = df_interest.transpose()
df1 = df1.drop(['Province/State', 'Country/Region', 'Lat', 'Long'])
df1 = df1.loc[(df1 != 0).any(1)]
df1.index = pd.to_datetime(df1.index)

The first thing we need to do is define the items of our graph, which will remain constant. That is, create the figure object, the x and y labels, set the line colours and the figure margins.

import numpy as np
import matplotlib.pyplot as plt

color = ['red', 'green', 'blue', 'orange']
fig = plt.figure()
plt.xticks(rotation=45, ha="right", rotation_mode="anchor")  # rotate the x-axis values
plt.subplots_adjust(bottom = 0.2, top = 0.9)  # ensuring the dates (on the x-axis) fit in the screen
plt.ylabel('No of Deaths')
plt.xlabel('Dates')

We then have to set up our curve function, and then animate it:

def buildmebarchart(i=int):
    plt.legend(df1.columns)
    p = plt.plot(df1[:i].index, df1[:i].values)  # note it only returns the dataset, up to the point i
    for i in range(0, 4):
        p[i].set_color(color[i])  # set the colour of each curve

import matplotlib.animation as ani
animator = ani.FuncAnimation(fig, buildmebarchart, interval = 100)
plt.show()

The code structure looks identical to that of a line graph. There are, however, some differences which we will go through.

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
explode = [0.01, 0.01, 0.01, 0.01]  # pop out each slice from the pie

def getmepie(i):
    def absolute_value(val):  # turn % back to a number
        a = np.round(val/100.*df1.head(i).max().sum(), 0)
        return int(a)
    ax.clear()
    plot = df1.head(i).max().plot.pie(y=df1.columns, autopct=absolute_value,
        label='', explode = explode, shadow = True)
    plot.set_title('Total Number of Deaths\n' + str(df1.index[min( i, len(df1.index)-1 )].strftime('%y-%m-%d')), fontsize=12)

import matplotlib.animation as ani
animator = ani.FuncAnimation(fig, getmepie, interval = 200)
plt.show()

One of the main differences is that in the above code, we return a single set of values each time. In the line plot, we were returning the whole time-series up to the point we were at. We achieve that through the use of df1.head(i).max(): df1.head(i) returns a time-series, but the .max() ensures that we only get the latest records (because the total number of deaths would either remain the same or go up).

Building a bar chart is just as easy as the examples we've seen so far. For this example, I have included both a horizontal and a vertical bar chart. Depending on which one you want to see, you simply need to define the variable bar.

fig = plt.figure()
bar = ''

def buildmebarchart(i=int):
    # the loop iterates one extra time, which causes the dataframes to go out
    # of bounds. This was the easiest (most lazy) way to solve this :)
    iv = min(i, len(df1.index)-1)
    objects = df1.max().index
    y_pos = np.arange(len(objects))
    performance = df1.iloc[[iv]].values.tolist()[0]
    if bar == 'vertical':
        plt.bar(y_pos, performance, align='center', color=['red', 'green', 'blue', 'orange'])
        plt.xticks(y_pos, objects)
        plt.ylabel('Deaths')
        plt.xlabel('Countries')
        plt.title('Deaths per Country \n' + str(df1.index[iv].strftime('%y-%m-%d')))
    else:
        plt.barh(y_pos, performance, align='center', color=['red', 'green', 'blue', 'orange'])
        plt.yticks(y_pos, objects)
        plt.xlabel('Deaths')
        plt.ylabel('Countries')

animator = ani.FuncAnimation(fig, buildmebarchart, interval=100)
plt.show()

So you have created your first animated graphs, and you want to share them. How do you save them? Luckily, all it takes is a single line of code:

animator.save(r'C:\temp\myfirstAnimation.gif')

For more info, check out the docs.
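One practical note on the save call above: depending on your matplotlib setup, writing a GIF may require an explicit writer backend. As a minimal sketch (the file name and fps value are arbitrary examples), matplotlib's built-in Pillow-based writer avoids a dependency on ImageMagick:

# Save the animation as a GIF via the Pillow writer bundled with matplotlib;
# 'myfirstAnimation.gif' is just an example output path.
animator.save('myfirstAnimation.gif', writer='pillow', fps=10)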
Java DatabaseMetaData getUserName() method with example
This method retrieves the user name used to establish the current connection.

To retrieve the user name as known to the database −

Make sure your database is up and running.

Register the driver using the registerDriver() method of the DriverManager class. Pass an object of the driver class corresponding to the underlying database.

Get the connection object using the getConnection() method of the DriverManager class. Pass the URL of the database and the user name and password of a database user as String variables.

Get the DatabaseMetaData object with respect to the current connection using the getMetaData() method of the Connection interface.

Finally, retrieve the user name using the getUserName() method of the DatabaseMetaData interface.

The following JDBC program establishes a connection with a MySQL database and retrieves the user name as known to the database.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DatabaseMetaData_getUserName {
   public static void main(String args[]) throws SQLException {
      // Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      // Getting the connection
      String url = "jdbc:mysql://localhost/";
      Connection con = DriverManager.getConnection(url, "root", "password");
      System.out.println("Connection established......");
      // Retrieving the meta data object
      DatabaseMetaData metaData = con.getMetaData();
      // Retrieving the user name
      String user_name = metaData.getUserName();
      System.out.println(user_name);
   }
}

Connection established......
root@localhost
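One caveat worth adding, assuming Java 7 or later: the sample above never closes the connection. A try-with-resources block (Connection is AutoCloseable) handles that automatically; the sketch below reuses the url variable from the program above.

try (Connection con = DriverManager.getConnection(url, "root", "password")) {
   // The connection is closed automatically, even if getUserName() throws
   DatabaseMetaData metaData = con.getMetaData();
   System.out.println(metaData.getUserName());
}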
My top 4 functions to style the Pandas Dataframe | by Cornellius Yudha Wijaya | Towards Data Science
Pandas Dataframe is the most used object for Data scientists to analyze their data. While the main function is to just place your data and get on with the analysis, we could still style our data frame for many purposes; namely, for presenting data or better aesthetics.

Let’s take an example with a dataset. I would use the ‘planets’ data available from seaborn for learning purposes.

#Importing the modules
import pandas as pd
import seaborn as sns

#Loading the dataset
planets = sns.load_dataset('planets')

#Showing the first rows in the planets data
planets.head()

We could style our previous Pandas Data Frame just like below.

It looks colorful right now because I put everything in, but below I will break down some of my favourites. Here are 4 functions to style our Pandas Data Frame object that I often use in everyday work.

“While the main function is to just place your data and get on with the analysis, we could still style our data frame for many purposes; namely, for presenting data or better aesthetic.”

Sometimes when you do analysis and present the result to others, you only want to show the most important aspect. I know that when I present my Data Frame to a non-technical person, the question is often about the index in its default numbering, such as “what is this number?”. For that reason, we could try to hide the index with the following code.

#Using hide_index() from the style function
planets.head(10).style.hide_index()

Just like that, we hide our index. It is a simple thing, but in a working environment an unexplained extra column often raises questions.

In addition, we could try to hide unnecessary columns with the chaining method. Let’s say I don't want to show the ‘method’ and ‘year’ columns; then we could write it with the following code.

#Using hide_columns to hide the unnecessary columns
planets.head(10).style.hide_index().hide_columns(['method','year'])

There are times when we want to present our data frame and only highlight the important numbers, for example the highest number. In this case, we could use the built-in method to highlight it with the following code.

#Highlight the maximum number for each column
planets.head(10).style.highlight_max(color = 'yellow')

In the data frame above, we highlight the maximum number in each column with the color yellow. If you want to highlight the minimum number instead, we could do it with the following code.

planets.head(10).style.highlight_min(color = 'lightblue')

And if you want to chain them, we could also do that.
#Highlight the minimum number with lightblue color and the maximum number with yellow color
planets.head(10).style.highlight_max(color='yellow').highlight_min(color = 'lightblue')

Instead of each column, you could actually highlight the minimum or maximum number for each row. I show it in the following code.

#Adding axis = 1 to change the direction from column to row
planets.head(10).style.highlight_max(color = 'yellow', axis =1)

As we can see, it is not that useful to change the axis here, as it does not highlight any important information. It would be more useful in cases where the columns are on comparable scales.

As an addition, we could highlight the null values with the following code.

#Highlight the null value
planets.head(10).style.highlight_null(null_color = 'red')

While presenting your data, we could also use colour as the main way to present the data. I often present the data with a background color to highlight which numbers are in the lower area and which are in the higher area. Let’s use the example in the following code.

#Gradient background color for the numerical columns
planets.head(10).style.background_gradient(cmap = 'Blues')

With the background_gradient function, we could color the data frame as a gradient. The color depends on the cmap parameter, which accepts colormaps from the matplotlib library.

We could also use a bar chart as our gradient background color. Let me show it in the example below.

#Sort the values by the year column, then create a bar chart as the background
planets.head(10).sort_values(by = 'year').style.bar(color= 'lightblue')

As we could see above, we now highlight the numbers from the lowest to the highest in a different way than the background_gradient function does. We could also see that the index is not in order because of the sort function; it is better to hide the index, as I told you in the passage above.

If you prefer to have a more specific requirement to style your data frame, you could actually do it. We could pass our style functions into one of the following methods:

Styler.applymap: element-wise

Styler.apply: column-/row-/table-wise (a short sketch of this variant appears at the end of this article)

Both of those methods take a function (and some other keyword arguments) and apply our function to the DataFrame in a certain way. Let’s say that I have a threshold that any number below 20 should be colored red. We could do that by using the following code.

#Create the function to color the numerical value into red color
def color_below_20_red(value):
    if type(value) == type(''):
        return 'color:black'
    else:
        color = 'red' if value <= 20 else 'black'
        return 'color: {}'.format(color)

#We apply the function to all the elements in the data frame by using the applymap function
planets.head(10).style.applymap(color_below_20_red)

Just like that, every single number below or equal to 20 is coloured red and the rest are black.

I have shown my top 4 functions to use when styling our data frame: hiding, highlighting, background gradients, and custom styling functions. We could style our data frame for presenting our data or just for a better aesthetic. If you want to read more about what we can do to style our data frame, you could read it here.

Visit me on my LinkedIn or Twitter.
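Here is the promised sketch of the column-wise Styler.apply variant. This example is mine rather than from the passages above; the helper name and the chosen subset columns are illustrative. Unlike applymap, which receives one scalar at a time, the function passed to apply receives a whole column (a Series) and must return a list of CSS strings of the same length.

#Highlight the maximum of each selected column using Styler.apply
def highlight_col_max(s):
    return ['background-color: yellow' if v == s.max() else '' for v in s]

planets.head(10).style.apply(highlight_col_max, subset=['mass', 'distance'])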
Implementing PCA from Scratch. Compare the implementation with... | by Eugenia Anello | Towards Data Science
This article is in continuation with the story Variable Reduction with Principal Component Analysis. In the previous post, I talked about one of the most known and widely used methods, called Principal Component Analysis. It employs an efficient linear transformation, which reduces the dimensionality of a high dimensional dataset while capturing the maximum information content. It generates the Principal Components, which are linear combinations of the original features in the dataset. In addition, I showed step by step how to implement this technique with Python. At first I thought that the post was enough to explain PCA, but I felt that something was missing. I implemented PCA using separate lines of code, but they are inefficient when you want to call them every time for a different problem. A better way is to create a class, which is effective when you want to encapsulate data structures and procedures in one place. Moreover, it’s really easier to modify since you have all the code in this unique class.

Table of Content:

Dataset

Implementation of PCA

PCA without standardization

PCA with standardization

PCA with Sklearn

Before implementing the PCA algorithm, we are going to import the breast cancer Wisconsin dataset, which contains the data regarding the breast cancer diagnosed in 569 patients [1].

import pandas as pd
import numpy as np
import random
from sklearn.datasets import load_breast_cancer
import plotly.express as px

data = load_breast_cancer(as_frame=True)
X, y, df_bre = data.data, data.target, data.frame
diz_target = {0: 'malignant', 1: 'benign'}
y = np.array([diz_target[y1] for y1 in y])
df_bre['target'] = df_bre['target'].apply(lambda x: diz_target[x])
df_bre.head()

We can notice that there are 30 numerical features and a target variable that specifies if the tumour is benign (target=1) or malignant (target=0). I convert the target variable to a string since it’s not used by PCA and we only need it in the visualizations later.

In this case, we want to understand the difference in variability of the features when the tumour is benign or malignant. This is really hard to show with a simple exploratory analysis since we have more than two covariates. For example, we can try to visualize a scatter matrix with only the first five features, coloured by the target variable.

fig = px.scatter_matrix(df_bre, dimensions=list(df_bre.columns)[:5], color="target")
fig.show()

Certainly, we can observe two different clusters in all these scatter plots, but it’s messy if we plot all the features at the same time. Consequently, we need a compact representation of this multivariate dataset, which can be provided by the Principal Component Analysis.

The steps to obtain the principal components (or k dimensional feature vectors) are summarized in the illustration above. The same logic will be applied to build the class.

We define the PCA_impl class, which has three attributes initialized at the beginning. The most important attribute is the number of components we want to extract. Moreover, we can also reproduce the same results every time by setting random_state equal to True and standardizing the dataset only if we need it.

This class also includes two methods, fit and fit_transform, similarly to the scikit-learn’s PCA. While the first method provides most of the procedure to calculate the principal components, the fit_transform method also applies the transformation on the original feature matrix X. In addition to these two methods, I also wanted to visualize the principal components without specifying every time the functions of Plotly Express. It can be really useful to speed up the analysis of the latent variables generated by PCA.
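The original gist with the class body is not embedded in this text, so the following is a minimal sketch consistent with the description above: three attributes, the fit and fit_transform methods, the var_explained and cum_var_explained attributes, and the two Plotly plotting helpers. Everything beyond the names mentioned in the text (the seed value, the use of the global y for colouring, the internal variable names) is an assumption of this sketch, not the author's code.

import numpy as np
import plotly.express as px

class PCA_impl:
    def __init__(self, n_components, random_state=False, standardize=False):
        self.n_components = n_components
        self.random_state = random_state  # reproduce the same results if True
        self.standardize = standardize    # z-score the features if True

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        if self.random_state:
            np.random.seed(42)  # assumed seed; the eigendecomposition is deterministic anyway
        X = X - X.mean(axis=0)  # center the data
        if self.standardize:
            X = X / X.std(axis=0)
        cov = np.cov(X.T)  # covariance matrix of the (possibly standardized) features
        eig_vals, eig_vecs = np.linalg.eigh(cov)  # eigh, since cov is symmetric
        order = np.argsort(eig_vals)[::-1]        # sort eigenvalues in descending order
        eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]
        self.components = eig_vecs[:, :self.n_components]
        self.var_explained = eig_vals[:self.n_components] / eig_vals.sum()
        self.cum_var_explained = np.cumsum(self.var_explained)
        self._X = X
        return self

    def fit_transform(self, X):
        self.fit(X)
        self.X_proj = self._X @ self.components  # project onto the principal components
        return self.X_proj

    def pca_plot2d(self):
        # colours by the global y defined earlier, as the article's plots do
        fig = px.scatter(x=self.X_proj[:, 0], y=self.X_proj[:, 1], color=y,
                         labels={'x': 'PC 1', 'y': 'PC 2'})
        fig.show()

    def pca_plot3d(self):
        fig = px.scatter_3d(x=self.X_proj[:, 0], y=self.X_proj[:, 1],
                            z=self.X_proj[:, 2], color=y,
                            labels={'x': 'PC 1', 'y': 'PC 2', 'z': 'PC 3'})
        fig.show()

Calling it is then a matter of two lines:

pca1 = PCA_impl(n_components=3, random_state=True)
X_proj = pca1.fit_transform(X)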
Finally, the PCA_impl class is defined. We only need to call the class and the corresponding methods without any effort.

We can access the var_explained and cum_var_explained attributes, which were calculated within the fit and fit_transform methods. It’s worth noticing that we capture 98% of the variance with just one component. Let’s also visualize the 2D and 3D scatterplots using the method defined previously:

pca1.pca_plot2d()

From the visualization, we can observe that two clusters emerge, one marked in blue representing the patients with malignant cancer and the other regarding benign cancer. Moreover, it seems that the blue cluster contains much more variability than the other cluster. In addition, we see a slight overlap between the two groups.

pca1.pca_plot3d()

Now, we look at the 3D scatterplot with the first three components. It’s less clear than the previous scatterplot, but a similar behaviour emerges even in this plot. There are surely two distinct groups based on the target variable. New information is discovered by looking at this three-dimensional representation: two patients with malignant cancer appear to have completely different values with respect to all the other patients. This aspect could only slightly be noticed by looking at the 2D plot or at the scatter matrix we displayed previously.

Let’s replicate the same procedure of the previous section. We only add the standardization at the beginning to check if there are any differences in the results.

Differently from the previous case, we can notice that the range of values regarding the principal components is more restricted, and 80% of the variance explained is captured with three components. In particular, the contribution of the first component dropped from 0.99 to 0.44. This can be justified by the fact that all variables now have the same scale and, consequently, the PCA is able to give equal weight to each feature.

pca1.pca_plot2d()

These observations are confirmed by looking at the scatterplot with the first two components. The clusters are much more distinct and have lower values.

pca1.pca_plot3d()

The 3D representation is easier to read and comprehend. Finally, we can conclude that the two groups of patients have different feature variability. Moreover, there are still the two data points that lie apart from the rest of the data.

At this point, we can apply the PCA implemented by Sklearn to compare it with my implementation. I should point out that there are some differences to take into account in this comparison. While my implementation of PCA is based on the covariance matrix, the scikit-learn’s PCA involves the centering of the input data and employs the Singular Value Decomposition to project the data to a lower-dimensional space.

Before, we saw that standardization is a very important step before applying PCA. Since the mean is already subtracted from each feature’s column by Sklearn’s algorithm, we only need to divide each numerical variable by its own standard deviation.

X_copy = X.copy().astype('float32')
X_copy /= np.std(X_copy, axis=0)

Now, we pass the number of components and the random_state to the PCA class and call the fit_transform method to obtain the principal components.
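The snippet with the actual sklearn call is not embedded in this text, so here is a hypothetical reconstruction of the sentence above; the seed value is an assumption, and the variable name components is chosen to match its use in the plotting code below.

from sklearn.decomposition import PCA

pca = PCA(n_components=3, random_state=42)  # assumed seed
components = pca.fit_transform(X_copy)
print(pca.explained_variance_ratio_)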
The same results of the implemented PCA with standardization are achieved with sklearn’s PCA.

fig = px.scatter(components, x=0, y=1, color=y,
                 labels={'0': 'PC 1', '1': 'PC 2'})
fig.show()

fig = px.scatter_3d(components, x=0, y=1, z=2, color=y,
                    labels={'0': 'PC 1', '1': 'PC 2', '2': 'PC 3'})
fig.show()

In the same way, the scatterplots replicate what we have seen in the previous section.

I hope you found this post useful. The intention of this article was to provide a more compact implementation of the Principal Component Analysis. In this case, my implementation and the sklearn’s PCA provided the same results, but it can happen that sometimes they are slightly different if you use a different dataset. The Github code is here.

Thanks for reading. Have a nice day!
Add a single element to a LinkedList in Java
A single element can be added to a LinkedList by using the java.util.LinkedList.add() method. This method has one parameter, i.e. the element that is to be inserted in the LinkedList.

A program that demonstrates this is given as follows −

import java.util.LinkedList;
public class Demo {
   public static void main(String[] args) {
      LinkedList<String> l = new LinkedList<String>();
      l.add("Magic");
      System.out.println("The LinkedList is: " + l);
   }
}

The LinkedList is: [Magic]

Now let us understand the above program.

The LinkedList l is created. Then LinkedList.add() is used to add a single element to the LinkedList. Then the LinkedList is displayed. A code snippet which demonstrates this is as follows −

LinkedList<String> l = new LinkedList<String>();
l.add("Magic");
System.out.println("The LinkedList is: " + l);
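As an aside, not shown in the snippet above: LinkedList also inherits the two-argument add(int index, E element) overload from the List interface, which inserts a single element at a chosen position instead of at the tail.

LinkedList<String> l = new LinkedList<String>();
l.add("Magic");
l.add(0, "Black"); // insert at the head; the list is now [Black, Magic]
System.out.println(l);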
Program to convert Milliseconds to Date Format in Java
Let us first declare and initialize a milliseconds value variable −

long milliSeconds = 656478;

Now, convert Milliseconds to Date format −

DateFormat dateFormat = new SimpleDateFormat("dd MMM yyyy HH:mm:ss:SSS Z");
Date date = new Date(milliSeconds);

Following is the program to convert Milliseconds to Date Format in Java −

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
public class Demo {
   public static void main(String args[]) {
      long milliSeconds = 656478;
      DateFormat dateFormat = new SimpleDateFormat("dd MMM yyyy HH:mm:ss:SSS Z");
      Date date = new Date(milliSeconds);
      System.out.println(dateFormat.format(date));
   }
}

01 Jan 1970 00:10:56:478 +0000
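One caveat the program above glosses over: SimpleDateFormat formats in the JVM's default time zone, so the output shown assumes that default is UTC. Pinning the time zone explicitly makes the result reproducible on any machine.

// Force UTC so the epoch offset prints identically everywhere
dateFormat.setTimeZone(java.util.TimeZone.getTimeZone("UTC"));
System.out.println(dateFormat.format(date)); // 01 Jan 1970 00:10:56:478 +0000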
What is encapsulation in Java?
Encapsulation in Java is a mechanism for wrapping the data (variables) and code acting on the data (methods) together as a single unit. In encapsulation, the variables of a class will be hidden from other classes and can be accessed only through the methods of their current class. Therefore, it is also known as data hiding.

To achieve encapsulation in Java:

Declare the variables of a class as private.

Provide public setter and getter methods to modify and view the variables values.

public class EncapTest {
   private String name;
   private String idNum;
   private int age;

   public int getAge() {
      return age;
   }
   public String getName() {
      return name;
   }
   public String getIdNum() {
      return idNum;
   }
   public void setAge( int newAge) {
      age = newAge;
   }
   public void setName(String newName) {
      name = newName;
   }
   public void setIdNum( String newId) {
      idNum = newId;
   }
}
public class RunEncap {
   public static void main(String args[]) {
      EncapTest encap = new EncapTest();
      encap.setName("James");
      encap.setAge(20);
      encap.setIdNum("12343ms");

      System.out.print("Name : " + encap.getName() + " Age : " + encap.getAge());
   }
}

Name : James Age : 20
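To make the data-hiding point concrete, consider this short addition: bypassing the setters does not compile, which is exactly the guarantee encapsulation provides.

EncapTest encap = new EncapTest();
encap.setAge(20);  // OK: goes through the public setter
// encap.age = -5; // compile error: age has private access in EncapTest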
C++ Library - <functional>
Function objects are objects specifically designed to be used with a syntax similar to that of functions. Instances of std::function can store, copy, and invoke any Callable target -- functions, lambda expressions, bind expressions, or other function objects, as well as pointers to member functions and pointers to data members.

Following is the declaration for std::function.

template<class >
class function;

template< class R, class... Args >
class function<R(Args...)>

R − result_type.

argument_type − T if sizeof...(Args)==1 and T is the first and only type in Args.

The following example demonstrates std::function.

#include <functional>
#include <iostream>

struct Foo {
   Foo(int num) : num_(num) {}
   void print_add(int i) const { std::cout << num_+i << '\n'; }
   int num_;
};

void print_num(int i) {
   std::cout << i << '\n';
}

struct PrintNum {
   void operator()(int i) const {
      std::cout << i << '\n';
   }
};

int main() {
   std::function<void(int)> f_display = print_num;
   f_display(-9);

   std::function<void()> f_display_42 = []() { print_num(42); };
   f_display_42();

   std::function<void()> f_display_31337 = std::bind(print_num, 31337);
   f_display_31337();

   std::function<void(const Foo&, int)> f_add_display = &Foo::print_add;
   const Foo foo(314159);
   f_add_display(foo, 1);

   std::function<int(Foo const&)> f_num = &Foo::num_;
   std::cout << "num_: " << f_num(foo) << '\n';

   using std::placeholders::_1;
   std::function<void(int)> f_add_display2 = std::bind(&Foo::print_add, foo, _1);
   f_add_display2(2);

   std::function<void(int)> f_add_display3 = std::bind(&Foo::print_add, &foo, _1);
   f_add_display3(3);

   std::function<void(int)> f_display_obj = PrintNum();
   f_display_obj(18);
}

The sample output should be like this −

-9
42
31337
314160
num_: 314159
314161
314162
18
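One behaviour the example above does not show: invoking a default-constructed, empty std::function throws std::bad_function_call. A minimal sketch:

#include <functional>
#include <iostream>

int main() {
   std::function<void()> empty;   // stores no target
   try {
      empty();                    // invoking an empty std::function...
   } catch (const std::bad_function_call& e) {
      std::cout << e.what() << '\n';   // ...throws std::bad_function_call
   }
}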
ConcurrentLinkedQueue in Java with Examples - GeeksforGeeks
The ConcurrentLinkedQueue class in Java is a part of the Java Collection Framework. It belongs to the java.util.concurrent package. It was introduced in JDK 1.5. It is used to implement a Queue with the help of a LinkedList concurrently. It is an unbounded thread-safe implementation of Queue which inserts elements at the tail of the Queue in a FIFO (first-in-first-out) fashion. It can be used when an unbounded Queue is shared among many threads. This class does not permit null elements. Iterators are weakly consistent. This class and its iterator implement all of the optional methods of the Queue and Iterator interfaces.

Class Hierarchy:

java.lang.Object
 ↳ java.util.AbstractCollection<E>
  ↳ java.util.AbstractQueue<E>
   ↳ java.util.concurrent.ConcurrentLinkedQueue<E>

Declaration:

public class ConcurrentLinkedQueue<E> extends AbstractCollection<E> implements Queue<E>, Serializable

Here, E is the type of elements maintained by this collection.

To construct a ConcurrentLinkedQueue, we need to import it from java.util.concurrent.

1. ConcurrentLinkedQueue(): This constructor is used to construct an empty queue.

ConcurrentLinkedQueue<E> clq = new ConcurrentLinkedQueue<E>();

2. ConcurrentLinkedQueue(Collection<E> c): This constructor is used to construct a queue with the elements of the Collection passed as the parameter.

ConcurrentLinkedQueue<E> clq = new ConcurrentLinkedQueue<E>(Collection<E> c);

Below is a sample program to illustrate ConcurrentLinkedQueue in Java:

Example 1:

// Java program to demonstrate ConcurrentLinkedQueue

import java.util.concurrent.*;

class ConcurrentLinkedQueueDemo {
    public static void main(String[] args)
    {
        // Create a ConcurrentLinkedQueue
        // using ConcurrentLinkedQueue() constructor
        ConcurrentLinkedQueue<Integer> clq
            = new ConcurrentLinkedQueue<Integer>();

        clq.add(12);
        clq.add(70);
        clq.add(1009);
        clq.add(475);

        // Displaying the existing LinkedQueue
        System.out.println("ConcurrentLinkedQueue: " + clq);

        // Create a ConcurrentLinkedQueue
        // using ConcurrentLinkedQueue(Collection c)
        // constructor
        ConcurrentLinkedQueue<Integer> clq1
            = new ConcurrentLinkedQueue<Integer>(clq);

        // Displaying the existing LinkedQueue
        System.out.println("ConcurrentLinkedQueue1: " + clq1);
    }
}

ConcurrentLinkedQueue: [12, 70, 1009, 475]
ConcurrentLinkedQueue1: [12, 70, 1009, 475]

Example 2:

// Java code to illustrate
// methods of ConcurrentLinkedQueue

import java.util.concurrent.*;

class ConcurrentLinkedQueueDemo {
    public static void main(String[] args)
    {
        // Create a ConcurrentLinkedQueue
        // using ConcurrentLinkedQueue() constructor
        ConcurrentLinkedQueue<Integer> clq
            = new ConcurrentLinkedQueue<Integer>();

        clq.add(12);
        clq.add(70);
        clq.add(1009);
        clq.add(475);

        // Displaying the existing ConcurrentLinkedQueue
        System.out.println("ConcurrentLinkedQueue: " + clq);

        // Displaying the first element
        // using peek() method
        System.out.println("First Element is: " + clq.peek());

        // Remove and display the first element
        // using poll() method
        System.out.println("Head Element is: " + clq.poll());

        // Displaying the existing ConcurrentLinkedQueue
        System.out.println("ConcurrentLinkedQueue: " + clq);

        // Get the size using size() method
        System.out.println("Size: " + clq.size());
    }
}

ConcurrentLinkedQueue: [12, 70, 1009, 475]
First Element is: 12
Head Element is: 12
ConcurrentLinkedQueue: [70, 1009, 475]
Size: 3

1. Adding Elements

To add elements, ConcurrentLinkedQueue provides two methods.

add(): It inserts the element, passed as a parameter, at the tail of this ConcurrentLinkedQueue.
addAll(): It inserts all the elements of the Collection, passed as a parameter, at the end of a ConcurrentLinkedQueue. The elements are inserted in the same order as returned by the collection's iterator.

Java

// Java Program Demonstrate adding
// elements to ConcurrentLinkedQueue

import java.util.concurrent.*;
import java.util.*;

public class AddingElementsExample {
    public static void main(String[] args)
    {
        // Create an instance of ConcurrentLinkedQueue
        ConcurrentLinkedQueue<String> queue
            = new ConcurrentLinkedQueue<String>();

        // Add Strings to queue using add method
        queue.add("Kolkata");
        queue.add("Patna");
        queue.add("Delhi");
        queue.add("Jammu");

        // Displaying the existing queue
        System.out.println("ConcurrentLinkedQueue: " + queue);

        // Create an ArrayList of Strings
        ArrayList<String> arraylist = new ArrayList<String>();

        // Add Strings to the ArrayList
        arraylist.add("Sanjeet");
        arraylist.add("Rabi");
        arraylist.add("Debasis");
        arraylist.add("Raunak");
        arraylist.add("Mahesh");

        // Displaying the existing Collection
        System.out.println("Collection to be added: " + arraylist);

        // Apply addAll() method, passing
        // the arraylist as parameter
        boolean response = queue.addAll(arraylist);

        System.out.println("Collection added: " + response);

        // Displaying the updated queue
        System.out.println("ConcurrentLinkedQueue: " + queue);
    }
}

Output:

ConcurrentLinkedQueue: [Kolkata, Patna, Delhi, Jammu]
Collection to be added: [Sanjeet, Rabi, Debasis, Raunak, Mahesh]
Collection added: true
ConcurrentLinkedQueue: [Kolkata, Patna, Delhi, Jammu, Sanjeet, Rabi, Debasis, Raunak, Mahesh]

2. Removing Elements

The remove(Object o) method of ConcurrentLinkedQueue is used to remove a single instance of the specified element if it is present. It removes an element e such that o.equals(e). It returns true if this ConcurrentLinkedQueue contained the specified element, else it returns false.

Java

// Java Program Demonstrate removing
// elements from ConcurrentLinkedQueue

import java.util.concurrent.*;

public class RemovingElementsExample {
    public static void main(String[] args)
    {
        // Create an instance of ConcurrentLinkedQueue
        ConcurrentLinkedQueue<Integer> queue
            = new ConcurrentLinkedQueue<Integer>();

        // Add Numbers to queue using add(e) method
        queue.add(4353);
        queue.add(7824);
        queue.add(78249);
        queue.add(8724);

        // Displaying the existing queue
        System.out.println("ConcurrentLinkedQueue: " + queue);

        // Apply remove() for Number 78249
        boolean response = queue.remove(78249);

        // Print result
        System.out.println("Removing Number 78249 successful: " + response);

        // Displaying the updated queue
        System.out.println("Updated ConcurrentLinkedQueue: " + queue);
    }
}

Output:

ConcurrentLinkedQueue: [4353, 7824, 78249, 8724]
Removing Number 78249 successful: true
Updated ConcurrentLinkedQueue: [4353, 7824, 8724]

3. Iterating Elements

The iterator() method of ConcurrentLinkedQueue is used to return an iterator over the same elements as this ConcurrentLinkedQueue in the proper sequence. The iterator returns the elements in order from first (head) to last (tail). The returned iterator is weakly consistent.

Java

// Java Program Demonstrate Iterating
// over ConcurrentLinkedQueue

import java.util.concurrent.*;
import java.util.*;

public class TraversingExample {
    public static void main(String[] args)
    {
        // Create an instance of ConcurrentLinkedQueue
        ConcurrentLinkedQueue<String> queue
            = new ConcurrentLinkedQueue<String>();

        // Add Strings to queue using add(e) method
        queue.add("Aman");
        queue.add("Amar");
        queue.add("Sanjeet");
        queue.add("Rabi");

        // Displaying the existing queue
        System.out.println("ConcurrentLinkedQueue : " + queue);

        // Call iterator() method
        Iterator<String> iterator = queue.iterator();

        // Print elements of iterator
        System.out.println("\nThe String Values of iterator are:");
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }
}

Output:

ConcurrentLinkedQueue : [Aman, Amar, Sanjeet, Rabi]

The String Values of iterator are:
Aman
Amar
Sanjeet
Rabi
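What weak consistency means in practice: the iterator never throws ConcurrentModificationException, and changes made after the iterator is created may or may not be reflected. A small sketch (the class name is ours, and whether the concurrently added element appears in the output is not guaranteed):

Java

// Java Program Demonstrate the weakly
// consistent iterator of ConcurrentLinkedQueue

import java.util.concurrent.*;
import java.util.*;

public class WeaklyConsistentExample {
    public static void main(String[] args) throws InterruptedException
    {
        final ConcurrentLinkedQueue<Integer> queue
            = new ConcurrentLinkedQueue<Integer>(
                Arrays.asList(1, 2, 3, 4, 5));

        // Take an iterator first, then modify
        // the queue from another thread
        Iterator<Integer> it = queue.iterator();

        Thread writer = new Thread(new Runnable() {
            public void run() { queue.add(6); }
        });
        writer.start();
        writer.join();

        // No ConcurrentModificationException is thrown;
        // the concurrently added element may or may not appear
        while (it.hasNext())
            System.out.println(it.next());
    }
}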
4. Accessing Elements

The peek() and element() methods provided by Queue are used to access the elements of ConcurrentLinkedQueue. The element() method differs from peek() only in that it throws an exception if this queue is empty.

Java

// Java Program Demonstrate accessing
// elements of ConcurrentLinkedQueue

import java.util.*;
import java.util.concurrent.*;

public class AccessingElementsExample {
    public static void main(String[] args)
    {
        // Create an instance of ConcurrentLinkedQueue
        ConcurrentLinkedQueue<Integer> Q
            = new ConcurrentLinkedQueue<>();

        // Add numbers to end of Queue
        Q.add(7855642);
        Q.add(35658786);
        Q.add(5278367);
        Q.add(74381793);

        // Print queue
        System.out.println("Queue: " + Q);

        // Print head using element()
        System.out.println("Queue's head: " + Q.element());

        // Print head using peek()
        System.out.println("Queue's head: " + Q.peek());
    }
}

Output:

Queue: [7855642, 35658786, 5278367, 74381793]
Queue's head: 7855642
Queue's head: 7855642

Reference: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ConcurrentLinkedQueue.html
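None of the examples above actually share the queue between threads, which is the main reason to use this class. A minimal sketch of two producers and a draining consumer (the class and variable names are illustrative):

Java

// Java Program Demonstrate sharing a
// ConcurrentLinkedQueue between threads

import java.util.concurrent.*;

public class SharedQueueExample {
    public static void main(String[] args) throws InterruptedException
    {
        final ConcurrentLinkedQueue<Integer> queue
            = new ConcurrentLinkedQueue<Integer>();

        Runnable producer = new Runnable() {
            public void run()
            {
                for (int i = 0; i < 1000; i++)
                    queue.add(i);
            }
        };

        Thread p1 = new Thread(producer);
        Thread p2 = new Thread(producer);
        p1.start();
        p2.start();
        p1.join();
        p2.join();

        // poll() removes the head and returns null when
        // the queue is empty, so it is safe even if
        // another thread drained the queue first
        int count = 0;
        while (queue.poll() != null)
            count++;

        System.out.println("Consumed " + count + " elements");
    }
}

Output:

Consumed 2000 elements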
[ { "code": null, "e": 24372, "s": 24344, "text": "\n26 May, 2021" }, { "code": null, "e": 25009, "s": 24372, "text": "The ConcurrentLinkedQueue class in Java is a part of the Java Collection Framework. It belongs to java.util.concurrent package. It was introduced in JDK 1.5. It is used to implement Queue with the help of LinkedList concurrently. It is an unbounded thread-safe implementation of Queue which inserts elements at the tail of the Queue in a FIFO(first-in-first-out) fashion. It can be used when an unbounded Queue is shared among many threads. This class does not permit null elements. Iterators are weakly consistent. This class and its iterator implement all of the optional methods of the Queue and Iterator interfaces.Class Hierarchy: " }, { "code": null, "e": 25137, "s": 25009, "text": "java.lang.Object\n ↳ java.util.AbstractCollection<E>\n ↳ java.util.AbstractQueue<E>\n ↳ Class ConcurrentLinkedQueue<E>" }, { "code": null, "e": 25151, "s": 25137, "text": "Declaration: " }, { "code": null, "e": 25255, "s": 25151, "text": "public class ConcurrentLinkedQueue<E> extends AbstractCollection<E> implements Queue<E>, Serializable " }, { "code": null, "e": 25318, "s": 25255, "text": "Here, E is the type of elements maintained by this collection." }, { "code": null, "e": 25416, "s": 25318, "text": "To construct a ConcurrentLinkedQueue, we need to import it from java.util.ConcurrentLinkedQueue. " }, { "code": null, "e": 25498, "s": 25416, "text": "1. ConcurrentLinkedQueue(): This constructor is used to construct an empty queue." }, { "code": null, "e": 25561, "s": 25498, "text": "ConcurrentLinkedQueue<E> clq = new ConcurrentLinkedQueue<E>();" }, { "code": null, "e": 25711, "s": 25561, "text": "2. ConcurrentLinkedQueue(Collection<E> c): This constructor is used to construct a queue with the elements of the Collection passed as the parameter." 
}, { "code": null, "e": 25789, "s": 25711, "text": "ConcurrentLinkedQueue<E> clq = new ConcurrentLinkedQueue<E>(Collection<E> c);" }, { "code": null, "e": 25861, "s": 25789, "text": "Below is a sample program to illustrate ConcurrentLinkedQueue in Java: " }, { "code": null, "e": 25873, "s": 25861, "text": "Example 1: " }, { "code": null, "e": 25878, "s": 25873, "text": "Java" }, { "code": "// Java program to demonstrate ConcurrentLinkedQueue import java.util.concurrent.*; class ConcurrentLinkedQueueDemo { public static void main(String[] args) { // Create a ConcurrentLinkedQueue // using ConcurrentLinkedQueue() constructor ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<Integer>(); clq.add(12); clq.add(70); clq.add(1009); clq.add(475); // Displaying the existing LinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + clq); // Create a ConcurrentLinkedQueue // using ConcurrentLinkedQueue(Collection c) // constructor ConcurrentLinkedQueue<Integer> clq1 = new ConcurrentLinkedQueue<Integer>(clq); // Displaying the existing LinkedQueue System.out.println(\"ConcurrentLinkedQueue1: \" + clq1); }}", "e": 26803, "s": 25878, "text": null }, { "code": null, "e": 26890, "s": 26803, "text": "ConcurrentLinkedQueue: [12, 70, 1009, 475]\nConcurrentLinkedQueue1: [12, 70, 1009, 475]" }, { "code": null, "e": 26905, "s": 26892, "text": "Example 2: " }, { "code": null, "e": 26910, "s": 26905, "text": "Java" }, { "code": "// Java code to illustrate// methods of ConcurrentLinkedQueue import java.util.concurrent.*; class ConcurrentLinkedQueueDemo { public static void main(String[] args) { // Create a ConcurrentLinkedQueue // using ConcurrentLinkedQueue() // constructor ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<Integer>(); clq.add(12); clq.add(70); clq.add(1009); clq.add(475); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + clq); // Displaying the first element // using peek() method System.out.println(\"First Element is: \" + clq.peek()); // Remove and display the first element // using poll() method System.out.println(\"Head Element is: \" + clq.poll()); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + clq); // Get the size using size() method System.out.println(\"Size: \" + clq.size()); }}", "e": 28105, "s": 26910, "text": null }, { "code": null, "e": 28236, "s": 28105, "text": "ConcurrentLinkedQueue: [12, 70, 1009, 475]\nFirst Element is: 12\nHead Element is: 12\nConcurrentLinkedQueue: [70, 1009, 475]\nSize: 3" }, { "code": null, "e": 28257, "s": 28238, "text": "1. Adding Elements" }, { "code": null, "e": 28317, "s": 28257, "text": "To add elements ConcurrentLinkedQueue provides two methods." }, { "code": null, "e": 28572, "s": 28317, "text": "add() It inserts the element, passed as a parameter at the tail of this ConcurrentLinkedQueue. This method returns True if insertion is successful. ConcurrentLinkedQueue is unbounded, so this method will never throw IllegalStateException or return false." }, { "code": null, "e": 28781, "s": 28572, "text": "addAll() It inserts all the elements of the Collection, passed as a parameter at the end of a ConcurrentLinkedQueue. The insertion of the element is in the same order as returned by the collection’s iterator." 
}, { "code": null, "e": 28786, "s": 28781, "text": "Java" }, { "code": "// Java Program Demonstrate adding// elements to ConcurrentLinkedQueue import java.util.concurrent.*;import java.util.*; public class AddingElementsExample { public static void main(String[] args) { // Create an instance of ConcurrentLinkedQueue ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>(); // Add String to queue using add method queue.add(\"Kolkata\"); queue.add(\"Patna\"); queue.add(\"Delhi\"); queue.add(\"Jammu\"); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + queue); // create a ArrayList of Strings ArrayList<String> arraylist = new ArrayList<String>(); // add String to ArrayList arraylist.add(\"Sanjeet\"); arraylist.add(\"Rabi\"); arraylist.add(\"Debasis\"); arraylist.add(\"Raunak\"); arraylist.add(\"Mahesh\"); // Displaying the existing Collection System.out.println(\"Collection to be added: \" + arraylist); // apply addAll() method and passed // the arraylist as parameter boolean response = queue.addAll(arraylist); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"Collection added: \" + response); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + queue); }}", "e": 30195, "s": 28786, "text": null }, { "code": null, "e": 30434, "s": 30198, "text": "ConcurrentLinkedQueue: [Kolkata, Patna, Delhi, Jammu]\nCollection to be added: [Sanjeet, Rabi, Debasis, Raunak, Mahesh]\nCollection added: true\nConcurrentLinkedQueue: [Kolkata, Patna, Delhi, Jammu, Sanjeet, Rabi, Debasis, Raunak, Mahesh]" }, { "code": null, "e": 30457, "s": 30436, "text": "2. Removing Elements" }, { "code": null, "e": 30743, "s": 30459, "text": "The remove(Object o) method of ConcurrentLinkedQueue is used to remove a single instance of the specified element if it is present. It removes an element e such that o.equals(e). It returns true if this ConcurrentLinkedQueue contained the specified element else it will return false." }, { "code": null, "e": 30750, "s": 30745, "text": "Java" }, { "code": "// Java Program Demonstrate removing// elements from ConcurrentLinkedQueue import java.util.concurrent.*; public class RemovingElementsExample { public static void main(String[] args) { // Create an instance of ConcurrentLinkedQueue ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<Integer>(); // Add Numbers to queue using add(e) method queue.add(4353); queue.add(7824); queue.add(78249); queue.add(8724); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue: \" + queue); // apply remove() for Number 78249 boolean response = queue.remove(78249); // print results System.out.println(\"Removing Number 78249 successful: \" + response); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"Updated ConcurrentLinkedQueue: \" + queue); }}", "e": 31676, "s": 30750, "text": null }, { "code": null, "e": 31817, "s": 31679, "text": "ConcurrentLinkedQueue: [4353, 7824, 78249, 8724]\nRemoving Number 78249 successful: true\nUpdated ConcurrentLinkedQueue: [4353, 7824, 8724]" }, { "code": null, "e": 31842, "s": 31819, "text": "3. Iterating Elements " }, { "code": null, "e": 32136, "s": 31844, "text": "The iterator() method of ConcurrentLinkedQueue is used to return an iterator of the same elements as this ConcurrentLinkedQueue in a proper sequence. The elements returned from this method contains elements in order from first(head) to last(tail). 
The returned iterator is weakly consistent." }, { "code": null, "e": 32143, "s": 32138, "text": "Java" }, { "code": "// Java Program Demonstrate Iterating// over ConcurrentLinkedQueue import java.util.concurrent.*;import java.util.*; public class TraversingExample { public static void main(String[] args) { // Create an instance of ConcurrentLinkedQueue ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>(); // Add String to queue using add(e) method queue.add(\"Aman\"); queue.add(\"Amar\"); queue.add(\"Sanjeet\"); queue.add(\"Rabi\"); // Displaying the existing ConcurrentLinkedQueue System.out.println(\"ConcurrentLinkedQueue : \" + queue); // Call iterator() method Iterator iterator = queue.iterator(); // Print elements of iterator System.out.println(\"\\nThe String Values of iterator are:\"); while (iterator.hasNext()) { System.out.println(iterator.next()); } }}", "e": 33040, "s": 32143, "text": null }, { "code": null, "e": 33154, "s": 33043, "text": "ConcurrentLinkedQueue : [Aman, Amar, Sanjeet, Rabi]\n\nThe String Values of iterator are:\nAman\nAmar\nSanjeet\nRabi" }, { "code": null, "e": 33178, "s": 33156, "text": "4. Accessing Elements" }, { "code": null, "e": 33285, "s": 33180, "text": "peek() and element() methods provided by Queue are used to access the elements of ConcurrentLinkedQueue." }, { "code": null, "e": 33391, "s": 33287, "text": "element() method differs from peek() method only in that it throws an exception if this queue is empty." }, { "code": null, "e": 33398, "s": 33393, "text": "Java" }, { "code": "// Java Program Demonstrate accessing// elements of ConcurrentLinkedQueue import java.util.*;import java.util.concurrent.*; public class AccessingElementsExample { public static void main(String[] args) throws IllegalStateException { // Create an instance of ConcurrentLinkedQueue ConcurrentLinkedQueue<Integer> Q = new ConcurrentLinkedQueue<>(); // Add numbers to end of Queue Q.add(7855642); Q.add(35658786); Q.add(5278367); Q.add(74381793); // print queue System.out.println(\"Queue: \" + Q); // print head System.out.println(\"Queue's head: \" + Q.element()); // print head System.out.println(\"Queue's head: \" + Q.peek()); }}", "e": 34131, "s": 33398, "text": null }, { "code": null, "e": 34224, "s": 34134, "text": "Queue: [7855642, 35658786, 5278367, 74381793]\nQueue's head: 7855642\nQueue's head: 7855642" }, { "code": null, "e": 34233, "s": 34226, "text": "METHOD" }, { "code": null, "e": 34245, "s": 34233, "text": "DESCRIPTION" }, { "code": null, "e": 34252, "s": 34245, "text": "METHOD" }, { "code": null, "e": 34264, "s": 34252, "text": "DESCRIPTION" }, { "code": null, "e": 34271, "s": 34264, "text": "METHOD" }, { "code": null, "e": 34283, "s": 34271, "text": "DESCRIPTION" }, { "code": null, "e": 34290, "s": 34283, "text": "METHOD" }, { "code": null, "e": 34302, "s": 34290, "text": "DESCRIPTION" }, { "code": null, "e": 34309, "s": 34302, "text": "METHOD" }, { "code": null, "e": 34321, "s": 34309, "text": "DESCRIPTION" }, { "code": null, "e": 34441, "s": 34321, "text": "Reference: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ConcurrentLinkedQueue.html" }, { "code": null, "e": 34467, "s": 34443, "text": "Ganeshchowdharysadanala" }, { "code": null, "e": 34484, "s": 34467, "text": "arorakashish0911" }, { "code": null, "e": 34504, "s": 34484, "text": "Java - util package" }, { "code": null, "e": 34521, "s": 34504, "text": "Java-Collections" }, { "code": null, "e": 34548, "s": 34521, "text": "Java-ConcurrentLinkedQueue" }, { 
"code": null, "e": 34553, "s": 34548, "text": "Java" }, { "code": null, "e": 34558, "s": 34553, "text": "Java" }, { "code": null, "e": 34575, "s": 34558, "text": "Java-Collections" }, { "code": null, "e": 34673, "s": 34575, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 34682, "s": 34673, "text": "Comments" }, { "code": null, "e": 34695, "s": 34682, "text": "Old Comments" }, { "code": null, "e": 34725, "s": 34695, "text": "HashMap in Java with Examples" }, { "code": null, "e": 34744, "s": 34725, "text": "Interfaces in Java" }, { "code": null, "e": 34795, "s": 34744, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 34813, "s": 34795, "text": "ArrayList in Java" }, { "code": null, "e": 34844, "s": 34813, "text": "How to iterate any Map in Java" }, { "code": null, "e": 34876, "s": 34844, "text": "Initialize an ArrayList in Java" }, { "code": null, "e": 34895, "s": 34876, "text": "Overriding in Java" }, { "code": null, "e": 34915, "s": 34895, "text": "Collections in Java" }, { "code": null, "e": 34939, "s": 34915, "text": "Singleton Class in Java" } ]
Predicting Who’s going to Survive on Titanic Dataset | by Vincent Tatan | Towards Data Science
All aboard!! Embark on your Data Journey with the Titanic.

Welcome to Data Science SS. My name is Vincent and hopefully you will enjoy our first journey into Data Science. Today, we are going to sail the Atlantic Ocean with our fallen comrade, the Titanic. We are going to carefully explore the data icebergs, sail the Machine Learning Sea, and prepare for an insightful disembarkation. I hope you have a pleasant journey with me and Data Science SS.

Welcome!! The purpose of this article is to whet your appetite for Data Science with the famous beginner Kaggle challenge, "Titanic: Machine Learning from Disaster."

In this article, you are going to embark on your first Exploratory Data Analysis (EDA) and Machine Learning model to predict the survival of Titanic passengers. This is the genesis challenge for most onboarding data scientists and will set you up for success. I hope this article inspires you. All aboard!!!

We are going to use Jupyter Notebook with several data science Python libraries. If you haven't, please install Anaconda on your Windows or Mac. Alternatively, you can follow my Notebook and enjoy this guide!

For those of you who are pure beginners, don't worry; I will guide you through the article for end-to-end data analysis. There are a few milestones to explore today, and it is a long way until we reach our first destination. Let's get sailing, shall we?

A Python library is a collection of functions and methods that allows you to perform many actions without writing your own code (Quora). This will help you run several data science functions with no hassle. If you do not have a library, you can run pip install <library name> on your command prompt to install it.

Here is the list of libraries we are using:

Numpy: Multidimensional Array and Matrix Representation Library
Pandas: Python Data Analysis Library for Data Frames, CSV File I/O
Matplotlib: Data Visualization Library
Seaborn: Data Visualization Library built on top of Matplotlib. This gives you cleaner visualizations and an easier interface to call.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

We can use pandas to read train.csv from the Titanic Data Page. This code will create a DataFrame object, a two-dimensional array that optimizes the data exploration process. Think about it as an Excel sheet in Python with rows and columns.

maindf = pd.read_csv('dataset/Titanic/train.csv')

We will start by viewing the very first few rows inside the data. This is to take a sneak peek at the data and make sure you extracted it properly.

maindf.head()

The describe method will give you the statistical description of the numerical columns. If you pass the include parameter as 'object', it will describe the non-numerical columns instead. This is a very useful method to grab a quick statistical understanding of the data.
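For reference, the corresponding calls are:

# Statistical summary of the numerical columns
maindf.describe()

# Summary of the non-numerical (object) columns
maindf.describe(include='object')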
From this description, we can find the following thoughts:

The mean and the distribution of the variables.
Most passengers bought their tickets for a relatively low price, but a few bought at high cost, indicating possible VIPs.
The Parch distribution is highly skewed, as the quartiles indicate 0 and the max indicates 6. This means that most people do not bring parents or children on board, and a few bring up to 6 children and parents on board.

The info method helps you figure out the data types and the existence of empty values.
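For reference, the call is simply:

# Column dtypes and non-null counts in one view
maindf.info()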
Here, we found out that the columns Age, Cabin, and Embarked have missing values. This is good: we have identified a few paths to explore. But first, let us clean the data.

There are several ways to replace the missing values in the Age column:

Not recommended: Replace by the mean of ages. This is not a good approach, as you can see that most of the passengers are 20–30 years old, where the oldest is 80 years old and the youngest is 0.42 years old (an infant).
Recommended: Replace by the median of ages. This is a better approach, as it safely allocates our missing values to 20–30 years old, comfortably inside the interquartile range.
Most recommended: Replace the ages according to the median for each salutation. This is the best approach, as the salutation implies the common ages among the imputed data (e.g. Sir, Mdm, etc.).

Conclusion: let us take the third approach. In case you do not know what lambda is, you can think of it as an inline function. This makes the code much simpler to read.

# The lambda body extracting the salutation was garbled in the source;
# this reconstruction pulls the title between ',' and '.' in each name
maindf['Salutation'] = maindf.Name.apply(
    lambda name: name.split(',')[1].split('.')[0].strip())
group = maindf.groupby(['Salutation', 'Pclass'])
maindf.Age = group.Age.apply(lambda x: x.fillna(x.median()))
maindf.Age.fillna(maindf.Age.median(), inplace=True)

To simplify the analysis, let's drop some columns which might not be relevant to survival, such as passenger id and name.

However, you should be very careful in dropping these columns, as this will limit your assumptions. For example, there might be a surprisingly higher survival probability for passengers named "John". Upon closer inspection, this is because the name "John" is usually reserved for Englishmen of high Social Economic Status (SES). Therefore, if we did not have the SES column, we might need to include names in our analysis.

cleandf = maindf.loc[:,['Survived','Pclass','Sex','Age','SibSp','Parch','Embarked']]

In this section, we are going to manipulate some of the features (columns) into sensible and more meaningful data for analysis.

We will classify the SES feature based on the numerical value of Pclass. We simply encode 1 as upper, 2 as middle, and 3 as lower.

cleandf['socioeconomicstatus'] = cleandf.Pclass.map({1:'upper', 2:'middle', 3:'lower'})

We will map the alphabetical values ('C', 'Q', and 'S') to their respective ports. We can then split the number of passengers according to their port of embarkation and survival status, and use pie charts to compare the percentages.

cleandf['embarkedport'] = cleandf.Embarked.map({'C':'Cherbourg', 'Q':'Queenstown', 'S':'Southampton'})

We will generate the histogram of age and derive the following binning. Binning is a great way to quantify a skewed distribution of continuous data into discrete categories. Each bin represents a range and intensity of the grouped numeric values.

agesplit = [0, 10, 18, 25, 40, 90]
agestatus = ['Adolescent', 'Teenager', 'Young Adult', 'Adult', 'Elder']
cleandf['agegroup'] = pd.cut(cleandf.Age, agesplit, labels=agestatus)

The lifeboats stayed afloat, however, and thus the legend of the "Birkenhead drill" — the protocol that prioritizes women and children during maritime disasters — was born. — History

The Birkenhead Drill raises the question of whether the presence of your children or spouse raised your survival rate. Therefore, we engineer the number of siblings/spouses (SibSp) and parents/children (Parch) aboard into whether each person came with a family member: hasfamily.

cleandf['familymembers'] = cleandf.SibSp + cleandf.Parch
hasfamily = (cleandf.familymembers > 0) * 1
cleandf['hasfamily'] = hasfamily

Congrats! You have done your feature engineering. Now we can use these new features for analysis.

There are many analyses we could run on the newly cleaned dataset. For now, let us engage with a few questions. Feel free to access my Python Notebook for more analysis.

Would the survival rate differ by gender?
Would the survival rate differ by SES?
Would the survival rate differ by gender and SES?

Just a quick tip for data scientists: your role is to keep asking questions and answering them in statistical manners. This is not a one-time waterfall process but a continuous and iterative one. As I would say...

This is just the tip of the iceberg!

maindf.groupby(['Survived','Sex']).count().Name
maindf.groupby(['Survived','Sex']).count().Name.plot(kind='bar')

In total, there are 342 survivors and 549 non-survivors. Of those who survived, 233 are female and 109 are male, whereas of those who did not, 81 are female and 468 are male. It seems that females were more likely to survive than males.

We can use a crosstab to clearly generate the counts of two categorical features (SES, survival):

survived = pd.crosstab(index=cleandf.Survived, columns=cleandf.socioeconomicstatus, margins=True)
survived.columns = ['lower','middle','upper','rowtotal']
survived.index = ['died','survived','coltotal']

At a quick look, it seems that SES greatly matters to survival: the upper class survived more than the lower class. However, let's further test this hypothesis using the chi-square method.

Let us try to understand what this means. The Chi2 Stat is the chi-square statistic, and the degrees of freedom come from the table's rows and columns; here we have 6 degrees of freedom from the 3 SES events (lower, middle, upper) and 2 survival events (died, survived). The larger the degrees of freedom, the larger the differences need to be for statistical significance.

The p value determines the significance of SES towards survival. We will reject our null hypothesis if our p value is below alpha (0.01). Since our p value is 6.25 and way above our alpha, we can say that this result is not statistically significant.
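The test call itself is not shown here; with SciPy it would presumably look something like this (note that the margin totals must be excluded from the table before testing, otherwise the degrees of freedom are inflated):

from scipy import stats

# Use only the 2x3 core of the crosstab (drop the rowtotal/coltotal margins)
observed = survived.iloc[0:2, 0:3]
chi2_stat, p_value, dof, expected = stats.chi2_contingency(observed)
print(chi2_stat, p_value, dof)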
Maybe we could include sex and observe whether there is a significant difference?

This crosstab allows us to generate a feature (SES, Sex) for a total of 6 possible events. It seems that there are big differences in survival between high-SES females and low-SES males.

survivedcrosstabsex = pd.crosstab(index=cleandf.Survived, columns=[cleandf['socioeconomicstatus'], cleandf['Sex']], margins=True)

Let's do the same thing as before and run this crosstab of values through a chi-square test.

The p value is lower, which shows greater statistical significance than analysing SES alone. However, the p value is still above our alpha (0.01). Therefore we still cannot conclude that SES and gender are statistically significant for inferring survival status. But we can see that it is close to statistical significance, which is a good sign!

For now we can explore these features with our machine learning model :).

Let's model our findings with a decision tree. A decision tree is a machine learning model that provides rule-based classification driven by information gain. It is a nice and sleek way to choose the critical features and rules that best discriminate our dependent variable.

We will train on our train data, then prune the tree to avoid overfitting. Finally, we will use the GraphViz library to visualize the tree as follows. As the code is long, feel free to skim it in my Python Notebook.

From this graph, we can find the beauty of the decision tree as follows:

Understanding the distribution and profiles of survivors: The array indicates [number of deaths, number of survivals]. In each node we can see a different split of the array and the rule used to branch out to the lower-level nodes. We can find out that if sex = female (≤0.5), the array indicates more survivors [69, 184]. We can follow the leaf-level nodes in the same way.
Understanding the critical features that separate survivors: At the top level, we see Sex and SES as our major features. However, as we discovered before, these features alone are not enough to be statistically significant. Therefore, the third level, Port of Embarkation and Age group, might give us better significance.

Congratulations, you have just created your first machine learning model!
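The full training code lives in the notebook linked above; a minimal sketch of what it presumably contains is below. The one-hot encoding and max_depth=3 are assumptions; only the names X_train, X_test, y_test, and clftree are dictated by the evaluation snippet that follows.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Assumed encoding: one-hot the engineered categorical features
X = pd.get_dummies(cleandf[['Sex', 'socioeconomicstatus', 'agegroup',
                            'embarkedport', 'hasfamily']])
y = cleandf['Survived']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# A shallow tree stands in for the pruning mentioned above
clftree = DecisionTreeClassifier(max_depth=3)
clftree.fit(X_train, y_train)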
from sklearn.metrics import accuracy_score

train_predictions = clftree.predict(X_test)
acc = accuracy_score(y_test, train_predictions)

From here we retrieve that our accuracy is 77.65%. This means that out of 100 passengers, the model answers 77 passengers' survival statuses correctly.

To evaluate this model further, we can add a confusion matrix and an ROC curve to our evaluation, which I will cover in subsequent articles.

You will generate the csv of your predictions using the following method and commands. We will name it titanic_submission_tree.csv and save it in your local directory.

# testdf (the Kaggle test set) and predictions_tree are defined in the notebook
def titanic_kaggle_submission(filename, predictions):
    submission = pd.DataFrame({'PassengerId': testdf['PassengerId'],
                               'Survived': predictions})
    submission.to_csv(filename, index=False)

titanic_kaggle_submission("titanic_submission_tree.csv", predictions_tree)

Once you have submitted titanic_submission_tree.csv, you will receive a result like the following. Here, I got rank 6535 out of all the relevant entries.

From here, there are a few ways to push the score further:

Hyperparameter tuning and model selection (see the sketch after this list). Gradient-boosted and other ensemble models such as Random Forest and XGBoost feature in many stronger submissions.
Feature engineering: find out what other features could be generated. For example, maybe parents with kids are less likely to be saved. Or maybe people whose names start with early letters of the alphabet are saved first, as room allocations are ordered by ticket type and alphabetical order? Creating features based on such questions will improve the accuracy of our analysis.
Experiment, experiment, and experiment: have fun and keep tuning your findings. No ideas are too absurd in this challenge!
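As an illustration of the first point, a grid search over the tree's settings might look like this; the parameter grid is only an example:

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {'max_depth': [2, 3, 4, 5],
              'min_samples_leaf': [1, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(), param_grid,
                      cv=5, scoring='accuracy')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)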
Congrats! You have submitted your first data analysis and Kaggle submission. Now, embark on your own Data Science Journey!!

In this article, we learnt one method to design an Exploratory Data Analysis (EDA) on the famous Titanic data. We learnt how to import, explore, clean, engineer, analyse, model, and submit.

Apart from that, we also learnt useful techniques:

Explore → describe, plot, histogram
Clean → insert missing numerical values, remove irrelevant columns
Engineer → binning, labelling
Analyse → survival plots, crosstab tables, chi-square test
Model → decision tree visualization, prediction, and evaluation

I really hope this has been a great read and a source of inspiration for you to develop and innovate. Please comment below with suggestions and feedback. Just like you, I am still learning how to become a better Data Scientist and Engineer. Please help me improve so that I can help you better in my subsequent article releases.

Thank you and happy coding :)

Vincent Tatan is a Data and Technology enthusiast with relevant working experience from Visa Inc. and Lazada implementing microservice architectures, business intelligence, and analytics pipeline projects.

Vincent is a native Indonesian with a record of accomplishments in problem solving, with strengths in Full Stack Development, Data Analytics, and Strategic Planning.

He has been actively consulting SMU BI & Analytics Club, guiding aspiring data scientists and engineers from various backgrounds, and opening up his expertise for businesses to develop their products.

Please reach out to Vincent via LinkedIn, Medium or YouTube Channel.
[ { "code": null, "e": 222, "s": 171, "text": "All aboard!! Embark your Data Journey With Titanic" }, { "code": null, "e": 612, "s": 222, "text": "Welcome to Data Science SS. My name is Vincent and hopefully you will enjoy our first journey to Data Science. Today, we are going to sail in the Atlantic Ocean with our fallen comrade, the Titanic. We are going to carefully explore the data icebergs, sail the Machine Learning Sea, and prepare for insightful disembarkation. I Hope you have a pleasant journey with me and Data Science SS." }, { "code": null, "e": 783, "s": 612, "text": "Welcome!! The purpose of this article is to whet your appetites on Data Science with the famous Kaggle Challenge for beginner — “Titanic: Machine Learning from Disaster.”" }, { "code": null, "e": 1085, "s": 783, "text": "In this article, You are going to embark on your first Exploratory Data Analysis (EDA) and Machine Learning to predict the survival of Titanic Passengers. This is the genesis challenge for most onboarding data scientists and will set you up for success. I hope this article inspires you. All aboard!!!" }, { "code": null, "e": 1293, "s": 1085, "text": "We are going to use Jupyter Notebook with several data science Python libraries. If you haven’t please install Anaconda on your Windows or Mac. Alternatively, you can follow my Notebook and enjoy this guide!" }, { "code": null, "e": 1462, "s": 1293, "text": "For most of you that are pure beginners. Don’t worry, I will guide you through the article for end to end data analysis. Here are a few milestones to explore for today:" }, { "code": null, "e": 1554, "s": 1462, "text": "Whew, that is a long way until we reach our first destination. Let’s get sailing, shall we?" }, { "code": null, "e": 1861, "s": 1554, "text": "Python library is a collection of functions and methods that allows you to perform many actions without writing your code (Quora). This will help you run several data science functions with no hassle. If you do not have the library, you can run pip install <library name> on your command prompt to install." }, { "code": null, "e": 1905, "s": 1861, "text": "Here is the list of libraries we are using:" }, { "code": null, "e": 2208, "s": 1905, "text": "Numpy : Multidimensional Array and Matrix Representation LibraryPandas : Python Data Analysis Library for Data Frame, CSV File I/OMatplotlib : Data Visualization LibrarySeaborn : Data Visualization Library built on top of Matplotlib. This gives you a cleaner visualization and easier interface to call." }, { "code": null, "e": 2273, "s": 2208, "text": "Numpy : Multidimensional Array and Matrix Representation Library" }, { "code": null, "e": 2340, "s": 2273, "text": "Pandas : Python Data Analysis Library for Data Frame, CSV File I/O" }, { "code": null, "e": 2380, "s": 2340, "text": "Matplotlib : Data Visualization Library" }, { "code": null, "e": 2514, "s": 2380, "text": "Seaborn : Data Visualization Library built on top of Matplotlib. This gives you a cleaner visualization and easier interface to call." }, { "code": null, "e": 2606, "s": 2514, "text": "import numpy as np import pandas as pd import seaborn as snsimport matplotlib.pyplot as plt" }, { "code": null, "e": 2844, "s": 2606, "text": "We can use pandas to read train.csv from the Titanic Data Page. This code will create a DataFrame Object which is a two dimensional array to optimize data exploration process. Think about it as Excelsheet in Python with rows and columns." 
}, { "code": null, "e": 2894, "s": 2844, "text": "maindf = pd.read_csv(‘dataset/Titanic/train.csv’)" }, { "code": null, "e": 3046, "s": 2894, "text": "We will start by viewing the very first few rows inside the data. This is to take a sneak peek at the data and make sure you extract the data properly." }, { "code": null, "e": 3060, "s": 3046, "text": "maindf.head()" }, { "code": null, "e": 3320, "s": 3060, "text": "The describe will help you get all the statistical description of numerical columns. If you write the include parameter as object, it would describe the non-numerical columns. This is very useful method to grab a quick understanding on the data statistically." }, { "code": null, "e": 3379, "s": 3320, "text": "From this description, we can find the following thoughts:" }, { "code": null, "e": 3773, "s": 3379, "text": "The mean and the distribution of the variables.Most passengers bought the tickets for relatively lower price. But, a few bought at high cost — indicating possible VIPs.The Parch distribution is highly skewed as the quartiles indicate 0 and the max indicates 6. This means that most people do not bring parents or children on board and a few parents bring up to 6 children and parents on board." }, { "code": null, "e": 3821, "s": 3773, "text": "The mean and the distribution of the variables." }, { "code": null, "e": 3943, "s": 3821, "text": "Most passengers bought the tickets for relatively lower price. But, a few bought at high cost — indicating possible VIPs." }, { "code": null, "e": 4169, "s": 3943, "text": "The Parch distribution is highly skewed as the quartiles indicate 0 and the max indicates 6. This means that most people do not bring parents or children on board and a few parents bring up to 6 children and parents on board." }, { "code": null, "e": 4333, "s": 4169, "text": "The info helps you figure out the data types and the existence of empty values. Here, we found out that the columns Age, Cabin, and Embarked possess missing value." }, { "code": null, "e": 4424, "s": 4333, "text": "This is good, we have identified a few paths to explore. But first, let us clean the data." }, { "code": null, "e": 4492, "s": 4424, "text": "There are several ways to replace the missing values in Age Column:" }, { "code": null, "e": 5122, "s": 4492, "text": "Not recommended: Replace by the mean of ages. This is not a good approach, as you could see that most of the passengers are located among 20–30 years old where the oldest is 80 years old and the youngest is 0.42 years old (infant).Recommended: Replace by the median of ages. This would be a better approach as this would safely allocate our missing values to 20–30 years old which are comfortably the inside interquartile ranges.Most Recommended: Replace the ages according to the median by each salutation. This would be the best approach as the salutation will imply the common ages among the imputed data (e.g: Sir, Mdm, etc)." }, { "code": null, "e": 5354, "s": 5122, "text": "Not recommended: Replace by the mean of ages. This is not a good approach, as you could see that most of the passengers are located among 20–30 years old where the oldest is 80 years old and the youngest is 0.42 years old (infant)." }, { "code": null, "e": 5553, "s": 5354, "text": "Recommended: Replace by the median of ages. This would be a better approach as this would safely allocate our missing values to 20–30 years old which are comfortably the inside interquartile ranges." 
}, { "code": null, "e": 5754, "s": 5553, "text": "Most Recommended: Replace the ages according to the median by each salutation. This would be the best approach as the salutation will imply the common ages among the imputed data (e.g: Sir, Mdm, etc)." }, { "code": null, "e": 5928, "s": 5754, "text": "Conclusion: Let us take the third approach. In case if you do not know what lambda is, you could think of it as an inline function. This makes the code much simpler to read." }, { "code": null, "e": 6130, "s": 5928, "text": "maindf['Salutation'] = maindf.Name.apply(lambda name: group = maindf.groupby(['Salutation', 'Pclass'])group.Age.apply(lambda x: x.fillna(x.median()))maindf.Age.fillna(maindf.Age.median, inplace = True)" }, { "code": null, "e": 6255, "s": 6130, "text": "To simplify the analysis, let’s drop some columns which might not be relevant to the survival such as passenger id and name." }, { "code": null, "e": 6687, "s": 6255, "text": "However, you should be very careful in dropping these columns as this will limit your assumptions. For example, there might a surprising higher probability of passengers survive with the name “John”. Upon closer inspection, this is because the name “John” is usually reserved for Englishmen who have high Social Economic Status (SES). Therefore, if we do not have the SES column, we might need to include names inside our analysis." }, { "code": null, "e": 6772, "s": 6687, "text": "cleandf = maindf.loc[:,['Survived','Pclass','Sex','Age','SibSp','Parch','Embarked']]" }, { "code": null, "e": 6893, "s": 6772, "text": "In this sector, we are going to manipulate some of the features (columns) to sensible and more meaningful data analysis." }, { "code": null, "e": 7018, "s": 6893, "text": "We will classify the SES features based on the numerical value from Pclass. We just encode 1 — upper, 2 — middle, 3 — lower." }, { "code": null, "e": 7102, "s": 7018, "text": "cleandf['socioeconomicstatus']=cleandf.Pclass.map({1:'upper',2:'middle',3:'lower'})" }, { "code": null, "e": 7392, "s": 7102, "text": "We will map the alphabetical values of (‘C’,’Q’, and ‘S’) to their respective ports. We can then split the number of the passengers according to their port of embarkation and survival status. We will use pie chart to compare the percentage based on port of embarkation and survival status." }, { "code": null, "e": 7491, "s": 7392, "text": "cleandf['embarkedport']=cleandf.Embarked.map({'C':'Cherbourg','Q':'Queenstown','S':'Southampton'})" }, { "code": null, "e": 7746, "s": 7491, "text": "We will generate the histogram of age and derive the following binning. Binning is a great way to quantify skewed distribution of continuous data into discrete categories. Each bin represents a degree of range and intensity of the grouped numeric values." }, { "code": null, "e": 7908, "s": 7746, "text": "agesplit = [0,10,18,25,40,90]agestatus = ['Adolescent','Teenager','Young Adult','Adult','Elder']cleandf['agegroup']=pd.cut(cleandf.Age,agesplit,labels=agestatus)" }, { "code": null, "e": 8091, "s": 7908, "text": "The lifeboats stayed afloat, however, and thus the legend of the “Birkenhead drill” — the protocol that prioritizes women and children during maritime disasters — was born. — History" }, { "code": null, "e": 8392, "s": 8091, "text": "The Birkenhead Drill raises some thoughts to whether the presence of your children or wives will raise your survival rate. 
Therefore, we want to engineer the number of siblings/spouse (SibSp) and parents/children (Parch) who were aboard into whether each person comes with a family member— hasfamily." }, { "code": null, "e": 8516, "s": 8392, "text": "cleandf['familymembers']=cleandf.SibSp+cleandf.Parchhasfamily = (cleandf.familymembers>0)*1cleandf['hasfamily'] = hasfamily" }, { "code": null, "e": 8611, "s": 8516, "text": "Congrats! You have done your feature engineering. Now we can use these new features to analyse" }, { "code": null, "e": 8784, "s": 8611, "text": "There are so many analysis that we could engage with new cleaned dataset. Now, let us engage with a few questions. Feel free to access my Python Notebook for more analysis." }, { "code": null, "e": 8904, "s": 8784, "text": "Would survival rate differs by gender?Would survival rate differs by SES?Would survival rate differs by gender and SES?" }, { "code": null, "e": 8943, "s": 8904, "text": "Would survival rate differs by gender?" }, { "code": null, "e": 8979, "s": 8943, "text": "Would survival rate differs by SES?" }, { "code": null, "e": 9026, "s": 8979, "text": "Would survival rate differs by gender and SES?" }, { "code": null, "e": 9252, "s": 9026, "text": "Just a quick tip for data scientists, your role is to keep asking questions and answering them in statistical manners. This is not going to be an one time waterfall process but continuous and iterative. As what I would say..." }, { "code": null, "e": 9289, "s": 9252, "text": "This is just the tip of the iceberg!" }, { "code": null, "e": 9401, "s": 9289, "text": "maindf.groupby(['Survived','Sex']).count().Namemaindf.groupby(['Survived','Sex']).count().Name.plot(kind='bar')" }, { "code": null, "e": 9621, "s": 9401, "text": "In total, there are 342 survived and 549 non survived. Out of those survived (233 are female, 109 are male ) whereas non survived ( 81 are female, 468 are male). It seems that female is more likely to survive than male." }, { "code": null, "e": 9722, "s": 9621, "text": "We can use the cross tab to generate the counts clearly for two categorical features (SES, survival)" }, { "code": null, "e": 9924, "s": 9722, "text": "survived = pd.crosstab(index=cleandf.Survived, columns = cleandf.socioeconomicstatus,margins=True)survived.columns = ['lower','middle','upper','rowtotal']survived.index = ['died','survived','coltotal']" }, { "code": null, "e": 10121, "s": 9924, "text": "From a quick look, it seems that the SES greatly matters to the survivals — the upper class survived more than the lower class. However, let’s further test this hypothesis using Chi Square method." }, { "code": null, "e": 10483, "s": 10121, "text": "Let us try to understand what this means. The Chi2 Stat is the chi-square statistic and the degrees of freedom are the columns * rows. That means you have 6 degrees of freedom from 3 events of SES (lower, middle, upper) and 2 events of survival (died, survived). The larger the degrees of freedom, the more statistically significant it is given the differences." }, { "code": null, "e": 10738, "s": 10483, "text": "The p value will determine the significance of SES towards survival. We will reject our null hypothesis if our p value is below alpha (0.01). Since our p value is 6.25 and way above our alpha, we can say that this result is not statistically significant." }, { "code": null, "e": 10813, "s": 10738, "text": "Maybe we could include sex and observe is there is significant difference?" 
}, { "code": null, "e": 11000, "s": 10813, "text": "This crosstab allows us to generate a feature (SES,Sex) for a total of 6 possible events. It seems that there are big differences of survival in female high SES compared to male low SES." }, { "code": null, "e": 11130, "s": 11000, "text": "survivedcrosstabsex = pd.crosstab(index=cleandf.Survived, columns = [cleandf[‘socioeconomicstatus’],cleandf[‘Sex’]],margins=True)" }, { "code": null, "e": 11214, "s": 11130, "text": "Let’s do the same thing as before and insert this crosstab of values in a Chi2 Test" }, { "code": null, "e": 11566, "s": 11214, "text": "The p value is lower which shows it has greater statistical significance than if we just analyse using SES. However, the p value is still above our alpha (0.01). Therefore we still think SES and Gender does not have statistical significance to infer survival status. But we could see that it is close to statistical significance, which is a good sign!" }, { "code": null, "e": 11640, "s": 11566, "text": "For now we can explore these features with our machine learning model :)." }, { "code": null, "e": 11913, "s": 11640, "text": "Let’s model our finding with decision tree. A Decision Tree is a machine learning model that provides rule based classification on information gain. This provides a nice and sleek way to choose critical features and rules to best discriminate our data dependent variables." }, { "code": null, "e": 12126, "s": 11913, "text": "We will use our train data then prune the trees to avoid overfitting. Finally, we will use GraphViz library to visualize our tree as the following. As the code is long, feel free to skim it at my Python Notebook." }, { "code": null, "e": 12196, "s": 12126, "text": "From this graph, we can find the beauty of decision tree as followed:" }, { "code": null, "e": 12912, "s": 12196, "text": "Understanding Distribution and Profiles of Survivors: The array indicates [number of death, number of survivals]. In each node we can see different split of the array and the rule classification to branch out to the lower level nodes. We can find out that if sex = female (≤0.5), the array indicates more survivors [69,184]. We can follow with the leaf level nodes as well.Understanding Critical Features Separates Survivors: At the top level features, we see Sex and SES as our major features. However, as what we have discovered before, these features alone are not enough to be statistically significant. Therefore, probably the third level of Port of Embarkation and Age group might give us better significance." }, { "code": null, "e": 13286, "s": 12912, "text": "Understanding Distribution and Profiles of Survivors: The array indicates [number of death, number of survivals]. In each node we can see different split of the array and the rule classification to branch out to the lower level nodes. We can find out that if sex = female (≤0.5), the array indicates more survivors [69,184]. We can follow with the leaf level nodes as well." }, { "code": null, "e": 13629, "s": 13286, "text": "Understanding Critical Features Separates Survivors: At the top level features, we see Sex and SES as our major features. However, as what we have discovered before, these features alone are not enough to be statistically significant. Therefore, probably the third level of Port of Embarkation and Age group might give us better significance." }, { "code": null, "e": 13703, "s": 13629, "text": "Congratulations, you have just created your first machine learning model!" 
}, { "code": null, "e": 13846, "s": 13703, "text": "from sklearn.metrics import accuracy_score, log_losstrain_predictions = clftree.predict(X_test)acc = accuracy_score(y_test, train_predictions)" }, { "code": null, "e": 14001, "s": 13846, "text": "From here we will retrieve that our accuracy is 77.65%. This means out of 100 passengers, the model could answer 77 passengers’ survival status correctly." }, { "code": null, "e": 14146, "s": 14001, "text": "To evaluate this model further, we can add confusion matrix and ROC curve into our evaluation which I will cover more in my subsequent articles." }, { "code": null, "e": 14318, "s": 14146, "text": "You will generate the csv of your prediction using the following method and commands. We will name it as “titanic_submission_tree.csv” and save it in your local directory." }, { "code": null, "e": 14573, "s": 14318, "text": "def titanic_kaggle_submission(filename, predictions): submission = pd.DataFrame({‘PassengerId’:testdf[‘PassengerId’],’Survived’:predictions}) submission.to_csv(filename,index=False)titanic_kaggle_submission(\"titanic_submission_tree.csv\",predictions_tree)" }, { "code": null, "e": 14737, "s": 14573, "text": "Once you have submitted the titanic_submission_tree.csv. You will receive the result as following. In here, I got the rank of 6535 out of all the relevant entries." }, { "code": null, "e": 15452, "s": 14737, "text": "Hyper parameter tuning and Model Selection. Use gradient descent to increase the submission analysis. There are other submissions which greatly leverage on ensemble models such as Random Forest and XG Boost.Feature engineering: find out what other features that could be generated. For example, maybe parents with kids would have less likelihood to be saved. Or maybe people with the early alphabets would be saved first as their room allocations are ordered based on ticket type and alphabetical order? Creating our features based on questions will improve the accuracy of our analysis more.Experiment, experiment, and experiment: Have fun and keep tuning your findings. No ideas are too absurd in this challenge!" }, { "code": null, "e": 15660, "s": 15452, "text": "Hyper parameter tuning and Model Selection. Use gradient descent to increase the submission analysis. There are other submissions which greatly leverage on ensemble models such as Random Forest and XG Boost." }, { "code": null, "e": 16046, "s": 15660, "text": "Feature engineering: find out what other features that could be generated. For example, maybe parents with kids would have less likelihood to be saved. Or maybe people with the early alphabets would be saved first as their room allocations are ordered based on ticket type and alphabetical order? Creating our features based on questions will improve the accuracy of our analysis more." }, { "code": null, "e": 16169, "s": 16046, "text": "Experiment, experiment, and experiment: Have fun and keep tuning your findings. No ideas are too absurd in this challenge!" }, { "code": null, "e": 16293, "s": 16169, "text": "Congrats! You have submitted your first data analysis and Kaggle submission. Now, embark on your own Data Science Journey!!" }, { "code": null, "e": 16483, "s": 16293, "text": "In this article, we learnt one method to design an Exploratory Data Analysis (EDA) on the famous Titanic data. We learnt how to import, explore, clean, engineer, analyse, model, and submit." 
}, { "code": null, "e": 16534, "s": 16483, "text": "Apart from that, we also learnt useful techniques:" }, { "code": null, "e": 16779, "s": 16534, "text": "Explore → describe, plot, histogramClean → insert missing numerical value, removing irrelevant columnsEngineer → binning, labellingAnalyse → Survival Plots, Crosstab table, Chi2 TestModel → Decision Tree visualization, prediction and evaluation" }, { "code": null, "e": 16815, "s": 16779, "text": "Explore → describe, plot, histogram" }, { "code": null, "e": 16883, "s": 16815, "text": "Clean → insert missing numerical value, removing irrelevant columns" }, { "code": null, "e": 16913, "s": 16883, "text": "Engineer → binning, labelling" }, { "code": null, "e": 16965, "s": 16913, "text": "Analyse → Survival Plots, Crosstab table, Chi2 Test" }, { "code": null, "e": 17028, "s": 16965, "text": "Model → Decision Tree visualization, prediction and evaluation" }, { "code": null, "e": 17130, "s": 17028, "text": "I really hope this has been a great read and a source of inspiration for you to develop and innovate." }, { "code": null, "e": 17357, "s": 17130, "text": "Please Comment out below to suggest and feedback. Just like you, I am still learning how to become a better Data Scientist and Engineer. Please help me improve so that I could help you better in my subsequent article releases." }, { "code": null, "e": 17387, "s": 17357, "text": "Thank you and Happy coding :)" }, { "code": null, "e": 17594, "s": 17387, "text": "Vincent Tatan is a Data and Technology enthusiast with relevant working experiences from Visa Inc. and Lazada to implement microservice architectures, business intelligence, and analytics pipeline projects." }, { "code": null, "e": 17759, "s": 17594, "text": "Vincent is a native Indonesian with a record of accomplishments in problem solving with strengths in Full Stack Development, Data Analytics, and Strategic Planning." }, { "code": null, "e": 17961, "s": 17759, "text": "He has been actively consulting SMU BI & Analytics Club, guiding aspiring data scientists and engineers from various backgrounds, and opening up his expertise for businesses to develop their products ." } ]
HTML <datalist> Tag
17 Mar, 2022 The <datalist> tag is used to provide an autocomplete feature in HTML files. It is used together with an input tag so that users can fill in form data quickly by selecting from a list of predefined options.Syntax: <datalist> ... </datalist> Example 1: The code below demonstrates the <datalist> tag. HTML <!DOCTYPE html><html><body> <form action=""> <label>Your Car's Name: </label> <input list="cars"> <!-- datalist tag starts here --> <datalist id="cars"> <option value="BMW"/> <option value="Bentley"/> <option value="Mercedes"/> <option value="Audi"/> <option value="Volkswagen"/> </datalist> <!-- datalist tag ends here --> </form></body></html> Output: Example 2: The value selected from the <datalist> can be read through its associated input element. HTML <!DOCTYPE html><html><body> <form action=""> <label>Your Car's Name: </label> <input list="cars" id="carsInput" /> <!-- datalist tag starts here --> <datalist id="cars"> <option value="BMW" /> <option value="Bentley" /> <option value="Mercedes" /> <option value="Audi" /> <option value="Volkswagen" /> </datalist> <!-- datalist tag ends here --> <button onclick="datalistcall()" type="button"> Click Here </button> </form> <p id="output"></p> <!-- Will display the selected option --> <script type="text/javascript"> function datalistcall() { var o1 = document.getElementById("carsInput").value; document.getElementById("output").innerHTML = "You selected the " + o1 + " option"; } </script></body></html> Output: Supported Browsers: Google Chrome 20.0 and above Internet Explorer 10.0 and above Firefox 4.0 and above Opera 9.0 and above Safari 11.1 arorakashish0911 shubhamyadav4 HTML-Tags HTML HTML
[ { "code": null, "e": 52, "s": 24, "text": "\n17 Mar, 2022" }, { "code": null, "e": 245, "s": 52, "text": "The <datalist> tag is used to provide autocomplete feature in the HTML files. It can be used with an input tag so that users can easily fill the data in the forms using select the data.Syntax:" }, { "code": null, "e": 272, "s": 245, "text": "<datalist> ... </datalist>" }, { "code": null, "e": 321, "s": 272, "text": "Example 1: The below code explains datalist Tag." }, { "code": null, "e": 326, "s": 321, "text": "HTML" }, { "code": "<!DOCTYPE html><html><body> <form action=\"\"> <label>Your Cars Name: </label> <input list=\"cars\"> <!--datalist Tag starts here --> <datalist id=\"cars\"> <option value=\"BMW\"/> <option value=\"Bentley\"/> <option value=\"Mercedes\"/> <option value=\"Audi\"/> <option value=\"Volkswagen\"/> </datalist> <!--datalist Tag ends here --> </form></body></html>", "e": 785, "s": 326, "text": null }, { "code": null, "e": 794, "s": 785, "text": "Output: " }, { "code": null, "e": 882, "s": 794, "text": "Example 2: The <datalist> tag object can be easily accessed by an input attribute type." }, { "code": null, "e": 887, "s": 882, "text": "HTML" }, { "code": "<!DOCTYPE html><html><body> <form action=\"\"> <label>Your Cars Name: </label> <input list=\"cars\" id=\"carsInput\" /> <!--datalist Tag starts here --> <datalist id=\"cars\"> <option value=\"BMW\" /> <option value=\"Bentley\" /> <option value=\"Mercedes\" /> <option value=\"Audi\" /> <option value=\"Volkswagen\" /> </datalist> <!--datalist Tag ends here --> <button onclick=\"datalistcall()\" type=\"button\"> Click Here </button> </form> <p id=\"output\"></p> <!-- Will display the select option --> <script type=\"text/javascript\"> function datalistcall() { var o1 = document.getElementById(\"carsInput\").value; document.getElementById(\"output\").innerHTML = \"You select \" + o1 + \" option\"; } </script></body></html>", "e": 1775, "s": 887, "text": null }, { "code": null, "e": 1783, "s": 1775, "text": "Output:" }, { "code": null, "e": 1804, "s": 1783, "text": "Supported Browsers: " }, { "code": null, "e": 1833, "s": 1804, "text": "Google Chrome 20.0 and above" }, { "code": null, "e": 1866, "s": 1833, "text": "Internet Explorer 10.0 and above" }, { "code": null, "e": 1888, "s": 1866, "text": "Firefox 4.0 and above" }, { "code": null, "e": 1908, "s": 1888, "text": "Opera 9.0 and above" }, { "code": null, "e": 1920, "s": 1908, "text": "Safari 11.1" }, { "code": null, "e": 1939, "s": 1922, "text": "arorakashish0911" }, { "code": null, "e": 1953, "s": 1939, "text": "shubhamyadav4" }, { "code": null, "e": 1963, "s": 1953, "text": "HTML-Tags" }, { "code": null, "e": 1968, "s": 1963, "text": "HTML" }, { "code": null, "e": 1973, "s": 1968, "text": "HTML" } ]
Waits in Selenium Python
08 Jun, 2020 Selenium Python is one of the great tools for test automation. These days most web apps use AJAX techniques. When a page is loaded by the browser, the elements within that page may load at different time intervals. This makes locating elements difficult: if an element is not yet present in the DOM, a locate function will raise an ElementNotVisibleException. Using waits, we can solve this issue. Waiting provides some slack between performed actions, mostly locating an element or any other operation on the element. Selenium Webdriver provides two types of waits – Implicit Waits Explicit Waits An implicit wait tells WebDriver to poll the DOM for a certain amount of time when trying to find any element (or elements) not immediately available. The default setting is 0. Once set, the implicit wait is set for the life of the WebDriver object. Let's consider an example – # import webdriver from selenium import webdriver driver = webdriver.Firefox() # set implicit wait time driver.implicitly_wait(10) # seconds # get url driver.get("http://somedomain/url_that_delays_loading") # get element after 10 seconds myDynamicElement = driver.find_element_by_id("myDynamicElement") This waits up to 10 seconds before throwing a TimeoutException unless it finds the element to return within 10 seconds. To check out how to practically implement implicit waits in Webdriver, see Implicit waits in Selenium Python. An explicit wait is code you define to wait for a certain condition to occur before proceeding further in the code. The extreme case of this is time.sleep(), which sets the condition to an exact time period to wait. There are some convenience methods provided that help you write code that will wait only as long as required. Explicit waits are achieved by using the WebDriverWait class in combination with expected_conditions. Let's consider an example – # import necessary classes from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # create driver object driver = webdriver.Firefox() # A URL that delays loading driver.get("http://somedomain/url_that_delays_loading") try: # wait 10 seconds before looking for element element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "myDynamicElement")) ) finally: # else quit driver.quit() This waits up to 10 seconds before throwing a TimeoutException unless it finds the element to return within 10 seconds. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns successfully. Expected Conditions – There are some common conditions that are frequently of use when automating web browsers, for example presence_of_element_located, title_is, and so on. One can check the full list of methods here – Convenience Methods. Some of them are – title_is title_contains presence_of_element_located visibility_of_element_located visibility_of presence_of_all_elements_located element_located_to_be_selected element_selection_state_to_be element_located_selection_state_to_be alert_is_present To check out how to practically implement explicit waits in Webdriver, see Explicit waits in Selenium Python. Python-selenium selenium Python
[ { "code": null, "e": 53, "s": 25, "text": "\n08 Jun, 2020" }, { "code": null, "e": 650, "s": 53, "text": "Selenium Python is one of the great tools for testing automation. These days most of the web apps are using AJAX techniques. When a page is loaded by the browser, the elements within that page may load at different time intervals. This makes locating elements difficult: if an element is not yet present in the DOM, a locate function will raise an ElementNotVisibleException exception. Using waits, we can solve this issue. Waiting provides some slack between actions performed – mostly locating an element or any other operation with the element. Selenium Webdriver provides two types of waits –" }, { "code": null, "e": 665, "s": 650, "text": "Implicit Waits" }, { "code": null, "e": 680, "s": 665, "text": "Explicit Waits" }, { "code": null, "e": 958, "s": 680, "text": "An implicit wait tells WebDriver to poll the DOM for a certain amount of time when trying to find any element (or elements) not immediately available. The default setting is 0. Once set, the implicit wait is set for the life of the WebDriver object. Let’s consider an example –" }, { "code": "# import webdriverfrom selenium import webdriver driver = webdriver.Firefox() # set implicit wait timedriver.implicitly_wait(10) # seconds # get urldriver.get(\"http://somedomain / url_that_delays_loading\") # get element after 10 secondsmyDynamicElement = driver.find_element_by_id(\"myDynamicElement\")", "e": 1263, "s": 958, "text": null }, { "code": null, "e": 1497, "s": 1263, "text": "This waits up to 10 seconds before throwing a TimeoutException unless it finds the element to return within 10 seconds. To check out how to practically implement Implicit Waits in Webdriver, checkout Implicit waits in Selenium Python" }, { "code": null, "e": 1951, "s": 1497, "text": "An explicit wait is a code you define to wait for a certain condition to occur before proceeding further in the code. The extreme case of this is time.sleep(), which sets the condition to an exact time period to wait. There are some convenience methods provided that help you write code that will wait only as long as required. Explicit waits are achieved by using webdriverWait class in combination with expected_conditions. Let’s consider an example –" }, { "code": "# import necessary classesfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as EC # create driver object driver = webdriver.Firefox() # A URL that delays loadingdriver.get(\"http://somedomain / url_that_delays_loading\") try: # wait 10 seconds before looking for element element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, \"myDynamicElement\")) )finally: # else quit driver.quit()", "e": 2491, "s": 1951, "text": null }, { "code": null, "e": 2971, "s": 2491, "text": "This waits up to 10 seconds before throwing a TimeoutException unless it finds the element to return within 10 seconds. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns successfully.Expected Conditions –There are some common conditions that are frequently of use when automating web browsers. For example, presence_of_element_located, title_is, ad so on. one can check entire methods from here – Convenience Methods. 
Some of them are –" }, { "code": null, "e": 2980, "s": 2971, "text": "title_is" }, { "code": null, "e": 2995, "s": 2980, "text": "title_contains" }, { "code": null, "e": 3023, "s": 2995, "text": "presence_of_element_located" }, { "code": null, "e": 3053, "s": 3023, "text": "visibility_of_element_located" }, { "code": null, "e": 3067, "s": 3053, "text": "visibility_of" }, { "code": null, "e": 3100, "s": 3067, "text": "presence_of_all_elements_located" }, { "code": null, "e": 3131, "s": 3100, "text": "element_located_to_be_selected" }, { "code": null, "e": 3161, "s": 3131, "text": "element_selection_state_to_be" }, { "code": null, "e": 3199, "s": 3161, "text": "element_located_selection_state_to_be" }, { "code": null, "e": 3216, "s": 3199, "text": "alert_is_present" }, { "code": null, "e": 3330, "s": 3216, "text": "To check out how to practically implement Implicit Waits in Webdriver, checkout Explicit waits in Selenium Python" }, { "code": null, "e": 3346, "s": 3330, "text": "Python-selenium" }, { "code": null, "e": 3355, "s": 3346, "text": "selenium" }, { "code": null, "e": 3362, "s": 3355, "text": "Python" } ]
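To make the list of expected conditions above concrete, here is a small sketch exercising two more of them, title_contains and visibility_of_element_located. The URL and element id are the same placeholders the article's own examples use, not a real page.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")

try:
    # wait until the page title contains the given substring
    WebDriverWait(driver, 10).until(EC.title_contains("some title"))

    # wait until the element is not only present in the DOM but also visible
    element = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()

The difference between the two locator conditions is worth noting: presence_of_element_located succeeds as soon as the element exists in the DOM, while visibility_of_element_located additionally requires it to be displayed.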
GATE | GATE-CS-2006 | Question 32
28 Jun, 2021 Consider the following statements about the context free grammar G = {S → SS, S → ab, S → ba, S → Ε} I. G is ambiguous II. G produces all strings with equal number of a’s and b’s III. G can be accepted by a deterministic PDA. Which combination below expresses all the true statements about G? (A) I only (B) I and III only (C) II and III only (D) I, II and III Answer: (B) Explanation: Statement I: G is ambiguous because, as shown in the image below, there are two distinct derivation trees for the string ababab [TRUE] Statement II: G produces all strings with equal number of a’s and b’s [FALSE] The string ‘aabb’ has an equal number of a’s and b’s but cannot be produced by G. Statement III: G can be accepted by a deterministic PDA [TRUE] Assume there is a PDA which pushes if the top of the stack is $ (the bottom-most symbol of the stack) and pops otherwise. A string is rejected while popping if the current letter and the top of the stack are the same. This PDA accepts exactly the language generated by G. Hence, the correct answer is (B) I and III only. Reference: Ambiguity in Context free Grammar and Context free Languages This solution is contributed by Vineet Purswani. Quiz of this Question GATE-CS-2006 GATE-GATE-CS-2006 GATE
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Jun, 2021" }, { "code": null, "e": 93, "s": 28, "text": "Consider the following statements about the context free grammar" }, { "code": null, "e": 254, "s": 93, "text": "G = {S → SS, S → ab, S → ba, S → Ε}\nI. G is ambiguous\nII. G produces all strings with equal number of a’s and b’s\nIII. G can be accepted by a deterministic PDA." }, { "code": null, "e": 410, "s": 254, "text": "Which combination below expresses all the true statements about G?(A) I only(B) I and III only(C) II and III only(D) I, II and IIIAnswer: (B)Explanation: " }, { "code": null, "e": 535, "s": 410, "text": "Statement I: G is ambiguous because, as shown in the image below there can be two decision tree for string S = ababab [TRUE]" }, { "code": null, "e": 613, "s": 535, "text": "Statement II: G produces all strings with equal number of a’s and b’s [FALSE]" }, { "code": null, "e": 651, "s": 613, "text": "string ‘aabb’ cannot be produced by G" }, { "code": null, "e": 714, "s": 651, "text": "Statement III: G can be accepted by a deterministic PDA [TRUE]" }, { "code": null, "e": 941, "s": 714, "text": "Assume there is a PDA which pushes if top of the stack is $ (bottom most alphabet of the stack) and pops otherwise. A string is rejected while popping if the current letter and top of the stack are same. This PDA can derive G." }, { "code": null, "e": 992, "s": 941, "text": "Hence, correct answer should be (B) I and III only" }, { "code": null, "e": 1063, "s": 992, "text": "Reference:Ambiguity in Context free Grammar and Context free Languages" }, { "code": null, "e": 1133, "s": 1063, "text": "This solution is contributed by Vineet Purswani.Quiz of this Question" }, { "code": null, "e": 1146, "s": 1133, "text": "GATE-CS-2006" }, { "code": null, "e": 1164, "s": 1146, "text": "GATE-GATE-CS-2006" }, { "code": null, "e": 1169, "s": 1164, "text": "GATE" } ]
Java.io.PipedInputStream class in Java
13 Oct, 2021 Pipes in IO provide a link between two threads running in the JVM at the same time, so a pipe can act as either a source or a destination. A PipedInputStream is connected to a PipedOutputStream: data is written using the PipedOutputStream and read using the PipedInputStream. Using both ends of a pipe from a single thread is not recommended, as it may deadlock the thread. A pipe is said to be broken if the thread that was providing data bytes to the connected piped output stream is no longer alive. Declaration: public class PipedInputStream extends InputStream Constructor : PipedInputStream() : creates a PipedInputStream that is not yet connected. PipedInputStream(int pSize) : creates a PipedInputStream that is not yet connected, with the specified pipe size. PipedInputStream(PipedOutputStream outStream) : creates a PipedInputStream that is connected to the PipedOutputStream ‘outStream’. PipedInputStream(PipedOutputStream outStream, int pSize) : creates a PipedInputStream that is connected to the given PipedOutputStream, with the specified pipe size. Methods: int read(): Reads the next byte of data from this piped input stream. The value byte is returned as an int in the range 0 to 255. This method blocks until input data is available, the end of the stream is detected, or an exception is thrown. // Java program illustrating the working of read() method import java.io.*; public class NewClass { public static void main(String[] args) throws IOException { PipedInputStream geek_input = new PipedInputStream(); PipedOutputStream geek_output = new PipedOutputStream(); try { // Use of connect() : connecting geek_input with geek_output geek_input.connect(geek_output); // Use of read() method : geek_output.write(71); System.out.println("using read() : " + (char)geek_input.read()); geek_output.write(69); System.out.println("using read() : " + (char)geek_input.read()); geek_output.write(75); System.out.println("using read() : " + (char)geek_input.read()); } catch (IOException except) { except.printStackTrace(); } } } Output : using read() : G using read() : E using read() : K read(byte[] buffer, int offset, int maxlen) : java.io.PipedInputStream.read(byte[] buffer, int offset, int maxlen) reads up to maxlen bytes of data from the piped input stream into the buffer array. The method blocks until input is available, the end of the stream is reached, or an exception is thrown. Syntax : public int read(byte[] buffer, int offset, int maxlen) Parameters : buffer : the destination buffer into which the data is read offset : the start offset in the destination array ‘buffer’ maxlen : the maximum number of bytes to read Return : the total number of bytes read into the buffer, or -1 if the end of the stream has been reached Exception : -> IOException : if an IO error occurs. -> NullPointerException : if buffer is null. -> IndexOutOfBoundsException : if offset is negative, maxlen is negative, or maxlen > buffer.length - offset. receive(int byte) : java.io.PipedInputStream.receive(int byte) receives a byte of data. If no input is available, the method blocks. Syntax : protected void receive(int byte) Parameters : byte : the byte of data received Return : void Exception : -> IOException : if an IO error occurs or the pipe is broken. close() : java.io.PipedInputStream.close() closes the piped input stream and releases any allocated resources. Syntax : public void close() Parameters : -------------- Return : void Exception : -> IOException : if an IO error occurs. connect(PipedOutputStream source) : java.io.PipedInputStream.connect(PipedOutputStream source) connects the piped input stream to the ‘source’ piped output stream; if ‘source’ is already piped to some other stream, an IOException is thrown. Syntax : public void connect(PipedOutputStream source) Parameters : source : the piped output stream to be connected to Return : void Exception : -> IOException : if an IO error occurs. available() : java.io.PipedInputStream.available() returns the number of bytes that can be read from the input stream without blocking. Syntax : public int available() Parameters : ------------- Return : the number of bytes that can be read without blocking, or 0 if the stream has been closed by invoking its close() method Exception : -> IOException : if an IO error occurs. Java program explaining the working of PipedInputStream class methods : // Java program illustrating the working of PipedInputStream // connect(), read(byte[] buffer, int offset, int maxlen), // close(), available() import java.io.*; public class NewClass { public static void main(String[] args) throws IOException { PipedInputStream geek_input = new PipedInputStream(); PipedOutputStream geek_output = new PipedOutputStream(); try { // Use of connect() : connecting geek_input with geek_output geek_input.connect(geek_output); geek_output.write(71); geek_output.write(69); geek_output.write(69); geek_output.write(75); geek_output.write(83); // Use of available() : System.out.println("Use of available() : " + geek_input.available()); // Use of read(byte[] buffer, int offset, int maxlen) : byte[] buffer = new byte[5]; // destination 'buffer' geek_input.read(buffer, 0, 5); String str = new String(buffer); System.out.println("Using read(buffer, offset, maxlen) : " + str); // Use of close() method : System.out.println("Closing the stream"); geek_input.close(); } catch (IOException except) { except.printStackTrace(); } } } Output: Use of available() : 5 Using read(buffer, offset, maxlen) : GEEKS Closing the stream Next Article: Java.io.PipedOutputStream class in Java This article is contributed by Mohit Gupta. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. gulshankumarar231 Java-I/O Java Java
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Oct, 2021" }, { "code": null, "e": 159, "s": 28, "text": "Pipes in IO provides a link between two threads running in JVM at the same time. So, Pipes are used both as source or destination." }, { "code": null, "e": 385, "s": 159, "text": "PipedInputStream is also piped with PipedOutputStream. So, data can be written using PipedOutputStream and can be written using PipedInputStream.But, using both threads at the same time will create a deadlock for the threads." }, { "code": null, "e": 512, "s": 385, "text": "A pipe is said to be broken if a thread that was providing data bytes to the connected piped output stream is no longer alive." }, { "code": null, "e": 525, "s": 512, "text": "Declaration:" }, { "code": null, "e": 577, "s": 525, "text": "public class PipedInputStream\n extends InputStream" }, { "code": null, "e": 591, "s": 577, "text": "Constructor :" }, { "code": null, "e": 666, "s": 591, "text": "PipedInputStream() : creates a PipedInputStream, that it is not connected." }, { "code": null, "e": 775, "s": 666, "text": "PipedInputStream(int pSize) : creates a PipedInputStream, that it is not connected with specified pipe size." }, { "code": null, "e": 908, "s": 775, "text": "PipedInputStream(PipedOutputStream outStream) : creates a PipedInputStream, that it is connected to PipedOutputStream – ‘outStream’." }, { "code": null, "e": 1067, "s": 908, "text": "PipedInputStream(PipedOutputStream outStream, int pSize) : creates a Piped Input Stream that is connected to Piped Output Stream with the specified pipe size." }, { "code": null, "e": 1076, "s": 1067, "text": "Methods:" }, { "code": null, "e": 2273, "s": 1076, "text": "int read(): Reads the next byte of data from this piped input stream.The value byte is returned as an int in the range 0 to 255. 
This method blocks until input data is available, the end of the stream is detected, or an exception is thrown.// Java program illustrating the working of read() methodimport java.io.*;public class NewClass{ public static void main(String[] args) throws IOException { PipedInputStream geek_input = new PipedInputStream(); PipedOutputStream geek_output = new PipedOutputStream(); try { // Use of connect() : connecting geek_input with geek_output geek_input.connect(geek_output); // Use of read() method : geek_output.write(71); System.out.println(\"using read() : \" + (char)geek_input.read()); geek_output.write(69); System.out.println(\"using read() : \" + (char)geek_input.read()); geek_output.write(75); System.out.println(\"using read() : \" + (char)geek_input.read()); } catch (IOException except) { except.printStackTrace(); } }}Output :using read() : G\nusing read() : E\nusing read() : K" }, { "code": "// Java program illustrating the working of read() methodimport java.io.*;public class NewClass{ public static void main(String[] args) throws IOException { PipedInputStream geek_input = new PipedInputStream(); PipedOutputStream geek_output = new PipedOutputStream(); try { // Use of connect() : connecting geek_input with geek_output geek_input.connect(geek_output); // Use of read() method : geek_output.write(71); System.out.println(\"using read() : \" + (char)geek_input.read()); geek_output.write(69); System.out.println(\"using read() : \" + (char)geek_input.read()); geek_output.write(75); System.out.println(\"using read() : \" + (char)geek_input.read()); } catch (IOException except) { except.printStackTrace(); } }}", "e": 3172, "s": 2273, "text": null }, { "code": null, "e": 3181, "s": 3172, "text": "Output :" }, { "code": null, "e": 3232, "s": 3181, "text": "using read() : G\nusing read() : E\nusing read() : K" }, { "code": null, "e": 4127, "s": 3232, "text": "read(byte[] buffer, int offset, int maxlen) : java.io.PipedInputStream.read(byte[] buffer, int offset, int maxlen) reads upto maxlen bytes of the data from Piped Input Stream to the array of buffers. The method blocks if end of Stream is reached or exception is thrown.Syntax :public int read(byte[] buffer, int offset, int maxlen)\nParameters : \nbuffer : the destination buffer into which the data is to be read\noffset : starting in the destination array - 'buffer'.\nmaxlen : maximum length of array to be read\nReturn : \nnext 'maxlen' bytes of the data as an integer value \nreturn -1 is end of stream is reached\nException :\n-> IOException : if in case IO error occurs.\n-> NullPointerException : if buffer is null.\n-> IndexOutOfBoundsException : if offset is -ve or \n maxlen is -ve or maxlen > buffer.length - offset.\n" }, { "code": null, "e": 4745, "s": 4127, "text": "public int read(byte[] buffer, int offset, int maxlen)\nParameters : \nbuffer : the destination buffer into which the data is to be read\noffset : starting in the destination array - 'buffer'.\nmaxlen : maximum length of array to be read\nReturn : \nnext 'maxlen' bytes of the data as an integer value \nreturn -1 is end of stream is reached\nException :\n-> IOException : if in case IO error occurs.\n-> NullPointerException : if buffer is null.\n-> IndexOutOfBoundsException : if offset is -ve or \n maxlen is -ve or maxlen > buffer.length - offset.\n" }, { "code": null, "e": 5113, "s": 4745, "text": "receive(int byte) : java.io.PipedInputStream.receive(int byte) receives byte of the data. 
If no input is available, then the method blocks.Syntax :protected void receive(int byte)\nParameters : \nbyte : the bytes of the data received\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs or pipe is broken." }, { "code": null, "e": 5334, "s": 5113, "text": "protected void receive(int byte)\nParameters : \nbyte : the bytes of the data received\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs or pipe is broken." }, { "code": null, "e": 5619, "s": 5334, "text": "close() : java.io.PipedInputStream.close() closes the Piped Input Stream and releases the allocated resources.Syntax :public void close()\nParameters : \n--------------\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs." }, { "code": null, "e": 5786, "s": 5619, "text": "public void close()\nParameters : \n--------------\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs." }, { "code": null, "e": 6263, "s": 5786, "text": "connect(PipedOutputStream source) : java.io.PipedInputStream.connect(PipedOutputStream source) connects the Piped Input Stream to the ‘source’ Piped Output Stream and in case ‘source’ is pipes with some other stream, IO exception is thrownSyntax :public void connect(PipedOutputStream source)\nParameters : \nsource : the Piped Output Stream to be connected to\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs." }, { "code": null, "e": 6493, "s": 6263, "text": "public void connect(PipedOutputStream source)\nParameters : \nsource : the Piped Output Stream to be connected to\nReturn : \nvoid\nException :\n-> IOException : if in case IO error occurs." }, { "code": null, "e": 6949, "s": 6493, "text": "available() : java.io.PipedInputStream.available() returns no. of bytes that can be read from Input Stream without actually being blocked.Syntax :public int available()\nParameters : \n-------------\nReturn : \nno. of bytes that can be read from Input Stream without actually being blocked.\n0, if the stream is already closed but by invoking close() method\nException :\n-> IOException : if in case IO error occurs." }, { "code": null, "e": 7259, "s": 6949, "text": "public int available()\nParameters : \n-------------\nReturn : \nno. of bytes that can be read from Input Stream without actually being blocked.\n0, if the stream is already closed but by invoking close() method\nException :\n-> IOException : if in case IO error occurs." 
}, { "code": null, "e": 7331, "s": 7259, "text": "Java program explaining the working of PipedInputStream class methods :" }, { "code": "// Java program illustrating the working of PipedInputStream// connect(), read(byte[] buffer, int offset, int maxlen),// close(), available() import java.io.*;public class NewClass{ public static void main(String[] args) throws IOException { PipedInputStream geek_input = new PipedInputStream(); PipedOutputStream geek_output = new PipedOutputStream(); try { // Use of connect() : connecting geek_input with geek_output geek_input.connect(geek_output); geek_output.write(71); geek_output.write(69); geek_output.write(69); geek_output.write(75); geek_output.write(83); // Use of available() : System.out.println(\"Use of available() : \" + geek_input.available()); // Use of read(byte[] buffer, int offset, int maxlen) : byte[] buffer = new byte[5]; // destination 'buffer' geek_input.read(buffer, 0, 5); String str = new String(buffer); System.out.println(\"Using read(buffer, offset, maxlen) : \" + str); // USe of close() method : System.out.println(\"Closing the stream\"); geek_input.close(); } catch (IOException except) { except.printStackTrace(); } }}", "e": 8671, "s": 7331, "text": null }, { "code": null, "e": 8679, "s": 8671, "text": "Output:" }, { "code": null, "e": 8764, "s": 8679, "text": "Use of available() : 5\nUsing read(buffer, offset, maxlen) : GEEKS\nClosing the stream" }, { "code": null, "e": 8818, "s": 8764, "text": "Next Article: Java.io.PipedOutputStream class in Java" }, { "code": null, "e": 9114, "s": 8818, "text": "This article is contributed by Mohit Gupta . If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 9239, "s": 9114, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 9257, "s": 9239, "text": "gulshankumarar231" }, { "code": null, "e": 9266, "s": 9257, "text": "Java-I/O" }, { "code": null, "e": 9271, "s": 9266, "text": "Java" }, { "code": null, "e": 9276, "s": 9271, "text": "Java" } ]
Klee’s Algorithm (Length Of Union Of Segments of a line)
16 Jun, 2022 Given the starting and ending positions of segments on a line, the task is to take the union of all the given segments and find the total length covered by them.Examples: Input : segments[] = {{2, 5}, {4, 8}, {9, 12}} Output : 9 Explanation: segment 1 = {2, 5} segment 2 = {4, 8} segment 3 = {9, 12} If we take the union of all the line segments, we cover the distances [2, 8] and [9, 12]. The sum of these two distances is 9 (6 + 3) Approach: The algorithm was proposed by Klee in 1977. The time complexity of the algorithm is O(N log N). It has been proven that this algorithm is asymptotically the fastest, and the problem cannot be solved with a better complexity. Description : 1) Put all the coordinates of all the segments in an auxiliary array points[]. 2) Sort it on the value of the coordinates. 3) An additional condition for sorting – if there are equal coordinates, insert the one which is the left coordinate of a segment before a right one. 4) Now go through the entire array, keeping a counter “count” of currently overlapping segments. 5) If the count is greater than zero, the difference points[i] – points[i-1] is added to the result. 6) If the current element is a left end, we increase “count”; otherwise we reduce it. Illustration: Let’s take the example : segment 1 : (2,5) segment 2 : (4,8) segment 3 : (9,12) Counter = result = 0; n = number of segments = 3; for i=0, points[0] = {2, false} points[1] = {5, true} for i=1, points[2] = {4, false} points[3] = {8, true} for i=2, points[4] = {9, false} points[5] = {12, true} Therefore : points = {2, 5, 4, 8, 9, 12} {f, t, f, t, f, t} after applying sorting : points = {2, 4, 5, 8, 9, 12} {f, f, t, t, f, t} Now, for i=0, result = 0; Counter = 1; for i=1, result = 2; Counter = 2; for i=2, result = 3; Counter = 1; for i=3, result = 6; Counter = 0; for i=4, result = 6; Counter = 1; for i=5, result = 9; Counter = 0; Final answer = 9; C++ Java Python3 Javascript // C++ program to implement Klee's algorithm #include<bits/stdc++.h> using namespace std; // Returns sum of lengths covered by union of given // segments int segmentUnionLength(const vector< pair <int,int> > &seg) { int n = seg.size(); // Create a vector to store starting and ending // points vector <pair <int, bool> > points(n * 2); for (int i = 0; i < n; i++) { points[i*2] = make_pair(seg[i].first, false); points[i*2 + 1] = make_pair(seg[i].second, true); } // Sorting all points by point value sort(points.begin(), points.end()); int result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) int Counter = 0; // Traverse through all points for (unsigned i=0; i<n*2; i++) { // If there are open points, then we add the // difference between previous and current point. // This is interesting as we don't check whether // current point is opening or closing. if (Counter) result += (points[i].first - points[i-1].first); // If this is an ending point, reduce the count of // open points. (points[i].second)?
Counter-- : Counter++; } return result;} // Driver program for the above codeint main(){ vector< pair <int,int> > segments; segments.push_back(make_pair(2, 5)); segments.push_back(make_pair(4, 8)); segments.push_back(make_pair(9, 12)); cout << segmentUnionLength(segments) << endl; return 0;} // Java program to implement Klee's algorithmimport java.io.*;import java.util.*; class GFG { // to use create a pair of segments static class SegmentPair { int x,y; SegmentPair(int xx, int yy){ this.x = xx; this.y = yy; } } //to create a pair of points static class PointPair{ int x; boolean isEnding; PointPair(int xx, boolean end){ this.x = xx; this.isEnding = end; } } // creates the comparator for comparing objects of PointPair class static class Comp implements Comparator<PointPair> { // override the compare() method public int compare(PointPair p1, PointPair p2) { if (p1.x < p2.x) { return -1; } else { if(p1.x == p2.x){ return 0; }else{ return 1; } } } } public static int segmentUnionLength(List<SegmentPair> segments){ int n = segments.size(); // Create a list to store // starting and ending points List<PointPair> points = new ArrayList<>(); for(int i = 0; i < n; i++){ points.add(new PointPair(segments.get(i).x,false)); points.add(new PointPair(segments.get(i).y,true)); } // Sorting all points by point value Collections.sort(points, new Comp()); int result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) int Counter = 0; // Traverse through all points for(int i = 0; i < 2 * n; i++) { // If there are open points, then we add the // difference between previous and current point. // This is interesting as we don't check whether // current point is opening or closing, if (Counter != 0) { result += (points.get(i).x - points.get(i-1).x); } // If this is an ending point, reduce, count of // open points. if(points.get(i).isEnding) { Counter--; } else { Counter++; } } return result; } // Driver Code public static void main (String[] args) { List<SegmentPair> segments = new ArrayList<>(); segments.add(new SegmentPair(2,5)); segments.add(new SegmentPair(4,8)); segments.add(new SegmentPair(9,12)); System.out.println(segmentUnionLength(segments)); }} // This code is contributed by shruti456rawal # Python program for the above approach def segmentUnionLength(segments): # Size of given segments list n = len(segments) # Initialize empty points container points = [None] * (n * 2) # Create a vector to store starting # and ending points for i in range(n): points[i * 2] = (segments[i][0], False) points[i * 2 + 1] = (segments[i][1], True) # Sorting all points by point value points = sorted(points, key=lambda x: x[0]) # Initialize result as 0 result = 0 # To keep track of counts of current open segments # (Starting point is processed, but ending point # is not) Counter = 0 # Traverse through all points for i in range(0, n * 2): # If there are open points, then we add the # difference between previous and current point. if (i > 0) & (points[i][0] > points[i - 1][0]) & (Counter > 0): result += (points[i][0] - points[i - 1][0]) # If this is an ending point, reduce, count of # open points. 
if points[i][1]: Counter -= 1 else: Counter += 1 return result # Driver code if __name__ == '__main__': segments = [(2, 5), (4, 8), (9, 12)] print(segmentUnionLength(segments)) // JavaScript program to implement Klee's algorithm // Returns sum of lengths covered by union of given // segments function segmentUnionLength(seg) { let n = seg.length; // Create a vector to store starting and ending // points let points = new Array(2*n); for (let i = 0; i < n; i++) { points[i*2] = [seg[i][0], false]; points[i*2 + 1] = [seg[i][1], true]; } // Sorting all points by point value points.sort(function(a, b){ return a[0] - b[0]; }); let result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) let Counter = 0; // Traverse through all points for (let i=0; i<n*2; i++) { // If there are open points, then we add the // difference between previous and current point. // This is interesting as we don't check whether // current point is opening or closing. if (Counter) result += (points[i][0] - points[i-1][0]); // If this is an ending point, reduce the count of // open points. if (points[i][1]) { Counter = Counter - 1; } else { Counter = Counter + 1; } } return result; } let segments = new Array(); segments.push([2, 5]); segments.push([4, 8]); segments.push([9, 12]); console.log(segmentUnionLength(segments)); // The code is contributed by Gautam goel (gautamgoel962) 9 Time Complexity : O(n * log n) Auxiliary Space : O(n) This article is contributed by Aarti_Rathi and Abhinandan Mittal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. Vikas Chitturi anikakapoor amartyaghoshgfg shruti456rawal gautamgoel962 codewithshinchan computer-graphics Geometric Technical Scripter Geometric
[ { "code": null, "e": 54, "s": 26, "text": "\n16 Jun, 2022" }, { "code": null, "e": 219, "s": 54, "text": "Given starting and ending positions of segments on a line, the task is to take the union of all given segments and find length covered by these segments.Examples: " }, { "code": null, "e": 476, "s": 219, "text": "Input : segments[] = {{2, 5}, {4, 8}, {9, 12}}\nOutput : 9 \nExplanation:\nsegment 1 = {2, 5}\nsegment 2 = {4, 8}\nsegment 3 = {9, 12}\nIf we take the union of all the line segments,\nwe cover distances [2, 8] and [9, 12]. Sum of \nthese two distances is 9 (6 + 3)" }, { "code": null, "e": 486, "s": 476, "text": "Approach:" }, { "code": null, "e": 716, "s": 486, "text": "The algorithm was proposed by Klee in 1977. The time complexity of the algorithm is O (N log N). It has been proven that this algorithm is the fastest (asymptotically) and this problem can not be solved with a better complexity. " }, { "code": null, "e": 1320, "s": 716, "text": "Description : 1) Put all the coordinates of all the segments in an auxiliary array points[]. 2) Sort it on the value of the coordinates. 3) An additional condition for sorting – if there are equal coordinates, insert the one which is the left coordinate of any segment instead of a right one. 4) Now go through the entire array, with the counter “count” of overlapping segments. 5) If the count is greater than zero, then the result is added to the difference between the points[i] – points[i-1]. 6) If the current element belongs to the left end, we increase “count”, otherwise reduce it.Illustration: " }, { "code": null, "e": 2088, "s": 1320, "text": "Lets take the example :\nsegment 1 : (2,5)\nsegment 2 : (4,8)\nsegment 3 : (9,12)\n\nCounter = result = 0;\nn = number of segments = 3;\n\nfor i=0, points[0] = {2, false}\n points[1] = {5, true}\nfor i=1, points[2] = {4, false}\n points[3] = {8, true}\nfor i=2, points[4] = {9, false}\n points[5] = {12, true}\n\nTherefore :\npoints = {2, 5, 4, 8, 9, 12}\n {f, t, f, t, f, t}\n\nafter applying sorting :\npoints = {2, 4, 5, 8, 9, 12}\n {f, f, t, t, f, t}\n\nNow,\nfor i=0, result = 0;\n Counter = 1;\n\nfor i=1, result = 2;\n Counter = 2;\n\nfor i=2, result = 3;\n Counter = 1;\n\nfor i=3, result = 6;\n Counter = 0;\n\nfor i=4, result = 6;\n Counter = 1;\n\nfor i=5, result = 9;\n Counter = 0;\n\nFinal answer = 9;" }, { "code": null, "e": 2092, "s": 2088, "text": "C++" }, { "code": null, "e": 2097, "s": 2092, "text": "Java" }, { "code": null, "e": 2105, "s": 2097, "text": "Python3" }, { "code": null, "e": 2116, "s": 2105, "text": "Javascript" }, { "code": "// C++ program to implement Klee's algorithm#include<bits/stdc++.h>using namespace std; // Returns sum of lengths covered by union of given// segmentsint segmentUnionLength(const vector< pair <int,int> > &seg){ int n = seg.size(); // Create a vector to store starting and ending // points vector <pair <int, bool> > points(n * 2); for (int i = 0; i < n; i++) { points[i*2] = make_pair(seg[i].first, false); points[i*2 + 1] = make_pair(seg[i].second, true); } // Sorting all points by point value sort(points.begin(), points.end()); int result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) int Counter = 0; // Traverse through all points for (unsigned i=0; i<n*2; i++) { // If there are open points, then we add the // difference between previous and current point. 
// This is interesting as we don't check whether // current point is opening or closing, if (Counter) result += (points[i].first - points[i-1].first); // If this is an ending point, reduce, count of // open points. (points[i].second)? Counter-- : Counter++; } return result;} // Driver program for the above codeint main(){ vector< pair <int,int> > segments; segments.push_back(make_pair(2, 5)); segments.push_back(make_pair(4, 8)); segments.push_back(make_pair(9, 12)); cout << segmentUnionLength(segments) << endl; return 0;}", "e": 3728, "s": 2116, "text": null }, { "code": "// Java program to implement Klee's algorithmimport java.io.*;import java.util.*; class GFG { // to use create a pair of segments static class SegmentPair { int x,y; SegmentPair(int xx, int yy){ this.x = xx; this.y = yy; } } //to create a pair of points static class PointPair{ int x; boolean isEnding; PointPair(int xx, boolean end){ this.x = xx; this.isEnding = end; } } // creates the comparator for comparing objects of PointPair class static class Comp implements Comparator<PointPair> { // override the compare() method public int compare(PointPair p1, PointPair p2) { if (p1.x < p2.x) { return -1; } else { if(p1.x == p2.x){ return 0; }else{ return 1; } } } } public static int segmentUnionLength(List<SegmentPair> segments){ int n = segments.size(); // Create a list to store // starting and ending points List<PointPair> points = new ArrayList<>(); for(int i = 0; i < n; i++){ points.add(new PointPair(segments.get(i).x,false)); points.add(new PointPair(segments.get(i).y,true)); } // Sorting all points by point value Collections.sort(points, new Comp()); int result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) int Counter = 0; // Traverse through all points for(int i = 0; i < 2 * n; i++) { // If there are open points, then we add the // difference between previous and current point. // This is interesting as we don't check whether // current point is opening or closing, if (Counter != 0) { result += (points.get(i).x - points.get(i-1).x); } // If this is an ending point, reduce, count of // open points. if(points.get(i).isEnding) { Counter--; } else { Counter++; } } return result; } // Driver Code public static void main (String[] args) { List<SegmentPair> segments = new ArrayList<>(); segments.add(new SegmentPair(2,5)); segments.add(new SegmentPair(4,8)); segments.add(new SegmentPair(9,12)); System.out.println(segmentUnionLength(segments)); }} // This code is contributed by shruti456rawal", "e": 6087, "s": 3728, "text": null }, { "code": "# Python program for the above approach def segmentUnionLength(segments): # Size of given segments list n = len(segments) # Initialize empty points container points = [None] * (n * 2) # Create a vector to store starting # and ending points for i in range(n): points[i * 2] = (segments[i][0], False) points[i * 2 + 1] = (segments[i][1], True) # Sorting all points by point value points = sorted(points, key=lambda x: x[0]) # Initialize result as 0 result = 0 # To keep track of counts of current open segments # (Starting point is processed, but ending point # is not) Counter = 0 # Traverse through all points for i in range(0, n * 2): # If there are open points, then we add the # difference between previous and current point. if (i > 0) & (points[i][0] > points[i - 1][0]) & (Counter > 0): result += (points[i][0] - points[i - 1][0]) # If this is an ending point, reduce, count of # open points. 
if points[i][1]: Counter -= 1 else: Counter += 1 return result # Driver codeif __name__ == '__main__': segments = [(2, 5), (4, 8), (9, 12)] print(segmentUnionLength(segments))", "e": 7377, "s": 6087, "text": null }, { "code": "// JavaScript program to implement Klee's algorithm // Returns sum of lengths covered by union of given// segmentsfunction segmentUnionLength(seg){ let n = seg.length; // Create a vector to store starting and ending // points let points = new Array(2*n); for (let i = 0; i < n; i++) { points[i*2] = [seg[i][0], false]; points[i*2 + 1] = [seg[i][1], true]; } // Sorting all points by point value points.sort(function(a, b){ return a[0] - b[0]; }); let result = 0; // Initialize result // To keep track of counts of // current open segments // (Starting point is processed, // but ending point // is not) let Counter = 0; // Traverse through all points for (let i=0; i<n*2; i++) { // If there are open points, then we add the // difference between previous and current point. // This is interesting as we don't check whether // current point is opening or closing, if (Counter) result += (points[i][0] - points[i-1][0]); // If this is an ending point, reduce, count of // open points. if(points[i][1]){ Counter = Counter - 1; } else{ Counter = Counter + 1; } } return result;} let segments = new Array();segments.push([2, 5]);segments.push([4, 8]);segments.push([9, 12]);console.log(segmentUnionLength(segments)); // The code is contributed by Gautam goel (gautamgoel962)", "e": 8854, "s": 7377, "text": null }, { "code": null, "e": 8856, "s": 8854, "text": "9" }, { "code": null, "e": 8908, "s": 8856, "text": "Time Complexity : O(n * log n)Auxiliary Space: O(n)" }, { "code": null, "e": 9349, "s": 8908, "text": "This article is contributed by Aarti_Rathi and Abhinandan Mittal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 9364, "s": 9349, "text": "Vikas Chitturi" }, { "code": null, "e": 9376, "s": 9364, "text": "anikakapoor" }, { "code": null, "e": 9392, "s": 9376, "text": "amartyaghoshgfg" }, { "code": null, "e": 9407, "s": 9392, "text": "shruti456rawal" }, { "code": null, "e": 9421, "s": 9407, "text": "gautamgoel962" }, { "code": null, "e": 9438, "s": 9421, "text": "codewithshinchan" }, { "code": null, "e": 9456, "s": 9438, "text": "computer-graphics" }, { "code": null, "e": 9466, "s": 9456, "text": "Geometric" }, { "code": null, "e": 9485, "s": 9466, "text": "Technical Scripter" }, { "code": null, "e": 9495, "s": 9485, "text": "Geometric" } ]
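A cheap way to gain confidence in the sweep is to cross-check it against a brute force on small integer inputs. The harness below is an illustrative sketch: counting unit cells works here because the sample coordinates are integers, and it assumes the Python segmentUnionLength defined above is in scope.

import random

def brute_force_union_length(segments):
    covered = set()
    for start, end in segments:
        covered.update(range(start, end))   # unit cell [x, x+1)
    return len(covered)

# Same sample as the article: expect 9
print(brute_force_union_length([(2, 5), (4, 8), (9, 12)]))

# Random cross-check against the O(n log n) sweep
for _ in range(100):
    segs = [tuple(sorted(random.sample(range(50), 2))) for _ in range(5)]
    assert brute_force_union_length(segs) == segmentUnionLength(segs)

The brute force is quadratic in the coordinate range, so it only serves as a test oracle, but agreement on random inputs is a strong sign the sweep handles overlaps and touching endpoints correctly.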
Windows Anonymous Pipe
Windows anonymous pipes are actually Ordinary pipes, and they behave similarly to their UNIX counterparts: they are unidirectional and employ parent-child relationships between the communicating processes. In addition, reading and writing to the pipe can be accomplished with the ordinary ReadFile() and WriteFile() functions. The Windows API use CreatePipe() function for creating pipes, which is passed four parameters. The parameters provide separate handles for reading and reading and writing to the pipe writing to the pipe An instance of the STARTUPINFO structure, used to specify that the child process is to inherit the handles of the pipe. An instance of the STARTUPINFO structure, used to specify that the child process is to inherit the handles of the pipe. the size (in Bytes) of the pipe may be specified. the size (in Bytes) of the pipe may be specified. Windows requires the programmer to specify which attributes the child process will inherit, Unlike UNIX systems. This is accomplished by first initializing the SECURITY ATTRIBUTES structure allowing handles to be inherited and then redirecting the child process’s handles for standard input or standard output to the read or write handle of the pipe. As the child will be reading from the pipe, the parent must redirect the child’s standard input to the read handle of the pipe. As the pipes are half duplex, it is required to prohibit the child from inheriting the write-end of the pipe. In the below code, we can see a parent process creating an anonymous pipe for communicating with its child − #include<stdio.h> #include<stdlib.h> #include<windows.h> #define BUFFER SIZE 25 int main(VOID) { HANDLE ReadHandle, WriteHandle; STARTUPINFO si; PROCESS INFORMATION pi; char message[BUFFER SIZE] = "Greetings"; DWORD written; /* set up security attributes to allow pipes to be inherited */ SECURITY ATTRIBUTES sa = {sizeof(SECURITY ATTRIBUTES), NULL, TRUE}; /* allocate memory */ ZeroMemory(π, sizeof(pi)); /* create the pipe */ if (!CreatePipe(&ReadHandle, &WriteHandle, &sa, 0)) { fprintf(stderr, "Create Pipe Failed"); return 1; } /* establishing the START INFO structure for the child process*/ GetStartupInfo(&si); si.hStdOutput = GetStdHandle(STD OUTPUT HANDLE); /* redirecting standard input to the read end of the pipe */ si.hStdInput = ReadHandle; si.dwFlags = STARTF USESTDHANDLES; /* don’t allow the child inheriting the write end of pipe */ SetHandleInformation(WriteHandle, HANDLE FLAG INHERIT, 0); /* create the child process */ CreateProcess(NULL, "child.exe", NULL, NULL, TRUE, /* inherit handles */ 0, NULL, NULL, &si, π); /* close the unused end of the pipe */ CloseHandle(ReadHandle); /* the parent writes to the pipe */ if(!WriteFile(WriteHandle, message, BUFFER SIZE, &written, NULL)) fprintf(stderr, "Error writing to pipe."); /* close the write end of the pipe */ CloseHandle(WriteHandle); /* wait for the child to exit */ WaitForSingleObject(pi.hProcess,INFINITE); CloseHandle(pi.hProcess); CloseHandle(pi.hThread); return 0; } The parent first closes its unused read end of the pipe, before writing to the pipe. 
The child process, which reads from the pipe, is shown in the code below −

#include <stdio.h>
#include <windows.h>
#define BUFFER_SIZE 25

int main(VOID) {
   HANDLE ReadHandle;
   CHAR buffer[BUFFER_SIZE];
   DWORD read;

   /* get the read handle of the pipe */
   ReadHandle = GetStdHandle(STD_INPUT_HANDLE);

   /* the child reads from the pipe */
   if (ReadFile(ReadHandle, buffer, BUFFER_SIZE, &read, NULL))
      printf("child read %s", buffer);
   else
      fprintf(stderr, "Error reading from pipe");
   return 0;
}
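For comparison, here is a minimal, cross-platform sketch of the same parent/child pattern in Python (an illustration only, not part of the original Win32 example): the parent connects the child's standard input to the read end of a pipe, writes a message, closes the write end, and waits for the child to exit.

# Illustration using Python's subprocess module; the child is an inline
# Python one-liner that reads from its redirected standard input.
import subprocess
import sys

# Spawn a child whose standard input is the read end of a pipe.
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; print('child read', sys.stdin.read())"],
    stdin=subprocess.PIPE,
)

child.stdin.write(b"Greetings")  # the parent writes to the pipe
child.stdin.close()              # close the write end so the child sees EOF
child.wait()                     # wait for the child to exit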
[ { "code": null, "e": 1528, "s": 1062, "text": "Windows anonymous pipes are actually Ordinary pipes, and they behave similarly to their UNIX counterparts: they are unidirectional and employ parent-child relationships between the communicating processes. In addition, reading and writing to the pipe can be accomplished with the ordinary ReadFile() and WriteFile() functions. The Windows API use CreatePipe() function for creating pipes, which is passed four parameters. The parameters provide separate handles for" }, { "code": null, "e": 1540, "s": 1528, "text": "reading and" }, { "code": null, "e": 1552, "s": 1540, "text": "reading and" }, { "code": null, "e": 1572, "s": 1552, "text": "writing to the pipe" }, { "code": null, "e": 1592, "s": 1572, "text": "writing to the pipe" }, { "code": null, "e": 1712, "s": 1592, "text": "An instance of the STARTUPINFO structure, used to specify that the child process is to inherit the handles of the pipe." }, { "code": null, "e": 1832, "s": 1712, "text": "An instance of the STARTUPINFO structure, used to specify that the child process is to inherit the handles of the pipe." }, { "code": null, "e": 1882, "s": 1832, "text": "the size (in Bytes) of the pipe may be specified." }, { "code": null, "e": 1932, "s": 1882, "text": "the size (in Bytes) of the pipe may be specified." }, { "code": null, "e": 2521, "s": 1932, "text": "Windows requires the programmer to specify which attributes the child process will inherit, Unlike UNIX systems. This is accomplished by first initializing the SECURITY ATTRIBUTES structure allowing handles to be inherited and then redirecting the child process’s handles for standard input or standard output to the read or write handle of the pipe. As the child will be reading from the pipe, the parent must redirect the child’s standard input to the read handle of the pipe. As the pipes are half duplex, it is required to prohibit the child from inheriting the write-end of the pipe." 
}, { "code": null, "e": 2630, "s": 2521, "text": "In the below code, we can see a parent process creating an anonymous pipe for communicating with its child −" }, { "code": null, "e": 4182, "s": 2630, "text": "#include<stdio.h>\n#include<stdlib.h>\n#include<windows.h>\n#define BUFFER SIZE 25\nint main(VOID) {\n HANDLE ReadHandle, WriteHandle;\n STARTUPINFO si;\n PROCESS INFORMATION pi;\n char message[BUFFER SIZE] = \"Greetings\";\n DWORD written;\n /* set up security attributes to allow pipes to be inherited */\n SECURITY ATTRIBUTES sa = {sizeof(SECURITY ATTRIBUTES), NULL, TRUE};\n /* allocate memory */\n ZeroMemory(π, sizeof(pi));\n /* create the pipe */\n if (!CreatePipe(&ReadHandle, &WriteHandle, &sa, 0)) {\n fprintf(stderr, \"Create Pipe Failed\"); return 1; }\n /* establishing the START INFO structure for the child process*/\n GetStartupInfo(&si);\n si.hStdOutput = GetStdHandle(STD OUTPUT HANDLE);\n /* redirecting standard input to the read end of the pipe */\n si.hStdInput = ReadHandle;\n si.dwFlags = STARTF USESTDHANDLES;\n /* don’t allow the child inheriting the write end of pipe */\n SetHandleInformation(WriteHandle, HANDLE FLAG INHERIT, 0);\n /* create the child process */\n CreateProcess(NULL, \"child.exe\", NULL, NULL, TRUE, /* inherit handles */ 0, NULL, NULL, &si, π);\n /* close the unused end of the pipe */ CloseHandle(ReadHandle);\n /* the parent writes to the pipe */\n if(!WriteFile(WriteHandle, message, BUFFER SIZE, &written, NULL))\n fprintf(stderr, \"Error writing to pipe.\");\n /* close the write end of the pipe */ CloseHandle(WriteHandle);\n /* wait for the child to exit */ WaitForSingleObject(pi.hProcess,INFINITE); \n CloseHandle(pi.hProcess);\n CloseHandle(pi.hThread);\n return 0;\n}" }, { "code": null, "e": 4336, "s": 4182, "text": "The parent first closes its unused read end of the pipe, before writing to the pipe. The child process which reads from the pipe is shown in below code −" }, { "code": null, "e": 4786, "s": 4336, "text": "#include<stdio.h>\n#include<windows.h>\n#define BUFFER SIZE 25\nint main(VOID){\n HANDLE Readhandle;\n CHAR buffer[BUFFER SIZE];\n DWORD read;\n /* getting the read handle of the pipe */\n ReadHandle = GetStdHandle(STD INPUT HANDLE);\n /* the child reads from the pipe */\n if (ReadFile(ReadHandle, buffer, BUFFER SIZE, &read, NULL))\n printf(\"child read %s\", buffer);\n else\n fprintf(stderr, \"Error reading from pipe\");\n return 0;\n}" }, { "code": null, "e": 4825, "s": 4786, "text": "Windows anonymous pipe - child process" } ]
Introduction to BigQuery ML. Learn how to create machine learning... | by Pol Ferrando | Towards Data Science
A few months ago Google announced a new Google BigQuery feature called BigQuery ML, which is currently in Beta. It consists of a set of extensions of the SQL language that allow you to create machine learning models, evaluate their predictive performance and make predictions for new data directly in BigQuery.

One of the advantages of BigQuery ML (BQML) is that one only needs to know standard SQL in order to use it (without needing to use R or Python to train models), which makes machine learning more accessible. It even handles data transformation, training/test set splits, etc. In addition, it reduces the training time of models because it works directly where the data is stored (BigQuery) and, consequently, it is not necessary to export the data to other tools.

But not everything is an advantage. First of all, the implemented models are currently limited (although we will see that it offers some flexibility), which probably will always be the case due to the constraints of adapting to SQL. Secondly (and, in my opinion, more important), even though BQML makes model training easier, a person who is not familiar with machine learning can still have difficulties interpreting the model they have created, evaluating its performance and trying to improve it.

In this post, I will explain the main functions of BQML and how to use them to create our model, evaluate it and use it to make predictions. This process will consist of the following steps:

Create a dataset (optional)
Create a model
Model information (optional)
Evaluate the model
Predict

As with BigQuery (BQ) tables, the model must be saved in a data set, so first you have to decide in which data set you want to save the model: an existing one or a new one. If your case is the latter, creating a new data set is as simple as:

1. In the BQ interface, select the project in which you want to create the dataset and click on the "create data set" button.

2. Name the new data set and choose a location where the data will be stored and the expiration. You can find more information about these fields in this link.

In supervised machine learning, a training data set whose response variables are known is used to generate a model which captures the underlying patterns of the data so that it can predict the outcome of unseen data.

BQML allows you to do this process directly within BigQuery. Currently, it supports three types of models:

Linear regression. It is used to predict the result of a continuous numeric variable, such as income.

Binary logistic regression. It is used to predict the result of a categorical variable with two possible classes, such as when you want to determine whether a user will buy or not.

Multinomial logistic regression (or multiclass). It is used to predict the result of a categorical variable with more than two classes.
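For readers who know scikit-learn, these model types correspond roughly to the following estimators (an illustrative analogy only, not part of BQML itself; BQML expresses everything in SQL, as shown next):

# Rough scikit-learn analogues of the BQML model types (illustration only).
from sklearn.linear_model import LinearRegression, LogisticRegression

linear_model = LinearRegression()      # analogue of model_type='linear_reg'
logistic_model = LogisticRegression()  # analogue of model_type='logistic_reg'
# scikit-learn's LogisticRegression handles both binary and multiclass
# targets, mirroring BQML's binary and multinomial logistic regression.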
To create (and train) a model with BQML, the following syntax has to be used:

#standardSQL
{CREATE MODEL | CREATE MODEL IF NOT EXISTS | CREATE OR REPLACE MODEL}
`project.dataset.model_name`
OPTIONS(model_option_list)
AS query_statement

This query will create a model (CREATE MODEL) with the specified options (OPTIONS), using the result of a query (AS) as training data. We have to specify:

1) The name of the model and where it has to be saved. CREATE MODEL creates and trains the model (which will be saved with the name "model_name" inside the specified data set) as long as there is no model already created with the same name. If the model name exists, the query with CREATE MODEL will return an error. To avoid this error, we can use two alternatives:

CREATE MODEL IF NOT EXISTS, which creates and trains the model only if there is no model already created with the same name.

CREATE OR REPLACE MODEL, which creates the model (if it does not exist) or replaces it (if it exists) and trains it.

2) model_option_list. A list specifying some options related to the model and the training process. The format is as follows: option1 = value1, option2 = value2, ... The two most important options are:

model_type (mandatory): specifies the type of model we want to train: linear_reg for linear regression or logistic_reg for binary or multiclass logistic regression.

input_label_cols: specifies the column name of the table with the training data that contains the response variable. If the column is called label, this field is optional; if not, it must be specified as ['column_name'].

Although BigQuery ML has default options for model training, it offers some flexibility to choose options related to avoiding overfitting and to the optimization of model parameters. For example, we can apply L1 or L2 regularization, split the data into a training set and a validation set, or set the maximum number of iterations of the gradient descent. You can find all the available options in this link.

3) query_statement. The query that generates the table that will be used as training data. One of the advantages of BigQuery ML is that it is responsible for the data transformation needed for model training. In particular:

Categorical features (of type BOOL, STRING, BYTES, DATE, DATETIME or TIME) are one-hot encoded (i.e., converted into a binary variable for each class). Due to the problem known as multicollinearity, this is not recommended if you want to draw conclusions about the relationships between the features and the response variable.

Numerical features (of type NUMERIC, FLOAT64 or INT64) are standardized for both the training data and future predictions.

NULL values are replaced by the average in the case of numerical features or by a new class that groups all the missing values in the case of categorical features.

Regarding the response variable, it must be taken into account that:

In linear regression, it cannot have infinite or NaN values.

In binary logistic regression, it has to have exactly two possible values.

In multiclass logistic regression, it can have a maximum of 50 different categories.
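To make the automatic data transformation above concrete, here is a rough pandas sketch (an illustration only; the column names are hypothetical) of what BQML does for you behind the scenes:

# Roughly what BQML's automatic preprocessing corresponds to (illustration).
import pandas as pd

df = pd.DataFrame({
    "device": ["mobile", "desktop", None],  # categorical feature with a NULL
    "pageviews": [3, None, 10],             # numerical feature with a NULL
})

# Categorical: NULLs become their own class, then one-hot encoding.
one_hot = pd.get_dummies(df["device"].fillna("missing"), prefix="device")

# Numerical: NULLs are replaced by the mean, then the column is standardized.
pageviews = df["pageviews"].fillna(df["pageviews"].mean())
pageviews_std = (pageviews - pageviews.mean()) / pageviews.std()

print(one_hot)
print(pageviews_std)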
For example, let's imagine that we want to predict whether a web session will end up in a purchase depending on several features related to the user's browsing activity (number of page views, session duration, type of user, the device used and whether the traffic is paid or not). In case you want to follow this example, we will use the Google Analytics sample data set offered by BigQuery. To create the model, we would use the following query:

#standardSQL
CREATE MODEL `project.dataset.sample_model`
OPTIONS(model_type='logistic_reg',
        input_label_cols=['isBuyer'])
AS
SELECT
  IF(totals.transactions IS NULL, 0, 1) AS isBuyer,
  IFNULL(totals.pageviews, 0) AS pageviews,
  IFNULL(totals.timeOnSite, 0) AS timeOnSite,
  IFNULL(totals.newVisits, 0) AS isNewVisit,
  IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile,
  IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop,
  IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic
FROM
  `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE
  _TABLE_SUFFIX BETWEEN '20160801' AND '20170630'

Since our response variable is categorical with two classes (1 = "with purchase" or 0 = "without purchase"), we've had to specify in the options that the type of model is a logistic regression (logistic_reg). Also, note that the response variable is called "isBuyer", so we've had to specify that in the options too.

In linear models, each explanatory variable has an associated coefficient (or weight) that determines the relationship between this feature and the response variable. The greater its magnitude, the greater its impact on the response variable. Furthermore, a positive (negative) sign indicates that the response increases (decreases) when the value of this explanatory variable increases (or, in the case of categorical variables, when the category is present).

In BigQuery ML, we can get the weights of the trained model with the following query:

#standardSQL
SELECT *
FROM ML.WEIGHTS(MODEL `project.dataset.model_name`)

As mentioned before, if you don't convert the categorical variables into binary "manually" in your query (as we have done with isMobile and isDesktop, for instance), each possible category will have a weight and the reliability of the coefficients will be compromised due to multicollinearity.

For example, from the coefficients of the model we created in the previous section (coefficient table not reproduced here) we can read the following: being a session of a new user, using a mobile device or accessing the web through a paid channel decreases the probability of that visit ending in a purchase, while using a desktop device or spending more time on the site increases the probability of converting.
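To make the interpretation of the weights concrete, here is a small sketch of how a logistic regression turns weights into a predicted probability. The coefficient values below are made up for illustration; the real ones come from ML.WEIGHTS.

# Illustration with hypothetical numbers: logistic regression combines the
# weights linearly and squashes the result through a sigmoid.
import math

intercept = -2.0                                                      # hypothetical
weights = {"isNewVisit": -0.7, "isMobile": -0.9, "timeOnSite": 1.2}   # hypothetical
features = {"isNewVisit": 1, "isMobile": 1, "timeOnSite": 0.4}        # standardized inputs

z = intercept + sum(weights[k] * features[k] for k in weights)
probability_of_buying = 1 / (1 + math.exp(-z))  # sigmoid
print(round(probability_of_buying, 3))  # closer to 1 means "will buy"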
Once we have the trained model, we need to assess its predictive performance. This always has to be done on a test set different from the training set to avoid overfitting, which occurs when our model memorizes the patterns of the training data: it is then very precise on the training set but incapable of making good predictions on new data.

BQML provides several functions to evaluate our models:

ML.TRAINING_INFO. Provides information about the iterations during model training, including the loss in both the training and the validation set at each iteration. The expected result is that the loss decreases in both sets (ideally, to 0, which would mean that the model is always right).

ML.EVALUATE. Provides the most common metrics to assess the predictive performance of a model. This function can be used for any type of model (linear regression, binary logistic regression, multiclass logistic regression), but the metrics will be different depending on whether it is a regression or a classification task.

ML.CONFUSION_MATRIX. Returns the confusion matrix for a given data set, which allows us to know the correct predictions and the errors for each possible class in a classification model. It can only be used for classification models, that is, logistic regression and multiclass logistic regression.

ML.ROC_CURVE. This function allows us to construct the ROC curve, which is a graphical visualization used to evaluate the predictive ability of a binary classification model. It can only be used for binary logistic regression models.

In this post we will focus on ML.EVALUATE, but we will give the syntax and examples for the other functions in case someone is interested in using them.

To evaluate a previously created model, the following syntax has to be used:

#standardSQL
SELECT *
FROM ML.EVALUATE(MODEL `project.dataset.model_name`,
    {TABLE table_name | (query_statement)}
    [, STRUCT(XX AS threshold)])

Where we have to specify:

The model.

The table for which we want to compute the evaluation metrics, which can be the result of a query. Obviously, this test set must have the same columns as the training set, including the response variable (to compare the model's predictions with the actual values). If no table or query is specified, the validation set (if specified when creating the model) or the training set (if a validation set was not specified) will be used.

In the case of a logistic regression, a threshold. This value is optional, and it specifies the cutoff from which the predictions of our model (values between 0 and 1 that can be interpreted as the probability that an observation is of class 1) are assigned to class 0 or to class 1. By default, the threshold is 0.5.

The result of this query is a single row with the most common metrics to evaluate the predictions of a model, which will depend on the type of model used.

In particular, the metrics that BigQuery ML provides for logistic regression and multiclass logistic regression models are:

precision
recall
accuracy
f1_score
log_loss
roc_auc

In the case of linear regression, they are:

mean_absolute_error
mean_squared_error
mean_squared_log_error
median_absolute_error
r2_score
explained_variance

For example, for a logistic regression like the one in our example, we would have to use:

#standardSQL
SELECT *
FROM ML.EVALUATE(MODEL `project.dataset.sample_model`, (
  SELECT
    IF(totals.transactions IS NULL, 0, 1) AS isBuyer,
    IFNULL(totals.pageviews, 0) AS pageviews,
    IFNULL(totals.timeOnSite, 0) AS timeOnSite,
    IFNULL(totals.newVisits, 0) AS isNewVisit,
    IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile,
    IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop,
    IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic
  FROM
    `bigquery-public-data.google_analytics_sample.ga_sessions_*`
  WHERE
    _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'
  ),
  STRUCT(0.5 AS threshold)
)

Note that the dates used to generate the test data are different from those used to create the model. The result of the previous query is a single row with the classification metrics listed above (the table itself is not reproduced here).
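As a quick reminder of what the main classification metrics mean, the following sketch (with made-up counts, purely illustrative) computes them from the four cells of a confusion matrix:

# Hypothetical confusion-matrix counts for the positive class "isBuyer = 1".
tp, fp, fn, tn = 80, 20, 10, 890

precision = tp / (tp + fp)                  # of predicted buyers, how many bought
recall = tp / (tp + fn)                     # of actual buyers, how many were found
accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall fraction of correct predictions
f1_score = 2 * precision * recall / (precision + recall)
print(precision, recall, accuracy, f1_score)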
Here are the syntax and examples for the other three functions:

1. ML.TRAINING_INFO

Syntax:

#standardSQL
SELECT *
FROM ML.TRAINING_INFO(MODEL `project.dataset.model_name`)

Example:

#standardSQL
SELECT *
FROM ML.TRAINING_INFO(MODEL `project.dataset.sample_model`)

2. ML.CONFUSION_MATRIX

Syntax:

#standardSQL
ML.CONFUSION_MATRIX(MODEL `project.dataset.model_name`,
    {TABLE table_name | (query_statement)}
    [, STRUCT(XX AS threshold)])

Example:

#standardSQL
SELECT *
FROM ML.CONFUSION_MATRIX(MODEL `project.dataset.sample_model`, (
  SELECT
    IF(totals.transactions IS NULL, 0, 1) AS isBuyer,
    IFNULL(totals.pageviews, 0) AS pageviews,
    IFNULL(totals.timeOnSite, 0) AS timeOnSite,
    IFNULL(totals.newVisits, 0) AS isNewVisit,
    IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile,
    IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop,
    IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic
  FROM
    `bigquery-public-data.google_analytics_sample.ga_sessions_*`
  WHERE
    _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'
  ),
  STRUCT(0.5 AS threshold)
)

3. ML.ROC_CURVE

Syntax:

#standardSQL
ML.ROC_CURVE(MODEL `project.dataset.model_name`,
    {TABLE table_name | (query_statement)},
    [GENERATE_ARRAY(thresholds)])

Example:

#standardSQL
SELECT *
FROM ML.ROC_CURVE(MODEL `project.dataset.sample_model`, (
  SELECT
    IF(totals.transactions IS NULL, 0, 1) AS isBuyer,
    IFNULL(totals.pageviews, 0) AS pageviews,
    IFNULL(totals.timeOnSite, 0) AS timeOnSite,
    IFNULL(totals.newVisits, 0) AS isNewVisit,
    IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile,
    IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop,
    IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic
  FROM
    `bigquery-public-data.google_analytics_sample.ga_sessions_*`
  WHERE
    _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'
  ),
  GENERATE_ARRAY(0.0, 1.0, 0.01)
)

To use a model created with BigQuery ML to make predictions, the following syntax has to be used:

#standardSQL
ML.PREDICT(MODEL model_name,
    {TABLE table_name | (query_statement)}
    [, STRUCT(XX AS threshold)])

This query will use a model (MODEL) and will make predictions for a new data set (TABLE). Obviously, the table must have the same columns as the training data, although it is not necessary to include the response variable (since we do not need it to make predictions for new data). In logistic regression, you can optionally specify a threshold that defines the estimated probability above which the final prediction is one class rather than the other.

The result of this query will have as many rows as the data set we have provided, and it will include both the input table and the model's predictions. In the case of logistic regression models (binary or multiclass), in addition to the class that the model predicts, the estimated probability of each of the possible classes is also provided.

And continuing with our example:

#standardSQL
SELECT *
FROM ML.PREDICT(MODEL `project.dataset.sample_model`, (
  SELECT
    IFNULL(totals.pageviews, 0) AS pageviews,
    IFNULL(totals.timeOnSite, 0) AS timeOnSite,
    IFNULL(totals.newVisits, 0) AS isNewVisit,
    IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile,
    IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop,
    IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic
  FROM
    `bigquery-public-data.google_analytics_sample.ga_sessions_*`
  WHERE
    _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'
  )
)

Note that the column with the response (isBuyer) is not required. In the result of this query, the first column returns the class that our model predicts for each new observation. The second and third columns give the estimated probability of each of the classes (the class whose estimated probability is greater is the one in the first column). The rest of the columns are the data whose predictions we have requested.
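Finally, for readers who want to consume these predictions from code, here is a minimal sketch (assuming the google-cloud-bigquery Python client library and default credentials are set up; the model and column names are the ones from the example above, and the feature values are made up):

# Run an ML.PREDICT query from Python and read the predictions. BQML names
# the prediction columns predicted_<label> (and predicted_<label>_probs for
# the class probabilities).
from google.cloud import bigquery

client = bigquery.Client()  # uses the default project and credentials

query = """
SELECT predicted_isBuyer, pageviews, timeOnSite
FROM ML.PREDICT(MODEL `project.dataset.sample_model`, (
  SELECT 5 AS pageviews, 120 AS timeOnSite, 1 AS isNewVisit,
         1 AS isMobile, 0 AS isDesktop, 0 AS isPaidTraffic))
"""

for row in client.query(query).result():
    print(row.predicted_isBuyer, row.pageviews, row.timeOnSite)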
[ { "code": null, "e": 480, "s": 172, "text": "A few months ago Google announced a new Google BigQuery feature called BigQuery ML, which is currently in Beta. It consists of a set of extensions of the SQL language that allows to create machine learning models, evaluate their predictive performance and make predictions for new data directly in BigQuery." }, { "code": null, "e": 943, "s": 480, "text": "One of the advantages of BigQuery ML (BQML) is that one only needs to know standard SQL in order to use it (without needing to use R or Python to train models), which makes machine learning more accessible. It even handles data transformation, training/test sets split, etc. In addition, it reduces the training time of models because it works directly where the data is stored (BigQuery) and, consequently, it is not necessary to export the data to other tools." }, { "code": null, "e": 1436, "s": 943, "text": "But not everything is an advantage. First of all, the implemented models are currently limited (although we will see that it offers some flexibility), which probably will always be the case due to the fact of adapting to SQL. Secondly (and, in my opinion, more important), even though BQML makes model training easier, a person who is not familiar with machine learning can still have difficulties interpreting the model they have created, evaluating its performance and trying to improve it." }, { "code": null, "e": 1627, "s": 1436, "text": "In this post, I will explain the main functions of BQML and how to use them to create our model, evaluate it and use it to make predictions. This process will consist of the following steps:" }, { "code": null, "e": 1722, "s": 1627, "text": "Create a dataset (optional)Create a modelModel information (optional)Evaluate the modelPredict" }, { "code": null, "e": 1750, "s": 1722, "text": "Create a dataset (optional)" }, { "code": null, "e": 1765, "s": 1750, "text": "Create a model" }, { "code": null, "e": 1794, "s": 1765, "text": "Model information (optional)" }, { "code": null, "e": 1813, "s": 1794, "text": "Evaluate the model" }, { "code": null, "e": 1821, "s": 1813, "text": "Predict" }, { "code": null, "e": 1997, "s": 1821, "text": "As with BigQuery (BQ) tables, the model must be saved in a data set, so first you have to decide in which data set you want to save the model: an existing one or in a new one." }, { "code": null, "e": 2066, "s": 1997, "text": "If your case is the latter, creating a new data set is as simple as:" }, { "code": null, "e": 2189, "s": 2066, "text": "In the BQ interface, select the project in which you want to create the dataset and click on the “create data set” button." }, { "code": null, "e": 2312, "s": 2189, "text": "In the BQ interface, select the project in which you want to create the dataset and click on the “create data set” button." }, { "code": null, "e": 2472, "s": 2312, "text": "2. Name the new data set and choose a location where the data will be stored and the expiration. You can find more information about these fields in this link." }, { "code": null, "e": 2689, "s": 2472, "text": "In supervised machine learning, a training data set whose response variables are known is used to generate a model which captures the underlying patterns of the data so that it can predict the outcome of unseen data." }, { "code": null, "e": 2796, "s": 2689, "text": "BQML allows you to do this process directly within BigQuery. Currently, it supports three types of models:" }, { "code": null, "e": 3213, "s": 2796, "text": "Linear regression. 
It is used to predict the result of a continuous numeric variable, such as income.Binary logistic regression. It is used to predict the result of a categorical variable with two possible classes, such as when you want to determine whether a user will buy or not.Multinomial logistic regression (or multiclass). It is used to predict the result of a categorical variable with more than two classes." }, { "code": null, "e": 3315, "s": 3213, "text": "Linear regression. It is used to predict the result of a continuous numeric variable, such as income." }, { "code": null, "e": 3496, "s": 3315, "text": "Binary logistic regression. It is used to predict the result of a categorical variable with two possible classes, such as when you want to determine whether a user will buy or not." }, { "code": null, "e": 3632, "s": 3496, "text": "Multinomial logistic regression (or multiclass). It is used to predict the result of a categorical variable with more than two classes." }, { "code": null, "e": 3710, "s": 3632, "text": "To create (and train) a model with BQML, the following syntax has to be used:" }, { "code": null, "e": 3865, "s": 3710, "text": "#standardSQL{CREATE MODEL | CREATE MODEL IF NOT EXISTS | CREATE OR REPLACE MODEL} `project.dataset.model_name`OPTIONS(model_option_list)AS query_statement" }, { "code": null, "e": 4023, "s": 3865, "text": "This query will create a model (CREATE MODEL) with the specified options (OPTIONS) and using the result of a query (AS) as training data. We have to specify:" }, { "code": null, "e": 4389, "s": 4023, "text": "1)The name of the model and where it has to be saved. CREATE MODEL creates and trains the model (which will be saved with the name “model_name” inside the specified data set) as long as there is no model already created with the same name. If the model name exists, the query with CREATE MODEL will return an error. To avoid this error, we can use two alternatives:" }, { "code": null, "e": 4514, "s": 4389, "text": "CREATE MODEL IF NOT EXISTS, which creates and trains the model only if there is no model already created with the same name." }, { "code": null, "e": 4631, "s": 4514, "text": "CREATE OR REPLACE MODEL, which creates the model (if it does not exist) or replaces it (if it exists) and trains it." }, { "code": null, "e": 4834, "s": 4631, "text": "2) model_option_list . A list specifying some options related to the model and the training process. The format is as follows: option1 = value1, option2 = value2, ... The two most important options are:" }, { "code": null, "e": 4999, "s": 4834, "text": "model_type (mandatory): specifies the type of model we want to train: linear_reg for linear regression or logistic_reg for binary or multiclass logistic regression." }, { "code": null, "e": 5220, "s": 4999, "text": "input_label_cols: specifies the column name of the table with the training data that contains the response variable. If the column is called label, this field is optional; if not, it must be specified as [‘column_name’]." }, { "code": null, "e": 5624, "s": 5220, "text": "Although BigQuery ML has default options for model training, it offers some flexibility to choose options related to avoiding overfitting and the optimization of model parameters. For example, we can apply regularization L1 or L2, split the data in a training set and a validation set, or set the maximum number of iterations of the gradient descent. You can find all the available options in this link." }, { "code": null, "e": 5833, "s": 5624, "text": "3) query_statement. 
Query that generates the table that will be used as training data. One of the advantages of BigQuery ML is that it is responsible for data transformation for model training. In particular:" }, { "code": null, "e": 6161, "s": 5833, "text": "Categorical features (of type BOOL, STRING, BYTES, DATE, DATETIME or TIME) are one-hot enconded (i.e., converted into a binary variable for each class). Due to the problem known as multicollinearity, this is not recommended if you want to draw conclusions about the relationships between the features and the response variable." }, { "code": null, "e": 6277, "s": 6161, "text": "Numerical features (type NUMERIC, FLOAT64 or INT64) are standardized for both training data and future predictions." }, { "code": null, "e": 6444, "s": 6277, "text": "NULL values ​​are replaced by the average in the case of numerical variables or by a new class that groups all these missing data in the case of categorical features." }, { "code": null, "e": 6513, "s": 6444, "text": "Regarding the response variable, it must be taken into account that:" }, { "code": null, "e": 6571, "s": 6513, "text": "In linear regression can not have infinite or NaN values." }, { "code": null, "e": 6645, "s": 6571, "text": "In binary logistic regression it has to have exactly two possible values." }, { "code": null, "e": 6730, "s": 6645, "text": "In multiclass logistic regression you can have a maximum of 50 different categories." }, { "code": null, "e": 7177, "s": 6730, "text": "For example, let’s imagine that we want to predict whether a web session will end up buying or not depending on several features related to the user’s browsing activity (number of page views, session duration, type of user, the device he uses and whether it is paid traffic or not). In case you want to follow this example, we will use the Google Analytics test data set offered by BigQuery. To create the model, we would use the following query:" }, { "code": null, "e": 7801, "s": 7177, "text": "#standardSQLCREATE MODEL `project.dataset.sample_model`OPTIONS(model_type='logistic_reg', input_label_cols=['isBuyer'])ASSELECT IF(totals.transactions IS NULL, 0, 1) AS isBuyer, IFNULL(totals.pageviews, 0) AS pageviews, IFNULL(totals.timeOnSite, 0) AS timeOnSite, IFNULL(totals.newVisits, 0) AS isNewVisit, IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile, IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop, IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTrafficFROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`WHERE _TABLE_SUFFIX BETWEEN '20160801' AND '20170630'" }, { "code": null, "e": 8118, "s": 7801, "text": "Since our response variable is categorical with two classes (1 = “with purchase” or 0 = “without purchase”), we’ve had to specify in the options that the type of model is a logistic regression (logistic_reg). Also, note that the response variable is called “isBuyer”, so we’ve had to specify that in the options too." }, { "code": null, "e": 8582, "s": 8118, "text": "In linear models, each explanatory variable has an associated coefficient (or weight) that determines the relationship between this feature and the response variable. The greater its magnitude, the greater impact it has on the response variable. Furthermore, the positive (negative) sign indicates whether the response increases (decreases) when the value of this explanatory variable increases (or, in the case of categorical variables, the category is present)." 
}, { "code": null, "e": 8668, "s": 8582, "text": "In BigQuery ML, we can get the weights of the trained model with the following query:" }, { "code": null, "e": 8740, "s": 8668, "text": "#standardSQLSELECT *FROM ML.WEIGHTS(MODEL `project.dataset.model_name`)" }, { "code": null, "e": 9034, "s": 8740, "text": "As mentioned before, if you don’t convert the categorical variables into binary “manually” in your query (as we have done with isMobile and isDesktop, for instance), each possible category will have a weight and the reliability of the coefficients will be compromised due to multicollinearity." }, { "code": null, "e": 9124, "s": 9034, "text": "For example, the model we created in the previous section has the following coefficients:" }, { "code": null, "e": 9385, "s": 9124, "text": "That is, being a session of a new user, using a mobile device or accessing the web through a paid channel decreases the probability of that visit ending in purchase. While using desktop or spending more time on the site increases the probability of converting." }, { "code": null, "e": 9748, "s": 9385, "text": "Once we have the trained model, we need to assess its predictive performance. This always has to be done on a test set different from the training set to avoid overfitting, which occurs when our model memorizes the patterns of our training data and consequently it is very precise in our training set but it is not capable of making good predictions in new data." }, { "code": null, "e": 9804, "s": 9748, "text": "BQML provides several functions to evaluate our models:" }, { "code": null, "e": 10090, "s": 9804, "text": "ML.TRAINING_INFO. Provides information about the iterations during model training, including the loss in both the training and the validation set at each iteration. The expected result is that the loss decreases in both sets (ideally, to 0, which means that the model is always right)." }, { "code": null, "e": 10414, "s": 10090, "text": "ML.EVALUATE. Provides the most common metrics to assess the predictive performance of a model. This function can be used for any type of model (linear regression, binary logistic regression, multiclass logistic regression), but the metrics will be different depending on whether it is a regression or a classification task." }, { "code": null, "e": 10712, "s": 10414, "text": "ML.CONFUSION_MATRIX. Returns the confusion matrix for a given data set, which allows us to know the correct predictions and the errors for each possible class in a classification model. It can only be used for classification models, that is, logistic regression and multiclass logistic regression." }, { "code": null, "e": 10960, "s": 10712, "text": "ML.ROC_CURVE. This function allows us to construct the ROC curve, which is a graphical visualization used to evaluate the predictive ability of a binary classification model. In this case, it can only be used for binary logistic regression models." }, { "code": null, "e": 11113, "s": 10960, "text": "In this post we will focus on ML.EVALUATE, but we will give the syntax and examples for the other functions in case someone is interested in using them." 
}, { "code": null, "e": 11190, "s": 11113, "text": "To evaluate a previously created model, the following syntax has to be used:" }, { "code": null, "e": 11349, "s": 11190, "text": "#standardSQLSELECT *FROM ML.EVALUATE(MODEL `project.dataset.model_name`, {TABLE table_name | (query_statement)} [, STRUCT(XX AS threshold)])" }, { "code": null, "e": 11375, "s": 11349, "text": "Where we have to specify:" }, { "code": null, "e": 11386, "s": 11375, "text": "The model." }, { "code": null, "e": 11815, "s": 11386, "text": "The table for which we want to compute the evaluation metrics, which can be the result of a query. Obviously, this test set must have the same columns as the training set, including the response variable (to compare the model predictions with the actual values). If no table or query is specified, it will use the validation set (if specified when creating the model) or the training set (if a validation set was not specified)." }, { "code": null, "e": 12150, "s": 11815, "text": "In the case of a logistic regression, a threshold. This value is optional, and it specifies the value from which the predictions of our model (which are values ​​between 0 and 1 that can be interpreted as probabilities that this observation is of class 1) will be for class 0 or for the class 1. By default, the threshold will be 0.5." }, { "code": null, "e": 12305, "s": 12150, "text": "The result of this query is a single row with the most common metrics to evaluate the predictions of a model, which will depend on the type of model used." }, { "code": null, "e": 12429, "s": 12305, "text": "In particular, the metrics that BigQuery ML provides for logistic regression and multiclass logistic regression models are:" }, { "code": null, "e": 12439, "s": 12429, "text": "precision" }, { "code": null, "e": 12446, "s": 12439, "text": "recall" }, { "code": null, "e": 12455, "s": 12446, "text": "accuracy" }, { "code": null, "e": 12464, "s": 12455, "text": "f1_score" }, { "code": null, "e": 12473, "s": 12464, "text": "log_loss" }, { "code": null, "e": 12481, "s": 12473, "text": "roc_auc" }, { "code": null, "e": 12525, "s": 12481, "text": "In the case of linear regression, they are:" }, { "code": null, "e": 12545, "s": 12525, "text": "mean_absolute_error" }, { "code": null, "e": 12564, "s": 12545, "text": "mean_squared_error" }, { "code": null, "e": 12587, "s": 12564, "text": "mean_squared_log_error" }, { "code": null, "e": 12609, "s": 12587, "text": "median_absolute_error" }, { "code": null, "e": 12618, "s": 12609, "text": "r2_score" }, { "code": null, "e": 12637, "s": 12618, "text": "explained_variance" }, { "code": null, "e": 12727, "s": 12637, "text": "For example, for a logistic regression like the one in our example, we would have to use:" }, { "code": null, "e": 13416, "s": 12727, "text": "#standardSQLSELECT *FROM ML.EVALUATE(MODEL `project.dataset.sample_model`, ( SELECT IF(totals.transactions IS NULL, 0, 1) AS isBuyer, IFNULL(totals.pageviews, 0) AS pageviews, IFNULL(totals.timeOnSite, 0) AS timeOnSite, IFNULL(totals.newVisits, 0) AS isNewVisit, IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile, IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop, IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*` WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170801' ), STRUCT(0.5 AS threshold) )" }, { "code": null, "e": 13550, "s": 13416, "text": "Note that the dates used to generate the data are different from those used to 
create the model. The result of the previous query is:" }, { "code": null, "e": 13570, "s": 13550, "text": "1. ML.TRAINING_INFO" }, { "code": null, "e": 13578, "s": 13570, "text": "Syntax:" }, { "code": null, "e": 13656, "s": 13578, "text": "#standardSQLSELECT *FROM ML.TRAINING_INFO(MODEL `project.dataset.model_name`)" }, { "code": null, "e": 13665, "s": 13656, "text": "Example:" }, { "code": null, "e": 13745, "s": 13665, "text": "#standardSQLSELECT *FROM ML.TRAINING_INFO(MODEL `project.dataset.sample_model`)" }, { "code": null, "e": 13768, "s": 13745, "text": "2. ML.CONFUSION_MATRIX" }, { "code": null, "e": 13776, "s": 13768, "text": "Syntax:" }, { "code": null, "e": 13930, "s": 13776, "text": "#standardSQLML.CONFUSION_MATRIX(MODEL `project.dataset.model_name`, {TABLE table_name | (query_statement)} [, STRUCT(XX AS threshold)])" }, { "code": null, "e": 13939, "s": 13930, "text": "Example:" }, { "code": null, "e": 14636, "s": 13939, "text": "#standardSQLSELECT *FROM ML.CONFUSION_MATRIX(MODEL `project.dataset.sample_model`, ( SELECT IF(totals.transactions IS NULL, 0, 1) AS isBuyer, IFNULL(totals.pageviews, 0) AS pageviews, IFNULL(totals.timeOnSite, 0) AS timeOnSite, IFNULL(totals.newVisits, 0) AS isNewVisit, IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile, IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop, IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*` WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170801' ), STRUCT(0.5 AS threshold) )" }, { "code": null, "e": 14652, "s": 14636, "text": "3. ML.ROC_CURVE" }, { "code": null, "e": 14660, "s": 14652, "text": "Syntax:" }, { "code": null, "e": 14811, "s": 14660, "text": "#standardSQLML.ROC_CURVE(MODEL `project.dataset.model_name`, {TABLE table_name | (query_statement)}, [GENERATE_ARRAY(thresholds)])" }, { "code": null, "e": 14820, "s": 14811, "text": "Example:" }, { "code": null, "e": 15516, "s": 14820, "text": "#standardSQLSELECT *FROM ML.ROC_CURVE(MODEL `project.dataset.sample_model`, ( SELECT IF(totals.transactions IS NULL, 0, 1) AS isBuyer, IFNULL(totals.pageviews, 0) AS pageviews, IFNULL(totals.timeOnSite, 0) AS timeOnSite, IFNULL(totals.newVisits, 0) AS isNewVisit, IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile, IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop, IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*` WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170801' ), GENERATE_ARRAY(0.0, 1.0, 0.01) )" }, { "code": null, "e": 15614, "s": 15516, "text": "To use a model created with BigQuery ML to make predictions, the following syntax has to be used:" }, { "code": null, "e": 15740, "s": 15614, "text": "#standardSQLML.PREDICT(MODEL model_name, {TABLE table_name | (query_statement)} [, STRUCT(XX AS threshold)])" }, { "code": null, "e": 16186, "s": 15740, "text": "This query will use a model (MODEL) and will make predictions of a new data set (TABLE). Obviously, the table must have the same columns as the training data, although it isn’t necessary to include the response variable (since we do not need it to make predictions of new data). In logistic regression, you can optionally specify a threshold that defines from which estimated probability is considered as a final prediction one class or another." 
}, { "code": null, "e": 16527, "s": 16186, "text": "The result of this query will have as many rows as the data set we have provided and it will include both the input table and the model predictions. In the case of logistic regression models (binary or multiclass), in addition to the class that predicts the model, the estimated probability of each of the possible classes is also provided." }, { "code": null, "e": 16560, "s": 16527, "text": "And continuing with our example:" }, { "code": null, "e": 17163, "s": 16560, "text": "#standardSQLSELECT *FROM ML.PREDICT(MODEL `project.dataset.sample_model`, ( SELECT IFNULL(totals.pageviews, 0) AS pageviews, IFNULL(totals.timeOnSite, 0) AS timeOnSite, IFNULL(totals.newVisits, 0) AS isNewVisit, IF(device.deviceCategory = 'mobile', 1, 0) AS isMobile, IF(device.deviceCategory = 'desktop', 1, 0) AS isDesktop, IF(trafficSource.medium in ('affiliate', 'cpc', 'cpm'), 1, 0) AS isPaidTraffic FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*` WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170801' ) )" }, { "code": null, "e": 17266, "s": 17163, "text": "Note that the column with the response (isBuyer) is not required. The result of the previous query is:" } ]
Android - Services
Started

A service is started when an application component, such as an activity, starts it by calling startService(). Once started, a service can run in the background indefinitely, even if the component that started it is destroyed.

Bound

A service is bound when an application component binds to it by calling bindService(). A bound service offers a client-server interface that allows components to interact with the service, send requests, get results, and even do so across processes with interprocess communication (IPC).

A service has life cycle callback methods that you can implement to monitor changes in the service's state and perform work at the appropriate stage. One diagram shows the life cycle when the service is created with startService() and another shows the life cycle when it is created with bindService() (diagrams not reproduced here; image courtesy: android.com).

To create a service, you create a Java class that extends the Service base class or one of its existing subclasses. The Service base class defines various callback methods; the most important ones are given below. You don't need to implement all the callback methods. However, it's important that you understand each one and implement those that ensure your app behaves the way users expect.

onStartCommand()

The system calls this method when another component, such as an activity, requests that the service be started, by calling startService(). If you implement this method, it is your responsibility to stop the service when its work is done, by calling the stopSelf() or stopService() methods.

onBind()

The system calls this method when another component wants to bind to the service by calling bindService(). If you implement this method, you must provide an interface that clients use to communicate with the service, by returning an IBinder object. You must always implement this method, but if you don't want to allow binding, you should return null.

onUnbind()

The system calls this method when all clients have disconnected from a particular interface published by the service.

onRebind()

The system calls this method when new clients have connected to the service, after it had previously been notified that all had disconnected in its onUnbind(Intent).

onCreate()

The system calls this method when the service is first created using onStartCommand() or onBind(). This call is required to perform one-time set-up.

onDestroy()

The system calls this method when the service is no longer used and is being destroyed. Your service should implement this to clean up any resources such as threads, registered listeners, receivers, etc.

The following skeleton service demonstrates each of the life cycle methods −

package com.tutorialspoint;

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class HelloService extends Service {

   /** indicates how to behave if the service is killed */
   int mStartMode;

   /** interface for clients that bind */
   IBinder mBinder;

   /** indicates whether onRebind should be used */
   boolean mAllowRebind;

   /** Called when the service is being created. */
   @Override
   public void onCreate() {
   }

   /** The service is starting, due to a call to startService() */
   @Override
   public int onStartCommand(Intent intent, int flags, int startId) {
      return mStartMode;
   }

   /** A client is binding to the service with bindService() */
   @Override
   public IBinder onBind(Intent intent) {
      return mBinder;
   }

   /** Called when all clients have unbound with unbindService() */
   @Override
   public boolean onUnbind(Intent intent) {
      return mAllowRebind;
   }

   /** Called when a client is binding to the service with bindService() */
   @Override
   public void onRebind(Intent intent) {
   }

   /** Called when the service is no longer used and is being destroyed */
   @Override
   public void onDestroy() {
   }
}
This example will take you through simple steps to show how to create your own Android Service. Follow the steps below to modify the Android application we created in the Hello World Example chapter −

Following is the content of the modified main activity file MainActivity.java. This file can include each of the fundamental life cycle methods. We have added startService() and stopService() methods to start and stop the service.

package com.example.tutorialspoint7.myapplication;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.View;

public class MainActivity extends Activity {
   String msg = "Android : ";

   /** Called when the activity is first created. */
   @Override
   public void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      Log.d(msg, "The onCreate() event");
   }

   // Method to start the service
   public void startService(View view) {
      startService(new Intent(getBaseContext(), MyService.class));
   }

   // Method to stop the service
   public void stopService(View view) {
      stopService(new Intent(getBaseContext(), MyService.class));
   }
}

Following is the content of MyService.java. This file can have the implementation of one or more methods associated with Service, based on requirements. For now we are going to implement only two methods, onStartCommand() and onDestroy() −

package com.example.tutorialspoint7.myapplication;

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.support.annotation.Nullable;
import android.widget.Toast;

public class MyService extends Service {
   @Nullable
   @Override
   public IBinder onBind(Intent intent) {
      return null;
   }

   @Override
   public int onStartCommand(Intent intent, int flags, int startId) {
      // Let it continue running until it is stopped.
      Toast.makeText(this, "Service Started", Toast.LENGTH_LONG).show();
      // START_STICKY asks the system to recreate the service if it is killed.
      return START_STICKY;
   }

   @Override
   public void onDestroy() {
      super.onDestroy();
      Toast.makeText(this, "Service Destroyed", Toast.LENGTH_LONG).show();
   }
}

Following is the modified content of the AndroidManifest.xml file.
Here we have added the <service.../> tag to include our service −

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="com.example.tutorialspoint7.myapplication">

   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">

      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>

      <service android:name=".MyService" />
   </application>

</manifest>

Following is the content of the res/layout/activity_main.xml file, which includes two buttons −

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:paddingLeft="@dimen/activity_horizontal_margin"
   android:paddingRight="@dimen/activity_horizontal_margin"
   android:paddingTop="@dimen/activity_vertical_margin"
   android:paddingBottom="@dimen/activity_vertical_margin"
   tools:context=".MainActivity">

   <TextView
      android:id="@+id/textView1"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Example of services"
      android:layout_alignParentTop="true"
      android:layout_centerHorizontal="true"
      android:textSize="30dp" />

   <TextView
      android:id="@+id/textView2"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Tutorials point"
      android:textColor="#ff87ff09"
      android:textSize="30dp"
      android:layout_above="@+id/imageButton"
      android:layout_centerHorizontal="true"
      android:layout_marginBottom="40dp" />

   <ImageButton
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:id="@+id/imageButton"
      android:src="@drawable/abc"
      android:layout_centerVertical="true"
      android:layout_centerHorizontal="true" />

   <Button
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:id="@+id/button2"
      android:text="Start Services"
      android:onClick="startService"
      android:layout_below="@+id/imageButton"
      android:layout_centerHorizontal="true" />

   <Button
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Stop Services"
      android:id="@+id/button"
      android:onClick="stopService"
      android:layout_below="@+id/button2"
      android:layout_alignLeft="@+id/button2"
      android:layout_alignStart="@+id/button2"
      android:layout_alignRight="@+id/button2"
      android:layout_alignEnd="@+id/button2" />
</RelativeLayout>

Let's try to run our modified Hello World! application. I assume you created your AVD during the environment setup. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Android Studio installs the app on your AVD and starts it; if everything is fine with your set-up and application, the Emulator window will appear.

Now, to start your service, click the Start Service button. This will start the service and, as per our code in the onStartCommand() method, a message "Service Started" will appear at the bottom of the emulator.

To stop the service, you can click the Stop Service button.
Your service should implement this to clean up any resources such as threads, registered listeners, receivers, etc." }, { "code": null, "e": 6348, "s": 6271, "text": "The following skeleton service demonstrates each of the life cycle methods −" }, { "code": null, "e": 7621, "s": 6348, "text": "package com.tutorialspoint;\n\nimport android.app.Service;\nimport android.os.IBinder;\nimport android.content.Intent;\nimport android.os.Bundle;\n\npublic class HelloService extends Service {\n \n /** indicates how to behave if the service is killed */\n int mStartMode;\n \n /** interface for clients that bind */\n IBinder mBinder; \n \n /** indicates whether onRebind should be used */\n boolean mAllowRebind;\n\n /** Called when the service is being created. */\n @Override\n public void onCreate() {\n \n }\n\n /** The service is starting, due to a call to startService() */\n @Override\n public int onStartCommand(Intent intent, int flags, int startId) {\n return mStartMode;\n }\n\n /** A client is binding to the service with bindService() */\n @Override\n public IBinder onBind(Intent intent) {\n return mBinder;\n }\n\n /** Called when all clients have unbound with unbindService() */\n @Override\n public boolean onUnbind(Intent intent) {\n return mAllowRebind;\n }\n\n /** Called when a client is binding to the service with bindService()*/\n @Override\n public void onRebind(Intent intent) {\n\n }\n\n /** Called when The service is no longer used and is being destroyed */\n @Override\n public void onDestroy() {\n\n }\n}" }, { "code": null, "e": 7822, "s": 7621, "text": "This example will take you through simple steps to show how to create your own Android Service. Follow the following steps to modify the Android application we created in Hello World Example chapter −" }, { "code": null, "e": 8053, "s": 7822, "text": "Following is the content of the modified main activity file MainActivity.java. This file can include each of the fundamental life cycle methods. We have added startService() and stopService() methods to start and stop the service." }, { "code": null, "e": 8910, "s": 8053, "text": "package com.example.tutorialspoint7.myapplication;\n\nimport android.content.Intent;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\n\nimport android.os.Bundle;\nimport android.app.Activity;\nimport android.util.Log;\nimport android.view.View;\n\npublic class MainActivity extends Activity {\n String msg = \"Android : \";\n\n /** Called when the activity is first created. */\n @Override\n public void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n Log.d(msg, \"The onCreate() event\");\n }\n\n public void startService(View view) {\n startService(new Intent(getBaseContext(), MyService.class));\n }\n\n // Method to stop the service\n public void stopService(View view) {\n stopService(new Intent(getBaseContext(), MyService.class));\n }\n}" }, { "code": null, "e": 9144, "s": 8910, "text": "Following is the content of MyService.java. This file can have implementation of one or more methods associated with Service based on requirements. 
For now we are going to implement only two methods onStartCommand() and onDestroy() −" }, { "code": null, "e": 9937, "s": 9144, "text": "package com.example.tutorialspoint7.myapplication;\n\nimport android.app.Service;\nimport android.content.Intent;\nimport android.os.IBinder;\nimport android.support.annotation.Nullable;\nimport android.widget.Toast;\n\n/**\n * Created by TutorialsPoint7 on 8/23/2016.\n*/\n\npublic class MyService extends Service {\n @Nullable\n @Override\n public IBinder onBind(Intent intent) {\n return null;\n }\n\t\n @Override\n public int onStartCommand(Intent intent, int flags, int startId) {\n // Let it continue running until it is stopped.\n Toast.makeText(this, \"Service Started\", Toast.LENGTH_LONG).show();\n return START_STICKY;\n }\n\n @Override\n public void onDestroy() {\n super.onDestroy();\n Toast.makeText(this, \"Service Destroyed\", Toast.LENGTH_LONG).show();\n }\n}" }, { "code": null, "e": 10064, "s": 9937, "text": "Following will the modified content of AndroidManifest.xml file. Here we have added <service.../> tag to include our service −" }, { "code": null, "e": 10765, "s": 10064, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.tutorialspoint7.myapplication\">\n\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n\t\t\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n\t\t\n <service android:name=\".MyService\" />\n </application>\n\n</manifest>" }, { "code": null, "e": 10857, "s": 10765, "text": "Following will be the content of res/layout/activity_main.xml file to include two buttons −" }, { "code": null, "e": 13003, "s": 10857, "text": "<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\" android:paddingLeft=\"@dimen/activity_horizontal_margin\"\n android:paddingRight=\"@dimen/activity_horizontal_margin\"\n android:paddingTop=\"@dimen/activity_vertical_margin\"\n android:paddingBottom=\"@dimen/activity_vertical_margin\" tools:context=\".MainActivity\">\n \n <TextView\n android:id=\"@+id/textView1\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Example of services\"\n android:layout_alignParentTop=\"true\"\n android:layout_centerHorizontal=\"true\"\n android:textSize=\"30dp\" />\n \n <TextView\n android:id=\"@+id/textView2\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Tutorials point \"\n android:textColor=\"#ff87ff09\"\n android:textSize=\"30dp\"\n android:layout_above=\"@+id/imageButton\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginBottom=\"40dp\" />\n\n <ImageButton\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/imageButton\"\n android:src=\"@drawable/abc\"\n android:layout_centerVertical=\"true\"\n android:layout_centerHorizontal=\"true\" />\n\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/button2\"\n android:text=\"Start Services\"\n 
android:onClick=\"startService\"\n android:layout_below=\"@+id/imageButton\"\n android:layout_centerHorizontal=\"true\" />\n\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Stop Services\"\n android:id=\"@+id/button\"\n android:onClick=\"stopService\"\n android:layout_below=\"@+id/button2\"\n android:layout_alignLeft=\"@+id/button2\"\n android:layout_alignStart=\"@+id/button2\"\n android:layout_alignRight=\"@+id/button2\"\n android:layout_alignEnd=\"@+id/button2\" />\n\n</RelativeLayout>" }, { "code": null, "e": 13420, "s": 13003, "text": "Let's try to run our modified Hello World! application we just modified. I assume you had created your AVD while doing environment setup. To run the app from Android studio, open one of your project's activity files and click Run icon from the tool bar. Android Studio installs the app on your AVD and starts it and if everything is fine with your set-up and application, it will display following Emulator window −" }, { "code": null, "e": 13653, "s": 13420, "text": "Now to start your service, let's click on Start Service button, this will start the service and as per our programming in onStartCommand() method, a message Service Started will appear on the bottom of the the simulator as follows −" }, { "code": null, "e": 13713, "s": 13653, "text": "To stop the service, you can click the Stop Service button." }, { "code": null, "e": 13748, "s": 13713, "text": "\n 46 Lectures \n 7.5 hours \n" }, { "code": null, "e": 13760, "s": 13748, "text": " Aditya Dua" }, { "code": null, "e": 13795, "s": 13760, "text": "\n 32 Lectures \n 3.5 hours \n" }, { "code": null, "e": 13809, "s": 13795, "text": " Sharad Kumar" }, { "code": null, "e": 13841, "s": 13809, "text": "\n 9 Lectures \n 1 hours \n" }, { "code": null, "e": 13858, "s": 13841, "text": " Abhilash Nelson" }, { "code": null, "e": 13893, "s": 13858, "text": "\n 14 Lectures \n 1.5 hours \n" }, { "code": null, "e": 13910, "s": 13893, "text": " Abhilash Nelson" }, { "code": null, "e": 13945, "s": 13910, "text": "\n 15 Lectures \n 1.5 hours \n" }, { "code": null, "e": 13962, "s": 13945, "text": " Abhilash Nelson" }, { "code": null, "e": 13995, "s": 13962, "text": "\n 10 Lectures \n 1 hours \n" }, { "code": null, "e": 14012, "s": 13995, "text": " Abhilash Nelson" }, { "code": null, "e": 14019, "s": 14012, "text": " Print" }, { "code": null, "e": 14030, "s": 14019, "text": " Add Notes" } ]
How to Use Random Seeds Effectively | by Jai Bansal | Towards Data Science
Building a predictive model is a complex process. You need to get the right data, clean it, create useful features, test different algorithms, and finally validate your model’s performance. However, this post covers an aspect of the model-building process that doesn’t typically get much attention: random seeds.

A random seed is used to ensure that results are reproducible. In other words, using this parameter makes sure that anyone who re-runs your code will get the exact same outputs. Reproducibility is an extremely important concept in data science and other fields. Lots of people have already written about this topic at length, so I won’t discuss it any further in this post.

Depending on your specific project, you may not even need a random seed. However, there are 2 common tasks where they are used:

1. Splitting data into training/validation/test sets: random seeds ensure that the data is divided the same way every time the code is run

2. Model training: algorithms such as random forest and gradient boosting are non-deterministic (for a given input, the output is not always the same) and so require a random seed argument for reproducible results

In addition to reproducibility, random seeds are also important for benchmarking results. If you are testing multiple versions of an algorithm, it’s important that all versions use the same data and are as similar as possible (except for the parameters you are testing).

Despite their importance, random seeds are often set without much effort. I’m guilty of this. I typically use the date of whatever day I’m working on (so on March 1st, 2020 I would use the seed 20200301). Some people use the same seed every time, while others randomly generate them.

Overall, random seeds are typically treated as an afterthought in the modeling process. This can be problematic because, as we’ll see in the next few sections, the choice of this parameter can significantly affect results.

Now, I’ll demonstrate just how much impact the choice of a random seed can have. I’ll use the well-known Titanic dataset to do this (download link is below).

www.kaggle.com

The following code and plots are created in Python, but I found similar results in R. The complete code associated with this post can be found in the GitHub repository below:

github.com

First, let’s look at a few rows of this data:

import pandas as pd

train_all = pd.read_csv('train.csv')

# Show selected columns
train_all.drop(['PassengerId', 'Parch', 'Ticket', 'Embarked', 'Cabin'], axis = 1).head()

The Titanic data is already divided into training and test sets. A classic task for this dataset is to predict passenger survival (encoded in the Survived column). The test data does not come with labels for the Survived column, so I’ll be doing the following:

1. Holding out part of the training data to serve as a validation set

2. Training a model to predict survival on the remaining training data and evaluating that model against the validation set created in step 1

Let’s start by looking at the overall distribution of the Survived column.

In [19]: train_all.Survived.value_counts() / train_all.shape[0]
Out[19]:
0    0.616162
1    0.383838
Name: Survived, dtype: float64

When modeling, we want our training, validation, and test data to be as similar as possible so that our model is trained on the same kind of data that it’s being evaluated against. Note that this does not mean that any of these 3 data sets should overlap! They should not. But we want the observations contained in each of them to be broadly comparable.
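To make “broadly comparable” concrete, here is a minimal helper that measures how far apart the survival rates land on the two sides of a split. This is a sketch for illustration rather than code from the post (the survival_gap name is illustrative), but it computes exactly the gap that the next two splits demonstrate:

from sklearn.model_selection import train_test_split

def survival_gap(df, seed):
    # Absolute difference in survival rate between the training and
    # validation halves of an 80/20 split made with the given seed
    X = df.drop('Survived', axis = 1)
    y = df.Survived
    _, _, y_train, y_val = train_test_split(X, y, test_size = 0.2, random_state = seed)
    return abs(y_train.mean() - y_val.mean())

Since Survived is binary, y_train.mean() is simply the training-set survival rate. For the seed used in Split 1 below, this gap works out to roughly 20 percentage points.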
I’ll now split the data using different random seeds and compare the resulting distributions of Survived for the training and validation sets.

from sklearn.model_selection import train_test_split

# Create data frames for dependent and independent variables
X = train_all.drop('Survived', axis = 1)
y = train_all.Survived

# Split 1
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2, random_state = 135153)

In [41]: y_train.value_counts() / len(y_train)
Out[41]:
0    0.655899
1    0.344101
Name: Survived, dtype: float64

In [42]: y_val.value_counts() / len(y_val)
Out[42]:
0    0.458101
1    0.541899
Name: Survived, dtype: float64

In this case, the proportion of survivors is much lower in the training set than the validation set.

# Split 2
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2, random_state = 163035)

In [44]: y_train.value_counts() / len(y_train)
Out[44]:
0    0.577247
1    0.422753
Name: Survived, dtype: float64

In [45]: y_val.value_counts() / len(y_val)
Out[45]:
0    0.77095
1    0.22905
Name: Survived, dtype: float64

Here, the proportion of survivors is much higher in the training set than in the validation set.

Full disclosure, these examples are the most extreme ones I found after looping through 200K random seeds. Regardless, there are a couple of concerns with these results. First, in both cases, the survival distribution is substantially different between the training and validation sets. This will likely negatively affect model training. Second, these outputs are very different from each other. If, as most people do, you set a random seed arbitrarily, your resulting data splits can vary drastically depending on your choice.

I’ll discuss best practices at the end of the post. Next, I want to show how the training and validation Survival distributions varied for all 200K random seeds I tested.

~23% of data splits resulted in a survival percentage difference of at least 5% between training and validation sets. Over 1% of splits resulted in a survival percentage difference of at least 10%. The largest survival percentage difference was ~20%. The takeaway here is that using an arbitrary random seed can result in large differences between the training and validation set distributions. These differences can have unintended downstream consequences in the modeling process.
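The scanning loop behind those percentages isn’t shown above. Reusing the survival_gap helper sketched earlier, a minimal version might look like this (the smaller seed range is only to keep the sketch fast; the full scan used 200K seeds):

import numpy as np

# Scan candidate seeds and record the train/validation survival-rate gap for each
gaps = np.array([survival_gap(train_all, seed) for seed in range(20000)])

print(f"splits with a gap of at least 5%: {(gaps >= 0.05).mean():.1%}")
print(f"splits with a gap of at least 10%: {(gaps >= 0.10).mean():.1%}")
print(f"largest gap found: {gaps.max():.1%}")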
The previous section showed how random seeds can influence data splits. In this section, I train a model using different random seeds after the data has already been split into training and validation sets (more on exactly how I do that in the next section).

As a reminder, I’m trying to predict the Survived column. I’ll build a random forest classification model. Since the random forest algorithm is non-deterministic, a random seed is needed for reproducibility. I’ll show results for model accuracy below, but I found similar results using precision and recall.

First, I’ll create a training and validation set.

X = X[['Pclass', 'Sex', 'SibSp', 'Fare']] # These will be my predictors

# The "Sex" variable is a string and needs to be one-hot encoded
X['gender_dummy'] = pd.get_dummies(X.Sex)['female']
X = X.drop(['Sex'], axis = 1)

# Divide data into training and validation sets
# I'll discuss exactly why I divide the data this way in the next section
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2, random_state = 20200226, stratify = y)

Now I’ll train a couple of models and evaluate accuracy on the validation set.

# Model 1
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Create and fit model
clf = RandomForestClassifier(n_estimators = 50, random_state = 11850)
clf = clf.fit(X_train, y_train)
preds = clf.predict(X_val) # Get predictions

In [74]: round(accuracy_score(y_true = y_val, y_pred = preds), 3)
Out[74]: 0.765

# Model 2
# Create and fit model
clf = RandomForestClassifier(n_estimators = 50, random_state = 2298)
clf = clf.fit(X_train, y_train)
preds = clf.predict(X_val) # Get predictions

In [78]: round(accuracy_score(y_true = y_val, y_pred = preds), 3)
Out[78]: 0.827

I tested 25K random seeds to find these results, but a change in accuracy of >6% is definitely noteworthy! Again, these 2 models are identical except for the random seed.

The plot below shows how model accuracy varied across all of the random seeds I tested.

While most models achieved ~80% accuracy, there are a substantial number of models scoring between 79% and 82%, and a handful of models that score outside of that range. Depending on the specific use case, these differences are large enough to matter. Therefore, model performance variance due to random seed choice should be taken into account when communicating results with stakeholders.

Now that we’ve seen a few areas where the choice of random seed impacts results, I’d like to propose a few best practices.

For data splitting, I believe stratified samples should be used so that the proportions of the dependent variable (Survived in this post) are similar in the training, validation, and test sets. This would eliminate the varying survival distributions above and allows a model to be trained and evaluated on comparable data.

The train_test_split function can implement stratified sampling with 1 additional argument. Note that if a model is later evaluated against data with a different dependent variable distribution, performance may be different than expected. However, I believe stratifying by the dependent variable is still the preferred way to split data.

Here’s how stratified sampling looks in code.

# Overall distribution of "Survived" column
In [19]: train_all.Survived.value_counts() / train_all.shape[0]
Out[19]:
0    0.616162
1    0.383838
Name: Survived, dtype: float64

# Stratified sampling (see last argument)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2, random_state = 20200226, stratify = y)

In [10]: y_train.value_counts() / len(y_train)
Out[10]:
0    0.616573
1    0.383427
Name: Survived, dtype: float64

In [11]: y_val.value_counts() / len(y_val)
Out[11]:
0    0.614525
1    0.385475
Name: Survived, dtype: float64

Using the stratify argument, the proportion of Survived is similar in the training and validation sets. I still use a random seed as I still want reproducible results. However, it’s my opinion that the specific random seed value doesn’t matter in this case.

That addresses data splitting best practices, but how about model training? While testing different model specifications, a random seed should be used for fair comparisons, but I don’t think the particular seed matters too much.

However, before reporting performance metrics to stakeholders, the final model should be trained and evaluated with 2–3 additional seeds to understand possible variance in results. This practice allows more accurate communication of model performance.
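A minimal sketch of that practice (the extra seed values below are arbitrary placeholders, not seeds from the post):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Refit the final model specification under a few additional seeds
scores = []
for seed in [1, 2, 3]: # placeholder seeds; any distinct values work
    clf = RandomForestClassifier(n_estimators = 50, random_state = seed)
    clf = clf.fit(X_train, y_train)
    scores.append(accuracy_score(y_true = y_val, y_pred = clf.predict(X_val)))

print(f"accuracy across seeds: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")

Reporting a mean and a spread, rather than a single number, gives stakeholders a more honest picture of how the model will perform.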
For a critical model running in a production environment, it’s worth considering running that model with multiple seeds and averaging the result (though this is probably a topic for a separate blog post).

Hopefully I’ve convinced you to pay a bit of attention to the often-overlooked random seed parameter. Feel free to get in touch if you’d like to see the full code used in this post or have other ideas for random seed best practices!