Performing Deduplication with Record Linkage and Supervised Learning | by Sue Lynn | Towards Data Science
Most data are recorded manually by humans, often without review or synchronization, and mistakes such as typos inevitably slip in. Think for a second: have you ever filled out the same form twice, but with a slight difference in your address? Consider, for example, two submitted forms whose details actually refer to the same person, "Jane", at the same address. Many organizations deal with data like this: records that are clearly duplicates and represent the same entity, even though the values are not exactly equal. A function such as pandas' "drop_duplicates" therefore cannot identify these records as duplicates, because the words are not an exact match. The solution to this messy data is to perform deduplication with Record Linkage. Record Linkage determines whether records match and represent the same entity (person / company / business) by comparing the records across different sources.

In this article, we will explore how to combine Record Linkage with supervised learning to classify records as duplicate or not duplicate. Below are the topics we will cover:

Table of Contents:
- What is Record Linkage?
- Understand our Data Set
- Applying the Record Linkage Process: (a) Preprocessing, (b) Indexing, (c) Comparison & Similarity, (d) Supervised Learning (Classification)
- Conclusion

Record Linkage refers to the method of identifying and linking records that correspond to the same entity (person, business, product, ...) within one or across several data sources. It searches for possible duplicate records and links them together so that they can be treated as a single record, which also makes it possible to avoid data redundancy.

When unique identifier variables such as identification numbers or hash codes are present in the data sets, linking records for the same entity is simple. However, when unique identifiers are not present, we need to identify good candidate variables that are likely to be duplicated and pair records on them (e.g. state, last name, date of birth, phone number). We will understand more about this in the Indexing step.

We will be using the Python Record Linkage Toolkit library, which provides the tools and functions required for performing record linkage and deduplication. Installation and import of the toolkit are shown below.

For this tutorial, we will use the public data set available under the Python Record Linkage Toolkit that was generated by the Febrl project (Freely Extensible Biomedical Record Linkage). There are four data sets available, but we will use the second one, FEBRL 2. Let's import it from the sub-module recordlinkage.datasets.

The data set is returned as a DataFrame, and we can see that it has a total of 5000 records. According to the Febrl source, the table contains 4000 original records and 1000 duplicates.

Let's get a better understanding of the data types present in our table with the info function: all columns in our data set share the same data type, "object" (i.e. string), and none of them contain null values.
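The code snippets for these setup steps did not survive in this text, so here is a consolidated, illustrative sketch of the install, import, and inspection steps:

# pip install recordlinkage
import recordlinkage
from recordlinkage.datasets import load_febrl2

df = load_febrl2()                     # FEBRL 2 returned as a pandas DataFrame
print('Number of records:', len(df))   # 5000
df.info()                              # every column is dtype "object", no nulls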
Moving on, we should also run a basic check on the statistical summary of our data set with the describe function. From the statistical summary, we can quickly see that the unique counts for surname and given_name are below 5000, which indicates that the same person could have multiple records in the data set with different addresses, street numbers, states, etc.

An example of duplicate records from our data set looks like this: in a sample pair of records known to be duplicates, the differences lie in "surname", "address_2", and "suburb", each off by only a few characters. Our goal is to identify and flag records such as these as duplicates.

Now that we have a basic understanding of our data set, let's understand and apply the Record Linkage process to deduplicate the data set and classify the records correctly.

The first step is preprocessing. This step is important because standardizing the data into the same format increases the chances of identifying duplicates. Depending on the values in the data, pre-processing steps can include:

Lowercase / Uppercase
This is the easiest and most important step of text pre-processing: standardize your text data to all lowercase or all uppercase. In our case, we convert the text in the data set to uppercase.

Stopword removal
Stop words are common words that are removed so that the more informative parts of the text carry more weight. In a complete sentence, stop words are words such as "the", "a", and "and". For company names, stop words could be "Co", "Corp", "Inc", "Company", "Limited", etc. For people's names, they could be "Mr", "Mrs", "Ms", "Sir", etc. For addresses, they could be "Street", "St", "Place", "Rd", "Road", etc. In our data set there are no stop words to remove from the names, but there are stop words we can remove from the address field "address_1".

Postcode clean-up
Postcode cleaning removes symbols that may have been included, such as "-", "+", or blank spaces. (Commonly, this clean-up is done on phone numbers, but since we do not have a phone number in our data set, we apply similar logic to the postcode.) Only numeric values are kept.

Removal of irrelevant symbols
Special symbols do not help in identifying similarities in text and should be cleaned up; here we remove irrelevant symbols from the address field. The four steps are combined in the sketch below.
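The original pre-processing examples are not included in this text; this sketch combines the four steps using pandas string methods. The stop-word list and regex choices are illustrative assumptions, not the author's exact code:

for col in df.columns:
    df[col] = df[col].str.upper()                          # 1. standardize case

stopwords = r'\b(STREET|ST|PLACE|PL|ROAD|RD)\b'            # 2. assumed address stop words
df['address_1'] = df['address_1'].str.replace(stopwords, '', regex=True).str.strip()

df['postcode'] = df['postcode'].str.replace(r'[^0-9]', '', regex=True)        # 3. digits only

df['address_2'] = df['address_2'].str.replace(r'[^A-Z0-9 ]', '', regex=True)  # 4. drop symbols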
Now that our data set has been pre-processed and can be considered clean, we need to create pairs of records (also known as candidate links). Record pairs are created, and their similarities are calculated, to determine whether each pair of records is a match, i.e. a duplicate. The Python Record Linkage Toolkit provides indexing modules that simplify the creation of record pairs. Several indexing techniques are available for record linkage, such as:

Full Index
A Full Index is built from all possible combinations of record pairs in the data set. A Full Index carries a data-volume risk, as the number of pairs grows quadratically with the number of records. For example, from our data set of 5000 records, a Full Index creates a total of 12,497,500 pairs.

Blocking
Indexing by blocking is a good alternative to the Full Index: record pairs are produced only within the same block (records sharing a common value). By blocking on a particular column, the number of record pairs can be greatly reduced. For example, by blocking on the column "state", only records from the same state are paired with each other, and a total of 2,768,103 pairs are created, which is far fewer than with the Full Index. However, note that fewer record pairs is not always better: actual matches can be missed if two records are duplicates but one contains a typo in the "state" value.

Sorted Neighbourhood
Indexing by Sorted Neighbourhood is another alternative that pairs records with nearby values; for example, two records can be paired up because of the similarity in the column "surname", such as Laundon and Lanyon. A total of 75,034 pairs are created with the Sorted Neighbourhood index, fewer than with both the Full Index and the Block Index (although this depends on the contents of the selected column).

In this tutorial, we will index our data set with a combination of two approaches: indexing by Blocking and indexing by Sorted Neighbourhood. Why use more than one indexing approach? A Full Index would provide every possible record pair but would cause a huge growth in the total number of pairs. Indexing by Blocking or by Sorted Neighbourhood resolves that growth, but either approach on its own risks missing actual matches. By combining both approaches, we reduce the possibility of missing actual matches while still keeping far fewer pairs than a Full Index. The sketch below appends the record pairs created by Blocking and Sorted Neighbourhood.

Once the record pairs are generated, we compare the records in each pair to build a comparison vector of similarity scores. For example, when similarity is computed on the column "given_name", the pairing of "rec-712-dup-0" and "rec-2778-org" receives a low similarity score of 0.46667 on given_name.

Comparison can be performed with many different methods that compute similarity between strings, numeric values, or dates. In our scenario, where we calculate similarity scores for string values, we can use the following algorithms:

Jaro-Winkler
Levenshtein
Longest Common Substring (LCS)
Jaccard

Let's proceed to compute the similarity score for the different columns in our data set. The similarity functions used in this example are Jaro-Winkler and Levenshtein. The Jaro-Winkler similarity score gives more weight to the beginning of the string, so it is used for features such as name, address, and state. The Levenshtein similarity score gives more weight to the order of the characters, so it is used for features such as street number and postcode. (Many other similarity functions can also be explored, such as "cosine", "damerau_levenshtein", etc.)
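A sketch of the indexing and comparison steps with the toolkit's Index and Compare classes; the window size and the exact column-to-algorithm mapping are assumptions based on the description above:

indexer = recordlinkage.Index()
indexer.block('state')                              # pairs that share the same state
indexer.sortedneighbourhood('surname', window=9)    # pairs with nearby surname values
candidate_links = indexer.index(df)                 # union of both sets of pairs

compare = recordlinkage.Compare()
compare.string('given_name', 'given_name', method='jarowinkler', label='given_name')
compare.string('surname', 'surname', method='jarowinkler', label='surname')
compare.string('address_1', 'address_1', method='jarowinkler', label='address_1')
compare.string('state', 'state', method='jarowinkler', label='state')
compare.string('street_number', 'street_number', method='levenshtein', label='street_number')
compare.string('postcode', 'postcode', method='levenshtein', label='postcode')
features = compare.compute(candidate_links, df)     # one similarity vector per pair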
Now that we have our similarity features created, we can proceed to the next step: building a supervised learning model. In this section, we will train a model to classify pairs as duplicates or non-duplicates based on the data set provided. Before we can train the model, we need a "label" column (the target variable) in our data set so the model knows which pairs are duplicates and which are not. When loading the data, specifying "return_links=True" returns the known duplicate record pairs:

df, links = load_febrl2(return_links=True)

We can also compute a comparison vector for the true duplicate pairs, to get an overall view of how high their similarity scores are, and convert this set of pairs into a DataFrame for the next step:

duplicate_pairs_vectors = compare.compute(links, df)

From the vector output, we can observe that duplicate pairs tend to have a high similarity score for most of the features. The following steps are some ETL processing to create the column "label" on our data set: if a pairing is found in the data set "duplicate_pairs", it is labeled "1", otherwise "0" (duplicate = 1, not duplicate = 0). After labeling the data set, notice that there are 1,901 pairs of duplicates and 2,824,073 pairs of non-duplicates, which also indicates that many indexed pairings are in fact unique.

With a labeled data set, we can begin training a supervised learning model to classify the pairs as "duplicate" or "not duplicate". In this example, I train an XGBoost model to perform the classification, after importing the model libraries and splitting the data set into train and test sets. Looking at the test set distribution, there are 760 duplicate pairs for the model to predict. Next, we train the XGBoost model and apply it to the test set to classify pairs as "duplicate" or "not duplicate", then view the pairs that the model classified as duplicates (prediction = 1).

Next, we cherry-pick the first two predicted pairs and inspect the actual records to see where they differ. In the first pairing, the difference lies in both address fields, whereas in the second pairing the difference lies in the street number and the address. The model is able to classify the records correctly even though their values differ. Congrats! We have completed building a model to identify duplicates in our data set. A consolidated sketch of these steps follows.
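The labeling and modeling code is not included in this text; below is an illustrative sketch. The split ratio and hyper-parameters are assumptions, and the labeling line assumes the (left, right) pair ordering in "links" matches the candidate pairs:

from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Label each candidate pair: 1 if it is a known duplicate pair, else 0
features['label'] = features.index.isin(links).astype(int)

X = features.drop(columns='label')
y = features['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = XGBClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)            # 1 = duplicate, 0 = not duplicate

duplicates_found = X_test[predictions == 1]    # pairs the model flags as duplicates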
In this article, we have learned how to combine record linkage with supervised learning to perform deduplication: records are indexed into pairs, the pairs are compared to calculate similarity scores, and a model is trained on those scores. Do take note, however, that this is an exercise for understanding the deduplication process, and the values in this data set are simple. Real-world data are often messier and more complicated than this example.

Thanks for reading my article, and if you enjoyed it and would like to support me: Follow me on Medium 🙆🏻 Become a member on Medium through my referral link 🙋
How to implement Polymorphism in JavaScript?
Polymorphism is one of the tenets of Object Oriented Programming (OOP). It helps to design objects in such a way that they can share or override behavior depending on the specific object provided. Polymorphism takes advantage of inheritance in order to make this happen.

In the following example, the child objects 'cricket' and 'tennis' override the 'select' method inherited from the parent object 'game' and each return a new string, as shown in the output. Another child object, 'football', instead of overriding the select method, shares (inherits) the method and displays the parent string, as shown in the output.

<html>
<body>
<script>
   var game = function () {}
   game.prototype.select = function() {
      return "i love games and sports"
   }
   var cricket = function() {}
   cricket.prototype = Object.create(game.prototype);
   cricket.prototype.select = function() {   // overrides select to return a new string
      return "i love cricket"
   }
   var tennis = function() {}
   tennis.prototype = Object.create(game.prototype);
   tennis.prototype.select = function() {    // overrides select to return a new string
      return "i love tennis"
   }
   var football = function() {}
   football.prototype = Object.create(game.prototype);   // shares the parent's select
   var games = [new game(), new cricket(), new tennis(), new football()];
   games.forEach(function(game){
      document.write(game.select());
      document.write("<br/>");
   });
</script>
</body>
</html>

i love games and sports
i love cricket
i love tennis
i love games and sports
Time-Series Forecasting: Predicting Stock Prices Using An LSTM Model | by Serafeim Loukas | Towards Data Science
Note from Towards Data Science's editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author's contribution. You should not rely on an author's works without seeking professional advice. See our Reader Terms for details.

Traditionally, most machine learning (ML) models use some observations (samples/examples) as input features, but there is no time dimension in the data. Time-series forecasting models are models capable of predicting future values based on previously observed values. Time-series forecasting is widely used for non-stationary data. Data are called non-stationary when their statistical properties, e.g. the mean and standard deviation, are not constant over time but instead vary over time.

These non-stationary input data (used as input to these models) are usually called time-series. Some examples of time-series include temperature values over time, stock price over time, the price of a house over time, etc. So, the input is a signal (time-series) defined by observations taken sequentially in time.

A time series is a sequence of observations taken sequentially in time.

Observation: time-series data is recorded on a discrete time scale.

Disclaimer (before we move on): There have been attempts to predict stock prices using time-series analysis algorithms, though they still cannot be used to place bets in the real market. This is just a tutorial article that does not intend in any way to "direct" people into buying stocks.

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (e.g. images), but also entire sequences of data (such as speech or video inputs).

LSTM models are able to store information over a period of time. In other words, they have a memory capacity. Remember that LSTM stands for Long Short-Term Memory model.

This characteristic is extremely useful when we deal with time-series or sequential data. When using an LSTM model, we are free to decide what information will be stored and what will be discarded. We do that using the "gates". A deep understanding of the LSTM is outside the scope of this post, but if you are interested in learning more, have a look at the references at the end of this post.

Thanks to Yahoo Finance, we can get the data for free. Use the following link to get the stock price history of TESLA: https://finance.yahoo.com/quote/TSLA/history?period1=1436486400&period2=1594339200&interval=1d&filter=history&frequency=1d

You should see the historical price table. Click on "Download" and save the .csv file locally on your computer. The data are from 2015 until now (2020)!

Modules needed: Keras, Tensorflow, Pandas, Scikit-Learn & Numpy

We are going to build a multi-layer LSTM recurrent neural network to predict the last value of a sequence of values, i.e. the TESLA stock price in this example.
Let's load the data and inspect them:

import math
import matplotlib.pyplot as plt
import keras
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping

df = pd.read_csv("TSLA.csv")
print('Number of rows and columns:', df.shape)
df.head(5)

The next step is to split the data into training and test sets, to avoid overfitting and to be able to investigate the generalization ability of our model. The target value to be predicted is the "Close" stock price value.

training_set = df.iloc[:800, 1:2].values
test_set = df.iloc[800:, 1:2].values

It's a good idea to normalize the data before model fitting; this will boost the performance. Let's build the input features with a time lag of 1 day (lag 1):

# Feature Scaling
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)

# Creating a data structure with 60 time-steps and 1 output
X_train = []
y_train = []
for i in range(60, 800):
    X_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
# (740, 60, 1)

We have now reshaped the data into the format (#samples, #time-steps, #features), with one feature per time step.

Now, it's time to build the model. We will build the LSTM with 50 neurons and 4 hidden layers. Finally, we will assign 1 neuron in the output layer for predicting the normalized stock price. We will use the MSE loss function and the Adam stochastic gradient descent optimizer.

Note: the following will take some time (~5 min).
model = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50))
model.add(Dropout(0.2))
# Adding the output layer
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
model.fit(X_train, y_train, epochs = 100, batch_size = 32)

When the fitting is finished, you should see the per-epoch training loss in the output.

Prepare the test data (reshape them):

# Getting the predicted stock price
dataset_train = df.iloc[:800, 1:2]
dataset_test = df.iloc[800:, 1:2]
dataset_total = pd.concat((dataset_train, dataset_test), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1, 1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 519):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
print(X_test.shape)
# (459, 60, 1)

Make predictions using the test set:

predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)

Let's visualize the results now:

# Visualising the results
# (note: the x and y arguments must have matching lengths; slice the 'Date'
#  column to len(predicted_stock_price) if needed)
plt.plot(df.loc[800:, 'Date'], dataset_test.values, color = 'red', label = 'Real TESLA Stock Price')
plt.plot(df.loc[800:, 'Date'], predicted_stock_price, color = 'blue', label = 'Predicted TESLA Stock Price')
plt.xticks(np.arange(0, 459, 50))
plt.title('TESLA Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('TESLA Stock Price')
plt.legend()
plt.show()

Using a lag of 1 (i.e. a step of one day):

Observation: huge drop in March 2020 due to the COVID-19 lockdown!

We can clearly see that our model performed very well. It is able to accurately follow most of the unexpected jumps/drops; however, for the most recent dates, the model predicted lower values than the real stock prices.

The initial lag selected in this article was 1, i.e. a step of 1 day. This can easily be changed by altering the code that builds the 3D inputs. Example: one can change the following 2 blocks of code:

X_train = []
y_train = []
for i in range(60, 800):
    X_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])

and

X_test = []
for i in range(60, 519):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))

with the following new code:

X_train = []
y_train = []
for i in range(60, 800):
    X_train.append(training_set_scaled[i-50:i, 0])
    y_train.append(training_set_scaled[i, 0])

and

X_test = []
for i in range(60, 519):
    X_test.append(inputs[i-50:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))

In that case the results look like this:
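Before wrapping up, note that the EarlyStopping callback and the error metrics imported at the top of the script are never actually used above. A minimal sketch of wiring them in; the patience value and the length alignment are illustrative assumptions:

# Stop training once the loss stops improving, instead of always running 100 epochs
early_stop = EarlyStopping(monitor = 'loss', patience = 10, restore_best_weights = True)
model.fit(X_train, y_train, epochs = 100, batch_size = 32, callbacks = [early_stop])

# Quantify the prediction error against the real prices, aligned to the predicted length
real_prices = dataset_test.values[:len(predicted_stock_price)]
rmse = math.sqrt(mean_squared_error(real_prices, predicted_stock_price))
mae = mean_absolute_error(real_prices, predicted_stock_price)
print('RMSE:', rmse, 'MAE:', mae)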
That's all folks! Hope you liked this article!

References:
[1] https://colah.github.io/posts/2015-08-Understanding-LSTMs/
[2] https://en.wikipedia.org/wiki/Long_short-term_memory

If you liked and found this article useful, follow me to be able to see all my new posts. Questions? Post them as a comment and I will reply as soon as possible.

LinkedIn: https://www.linkedin.com/in/serafeim-loukas/
ResearchGate: https://www.researchgate.net/profile/Serafeim_Loukas
EPFL profile: https://people.epfl.ch/serafeim.loukas
Stack Overflow: https://stackoverflow.com/users/5025009/seralouk
MariaDB - SQL Injection Protection
The simple act of accepting user input opens the door to exploits. The problem stems primarily from the logical management of data, but luckily, it is fairly easy to avoid these major flaws.

Opportunities for SQL injection typically arise when a user enters data such as a name and the code logic fails to analyze this input. The code instead allows an attacker to insert a MariaDB statement, which will run on the database.

Always consider data entered by users suspect and in need of strong validation prior to any processing. Perform this validation through pattern matching. For example, if the expected input is a username, restrict the entered characters to alphanumerics and underscores, and to a certain length. Review the example given below −

if (preg_match("/^\w{8,20}$/", $_GET['user_name'], $matches)) {
   $result = mysql_query("SELECT * FROM system_users WHERE user_name = $matches[0]");
} else {
   echo "Invalid username";
}

Also, utilize the REGEXP operator and LIKE clauses in creating input constraints.

Consider all types of necessary explicit control of input, such as −

Control the escape characters used.

Control the specific appropriate data types for input. Limit input to the necessary data type and size.

Control the syntax of entered data. Do not allow anything outside of the needed pattern.

Control the terms permitted. Blacklist SQL keywords.

You may not know the dangers of injection attacks, or may consider them insignificant, but they top the list of security concerns. Furthermore, consider the effect of these two entries −

1=1
-or-
*

Code allowing either of those to be entered along with the right command may result in revealing all user data on the database or deleting all data on the database, and neither injection is particularly clever. In some cases, attackers do not even spend time examining holes; they perform blind attacks with simple input.

Also, consider the pattern matching and regular expression tools provided by any programming/scripting language paired with MariaDB; these provide more control, and sometimes better control.
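Beyond input validation, parameterized queries keep user input out of the SQL text entirely. A minimal sketch in Python, assuming the mysql-connector-python driver (which also works with MariaDB); the connection details and the username value are placeholders:

import re
import mysql.connector

user_name = "jane_doe_1988"   # imagine this value came straight from a web form

# Validate the pattern first, exactly as described above
if not re.fullmatch(r"\w{8,20}", user_name):
    raise ValueError("Invalid username")

conn = mysql.connector.connect(host="localhost", user="app_user",
                               password="app_password", database="app_db")
cur = conn.cursor()

# The driver fills in the %s placeholder, so the value is never spliced
# into the SQL text and cannot change the statement's structure
cur.execute("SELECT * FROM system_users WHERE user_name = %s", (user_name,))
rows = cur.fetchall()
cur.close()
conn.close()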
Using ControlAggregation in SAPUI5
“controlAggregation” refers to the aggregation of the target control to which the mapped view is added, as specified in the use case below:

"routing": {
   "config": {
      "routerClass": "sap.m.routing.Router",
      "viewType": "XML",
      "viewPath": "sap.ui.demo.nav.view",
      "controlId": "app",
      "controlAggregation": "dummy",
      "transition": "slide",
      "bypassed": {
         "target": "NA"
      }
   }
}

The views are defined as follows:

<mvc:View
   controllerName="sap.ui.demo.nav.controller.App"
   xmlns="sap.m"
   xmlns:mvc="sap.ui.core.mvc"
   displayBlock="true">
   <App id="sampleApp"/>
</mvc:View>

So here “controlAggregation” is named 'dummy', while the root control is a sap.m.App with the id 'sampleApp'; the target control is therefore the app, and the aggregation into which its views are placed is the one named 'dummy'.

Let us take an example here:

routes: [{
   pattern: "employee/{id}",
   name: "employee",
   target: "employee"
},
{
   pattern: "department/{id}",
   name: "department",
   target: "department"
}],
targets: {
   employee: {
      viewName: "Employee",
      controlAggregation: "masterPage"
   },
   department: {
      viewName: "Department",
      controlAggregation: "contentPage"
   }
}

So when a user navigates to employee/3, the routing engine finds that 'employee' is the target for this pattern. Then it tries to find the name of the view for that target, i.e. 'Employee'. After this, it will determine the control aggregation for this view, if any. Here we have it to be "masterPage". Now the view engine will render the view inside the "masterPage" aggregation.
postqueue - Unix, Linux Command
postqueue [-v] [-c config_dir] -f

postqueue [-v] [-c config_dir] -p

postqueue [-v] [-c config_dir] -s site

The following options are recognized:

-f
This option implements the traditional "sendmail -q" command, by contacting the Postfix qmgr(8) daemon.

Warning: flushing undeliverable mail frequently will result in poor delivery performance of all other mail.

-p
Each queue entry shows the queue file ID, message size, arrival time, sender, and the recipients that still need to be delivered. If mail could not be delivered upon the last attempt, the reason for failure is shown. The queue ID string is followed by an optional status character: * for a message in the active queue (selected for delivery), or ! for a message in the hold queue.

-s site
This option implements the traditional "sendmail -qRsite" command, by contacting the Postfix flush(8) daemon.

/var/spool/postfix, mail queue

qmgr(8), queue manager
showq(8), list mail queue
flush(8), fast flush service
sendmail(1), Sendmail-compatible user interface
postsuper(1), privileged queue operations

ETRN_README, Postfix ETRN howto

Wietse Venema
IBM T.J. Watson Research
P.O. Box 704
Yorktown Heights, NY 10598, USA
Java Create and Write To Files
To create a file in Java, you can use the createNewFile() method. This method returns a boolean value: true if the file was successfully created, and false if the file already exists. Note that the method is enclosed in a try...catch block. This is necessary because it throws an IOException if an error occurs (if the file cannot be created for some reason):

import java.io.File;  // Import the File class
import java.io.IOException;  // Import the IOException class to handle errors

public class CreateFile {
  public static void main(String[] args) {
    try {
      File myObj = new File("filename.txt");
      if (myObj.createNewFile()) {
        System.out.println("File created: " + myObj.getName());
      } else {
        System.out.println("File already exists.");
      }
    } catch (IOException e) {
      System.out.println("An error occurred.");
      e.printStackTrace();
    }
  }
}

The output will be:

File created: filename.txt

To create a file in a specific directory (requires permission), specify the path of the file and use double backslashes to escape the "\" character (for Windows). On Mac and Linux you can just write the path, like: /Users/name/filename.txt

File myObj = new File("C:\\Users\\MyName\\filename.txt");

In the following example, we use the FileWriter class together with its write() method to write some text to the file we created in the example above. Note that when you are done writing to the file, you should close it with the close() method:

import java.io.FileWriter;  // Import the FileWriter class
import java.io.IOException;  // Import the IOException class to handle errors

public class WriteToFile {
  public static void main(String[] args) {
    try {
      FileWriter myWriter = new FileWriter("filename.txt");
      myWriter.write("Files in Java might be tricky, but it is fun enough!");
      myWriter.close();
      System.out.println("Successfully wrote to the file.");
    } catch (IOException e) {
      System.out.println("An error occurred.");
      e.printStackTrace();
    }
  }
}

The output will be:

Successfully wrote to the file.

To read the file above, go to the Java Read Files chapter.
Count Palindrome Sub-Strings of a String | Practice | GeeksforGeeks
Given a string, the task is to count all palindromic sub-strings present in it. The length of a palindrome sub-string must be greater than or equal to 2.

Example
Input
N = 5
str = "abaab"
Output
3
Explanation:
All palindrome substrings are: "aba", "aa", "baab"

Example
Input
N = 7
str = "abbaeae"
Output
4
Explanation:
All palindrome substrings are: "bb", "abba", "aea", "eae"

Expected Time Complexity: O(|S|^2)
Expected Auxiliary Space: O(|S|^2)

Constraints:
2 <= |S| <= 500

0 · rohit25082000 · 1 week ago

int CountPS(char S[], int N)
{
    int dp[N][N];
    memset(dp, 0, sizeof(dp));
    int count = 0;
    for(int g = 0; g < N; ++g) {
        for(int i = 0, j = g; j < N; ++i, ++j) {
            int curr_length = j - i + 1;
            if(curr_length == 1) {
                dp[i][j] = 1;
            }
            else if(curr_length == 2 && S[i] == S[j]) {
                dp[i][j] = 1;
            }
            else if(dp[i+1][j-1] && S[i] == S[j]) {
                dp[i][j] = 1;
            }
            if(dp[i][j]) {
                count++;
            }
        }
    }
    return count - N;
}

0 · gunjangoyal282 · 1 month ago

int CountPS(char S[], int N)
{
    //code here
    int dp[N][N], ctr = 0;
    memset(dp, 0, sizeof(dp));
    for(int gap = 0; gap < N; gap++){
        for(int i = 0, j = gap; j < N; i++, j++){
            if(gap == 0){
                dp[i][j] = 1;
            }
            else if(gap == 1){
                if(S[i] == S[j]){
                    dp[i][j] = 1;
                }
                else{
                    dp[i][j] = 0;
                }
            }
            else if(S[i] == S[j] and dp[i+1][j-1] == 1){
                dp[i][j] = 1;
            }
            else{
                dp[i][j] = 0;
            }
            if(dp[i][j]){
                ctr++;
            }
        }
    }
    return ctr - N;
}

0 · mashhadihossain · 1 month ago

SIMPLE JAVA SOLUTION

class Solution{
    static boolean isPalindrome(String s)
    {
        if(s.length() > 1)
        {
            int i = 0;
            int j = s.length() - 1;
            while(i < j)
            {
                if(s.charAt(i) != s.charAt(j))
                {
                    return false;
                }
                i++;
                j--;
            }
            return true;
        }
        else
        {
            return false;
        }
    }
    public int CountPS(String S, int N)
    {
        int count = 0;
        for(int i = 0; i < N; i++)
        {
            for(int j = i + 1; j <= N; j++)
            {
                if(isPalindrome(S.substring(i, j)))
                {
                    count++;
                }
            }
        }
        return count;
    }
}

0 · mashhadihossain · 1 month ago

SIMPLE JAVA SOLUTION

class Solution{
    static boolean isPalindrome(String s)
    {
        if(s.length() > 1)
        {
            int i = 0;
            int j = s.length() - 1;
            while(i < j)
            {
                if(s.charAt(i) != s.charAt(j))
                {
                    return false;
                }
                i++;
                j--;
            }
            return true;
        }
        else
        {
            return false;
        }
    }
    public int CountPS(String S, int N)
    {
        ArrayList<String> list = new ArrayList<String>();
        for(int i = 0; i < N; i++)
        {
            for(int j = i + 1; j <= N; j++)
            {
                if(isPalindrome(S.substring(i, j)))
                {
                    list.add(S.substring(i, j));
                }
            }
        }
        return list.size();
    }
}

+1 · neeramrutia · 2 months ago

public int CountPS(String S, int N)
{
    int count = 0;
    for(int i = 0; i < N; i++) {
        count = count + check(S, i, i);
        count = count + check(S, i, i + 1);
    }
    return count;
}
public int check(String S, int i, int j)
{
    int count = 0;
    while(i >= 0 && j < S.length() && S.charAt(i) == S.charAt(j)) {
        if(j - i >= 1)
            count++;
        i--;
        j++;
    }
    return count;
}

+1 · neeramrutia · 2 months ago

int i, j, count = 0;
for(i = 0; i <= N - 2; i++) {
    for(j = i + 2; j <= N; j++) {
        String s1 = S.substring(i, j);
        StringBuffer s2 = new StringBuffer(s1);
        s2.reverse();
        String s3 = s2.toString();
        if(s1.equals(s3)) {
            count++;
        }
    }
}
return count;

0 · ankitparashxr · 3 months ago

java

public static boolean check(String str)
{
    if(str.length() < 2) {
        return false;
    }
    int i = 0;
    int j = str.length() - 1;
    while(i < j) {
        if(str.charAt(i) != str.charAt(j)) {
            return false;
        }
        i++;
        j--;
    }
    return true;
}
public int CountPS(String s, int n)
{
    int count = 0;
    for(int i = 0; i < s.length(); i++) {
        for(int j = i; j < s.length(); j++) {
            if(check(s.substring(i, j + 1))) {
                count++;
            }
        }
    }
    return count;
}

+3 · shashikantsolanki042 · 3 months ago

int CountPS(char S[], int N)
{
    //code here
    int dp[N][N], ctr = 0;
    memset(dp, 0, sizeof(dp));
    for(int gap = 0; gap < N; gap++){
        for(int i = 0, j = gap; j < N; i++, j++){
            if(gap == 0){
                dp[i][j] = 1;
            }
            else if(gap == 1){
                if(S[i] == S[j]){
                    dp[i][j] = 1;
                    ctr += 1;
                }
                else{
                    dp[i][j] = 0;
                }
            }
            else{
                if(S[i] == S[j] and dp[i+1][j-1] == 1){
                    dp[i][j] = 1;
                    ctr += 1;
                }
                else{
                    dp[i][j] = 0;
                }
            }
        }
    }
    return ctr;
}

0 · gurucharanchouhan17 · 3 months ago

Simple solution in Java:

class Solution{
    public int CountPS(String s, int N)
    {
        int c = 0;
        for(int i = 0; i < s.length(); i++)
        {
            for(int j = i; j < s.length(); j++)
            {
                String g = s.substring(i, j + 1);
                if(g.length() >= 2)
                {
                    if(check(g))
                    {
                        c++;
                    }
                    else
                    {
                        continue;
                    }
                }
                else
                {
                    continue;
                }
            }
        }
        return c;
    }
    public static boolean check(String s)
    {
        int i = 0;
        int j = s.length() - 1;
        while(i <= j)
        {
            char ch = s.charAt(i);
            char lc = s.charAt(j);
            if(ch != lc)
            {
                return false;
            }
            i++;
            j--;
        }
        return true;
    }
}

+1 · vg71 · 3 months ago

int CountPS(char S[], int N){
    int count = 0;
    //code here
    // odd-length palindromes: expand around each single center
    for(int i = 0; i < N; i++){
        int mid = i;
        int left = i - 1;
        int right = i + 1;
        while(left >= 0 and right < N){
            if(S[left] == S[right]){
                count++;
                left--;
                right++;
            }
            else{
                break;
            }
        }
    }
    // even-length palindromes: expand around each pair of centers
    for(int i = 0; i < N; i++){
        int mid1 = i;
        int mid2 = i + 1;
        if(S[mid1] == S[mid2]){
            count++;
            // cout<<mid1<<" "<<mid2<<endl;
        }
        int left = mid1 - 1;
        int right = mid2 + 1;
        while(left >= 0 and right < N and S[mid1] == S[mid2]){
            if(S[left] == S[right]){
                count++;
                // cout<<left<<" "<<right<<endl;
                left--;
                right++;
            }
            else{
                break;
            }
        }
    }
    return count;
}
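For reference, the same gap-based O(N^2) dynamic programming used in several of the submissions above, written as a compact Python sketch (not tied to any particular submission): dp[i][j] records whether S[i..j] is a palindrome, and only substrings of length 2 or more are counted.

def count_ps(s):
    n = len(s)
    dp = [[False] * n for _ in range(n)]
    count = 0
    for gap in range(n):               # gap = substring length - 1
        for i in range(n - gap):
            j = i + gap
            if gap == 0:
                dp[i][j] = True        # single characters
            elif gap == 1:
                dp[i][j] = s[i] == s[j]
            else:
                dp[i][j] = s[i] == s[j] and dp[i + 1][j - 1]
            if dp[i][j] and gap >= 1:  # count only length >= 2
                count += 1
    return count

print(count_ps("abaab"))    # 3
print(count_ps("abbaeae"))  # 4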
How To Tune HDBSCAN | by Charles Frenzel | Towards Data Science
Clustering is a very hard problem because there is never truly a 'right' answer when labels do not exist.

This is compounded by techniques with various assumptions in place. If a technique is run incorrectly, violating an assumption, this leads to incorrect (dead wrong) results.

In this blogpost, we will delve a bit into why clustering gets complicated, and then take a deep dive into how to properly tune density-based clusters in HDBSCAN, making use of the Amazon DenseClus library too.

There is No Free Lunch for clustering algorithms, and while one algorithm might fit a certain dataset well, there are no guarantees that it will work on a different dataset in the exact same manner. Likewise, clustering is "strongly dependent on contexts, aims and decisions of the researcher", which adds fire to the argument that there is no such thing as a "universally optimal method that will just produce natural clusters", as noted by Henning in What Are True Clusters? Henning 2015.

For example, commonly used techniques such as KMeans assume that data is numerical and sphere-shaped. Those types of assumptions do not fare well when the data has high dimensionality and includes categorical values.

Clustering data in violation of assumptions causes a conundrum for the practitioner in two ways:

How to formalize a specific featurization scheme?

What clustering technique to choose?

Both of these must be formulated so that no assumptions are violated. In practice, this can lead to a process of elimination whereby the algorithm and featurization scheme that don't violate an algorithm's assumptions are the only choice standing.

When no labels are available, it's common to pick an objective metric such as Silhouette Score to evaluate and then decide on the final clustering result. Silhouette Score measures cluster cohesiveness and separation with an index between -1 and 1. It does NOT take noise into account in the index calculation and makes use of distances. Distance is not applicable for a density-based technique, and not including noise in the objective metric calculation violates an inherent assumption in density-based clustering.

This means that Silhouette Score and similar indexes like it are inappropriate for measuring density-based techniques!!! (my own emphasis added because I've seen multiple blogs on here doing it; this is dangerous.)

Density Based Clustering Validation, or DBCV, works for density-based clustering algorithms precisely because it takes noise into account and captures the shape property of clusters via densities and not distances (see the original paper).

As the paper explains, the final result of DBCV is a weighted sum of "Validity Index" values of clusters. This produces a score between -1 and 1, with larger values indicating a better clustering solution.

An in-depth discussion is out of scope here, but please see the original paper for more details.

Note that DBCV does have drawbacks. Like all other metrics and techniques, DBCV is not immune from the problems of complication and measurement in clustering as noted earlier.

However, outside of having ground-truth labels, it provides an objective criterion from which to judge how well-separated density-based clusters are.
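Concretely, the hdbscan package exposes a DBCV implementation as hdbscan.validity.validity_index (and, on a clusterer fitted with gen_min_span_tree=True, as the relative_validity_ attribute). The sketch below is a minimal illustration, assuming X is some (n_samples, n_features) numeric array you have already prepared; casting to float64 avoids dtype issues in some versions of the library:

import numpy as np
import hdbscan

clusterer = hdbscan.HDBSCAN(min_cluster_size=100).fit(X)
score = hdbscan.validity.validity_index(
    X.astype(np.float64),  # DBCV is computed on the raw features
    clusterer.labels_      # cluster labels; -1 marks noise points
)
print(f"DBCV: {score:.3f}")  # closer to 1 means better-separated clusters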
Enough of that, let's dive into a real example. The notebook is available within the Amazon DenseClus library.

In this example, you will use a synthetic churn dataset for an imaginary telecommunications company, with the outcome Churn? flagged as either True (churned) or False (did not churn). Features include customer details such as plan and usage information. The churn dataset is publicly available and mentioned in the book Discovering Knowledge in Data by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets.

The data includes both numeric and categorical features, and we will use DenseClus to transform it into a lower-dimensional, dense space to form clusters on. For more on DenseClus see here. All of the needed transformations are taken care of under the hood. You just get to call fit.

# This runs in about a minute or two
from denseclus import DenseClus
import logging  # to further silence deprecation warnings

logging.captureWarnings(True)

clf = DenseClus(
    random_state=SEED,
    umap_combine_method="intersection_union_mapper"
)
clf.fit(df)

Under the hood, among other steps, DenseClus uses HDBSCAN to cluster the data. Let's look at how the data got split.

embedding = clf.mapper_.embedding_
labels = clf.score()
clustered = (labels >= 0)

cnts = pd.DataFrame(labels)[0].value_counts()
cnts = cnts.reset_index()
cnts.columns = ['cluster', 'count']
print(cnts.sort_values(['cluster']))

   cluster  count
4       -1      9
3        0   1234
0        1   1265
1        2   1253
2        3   1239

Upon examination there are exactly 4 almost evenly distributed clusters, with -1 representing the noise found in the data.

In addition to simply looking at their spread, another way to evaluate clusters is to visualize them.

_=sns.jointplot(
    x=embedding[clustered, 0],
    y=embedding[clustered, 1],
    hue=labels[clustered],
    kind="kde")

As you can see, we have 4 distinct islands formed within this slice of the data. Clusters have formed around these densities, which is exactly the behavior we expect DenseClus to exhibit.

You can further confirm the outcome by plotting the tree along which the densities were split. This is a graphical view of the counts we saw, with more information. For example, you can see that a two-cluster solution is also possible, as two densities represent the base split for the clusters.

_=clf.hdbscan_.condensed_tree_.plot(
    select_clusters=True,
    selection_palette=sns.color_palette("deep", np.unique(labels).shape[0]),
)

Lastly, let's confirm that the majority of data points are covered by our clusters (hint: only 9 aren't) and check the DBCV score.

coverage = np.sum(clustered) / embedding.shape[0]
print(f"Coverage {coverage}")
print(f"DBCV score {clf.hdbscan_.relative_validity_}")

Coverage 0.9982
DBCV score 0.2811143727637039

The DBCV comes out to 0.28 on a scale of -1 to 1. That's not great, but it could be worse. Let's optimize the score to find the best HDBSCAN hyperparameters to pass.

The two primary hyperparameters to look at to further improve results are min_samples and min_cluster_size, as noted in the HDBSCAN documentation. You will run multiple combinations of these to find a result that generates a high DBCV score.

In addition to looking at these hyperparameters, you will also look at cluster selection methods: Excess of Mass (eom) and splitting clusters along the tree with leaf (for details see hdbscan: Hierarchical density based clustering In, McInnes, J. Healy, S. Astels 2017).

As HDBSCAN's documentation notes, whereas the eom method only extracts the most stable, condensed clusters from the tree, the leaf method selects clusters from the bottom of the leaf nodes as well. This results in smaller, more homogeneous clusters that are more likely to be fine-grained.
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import make_scorer
import hdbscan

logging.captureWarnings(True)

hdb = hdbscan.HDBSCAN(gen_min_span_tree=True).fit(embedding)

# specify parameters and distributions to sample from
param_dist = {'min_samples': [10, 30, 50, 60, 100],
              'min_cluster_size': [100, 200, 300, 400, 500, 600],
              'cluster_selection_method': ['eom', 'leaf'],
              'metric': ['euclidean', 'manhattan']
             }

validity_scorer = make_scorer(hdbscan.validity.validity_index, greater_is_better=True)

n_iter_search = 20
random_search = RandomizedSearchCV(hdb,
                                   param_distributions=param_dist,
                                   n_iter=n_iter_search,
                                   scoring=validity_scorer,
                                   random_state=SEED)
random_search.fit(embedding)

print(f"Best Parameters {random_search.best_params_}")
print(f"DBCV score :{random_search.best_estimator_.relative_validity_}")

Best Parameters {'min_samples': 100, 'min_cluster_size': 300, 'metric': 'manhattan', 'cluster_selection_method': 'eom'}
DBCV score :0.48886415007392386

The DBCV score has now risen from 0.28 to 0.488.

DenseClus defaults min_samples to 15 and min_cluster_size to 100. The Random Search result has clusters that are larger and more restrictive, which results in a higher density and a higher score :) City-block (Manhattan) distance appears to aid the increase too.

In practice we would want a score over 0.45 to make sure that clusters are well-separated, and this score shows that.

Let's confirm this by looking at how the clusters were split and visualizing the results again.

# evaluate the clusters
labels = random_search.best_estimator_.labels_
clustered = (labels >= 0)

coverage = np.sum(clustered) / embedding.shape[0]
total_clusters = np.max(labels) + 1
cluster_sizes = np.bincount(labels[clustered]).tolist()

print(f"Percent of data retained: {coverage}")
print(f"Total Clusters found: {total_clusters}")
print(f"Cluster splits: {cluster_sizes}")

_=sns.jointplot(
    x=embedding[clustered, 0],
    y=embedding[clustered, 1],
    hue=labels[clustered],
    kind="kde")

Percent of data retained: 1.0
Total Clusters found: 3
Cluster splits: [2501, 1236, 1263]

Interestingly enough, no noise was found. Two clusters are almost exactly the same size, with one nearly their combined size. Visualizing the data on the same slice gives us a clue as to what happened here: the clusters numbered 3 and 2 from our previous run are now combined.

Shifting to a different dimensional slice can sometimes help explain things, and the plot below shows a better view.

_=sns.jointplot(
    x=embedding[clustered, 1],
    y=embedding[clustered, 2],
    hue=labels[clustered],
    kind="kde")

I hope you enjoyed a closer look at how to tune hyperparameters for HDBSCAN!!!

In this post you looked at why clustering and clustering metrics can get complicated, learned about DBCV as an objective metric, and then applied it using Amazon DenseClus and HDBSCAN.

We've only scratched the surface here. To dive deeper you could look at the following:

What other types of optimization frameworks can you use in place of Random Search?

What other types of hyperparameters are possible to use for tuning?

What other measures are possible here for further cluster validation?

Can any other underlying hyperparameters in DenseClus be tweaked to achieve a higher score?

“Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”, Rousseeuw 1987

“Density-Based Clustering Validation”, Moulavi et al. 2014

“hdbscan: Hierarchical density based clustering In”, McInnes, J. Healy, S. Astels 2017
How to remove event handlers in JavaScript?
JavaScript provides the removeEventListener() method to remove event handlers. An event handler is attached through the addEventListener() method, and the removeEventListener() method removes handlers that have been attached with that addEventListener() method.

<html>
<body>
   <h3 id="add">Hover me </h3>
   <p id="remove"> </p>
   <h3>Click this button to stop hovering effect </h3>
   <input type="button" id="clickIt" onclick="RespondClick()" value="remove">
   <script>
      const listener = document.getElementById("add");
      listener.addEventListener("mouseover", RespondMouseOver);
      function RespondMouseOver() {
         document.getElementById("remove").innerHTML += 1 + "<br>";
      }
      function RespondClick() {
         listener.removeEventListener("mouseover", RespondMouseOver);
         document.getElementById("remove").innerHTML += 0;
      }
   </script>
</body>
</html>

When we execute the above code, the "Hover me" text, an empty paragraph, and the "remove" button are displayed on the screen.

If we hover over the "Hover me" text, 1's will appear; the number of times we hover is the number of times a 1 is produced. For example, hovering over the text 4 times prints four 1's on the screen.

To stop the event handler, we have to click on the "remove" button. After clicking the remove button, a '0' is displayed on the screen, and from then on, no more 1's are produced even when we hover over the text.
Multiply Strings in C++
Suppose we have two non-negative numbers given as strings. We have to multiply them and return the result, also as a string. So if the numbers are "26" and "12", then the result will be "312".

To solve this, we use grade-school (digit-by-digit) multiplication −

Let n and m be the lengths of the two strings. The product of an n-digit number and an m-digit number has at most n + m digits, so initialize a result string ans of n + m characters, all '0'.

For every pair of digits nums1[i] and nums2[j], scanning both strings from their rightmost (least significant) ends, compute p := nums1[i] * nums2[j] + (the digit currently stored at ans[i + j + 1]).

Store p mod 10 at ans[i + j + 1] and add the carry p / 10 to ans[i + j].

Finally, skip any leading zeros of ans; if every character is '0', return "0".

Let us see the following implementation to get a better understanding −

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   string multiply(string num1, string num2);
};
string Solution::multiply(string nums1, string nums2) {
   int n = nums1.size();
   int m = nums2.size();
   // the product has at most n + m digits
   string ans(n + m, '0');
   for (int i = n - 1; i >= 0; i--) {
      for (int j = m - 1; j >= 0; j--) {
         // digit product plus whatever is already stored at this position
         int p = (nums1[i] - '0') * (nums2[j] - '0') + (ans[i + j + 1] - '0');
         ans[i + j + 1] = p % 10 + '0';   // the low digit stays here
         ans[i + j] += p / 10;            // the carry moves one position left
      }
   }
   // skip leading zeros
   for (int i = 0; i < m + n; i++) {
      if (ans[i] != '0') return ans.substr(i);
   }
   return "0";
}
int main() {
   Solution ob;
   cout << ob.multiply("26", "12");
   return 0;
}

Input:
"26"
"12"
Output:
"312"
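For reference, here is a rough sketch of the same grade-school approach in Python (this translation is our illustration, not part of the original solution); the accumulation into res[i + j + 1] and the carry into res[i + j] mirror the C++ loop above.

def multiply_strings(num1: str, num2: str) -> str:
    n, m = len(num1), len(num2)
    res = [0] * (n + m)  # product of n-digit and m-digit numbers has at most n + m digits
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            p = int(num1[i]) * int(num2[j]) + res[i + j + 1]
            res[i + j + 1] = p % 10   # the low digit stays in this position
            res[i + j] += p // 10     # the carry moves one position left
    out = "".join(map(str, res)).lstrip("0")
    return out or "0"

print(multiply_strings("26", "12"))  # 312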
Generate a Random Birthday Wishes using JavaScript
In this article, we are going to learn how to create a webpage that generates a random birthday wish using HTML, CSS, and JavaScript.

Approach:

HTML Code: In this section, we create the basic structure of the HTML file, i.e. the <html>, <head>, and <body> tags, plus a container that will hold the message.

<!DOCTYPE html>
<html>
<head>
   <title>Page Title</title>
</head>
<body>
   <div class="body-bg"></div>
   <div class="container">
      <div class="container-data">
         <div id="msg"></div>
      </div>
   </div>
</body>
</html>

CSS Code: This styles the HTML document. Here we set the width, height, and position of the containers, and use CSS media queries to adjust the font size on different screen sizes. (The original media-query rules repeated every declaration; since only the font size differs at each breakpoint, the rules below keep just that declaration, with identical effect.)

.body-bg { width: 100%; height: 100%; position: fixed; top: 50%; left: 50%; transform: translateX(-50%) translateY(-50%); background: radial-gradient(#00E7BD, #013A4E); transition: all 0.5s; }
.container { width: 80%; height: 80%; position: fixed; top: 50%; left: 50%; transform: translateX(-50%) translateY(-50%); background: linear-gradient(#00E7BD, #013A4E); box-shadow: 0 0 20px 2px #013A4E; }
.container-data { padding: 24px; position: absolute; top: 50%; left: 50%; width: 80%; transform: translateX(-50%) translateY(-50%); text-align: center; color: #fff; font-family: Arial; font-size: 3vw; }
/* only the font size changes at these breakpoints */
@media only screen and (max-width: 763px) { .container-data { font-size: 6vw; } }
@media only screen and (max-height: 423px) { .container-data { font-size: 4vw; } }

JavaScript Code: Now we want to display a random birthday message, and this is done with JavaScript. We store all the messages in an array, read its length property to know how many there are, and then use Math.floor(Math.random() * length) to generate a random index and display the message stored there.

var messages = ["Happy birthday to GFG",
                "Happy birthday to GeeksforGeeks",
                "Happy birthday to Geeks"];

var i = messages.length;
var s = Math.floor(Math.random() * i);

document.getElementById("msg").innerHTML = '" ' + messages[s] + ' "';

Complete Code: Combining the above three sections gives the full page below, which displays a random birthday wish.
<!DOCTYPE html>
<html>
<head>
   <meta charset="utf-8" />
   <meta name="viewport" content="width=device-width, initial-scale=1" />
   <style>
      .body-bg { width: 100%; height: 100%; position: fixed; top: 50%; left: 50%; transform: translateX(-50%) translateY(-50%); background: radial-gradient(#00e7bd, #013a4e); transition: all 0.5s; }
      .container { width: 80%; height: 80%; position: fixed; top: 50%; left: 50%; transform: translateX(-50%) translateY(-50%); background: linear-gradient(#00e7bd, #013a4e); box-shadow: 0 0 20px 2px #013a4e; }
      .container-data { padding: 24px; position: absolute; top: 50%; left: 50%; width: 80%; transform: translateX(-50%) translateY(-50%); text-align: center; color: #fff; font-family: Arial; font-size: 3vw; }
      /* only the font size changes at these breakpoints */
      @media only screen and (max-width: 763px) { .container-data { font-size: 6vw; } }
      @media only screen and (max-height: 423px) { .container-data { font-size: 4vw; } }
   </style>
</head>
<body>
   <div class="body-bg"></div>
   <div class="container">
      <div class="container-data">
         <div id="msg"></div>
      </div>
   </div>
   <script>
      var messages = ["Happy birthday to GFG",
                      "Happy birthday to GeeksforGeeks",
                      "Happy birthday to Geeks"];
      var i = messages.length;
      var s = Math.floor(Math.random() * i);
      document.getElementById("msg").innerHTML = '" ' + messages[s] + ' "';
   </script>
</body>
</html>

Output: the page shows one of the three wishes, chosen at random, centered on a gradient background.
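The same selection logic can also be sketched outside the browser; for instance, this illustrative Python snippet (our addition, not part of the original article) picks one wish at random with random.choice, which performs the same floor(random() * length) indexing internally:

import random

messages = ["Happy birthday to GFG",
            "Happy birthday to GeeksforGeeks",
            "Happy birthday to Geeks"]

# random.choice does the random-index selection for us
print('" ' + random.choice(messages) + ' "')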
Python – Group list by first character of string
Sometimes we need to group the strings in a list by some factor, such as their first letter. This type of problem is typical of database queries and hence can occur in web development. This article focuses on one such grouping: by the first character of each string. Let's discuss certain ways in which this can be performed.

Method #1 : Using next() + lambda + loop
The combination of the above 3 functions solves this problem in a naive way. The lambda function checks whether two strings share the same first character, and next() finds the existing group for a string, if any, while iterating forward.

# Python3 code to demonstrate
# grouping a list by first character
# using next() + lambda + loop

# initializing list
test_list = ['an', 'a', 'geek', 'for', 'g', 'free']

# printing original list
print("The original list : " + str(test_list))

# using next() + lambda + loop
util_func = lambda x, y: x[0] == y[0]
res = []
for sub in test_list:
    # find the group whose first word starts with the same character
    ele = next((x for x in res if util_func(sub, x[0])), [])
    if ele == []:
        res.append(ele)
    ele.append(sub)

# print result
print("The list after Categorization : " + str(res))

Output:
The original list : ['an', 'a', 'geek', 'for', 'g', 'free']
The list after Categorization : [['an', 'a'], ['geek', 'g'], ['for', 'free']]

Method #2 : Using sorted() + groupby()
This task can also be solved using the groupby() function, which offers a conventional way to perform such grouping. sorted() first orders the elements by their first character so that they can be fed to groupby(). Note that, with this method, the groups come out in sorted order of the first character rather than in order of first appearance.

# Python3 code to demonstrate
# grouping a list by first character
# using sorted() + groupby()
from itertools import groupby

# initializing list
test_list = ['an', 'a', 'geek', 'for', 'g', 'free']

# printing original list
print("The original list : " + str(test_list))

# using sorted() + groupby()
util_func = lambda x: x[0]
temp = sorted(test_list, key = util_func)
res = [list(ele) for i, ele in groupby(temp, util_func)]

# print result
print("The list after Categorization : " + str(res))

Output:
The original list : ['an', 'a', 'geek', 'for', 'g', 'free']
The list after Categorization : [['an', 'a'], ['for', 'free'], ['geek', 'g']]
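As a further alternative (our addition, not in the original article), a collections.defaultdict keeps one list per first character in a single pass, avoiding both the inner search of Method #1 and the pre-sort that groupby() requires:

from collections import defaultdict

test_list = ['an', 'a', 'geek', 'for', 'g', 'free']

groups = defaultdict(list)
for word in test_list:
    # dicts preserve insertion order (Python 3.7+), so groups appear
    # in order of first appearance, matching Method #1's output
    groups[word[0]].append(word)

print("The list after Categorization : " + str(list(groups.values())))
# The list after Categorization : [['an', 'a'], ['geek', 'g'], ['for', 'free']]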
Multiplication of two complex numbers given as strings
Given two complex numbers in the form of strings, our task is to print the product of these two complex numbers.

Examples:

Input : str1 = "1+1i"
        str2 = "1+1i"
Output : "0+2i"
Here, (1 + i) * (1 + i) = 1 + i² + 2 * i = 2i or "0+2i"

Input : str1 = "1+-1i"
        str2 = "1+-1i"
Output : "0+-2i"
Here, (1 - i) * (1 - i) = 1 + i² - 2 * i = -2i or "0+-2i"

Multiplication of two complex numbers (a + bi) and (c + di) is done as (ac - bd) + (ad + bc)i.

We simply split the real and imaginary parts of the given complex strings based on the '+' and 'i' symbols. We store the real parts of the two strings a and b as x[0] and y[0] respectively, and the imaginary parts as x[1] and y[1] respectively. After converting the extracted parts into integers, the real part of the product is x[0] * y[0] - x[1] * y[1] and the imaginary part is x[0] * y[1] + x[1] * y[0]. We then form the return string in the required format and return the result.

// C++ implementation of the above approach
#include <bits/stdc++.h>
using namespace std;
string complexNumberMultiply(string a, string b)
{
   int i;
   string x1;
   int temp = 1;

   // parse the real part of a, tracking a leading '-'
   for (i = 0; i < a.length(); i++) {
      if (a[i] == '+')
         break;
      if (a[i] == '-') {
         temp = -1;
         continue;
      }
      x1.push_back(a[i]);
   }
   int t1 = stoi(x1) * temp;
   x1.clear();
   temp = 1;

   // parse the imaginary part of a (stops before the trailing 'i')
   for (; i < a.length() - 1; i++) {
      if (a[i] == '-') {
         temp = -1;
         continue;
      }
      x1.push_back(a[i]);
   }
   int t2 = stoi(x1) * temp;
   x1.clear();
   temp = 1;

   // parse the real part of b
   for (i = 0; i < b.length(); i++) {
      if (b[i] == '+')
         break;
      if (b[i] == '-') {
         temp = -1;
         continue;
      }
      x1.push_back(b[i]);
   }
   int t3 = stoi(x1) * temp;
   x1.clear();
   temp = 1;

   // parse the imaginary part of b
   for (; i < b.length() - 1; i++) {
      if (b[i] == '-') {
         temp = -1;
         continue;
      }
      x1.push_back(b[i]);
   }
   int t4 = stoi(x1) * temp;

   // real part
   int ans = t1 * t3 - t2 * t4;
   string s;
   s += to_string(ans);
   s += '+';

   // imaginary part
   ans = t1 * t4 + t2 * t3;
   s += to_string(ans);
   s += 'i';

   return s;
}

int main()
{
   string str1 = "1+1i";
   string str2 = "1+1i";
   cout << complexNumberMultiply(str1, str2);
   return 0;
}

// Java program to multiply two complex numbers
// given as strings.
public class GfG {

   public static String complexNumberMultiply(String a, String b)
   {
      // Splitting the real and imaginary parts of the given
      // complex strings based on the '+' and 'i' symbols.
      String x[] = a.split("\\+|i");
      String y[] = b.split("\\+|i");

      // real and imaginary parts of complex string a
      int a_real = Integer.parseInt(x[0]);
      int a_img = Integer.parseInt(x[1]);

      // real and imaginary parts of complex string b
      int b_real = Integer.parseInt(y[0]);
      int b_img = Integer.parseInt(y[1]);

      // return the product
      return (a_real * b_real - a_img * b_img) + "+"
           + (a_real * b_img + a_img * b_real) + "i";
   }

   // Driver function
   public static void main(String argc[])
   {
      String str1 = "1+1i";
      String str2 = "1+1i";
      System.out.println(complexNumberMultiply(str1, str2));
   }
}

# Python3 program to multiply two complex numbers
# given as strings.
def complexNumberMultiply(a, b):

    # Splitting the real and imaginary parts of the given
    # complex strings based on the '+' and 'i' symbols.
    x = a.split('+')
    x[1] = x[1][:-1]  # removing the trailing 'i'
    y = b.split('+')
    y[1] = y[1][:-1]  # removing the trailing 'i'

    # real and imaginary parts of complex string a
    a_real = int(x[0])
    a_img = int(x[1])

    # real and imaginary parts of complex string b
    b_real = int(y[0])
    b_img = int(y[1])

    return str(a_real * b_real - a_img * b_img) \
        + "+" + str(a_real * b_img + a_img * b_real) + "i"

# Driver code
str1 = "1+1i"
str2 = "1+1i"
print(complexNumberMultiply(str1, str2))

// C# program to multiply two complex
// numbers given as strings.
using System;
using System.Text.RegularExpressions;

class GfG {

   public static String complexNumberMultiply(String a, String b)
   {
      // Splitting the real and imaginary parts of the given
      // complex strings based on the '+' and 'i' symbols.
      String[] x = Regex.Split(a, @"\+|i");
      String[] y = Regex.Split(b, @"\+|i");

      // real and imaginary parts of complex string a
      int a_real = Int32.Parse(x[0]);
      int a_img = Int32.Parse(x[1]);

      // real and imaginary parts of complex string b
      int b_real = Int32.Parse(y[0]);
      int b_img = Int32.Parse(y[1]);

      // return the product
      return (a_real * b_real - a_img * b_img) + "+"
           + (a_real * b_img + a_img * b_real) + "i";
   }

   // Driver code
   public static void Main(String[] argc)
   {
      String str1 = "1+1i";
      String str2 = "1+1i";
      Console.WriteLine(complexNumberMultiply(str1, str2));
   }
}

<?php
// PHP program to multiply two complex
// numbers given as strings.
function complexNumberMultiply($a, $b)
{
   // Splitting the real and imaginary parts of the given
   // complex strings based on the '+' and 'i' symbols.
   $x = preg_split("/[\s+]+|i/", $a);
   $y = preg_split("/[\s+]+|i/", $b);

   // real and imaginary parts of complex string a
   $a_real = intval($x[0]);
   $a_img = intval($x[1]);

   // real and imaginary parts of complex string b
   $b_real = intval($y[0]);
   $b_img = intval($y[1]);

   // return the product
   return ($a_real * $b_real - $a_img * $b_img) . "+" .
          ($a_real * $b_img + $a_img * $b_real) . "i";
}

// Driver code
$str1 = "1+1i";
$str2 = "1+1i";
echo complexNumberMultiply($str1, $str2);
?>

<script>
// JavaScript program to multiply two complex numbers
// given as strings.
function complexNumberMultiply(a, b)
{
   // Splitting the parts based on the '+' symbol;
   // parseInt ignores the trailing 'i'.
   var x = a.split('+');
   var y = b.split('+');

   // real and imaginary parts of complex string a
   var a_real = parseInt(x[0]);
   var a_img = parseInt(x[1]);

   // real and imaginary parts of complex string b
   var b_real = parseInt(y[0]);
   var b_img = parseInt(y[1]);

   // return the product
   return (a_real * b_real - a_img * b_img) + "+"
        + (a_real * b_img + a_img * b_real) + "i";
}

// Driver code
var str1 = "1+1i";
var str2 = "1+1i";
document.write(complexNumberMultiply(str1, str2));
</script>

Output:
0+2i
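As a quick sanity check (our illustration, not part of the original article), Python's built-in complex type can verify the string-based routines above; we only need to map the article's "i" notation onto Python's "j" literals:

def complex_multiply(a: str, b: str) -> str:
    # map "1+1i" -> "1+1j" and the article's "1+-1i" form -> "1-1j",
    # since complex() rejects a "+-" sequence in the literal
    x = complex(a.replace("i", "j").replace("+-", "-"))
    y = complex(b.replace("i", "j").replace("+-", "-"))
    z = x * y
    return f"{int(z.real)}+{int(z.imag)}i"

print(complex_multiply("1+1i", "1+1i"))    # 0+2i
print(complex_multiply("1+-1i", "1+-1i"))  # 0+-2i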
Python | Ways to find length of list
Lists are an integral part of day-to-day Python programming, and knowing their operations is essential for every Python user. This article discusses one such utility: finding the number of elements in a list.

Method 1: Naive Method

In this method, one simply runs a loop and increments a counter up to the last element of the list to obtain its count. This is the most basic strategy, used in the absence of the other techniques.

Code #1 : Demonstrating finding the length of a list using the Naive Method

# Python code to demonstrate
# length of list
# using naive method

# initializing list
test_list = [1, 4, 5, 7, 8]

# printing test_list
print("The list is : " + str(test_list))

# finding length of list using loop
counter = 0
for i in test_list:
    # incrementing counter
    counter = counter + 1

# printing length of list
print("Length of list using naive method is : " + str(counter))

Output :
The list is : [1, 4, 5, 7, 8]
Length of list using naive method is : 5

Method 2 : Using len()

The len() method offers the most used and easiest way to find the length of any list. It is the conventional technique adopted by programmers today.

# Python program to demonstrate working of len()
a = []
a.append("Hello")
a.append("Geeks")
a.append("For")
a.append("Geeks")
print("The length of list is: ", len(a))

The length of list is:  4

# Python program to demonstrate working of len()
n = len([10, 20, 30])
print("The length of list is: ", n)

The length of list is:  3

Method 3 : Using length_hint()

This is a lesser-known technique for finding list length. length_hint() is defined in the operator module and it, too, returns the number of elements present in the list.

Code #2 : Demonstrating finding the length of a list using len() and length_hint()

# Python code to demonstrate
# length of list
# using len() and length_hint
from operator import length_hint

# initializing list
test_list = [1, 4, 5, 7, 8]

# printing test_list
print("The list is : " + str(test_list))

# finding length of list using len()
list_len = len(test_list)

# finding length of list using length_hint()
list_len_hint = length_hint(test_list)

# printing length of list
print("Length of list using len() is : " + str(list_len))
print("Length of list using length_hint() is : " + str(list_len_hint))

Output :
The list is : [1, 4, 5, 7, 8]
Length of list using len() is : 5
Length of list using length_hint() is : 5

Performance Analysis – Naive vs len() vs length_hint()

When choosing among alternatives it is necessary to have a valid reason to prefer one over another. This section measures how long each method takes to execute, to inform that choice; a steadier benchmarking sketch follows after the output below.

Code #3: Performance Analysis

# Python code to demonstrate
# length of list
# performance analysis
from operator import length_hint
import time

# initializing list
test_list = [1, 4, 5, 7, 8]

# printing test_list
print("The list is : " + str(test_list))

# finding length of list using loop
start_time_naive = time.time()
counter = 0
for i in test_list:
    # incrementing counter
    counter = counter + 1
end_time_naive = str(time.time() - start_time_naive)

# finding length of list using len()
start_time_len = time.time()
list_len = len(test_list)
end_time_len = str(time.time() - start_time_len)

# finding length of list using length_hint()
start_time_hint = time.time()
list_len_hint = length_hint(test_list)
end_time_hint = str(time.time() - start_time_hint)

# printing times of each
print("Time taken using naive method is : " + end_time_naive)
print("Time taken using len() is : " + end_time_len)
print("Time taken using length_hint() is : " + end_time_hint)

Output :
The list is : [1, 4, 5, 7, 8]
Time taken using naive method is : 2.6226043701171875e-06
Time taken using len() is : 1.1920928955078125e-06
Time taken using length_hint() is : 1.430511474609375e-06

In the run above, the time taken is naive >> length_hint() > len(), but the timing depends heavily on the OS and its current load. Two consecutive runs may give contrasting results; sometimes the naive loop even takes the least time of the three. Any relative ordering of the methods can occur, for example:

naive > len() > length_hint()
naive > len() = length_hint()
naive > length_hint() > len()
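Because time.time() has coarse resolution for a single call, single-shot timings like the above are noisy. A steadier comparison (our sketch, not part of the original article) repeats each call many times with the timeit module:

import timeit
from operator import length_hint

test_list = [1, 4, 5, 7, 8]

# each timing repeats the call one million times, averaging out the noise
print("len()        :", timeit.timeit(lambda: len(test_list), number=1_000_000))
print("length_hint():", timeit.timeit(lambda: length_hint(test_list), number=1_000_000))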
Socket Programming in C#
Socket programming is a way of connecting two nodes on a network so they can communicate with each other. Basically, it is a one-way client-and-server setup where a client connects and sends messages to the server, and the server shows them, using a socket connection. One socket (node) listens on a particular port at an IP address, while the other socket reaches out to it to form a connection. The server forms the listener socket while the client reaches out to the server. Before going deeper into the server and client code, it is strongly recommended to go through the TCP/IP Model.

Before creating the client's socket, the user must decide what 'IP Address' to connect to; in this case, it is the localhost. We also need the address 'Family' that the socket itself will use. Then, through the 'Connect' method, we connect the socket to the server. Before sending any message, it must be converted into a byte array; only then can it be sent to the server through the 'Send' method. Later, thanks to the 'Receive' method, we get a byte array as the answer from the server. Note that, just as in the C language, the 'Send' and 'Receive' methods return the number of bytes sent or received.

// A C# program for the Client
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace Client {

class Program {

   // Main Method
   static void Main(string[] args)
   {
      ExecuteClient();
   }

   // ExecuteClient() Method
   static void ExecuteClient()
   {
      try {
         // Establish the remote endpoint for the socket.
         // This example uses port 11111 on the local computer.
         IPHostEntry ipHost = Dns.GetHostEntry(Dns.GetHostName());
         IPAddress ipAddr = ipHost.AddressList[0];
         IPEndPoint localEndPoint = new IPEndPoint(ipAddr, 11111);

         // Create a TCP/IP Socket using the Socket class constructor
         Socket sender = new Socket(ipAddr.AddressFamily,
                                    SocketType.Stream, ProtocolType.Tcp);

         try {
            // Connect the Socket to the remote endpoint
            // using the Connect() method
            sender.Connect(localEndPoint);

            // Print the EndPoint information we are connected to
            Console.WriteLine("Socket connected to -> {0} ",
                              sender.RemoteEndPoint.ToString());

            // Create the message that we will send to the Server;
            // it must travel as a byte array
            byte[] messageSent = Encoding.ASCII.GetBytes("Test Client<EOF>");
            int byteSent = sender.Send(messageSent);

            // Data buffer
            byte[] messageReceived = new byte[1024];

            // Receive() returns the number of bytes received,
            // which we use to convert them to a string
            int byteRecv = sender.Receive(messageReceived);
            Console.WriteLine("Message from Server -> {0}",
                              Encoding.ASCII.GetString(messageReceived, 0, byteRecv));

            // Close the Socket using the Close() method
            sender.Shutdown(SocketShutdown.Both);
            sender.Close();
         }

         // Handle the Socket's exceptions
         catch (ArgumentNullException ane) {
            Console.WriteLine("ArgumentNullException : {0}", ane.ToString());
         }
         catch (SocketException se) {
            Console.WriteLine("SocketException : {0}", se.ToString());
         }
         catch (Exception e) {
            Console.WriteLine("Unexpected exception : {0}", e.ToString());
         }
      }
      catch (Exception e) {
         Console.WriteLine(e.ToString());
      }
   }
}
}

In the same way, we need an 'IP address' that identifies the server so that clients can connect. After creating the socket, we call the 'Bind' method, which binds the IP to the socket. Then we call the 'Listen' method. This operation is responsible for creating the waiting queue that is associated with each opened 'socket'. The 'Listen' method takes as input the maximum number of clients that can stay in the waiting queue. As stated above, communication with the client happens through the 'Send' and 'Receive' methods.

Note: Don't forget the conversion into a byte array.

// A C# program for the Server
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace Server {

class Program {

   // Main Method
   static void Main(string[] args)
   {
      ExecuteServer();
   }

   public static void ExecuteServer()
   {
      // Establish the local endpoint for the socket.
      // Dns.GetHostName returns the name of the host
      // running the application.
      IPHostEntry ipHost = Dns.GetHostEntry(Dns.GetHostName());
      IPAddress ipAddr = ipHost.AddressList[0];
      IPEndPoint localEndPoint = new IPEndPoint(ipAddr, 11111);

      // Create a TCP/IP Socket using the Socket class constructor
      Socket listener = new Socket(ipAddr.AddressFamily,
                                   SocketType.Stream, ProtocolType.Tcp);

      try {
         // Bind() associates a network address with the server socket;
         // every client that wants to connect to this server socket
         // must know this network address
         listener.Bind(localEndPoint);

         // Listen() creates the queue of clients
         // waiting to connect to the server
         listener.Listen(10);

         while (true) {
            Console.WriteLine("Waiting connection ... ");

            // Suspend while waiting for an incoming connection;
            // Accept() accepts the connection of a client
            Socket clientSocket = listener.Accept();

            // Data buffer
            byte[] bytes = new Byte[1024];
            string data = null;

            while (true) {
               int numByte = clientSocket.Receive(bytes);
               data += Encoding.ASCII.GetString(bytes, 0, numByte);
               if (data.IndexOf("<EOF>") > -1)
                  break;
            }

            Console.WriteLine("Text received -> {0} ", data);
            byte[] message = Encoding.ASCII.GetBytes("Test Server");

            // Send a message to the client using the Send() method
            clientSocket.Send(message);

            // Close the client socket using Close(); after closing,
            // the server can accept a new client connection
            clientSocket.Shutdown(SocketShutdown.Both);
            clientSocket.Close();
         }
      }
      catch (Exception e) {
         Console.WriteLine(e.ToString());
      }
   }
}
}

To run on Terminal or Command Prompt: first save the files with the .cs extension, say as client.cs and server.cs. Then compile both files by executing the following commands:

$ csc client.cs
$ csc server.cs

After successful compilation, open two command prompts, one for the server and another for the client, and execute the server first. Then, on the other prompt, execute the client and watch the server-side prompt: the server prints the received text as soon as the client program executes.
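For comparison, here is a minimal sketch of the same request/response flow using Python's socket module (our illustration, not part of the original C# article); bind(), listen(), accept(), sendall(), and recv() map directly onto the Bind(), Listen(), Accept(), Send(), and Receive() calls above. Run the server first, then the client.

import socket

HOST, PORT = "127.0.0.1", 11111

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.bind((HOST, PORT))          # associate the address, like Bind()
        listener.listen(10)                  # create the waiting queue, like Listen()
        conn, addr = listener.accept()       # block until a client connects
        with conn:
            data = conn.recv(1024)           # read the client's bytes
            print("Text received ->", data.decode())
            conn.sendall(b"Test Server")     # reply, like Send()

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sender:
        sender.connect((HOST, PORT))         # like Connect()
        sender.sendall(b"Test Client<EOF>")  # messages travel as byte arrays
        reply = sender.recv(1024)
        print("Message from Server ->", reply.decode())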
[ { "code": null, "e": 54, "s": 26, "text": "\n10 Sep, 2021" }, { "code": null, "e": 612, "s": 54, "text": "Socket programming is a way of connecting two nodes on a network to communicate with each other. Basically, it is a one-way Client and Server setup where a Client connects, sends messages to the server and the server shows them using socket connection. One socket (node) listens on a particular port at an IP, while other socket reaches out to the other to form a connection. Server forms the listener socket while the client reaches out to the server. Before going deeper into Server and Client code, it is strongly recommended to go through TCP/IP Model. " }, { "code": null, "e": 1286, "s": 612, "text": " Before creating client’s socket a user must decide what ‘IP Address‘ that he want to connect to, in this case, it is the localhost. At the same time, we also need the ‘Family‘ method that will belong to the socket itself. Then, through the ‘connect‘ method, we will connect the socket to the server. Before sending any message, it must be converted into a byte array. Then and only then, it can be sent to the server through the ‘send‘ method. Later, thanks to the ‘receive‘ method we are going to get a byte array as answer by the server. It is notable that just like in the C language, the ‘send’ and ‘receive’ methods still return the number of bytes sent or received." }, { "code": null, "e": 1289, "s": 1286, "text": "C#" }, { "code": "// A C# program for Clientusing System;using System.Net;using System.Net.Sockets;using System.Text; namespace Client { class Program { // Main Methodstatic void Main(string[] args){ ExecuteClient();} // ExecuteClient() Methodstatic void ExecuteClient(){ try { // Establish the remote endpoint // for the socket. This example // uses port 11111 on the local // computer. IPHostEntry ipHost = Dns.GetHostEntry(Dns.GetHostName()); IPAddress ipAddr = ipHost.AddressList[0]; IPEndPoint localEndPoint = new IPEndPoint(ipAddr, 11111); // Creation TCP/IP Socket using // Socket Class Constructor Socket sender = new Socket(ipAddr.AddressFamily, SocketType.Stream, ProtocolType.Tcp); try { // Connect Socket to the remote // endpoint using method Connect() sender.Connect(localEndPoint); // We print EndPoint information // that we are connected Console.WriteLine(\"Socket connected to -> {0} \", sender.RemoteEndPoint.ToString()); // Creation of message that // we will send to Server byte[] messageSent = Encoding.ASCII.GetBytes(\"Test Client<EOF>\"); int byteSent = sender.Send(messageSent); // Data buffer byte[] messageReceived = new byte[1024]; // We receive the message using // the method Receive(). 
This // method returns number of bytes // received, that we'll use to // convert them to string int byteRecv = sender.Receive(messageReceived); Console.WriteLine(\"Message from Server -> {0}\", Encoding.ASCII.GetString(messageReceived, 0, byteRecv)); // Close Socket using // the method Close() sender.Shutdown(SocketShutdown.Both); sender.Close(); } // Manage of Socket's Exceptions catch (ArgumentNullException ane) { Console.WriteLine(\"ArgumentNullException : {0}\", ane.ToString()); } catch (SocketException se) { Console.WriteLine(\"SocketException : {0}\", se.ToString()); } catch (Exception e) { Console.WriteLine(\"Unexpected exception : {0}\", e.ToString()); } } catch (Exception e) { Console.WriteLine(e.ToString()); }}}}", "e": 3851, "s": 1289, "text": null }, { "code": null, "e": 4383, "s": 3851, "text": "In the same way, we need an ‘IP address’ that identifies the server in order to let the clients to connect. After creating the socket, we call the ‘bind‘ method which binds the IP to the socket. Then, call the ‘listen‘ method. This operation is responsible for creating the waiting queue which will be related to every opened ‘socket‘. The ‘listen‘ method takes as input the maximum number of clients that can stay in the waiting queue. As stated above, there is communication with the client through ‘send‘ and ‘receive‘ methods. " }, { "code": null, "e": 4437, "s": 4383, "text": "Note: Don’t forget the conversion into a byte array. " }, { "code": null, "e": 4440, "s": 4437, "text": "C#" }, { "code": "// A C# Program for Serverusing System;using System.Net;using System.Net.Sockets;using System.Text; namespace Server { class Program { // Main Methodstatic void Main(string[] args){ ExecuteServer();} public static void ExecuteServer(){ // Establish the local endpoint // for the socket. Dns.GetHostName // returns the name of the host // running the application. IPHostEntry ipHost = Dns.GetHostEntry(Dns.GetHostName()); IPAddress ipAddr = ipHost.AddressList[0]; IPEndPoint localEndPoint = new IPEndPoint(ipAddr, 11111); // Creation TCP/IP Socket using // Socket Class Constructor Socket listener = new Socket(ipAddr.AddressFamily, SocketType.Stream, ProtocolType.Tcp); try { // Using Bind() method we associate a // network address to the Server Socket // All client that will connect to this // Server Socket must know this network // Address listener.Bind(localEndPoint); // Using Listen() method we create // the Client list that will want // to connect to Server listener.Listen(10); while (true) { Console.WriteLine(\"Waiting connection ... \"); // Suspend while waiting for // incoming connection Using // Accept() method the server // will accept connection of client Socket clientSocket = listener.Accept(); // Data buffer byte[] bytes = new Byte[1024]; string data = null; while (true) { int numByte = clientSocket.Receive(bytes); data += Encoding.ASCII.GetString(bytes, 0, numByte); if (data.IndexOf(\"<EOF>\") > -1) break; } Console.WriteLine(\"Text received -> {0} \", data); byte[] message = Encoding.ASCII.GetBytes(\"Test Server\"); // Send a message to Client // using Send() method clientSocket.Send(message); // Close client Socket using the // Close() method. 
After closing, // we can use the closed Socket // for a new Client Connection clientSocket.Shutdown(SocketShutdown.Both); clientSocket.Close(); } } catch (Exception e) { Console.WriteLine(e.ToString()); }}}}", "e": 6918, "s": 4440, "text": null }, { "code": null, "e": 6957, "s": 6918, "text": "To run on Terminal or Command Prompt: " }, { "code": null, "e": 7053, "s": 6957, "text": "First save the files with .cs extension. Suppose we saved the files as client.cs and server.cs." }, { "code": null, "e": 7118, "s": 7053, "text": "Then compile both the files by executing the following commands:" }, { "code": null, "e": 7134, "s": 7118, "text": "$ csc client.cs" }, { "code": null, "e": 7150, "s": 7134, "text": "$ csc server.cs" }, { "code": null, "e": 7282, "s": 7150, "text": "After successful compilation opens the two cmd one for Server and another for Client and first try to execute the server as follows" }, { "code": null, "e": 7385, "s": 7282, "text": "After that on another cmd execute the client code and see the following output on the server side cmd." }, { "code": null, "e": 7467, "s": 7385, "text": "Now you can see the changes on the server as soon as the client program executes." }, { "code": null, "e": 7480, "s": 7467, "text": "simmytarika5" }, { "code": null, "e": 7499, "s": 7480, "text": "surindertarika1234" }, { "code": null, "e": 7502, "s": 7499, "text": "C#" }, { "code": null, "e": 7514, "s": 7502, "text": "C# Programs" }, { "code": null, "e": 7612, "s": 7514, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 7661, "s": 7612, "text": "Differences Between .NET Core and .NET Framework" }, { "code": null, "e": 7722, "s": 7661, "text": "C# | .NET Framework (Basic Architecture and Component Stack)" }, { "code": null, "e": 7747, "s": 7722, "text": "Lambda Expressions in C#" }, { "code": null, "e": 7770, "s": 7747, "text": "Extension Method in C#" }, { "code": null, "e": 7787, "s": 7770, "text": "C# | Abstraction" }, { "code": null, "e": 7827, "s": 7787, "text": "Convert String to Character Array in C#" }, { "code": null, "e": 7861, "s": 7827, "text": "Program to Print a New Line in C#" }, { "code": null, "e": 7910, "s": 7861, "text": "Program to find absolute value of a given number" }, { "code": null, "e": 7956, "s": 7910, "text": "Getting a Month Name Using Month Number in C#" } ]
Difference Between next() and hasNext() Method in Java Collections
15 Sep, 2021

In Java, objects are commonly stored in collections, and traversing these objects is done using a for-each loop, iterators, and comparators. Here we will be discussing iterators. The Iterator interface allows visiting elements in containers one by one, which indirectly signifies retrieval of elements of the collection in the forward direction only.

This interface comprises three methods:

next()
hasNext()
remove()

(A) hasNext() Method

The hasNext() method is used to check whether there is any element remaining in the List. This method is a boolean type method that returns only true or false, as it is just used for checking purposes. The hasNext() methods of the Iterator and ListIterator return true if, during traversal, the collection object being iterated over still has a next element; if not, they simply return false. So,

Return Value:
 True - if iteration has more elements
 False - if iteration has no more elements

Return type: boolean

Example:

Java

// Java program to demonstrate
// the use of hasNext() method

// Importing java input output classes
import java.io.*;
// Importing all classes from
// java.util package
import java.util.*;

// Class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an ArrayList
        // Declaring the ArrayList
        ArrayList<String> list = new ArrayList<String>();

        // Adding (appending) new elements at
        // the end of the List
        // Custom inputs
        list.add("Geeks");
        list.add("for Geeks");

        // Declaring the Iterator
        Iterator<String> iterator = list.iterator();

        // Printing hasNext() values

        // Prints true because the iterator has two more values
        System.out.println(iterator.hasNext());

        // Go to the next value using the next() method
        iterator.next();

        // Prints true because the iterator has one more value
        System.out.println(iterator.hasNext());

        // Go to the next value using the next() method
        iterator.next();

        // Prints false because the iterator has no more values
        System.out.println(iterator.hasNext());
    }
}

true
true
false

(B) next() method

If hasNext() has indicated that another element exists and some execution is to be performed on it, the next() method is used to retrieve that element. The next() methods of the Iterator and ListIterator return the next element of the collection. And if there is a need to remove this element, the remove() method is used.

Return type: Same as the element type of the collection, such as ArrayList, LinkedList, etc.

Return value: The next element in the iteration.

Exception: Throws NoSuchElementException if the iteration has no more elements.
Example:

Java

// Java program to demonstrate
// the use of next() method

// Importing java input output classes
import java.io.*;
// Importing all classes from
// java.util package
import java.util.*;

// Class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an ArrayList
        // (Declaring ArrayList of String type)
        ArrayList<String> list = new ArrayList<String>();

        // Adding elements to the above List at
        // the end of the list
        // Custom inputs
        list.add("Element1");
        list.add("Element2");
        list.add("Element3");

        // Declaring the Iterator
        Iterator<String> iterator = list.iterator();

        // Printing values showcasing the next() method, which
        // traverses over elements
        // only in the forward direction

        // Prints the first element traversed
        System.out.println(iterator.next());

        // Prints the succeeding element
        System.out.println(iterator.next());

        // Prints another element succeeding
        // the previous element
        System.out.println(iterator.next());
    }
}

Element1
Element2
Element3
[ { "code": null, "e": 28, "s": 0, "text": "\n15 Sep, 2021" }, { "code": null, "e": 393, "s": 28, "text": "In Java, objects are stored dynamically using objects. Now in order to traverse across these objects is done using a for-each loop, iterators, and comparators. Here will be discussing iterators. The iterator interface allows visiting elements in containers one by one which indirectly signifies retrieval of elements of the collection in forwarding direction only." }, { "code": null, "e": 439, "s": 393, "text": "This interface compromises of three methods :" }, { "code": null, "e": 463, "s": 439, "text": "next()hasNext()remove()" }, { "code": null, "e": 470, "s": 463, "text": "next()" }, { "code": null, "e": 480, "s": 470, "text": "hasNext()" }, { "code": null, "e": 489, "s": 480, "text": "remove()" }, { "code": null, "e": 510, "s": 489, "text": "(A) hasNext() Method" }, { "code": null, "e": 947, "s": 510, "text": "hasNext() method is used to check whether there is any element remaining in the List. This method is a boolean type method that returns only true and false as discussed as it is just used for checking purposes. The hasNext() methods of the Iterator and List Iterator returns true if the collection object over which is used to check during traversal whether the pointing element has the next element. If not it simply returns false. So," }, { "code": null, "e": 1059, "s": 947, "text": "Return Value:\n True - if iteration has more elements \n False - if iteration has no more elements" }, { "code": null, "e": 1080, "s": 1059, "text": "Return type: boolean" }, { "code": null, "e": 1089, "s": 1080, "text": "Example:" }, { "code": null, "e": 1094, "s": 1089, "text": "Java" }, { "code": "// Java program to demonstrate// the use of hasNext() method // Importing java input output classes// Importing all classesfrom// java.util packageimport java.io.*;import java.util.*; // Classclass GFG { // Main driver method public static void main(String[] args) { // Creating an ArrayList // Declaring the ArrayList ArrayList<String> list = new ArrayList<String>(); // Adding (appending) new elements at // the end of the List // Custom inputs list.add(\"Geeks\"); list.add(\"for Geeks\"); // Declaring the Iterator Iterator<String> iterator = list.iterator(); // Printing hasNext() values // Prints true because iterator has two more values System.out.println(iterator.hasNext()); // Go to next value using next() method iterator.next(); // Prints true because iterator has one more values System.out.println(iterator.hasNext()); // Go to next value using next() method iterator.next(); // Prints false because iterator has no more values System.out.println(iterator.hasNext()); }}", "e": 2243, "s": 1094, "text": null }, { "code": null, "e": 2262, "s": 2246, "text": "true\ntrue\nfalse" }, { "code": null, "e": 2280, "s": 2262, "text": "(B) next() method" }, { "code": null, "e": 2677, "s": 2280, "text": "If there is an element after where hasNext() has returned false on which some execution is to be performed then this method is used to display that element on which execution is supposed to be carried on with help of this method. The next() methods of the Iterator and List Iterator return the next element of the collection. And if there is a need to remove this element remove() method is used." }, { "code": null, "e": 2746, "s": 2677, "text": "Return type: Same as collection such as ArrayList, Linked List, etc." }, { "code": null, "e": 2795, "s": 2746, "text": "Return value: The next element in the iteration." 
}, { "code": null, "e": 2875, "s": 2795, "text": "Exception: Throws NoSuchElementException if the iteration has no more elements." }, { "code": null, "e": 2884, "s": 2875, "text": "Example:" }, { "code": null, "e": 2889, "s": 2884, "text": "Java" }, { "code": "// Java program to demonstrate// the use of next() method // Importing java input output classesimport java.io.*;// Importing all classes from// java.util packageimport java.util.*; // Classclass GFG { // Main driver method public static void main(String[] args) { // Creating an ArrayList // (Declaring ArrayList of String type) ArrayList<String> list = new ArrayList<String>(); // Adding elements to above List at // the end of the list // Custom inputs list.add(\"Element1\"); list.add(\"Element2\"); list.add(\"Element3\"); // Declaring the Iterator Iterator<String> iterator = list.iterator(); // Printing values showcasing next() method which // shows traversal over elements // only in forward direction // Prints first element traversed System.out.println(iterator.next()); // Prints the succeeding element System.out.println(iterator.next()); // Prints another eleemnt succeeding // to previous element System.out.println(iterator.next()); }}", "e": 4001, "s": 2889, "text": null }, { "code": null, "e": 4028, "s": 4001, "text": "Element1\nElement2\nElement3" }, { "code": null, "e": 4047, "s": 4028, "text": "ranjanashutosh2003" }, { "code": null, "e": 4064, "s": 4047, "text": "Java-Collections" }, { "code": null, "e": 4071, "s": 4064, "text": "Picked" }, { "code": null, "e": 4095, "s": 4071, "text": "Technical Scripter 2020" }, { "code": null, "e": 4114, "s": 4095, "text": "Difference Between" }, { "code": null, "e": 4119, "s": 4114, "text": "Java" }, { "code": null, "e": 4138, "s": 4119, "text": "Technical Scripter" }, { "code": null, "e": 4143, "s": 4138, "text": "Java" }, { "code": null, "e": 4160, "s": 4143, "text": "Java-Collections" }, { "code": null, "e": 4258, "s": 4160, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 4319, "s": 4258, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 4387, "s": 4319, "text": "Difference Between Method Overloading and Method Overriding in Java" }, { "code": null, "e": 4436, "s": 4387, "text": "Similarities and Difference between Java and C++" }, { "code": null, "e": 4491, "s": 4436, "text": "Difference between Internal and External fragmentation" }, { "code": null, "e": 4557, "s": 4491, "text": "Difference between Compile-time and Run-time Polymorphism in Java" }, { "code": null, "e": 4572, "s": 4557, "text": "Arrays in Java" }, { "code": null, "e": 4616, "s": 4572, "text": "Split() String method in Java with examples" }, { "code": null, "e": 4652, "s": 4616, "text": "Arrays.sort() in Java with examples" }, { "code": null, "e": 4703, "s": 4652, "text": "Object Oriented Programming (OOPs) Concept in Java" } ]
Multiple criteria for aggregation on PySpark Dataframe
19 Dec, 2021

In this article, we will discuss how to do multiple criteria aggregation on a PySpark Dataframe.

In PySpark, groupBy() is used to collect identical data into groups on the PySpark DataFrame and perform aggregate functions on the grouped data. By this, we can do multiple aggregations at a time.

Syntax: dataframe.groupBy(‘column_name_group’).agg(functions)

where,

column_name_group is the column to be grouped
functions are the aggregation functions

Let's first understand what the aggregations are. They are available in the functions module in pyspark.sql, so we need to import it to start with. The aggregate functions are:

count(): This will return the count of rows for each group.

Syntax: functions.count(‘column_name’)

mean(): This will return the mean of values for each group.

Syntax: functions.mean(‘column_name’)

max(): This will return the maximum of values for each group.

Syntax: functions.max(‘column_name’)

min(): This will return the minimum of values for each group.

Syntax: functions.min(‘column_name’)

sum(): This will return the total of values for each group.

Syntax: functions.sum(‘column_name’)

avg(): This will return the average of values for each group.

Syntax: functions.avg(‘column_name’)

We can aggregate multiple functions using the following syntax.

Syntax: dataframe.groupBy(‘column_name_group’).agg(functions....)

Example: Multiple aggregations on the DEPT column with the FEE column

Python3

# importing module
import pyspark

# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession

# import functions
from pyspark.sql import functions

# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()

# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]]

# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']

# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)

# aggregating DEPT column with min, max, sum, mean, avg and count functions
dataframe.groupBy('DEPT').agg(functions.min('FEE'),
                              functions.max('FEE'),
                              functions.sum('FEE'),
                              functions.mean('FEE'),
                              functions.count('FEE'),
                              functions.avg('FEE')).show()

Output:

Example 2: Multiple aggregations grouping by the DEPT and NAME columns

Python3

# importing module
import pyspark

# importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession

# import functions
from pyspark.sql import functions

# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()

# list of student data
data = [["1", "sravan", "IT", 45000],
        ["2", "ojaswi", "CS", 85000],
        ["3", "rohith", "CS", 41000],
        ["4", "sridevi", "IT", 56000],
        ["5", "bobby", "ECE", 45000],
        ["6", "gayatri", "ECE", 49000],
        ["7", "gnanesh", "CS", 45000],
        ["8", "bhanu", "Mech", 21000]]

# specify column names
columns = ['ID', 'NAME', 'DEPT', 'FEE']

# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)

# aggregating DEPT, NAME columns with min, max,
# sum, mean, avg and count functions
dataframe.groupBy('DEPT', 'NAME').agg(functions.min('FEE'),
                                      functions.max('FEE'),
                                      functions.sum('FEE'),
                                      functions.mean('FEE'),
                                      functions.count('FEE'),
                                      functions.avg('FEE')).show()

Output:
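A small follow-on note, not covered in the article above: the aggregate columns produced by agg() get generated names such as min(FEE), which can be awkward to reference later. Below is a hedged sketch, reusing the dataframe and the functions import from the examples above, showing how .alias() renames them; the chosen names are illustrative.

Python3

# Hedged sketch (not from the original article): renaming aggregate
# columns with .alias() so the results are easier to reference downstream.
# Reuses `dataframe` and `functions` from the examples above.
dataframe.groupBy('DEPT').agg(
    functions.min('FEE').alias('min_fee'),
    functions.max('FEE').alias('max_fee'),
    functions.avg('FEE').alias('avg_fee')).show()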
[ { "code": null, "e": 28, "s": 0, "text": "\n19 Dec, 2021" }, { "code": null, "e": 123, "s": 28, "text": "In this article, we will discuss how to do Multiple criteria aggregation on PySpark Dataframe." }, { "code": null, "e": 328, "s": 123, "text": "In PySpark, groupBy() is used to collect the identical data into groups on the PySpark DataFrame and perform aggregate functions on the grouped data. So by this we can do multiple aggregations at a time." }, { "code": null, "e": 336, "s": 328, "text": "Syntax:" }, { "code": null, "e": 390, "s": 336, "text": "dataframe.groupBy(‘column_name_group’).agg(functions)" }, { "code": null, "e": 398, "s": 390, "text": "where, " }, { "code": null, "e": 444, "s": 398, "text": "column_name_group is the column to be grouped" }, { "code": null, "e": 484, "s": 444, "text": "functions are the aggregation functions" }, { "code": null, "e": 656, "s": 484, "text": "Lets understand what are the aggregations first. They are available in functions module in pyspark.sql, so we need to import it to start with. The aggregate functions are:" }, { "code": null, "e": 716, "s": 656, "text": "count(): This will return the count of rows for each group." }, { "code": null, "e": 724, "s": 716, "text": "Syntax:" }, { "code": null, "e": 755, "s": 724, "text": "functions.count(‘column_name’)" }, { "code": null, "e": 815, "s": 755, "text": "mean(): This will return the mean of values for each group." }, { "code": null, "e": 823, "s": 815, "text": "Syntax:" }, { "code": null, "e": 853, "s": 823, "text": "functions.mean(‘column_name’)" }, { "code": null, "e": 915, "s": 853, "text": "max(): This will return the maximum of values for each group." }, { "code": null, "e": 923, "s": 915, "text": "Syntax:" }, { "code": null, "e": 952, "s": 923, "text": "functions.max(‘column_name’)" }, { "code": null, "e": 1014, "s": 952, "text": "min(): This will return the minimum of values for each group." }, { "code": null, "e": 1022, "s": 1014, "text": "Syntax:" }, { "code": null, "e": 1051, "s": 1022, "text": "functions.min(‘column_name’)" }, { "code": null, "e": 1108, "s": 1051, "text": "sum(): This will return the total values for each group." }, { "code": null, "e": 1116, "s": 1108, "text": "Syntax:" }, { "code": null, "e": 1145, "s": 1116, "text": "functions.sum(‘column_name’)" }, { "code": null, "e": 1208, "s": 1145, "text": "avg(): This will return the average for values for each group." }, { "code": null, "e": 1216, "s": 1208, "text": "Syntax:" }, { "code": null, "e": 1245, "s": 1216, "text": "functions.avg(‘column_name’)" }, { "code": null, "e": 1309, "s": 1245, "text": "We can aggregate multiple functions using the following syntax." 
}, { "code": null, "e": 1317, "s": 1309, "text": "Syntax:" }, { "code": null, "e": 1375, "s": 1317, "text": "dataframe.groupBy(‘column_name_group’).agg(functions....)" }, { "code": null, "e": 1437, "s": 1375, "text": "Example: Multiple aggregations on DEPT column with FEE column" }, { "code": null, "e": 1445, "s": 1437, "text": "Python3" }, { "code": "# importing moduleimport pyspark # importing sparksession from pyspark.sql modulefrom pyspark.sql import SparkSession #import functionsfrom pyspark.sql import functions # creating sparksession and giving an app namespark = SparkSession.builder.appName('sparkdf').getOrCreate() # list of student datadata = [[\"1\", \"sravan\", \"IT\", 45000], [\"2\", \"ojaswi\", \"CS\", 85000], [\"3\", \"rohith\", \"CS\", 41000], [\"4\", \"sridevi\", \"IT\", 56000], [\"5\", \"bobby\", \"ECE\", 45000], [\"6\", \"gayatri\", \"ECE\", 49000], [\"7\", \"gnanesh\", \"CS\", 45000], [\"8\", \"bhanu\", \"Mech\", 21000] ] # specify column namescolumns = ['ID', 'NAME', 'DEPT', 'FEE'] # creating a dataframe from the lists of datadataframe = spark.createDataFrame(data, columns) # aggregating DEPT column with min.max,sum,mean,avg and count functionsdataframe.groupBy('DEPT').agg(functions.min('FEE'), functions.max('FEE'), functions.sum('FEE'), functions.mean('FEE'), functions.count('FEE'), functions.avg('FEE')).show()", "e": 2610, "s": 1445, "text": null }, { "code": null, "e": 2618, "s": 2610, "text": "Output:" }, { "code": null, "e": 2683, "s": 2618, "text": "Example 2: Multiple aggregation in grouping dept and name column" }, { "code": null, "e": 2691, "s": 2683, "text": "Python3" }, { "code": "# importing moduleimport pyspark # importing sparksession from pyspark.sql modulefrom pyspark.sql import SparkSession #import functionsfrom pyspark.sql import functions # creating sparksession and giving an app namespark = SparkSession.builder.appName('sparkdf').getOrCreate() # list of student datadata = [[\"1\", \"sravan\", \"IT\", 45000], [\"2\", \"ojaswi\", \"CS\", 85000], [\"3\", \"rohith\", \"CS\", 41000], [\"4\", \"sridevi\", \"IT\", 56000], [\"5\", \"bobby\", \"ECE\", 45000], [\"6\", \"gayatri\", \"ECE\", 49000], [\"7\", \"gnanesh\", \"CS\", 45000], [\"8\", \"bhanu\", \"Mech\", 21000] ] # specify column namescolumns = ['ID', 'NAME', 'DEPT', 'FEE'] # creating a dataframe from the lists of datadataframe = spark.createDataFrame(data, columns) # aggregating DEPT, NAME column with min.max,# sum,mean,avg and count functionsdataframe.groupBy('DEPT', 'NAME').agg(functions.min('FEE'), functions.max('FEE'), functions.sum('FEE'), functions.mean('FEE'), functions.count('FEE'), functions.avg('FEE')).show()", "e": 3915, "s": 2691, "text": null }, { "code": null, "e": 3923, "s": 3915, "text": "Output:" }, { "code": null, "e": 3930, "s": 3923, "text": "Picked" }, { "code": null, "e": 3945, "s": 3930, "text": "Python-Pyspark" }, { "code": null, "e": 3952, "s": 3945, "text": "Python" }, { "code": null, "e": 4050, "s": 3952, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 4068, "s": 4050, "text": "Python Dictionary" }, { "code": null, "e": 4110, "s": 4068, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 4132, "s": 4110, "text": "Enumerate() in Python" }, { "code": null, "e": 4167, "s": 4132, "text": "Read a file line by line in Python" }, { "code": null, "e": 4193, "s": 4167, "text": "Python String | replace()" }, { "code": null, "e": 4225, "s": 4193, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 4254, "s": 4225, "text": "*args and **kwargs in Python" }, { "code": null, "e": 4281, "s": 4254, "text": "Python Classes and Objects" }, { "code": null, "e": 4302, "s": 4281, "text": "Python OOPs Concepts" } ]
How to check whether a column value is less than or greater than a certain value in R?
To check whether a column value is less than or greater than a certain value, we can use the with function, and the output will be a logical vector with TRUE where the condition is satisfied and FALSE where it is not. For example, if we have a column, say x, of an R data frame df and we want to check whether any of the values in x is greater than 10, it can be done by using with(df,df$x>10).

Consider the below data frame:

Live Demo

> set.seed(1002)
> x1<-rpois(20,5)
> y1<-rpois(20,8)
> z1<-rpois(20,3)
> df1<-data.frame(x1,y1,z1)
> df1

x1 y1 z1
1 5 6 1
2 7 8 2
3 5 9 2
4 3 4 2
5 4 10 3
6 6 6 1
7 10 8 6
8 6 3 6
9 4 12 1
10 8 13 2
11 6 7 4
12 8 9 3
13 5 8 4
14 5 4 3
15 2 7 5
16 4 7 4
17 6 14 3
18 7 6 2
19 8 7 1
20 5 9 5

Checking the conditions for different values:

> with(df1,df1$x1<5)
[1] FALSE FALSE FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
[13] FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE
> with(df1,df1$x1>5)
[1] FALSE TRUE FALSE FALSE FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE
[13] FALSE FALSE FALSE FALSE TRUE TRUE TRUE FALSE
> with(df1,df1$y1>6)
[1] FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE
[13] TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE
> with(df1,df1$y1>8)
[1] FALSE FALSE TRUE FALSE TRUE FALSE FALSE FALSE TRUE TRUE FALSE TRUE
[13] FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE
> with(df1,df1$z1>8)
[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
[13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
> with(df1,df1$z1<4)
[1] TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE
[13] FALSE TRUE FALSE FALSE TRUE TRUE TRUE FALSE
> with(df1,df1$x1>7)
[1] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE TRUE
[13] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE

Let's have a look at another example.

Live Demo

> x2<-sample(0:9,20,replace=TRUE)
> y2<-sample(0:9,20,replace=TRUE)
> z2<-sample(0:9,20,replace=TRUE)
> df2<-data.frame(x2,y2,z2)
> df2

x2 y2 z2
1 1 3 4
2 5 9 7
3 5 2 9
4 8 8 7
5 2 8 8
6 1 3 6
7 3 5 6
8 5 5 2
9 6 0 4
10 1 6 9
11 9 6 3
12 0 3 7
13 2 4 3
14 1 6 8
15 5 1 4
16 2 0 7
17 6 7 8
18 9 5 9
19 1 3 8
20 5 4 0

> with(df2,df2$x2>6)
[1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE
[13] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE
> with(df2,df2$x2>5)
[1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE TRUE FALSE
[13] FALSE FALSE FALSE FALSE TRUE TRUE FALSE FALSE
> with(df2,df2$y2>5)
[1] FALSE TRUE FALSE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
[13] FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE
> with(df2,df2$y2>3)
[1] FALSE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE
[13] TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE
> with(df2,df2$y2<3)
[1] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
[13] FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE
> with(df2,df2$x2<5)
[1] TRUE FALSE FALSE FALSE TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE
[13] TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE
[ { "code": null, "e": 1500, "s": 1062, "text": "To check whether a column value is less than or greater than a certain value, we can use with function and the output will be a logical vector representing values with TRUE when the condition is satisfied and FALSE when the condition is not satisfied. For example, if we have a column say x of an R data frame df and we want to check whether any of the values in x is greater than 10 or not then it can be done by using with(df,df$x>10)." }, { "code": null, "e": 1531, "s": 1500, "text": "Consider the below data frame:" }, { "code": null, "e": 1541, "s": 1531, "text": "Live Demo" }, { "code": null, "e": 1646, "s": 1541, "text": "> set.seed(1002)\n> x1<-rpois(20,5)\n> y1<-rpois(20,8)\n> z1<-rpois(20,3)\n> df1<-data.frame(x1,y1,z1)\n> df1" }, { "code": null, "e": 1831, "s": 1646, "text": "x1 y1 z1\n1 5 6 1\n2 7 8 2\n3 5 9 2\n4 3 4 2\n5 4 10 3\n6 6 6 1\n7 10 8 6\n8 6 3 6\n9 4 12 1\n10 8 13 2\n11 6 7 4\n12 8 9 3\n13 5 8 4\n14 5 4 3\n15 2 7 5\n16 4 7 4\n17 6 14 3\n18 7 6 2\n19 8 7 1\n20 5 9 5" }, { "code": null, "e": 1877, "s": 1831, "text": "Checking the conditions for different values." }, { "code": null, "e": 2874, "s": 1877, "text": "> with(df1,df1$x1<5)\n[1] FALSE FALSE FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE\n[13] FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE\n> with(df1,df1$x1>5)\n[1] FALSE TRUE FALSE FALSE FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE\n[13] FALSE FALSE FALSE FALSE TRUE TRUE TRUE FALSE\n> with(df1,df1$y1>6)\n[1] FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE\n[13] TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE\n> with(df1,df1$y1>8)\n[1] FALSE FALSE TRUE FALSE TRUE FALSE FALSE FALSE TRUE TRUE FALSE TRUE\n[13] FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE\n> with(df1,df1$z1>8)\n[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n[13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n> with(df1,df1$z1<4)\n[1] TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE\n[13] FALSE TRUE FALSE FALSE TRUE TRUE TRUE FALSE\n> with(df1,df1$x1>7)\n[1] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE TRUE\n[13] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE" }, { "code": null, "e": 2912, "s": 2874, "text": "Let’s have a look at another example." 
}, { "code": null, "e": 2922, "s": 2912, "text": "Live Demo" }, { "code": null, "e": 3058, "s": 2922, "text": "> x2<-sample(0:9,20,replace=TRUE)\n> y2<-sample(0:9,20,replace=TRUE)\n> z2<-sample(0:9,20,replace=TRUE)\n> df2<-data.frame(x2,y2,z2)\n> df2" }, { "code": null, "e": 3238, "s": 3058, "text": "x2 y2 z2\n1 1 3 4\n2 5 9 7\n3 5 2 9\n4 8 8 7\n5 2 8 8\n6 1 3 6\n7 3 5 6\n8 5 5 2\n9 6 0 4\n10 1 6 9\n11 9 6 3\n12 0 3 7\n13 2 4 3\n14 1 6 8\n15 5 1 4\n16 2 0 7\n17 6 7 8\n18 9 5 9\n19 1 3 8\n20 5 4 0" }, { "code": null, "e": 4097, "s": 3238, "text": "> with(df2,df2$x2>6)\n[1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE\n[13] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE\n> with(df2,df2$x2>5)\n[1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE TRUE FALSE\n[13] FALSE FALSE FALSE FALSE TRUE TRUE FALSE FALSE\n> with(df2,df2$y2>5)\n[1] FALSE TRUE FALSE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE\n[13] FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE\n> with(df2,df2$y2>3)\n[1] FALSE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE\n[13] TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE\n> with(df2,df2$y2<3)\n[1] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE\n[13] FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE\n> with(df2,df2$x2<5)\n[1] TRUE FALSE FALSE FALSE TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE\n[13] TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE" } ]
Control the Size of the Points in a Scatterplot in R
23 May, 2021

In this article, we are going to see how to control the size of the points in a scatterplot in R Programming language.

We will control the size of the points in a scatterplot using the cex argument of the plot function. In this approach, the user needs to call the built-in function plot() and pass the cex parameter, which takes a float value that controls the size of the points of the given scatterplot.

Syntax: plot(Data_x, Data_y, cex)

Example 1: Increase the point size in a scatterplot.

In this example, we will be increasing the size of the points of the given scatterplot using the cex argument of the plot function. Here cex is set to 4, which increases the size of the points of the scatterplot.

R

x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y = c(7, 9, 6, 2, 8, 1, 3, 4, 5, 8)

plot(x, y, cex = 4)

Output:

Example 2: Decrease the point size in a scatterplot.

In this example, we will be decreasing the size of the points of the given scatterplot using the cex argument of the plot function. Here cex is set to 0.6, which decreases the size of the points of the scatterplot.

R

x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y = c(7, 9, 6, 2, 8, 1, 3, 4, 5, 8)

plot(x, y, cex = 0.6)

Output:
[ { "code": null, "e": 25242, "s": 25214, "text": "\n23 May, 2021" }, { "code": null, "e": 25361, "s": 25242, "text": "In this article, we are going to see how to control the size of the points in a scatterplot in R Programming language." }, { "code": null, "e": 25737, "s": 25361, "text": "We will Control the size of the points in a scatterplot using cex argument of the plot function. In this approach to control the size of the points in a scatterplot, the user needs to call the in-built function plot() and using the cex parameter which will take input value as a float in this function of control the size of the points of the given scatterplot in r language." }, { "code": null, "e": 25772, "s": 25737, "text": "Syntax: plot( Data_x, Data_y, cex)" }, { "code": null, "e": 25825, "s": 25772, "text": "Example 1: Increase the point size in a scatterplot." }, { "code": null, "e": 26062, "s": 25825, "text": "In this function, we will be increasing the size of the points of the given scatterplot using the cex argument of the plot function of r language. Here cex will be set to 4 which will increase the size of the points of the scatterplot. " }, { "code": null, "e": 26070, "s": 26062, "text": "Python3" }, { "code": "x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)y = c(7, 9, 6, 2, 8, 1, 3, 4, 5, 8) plot(x, y, cex = 4) ", "e": 26166, "s": 26070, "text": null }, { "code": null, "e": 26174, "s": 26166, "text": "Output:" }, { "code": null, "e": 26229, "s": 26174, "text": "Example 2: Decreasing the point size in a scatterplot." }, { "code": null, "e": 26468, "s": 26229, "text": "In this function, we will be decreasing the size of the points of the given scatterplot using the cex argument of the plot function of r language. Here cex will be set to 0.6 which will decrease the size of the points of the scatterplot. " }, { "code": null, "e": 26470, "s": 26468, "text": "R" }, { "code": "x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)y = c(7, 9, 6, 2, 8, 1, 3, 4, 5, 8) plot( x, y, cex = 0.6) ", "e": 26567, "s": 26470, "text": null }, { "code": null, "e": 26575, "s": 26567, "text": "Output:" }, { "code": null, "e": 26582, "s": 26575, "text": "Picked" }, { "code": null, "e": 26591, "s": 26582, "text": "R-Charts" }, { "code": null, "e": 26600, "s": 26591, "text": "R-Graphs" }, { "code": null, "e": 26608, "s": 26600, "text": "R-plots" }, { "code": null, "e": 26619, "s": 26608, "text": "R Language" }, { "code": null, "e": 26717, "s": 26619, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26769, "s": 26717, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 26807, "s": 26769, "text": "How to Change Axis Scales in R Plots?" }, { "code": null, "e": 26842, "s": 26807, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 26900, "s": 26842, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 26949, "s": 26900, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 26986, "s": 26949, "text": "How to import an Excel File into R ?" }, { "code": null, "e": 27036, "s": 26986, "text": "How to filter R dataframe by multiple conditions?" }, { "code": null, "e": 27079, "s": 27036, "text": "Replace Specific Characters in String in R" }, { "code": null, "e": 27096, "s": 27079, "text": "R - if statement" } ]
How to use regular expressions with TestNG?
We use regular expressions in TestNG to work with a group of test methods that are named with a certain pattern.

The testng.xml file:

<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name = "Tutorialspoint Test">
   <test name = "Test Cycle">
      <classes>
         <class name = "TestRegularExpression">
            <methods>
               <exclude name = "Payment.*"/>
            </methods>
         </class>
      </classes>
   </test>
</suite>

All the test methods whose names start with Payment will be excluded from the regression suite.

@Test
public void PaymentHistory(){
   System.out.println("Payment history validation is successful");
}
@Test
public void Login(){
   System.out.println("Login is successful");
}
@Test
public void PaymentDefault(){
   System.out.println("Payment default verification is successful");
}

Login() will be executed, but all the methods whose names start with Payment will be excluded from execution. This is achieved using the regular expression (Payment.*).
[ { "code": null, "e": 1175, "s": 1062, "text": "We use regular expressions in TestNG to work with a group of test\nmethods that are named with a certain pattern." }, { "code": null, "e": 1192, "s": 1175, "text": "Testng xml file." }, { "code": null, "e": 1546, "s": 1192, "text": "<?xml version = \"1.0\" encoding = \"UTF-8\"?>\n<!DOCTYPE suite SYSTEM \"http://testng.org/testng-1.0.dtd\" >\n<suite name = \"Tutorialspoint Test \">\n <test name = \"Test Cycle\">\n <classes>\n <class name = \"TestRegularExpression\" />\n <methods>\n <exclude name= “Payment.*”/>\n </methods>\n </classes>\n </test>\n</suite>" }, { "code": null, "e": 1638, "s": 1546, "text": "All the test methods with starting name Payment will be excluded from the\nregression suite." }, { "code": null, "e": 1925, "s": 1638, "text": "@Test\npublic void PaymentHistory(){\n System.out.println(\"Payment history validation is successful”);\n}\n@Test\npublic void Login(){\n System.out.println(\"Login is successful”);\n}\n@Test\npublic void PaymentDefault(){\n System.out.println(\"Payment default verification is successful”);\n}" }, { "code": null, "e": 2085, "s": 1925, "text": "Login() will be executed, but all the methods starting with name Payment will be excluded from execution. This is achieved using regular expression(Payment.*)." } ]
Repetition in Songs: A Python Tutorial | by Okoh Anita | Towards Data Science
Everyone has heard a song or knows what a song sounds like. I can carelessly say everyone can define a song ...in their own words. Just for the benefit of the doubt, a song (according to Wikipedia) is a single work of music that is typically intended to be sung by the human voice with distinct and fixed pitches and patterns, using sound and silence and a variety of forms that often include the repetition of sections.

In his journal article called "The Complexity of Songs", computer scientist Donald Knuth capitalized on the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts. As some may waste no time agreeing with his notion, it does raise some questions like: Does repetitiveness really help songs become a hit? Is music really becoming more repetitive over time?

In an attempt to teach some basic Python code in the form of a case study, I am going to test this hypothesis (are popular songs really repetitive?) with one of my favorite songs. One way to test this hypothesis is to figure out the unique words and calculate the fraction of those words to the total number of words in a song.

In this tutorial, we'll cover:

Variables and data types
Lists and Dictionaries
Basic arithmetic operations
Built-in Functions and Loops

To get the most out of this tutorial, you can follow along by running the codes yourself.

The music we will be using is entitled 'Perfect' by Ed Sheeran. You can copy the lyrics here. However, the lyrics I am using in this analysis were cleaned to get a conclusive result. For example, I changed words like "we'll" to "we will", etc. You can get my version of the lyrics here.

The editor used was Jupyter Notebook. Here is a quick tutorial on how to install and use it.

For the purpose of this case study, we will streamline our hypothesis by asking two major questions:

How many unique words were used compared to the whole lyrics of our case study song, Perfect by Ed Sheeran?

What are the most repetitive words used and how many times were they used throughout the song?

Let's get started analyzing already.

1. A String is a list of characters. A character is anything you can type on the keyboard in one keystroke, like a letter, a number, or a backslash. However, Python recognizes strings as anything that is delimited by quotation marks, either a double quote (" ") or a single quote (' ') at the beginning and end of a character or text. For example: 'Hello world'

For this case study, a string is our lyrics, as seen in the sketch below.

2. Variables are typically descriptive names, words or symbols used to assign or store values. In other words, they are storage placeholders for any datatype. They are quite handy in order to refer to a value at any time. A variable is always assigned with an equal sign, followed by the value of the variable. (A way to view your code output is to use a print function. As you may already know, with Jupyter Notebook, an output can be viewed without a print function.)

To store the lyrics, we will assign it a variable named perfect_lyrics.

3. Lists can be created simply by putting different comma-separated values between square brackets. A list can have any number of items, and they may be of different types (integer, float, string etc.). It can even have another list as an item. For example:

list1 = [1, 'mouse', 3.5, [2, 'is', 5.0]]  # 3.5 is a float

Now that we have gotten a sense of what a list looks like, let's go back to our data.

Since one of our aims is to figure out the number of unique words used, we will need to do a bit of counting, i.e. to count each word. In order to achieve this, we will not only have to put our string into a list but will have to separate each word using the .split() method. Therefore, our dataset will look like the sketch below.
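Since the original post's code screenshots are not preserved here, the following is a hedged reconstruction of the two steps above, not the author's exact code. The lyrics string is abbreviated, so the output shown is only indicative.

# Hedged reconstruction of the missing code screenshots. The lyrics are
# abbreviated; paste the full cleaned lyrics as one string to reproduce
# the article's numbers.
perfect_lyrics = "I found a love for me darling just dive right in ..."

# Split the single string into a list with one string per word
split_lyrics = perfect_lyrics.split()
print(split_lyrics)
# e.g. ['I', 'found', 'a', 'love', 'for', 'me', ...]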
From the above output, you will notice that each word has been separated into an independent string, and the whole lyrics make up the list called split_lyrics (which is also a variable).

We will also need to separate the unique words from the rest of the words and get the count. To do this, we will need to use Python functions.

A function is a block of organized, reusable code that is used to perform a single, related action. Python has many built-in functions like print() - to show your output, list() - to create a list, len() - to count the number of items in a list or characters in a string, etc. Python also allows you to create your own functions, but we will not be creating our own function in this case study.

We will be using a few functions in this case study, starting with the set() function. To separate the unique words from the whole lyrics, we need the set() and print() functions. To count the number of words in the whole lyrics, we need the len() function, and we do the same for the unique words extracted (see the consolidated sketch at the end of this part).

Our above analysis has helped answer our first question: How many unique words were used compared to the whole lyrics of the song? Simply put, there were 129 unique words out of over 290 words in total.

Our next goal is to figure out the second part of the question: What are the most repetitive words used and how many times were they used?

In order to answer this question, we will need to learn more data structures.

4. Dictionaries in Python are unordered collections of items. While other data structures have only values as elements, a dictionary has a key:value pair. Each key-value pair maps the key to its associated value. Dictionaries are optimized to retrieve values when the key is known. You can define a dictionary by enclosing a comma-separated list of key-value pairs in curly braces ({}). A colon (:) separates each key from its associated value. An example is a simple English-German dictionary:

colours = {"red": "rot", "green": "grün", "blue": "blau", "yellow": "gelb"}
print(colours["red"])
# output will be the value of 'red' (the key), which is 'rot'

empty_dict = {}  # a dictionary can also be empty, waiting to be filled up with information

5. Loops are great when trying to run the same block of code over and over again. In Python, there are two types of loops: for and while. For this analysis, we will focus on the for loop. The for loop is used to iterate over the elements of a sequence. It is often used when you have a piece of code which you want to repeat 'n' number of times.

It works like this: "for all elements in a list or dictionary, do this". For example:

list_number = [2, 3, 4]

for item in list_number:
    multiple = 2 * item
    print(multiple)

Going back to our dataset: to know the most repeated word(s), we need to know the number of times each word appeared in the lyrics. To do that, we will need to use both a dictionary (to store each word with its corresponding count) and a for loop (to iterate the counting process as each word appears). First, we store the unique words in a dictionary; then we use the for loop to count the number of times each unique word appears in the whole lyrics. Once we have the counts, we sort them from highest to lowest using the sorted() function. There seem to be too many words that occurred only once, and because our aim is to find the most popular words, we narrow the list down to the top 10 words by slicing it (you can learn more about slicing here), then change it back to a dictionary. An additional question will be: What are the 10 most popular words in the song called Perfect by Ed Sheeran? We can easily extract this information by using the dictionary's keys() method and then creating a list of those words. From this, we can confidently say that the most popular word used in the song Perfect by Ed Sheeran is 'I', which appeared 24 times. The consolidated sketch below reconstructs these steps.
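This consolidated sketch is a hedged reconstruction of the code screenshots from the original post, not the author's exact code; the variable names are illustrative, and it assumes split_lyrics from the earlier sketch.

# Hedged reconstruction of the missing screenshots (illustrative names).
# Assumes `split_lyrics` from the sketch above.

# Unique words via set(), then the two counts used to answer question 1
unique_words = set(split_lyrics)
print(len(split_lyrics))   # total number of words, over 290 for the full lyrics
print(len(unique_words))   # number of unique words, 129 for the full lyrics

# Count how often each unique word appears, using a dictionary and a for loop
word_count = {}
for word in split_lyrics:
    word_count[word] = word_count.get(word, 0) + 1

# Sort the (word, count) pairs from highest to lowest count
sorted_counts = sorted(word_count.items(), key=lambda pair: pair[1], reverse=True)

# Slice out the top 10 and turn them back into a dictionary
top_10 = dict(sorted_counts[:10])
print(top_10)

# The 10 most popular words come from the dictionary's keys
popular_words = list(top_10.keys())
print(popular_words)   # for the full lyrics, 'I' tops the list with 24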
Let's improve our analysis a little further.

6. Visualizing data with Python. Various techniques have been developed for presenting data visually, but in this analysis we will focus strictly on one Python library, namely Matplotlib. Let's quickly visualize the analysis of the top 10 most popular words in our case study (a hedged plotting sketch is included at the end of this post).

What are some key insights we can draw from this case study? I can spot three:

Based on the song of choice, we see that unique words are only 44% of the total words used.

Out of the 129 unique words used, about 70 words appeared once, which is about 24% of the total words used.

The word 'I' was used 24 times in the entire song, i.e. 8% of the time, an 'I' was used.

This case study does help to see that a greater percentage of a song is made up of repetitive words. However, we cannot conclude from just one song that Donald Knuth's theory is true. We would need to analyze a lot more songs to conclude that hit songs are a result of the repetitiveness of words in a song. Although it is good to point out that Perfect by Ed Sheeran was actually a hit song... (and still is in my world).

If you want to improve your Python skills, these are some articles that may help:

24 Best Python Courses, Books, and Online Tutorials 2018

Want to Learn Python? - Functions, Explained, by Ben B

Python Excel Tutorial: The Definitive Guide

P.S. Like me, anyone can learn to be a data analyst, and if you want to be notified on my next project or updates on my learning, feel free to sign up to my newsletter.
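As a closing appendix, here is a hedged Matplotlib sketch of the bar chart referenced in the visualization section above; the original chart image is not preserved, so the labels and styling are assumptions.

# Hedged sketch of the bar chart from the visualization section
# (labels and styling assumed). Reuses `top_10` from the earlier sketch.
import matplotlib.pyplot as plt

words = list(top_10.keys())
counts = list(top_10.values())

plt.bar(words, counts)
plt.title("Top 10 most popular words in 'Perfect' by Ed Sheeran")
plt.xlabel("Word")
plt.ylabel("Number of appearances")
plt.show()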
[ { "code": null, "e": 592, "s": 172, "text": "Everyone has heard a song or knows what a song sounds like. I can carelessly say everyone can define a song ...in their own words. Just for the benefit of the doubt, a song (according to Wikipedia) is a single work of music that is typically intended to be sung by the human voice with distinct and fixed pitches and patterns using sound and silence and a variety of forms that often include the repetition of sections." }, { "code": null, "e": 991, "s": 592, "text": "In his journal article called “The complexity of Songs”, computer scientist Donald Knuth capitalized on the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts. As some may waste no time agreeing with his notion, it does raise some questions like: Does repetitiveness really help songs become a hit? Is music really becoming more repetitive over time?" }, { "code": null, "e": 1318, "s": 991, "text": "In an attempt to teach some basic python code in the form of a case study, I am going to test this hypothesis (Are popular songs really repetitive?)with one of my favorite songs. One way to test this hypothesis is to figure out the unique words and calculate the fraction of those words to the total number of words in a song." }, { "code": null, "e": 1349, "s": 1318, "text": "In this tutorial, we’ll cover:" }, { "code": null, "e": 1374, "s": 1349, "text": "Variables and data types" }, { "code": null, "e": 1397, "s": 1374, "text": "Lists and Dictionaries" }, { "code": null, "e": 1425, "s": 1397, "text": "Basic Arithmetic operations" }, { "code": null, "e": 1454, "s": 1425, "text": "Built-in Functions and Loops" }, { "code": null, "e": 1544, "s": 1454, "text": "To get the most out of this tutorial, you can follow along by running the codes yourself." }, { "code": null, "e": 1924, "s": 1544, "text": "The music we will be using is entitled ‘Perfect’ by Ed Sheeran. You can copy the lyrics here. However, the lyrics I am using in this analysis was cleaned out to get a conclusive result. For example, I changed words like “we’ll” to “we will” etc. You can get my version of the lyrics hereThe editor used was Jupiter NoteBook. Here is a quick tutorial on how to install and use it." }, { "code": null, "e": 2212, "s": 1924, "text": "The music we will be using is entitled ‘Perfect’ by Ed Sheeran. You can copy the lyrics here. However, the lyrics I am using in this analysis was cleaned out to get a conclusive result. For example, I changed words like “we’ll” to “we will” etc. You can get my version of the lyrics here" }, { "code": null, "e": 2305, "s": 2212, "text": "The editor used was Jupiter NoteBook. Here is a quick tutorial on how to install and use it." }, { "code": null, "e": 2406, "s": 2305, "text": "For the purpose of this case study, we will streamline our hypothesis by asking two major questions:" }, { "code": null, "e": 2515, "s": 2406, "text": "How many unique words were used compared to the whole lyrics of our case study song — Perfect by Ed Sheeran?" }, { "code": null, "e": 2610, "s": 2515, "text": "What are the most repetitive words used and how many times were they used throughout the song?" }, { "code": null, "e": 2646, "s": 2610, "text": "Let's get started analyzing already" }, { "code": null, "e": 3006, "s": 2646, "text": "1.A String is a list of characters. A character is anything you can type on the keyboard in one keystroke, like a letter, a number, or a backslash. 
However, Python recognizes strings as anything that is delimited by quotation marks either a double quote (“ “) or a single quote (‘ ‘) at the beginning and end of a character or text. For example: ‘Hello world’" }, { "code": null, "e": 3064, "s": 3006, "text": "For this case study, a string is our lyrics as seen below" }, { "code": null, "e": 3527, "s": 3064, "text": "2.Variables are typically descriptive names, words or symbols used to assign or store values. In other words, they are storage placeholders for any datatype. It is quite handy in order to refer to a value at any time. A variable is always assigned with an equal sign, followed by the value of thevariable. (A way to view your code output is to use a print function. As you may already know with Jupyter notebook, an output can be viewed without a print function)" }, { "code": null, "e": 3600, "s": 3527, "text": "To store the lyrics, we will assign it a variable named perfect_lyrics ." }, { "code": null, "e": 3852, "s": 3600, "text": "3.Lists can be created simply by putting different comma-separated values between square brackets. It can have any number of items and they may be of different types (integer, float, string etc.). It can even have another list as an item. For example:" }, { "code": null, "e": 3907, "s": 3852, "text": "list1 = [1,'mouse', 3.5, [2,'is',5.0]] #3.5 is a float" }, { "code": null, "e": 3991, "s": 3907, "text": "Now that we have gotten a sense of what a list looks like. Let go back to our data." }, { "code": null, "e": 4314, "s": 3991, "text": "Since one of our aims is to figure out the number of unique words used, it means we will need to do a bit of counting i.e to count each word. In order to achieve these, we will not only have to put our string into a list but will have to separate each word using a .split()method. Therefore our dataset will look like this" }, { "code": null, "e": 4320, "s": 4314, "text": "Input" }, { "code": null, "e": 4327, "s": 4320, "text": "Output" }, { "code": null, "e": 4510, "s": 4327, "text": "From the above output, you will notice that each word has been separated into independent strings. And the whole lyrics make up the list called split_lyrics(which is also a variable)" }, { "code": null, "e": 4648, "s": 4510, "text": "We will also need to separate the unique words of the rest of the words and the count. To do this, we would need to use python functions." }, { "code": null, "e": 5030, "s": 4648, "text": "A function is a block of organized, reusable code that is used to perform a single, related action. Python has many built-in functions like print( )-to show your output, list( )-to create a list, len( )- to count the number of characters in a list or string, etc. Python also allows you to create your own functions. But we will not be creating our own function in this case study." }, { "code": null, "e": 5223, "s": 5030, "text": "We will be using a few functions in the case study but we will start with the set( ) function. 
To separate the unique words from the whole lyrics, we need a set( ) and a print ( ) function i.e" }, { "code": null, "e": 5229, "s": 5223, "text": "Input" }, { "code": null, "e": 5236, "s": 5229, "text": "Output" }, { "code": null, "e": 5319, "s": 5236, "text": "To count the number of words in the whole lyrics, we would need a len ( ) function" }, { "code": null, "e": 5334, "s": 5319, "text": "Input & Output" }, { "code": null, "e": 5380, "s": 5334, "text": "Doing the same for the unique words extracted" }, { "code": null, "e": 5580, "s": 5380, "text": "Our above analysis has helped answer our first question: How many unique words were used as compared to the whole lyrics of the song?Simply put, 129 unique words were used in over 290 words in total." }, { "code": null, "e": 5718, "s": 5580, "text": "Our next goal is to figure out the second part of the question What are the most repetitive words used and how many times were they used?" }, { "code": null, "e": 5796, "s": 5718, "text": "In order to answer this question, we will need to learn more data structures." }, { "code": null, "e": 6287, "s": 5796, "text": "4. Dictionaries in Python are unordered collections of items. While data structures have only values as an element, A dictionary has a key:value pair. Each key-value pair maps the key to its associated value. Dictionaries are optimized to retrieve values when the key is known. You can define a dictionary by enclosing a comma-separated list of key-value pairs in curly braces ({}). A colon (:) separates each key from its associated value. An example is a simple German-English dictionary:" }, { "code": null, "e": 6532, "s": 6287, "text": "colours = {\"red\" : \"rot\", \"green\" : \"grün\", \"blue\" : \"blau\", \"yellow\":\"gelb\"}print en_de[\"red\"]#output will be the value of 'red'( the key) which is 'rot'empty_dict = {} #a dictionary can also be empty waiting to be filled up with information " }, { "code": null, "e": 6878, "s": 6532, "text": "5. Loopsare great when trying to run the same block of code over and over again. In python, there are two types of loops: for and while. For this analysis, we will focus more on the for loop. The for loop is used to iterate over elements of a sequence. It is often used when you have a piece of code which you want to repeat ‘n’ number of times." }, { "code": null, "e": 6952, "s": 6878, "text": "It works like this: “ for all elements in a list or dictionary, do this “" }, { "code": null, "e": 6964, "s": 6952, "text": "For example" }, { "code": null, "e": 7059, "s": 6964, "text": "list_number = [2,3,4]#inputfor item in list_number: multiple = 2 * item print(multiple) " }, { "code": null, "e": 7085, "s": 7059, "text": "Going back to our dataset" }, { "code": null, "e": 7361, "s": 7085, "text": "To know the most repeated word(s), we need to know the number of times each word appeared in the lyrics. To do that, we will need to use both a dictionary(to store each word with their corresponding count) and a for loop (to iterate the counting process as each word appears)" }, { "code": null, "e": 7418, "s": 7361, "text": "First, we need to store the unique words in a dictionary" }, { "code": null, "e": 7433, "s": 7418, "text": "Input & Output" }, { "code": null, "e": 7547, "s": 7433, "text": "Then we need to use the for loop again to count the number of times each unique word appears in the whole lyrics." 
}, { "code": null, "e": 7562, "s": 7547, "text": "Input & Output" }, { "code": null, "e": 7709, "s": 7562, "text": "Now we have found out the number of times each word appears, let's sort them out to view them from highest to lowest using the sorted ( ) function" }, { "code": null, "e": 7918, "s": 7709, "text": "There seem to be too many words that only occurred once. Because our aim is to find the most popular word or words, we will narrow our list to the 10 words by slicing it. You can learn more about slicing here" }, { "code": null, "e": 7933, "s": 7918, "text": "Input & Output" }, { "code": null, "e": 7968, "s": 7933, "text": "Then changing back to a dictionary" }, { "code": null, "e": 8220, "s": 7968, "text": "An additional question will be: What are the 10 most popular words in the song called perfect by Ed Sheeran? We can easily extract this information by using the key ( )method under the dictionary data structure and then creating a list of those words." }, { "code": null, "e": 8235, "s": 8220, "text": "Input & Output" }, { "code": null, "e": 8390, "s": 8235, "text": "From the above output code, we can confidently say that the most popular word used in the song called Perfect by Ed Sheeran is ‘I’ which appeared 24 times" }, { "code": null, "e": 8434, "s": 8390, "text": "Let’s improve our analysis a little further" }, { "code": null, "e": 8641, "s": 8434, "text": "6. Visualizing data with python.Various techniques have been developed for presenting data visually but in this analysis, we will focus data visualization strictly on a library in Python, namely Matplotlib." }, { "code": null, "e": 8728, "s": 8641, "text": "Lets quickly visualize the analysis on the top 10 most popular words in our case study" }, { "code": null, "e": 8810, "s": 8728, "text": "What are some key insights we can draw from this case study? I can spot out three" }, { "code": null, "e": 8901, "s": 8810, "text": "Based on the song of choice, we see that unique words are only 44% of the total words used" }, { "code": null, "e": 9008, "s": 8901, "text": "Out of the 129 unique words used, about 70 words appeared once which is about 24% of the total words used." }, { "code": null, "e": 9097, "s": 9008, "text": "The word ‘I’ was used 24 times in the entire songs i.e 8 % of the time, an ‘I’ was used." }, { "code": null, "e": 9521, "s": 9097, "text": "This case study does help to see that a greater percentage of a song is made up of repetitive words. However, we can not conclude with just one song that Donald Knuth theory is true. We will need to analyze a lot more songs to conclude that hit songs are as a result of the repetitiveness of words in a song. Although it is good to point out that Perfect by Ed Sheeran was actually a Hit song... (and still is in my world)." }, { "code": null, "e": 9602, "s": 9521, "text": "If you want to improve your Python skills, these are some articles that may help" }, { "code": null, "e": 9659, "s": 9602, "text": "24 Best Python Courses, Books, and Online Tutorials 2018" }, { "code": null, "e": 9714, "s": 9659, "text": "Want to Learn Python? — Functions, Explained, by Ben B" }, { "code": null, "e": 9758, "s": 9714, "text": "Python Excel Tutorial: The Definitive Guide" } ]
You Don’t Always Have to Loop Through Rows in Pandas! | by Byron Dolon | Towards Data Science
I’ve been using Pandas for a while now, but I haven’t always used it correctly. My intuitive approach to perform calculations or edit my data tends to start with this question:

How can I loop through (iterate over) my DataFrame to do INSERT_ANY_TASK_HERE?

Iterating over rows in a DataFrame may work. In fact, I wrote a whole piece on how to edit your data in Pandas row by row.

I did this because I had a multi-layered calculation that for the life of me I couldn’t figure out how to solve without looping. I had multiple conditions, one of which involved taking a column value that had the name of another column in the DataFrame which was to be used in a calculation.

Iterating over the DataFrame was the only way I could think of to resolve this problem. But it shouldn’t be the method you always go to when working with Pandas.

In fact, Pandas even has a big red warning on how you shouldn’t need to iterate over a DataFrame.

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided (using) a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing.

Most of the time, you can use a vectorized solution to perform your Pandas operations. Instead of using a “for loop” type operation that involves iterating through a set of data one value at a time, vectorization means you implement a solution that operates on a whole set of values at once. In Pandas, this means that instead of calculating something row by row, you perform the operation on the entire DataFrame.

The focus here isn’t only on how fast the code can run with non-loop solutions, but on creating readable code that leverages Pandas to the full extent.

Now, let’s go through a couple of examples to help reframe the initial thought process from “how do I loop through a DataFrame?” to the real question of “how do I perform this calculation with the tools from Pandas?”.

The data we’re going to use comes from an Animal Crossing user review data set from Kaggle. We’ll import the data and create two DataFrames, one called “old” and one called “new”. Then, to get started on the basics of alternative solutions to for loops, we’ll perform some operations with both a for loop and a vectorized solution and compare the code. (To understand the logic behind the for loop codes below, please check out my previous piece on it, as it already has an in-depth explanation on the subject.)

import pandas as pd

old = pd.read_csv('user_reviews.csv')
new = pd.read_csv('user_reviews.csv')

Let’s create a new column called “qualitative_rating”. This way, we can create some broad categories to label each user review as “bad”, “ok”, and “good”. A “bad” review will be any with a “grade” less than 5. A “good” review will be any with a “grade” greater than 5. Any review with a “grade” equal to 5 will be “ok”.

To implement this using a for loop, the code would look like this:

# if then elif else (old)
# create new column
old['qualitative_rating'] = ''

# assign 'qualitative_rating' based on 'grade' with loop
for index in old.index:
    if old.loc[index, 'grade'] < 5:
        old.loc[index, 'qualitative_rating'] = 'bad'
    elif old.loc[index, 'grade'] == 5:
        old.loc[index, 'qualitative_rating'] = 'ok'
    elif old.loc[index, 'grade'] > 5:
        old.loc[index, 'qualitative_rating'] = 'good'

The code is easy to read, but it took 7 lines and 2.26 seconds to go through 3000 rows.
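If you want to sanity-check timings like these on your own machine, one minimal approach, assuming you’re running inside a Jupyter notebook rather than a plain script, is the %%time cell magic. This just re-runs the loop above with a wall-clock measurement; it isn’t the original benchmark setup:

%%time
# time one full pass of the row-by-row loop over the DataFrame
for index in old.index:
    if old.loc[index, 'grade'] < 5:
        old.loc[index, 'qualitative_rating'] = 'bad'
    elif old.loc[index, 'grade'] == 5:
        old.loc[index, 'qualitative_rating'] = 'ok'
    else:
        old.loc[index, 'qualitative_rating'] = 'good'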
Instead, a better solution would look like this:

# if then elif else (new)
# create new column
new['qualitative_rating'] = ''

# assign 'qualitative_rating' based on 'grade' with .loc
new.loc[new.grade < 5, 'qualitative_rating'] = 'bad'
new.loc[new.grade == 5, 'qualitative_rating'] = 'ok'
new.loc[new.grade > 5, 'qualitative_rating'] = 'good'

This time, the code to add the qualitative ratings was only composed of 3 lines, and it only took 68 milliseconds. I also used the “.loc” DataFrame function again, but this time, I used it “properly”. By that, I mean instead of using a looped “if-else” solution, I assigned the “bad”, “ok”, and “good” qualitative rating directly from the “.loc” selection.

Our next new column “len_text” will show the number of characters in each review, so we can compare the length of different reviews in the data set.

To implement this using a for loop, the code would look like this:

# create column based on other column (old)
# create new column
old['len_text'] = ''

# calculate length of column value with loop
for index in old.index:
    old.loc[index, 'len_text'] = len(old.loc[index, 'text'])

Again, 2 lines and 2.23 seconds for this calculation is not that long. But instead of going through each row to find the length, we could use a solution that only requires one line:

# create column based on other column (new)
# create new column
new['len_text'] = ''

# calculate length of column value with the .str accessor
new['len_text'] = new['text'].str.len()

Here, we take an existing column’s values, treat them as strings with the “.str” accessor, and then use “.len()” to get the number of characters in each string. This solution only took 40 milliseconds to run.

Now let’s create a new column called “super_category”. Here, we’ll identify if people qualify as a “super reviewer”, or in this case, if the length of their review is greater than 1000 characters. We’ll also mark a super reviewer as a “super fan” if the review “grade” is greater than or equal to 9, and a “super hater” if the review “grade” is less than or equal to 1. Everyone else will be categorized as “normal”.

Implementing this with a for loop would look like this:

# new column based on multiple conditions (old)
# create new column
old['super_category'] = ''

# set multiple conditions and assign reviewer category with loop
for index in old.index:
    if old.loc[index, 'grade'] >= 9 and old.loc[index, 'len_text'] >= 1000:
        old.loc[index, 'super_category'] = 'super fan'
    elif old.loc[index, 'grade'] <= 1 and old.loc[index, 'len_text'] >= 1000:
        old.loc[index, 'super_category'] = 'super hater'
    else:
        old.loc[index, 'super_category'] = 'normal'

This works, but let’s cut it in half:

# new column based on multiple conditions (new)
# create new column
new['super_category'] = 'normal'

# set multiple conditions and assign reviewer category with .loc
# (>= 9 here, to match the "greater than or equal to 9" rule above)
new.loc[(new['grade'] >= 9) & (new['len_text'] >= 1000), 'super_category'] = 'super fan'
new.loc[(new['grade'] <= 1) & (new['len_text'] >= 1000), 'super_category'] = 'super hater'

Here, we use the “&” operator inside our “.loc” function to implement the two conditions at once. The vectorized solution completed in 63 milliseconds, which again is significantly faster than the loop method, which took 2.23 seconds.

These were some basic operations used to expand the existing data with some of our own custom analysis. Yes, we could have done everything with loops, and you can even see that the same structure applies across many different operations.
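As a side note, the documentation quote above also mentions “NumPy functions” as a vectorized route. Here’s a hedged sketch, not part of the original walkthrough, of the same qualitative-rating task written with numpy.select:

import numpy as np

# each condition lines up positionally with a label in choices
conditions = [
    new['grade'] < 5,
    new['grade'] == 5,
    new['grade'] > 5,
]
choices = ['bad', 'ok', 'good']

# np.select evaluates every condition over the whole column at once,
# so there is no row-by-row loop anywhere
new['qualitative_rating'] = np.select(conditions, choices, default='')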
But Pandas comes with a lot of built-in methods designed specifically for the operations that we frequently need to perform.

Going through this helped me retrain my brain away from always reaching for a for loop as the solution and toward looking for better ways to accomplish all sorts of operations. I hope it helped you do the same!
[ { "code": null, "e": 348, "s": 171, "text": "I’ve been using Pandas for a while now, but I haven’t always used it correctly. My intuitive approach to perform calculations or edit my data tends to start with this question:" }, { "code": null, "e": 427, "s": 348, "text": "How can I loop through (iterate) over my DataFrame to do INSERT_ANY_TASK_HERE?" }, { "code": null, "e": 550, "s": 427, "text": "Iterating over rows in a DataFrame may work. In fact, I wrote a whole piece on how to edit your data in Pandas row by row." }, { "code": null, "e": 856, "s": 550, "text": "The reason I did this is because I had a multi-layered calculation that for the life of me I couldn’t figure out how to solve without looping. I had multiple conditions, one of which involved taking a column value that had the name of another column in the DataFrame which was to be used in a calculation." }, { "code": null, "e": 1018, "s": 856, "text": "Iterating over the DataFrame was the only way I could think of to resolve this problem. But it shouldn’t be the method you always go to when working with Pandas." }, { "code": null, "e": 1116, "s": 1018, "text": "In fact, Pandas even has a big red warning on how you shouldn’t need to iterate over a DataFrame." }, { "code": null, "e": 1376, "s": 1116, "text": "Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided (using) a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing." }, { "code": null, "e": 1791, "s": 1376, "text": "Most of the time, you can use a vectorized solution to perform your Pandas operations. Instead of using a “for loop” type operation that involves iterating through a set of data one value at a time, vectorization means you implement a solution that operates on a whole set of values at once. In Pandas, this means that instead of calculating something row by row, you perform the operation on the entire DataFrame." }, { "code": null, "e": 1943, "s": 1791, "text": "The focus here isn’t only on how fast the code can run with non-loop solutions, but on creating readable code that leverages Pandas to the full extent." }, { "code": null, "e": 2158, "s": 1943, "text": "Now, let’s go through a couple examples to help reframe the initial thought process from “how do I loop through a DataFrame?” to the real question of “how do I perform this calculation with the tools from Pandas?”." }, { "code": null, "e": 2670, "s": 2158, "text": "The data we’re going to use comes from an Animal Crossing user review data set from Kaggle. We’ll import the data and create two DataFrames, one called “old” and one called “new”. Then, to get started on the basics of alternative solutions to for loops, we’ll perform some operations with both a for loop and a vectorized solution and compare the code. (To understand the logic behind the for loop codes below, please check out my previous piece on it, as it already has an in-depth explanation on the subject.)" }, { "code": null, "e": 2764, "s": 2670, "text": "import pandas as pdold = pd.read_csv('user_reviews.csv')new = pd.read_csv('user_reviews.csv')" }, { "code": null, "e": 3082, "s": 2764, "text": "Let’s create a new column called “qualitative_rating”. This way, we can create some broad categories to label each user review as “bad”, “ok”, and “good”. A “bad” review will be any with a “grade” less than 5. A good review will be any with a “grade” greater than 5. 
Any review with a “grade” equal to 5 will be “ok”." }, { "code": null, "e": 3149, "s": 3082, "text": "To implement this using a for loop, the code would look like this:" }, { "code": null, "e": 3570, "s": 3149, "text": "# if then elif else (old)# create new column old['qualitative_rating'] = ''# assign 'qualitative_rating' based on 'grade' with loopfor index in old.index: if old.loc[index, 'grade'] < 5: old.loc[index, 'qualitative_rating'] = 'bad' elif old.loc[index, 'grade'] == 5: old.loc[index, 'qualitative_rating'] = 'ok' elif old.loc[index, 'grade'] > 5: old.loc[index, 'qualitative_rating'] = 'good'" }, { "code": null, "e": 3658, "s": 3570, "text": "The code is easy to read, but it took 7 lines and 2.26 seconds to go through 3000 rows." }, { "code": null, "e": 3707, "s": 3658, "text": "Instead, a better solution would look like this:" }, { "code": null, "e": 3995, "s": 3707, "text": "# if then elif else (new)# create new columnnew['qualitative_rating'] = ''# assign 'qualitative_rating' based on 'grade' with .locnew.loc[new.grade < 5, 'qualitative_rating'] = 'bad'new.loc[new.grade == 5, 'qualitative_rating'] = 'ok'new.loc[new.grade > 5, 'qualitative_rating'] = 'good'" }, { "code": null, "e": 4352, "s": 3995, "text": "This time, the code to add the qualitative ratings was only composed of 3 lines, and it only took 68 milliseconds. I also used the “.loc” DataFrame function again, but this time, I used it “properly”. By that, I mean instead of using a looped “if-else” solution, I assigned the “bad”, “ok”, and “good” qualitative rating directly from the “.loc” selection." }, { "code": null, "e": 4501, "s": 4352, "text": "Our next new column “len_text” will show the number of characters in each review, so we can compare the length of different reviews in the data set." }, { "code": null, "e": 4568, "s": 4501, "text": "To implement this using a for loop, the code would look like this:" }, { "code": null, "e": 4778, "s": 4568, "text": "# create column based on other column (old)# create new columnold['len_text'] = ''# calculate length of column value with loopfor index in old.index: old.loc[index, 'len_text'] = len(old.loc[index, 'text'])" }, { "code": null, "e": 4960, "s": 4778, "text": "Again, 2 lines and 2.23 seconds for this calculation is not that long. But instead of going through each row to find the length, we could use a solution that only requires one line:" }, { "code": null, "e": 5137, "s": 4960, "text": "# create column based on other column (new)# create new columnnew['len_text'] = ''# calculate length of column value by converting to strnew['len_text'] = new['text'].str.len()" }, { "code": null, "e": 5322, "s": 5137, "text": "Here, we take an existing column’s values, turn them into strings, and then use “.len()” to get the number of characters in each string. This solution only took 40 milliseconds to run." }, { "code": null, "e": 5739, "s": 5322, "text": "Now let’s create a new column called “super_category”. Here, we’ll identify if people qualify as a “super reviewer”, or in this case, if the length of their review is greater than 1000 characters. We’ll also mark a super reviewer as a “super fan” if the review “grade” is greater than or equal to 9, and a “super hater” if the review “grade” is less than or equal to 1. Everyone else will be categorized as “normal”." 
}, { "code": null, "e": 5795, "s": 5739, "text": "Implementing this with a for loop would look like this:" }, { "code": null, "e": 6297, "s": 5795, "text": "# new column based on multiple conditions (old)# create new columnold['super_category'] = ''# set multiple conditions and assign reviewer category with loopfor index in old.index: if old.loc[index, 'grade'] >= 9 and old.loc[index, 'len_text'] >= 1000: old.loc[index, 'super_category'] = 'super fan' elif old.loc[index, 'grade'] <= 1 and old.loc[index, 'len_text'] >= 1000: old.loc[index, 'super_category'] = 'super hater' else: old.loc[index, 'super_category'] = 'normal'" }, { "code": null, "e": 6335, "s": 6297, "text": "This works, but let’s cut it in half:" }, { "code": null, "e": 6677, "s": 6335, "text": "# new column based on multiple conditions (new)# create new columnnew['super_category'] = 'normal'# set multiple conditions and assign reviewer category with .locnew.loc[(new['grade'] == 10) & (new['len_text'] >= 1000), 'super_category'] = 'super fan'new.loc[(new['grade'] <= 1) & (new['len_text'] >= 1000), 'super_category'] = 'super hater'" }, { "code": null, "e": 6912, "s": 6677, "text": "Here, we use the “&” operator inside our “.loc” function to implement the two conditions at once. The vectorized solution completed in 63 milliseconds, which again is significantly faster than the loop method, which took 2.23 seconds." }, { "code": null, "e": 7266, "s": 6912, "text": "These were some basic operations used to expand the existing data with some of our own custom analysis. Yes, we could have done everything with loops, and you can even see that the same structure applied across many different operations. But Pandas comes with a lot of built-in methods specifically for the operations that we frequently need to perform." }, { "code": null, "e": 7428, "s": 7266, "text": "Going through this helped me retrain my brain away from always going to for loops as a solution to looking for better ways to accomplish all sorts of operations." } ]
Tips, Tricks, Hacks, and Magic: How to Effortlessly Optimize Your Jupyter Notebook | by Anne Bonner | Towards Data Science
The really cool thing about tech is how many people are out there working hard to make your life more fun. Every minute of every day, there are people putting their blood, sweat, and tears into tools that will make your programs, packages, apps, and life run more smoothly.

You might, for example, think that once you have Jupyter Notebooks up and running, that’s it. If they work, you’re done! What you might not realize is that there are almost endless ways that you can customize your notebooks. Getting a program or package installed is just the beginning!

Why not take a few minutes to get comfortable and make a few improvements? There are a ton of simple ways to quickly make your Jupyter Notebooks better, faster, stronger, sexier, and so much more fun to work with.

This guide assumes that you’re pretty new to Jupyter Notebooks. We’ll start with the really basic stuff for beginners before we move into the cooler tricks. If you’re really, really new and having trouble getting Anaconda installed and working, you might want to check out this article:

towardsdatascience.com

After you get everything installed, any time you want to launch Jupyter Notebook, you can just open up your terminal and run

jupyter notebook

and you’ll be up and running!

One of the first things people want to change in their Jupyter notebooks is the theme. People are crazy about dark mode! That’s incredibly easy and you can switch it up any time you want to.

First, go to your terminal and install Jupyterthemes with

pip install jupyterthemes

Now you can install the super popular dark theme with

jt -t chesterish

Restore the main theme any time with

jt -r

Click here to find the Jupyterthemes GitHub repo.

You can quickly access keyboard shortcuts with the command palette. Just type Ctrl + Shift + P or Cmd + Shift + P to access a dialog box that’s a lot like Spotlight Search on a Mac. It can help you run any command by name, which is great if you don’t know the keyboard shortcut.

Shift + Enter lets you run the current cell

Esc takes you into command mode. Now you can navigate around your notebook with your arrow keys!

In command mode, use

A to insert a new cell above your current cell

B to insert a new cell below your current cell

M to change the current cell to Markdown

Y to change back to a code cell

D + D to delete the current cell (press the key twice)

Enter takes you from command mode back into edit mode

Also

Shift + Tab will show you the documentation for the object you just typed into a code cell. (You can keep pressing this to cycle through a few modes.)

Esc + F helps you find and replace info in your code (not in the outputs)

Esc + 0 toggles cell output

Shift + J or Shift + Down selects the next cell in a downwards direction. Shift + K or Shift + Up selects cells in an upwards direction. Once your cells are selected, you can delete/copy/cut/paste/run them as a batch. That’s awesome when you need to move parts of a notebook!

Shift + M lets you merge multiple cells. (If you just try to click on the cells you want to work with, you’ll have trouble. Hold down the shift key and click the cells you want to merge. Then, while you’re still holding down the shift key, press M.)

Also, you can run bash commands in a notebook if you put an exclamation point at the beginning. For example: !pip install numpy

You can suppress the output of the function on a final line of code any time by adding a semicolon at the end.

You might want to add new lines of code and comment out the old lines while you’re working.
This is great if you’re improving the performance of your code or trying to debug it.

First, select all the lines you want to comment out.

Next hit cmd + / to comment out the highlighted code!

You can write LaTeX in a Markdown cell any time and it will be rendered as a formula.

That changes this

$P(A \mid B) = \frac{P(B \mid A)P(A)}{P(B)}$

into the rendered formula.

Normally only the last output in the cell will be printed. For everything else, you have to manually add print(), which is fine but not super convenient. You can change that by adding this at the top of the notebook:

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

This means that, while normally you’d only get one output printed, now you’ll see both outputs!

Any time you want to go back to the original setting, just run

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last_expr"

Just be aware that you have to run the setting change in a separate cell for it to take effect for the next cell run.

Because it’s an open source web app, a ton of extensions have been developed for Jupyter Notebooks. You can find the official IPython extension list here. This is another popular bundle of extensions.

You can install Nbextensions any time from your command line like this

with pip

pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user

or with Anaconda

conda install -c conda-forge jupyter_contrib_nbextensions
conda install -c conda-forge jupyter_nbextensions_configurator
jupyter contrib nbextension install --user

Once they’re installed, you’ll see an Nbextensions tab. Explore away!

Head on over here to read more about the extensions and how to enable them, disable them, and more.

I won’t go into too much detail about adding and enabling extensions and how to use them because it’s so incredibly well explained right in your Jupyter Notebook! Just click on “Nbextensions” at the top of the screen, click on the extension you’re interested in, and then scroll down for the information you need and a GIF of the extension in action!

Scratchpad — This is awesome. It allows you to create a temporary cell to do quick calculations without creating a new cell in your workbook. This is a huge time saver!

Hinterland — This enables a code autocompletion menu for every keypress in a code cell instead of just with tab

Snippets — Adds a drop-down menu to insert snippet cells into the current notebook.

Autopep8 — This is a tool that automatically formats Python code to conform to the PEP 8 style guide. It’s so handy! Make sure you have run pip install autopep8 --user on your local machine. This will make sure you’re following the correct Python coding conventions.

Split Cells Notebook — Enables split cells in Jupyter notebooks. Enter command mode and use Shift + S to toggle the current cell to either a split cell or full width.

Table of Contents — This extension enables you to collect all running headers and display them in a floating window, as a sidebar, or with a navigation menu.

A Code Prettifier — Cleans up, formats, and indents your code, so you don’t have to.

Notify — This displays a desktop notification when your kernel becomes idle. This is awesome when you’re running code that takes more than a couple of seconds to complete.

Code Folding — While in edit mode, a triangle appears in the gutter to fold your code. Good when you have large functions you want to hide for readability.

Zen mode — Makes things a bit less cluttered.
Make sure to turn off the backgrounds in the settings.

Magics are handy commands that make life easier when you want to perform particular tasks. They often look like unix commands, but they’re all implemented in Python. There are a ton of magics out there!

There are two kinds of magic: line magic (use this on one line) and cell magic (this applies to the whole cell). Line magics start with a percent character %, and cell magics start with two, %%. To see the available magics, run:

%lsmagic

You can easily manage the environment variables of your notebook without restarting anything with %env. If you run it without any variables, it will list all of your environment variables.

You can insert code from an external script with %load. (More on this below, but it’s awesome, so I’m adding it up here) For example:

%load basic_imports.py

will grab the basic_imports.py file and load it in your notebook!

This is unbelievably helpful. With almost no effort, you can export the contents of a cell any time with %%writefile. For example

%%writefile thiscode.py
you'd write some cool code or function in here that you'd want to export and maybe even use later!

Do you find yourself running the same imports in every notebook or adding the same function all the time? Now you can write it once and use it everywhere!

You can write a file basic_imports.py containing the following code:

%%writefile basic_imports.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

That will create a .py file that contains your basic imports.

You can load this any time by writing:

%load basic_imports.py

Executing this replaces the cell contents with the loaded file.

# %load basic_imports.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Now we can run the cell again to import all our modules and we’re ready to go.

Like most people, you probably find yourself writing the same few tasks over and over again. Maybe there are a few equations that you find yourself computing repeatedly or some lines of code that you’ve produced countless times. Jupyter lets you save code snippets as executable macros! Macros are just code, so they can contain variables that will have to be defined before execution. Let’s define one!

Let’s say

name = 'Kitten'

Now, to define a macro we need some code to use. We can save pretty much anything, from a string to a function, or whatever you need.

print('Hello, %s!' % name)
Hello, Kitten!

We use the %macro and %load magics to set up a reusable macro. It’s pretty common to begin macro names with a double underscore to distinguish them from other variables.

%macro -q __hello_you 32
%store __hello_you

The %macro magic takes a name and a cell number (or numbers), and we also passed -q to make it less verbose. %store allows us to save any variable for use in other sessions. Here we passed the name of the macro we created so we can use it again after the kernel is shut down or in other notebooks.

To load the macro, we just run:

%load __hello_you

And to execute it, we can just run a cell that contains the macro name.

__hello_you
Hello, Kitten!

Success!

Let’s modify the variable we used in the macro.

name = 'Muffins'

When we run the macro now, our modified value is picked up.

__hello_you
Hello, Muffins!

This works because macros execute the saved code in the scope of the cell. If name was undefined we’d get an error.

Want to use that same macro across all of your notebooks?

%store lets you store your macro and use it across all of your Jupyter Notebooks.
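A hedged aside that isn’t in the original walkthrough: since %store saves any variable, not just macros, the same trick works for plain values. The variable name below is made up for illustration:

favorite_theme = 'chesterish'
%store favorite_theme

# later, in a fresh kernel or a different notebook:
%store -r favorite_theme
print(favorite_theme)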
Now you can open a new notebook and try it out with %store -r __hello_you. Load that baby up and you’re ready to go!

%store -r __hello_you
name = 'Rambo'
%load __hello_you
Hello, Rambo!

%run magic will execute your code and display any output, including Matplotlib plots. You could even execute entire notebooks this way.

%run can execute Python code from .py files. It can also execute other Jupyter notebooks.

You can use %pycat any time to show the contents of a script if you aren’t sure what’s in there.

%pycat basic_imports.py

The %autosave magic lets you change how often your notebook will auto save to its checkpoint file.

%autosave 60

That will set you to autosave every 60 seconds.

%matplotlib inline

You probably already know this one, but %matplotlib inline will display your Matplotlib plot images right in your cell outputs. That means that you can include Matplotlib charts and graphs right in your notebooks. It makes sense to run this at the beginning of your notebook, right in the first cell.

There are two IPython Magic commands that are useful for timing — %%time and %timeit. These are seriously useful when you have some slow code and you’re trying to identify where the issue is. They both have line and cell modes.

The main difference between %timeit and %time is that %timeit runs the specified code many times and computes an average.

%%time will give you information about a single run of the code in your cell.

%%timeit uses the Python timeit module which runs a statement a ton of times and then provides the mean of the results. You can specify the number of runs with the -n option, -r to specify the number of repeats, and more.

You can also execute a cell using the specified language. There are extensions available for several languages. You have options like

%%bash
%%HTML
%%python
%%python2
%%python3
%%ruby
%%perl
%%capture
%%javascript
%%js
%%latex
%%markdown
%%pypy

To render HTML in your notebook, for example, you’d run:

%%HTML
This is <em>really</em> neat!

You can also use LaTeX directly any time you want to with

%%latex
This is an equation: $E = mc^2$

The %who command without any arguments will list all variables that exist in the global scope. Passing a parameter like str will list only variables of that type. So if you type something like

%who str

in our notebook, you’d see a list of the string variables.

%prun shows how much time your program spent in each function. Using %prun statement_name gives you an ordered table showing the number of times each internal function was called within the statement, the time each call took, and the cumulative time of all runs of the function.

Jupyter has its own interface for the Python Debugger. This makes it possible to go inside the function and take a look at what happens there. You can turn that on by running %pdb at the top of the cell.

One simple line of IPython magic will give you double resolution plot output for Retina screens. Just be aware that this won’t render on non-retina screens.

%config InlineBackend.figure_format = 'retina'

Just add %%script false at the top of the cell

%%script false
you'd put some long code here that you don't want to run right now

This is actually a Python trick, but you might want it for when you’re running code that’s taking forever. If you don’t want to be staring at your code all day, but you need to know when it’s done, you can make your code sound an “alarm” when it’s finished!
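Before the platform-specific versions below, here is a dependency-free sketch using the ASCII bell character. Whether you actually hear anything depends on your terminal or notebook frontend, so treat it as a best-effort fallback:

# '\a' is the ASCII bell; many terminals beep, some frontends stay silent
print('Your code is done!\a')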
On Linux (and Mac)

import os

duration = 1  # second
freq = 440  # Hz
os.system('play --no-show-progress --null --channels 1 synth %s sine %f' % (duration, freq))

On Windows

import winsound

duration = 1000  # millisecond
freq = 440  # Hz
winsound.Beep(freq, duration)

In order to use the Linux/Mac version, you need to install sox, which you should be able to do with

brew install sox

...assuming you have Homebrew installed. If you haven’t taken any time to customize and improve your terminal, you might want to check out this article!

towardsdatascience.com

That should be enough to get you started! If you know any tips and tricks that you think could help other beginners, let everyone know about them in the comments below. Your options here are endless!

If you really want to take things to the next level, you might want to check out this post:

towardsdatascience.com

And if you’re interested in building interactive dashboards, check out this one:

blog.dominodatalab.com

Thanks for reading! If you want to reach out or find more cool articles, please come and join me at Content Simplicity!
[ { "code": null, "e": 446, "s": 172, "text": "The really cool thing about tech is how many people are out there working hard to make your life more fun. Every minute of every day, there are people putting their blood, sweat, and tears into tools that will make your programs, packages, apps, and life run more smoothly." }, { "code": null, "e": 733, "s": 446, "text": "You might, for example, think that once you have Jupyter Notebooks up and running, that’s it. If they work, you’re done! What you might not realize is that there are almost endless ways that you can customize your notebooks. Getting a program or package installed is just the beginning!" }, { "code": null, "e": 947, "s": 733, "text": "Why not take a few minutes to get comfortable and make a few improvements? There are a ton of simple ways to quickly make your Jupyter Notebooks better, faster, stronger, sexier, and so much more fun to work with." }, { "code": null, "e": 1234, "s": 947, "text": "This guide assumes that you’re pretty new to Jupyter Notebooks. We’ll start with the really basic stuff for beginners before we move into the cooler tricks. If you’re really, really new and having trouble getting Anaconda installed and working, you might want to check out this article:" }, { "code": null, "e": 1257, "s": 1234, "text": "towardsdatascience.com" }, { "code": null, "e": 1382, "s": 1257, "text": "After you get everything installed, any time you want to launch Jupyter Notebook, you can just open up your terminal and run" }, { "code": null, "e": 1399, "s": 1382, "text": "jupyter notebook" }, { "code": null, "e": 1429, "s": 1399, "text": "and you’ll be up and running!" }, { "code": null, "e": 1620, "s": 1429, "text": "One of the first things people want to change in their Jupyter notebooks is the theme. People are crazy about dark mode! That’s incredibly easy and you can switch it up any time you want to." }, { "code": null, "e": 1678, "s": 1620, "text": "First, go to your terminal and install Jupyterthemes with" }, { "code": null, "e": 1704, "s": 1678, "text": "pip install jupyterthemes" }, { "code": null, "e": 1758, "s": 1704, "text": "Now you can install the super popular dark theme with" }, { "code": null, "e": 1775, "s": 1758, "text": "jt -t chesterish" }, { "code": null, "e": 1812, "s": 1775, "text": "Restore the main theme any time with" }, { "code": null, "e": 1818, "s": 1812, "text": "jt -r" }, { "code": null, "e": 1868, "s": 1818, "text": "Click here to find the Jupyterthemes GitHub repo." }, { "code": null, "e": 2147, "s": 1868, "text": "You can quickly access keyboard shortcuts with the command palette. just type Ctrl + Shift + P or Cmd + Shift + P to access a dialog box that’s a lot like Spotlight Search on a Mac. It can help you run any command by name, which is great if you don’t know the keyboard shortcut." }, { "code": null, "e": 2191, "s": 2147, "text": "Shift + Enter lets you run the current cell" }, { "code": null, "e": 2288, "s": 2191, "text": "Esc takes you into command mode. Now you can navigate around your notebook with your arrow keys!" 
}, { "code": null, "e": 2309, "s": 2288, "text": "In command mode, use" }, { "code": null, "e": 2356, "s": 2309, "text": "A to insert a new cell above your current cell" }, { "code": null, "e": 2403, "s": 2356, "text": "B to insert a new cell below your current cell" }, { "code": null, "e": 2444, "s": 2403, "text": "M to change the current cell to Markdown" }, { "code": null, "e": 2476, "s": 2444, "text": "Y to change back to a code cell" }, { "code": null, "e": 2531, "s": 2476, "text": "D + D to delete the current cell (press the key twice)" }, { "code": null, "e": 2585, "s": 2531, "text": "Enter takes you from command mode back into edit mode" }, { "code": null, "e": 2590, "s": 2585, "text": "Also" }, { "code": null, "e": 2741, "s": 2590, "text": "Shift + Tab will show you the documentation for the object you just typed into a code cell. (You can keep pressing this to cycle through a few modes.)" }, { "code": null, "e": 2815, "s": 2741, "text": "Esc + F helps you find and replace info in your code (not in the outputs)" }, { "code": null, "e": 2843, "s": 2815, "text": "Esc + 0 Toggles cell output" }, { "code": null, "e": 3120, "s": 2843, "text": "Shift + J or Shift + Down selects the next cell in a downwards direction. Shift + K or Shift + Up selects cells in an upwards direction. Once your cells are selected, you can delete/ copy/cut/paste/run them as a batch. That’s awesome when you need to move parts of a notebook!" }, { "code": null, "e": 3364, "s": 3120, "text": "Shift + M lets you merge multiple cells. (If you just try to click on the cells you want to work you’ll have trouble. Hold down the shift key and click the cells you want to merge. Then, while you’re still holding down the shift key, press M.)" }, { "code": null, "e": 3492, "s": 3364, "text": "Also, you can run bash commands in a notebook if you put an exclamation point at the beginning. For example: !pip install numpy" }, { "code": null, "e": 3603, "s": 3492, "text": "You can suppress the output of the function on a final line of code any time by adding a semicolon at the end." }, { "code": null, "e": 3781, "s": 3603, "text": "You might want to add new lines of code and comment out the old lines while you’re working. This is great if you’re improving the performance of your code or trying to debug it." }, { "code": null, "e": 3834, "s": 3781, "text": "First, select all the lines you want to comment out." }, { "code": null, "e": 3888, "s": 3834, "text": "Next hit cmd + / to comment out the highlighted code!" }, { "code": null, "e": 3974, "s": 3888, "text": "You can write LaTex in a Markdown cell any time and it will be rendered as a formula." }, { "code": null, "e": 3992, "s": 3974, "text": "That changes this" }, { "code": null, "e": 4037, "s": 3992, "text": "$P(A \\mid B) = \\frac{P(B \\mid A)P(A)}{P(B)}$" }, { "code": null, "e": 4047, "s": 4037, "text": "into this" }, { "code": null, "e": 4264, "s": 4047, "text": "Normally only the last output in the cell will be printed. For everything else, you have to manually add print(), which is fine but not super convenient. You can change that by adding this at the top of the notebook:" }, { "code": null, "e": 4370, "s": 4264, "text": "from IPython.core.interactiveshell import InteractiveShellInteractiveShell.ast_node_interactivity = \"all\"" }, { "code": null, "e": 4436, "s": 4370, "text": "This means that, while normally you’d only get one output printed" }, { "code": null, "e": 4465, "s": 4436, "text": "Now you’ll see both outputs!" 
}, { "code": null, "e": 4528, "s": 4465, "text": "Any time you want to go back to the original setting, just run" }, { "code": null, "e": 4640, "s": 4528, "text": "from IPython.core.interactiveshell import InteractiveShellInteractiveShell.ast_node_interactivity = \"last_expr\"" }, { "code": null, "e": 4758, "s": 4640, "text": "Just be aware that you have to run the setting change in a separate cell for it to take effect for the next cell run." }, { "code": null, "e": 4959, "s": 4758, "text": "Because it’s an open source web app, a ton of extensions have been developed for Jupyter Notebooks. You can find the official iPython extension list here. This is another popular bundle of extensions." }, { "code": null, "e": 5030, "s": 4959, "text": "You can install Nbextensions any time from your command line like this" }, { "code": null, "e": 5039, "s": 5030, "text": "with pip" }, { "code": null, "e": 5122, "s": 5039, "text": "pip install jupyter_contrib_nbextensionsjupyter contrib nbextension install --user" }, { "code": null, "e": 5139, "s": 5122, "text": "or with Anaconda" }, { "code": null, "e": 5301, "s": 5139, "text": "conda install -c conda-forge jupyter_contrib_nbextensionsconda install -c conda-forge jupyter_nbextensions_configuratorjupyter contrib nbextension install --user" }, { "code": null, "e": 5371, "s": 5301, "text": "Once they’re installed, you’ll see an Nbextensions tab. Explore away!" }, { "code": null, "e": 5471, "s": 5371, "text": "Head on over here to read more about the extensions and how to enable them, disable them, and more." }, { "code": null, "e": 5822, "s": 5471, "text": "I won’t go into too much detail about adding and enabling extensions and how to use them because it’s so incredibly well explained right in your Jupyter Notebook! Just click on “Nbextensions” at the top of the screen, click on the extension you’re interested in, and then scroll down for the information you need and a GIF of the extension in action!" }, { "code": null, "e": 5991, "s": 5822, "text": "Scratchpad — This is awesome. It allows you to create a temporary cell to do quick calculations without creating a new cell in your workbook. This is a huge time saver!" }, { "code": null, "e": 6103, "s": 5991, "text": "Hinterland — This enables a code autocompletion menu for every keypress in a code cell instead of just with tab" }, { "code": null, "e": 6187, "s": 6103, "text": "Snippets — Adds a drop-down menu to insert snippet cells into the current notebook." }, { "code": null, "e": 6454, "s": 6187, "text": "Autopep8 — This is a tool that automatically formats Python code to conform to the PEP 8 style guide. It’s so handy! Make sure you have run pip install autopep8 --user on your local machine. This will make sure you’re following the correct python coding conventions." }, { "code": null, "e": 6621, "s": 6454, "text": "Split Cells Notebook — Enables split cells in Jupyter notebooks. Enter command mode and use Shift + S to toggle the current cell to either a split cell or full width." }, { "code": null, "e": 6779, "s": 6621, "text": "Table of Contents — This extension enables you to collect all running headers and display them in a floating window, as a sidebar, or with a navigation menu." }, { "code": null, "e": 6864, "s": 6779, "text": "A Code Prettifier — Cleans up, formats, and indents your code, so you don’t have to." }, { "code": null, "e": 7036, "s": 6864, "text": "Notify — This displays a desktop notification when your kernel becomes idle. 
This is awesome when you’re running code that takes more than a couple of seconds to complete." }, { "code": null, "e": 7192, "s": 7036, "text": "Code Folding — While in edit mode, a triangle appears in the gutter to fold your code. Good when you have large functions you want to hide for readability." }, { "code": null, "e": 7293, "s": 7192, "text": "Zen mode — Makes things a bit less cluttered. Make sure to turn off the backgrounds in the settings." }, { "code": null, "e": 7496, "s": 7293, "text": "Magics are handy commands that make life easier when you want to perform particular tasks. They often look like unix commands, but they’re all implemented in Python. There are a ton of magics out there!" }, { "code": null, "e": 7725, "s": 7496, "text": "There are two kinds of magic: line magic (use this on one line) and cell magic (this applies to the whole cell). Line magics start with a percent character %, and cell magics start with two, %%. To see the available magics, run:" }, { "code": null, "e": 7734, "s": 7725, "text": "%lsmagic" }, { "code": null, "e": 7923, "s": 7734, "text": "You can easily manage the environment variables of your notebook without restarting anything with %env. If you run it without any variables, it will list all of your environment variables." }, { "code": null, "e": 8057, "s": 7923, "text": "You can insert code from an external script with %load. (More on this below, but it’s awesome, so I’m adding it up here) For example:" }, { "code": null, "e": 8080, "s": 8057, "text": "%load basic_imports.py" }, { "code": null, "e": 8146, "s": 8080, "text": "will grab the basic_imports.py file and load it in your notebook!" }, { "code": null, "e": 8276, "s": 8146, "text": "This is unbelievably helpful. With almost no effort, you can export the contents of a cell any time with %%writefile. For example" }, { "code": null, "e": 8404, "s": 8276, "text": "%%writefile thiscode.pyyou'd write some cool code or function in here that you'd want to export and maybe even use later!" }, { "code": null, "e": 8559, "s": 8404, "text": "Do you find yourself running the same imports in every notebook or adding the same function all the time? Now you can write it once and use it everywhere!" }, { "code": null, "e": 8628, "s": 8559, "text": "You can write a file basic_imports.py containing the following code:" }, { "code": null, "e": 8724, "s": 8628, "text": "%writefile basic_imports.pyimport pandas as pdimport numpy as npimport matplotlib.pyplot as plt" }, { "code": null, "e": 8786, "s": 8724, "text": "That will create a .py file that contains your basic imports." }, { "code": null, "e": 8825, "s": 8786, "text": "You can load this any time by writing:" }, { "code": null, "e": 8848, "s": 8825, "text": "%load basic_imports.py" }, { "code": null, "e": 8912, "s": 8848, "text": "Executing this replaces the cell contents with the loaded file." }, { "code": null, "e": 8999, "s": 8912, "text": "# %load imports.pyimport pandas as pdimport numpy as npimport matplotlib.pyplot as plt" }, { "code": null, "e": 9078, "s": 8999, "text": "Now we can run the cell again to import all our modules and we’re ready to go." }, { "code": null, "e": 9482, "s": 9078, "text": "Like most people, you probably find yourself writing the same few tasks over and over again. Maybe there are a few equations that you find yourself computing repeatedly or some lines of code that you’ve produced countless times. Jupyter lets you save code snippets as executable macros! 
Macros are just code, so they can contain variables that will have to be defined before execution. Let’s define one!" }, { "code": null, "e": 9492, "s": 9482, "text": "Let’s say" }, { "code": null, "e": 9508, "s": 9492, "text": "name = 'Kitten'" }, { "code": null, "e": 9642, "s": 9508, "text": "Now, to define a macro we need some code to use. We can save pretty much anything, from a string to a function, or whatever you need." }, { "code": null, "e": 9683, "s": 9642, "text": "print('Hello, %s!' % name)Hello, Kitten!" }, { "code": null, "e": 9853, "s": 9683, "text": "We use the %macro and %load magics to set up a reusable macro. It’s pretty common to begin macro names with a double underscore to distinguish them from other variables." }, { "code": null, "e": 9878, "s": 9853, "text": "%macro -q __hello_you 32" }, { "code": null, "e": 10176, "s": 9878, "text": "The %macro magic takes a name and a cell number (or numbers), and we also passed -q to make it less verbose. %store allows us to save any variable for use in other sessions. Here we passed the name of the macro we created so we can use it again after the kernel is shut down or in other notebooks." }, { "code": null, "e": 10208, "s": 10176, "text": "To load the macro, we just run:" }, { "code": null, "e": 10226, "s": 10208, "text": "%load __hello_you" }, { "code": null, "e": 10298, "s": 10226, "text": "And to execute it, we can just run a cell that contains the macro name." }, { "code": null, "e": 10324, "s": 10298, "text": "__hello_youHello, Kitten!" }, { "code": null, "e": 10333, "s": 10324, "text": "Success!" }, { "code": null, "e": 10381, "s": 10333, "text": "Let’s modify the variable we used in the macro." }, { "code": null, "e": 10398, "s": 10381, "text": "name = 'Muffins'" }, { "code": null, "e": 10458, "s": 10398, "text": "When we run the macro now, our modified value is picked up." }, { "code": null, "e": 10485, "s": 10458, "text": "__hello_youHello, Muffins!" }, { "code": null, "e": 10601, "s": 10485, "text": "This works because macros execute the saved code in the scope of the cell. If name was undefined we’d get an error." }, { "code": null, "e": 10659, "s": 10601, "text": "Want to use that same macro across all of your notebooks?" }, { "code": null, "e": 10741, "s": 10659, "text": "%store lets you store your macro and use it across all of your Jupyter Notebooks." }, { "code": null, "e": 10858, "s": 10741, "text": "Now you can open a new notebook and try it out with %store -r __hello_you. Load that baby up and you’re ready to go!" }, { "code": null, "e": 10924, "s": 10858, "text": "%store -r __hello_youname = 'Rambo'%load __hello_youHello, Rambo!" }, { "code": null, "e": 11060, "s": 10924, "text": "%run magic will execute your code and display any output, including Matplotlib plots. You could even execute entire notebooks this way." }, { "code": null, "e": 11150, "s": 11060, "text": "%run can execute python code from .py files. It can also execute other Jupyter notebooks." }, { "code": null, "e": 11247, "s": 11150, "text": "You can use %pycat any time to show the contents of a script if you aren’t sure what’s in there." }, { "code": null, "e": 11271, "s": 11247, "text": "%pycat basic_imports.py" }, { "code": null, "e": 11370, "s": 11271, "text": "The %autosave magic lets you change how often your notebook will auto save to its checkpoint file." }, { "code": null, "e": 11383, "s": 11370, "text": "%autosave 60" }, { "code": null, "e": 11431, "s": 11383, "text": "That will set you to autosave every 60 seconds." 
}, { "code": null, "e": 11450, "s": 11431, "text": "%matplotlib inline" }, { "code": null, "e": 11751, "s": 11450, "text": "You probably already know this one, but %matplotlib inline will display your Matplotlib plot images right in your cell outputs. That means that you can include Matplotlib charts and graphs right in your notebooks. It makes sense to run this at the beginning of your notebook, right in the first cell." }, { "code": null, "e": 11979, "s": 11751, "text": "There are two IPython Magic commands that are useful for timing — %%time and %timeit. These are seriously useful when you have some slow code and you’re trying to identify where the issue is. They both have line and cell modes." }, { "code": null, "e": 12101, "s": 11979, "text": "The main difference between %timeit and %time is that %timeit runs the specified code many times and computes an average." }, { "code": null, "e": 12179, "s": 12101, "text": "%%time will give you information about a single run of the code in your cell." }, { "code": null, "e": 12401, "s": 12179, "text": "%%timeit uses the Python timeit module which runs a statement a ton of times and then provides the mean of the results. You can specify the number of runs with the -n option, -r to specify the number of repeats, and more." }, { "code": null, "e": 12535, "s": 12401, "text": "You can also execute a cell using the specified language. There are extensions available for several languages. You have options like" }, { "code": null, "e": 12542, "s": 12535, "text": "%%bash" }, { "code": null, "e": 12549, "s": 12542, "text": "%%HTML" }, { "code": null, "e": 12558, "s": 12549, "text": "%%python" }, { "code": null, "e": 12568, "s": 12558, "text": "%%python2" }, { "code": null, "e": 12578, "s": 12568, "text": "%%python3" }, { "code": null, "e": 12585, "s": 12578, "text": "%%ruby" }, { "code": null, "e": 12592, "s": 12585, "text": "%%perl" }, { "code": null, "e": 12602, "s": 12592, "text": "%%capture" }, { "code": null, "e": 12615, "s": 12602, "text": "%%javascript" }, { "code": null, "e": 12620, "s": 12615, "text": "%%js" }, { "code": null, "e": 12628, "s": 12620, "text": "%%latex" }, { "code": null, "e": 12639, "s": 12628, "text": "%%markdown" }, { "code": null, "e": 12646, "s": 12639, "text": "%%pypy" }, { "code": null, "e": 12703, "s": 12646, "text": "To render HTML in your notebook, for example, you’d run:" }, { "code": null, "e": 12739, "s": 12703, "text": "%%HTMLThis is <em>really</em> neat!" }, { "code": null, "e": 12797, "s": 12739, "text": "You can also use LaTeX directly any time you want to with" }, { "code": null, "e": 12836, "s": 12797, "text": "%%latexThis is an equation: $E = mc^2$" }, { "code": null, "e": 13029, "s": 12836, "text": "The %who command without any arguments will list all variables that exist in the global scope. Passing a parameter like str will list only variables of that type. So if you type something like" }, { "code": null, "e": 13038, "s": 13029, "text": "%who str" }, { "code": null, "e": 13065, "s": 13038, "text": "in our notebook, you’d see" }, { "code": null, "e": 13344, "s": 13065, "text": "%prun shows how much time your program spent in each function. Using %prun statement_name gives you an ordered table showing the number of times each internal function was called within the statement, the time each call took, and the cumulative time of all runs of the function." }, { "code": null, "e": 13544, "s": 13344, "text": "Jupyter has own interface for The Python Debugger. 
Jupyter has its own interface for the Python Debugger. This makes it possible to go inside the function and take a look at what happens there. You can turn that on by running %pdb at the top of the cell.

One simple line of IPython magic will give you double resolution plot output for Retina screens. Just be aware that this won't render on non-retina screens.

%config InlineBackend.figure_format = 'retina'

To keep a cell's code from running, just add %%script false at the top of the cell:

%%script false
you'd put some long code here that you don't want to run right now

This is actually a Python trick, but you might want it for when you're running code that's taking forever. If you don't want to be staring at your code all day, but you need to know when it's done, you can make your code sound an "alarm" when it's finished!

On Linux (and Mac):

import os
duration = 1  # second
freq = 440  # Hz
os.system('play --no-show-progress --null --channels 1 synth %s sine %f' % (duration, freq))

On Windows:

import winsound
duration = 1000  # millisecond
freq = 440  # Hz
winsound.Beep(freq, duration)

In order to use this, you need to install sox, which you should be able to do with

brew install sox

...assuming you have homebrew installed. If you haven't taken any time to customize and improve your terminal, you might want to check out this article!

towardsdatascience.com

That should be enough to get you started! If you know any tips and tricks that you think could help other beginners, let everyone know about them in the comments below. Your options here are endless!

If you really want to take things to the next level, you might want to check out this post:

towardsdatascience.com

And if you're interested in building interactive dashboards, check out this one:

blog.dominodatalab.com
Kotlin - Exception Handling
Exception handling is a very important part of a programming language. This technique prevents our application from producing wrong output at runtime. In this chapter, we will learn how to handle runtime exceptions in Kotlin. Exceptions in Kotlin are pretty similar to exceptions in Java: all exceptions are descendants of the "Throwable" class. The following example shows how to use the exception handling technique in Kotlin.

fun main(args: Array<String>) {
   try {
      val myVar: Int = 12
      val v: String = "Tutorialspoint.com"
      v.toInt()
   } catch(e: Exception) {
      e.printStackTrace()
   } finally {
      println("Exception Handling in Kotlin")
   }
}

In the above piece of code, we have declared a String and later tried to convert that String into an integer, which throws a runtime exception (a NumberFormatException, since "Tutorialspoint.com" is not a valid number). The catch block prints the exception's stack trace, and then the finally block runs, so we get the following output:

Exception Handling in Kotlin

Note − Like Java, Kotlin also executes the finally block after executing the catch block.
[ { "code": null, "e": 2862, "s": 2425, "text": "Exception handling is a very important part of a programming language. This technique restricts our application from generating the wrong output at runtime. In this chapter, we will learn how to handle runtime exception in Kotlin. The exceptions in Kotlin is pretty similar to the exceptions in Java. All the exceptions are descendants of the “Throwable” class. Following example shows how to use exception handling technique in Kotlin." }, { "code": null, "e": 3112, "s": 2862, "text": "fun main(args: Array<String>) {\n try {\n val myVar:Int = 12;\n val v:String = \"Tutorialspoint.com\";\n v.toInt();\n } catch(e:Exception) {\n e.printStackTrace();\n } finally {\n println(\"Exception Handeling in Kotlin\");\n }\n}" }, { "code": null, "e": 3306, "s": 3112, "text": "In the above piece of code, we have declared a String and later tied that string into the integer, which is actually a runtime exception. Hence, we will get the following output in the browser." }, { "code": null, "e": 3357, "s": 3306, "text": "val myVar:Int = 12;\nException Handeling in Kotlin\n" }, { "code": null, "e": 3447, "s": 3357, "text": "Note − Like Java, Kotlin also executes the finally block after executing the catch block." }, { "code": null, "e": 3482, "s": 3447, "text": "\n 68 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3501, "s": 3482, "text": " Arnab Chakraborty" }, { "code": null, "e": 3536, "s": 3501, "text": "\n 71 Lectures \n 5.5 hours \n" }, { "code": null, "e": 3553, "s": 3536, "text": " Frahaan Hussain" }, { "code": null, "e": 3588, "s": 3553, "text": "\n 18 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3605, "s": 3588, "text": " Mahmoud Ramadan" }, { "code": null, "e": 3638, "s": 3605, "text": "\n 49 Lectures \n 6 hours \n" }, { "code": null, "e": 3654, "s": 3638, "text": " Catalin Stefan" }, { "code": null, "e": 3689, "s": 3654, "text": "\n 49 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3709, "s": 3689, "text": " Skillbakerystudios" }, { "code": null, "e": 3742, "s": 3709, "text": "\n 22 Lectures \n 1 hours \n" }, { "code": null, "e": 3759, "s": 3742, "text": " CLEMENT OCHIENG" }, { "code": null, "e": 3766, "s": 3759, "text": " Print" }, { "code": null, "e": 3777, "s": 3766, "text": " Add Notes" } ]
C# | Decimal.ToString Method | Set -1 - GeeksforGeeks
29 Mar, 2019

Decimal.ToString() Method is used to convert the numeric value of the current instance to its equivalent string representation using the specified culture-specific format information. There are 4 methods in the overload list of this method, as follows:

ToString() Method
ToString(IFormatProvider) Method
ToString(String, IFormatProvider) Method
ToString(String) Method

Here, we will discuss the ToString() and ToString(String) methods.

ToString() Method

This method is used to convert the numeric value of the current instance to its equivalent string representation.

Syntax: public override string ToString();

Return Value: This method returns a string which represents the value of the current instance.

Below programs illustrate the use of the Decimal.ToString() Method:

Example 1:

// C# program to demonstrate the
// Decimal.ToString() Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // Declaring and initializing value
        decimal value = 7922816251426433759354.39503305M;

        // using ToString() method
        string str = value.ToString();

        // Display the value
        Console.WriteLine("String value is {0}", str);
    }
}

Output:

String value is 7922816251426433759354.3950330

Example 2:

// C# program to demonstrate the
// Decimal.ToString() Method
using System;
using System.Globalization;

class GFG {

    // Main Method
    public static void Main()
    {
        // calling get() method
        Console.WriteLine("Equivalent String values are:");
        get(20);
        get(30);
        get(40);
        get(4294967295);
    }

    // defining get() method
    public static void get(decimal value)
    {
        // using ToString() method
        string str = value.ToString();

        // Display the value
        Console.WriteLine("String value is {0}", str);
    }
}

Output:

Equivalent String values are:
String value is 20
String value is 30
String value is 40
String value is 4294967295

ToString(String) Method

This method is used to convert the numeric value of the current instance to its equivalent string representation, using the specified format.

Syntax: public string ToString(string format);
Here, it takes a standard or custom numeric format string.

Return Value: This method returns the string representation of the value of the current instance as specified by format.

Exceptions: This method throws FormatException if format is invalid.

Example 1:

// C# program to demonstrate the
// Decimal.ToString(String) Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        try {

            // Declaring and initializing value
            decimal value = 16325.62m;

            // Declaring and initializing format
            string s = "E04";

            // using the method
            string str = value.ToString(s);

            // Display the value
            Console.WriteLine("String value is {0}", str);
        }
        catch (FormatException e) {
            Console.WriteLine("Format is invalid.");
            Console.Write("Exception Thrown: ");
            Console.Write("{0}", e.GetType(), e.Message);
        }
    }
}

Output:

String value is 1.6326E+004

Example 2: For FormatException

// C# program to demonstrate the
// Decimal.ToString(String) Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        try {

            // Declaring and initializing value
            decimal value = 16325.62m;

            // Declaring and initializing format
            string s = "a";

            // using the method
            string str = value.ToString(s);

            // Display the value
            Console.WriteLine("String value is {0}", str);
        }
        catch (FormatException e) {
            Console.WriteLine("Format is invalid.");
            Console.Write("Exception Thrown: ");
            Console.Write("{0}", e.GetType(), e.Message);
        }
    }
}

Output:

Format is invalid.
Exception Thrown: System.FormatException

Reference: https://docs.microsoft.com/en-us/dotnet/api/system.decimal.tostring?view=netframework-4.7.2
Serverless - REST API with DynamoDB
So far, we have learned several concepts related to serverless lambda deployments. Now it is time to look at some examples. In this chapter, we will look at one of the examples officially provided by Serverless. We will be creating, as the name suggests, a REST API. All our lambda functions, as you would have guessed, will be triggered by an API Gateway. Our lambda functions will interface with a dynamoDB table, which is essentially a to-do list, and the user will be able to perform several operations, like creating a new item, fetching existing items, deleting items, etc., using the endpoints that will be exposed post the deployment. If you are not familiar with REST APIs, then you can read up more about them here.

The code can be found on GitHub − https://github.com/serverless/examples/tree/master/aws-python-rest-api-with-dynamodb

We will have a look at the project structure, discuss some new concepts that we haven't seen so far, and then perform the walkthrough of the serverless.yml file. A walkthrough of all the function handlers would be redundant; therefore, we will walk through just one function handler. You can take up understanding the other functions as an exercise.

Now, if you look at the project structure, the lambda function handlers are all within separate .py files in the todos folder. The serverless.yml file specifies the todos folder in the path of each function handler. There are no external dependencies, and therefore, no requirements.txt file.

Now, there are a couple of terms that you may be seeing for the first time. Let's scan these quickly −

dynamoDB − This is a NoSQL (Not only SQL) database provided by AWS. While not exactly accurate, broadly speaking, NoSQL is to SQL what Word is to Excel. You can read more about NoSQL here. There are 4 types of NoSQL databases − document databases, key-value databases, wide-column stores, and graph databases. dynamoDB is a key-value database, meaning that you can keep inserting key-value pairs into the database. This is similar to redis cache. You can retrieve the value by referencing its key.

boto3 − This is the AWS SDK for Python. If you need to configure, manage, call, or create any service of AWS (EC2, dynamoDB, S3, etc.) within the lambda function, you need the boto3 SDK. You can read up more about boto3 here.

Apart from these, there are some concepts that we will encounter during the walkthrough of the serverless.yml and the handler function. We will discuss them there.

The serverless.yml file begins with the definition of the service.

service: serverless-rest-api-with-dynamodb

That is followed by the declaration of the framework version range through the following line −

frameworkVersion: ">=1.1.0 <=2.1.1"

This acts like a check. If your serverless version doesn't lie in this range, it will throw up an error.
This helps when you are sharing code and would want everyone using this serverless.yml file to use the same serverless version range to avoid problems.

Next, within the provider, we see two extra fields that we haven't encountered so far − environment and iamRoleStatements.

provider:
  name: aws
  runtime: python3.8
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"

Environment, as you would have guessed, is used to define environment variables. All the functions defined within this serverless.yml file can fetch these environment variables. We will see an example in the function handler walkthrough below. Over here, we are defining the dynamoDB table name as an environment variable.

The $ sign signifies a variable. The self keyword refers to the serverless.yml file itself, while opt refers to an option that we can provide during sls deploy. Thus, the table name will be the service name followed by a hyphen followed by the first stage parameter that the file finds: either one available from options during serverless deploy, or the provider stage, which is dev by default. Thus, in this case, if you don't provide any option during serverless deploy, the dynamoDB table name will be serverless-rest-api-with-dynamodb-dev. You can read more about serverless variables here.

iamRoleStatements define permissions provided to the functions. In this case, we are allowing the functions to perform the following operations on the dynamoDB table − Query, Scan, GetItem, PutItem, UpdateItem, and DeleteItem. The Resource name specifies the exact table on which these operations are allowed. If you had entered "*" in place of the resource name, you would have allowed these operations on all the tables. However, here, we want to allow these operations on just one table, and therefore, the arn (Amazon Resource Name) of this table is provided in the Resource name, using the standard arn format. Here again, the first one of either the option region (specified during serverless deploy) or the region mentioned in provider (us-east-1 by default) is used.

In the functions section, the functions are defined as per the standard format. Notice that get, update, and delete all have the same path, with id as the path parameter. However, the method is different for each.

functions:
  create:
    handler: todos/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true

  list:
    handler: todos/list.list
    events:
      - http:
          path: todos
          method: get
          cors: true

  get:
    handler: todos/get.get
    events:
      - http:
          path: todos/{id}
          method: get
          cors: true

  update:
    handler: todos/update.update
    events:
      - http:
          path: todos/{id}
          method: put
          cors: true

  delete:
    handler: todos/delete.delete
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true
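Each of these http events becomes an API Gateway endpoint once deployed. As a rough illustration (the URL below is a placeholder; the real one is printed by sls deploy), invoking the create endpoint from Python might look like this:

import json
import requests

# Placeholder URL; substitute the endpoint printed by `sls deploy`
url = "https://<api-id>.execute-api.us-east-1.amazonaws.com/dev/todos"

# POST a new todo item to the create endpoint
response = requests.post(url, data=json.dumps({"text": "Learn Serverless"}))
print(response.json())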
Later on, we come across another block that we haven't seen before, the resources block. This block basically helps you specify the resources that you will need to create, in a CloudFormation template, for the functions to work. In this case, we need to create a dynamoDB table for the functions to work. So far, we have specified the name of the table, and even referenced its ARN. But we haven't created the table. Specifying the characteristics of the table in the resources block will create that table for us.

resources:
  Resources:
    TodosDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}

There are a lot of configurations being defined here, most of them specific to dynamoDB. Briefly, we are asking serverless to create a 'TodosDynamoDbTable' of type 'DynamoDB Table', with TableName (mentioned right at the bottom) equal to the one defined in the environment variables in provider. We are setting its deletion policy to 'Retain', which means that if the stack is deleted, the resource is retained. See here. We are saying that the table will have an attribute named id, and its type will be String. We are also specifying that the id attribute will be a HASH key or a partition key. You can read up more about KeySchemas in dynamoDB tables here. Finally, we are specifying the read capacity and write capacity of the table.

That's it! Our serverless.yml file is now ready. Now, since all the function handlers are more or less similar, we will walk through just one handler, that of the create function.

We begin with a couple of import statements:

import json
import logging
import os
import time
import uuid

Next, we import boto3, which, as described above, is the AWS SDK for python. We need boto3 to interface with dynamoDB from within the lambda function.

import boto3
dynamodb = boto3.resource('dynamodb')

Next, in the actual function handler, we first check the contents of the 'events' payload (the create API uses the post method). If its body doesn't contain a 'text' key, we haven't received a valid item to be added to the todo list. Therefore, we raise an exception.

def create(event, context):
    data = json.loads(event['body'])
    if 'text' not in data:
        logging.error("Validation Failed")
        raise Exception("Couldn't create the todo item.")

Considering that we got the 'text' key as expected, we make preparations for adding it to the dynamoDB table. We fetch the current timestamp, and connect to the dynamoDB table. Notice how the environment variable defined in serverless.yml is fetched (using os.environ).

timestamp = str(time.time())
table = dynamodb.Table(os.environ['DYNAMODB_TABLE'])

Next, we create the item to be added to the table, by generating a random uuid using the uuid package, using the received data as text, setting createdAt and updatedAt to the current timestamp, and setting the field 'checked' to False. 'checked' is another field which you can update, apart from text, using the update operation.

item = {
    'id': str(uuid.uuid1()),
    'text': data['text'],
    'checked': False,
    'createdAt': timestamp,
    'updatedAt': timestamp,
}

Finally, we add the item to the dynamoDB table and return the created item to the user.

# write the todo to the database
table.put_item(Item=item)

# create a response
response = {
    "statusCode": 200,
    "body": json.dumps(item)
}
return response

With this walkthrough, I think the other function handlers will be self-explanatory. In some functions, you may see this statement − "body": json.dumps(result['Item'], cls=decimalencoder.DecimalEncoder). This is a workaround used for a bug in json.dumps. json.dumps can't handle decimal numbers by default and therefore, the decimalencoder.py file has been created to contain the DecimalEncoder class which handles this.
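For reference, such an encoder typically looks something like this (a sketch based on the description above; the actual decimalencoder.py in the repo may differ slightly):

import decimal
import json

class DecimalEncoder(json.JSONEncoder):
    # dynamoDB returns numbers as decimal.Decimal, which json.dumps
    # can't serialize by default, so convert them to float or int
    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            if obj % 1 > 0:
                return float(obj)
            return int(obj)
        return super(DecimalEncoder, self).default(obj)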
Congratulations on understanding your first comprehensive project created using serverless. The creator of the project has also shared the endpoints of his deployment and the ways to test these functions in the README file. Have a look. Head on to the next chapter to see another example.
[ { "code": null, "e": 2727, "s": 2003, "text": "So far, we have learned several concepts related to serverless lambda deployments. Now it is time to look at some examples. In this chapter, we will look at one of the examples officially provided by Serverless. We will be creating, as the name suggests, a REST API. All our lambda functions, as you would have guessed, will be triggered by an API Gateway. Our lambda functions will interface with a dynamoDB table, which is a to-do list essentially, and the user will be able to perform several operations, like creating a new item, fetching existing items, deleting items, etc. using the endpoints that will be exposed post the deployment.If you are not familiar with REST APIs, then you can read up more about them here." }, { "code": null, "e": 2846, "s": 2727, "text": "The code can be found on GitHub − https://github.com/serverless/examples/tree/master/aws-python-rest-api-with-dynamodb" }, { "code": null, "e": 3197, "s": 2846, "text": "We will have a look at the project structure, discuss some new concepts that we haven't seen so far, and then perform the walkthrough of the serverless.yml file. The walkthrough of all the function handlers will be redundant. Therefore, we will walk through just one function handler. You can take up understanding the other functions as an exercise." }, { "code": null, "e": 3490, "s": 3197, "text": "Now, if you look at the project structure, the lambda function handlers are all within separate .py files in the todos folder. The serverless.yml file specifies the todos folder in the path of each function handler. There are no external dependencies, and therefore, no requirements.txt file." }, { "code": null, "e": 3593, "s": 3490, "text": "Now, there are a couple of terms that you may be seeing for the first time. Let's scan these quickly −" }, { "code": null, "e": 4091, "s": 3593, "text": "dynamoDB − This is a NoSQL (Not only SQL) database provided by AWS. While not exactly accurate, broadly speaking, NoSQL is to SQL what Word is to Excel. You can read more about NoSQL here. There are 4 types of NoSQL databases − Document databases, key-value databases, wide-column stores, and graph databases. dynamoDB is a key-value database, meaning that you can keep inserting key-value pairs into the database. This is similar to redis cache. You can retrieve the value by referencing its key." }, { "code": null, "e": 4589, "s": 4091, "text": "dynamoDB − This is a NoSQL (Not only SQL) database provided by AWS. While not exactly accurate, broadly speaking, NoSQL is to SQL what Word is to Excel. You can read more about NoSQL here. There are 4 types of NoSQL databases − Document databases, key-value databases, wide-column stores, and graph databases. dynamoDB is a key-value database, meaning that you can keep inserting key-value pairs into the database. This is similar to redis cache. You can retrieve the value by referencing its key." }, { "code": null, "e": 4814, "s": 4589, "text": "boto3 − This is the AWS SDK for Python. If you need to configure, manage, call, or create any service of AWS (EC2, dynamoDB, S3, etc.) within the lambda function, you need the boto3 SDK.You can read up more about boto3 here." }, { "code": null, "e": 5039, "s": 4814, "text": "boto3 − This is the AWS SDK for Python. If you need to configure, manage, call, or create any service of AWS (EC2, dynamoDB, S3, etc.) within the lambda function, you need the boto3 SDK.You can read up more about boto3 here." 
}, { "code": null, "e": 5203, "s": 5039, "text": "Apart from these, there are some concepts that we will encounter during the walkthrough of the serverless.yml and the handler function. We will discuss them there." }, { "code": null, "e": 5270, "s": 5203, "text": "The serverless.yml file begins with the definition of the service." }, { "code": null, "e": 5314, "s": 5270, "text": "service: serverless-rest-api-with-dynamodb\n" }, { "code": null, "e": 5410, "s": 5314, "text": "That is followed by the declaration of the framework version range through the following line −" }, { "code": null, "e": 5447, "s": 5410, "text": "frameworkVersion: \">=1.1.0 <=2.1.1\"\n" }, { "code": null, "e": 5704, "s": 5447, "text": "This acts like a check. If your serverless version doesn't lie in this range, it will throw up an error. This helps when you are sharing code and would want everyone using this serverless.yml file to use the same serverless version range to avoid problems." }, { "code": null, "e": 5827, "s": 5704, "text": "Next, within the provider, we see two extra fields that we haven't encountered so far − environment and iamRoleStatements." }, { "code": null, "e": 6326, "s": 5827, "text": "provider:\n name: aws\n runtime: python3.8\n environment:\n DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}\n iamRoleStatements:\n - Effect: Allow\n Action:\n - dynamodb:Query\n - dynamodb:Scan\n - dynamodb:GetItem\n - dynamodb:PutItem\n - dynamodb:UpdateItem\n - dynamodb:DeleteItem\n Resource: \"arn:aws:dynamodb:${opt:region, self:provider.region}:\n *:table/${self:provider.environment.DYNAMODB_TABLE}\"" }, { "code": null, "e": 6649, "s": 6326, "text": "Environment, as you would have guessed, is used to define environment variables. All the functions defined within this serverless.yml file can fetch these environment variables. We will see an example in the function handler walkthrough below. Over here, we are defining the dynamoDB table name as an environment variable." }, { "code": null, "e": 7243, "s": 6649, "text": "The $ sign signifies a variable. The self keyword refers to the serverless.yml file itself, while opt refers to an option that we can provide during sls deploy. Thus, the table name will be the service name followed by a hyphen followed by the first stage parameter that the file finds: either one available from options during serverless deploy, or the provider stage, which is dev by default.Thus, in this case, if you don't provide any option during serverless deploy, the dynamoDB table name will be serverless-rest-api-with-dynamodb-dev. You can read more about serverless variables here." }, { "code": null, "e": 8016, "s": 7243, "text": "iamRoleStatements define permissions provided to the functions. In this case, we are allowing the functions to perform the following operations on the dynamoDB table − Query, Scan, GetItem, PutItem, UpdateItem, and DeleteItem. The Resource name specifies the exact table on which these operations are allowed.If you had entered \"*\" in place of the resource name, you would have allowed these operations on all the tables. However, here, we want to allow these operations on just one table, and therefore, the arn (Amazon Resource Name) of this table is provided in the Resource name, using the standard arn format. Here again, the first one of either the option region (specified during serverless deploy) or the region mentioned in provider (us-east-1 by default)is used." 
}, { "code": null, "e": 8226, "s": 8016, "text": "In the functions section, the functions are defined as per the standard format. Notice that get, update, delete all have the same path, with id as the path parameter. However, the method is different for each." }, { "code": null, "e": 8982, "s": 8226, "text": "functions:\n create:\n handler: todos/create.create\n events:\n - http:\n path: todos\n method: post\n cors: true\n list:\n handler: todos/list.list\n events:\n - http:\n path: todos\n method: get\n cors: true\n get:\n handler: todos/get.get\n events:\n - http:\n path: todos/{id}\n method: get\n cors: true\n\n update:\n handler: todos/update.update\n events:\n - http:\n path: todos/{id}\n method: put\n cors: true\n delete:\n handler: todos/delete.delete\n events:\n - http:\n path: todos/{id}\n method: delete\n cors: true" }, { "code": null, "e": 9497, "s": 8982, "text": "Later on, we come across another block that we haven't seen before, the resources block. This block basically helps you specify the resources that you will need to create, in a CloudFormation template, for the functions to work. In this case, we need to create a dynamoDB table for the functions to work. So far, we have specified the name of the table, and even referenced its ARN. But we haven't created the table. Specifying the characteristics of the table in the resources block will create that table for us." }, { "code": null, "e": 10011, "s": 9497, "text": "resources:\n Resources:\n TodosDynamoDbTable:\n Type: 'AWS::DynamoDB::Table'\n DeletionPolicy: Retain\n Properties:\n AttributeDefinitions:\n -\n AttributeName: id\n AttributeType: S\n KeySchema:\n -\n AttributeName: id\n KeyType: HASH\n ProvisionedThroughput:\n ReadCapacityUnits: 1\n WriteCapacityUnits: 1\n TableName: ${self:provider.environment.DYNAMODB_TABLE}" }, { "code": null, "e": 10746, "s": 10011, "text": "There are a lot of configurations being defined here, most of them specific to dynamoDB. Briefly, we are asking serverless to create a 'TodosDynamoDbTable', or type 'DynamoDB Table', with TableName (mentioned right at the bottom) equal to the one defined in environment variables in provider. We are setting its deletion policy to 'Retain', which means that if the stack is deleted, the resource is retained. See here. We are saying that the table will have an attribute named id, and its type will be String. We are also specifying that the id attribute will be a HASH key or a partition key. You can read up more about KeySchemas in dynamoDB tables here. Finally, we are specifying the read capacity and write capacity of the table." }, { "code": null, "e": 10926, "s": 10746, "text": "That's it! Our serverless.yml file is now ready. Now, since all the function handlers are more or less similar, we will walk through just one handler, that of the create function." }, { "code": null, "e": 10970, "s": 10926, "text": "We being with a couple of import statements" }, { "code": null, "e": 11031, "s": 10970, "text": "import json\nimport logging\nimport os\nimport time\nimport uuid" }, { "code": null, "e": 11182, "s": 11031, "text": "Next, we import boto3, which, as described above, is the AWS SDK for python. We need boto3 to interface with dynamoDB from within the lambda function." }, { "code": null, "e": 11234, "s": 11182, "text": "import boto3\ndynamodb = boto3.resource('dynamodb')\n" }, { "code": null, "e": 11498, "s": 11234, "text": "Next, in the actual function handler, we first check the contents of the 'events' payload (create API uses the post method). 
If its body doesn't contain a 'text' key, we haven't received a valid item to be added to the todo list. Therefore, we raise an exception." }, { "code": null, "e": 11685, "s": 11498, "text": "def create(event, context):\n data = json.loads(event['body'])\n if 'text' not in data:\n logging.error(\"Validation Failed\")\n raise Exception(\"Couldn't create the todo item.\")" }, { "code": null, "e": 11954, "s": 11685, "text": "Considering that we got the 'text' key as expected, we make preparations for adding it to the dynamoDB table. We fetch the current timestamp, and connect to the dynamoDB table. Notice how the environment variable defined in serverless.yml is fetched (using os.environ)" }, { "code": null, "e": 12037, "s": 11954, "text": "timestamp = str(time.time())\ntable = dynamodb.Table(os.environ['DYNAMODB_TABLE'])\n" }, { "code": null, "e": 12366, "s": 12037, "text": "Next, we create the item to be added to the table, by generating a random uuid using the uuid package, using the received data as text, setting createdAt and updatedAt to the current timestamp,and setting the field 'checked' to False. 'checked' is another field which you can update, apart from text, using the update operation." }, { "code": null, "e": 12505, "s": 12366, "text": "item = {\n 'id': str(uuid.uuid1()),\n 'text': data['text'],\n 'checked': False,\n 'createdAt': timestamp,\n 'updatedAt': timestamp,\n}" }, { "code": null, "e": 12593, "s": 12505, "text": "Finally, we add the item to the dynamoDB table and return the created item to the user." }, { "code": null, "e": 12754, "s": 12593, "text": "# write the todo to the database\ntable.put_item(Item=item)\n\n# create a response\nresponse = {\n \"statusCode\": 200,\n \"body\": json.dumps(item)\n}\nreturn response" }, { "code": null, "e": 13176, "s": 12754, "text": "With this walkthrough, I think the other function handlers will be self-explanatory. In some functions, you may see this statement − \"body\" − json.dumps(result['Item'], cls=decimalencoder.DecimalEncoder). This is a workaround used for a bug in json.dumps. json.dumps can't handle decimal numbers by default and therefore, the decimalencoder.py file has been created to contain the DecimalEncoder class which handles this." }, { "code": null, "e": 13465, "s": 13176, "text": "Congratulations on understanding your first comprehensive project created using serverless. The creator of the project has also shared the endpoints of his deployment and the ways to test these functions in the README file. Have a look. Head on to the next chapter to see another example." }, { "code": null, "e": 13500, "s": 13465, "text": "\n 44 Lectures \n 7.5 hours \n" }, { "code": null, "e": 13528, "s": 13500, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 13561, "s": 13528, "text": "\n 31 Lectures \n 3 hours \n" }, { "code": null, "e": 13581, "s": 13561, "text": " Harshit Srivastava" }, { "code": null, "e": 13614, "s": 13581, "text": "\n 25 Lectures \n 1 hours \n" }, { "code": null, "e": 13634, "s": 13614, "text": " Skillbakerystudios" }, { "code": null, "e": 13668, "s": 13634, "text": "\n 142 Lectures \n 9 hours \n" }, { "code": null, "e": 13699, "s": 13668, "text": " Sundar Singh, Naveen Selvaraj" }, { "code": null, "e": 13732, "s": 13699, "text": "\n 45 Lectures \n 1 hours \n" }, { "code": null, "e": 13749, "s": 13732, "text": " Santiago Esteva" }, { "code": null, "e": 13756, "s": 13749, "text": " Print" }, { "code": null, "e": 13767, "s": 13756, "text": " Add Notes" } ]
A complete guide to AI accelerators for deep learning inference — GPUs, AWS Inferentia and Amazon Elastic Inference | by Shashank Prasanna | Towards Data Science
An AI accelerator is a dedicated processor designed to accelerate machine learning computations. Machine learning, and particularly its subset, deep learning, is primarily composed of a large number of linear algebra computations (i.e., matrix-matrix and matrix-vector operations), and these operations can be easily parallelized. AI accelerators are specialized hardware designed to accelerate these basic machine learning computations and improve performance, reduce latency and reduce the cost of deploying machine learning based applications.

Let's say you have an ML model as part of your software application. The prediction step (or inference) is often the most time consuming part of your application that directly affects user experience. A model that takes several hundreds of milliseconds to generate text translations or apply filters to images or generate product recommendations can drive users away from your "sluggish", "slow", "frustrating to use" app. By speeding up inference, you can reduce the overall application latency and deliver an app experience that can be described as "smooth", "snappy", and "delightful to use". And you can speed up inference by offloading ML model prediction computation to an AI accelerator.

With great market need comes a great many product alternatives, so naturally there is more than one way to accelerate your ML models in the cloud. In this blog post, I'll explore three popular options:

GPUs: Particularly, the high-performance NVIDIA T4 and NVIDIA V100 GPUs
AWS Inferentia: A custom designed machine learning inference chip by AWS
Amazon Elastic Inference (EI): An accelerator that saves cost by giving you access to variable-size GPU acceleration, for models that don't need a dedicated GPU

Choosing the right type of hardware acceleration for your workload can be a difficult choice to make. Through the rest of this post, I'll walk you through various considerations such as target throughput, latency, cost budget, model type and size, choice of framework, and others to help you make your decision. I'll also present plenty of code examples and discuss developer friendliness and ease of use with each option.

Disclaimer: Opinions and recommendations in this article are my own and do not reflect the views of my current or past employers.

In the early days of computing (in the 70s and 80s), to speed up math computations on your computer, you paired a CPU (Central Processing Unit) with an FPU (floating-point unit), aka a math coprocessor. The idea was simple: allow the CPU to offload complex floating point mathematical operations to a specially designed chip, so that the CPU could focus on executing the rest of the application program, running the operating system, etc. Since the system had different types of processors (the CPU and the math coprocessor), the setup was sometimes referred to as heterogeneous computing.

Fast forward to the 90s, and CPUs got faster, better and more efficient, and started to ship with integrated floating-point hardware. The simpler system prevailed, and coprocessors and heterogeneous computing fell out of fashion for the regular user. Around the same time, specific types of workloads started to get more complex.
Designers demanded better graphics; engineers and scientists demanded faster computers for data processing, modeling and simulations. This meant there was some need (and a market) for high-performance processors that could accelerate "special programs" much faster than a CPU, freeing up the CPU to do other things. Computer graphics was an early example of a workload being offloaded to a special processor. You may know this special processor by its common name, the venerable GPU.

The early 2010s saw yet another class of workloads (deep learning, or machine learning with deep neural networks) that needed hardware acceleration to be viable, much like computer graphics. GPUs were already in the market and over the years had become highly programmable, unlike the early GPUs, which were fixed function processors. Naturally, ML practitioners started using GPUs to accelerate deep learning training and inference.

Today's deep learning inference acceleration landscape is much more interesting. CPUs acquired support for advanced vector extensions (AVX-512) to accelerate matrix math computations common in deep learning. GPUs acquired new capabilities such as support for reduced precision arithmetic (FP16 and INT8), further accelerating inference. In addition to CPUs and GPUs, today you also have access to specialized hardware, with custom designed silicon built just for deep learning inference. These specialized processors, also called Application Specific Integrated Circuits or ASICs, can be far more performant and cost effective compared to general purpose processors, if your workload is supported by the processor. A great example of such a specialized processor is AWS Inferentia, a custom-designed ASIC by AWS for accelerating deep learning inference.

The right choice of hardware acceleration for your application may not be obvious at first. In the next section, we'll discuss the benefits of each approach and considerations such as throughput, latency, cost and other factors that will affect your choice.

It's hard to answer general questions such as "is a GPU better than a CPU?" or "is a CPU cheaper than a GPU?" or "is an ASIC always faster than a GPU?". There really isn't a single hardware solution that works well for every use case, and the answer depends on your workload and several considerations:

Model type and programmability: model size, custom operators, supported frameworks
Target throughput, latency and cost: deliver good customer experience at a budget
Ease of use of compiler and runtime toolchain: should have a fast learning curve and shouldn't require hardware knowledge

While considerations such as model support and target latency are objective, ease of use can be very subjective. Therefore, I caution against general recommendations that don't consider all of the above for your specific application. Such high level recommendations tend to be biased. Let's review these key considerations.

One way to categorize AI accelerators is based on how programmable they are. On the "fully programmable" end of the spectrum there are CPUs. As general purpose processors, they let you write pretty much any custom code for your machine learning model, with custom layers, architectures and operations.
On the other end of the spectrum are ASICs such as AWS Inferentia, which have a fixed set of supported operations exposed via its AWS Neuron SDK compiler. Somewhere in between, but closer to ASICs, are GPUs, which are far more programmable than ASICs but far less general purpose than CPUs. There is always going to be some trade off between being general purpose and delivering performance.

If you're pushing the boundaries of deep learning research with custom neural network operations, you may need to author custom code for custom operations, and you'd typically do this in high level languages like Python. Most AI accelerators can't automatically accelerate custom code written in high level languages, and therefore that piece of code will fall back to CPUs for execution, reducing the overall inference performance. NVIDIA GPUs have the advantage that if you want more performance out of your custom code, you can reimplement it using the CUDA programming language and run it on GPUs. But if your ASIC's compiler doesn't support the operations you need, then CPU fallback may result in lower performance.

In general, specialized processors such as AWS Inferentia tend to offer a lower price/performance ratio and improved latency vs. general purpose processors. But in the world of AI acceleration, all solutions can be competitive, depending on the type of workload. GPUs are throughput processors and can deliver high throughput for a specified latency. If latency is not critical (batch processing, offline inference), then GPU utilization can be kept high, making them the most cost effective option in the cloud. CPUs are not parallel throughput devices, but for real time inference of smaller models, CPUs can be the most cost effective, as long as the inference latency is under your target latency budget. AWS Inferentia's performance and lower cost could make it the most cost effective and performant option vs. both CPUs and GPUs, if your model is fully supported by the AWS Neuron SDK compiler for acceleration on AWS Inferentia.

This is indeed a nuanced topic and is very workload dependent. In the subsequent sections, we'll take a closer look at performance, latency and cost for each accelerator. If a specific choice doesn't work for you, no problem; it's easy to switch options in the cloud till you find the right one for you.

To accelerate your models on AI accelerators, you typically have to go through a compilation step that analyzes the computational graph and optimizes it for the target hardware to get the best performance. When deploying on a CPU, the deep learning framework has everything you need, so additional SDKs and compilers are typically not required. If you're deploying to a GPU, you can rely on a deep learning framework to accelerate your model for inference, but you'll be leaving performance on the table. To get the most out of your GPU, you'll have to use a dedicated inference compiler such as NVIDIA TensorRT. In some cases, you can get over 10 times extra performance vs. using the deep learning framework alone. We'll see later in the code examples section how you can reproduce these results.

NVIDIA TensorRT is two things: an inference compiler and a runtime engine. By compiling your model with TensorRT, you can get better performance and lower latency, since it performs a number of optimizations such as graph optimization and quantization. Likewise, when targeting AWS Inferentia, the AWS Neuron SDK compiler will perform similar optimizations to get the most out of your AWS Inferentia processor.
Let's dig a little deeper into each of these AI accelerator options.

You train your model on GPUs, so it's natural to consider GPUs for inference deployment. After all, GPUs substantially speed up deep learning training, and inference is just the forward pass of your neural network that's already accelerated on GPU. This is true, and GPUs are indeed an excellent hardware accelerator for inference.

First, let's talk about what GPUs really are. GPUs are first and foremost throughput processors, as this blog post from NVIDIA explains. They were designed to exploit the inherent parallelism in algorithms and accelerate them by computing them in parallel. GPUs started out as specialized processors for computer graphics, but today's GPUs have evolved into programmable processors, also called General Purpose GPUs (GPGPU). They are still specialized parallel processors, but also highly programmable for a narrow range of applications which can be accelerated with parallelization.

As it turns out, the high-performance computing (HPC) community had been using GPUs to accelerate linear algebra calculations long before deep learning. Deep neural network computations are primarily composed of similar linear algebra computations, so a GPU for deep learning was a solution looking for a problem. It is no surprise that Alex Krizhevsky's AlexNet deep neural network, which won the ImageNet 2012 competition and (re)introduced the world to deep learning, was trained on readily available, programmable consumer GPUs by NVIDIA.

GPUs have gotten much faster since then, and I'll refer you to NVIDIA's website for their latest training and inference benchmarks for popular models. While these benchmarks are a good indication of what a GPU is capable of, your decision may hinge on the other considerations discussed below.

Since GPUs are throughput devices, if your objective is to maximize sheer throughput, they can deliver best in class throughput per desired latency, depending on the GPU type and the model being deployed. An example of a use-case where GPUs absolutely shine is offline or batch inference. GPUs will also deliver some of the lowest latencies for prediction for small batches, but if you are unable to keep your GPU utilization at its maximum at all times, due to, say, sporadic inference requests (fluctuating customer demand), your cost per inference request goes up (because you are delivering fewer requests for the same GPU instance cost). For these situations you're better off using Amazon Elastic Inference, which lets you access just enough GPU acceleration for lower cost. In the example section we'll see a comparison of GPU performance across different precisions (FP32, FP16, INT8).

On AWS you can launch 18 different Amazon EC2 GPU instances with different NVIDIA GPUs, numbers of vCPUs, system memory and network bandwidth. Two of the most popular GPUs for deep learning inference are the NVIDIA T4 GPUs offered by the G4 EC2 instance type and the NVIDIA V100 GPUs offered by the P3 EC2 instance type. For a full summary of all GPU instance types on AWS, read my earlier blog post: Choosing the right GPU for deep learning on AWS

The G4 instance type should be the go-to GPU instance for deep learning inference deployment. Based on the NVIDIA Turing architecture, NVIDIA T4 GPUs feature FP64, FP32, FP16, Tensor Cores (mixed-precision), and INT8 precision types. They also have 16 GB of GPU memory, which can be plenty for most models, especially when combined with reduced precision support.
If you need more throughput or more memory per GPU, then P3 instance types offer a more powerful NVIDIA V100 GPU, and with the p3dn.24xlarge instance size, you can get access to an NVIDIA V100 with up to 32 GB of GPU memory for large models, large images or other large datasets.

Unlike ASICs such as AWS Inferentia, which are fixed function processors, a developer can use NVIDIA's CUDA programming model to code up custom layers that can be accelerated on an NVIDIA GPU. This is exactly what Alex Krizhevsky did with AlexNet in 2012. He hand coded custom CUDA kernels to train his neural network on a GPU. He called his framework cuda-convnet, and you could say cuda-convnet was the very first deep learning framework. If you're pushing the boundary of deep learning and don't want to leave performance on the table, a GPU is the best option for you. Programmability with performance is one of GPUs' greatest strengths.

Of course, you don't need to write low-level GPU code to do deep learning. NVIDIA has made neural network primitives available via libraries such as cuDNN and cuBLAS, and deep learning frameworks such as TensorFlow, PyTorch and MXNet use these libraries under the hood, so you get GPU acceleration for free by simply using these frameworks. This is why GPUs score high marks for ease of use and programmability.

If you really want to get the best performance out of your GPUs, NVIDIA offers TensorRT, a model compiler for inference deployment. It performs additional optimizations on a trained model, and a full list is available on NVIDIA's TensorRT website. The key optimizations to note are:

Quantization: reduce model precision from FP32 (single precision) to FP16 (half precision) or INT8 (8-bit integer precision), thereby speeding up inference due to the reduced amount of computation

Graph fusion: fuse multiple layers/ops into a single function call to a CUDA kernel on the GPU. This reduces the overhead of multiple function calls for each layer/op

Deploying with FP16 is straightforward with NVIDIA TensorRT. The TensorRT compiler will automatically quantize your models during the compilation step. To deploy with INT8 precision, the weights and activations of the model need to be quantized so that floating point values can be converted into integers using appropriate ranges. You have two options.

Option 1: Perform quantization aware training. In quantization aware training, the error from quantizing weights and tensors to INT8 is modeled during training, allowing the model to adapt and mitigate this error. This requires additional setup during training.

Option 2: Perform post training quantization. In post-training quantization, no pre-deployment preparation is required. You will provide a trained model in full precision (FP32), and you will also need to provide a sample from your training dataset that the TensorRT compiler can use to run a calibration step to generate quantization ranges. In Example 1 below, we'll take a look at implementing Option 2.
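Before that, to build intuition for what INT8 quantization does, here is a toy illustration (independent of TensorRT, with made-up values; real compilers use calibrated per-tensor or per-channel ranges):

import numpy as np

x = np.array([0.12, -1.5, 3.2, -0.7], dtype=np.float32)

# Pick a scale so the largest magnitude maps to the edge of the INT8 range
scale = np.abs(x).max() / 127.0

# Quantize: float32 -> int8
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Dequantize: int8 -> approximate float32
x_hat = q.astype(np.float32) * scale

print(q)      # integer representation
print(x_hat)  # close to x, within quantization error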
Dataset: ImageNet Validation dataset with 50000 test images, converted to TFRecord
Model: TensorFlow implementation of ResNet50

You can find the full implementation for the examples below on this Jupyter Notebook:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/gpu-tf-tensorrt-resnet50.ipynb

TensorFlow’s native GPU acceleration support just works out of the box, with no additional setup. You won’t get the additional performance you can get with NVIDIA TensorRT, but you can’t argue with how easy life becomes when things just work.

Running inference with the framework’s native GPU support takes all of 3 lines of code:

model = tf.keras.models.load_model(saved_model_dir)
for i, (validation_ds, batch_labels, _) in enumerate(dataset):
    pred_prob_keras = model(validation_ds)

But you’re really leaving performance on the table (sometimes 10x the performance). To increase the performance and utilization of your GPU, you have to use an inference compiler and runtime like NVIDIA TensorRT.

The following code shows how to compile your model with TensorRT. You can find the full implementation on GitHub.

TensorRT compilation has the following steps:

Provide TensorRT’s TrtGraphConverterV2 (for TensorFlow 2) with your uncompiled TensorFlow saved model

Specify TensorRT compilation parameters. The most important parameter is the precision (FP32, FP16, INT8). If you’re compiling with INT8 support, TensorRT expects you to provide it with a representative sample from your training set to calibrate scaling factors. You’ll do this by providing a python generator to the argument calibration_input_fn when you call converter.convert(). You don’t need to provide additional data for FP32 and FP16 optimizations.

TensorRT compiles your model and saves it as a TensorFlow saved model that includes special TensorRT operators which accelerate inference on GPU and run it more efficiently.
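As a reference for these steps, here is a minimal sketch of the conversion; the directory names are placeholders and the INT8 calibration generator is an illustrative assumption (the notebook above has the full implementation):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Steps 1 and 2: point the converter at the saved model and set the precision
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',
    conversion_params=params)

# For INT8 you would instead pass a generator that yields calibration batches:
# converter.convert(calibration_input_fn=calibration_input_fn)
converter.convert()

# Step 3: save the result as a TensorFlow saved model with TensorRT operators
converter.save('resnet50_saved_model_trt_fp16')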
Below is a comparison of the accuracy and performance of TensorFlow ResNet50 inference with:

TensorFlow native GPU acceleration

TensorFlow + TensorRT FP32 precision

TensorFlow + TensorRT FP16 precision

TensorFlow + TensorRT INT8 precision

I measured not just performance but also accuracy, since reducing precision means there is information loss. On the ImageNet test dataset we see negligible loss in accuracy across all precisions, with a minor boost in throughput. Your mileage may vary for your model.

In Example 1, we tested the performance offline, but in most cases you’ll be hosting your model in the cloud as an endpoint that client applications can submit inference requests to. One of the simplest ways of doing this is to use Amazon SageMaker hosting capabilities.

This example was tested on an Amazon SageMaker Studio Notebook. Run this notebook using the following Amazon SageMaker Studio conda environment: TensorFlow 2 CPU Optimized. The full implementation is available here:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/sagemaker-tf-cpu-gpu-ei-resnet50.ipynb

Hosting a model endpoint with SageMaker involves the following simple steps:

Create a tar.gz archive file using your TensorFlow saved model and upload it to Amazon S3

Use the Amazon SageMaker SDK API to create a TensorFlowModel object

Deploy the TensorFlowModel object to a G4 EC2 instance with an NVIDIA T4 GPU

Create model.tar.gz with the TensorFlow saved model:

$ tar cvfz model.tar.gz -C resnet50_saved_model .

Then upload the model to S3, deploy it, and test it by invoking the endpoint.
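The notebook linked above contains the exact code; a minimal sketch of these steps with the SageMaker Python SDK looks roughly like this (the framework version, key prefix and input batch are illustrative assumptions):

import sagemaker
from sagemaker.tensorflow import TensorFlowModel

sess = sagemaker.Session()
role = sagemaker.get_execution_role()

# Upload the model archive to S3
model_data = sess.upload_data('model.tar.gz', key_prefix='resnet50')

# Create a TensorFlowModel object and deploy it to a G4 instance
model = TensorFlowModel(model_data=model_data, role=role,
                        framework_version='2.3')
predictor = model.deploy(initial_instance_count=1,
                         instance_type='ml.g4dn.xlarge')

# Invoke the endpoint; image_batch is assumed to be a numpy array
# of preprocessed images prepared earlier
response = predictor.predict(image_batch)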
AWS Inferentia is a custom silicon designed by Amazon for cost-effective, high-throughput, low latency inference.

James Hamilton (VP and Distinguished Engineer at AWS) goes into further depth about ASICs, general purpose processors, AWS Inferentia and the economics surrounding them in his blog post: AWS Inferentia Machine Learning Processor, which I encourage you to read if you’re interested in AI hardware.

The idea of using specialized processors for specialized workloads is not new. The chip in your noise cancelling headphones and the video decoder in your DVD player are examples of specialized chips, sometimes also called Application Specific Integrated Circuits (ASICs). ASICs have one job (or limited responsibilities) and are optimized to do it well. Unlike general purpose processors (CPUs) or programmable accelerators (GPUs), large parts of the silicon are not dedicated to running arbitrary code.

AWS Inferentia was purpose built to offer high inference performance at the lowest cost in the cloud. AWS Inferentia chips can be accessed via the Amazon EC2 Inf1 instances, which come in different sizes with 1 AWS Inferentia chip per instance all the way up to 16 AWS Inferentia chips per instance. Each AWS Inferentia chip has 4 NeuronCores and supports FP16, BF16 and INT8 data types. Each NeuronCore is a high-performance systolic-array matrix-multiply engine with a two-stage memory hierarchy and a very large on-chip cache.

In most cases, AWS Inferentia might be the best AI accelerator for your use case, if your model:

Was trained in MXNet, TensorFlow, PyTorch or has been converted to ONNX

Has operators that are supported by the AWS Neuron SDK

If you have operators not supported by the AWS Neuron SDK, you can still deploy your model successfully on Inf1 instances, but those operations will run on the host CPU and won’t be accelerated on AWS Inferentia. As I stated earlier, every use case is different, so compile your model with the AWS Neuron SDK and measure performance to make sure it meets your performance, latency and throughput needs.

AWS has compared the performance of AWS Inferentia vs. GPU instances and reports lower cost for popular models such as YOLOv4 and OpenPose, and has provided examples for BERT and SSD for TensorFlow, MXNet and PyTorch. For real-time applications, AWS Inf1 instances are amongst the least expensive of all the acceleration options available on AWS, and AWS Inferentia can deliver higher throughput at target latency and at lower cost compared to GPUs and CPUs. Ultimately your choice may depend on other factors discussed below.

The AWS Inferentia chip supports a fixed set of neural network operators exposed via the AWS Neuron SDK. When you compile a model to target AWS Inferentia using the AWS Neuron SDK, the compiler will check your model for supported operators for your framework. If an operator isn’t supported, or if the compiler determines that a specific operator is more efficient to execute on CPU, it’ll partition the graph to include CPU partitions and AWS Inferentia partitions. The same is also true for Amazon Elastic Inference, which we’ll discuss in the next section. If you’re using TensorFlow with AWS Inferentia, here is a list of all TensorFlow ops accelerated on AWS Inferentia.

If you trained your model in FP32 (single precision), the AWS Neuron SDK compiler will automatically cast your FP32 model to BF16 to improve inference performance. If, instead, you prefer to provide a model in FP16, either by training in FP16 or by performing post-training quantization, the AWS Neuron SDK will directly use your FP16 weights. While INT8 is supported by the AWS Inferentia chip, the AWS Neuron SDK compiler currently does not provide a way to deploy with INT8 support.

In most cases, the AWS Neuron SDK makes AWS Inferentia really easy to use. A key difference in the user experience of using AWS Inferentia and GPUs is that AWS Inferentia lets you have more control over how each core is used.

The AWS Neuron SDK supports two ways to improve performance by utilizing all the NeuronCores: (1) batching and (2) pipelining. Since the AWS Neuron SDK compiler is an ahead-of-time compiler, you have to enable these options explicitly during the compilation stage.

Let’s take a look at what these are and how they work.

When you compile a model with the AWS Neuron SDK compiler with batch_size greater than one, batching is enabled. During inference your model weights are stored in external memory, and as the forward pass is initiated, a subset of layer weights, as determined by the neuron runtime, is copied to the on-chip cache. With the weights of this layer on the cache, the forward pass is computed on the entire batch.

After that, the next set of layer weights is loaded into the cache, and the forward pass is computed on the entire batch. This process continues until all weights are used for inference computations. Batching allows for better amortization of the cost of reading weights from the external memory by running inference on large batches while the layers are still in cache.

All of this happens behind the scenes, and as a user you just have to set a desired batch size, using an example input, during compilation.

Even though the batch size is set at the compilation phase, with dynamic batching enabled, the model can accept variable sized batches. Internally the neuron runtime will break down the user batch size into compiled batch sizes and run inference.
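For example, a compilation call with batching enabled might look like the sketch below, assuming the TensorFlow flavor of the Neuron SDK and illustrative directory names:

import tensorflow.neuron as tfn

# Compile the saved model for AWS Inferentia with a compiled batch size of 5;
# dynamic_batch_size lets the runtime accept variable batch sizes at inference
tfn.saved_model.compile('resnet50_saved_model',
                        'resnet50_inf1_batch5',
                        batch_size=5,
                        dynamic_batch_size=True)

# For pipelining (discussed next), you would additionally pass
# compiler_args=['--num-neuroncores', '4']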
During batching, model weights are loaded to the on-chip cache from the external memory layer by layer. With pipelining, you can load the entire model weights into the on-chip cache of multiple cores. This can reduce latency, since the neuron runtime does not have to load the weights from external memory. Again, all of this happens behind the scenes; as a user you just set the desired number of cores using --num-neuroncores during the compilation phase.

Batching and pipelining can be used together. However, you have to try different combinations of pipelining cores and compiled batch sizes to determine what works best for your model. During the compilation step, all combinations of batch sizes and number of neuron cores (for pipelining) may not work. You will have to determine the working combinations of batch size and number of neuron cores by running a sweep of different values and monitoring compiler errors.

Depending on how you compiled your model, you can either:

Compile your model to run on a single NeuronCore with a specific batch size

Compile your model by pipelining to multiple NeuronCores with a specific batch size

The lowest cost Amazon EC2 Inf1 instance type, inf1.xlarge, has 1 AWS Inferentia chip with 4 NeuronCores. If you compiled your model to run on a single NeuronCore, tensorflow-neuron will automatically perform data parallel execution on all 4 NeuronCores. This is equivalent to replicating your model 4 times, loading it into each NeuronCore, and running 4 Python threads to feed input data to each core. Automatic data parallel execution does not work beyond 1 AWS Inferentia chip. If you want to replicate your model to all 16 NeuronCores on an inf1.6xlarge, for example, you have to spawn multiple threads to feed all AWS Inferentia chips with data. In Python you can use concurrent.futures.ThreadPoolExecutor.

When you compile a model for multiple NeuronCores, the runtime will allocate different subgraphs to each NeuronCore (screenshot by author)

The AWS Neuron SDK allows you to group NeuronCores into logical groups. Each group could have 1 or more NeuronCores and could run a different model. For example, if you’re deploying on an inf1.6xlarge EC2 Inf1 instance, you have access to 4 Inferentia chips with 4 NeuronCores each, i.e. a total of 16 NeuronCores. You could divide the 16 NeuronCores into, let’s say, 3 groups. Group 1 has 8 NeuronCores and will run a model that uses pipelining across all 8 cores. Group 2 uses 4 NeuronCores and runs 4 copies of a model compiled with 1 neuron core. Group 3 uses 4 NeuronCores and runs 2 copies of a model compiled with 2 neuron cores with pipelining. You can specify this configuration using the NEURONCORE_GROUP_SIZES environment variable, and you’d set it to NEURONCORE_GROUP_SIZES=8,1,1,1,1,2,2

After that you simply have to load the models in the specified sequence within a single Python process, i.e. load the model that’s compiled to use 8 cores first, then load the model that’s compiled to use 1 core four times, and then load the model that’s compiled to use 2 cores two times. The appropriate cores will be assigned to each model.
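A sketch of what that could look like in practice (the serving script name is a placeholder):

# Pin the NeuronCore groups before starting the Python process
$ export NEURONCORE_GROUP_SIZES=8,1,1,1,1,2,2

# Inside that process, load models in the matching order: the 8-core
# pipelined model first, then four copies of the 1-core model, then
# two copies of the 2-core model
$ python serve_models.py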
The AWS Neuron SDK comes pre-installed on the AWS Deep Learning AMI, and you can also install the SDK and the neuron-accelerated frameworks and libraries yourself: TensorFlow, TensorFlow Serving, TensorBoard (with neuron support), MXNet and PyTorch.

The following examples were tested on Amazon EC2 Inf1.xlarge and Deep Learning AMI (Ubuntu 18.04) Version 35.0.

You can find the full implementation for the examples below on this Jupyter Notebook:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/inf1-neuron-sdk-resnet50.ipynb

In this example I compare 3 different options:

No batching, no pipelining: Compile ResNet50 model with batch size = 1 and number of cores = 1

With batching, no pipelining: Compile ResNet50 model with batch size = 5 and number of cores = 1

No batching, with pipelining: Compile ResNet50 model with batch size = 1 and number of cores = 4

You can find the full implementation in this Jupyter Notebook. I’ll just review the results here.

The comparison below shows that you get the best throughput with option 2 (batching enabled, no pipelining) on Inf1.xlarge instances. You can repeat this experiment with other combinations on larger Inf1 instances.

Amazon Elastic Inference (EI) allows you to add cost-effective, variable-size GPU acceleration to a CPU-only instance without provisioning a dedicated GPU instance. To use Amazon EI, you simply provision a CPU-only instance such as the Amazon EC2 C5 instance type, and choose from 6 different EI accelerator options at launch.

The EI accelerator is not part of the hardware that makes up your CPU instance; instead, the EI accelerator is attached through the network using an AWS PrivateLink endpoint service which routes traffic from your instance to the Elastic Inference accelerator configured with your instance. All of this happens seamlessly behind the scenes when you use an EI enabled serving framework such as TensorFlow Serving.

Amazon EI uses GPUs to provide GPU acceleration, but unlike dedicated GPU instances, you can choose to add GPU acceleration that comes in 6 different accelerator sizes, selected by Tera (trillion) Floating Point Operations per Second (TFLOPS) or GPU memory.

As I discussed earlier, GPUs are primarily throughput devices, and when dealing with smaller batches, common with real-time applications, GPUs tend to get underutilized when you deploy models that don’t need the full processing power or full memory of a GPU. Also, if you don’t have sufficient demand or multiple models to serve and share the GPU, then a single GPU may not be cost effective, as cost per inference would go up.

You can choose from 6 different EI accelerators that offer 1–4 TFLOPS and 1–8 GB of GPU memory. Let’s say you have a less computationally demanding model with a small memory footprint; you can attach the smallest EI accelerator, such as eia1.medium, which offers 1 TFLOPS of FP32 performance and 1 GB of GPU memory, to a CPU instance. If you have a more demanding model, you could attach an eia2.xlarge EI accelerator with 4 TFLOPS performance and 8 GB GPU memory to a CPU instance.

The cost of the CPU instance + EI accelerator would still be cheaper than a dedicated GPU instance, and can lower inference costs. You don’t have to worry about maximizing the utilization of your GPU since you’re adding just enough capacity to meet demand, without over-provisioning.

Let’s consider the following hypothetical scenario. Let’s say your application can deliver a good customer experience if your total latency (app + network + model predictions) is under 200 ms.
And let’s say, with a G4 instance type, you can get total latency down to 40 ms, which is well within your target latency. You’ve also tried deploying with a CPU-only C5 instance type, but you can only get total latency to 400 ms, which does not meet your SLA requirements and results in poor customer experience.

With Elastic Inference, you can network-attach just enough GPU acceleration to a CPU instance. After exploring different EI accelerator sizes (say eia2.medium, eia2.large, eia2.xlarge), you find you can get your total latency down to 180 ms with an eia2.large EI accelerator, which is under the desired 200 ms mark. Since EI is significantly cheaper than provisioning a dedicated GPU instance, you save on your total deployment costs.

Since the GPU acceleration is added via the network, EI adds some latency compared to a dedicated GPU instance, but it will still be faster than a CPU-only instance, and more cost-effective than a dedicated GPU instance. A dedicated GPU instance will still deliver better inference performance vs. EI, but if the extra performance doesn’t improve your customer experience, with EI you will stay under the target latency SLA, deliver a good customer experience, and save on overall deployment costs. AWS has a number of blog posts that talk about performance and cost savings compared to CPUs and GPUs using popular deep learning frameworks.

Amazon EI supports models trained in TensorFlow, Apache MXNet and PyTorch, as well as ONNX models. After you launch an Amazon EC2 instance with Amazon EI attached, to access the accelerator you need an EI enabled framework such as TensorFlow, PyTorch or Apache MXNet.

EI enabled frameworks come pre-installed on the AWS Deep Learning AMI, but if you prefer installing them manually, a Python wheel file has also been made available.

Most popular models such as Inception, ResNet, SSD, RCNN and GNMT have been tested to deliver cost saving benefits when deployed with Amazon EI. If you’re deploying a custom model with custom operators, the EI enabled framework partitions the graph to run unsupported operators on the host CPU and all supported ops on the EI accelerator attached via the network. This makes using EI very simple.

This example was tested on an Amazon EC2 c5.2xlarge instance using the following AWS Deep Learning AMI: Deep Learning AMI (Ubuntu 18.04) Version 35.0

You can find the full implementation on this Jupyter Notebook here:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/ei-tensorflow-resnet50.ipynb

Amazon EI enabled TensorFlow offers APIs that let you accelerate your models using EI accelerators and behave just like the TensorFlow API. As a developer, you have to make minimal code changes. To load the model, you just have to run the following code:

from ei_for_tf.python.predictor.ei_predictor import EIPredictor

eia_model = EIPredictor(saved_model_dir, accelerator_id=0)

If you have more than one EI accelerator attached to your instance, you can specify which one to use with the accelerator_id argument. Simply replace your TensorFlow model object with eia_model and the rest of your script remains the same, and your model is now accelerated on Amazon EI.
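For instance, reusing the evaluation loop from Example 1, the only change is the model object; treat this as an illustrative sketch, since the expected input signature depends on your saved model:

# Same loop as before, now served by the EI accelerator
for i, (validation_ds, batch_labels, _) in enumerate(dataset):
    pred_prob_ei = eia_model(validation_ds)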
The following figure compares CPU-only inference vs. EI accelerated inference on the same CPU instance. In this example you see over a 6x speedup with an EI accelerator.

If there is one thing I want you to take away from this blog post, it is this: deployment needs are unique and there really is no one-size-fits-all. Review your deployment goals, compare them with the discussions in this article, and test out all the options. The cloud makes it easy to try before you commit. Keep these considerations in mind as you choose:

Model type and programmability (model size, custom operators, supported frameworks)

Target throughput, latency and cost (to deliver a good customer experience on a budget)

Ease of use of the compiler and runtime toolchain (fast learning curve, doesn’t require hardware knowledge)

If programmability is very important and you have low performance targets, then a CPU might just work for you. If programmability and performance are both important, then you can develop custom CUDA kernels for custom ops that are accelerated on GPUs. If you want the lowest cost option, and your model is supported on AWS Inferentia, you can save on overall deployment costs. Ease of use is subjective, but nothing can beat the native framework experience. With a little bit of extra effort, though, both the AWS Neuron SDK for AWS Inferentia and NVIDIA TensorRT for NVIDIA GPUs can deliver higher performance, thereby reducing cost per inference.

Thank you for reading. I was only able to give you a glimpse of the sample code we discussed in this article. If you want to reproduce the results, visit the following GitHub repo:

https://github.com/shashankprasanna/ai-accelerators-examples

If you found this article interesting, please check out my other blog posts on Medium. Want me to write on a specific machine learning topic? I’d love to hear from you! Follow me on twitter (@shshnkp), LinkedIn or leave a comment below.
[ { "code": null, "e": 710, "s": 172, "text": "An AI accelerator is a dedicated processor designed to accelerate machine learning computations. Machine learning, and particularly its subset, deep learning is primarily composed of a large number of linear algebra computations, (i.e. matrix-matrix, matrix-vector operations) and these operations can be easily parallelized. AI accelerators are specialized hardware designed to accelerate these basic machine learning computations and improve performance, reduce latency and reduce cost of deploying machine learning based applications." }, { "code": null, "e": 1134, "s": 710, "text": "Let’s say you have an ML model as part of your software application. The prediction step (or inference) is often the most time consuming part of your application that directly affects user experience. A model that takes several hundreds of milliseconds to generate text translations or apply filters to images or generate product recommendations, can drive users away from your “sluggish”, “slow”, “frustrating to use” app." }, { "code": null, "e": 1406, "s": 1134, "text": "By speeding up inference, you can reduce the overall application latency and deliver an app experience that can be described as “smooth”, “snappy”, and “delightful to use”. And you can speed up inference by offloading ML model prediction computation to an AI accelerator." }, { "code": null, "e": 1552, "s": 1406, "text": "With great market needs comes great many product alternatives, so naturally there is more than one way to accelerate your ML models in the cloud." }, { "code": null, "e": 1607, "s": 1552, "text": "In this blog post, I’ll explore three popular options:" }, { "code": null, "e": 1911, "s": 1607, "text": "GPUs: Particularly, the high-performance NVIDIA T4 and NVIDIA V100 GPUsAWS Inferentia: A custom designed machine learning inference chip by AWSAmazon Elastic Inference (EI): An accelerator that saves cost by giving you access to variable-size GPU acceleration, for models that don’t need a dedicated GPU" }, { "code": null, "e": 1983, "s": 1911, "text": "GPUs: Particularly, the high-performance NVIDIA T4 and NVIDIA V100 GPUs" }, { "code": null, "e": 2056, "s": 1983, "text": "AWS Inferentia: A custom designed machine learning inference chip by AWS" }, { "code": null, "e": 2217, "s": 2056, "text": "Amazon Elastic Inference (EI): An accelerator that saves cost by giving you access to variable-size GPU acceleration, for models that don’t need a dedicated GPU" }, { "code": null, "e": 2636, "s": 2217, "text": "Choosing the right type of hardware acceleration for your workload can be a difficult choice to make. Through the rest of this post, I’ll walk you through various considerations such as target throughput, latency, cost budget, model type and size, choice of framework, and others to help you make your decision. I’ll also present plenty of code examples and discuss developer friendliness and ease of use with options." }, { "code": null, "e": 2766, "s": 2636, "text": "Disclaimer: Opinions and recommendations in this article are my own and do not reflect the views of my current or past employers." }, { "code": null, "e": 3348, "s": 2766, "text": "In the early days of computing (in the 70s and 80s), to speed up math computations on your computer, you paired a CPU (Central Processing Unit) with an FPU (floating-point unit) aka math coprocessor. 
The idea was simple — allow the CPU to offload complex floating point mathematical operations to a specially designed chip, so that the CPU could focus on executing the rest of the application program, run the operating system etc. Since the system had different types of processors (the CPU and the math coprocessor) the setup was sometimes referred to as heterogeneous computing." }, { "code": null, "e": 3603, "s": 3348, "text": "Fast forward to the 90s, and the CPUs got faster, better and more efficient, and started to ship with integrated floating-point hardware. The simpler system prevailed, and coprocessors and heterogeneous computing fell out of fashion for the regular user." }, { "code": null, "e": 4163, "s": 3603, "text": "Around the same time specific types of workloads started to get more complex. Designers demanded better graphics, engineers and scientists demanded faster computers for data processing, modeling and simulations. This meant there was some need (and a market) for high-performance processors that could accelerate “special programs” much faster than a CPU, freeing up the CPU to do other things. Computer graphics was an early example of workload being offloaded to a special processor. You may know this special processor by its common name, the venerable GPU." }, { "code": null, "e": 4598, "s": 4163, "text": "The early 2010s saw yet another class of workloads — deep learning, or machine learning with deep neural networks — that needed hardware acceleration to be viable, much like computer graphics. GPUs were already in the market and over the years have become highly programmable unlike the early GPUs which were fixed function processors. Naturally, ML practitioners started using GPUs to accelerate deep learning training and inference." }, { "code": null, "e": 4934, "s": 4598, "text": "Today’s deep learning inference acceleration landscape is much more interesting. CPUs acquired support for advanced vector extensions (AVX-512) to accelerate matrix math computations common in deep learning. GPUs acquired new capabilities such as support for reduced precision arithmetic (FP16 and INT8) further accelerating inference." }, { "code": null, "e": 5449, "s": 4934, "text": "In addition to CPUs and GPUs, today you also have access to specialized hardware, with custom designed silicon built just for deep learning inference. These specialized processors, also called Application Specific Integrated Circuits or ASICs can be far more performant and cost effective compared to general purpose processors, if your workload is supported by the processor. A great example of such specialized processors is AWS Inferentia, a custom-designed ASIC by AWS for accelerating deep learning inference." }, { "code": null, "e": 5707, "s": 5449, "text": "The right choice of hardware acceleration for your application may not be obvious at first. In the next section, we’ll discuss the benefits of each approach and considerations such as throughput, latency, cost and other factors that will affect your choice." }, { "code": null, "e": 6001, "s": 5707, "text": "It’s hard to answer general questions such as “is GPU better than CPU?” or “is CPU cheaper than a GPU” or “is an ASIC always faster than a GPU”. 
There really isn’t a single hardware solution that works well for every use case and the answer depends on your workload and several considerations:" }, { "code": null, "e": 6279, "s": 6001, "text": "Model type and programmability: model size, custom operators, supported frameworksTarget throughput, latency and cost: deliver good customer experience at a budgetEase of use of compiler and runtime toolchain: should have fast learning curve, doesn’t require hardware knowledge" }, { "code": null, "e": 6362, "s": 6279, "text": "Model type and programmability: model size, custom operators, supported frameworks" }, { "code": null, "e": 6444, "s": 6362, "text": "Target throughput, latency and cost: deliver good customer experience at a budget" }, { "code": null, "e": 6559, "s": 6444, "text": "Ease of use of compiler and runtime toolchain: should have fast learning curve, doesn’t require hardware knowledge" }, { "code": null, "e": 6846, "s": 6559, "text": "While considerations such as model support and target latency are objective, ease of use can be very subjective. Therefore, I caution against general recommendation that doesn’t consider all of the above for your specific application. Such high level recommendations tends to be biased." }, { "code": null, "e": 6885, "s": 6846, "text": "Let’s review these key considerations." }, { "code": null, "e": 7177, "s": 6885, "text": "One way to categorize AI accelerators is based on how programmable they are. On the “fully programmable” end of the spectrum there are CPUs. As general purpose processors, you can pretty much write custom code for your machine learning model with custom layers, architectures and operations." }, { "code": null, "e": 7567, "s": 7177, "text": "On the other end of the spectrum are ASICs such as AWS Inferentia that have a fixed set of supported operations exposed via it’s AWS Neuron SDK compiler. Somewhere in between, but closer to ASICs are GPUs, that are far more programmable than ASICs, but far less general purpose than CPUs. There is always going to be some trade off between being general purpose and delivering performance." }, { "code": null, "e": 7788, "s": 7567, "text": "If you’re pushing the boundaries of deep learning research with custom neural network operations, you may need to author custom code for custom operations. And you’d typically do this in high level languages like Python." }, { "code": null, "e": 7999, "s": 7788, "text": "Most AI accelerators can’t automatically accelerate custom code written in high level languages and therefore that piece of code will fall back to CPUs for execution, reducing the overall inference performance." }, { "code": null, "e": 8288, "s": 7999, "text": "NVIDIA GPUs have the advantage that if you want more performance out of your custom code you can reimplement them using CUDA programming language and run them on GPUs. But if your ASIC’s compiler doesn’t support the operations you need, then CPU fall back may result in lower performance." }, { "code": null, "e": 8547, "s": 8288, "text": "In general specialized processors such as AWS Inferentia tend to offer lower price/performance ratio and improve latency vs. general purpose processors. But in the world of AI acceleration, all solutions can be competitive, depending on the type of workload." }, { "code": null, "e": 9212, "s": 8547, "text": "GPUs are throughput processors, and can deliver high throughput for a specified latency. 
If latency is not critical (batch processing, offline inference) then GPU utilization can be kept high, making them the most cost effective option in the cloud. CPUs are not parallel throughput devices, but for real time inference of smaller models, CPUs can be the most cost effective, as long the inference latency is under your target latency budget. AWS Inferentia’s performance and lower cost could make it the most cost effective and performant option vs both CPUs and GPUs if your model is fully supported by AWS Neuron SDK compiler for acceleration on AWS Inferentia." }, { "code": null, "e": 9518, "s": 9212, "text": "This is indeed a nuanced topic and is very workload dependent. In the subsequent sections we’ll take a closer look at performance, latency and cost for each accelerator. If a specific choice doesn’t work for you, no problem, it’s easy to switch options in the cloud till you find the right option for you." }, { "code": null, "e": 9863, "s": 9518, "text": "To accelerate your models on AI accelerators, you typically have to go through a compilation step that analyzes the computational graph and optimizes it for the target hardware to get the best performance. When deploying on a CPU, the deep learning framework has everything you need, so additional SDKs and compilers are typically not required." }, { "code": null, "e": 10131, "s": 9863, "text": "If you’re deploying to a GPU, you can rely on a deep learning framework to accelerate your model for inference, but you’ll be leaving performance on the table. To get the most out of your GPU, you’ll have to use a dedicated inference compiler such as NVIDIA TensorRT." }, { "code": null, "e": 10325, "s": 10131, "text": "In some cases, you can get over 10 times extra performance vs. using the deep learning framework (see figure). We’ll see later in the code examples section, how you can reproduce these results." }, { "code": null, "e": 10730, "s": 10325, "text": "NVIDIA TensorRT is two things — inference compiler and a runtime engine. By compiling your model with TensorRT, you can get better performance and lower latency since it performs a number of optimizations such as graph optimization and quantizations. Likewise, when targeting AWS Inferentia, AWS Neuron SDK compiler will perform similar optimizations to get the most out of your AWS Inferentia processor." }, { "code": null, "e": 10798, "s": 10730, "text": "Let’s dig a little deeper into each of these AI accelerator options" }, { "code": null, "e": 11130, "s": 10798, "text": "You train your model on GPUs, so it’s natural to consider GPUs for inference deployment. After all, GPUs substantially speed up deep learning training, and inference is just the forward pass of your neural network that’s already accelerated on GPU. This is true, and GPUs are indeed an excellent hardware accelerator for inference." }, { "code": null, "e": 11176, "s": 11130, "text": "First, let’s talk about what GPUs really are." }, { "code": null, "e": 11709, "s": 11176, "text": "GPUs are first and foremost throughput processors, as this blog post from NVIDIA explains. They were designed to exploit inherent parallelism in algorithms and accelerate them by computing them in parallel. GPUs started out as specialized processors for computer graphics, but today’s GPUs have evolved into programmable processors, also called General Purpose GPU (GPGPU). They are still specialized parallel processors, but also highly programmable for a narrow range of applications which can be accelerated with parallelization." 
}, { "code": null, "e": 12250, "s": 11709, "text": "As it turns out, the high-performance computing (HPC) community had been using GPUs to accelerate linear algebra calculations long before deep learning. Deep neural networks computations are primarily composed of similar linear algebra computations, so a GPU for deep learning was a solution looking for a problem. It is no surprise that Alex Krizhevsky’s AlexNet deep neural network that won the ImageNet 2012 competition and (re)introduced the world to deep learning was trained on readily available, programmable consumer GPUs by NVIDIA." }, { "code": null, "e": 12539, "s": 12250, "text": "GPUs have gotten much faster since then and I’ll refer you to NVIDIA’s website for their latest training and inference benchmarks for popular models. While these benchmarks are a good indication of what a GPU is capable of, your decision may hinge on other considerations discussed below." }, { "code": null, "e": 13310, "s": 12539, "text": "Since GPUs are throughput devices, if your objective is to maximize sheer throughput, they can deliver best in class throughput per desired latency, depending on the GPU type and model being deployed. An example of a use-case where GPUs absolutely shine is offline or batch inference. GPUs will also deliver some of the lowest latencies for prediction for small batches, but if you are unable to keep your GPU utilization at its maximum at all times, due to say sporadic inference request (fluctuating customer demand), your cost / inference request goes up (because you are delivering fewer requests for the same GPU instance cost). For these situations you’re better off using Amazon Elastic Inference which lets you access just enough GPU acceleration for lower cost." }, { "code": null, "e": 13422, "s": 13310, "text": "In the example section we’ll see comparision of GPU performance across different precisions (FP32, FP16, INT8)." }, { "code": null, "e": 13730, "s": 13422, "text": "On AWS you can launch 18 different Amazon EC2 GPU instances with different NVIDIA GPUs, number of vCPUs, system memory and network bandwidth. Two of the most popular GPUs for deep learning inference are the NVIDIA T4 GPUs offered by G4 EC2 instance type and NVIDIA V100 GPUs offered by P3 EC2 instance type." }, { "code": null, "e": 13857, "s": 13730, "text": "For a fully summary of all GPU instance type of AWS read my earlier blog post: Choosing the right GPU for deep learning on AWS" }, { "code": null, "e": 13947, "s": 13857, "text": "G4 instance type should be the go-to GPU instance for deep learning inference deployment." }, { "code": null, "e": 14203, "s": 13947, "text": "Based on the NVIDIA Turing architecture, NVIDIA T4 GPUs feature FP64, FP32, FP16, Tensor Cores (mixed-precision), and INT8 precision types. They also have 16 GB of GPU memory which can be plenty for most models and combined with reduced precision support." }, { "code": null, "e": 14476, "s": 14203, "text": "If you need more throughput or need more memory per GPU, then P3 instance types offer a more powerful NVIDIA V100 GPU and with p3dn.24xlarge instance size, you can get access to NVIDIA V100 with up to 32 GB of GPU memory for large models or large images or other datasets." }, { "code": null, "e": 15044, "s": 14476, "text": "Unlike ASICs such as AWS Inferentia which are fixed function processors, a developer can use NVIDIA’s CUDA programming model to code up custom layers that can be accelerated on an NVIDIA GPU. This is exactly what Alex Krizhevsky did with AlexNet in 2012. 
He hand coded custom CUDA kernels to train his neural network on GPU. He called his framework cuda-convnet and you could say cuda-convnet was the very first deep learning framework. If you’re pushing the boundary of deep learning and don’t want to leave performance on the table a GPU is the best option for you." }, { "code": null, "e": 15111, "s": 15044, "text": "Programmability with performance is one of GPUs greatest strengths" }, { "code": null, "e": 15521, "s": 15111, "text": "Of course, you don’t need to write low-level GPU code to do deep learning. NVIDIA has made neural network primitives available via libraries such as cuDNN and cuBLAS and deep learning frameworks such as TensorFlow, PyTorch and MXNet use these libraries under the hood so you get GPU acceleration for free by simply using these frameworks. This is why GPUs score high marks for ease of use and programmability." }, { "code": null, "e": 15797, "s": 15521, "text": "If you really want to get the best performance out of your GPUs, NVIDIA offers TensorRT, a model compiler for inference deployment. Does additional optimizations to a trained model, and a full list is available on NVIDIA’s TensorRT website. The key optimizations to note are:" }, { "code": null, "e": 15990, "s": 15797, "text": "Quantization: reduce model precision from FP32 (single precision) to FP16 (half precision) or INT8 (8-bit integer precision), thereby speeding up inference due to reduced amount of computation" }, { "code": null, "e": 16158, "s": 15990, "text": "Graph fusion: fusing multiple layers/ops into a single function call to a CUDA kernel on the GPU. This reduces the overhead of multiple function call for each layer/op" }, { "code": null, "e": 16311, "s": 16158, "text": "Deploying with FP16 is straight forward with NVIDIA TensorRT. The TensorRT compiler will automatically quantize your models during the compilation step." }, { "code": null, "e": 16513, "s": 16311, "text": "To deploy with INT8 precision, the weights and activations of the model need to be quantized so that floating point values can be converted into integers using appropriate ranges. You have two options." }, { "code": null, "e": 16775, "s": 16513, "text": "Option 1: Perform quantization aware training. In quantization aware training, the error from quantizing weights and tensors to INT8 is modeled during training, allowing the model to adapt and mitigate this error. This requires additional setup during training." }, { "code": null, "e": 17191, "s": 16775, "text": "Option 2: Perform post training quantization. In post-quantization training, no pre-deployment preparation is required. You will provide a training model in full precision (FP32), and you will also need to provide a dataset sample from your training dataset that the TensorRT compiler can use to run a calibration step to generate quantization ranges. In Example 2 below, we’ll take a look at implementing Option 2." }, { "code": null, "e": 17451, "s": 17191, "text": "The following examples was tested on Amazon EC2 g4dn.xlarge using the following AWS Deep Learning AMI: Deep Learning AMI (Ubuntu 18.04) Version 35.0. 
To run TensorRT, I used the following NVIDIA TensorFlow Docker image: nvcr.io/nvidia/tensorflow:20.08-tf2-py3" }, { "code": null, "e": 17578, "s": 17451, "text": "Dataset: ImageNet Validation dataset with 50000 test images, converted to TFRecordModel: TensorFlow implementation of ResNet50" }, { "code": null, "e": 17664, "s": 17578, "text": "You can find the full implementation for the examples below on this Jupyter Notebook:" }, { "code": null, "e": 17766, "s": 17664, "text": "https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/gpu-tf-tensorrt-resnet50.ipynb" }, { "code": null, "e": 18009, "s": 17766, "text": "TensorFlow’s native GPU acceleration support just works out of the box, with no additional setup. You won’t get the additional performance you can get with NVIDIA TensorRT, but you can’t argue with how easy life becomes when things just work." }, { "code": null, "e": 18093, "s": 18009, "text": "Running inference with frameworks’ native GPU support takes all of 3 lines of code:" }, { "code": null, "e": 18249, "s": 18093, "text": "model = tf.keras.models.load_model(saved_model_dir)for i, (validation_ds, batch_labels, _) in enumerate(dataset): pred_prob_keras = model(validation_ds)" }, { "code": null, "e": 18463, "s": 18249, "text": "But you’re really leaving performance on the table (some times 10x the performance). To increase the performance and utilization of your GPU, you have to use an inference compiler and runtime like NVIDIA TensorRT." }, { "code": null, "e": 18576, "s": 18463, "text": "The following code shows how to compile your model with TensorRT. You can find the full implementation on GitHub" }, { "code": null, "e": 18622, "s": 18576, "text": "TensorRT compilation has the following steps:" }, { "code": null, "e": 19352, "s": 18622, "text": "Provide TensorRT’s TrtGraphConverterV2 (for TensorFlow2) with your uncompiled TensorFlow saved modelSpecify TensorRT compilation parameters. The most important parameter is the precision (FP32, FP16, INT8). If you’re compiling with INT8 support, TensorRT expects you to provide it with a representative sample from your training set to calibrate scaling factors. You’ll do this by providing a python generator to argument calibration_input_fn when you call converter.convert(). You don’t need to provide additional data for FP32 and FP16 optimizations.TensorRT compiles your model and saves it as a TensorFlow saved model that includes special TensorRT operators which accelerates inference on GPU and runs them more efficiently." }, { "code": null, "e": 19453, "s": 19352, "text": "Provide TensorRT’s TrtGraphConverterV2 (for TensorFlow2) with your uncompiled TensorFlow saved model" }, { "code": null, "e": 19906, "s": 19453, "text": "Specify TensorRT compilation parameters. The most important parameter is the precision (FP32, FP16, INT8). If you’re compiling with INT8 support, TensorRT expects you to provide it with a representative sample from your training set to calibrate scaling factors. You’ll do this by providing a python generator to argument calibration_input_fn when you call converter.convert(). You don’t need to provide additional data for FP32 and FP16 optimizations." }, { "code": null, "e": 20084, "s": 19906, "text": "TensorRT compiles your model and saves it as a TensorFlow saved model that includes special TensorRT operators which accelerates inference on GPU and runs them more efficiently." 
}, { "code": null, "e": 20173, "s": 20084, "text": "Below is a comparison of accuracy and performance of TensorFlow ResNet50 inference with:" }, { "code": null, "e": 20316, "s": 20173, "text": "TensorFlow native GPU accelerationTensorFlow + TensorRT FP32 precisionTensorFlow + TensorRT FP16 precisionTensorFlow + TensorRT INT8 precision" }, { "code": null, "e": 20351, "s": 20316, "text": "TensorFlow native GPU acceleration" }, { "code": null, "e": 20388, "s": 20351, "text": "TensorFlow + TensorRT FP32 precision" }, { "code": null, "e": 20425, "s": 20388, "text": "TensorFlow + TensorRT FP16 precision" }, { "code": null, "e": 20462, "s": 20425, "text": "TensorFlow + TensorRT INT8 precision" }, { "code": null, "e": 20728, "s": 20462, "text": "I measured not just performance but also accuracy, since reducing precision means there is information loss. On the ImageNet test dataset we see negligible loss in accuracy across all precisions, with minor boost in throughput. Your mileage may vary for your model." }, { "code": null, "e": 20999, "s": 20728, "text": "In Example 1, we tested the performance offline, but in most cases you’ll be hosting your model in the cloud as an endpoint that client applications can submit inference requests to. One of the simplest ways of doing this is to use Amazon SageMaker hosting capabilities." }, { "code": null, "e": 21212, "s": 20999, "text": "This example was tested on Amazon SageMaker Studio Notebook. Run this notebook using the following Amazon SageMaker Studio conda environment: TensorFlow 2 CPU Optimized. The full implementation is available here:" }, { "code": null, "e": 21322, "s": 21212, "text": "https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/sagemaker-tf-cpu-gpu-ei-resnet50.ipynb" }, { "code": null, "e": 21399, "s": 21322, "text": "Hosting a model endpoint with SageMaker involves the following simple steps:" }, { "code": null, "e": 21629, "s": 21399, "text": "Create a tar.gz archive file using your TensorFlow saved model and upload it to Amazon S3Use the Amazon SageMaker SDK API to create a TensorFlowModel objectDeploy the TensorFlowModel object to a G4 EC2 instance with NVIDIA T4 GPU" }, { "code": null, "e": 21719, "s": 21629, "text": "Create a tar.gz archive file using your TensorFlow saved model and upload it to Amazon S3" }, { "code": null, "e": 21787, "s": 21719, "text": "Use the Amazon SageMaker SDK API to create a TensorFlowModel object" }, { "code": null, "e": 21861, "s": 21787, "text": "Deploy the TensorFlowModel object to a G4 EC2 instance with NVIDIA T4 GPU" }, { "code": null, "e": 21914, "s": 21861, "text": "Create model.tar.gz with the TensorFlow saved model:" }, { "code": null, "e": 21964, "s": 21914, "text": "$ tar cvfz model.tar.gz -C resnet50_saved_model ." }, { "code": null, "e": 21995, "s": 21964, "text": "Upload model to S3 and deploy:" }, { "code": null, "e": 22055, "s": 21995, "text": "You can test the model by invoking the endpoint as follows:" }, { "code": null, "e": 22063, "s": 22055, "text": "Output:" }, { "code": null, "e": 22177, "s": 22063, "text": "AWS Inferentia is a custom silicon designed by Amazon for cost-effective, high-throughput, low latency inference." 
}, { "code": null, "e": 22474, "s": 22177, "text": "James Hamilton (VP and Distinguished Engineer at AWS) goes into further depth about ASICs, general purpose processors, AWS Inferentia and the economics surrounding them in his blog post: AWS Inferentia Machine Learning Processor, which I encourage you to read if you’re interested in AI hardware." }, { "code": null, "e": 22746, "s": 22474, "text": "The idea of using specialized processors for specialized workloads is not new. The chip in your noise cancelling headphone and the video decoder in your DVD player are examples of specialized chips, sometimes also called an Application Specific Integrated Circuit (ASIC)." }, { "code": null, "e": 22971, "s": 22746, "text": "ASICs have 1 job (or limited responsibilities) and are optimized to do it well. Unlike general purpose processors (CPUs) or programmable accelerators (GPU), large parts of the silicon are not dedicated to run arbitrary code." }, { "code": null, "e": 23501, "s": 22971, "text": "AWS Inferentia was purpose built to offer high inference performance at the lowest cost in the cloud. AWS Inferentia chips can be accessed via the Amazon EC2 Inf1 instances which come in different sizes with 1 AWS Inferentia chip per instance all the way up to 16 AWS Inferential chips per instance. Each AWS Inferentia chip has 4 NeuronCores and supports FP16, BF16 and INT8 data types. NeuronCore is a high-performance systolic-array matrix-multiply engine and each has a two stage memory hierarchy, a very large on-chip cache." }, { "code": null, "e": 23598, "s": 23501, "text": "In most cases, AWS Inferentia might be the best AI accelerator for your use case, if your model:" }, { "code": null, "e": 23670, "s": 23598, "text": "Was trained in MXNet, TensorFlow, PyTorch or has been converted to ONNX" }, { "code": null, "e": 23725, "s": 23670, "text": "Has operators that are supported by the AWS Neuron SDK" }, { "code": null, "e": 24116, "s": 23725, "text": "If you have operators not supported by the AWS Neuron SDK, you can still deploy it successfully on Inf1 instances, but those operations will run on the host CPU and won’t be accelerated on AWS Inferentia. As I stated earlier, every use case is different, so compile your model with AWS Neuron SDK and measure performance to make sure it meets your performance, latency and throughput needs." }, { "code": null, "e": 24654, "s": 24116, "text": "AWS has compared performance of AWS Inferentia vs. GPU instances for popular models, and reports lower cost for popular models: YOLOv4 model, OpenPose, and has provided examples for BERT and SSD for TensorFlow, MXNet and PyTorch. For real-time applications, AWS Inf1 instances are amongst the least expensive of all the acceleration options available on AWS and AWS Inferentia can deliver higher throughput at target latency and at lower cost compared to GPUs and CPUs. Ultimately your choice may depend on other factors discussed below." }, { "code": null, "e": 25323, "s": 24654, "text": "AWS Inferentia chip supports a fixed set of neural network operators exposed via the AWS Neuron SDK. When you compile a model to target AWS Inferentia using the AWS Neuron SDK, the compiler will check your model for supported operators for your framework. If an operator isn’t supported or if the compiler determines that a specific operator is more efficient to execute on CPU, it’ll partition the graph to include CPU partitions and AWS Inferentia partitions. 
The same is also true for Amazon Elastic Inference which we’ll discuss in the next section. If you’re using TensorFlow with AWS Inferentia here is a list of all TensorFlow ops accelerated on AWS Inferentia." }, { "code": null, "e": 25800, "s": 25323, "text": "If you trained your model in FP32 (single precision), AWS Neuron SDK compiler will automatically cast your FP32 model to BF16 to improve inference performance. If you instead, prefer to provide a model in FP16, either by training in FP16 or by performing post-training quantization, AWS Neuron SDK will directly use your FP16 weights. While INT8 is supported by the AWS Inferentia chip, the AWS Neuron SDK compiler currently does not provide a way to deploy with INT8 support." }, { "code": null, "e": 26022, "s": 25800, "text": "In most cases, AWS Neuron SDK makes AWS Inferentia really easy to use. A key difference in the user experience of using AWS Inferentia and GPUs is that AWS Inferentia lets you have more control over how each core is used." }, { "code": null, "e": 26283, "s": 26022, "text": "AWS Neuron SDK supports two ways to improve performance by utilizing all the NeuronCores: (1) batching and (2) pipelining. Since the AWS Neuron SDK compiler is an ahead-of-time compiler, you have to enable these options explicitly during the compilation stage." }, { "code": null, "e": 26339, "s": 26283, "text": "Let’s take a look at what these are and how these work." }, { "code": null, "e": 26737, "s": 26339, "text": "When you compile a model with AWS Neuron SDK compiler with batch_size, greater than one, batching is enabled. During inference your model weights are stored in external memory, and as forward pass is initiated, a subset of layer weights, as determined by the neuron runtime, is copied to the on-chip cache. With the weights of this layer on the cache, forward pass is computed on the entire batch." }, { "code": null, "e": 27107, "s": 26737, "text": "After that the next set of layer weights are loaded into the cache, and the forward pass is computed on the entire batch. This process continues until all weights are used for inference computations. Batching allows for better amortization of the cost of reading weights from the external memory by running inference on large batches when the layers are still in cache." }, { "code": null, "e": 27246, "s": 27107, "text": "All of this happens behind the scenes and as a user, you just have to set a desired batch size using an example input, during compilation." }, { "code": null, "e": 27489, "s": 27246, "text": "Even though batch size is set at the compilation phase, with dynamic batching enabled, the model can accept variable sized batches. Internally the neuron runtime will break down the user batch size into compiled batch sizes and run inference." }, { "code": null, "e": 27949, "s": 27489, "text": "During batching, model weights are loaded to the on-chip cache from the external memory layer by layer. With pipelining, you can load the entire model weights into the on-chip cache of multiple cores. This can reduce the latency since the neuron runtime does not have to load the weights from external memory. Again all of this happens behind the scenes, as a user you just set the desired number of cores using —-num-neuroncores during the compilation phase." }, { "code": null, "e": 28133, "s": 27949, "text": "Batching and pipelining can be used together. However, you have to try different combinations of pipelining cores and compiled batch sizes to determine what works best for your model." 
}, { "code": null, "e": 28417, "s": 28133, "text": "During the compilation step, all combinations of batch sizes and number of neuron cores (for pipelining), may not work. You will have to determine the working combinations of batch size and number of neuron cores by running a sweep of different values and monitoring compiler errors." }, { "code": null, "e": 28474, "s": 28417, "text": "Depending on how you compiled your model you can either:" }, { "code": null, "e": 28631, "s": 28474, "text": "Compile your model to run on a single NeuronCore with a specific batch sizeCompile your model by pipelining to multiple-NeuronCores with specific batch size" }, { "code": null, "e": 28707, "s": 28631, "text": "Compile your model to run on a single NeuronCore with a specific batch size" }, { "code": null, "e": 28789, "s": 28707, "text": "Compile your model by pipelining to multiple-NeuronCores with specific batch size" }, { "code": null, "e": 29504, "s": 28789, "text": "The least cost Amazon EC2 Inf1 instance type, inf1.xlarge has 1 AWS Inferentia chip with 4 NeuronCores. If you compiled your model to run on a single NeuronCore, tensorflow-neuron will automatically perform data parallel execution on all 4 NeuronCores. This is equivalent to replicating your model 4 times and loading it into each NeuronCore and running 4 Python threads to feed input to data to each core. Automatic data parallel execution does not work beyond 1 AWS Inferentia chip. If you want to replicate your model to all 16 NeuronCores on an inf1.6xlarge for example, you have to spawn multiple threads to feed all AWS Inferentia chips with data. In python you can use concurrent.futures.ThreadPoolExecutor." }, { "code": null, "e": 29643, "s": 29504, "text": "When you compile a model for multiple NeuronCores, the runtime will allocate different subgraphs to each NeuronCore (screenshot by author)" }, { "code": null, "e": 30432, "s": 29643, "text": "AWS Neuron SDK allows you to group NeuronCores into logical groups. Each group could have 1 or more NeuronCores and could run a different model. For example if you’re deploying on an inf1.6xlarge EC2 Inf1 instance, you have access to 4 Inferentia chips with 4 NeuronCores each i.e. a total of 16 NeuronCores. You could divide 16 NeuronCores into, let’s say 3 groups. Group 1 has 8 NeuronCores and will run a model that uses pipelining to use all 8 cores. Group 2 uses 4 NeuronCores and runs 4 copies of a model compiled with 1 neuron core. Group 3 uses 4 NeuronCores and runs 2 copies of a model compiled with 2 neuron cores with pipelining. You can specify this configuration using the NEURONCORE_GROUP_SIZES environment variable, and you’d set it to NEURONCORE_GROUP_SIZES=8,1,1,1,1,2,2" }, { "code": null, "e": 30779, "s": 30432, "text": "After that you simply have to load the model in the specified sequence within a single python process, i.e. load the model that’s compiled to use 8 cores first, then load the model that’s compiled to use 1 core four times, and then use load the model that’s compiled to use 2 cores, two times. The appropriate cores will be assigned to the model." }, { "code": null, "e": 31011, "s": 30779, "text": "AWS Neuron SDK comes pre-installed on AWS Deep Learning AMI, and you can also install the SDK and the neuron-accelerated frameworks and libraries TensorFlow, TensorFlow Serving, TensorBoard (with neuron support), MXNet and PyTorch." 
AWS Neuron SDK comes pre-installed on the AWS Deep Learning AMI, and you can also install the SDK and the neuron-accelerated frameworks and libraries: TensorFlow, TensorFlow Serving, TensorBoard (with neuron support), MXNet and PyTorch.

The following examples were tested on an Amazon EC2 inf1.xlarge with the Deep Learning AMI (Ubuntu 18.04) Version 35.0.

You can find the full implementation for the examples below on this Jupyter Notebook:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/inf1-neuron-sdk-resnet50.ipynb

In this example I compare 3 different options:

- No batching, no pipelining: compile the ResNet50 model with batch size = 1 and number of cores = 1
- With batching, no pipelining: compile the ResNet50 model with batch size = 5 and number of cores = 1
- No batching, with pipelining: compile the ResNet50 model with batch size = 1 and number of cores = 4

You can find the full implementation in this Jupyter Notebook. I'll just review the results here.

The comparison below shows that you get the best throughput with option 2 (batch size = 5, no pipelining) on inf1.xlarge instances. You can repeat this experiment with other combinations on larger Inf1 instances.
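A throughput comparison like this can be scripted along the following lines. This is only a sketch: the compiled-model directory names are hypothetical stand-ins for the three options above, the input tensor name 'input' is an assumption that depends on your SavedModel signature, and it again uses the TensorFlow 1.x predictor API rather than the notebook's actual code.

import time
import numpy as np
from tensorflow.contrib import predictor

# Hypothetical compiled-model directory -> compiled batch size
options = {'rn50_b1_nc1': 1, 'rn50_b5_nc1': 5, 'rn50_b1_nc4': 1}

for model_dir, batch in options.items():
    model = predictor.from_saved_model(model_dir)
    feed = {'input': np.zeros((batch, 224, 224, 3), np.float32)}
    model(feed)                                   # warm-up call
    start, n_iters = time.time(), 100
    for _ in range(n_iters):
        model(feed)                               # timed inference calls
    elapsed = time.time() - start
    print(model_dir, 'throughput: %.1f images/sec'
          % (n_iters * batch / elapsed))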
Amazon Elastic Inference (EI) allows you to add cost-effective, variable-size GPU acceleration to a CPU-only instance without provisioning a dedicated GPU instance. To use Amazon EI, you simply provision a CPU-only instance such as the Amazon EC2 C5 instance type, and choose from 6 different EI accelerator options at launch.

The EI accelerator is not part of the hardware that makes up your CPU instance; instead, the EI accelerator is attached through the network using an AWS PrivateLink endpoint service which routes traffic from your instance to the Elastic Inference accelerator configured with your instance. All of this happens seamlessly behind the scenes when you use an EI-enabled serving framework such as TensorFlow Serving.

Amazon EI uses GPUs to provide GPU acceleration, but unlike dedicated GPU instances, the acceleration comes in 6 different accelerator sizes that you can choose by Tera (trillion) Floating Point Operations per Second (TFLOPS) or GPU memory.

As I discussed earlier, GPUs are primarily throughput devices, and when dealing with smaller batches, common with real-time applications, GPUs tend to get underutilized when you deploy models that don't need the full processing power or full memory of a GPU. Also, if you don't have sufficient demand, or multiple models to serve and share the GPU, then a single GPU may not be cost-effective, as cost per inference would go up.

You can choose from 6 different EI accelerators that offer 1–4 TFLOPS and 1–8 GB of GPU memory. Let's say you have a less computationally demanding model with a small memory footprint: you can attach the smallest EI accelerator, such as eia1.medium, which offers 1 TFLOPS of FP32 performance and 1 GB of GPU memory, to a CPU instance. If you have a more demanding model, you could attach an eia2.xlarge EI accelerator with 4 TFLOPS performance and 8 GB of GPU memory to a CPU instance.

The cost of the CPU instance + EI accelerator would still be cheaper than a dedicated GPU instance, and can lower inference costs. You don't have to worry about maximizing the utilization of your GPU since you're adding just enough capacity to meet demand, without over-provisioning.

Let's consider the following hypothetical scenario. Say your application can deliver a good customer experience if your total latency (app + network + model predictions) is under 200 ms. And say, with a G4 instance type, you can get total latency down to 40 ms, which is well within your target latency. You've also tried deploying with a CPU-only C5 instance type, but you can only get total latency down to 400 ms, which does not meet your SLA requirements and results in poor customer experience.

With Elastic Inference, you can network-attach just enough GPU acceleration to a CPU instance. After exploring different EI accelerator sizes (say eia2.medium, eia2.large, eia2.xlarge), you can get your total latency down to 180 ms with an eia2.large EI accelerator, which is under the desired 200 ms mark. Since EI is significantly cheaper than provisioning a dedicated GPU instance, you save on your total deployment costs.

Since the GPU acceleration is added via the network, EI adds some latency compared to a dedicated GPU instance, but it will still be faster than a CPU-only instance and more cost-effective than a dedicated GPU instance. A dedicated GPU instance will still deliver better inference performance vs. EI, but if the extra performance doesn't improve your customer experience, with EI you will stay under the target latency SLA, deliver a good customer experience, and save on overall deployment costs. AWS has a number of blog posts that talk about performance and cost savings compared to CPUs and GPUs using popular deep learning frameworks.

Amazon EI supports models trained in TensorFlow, Apache MXNet, PyTorch and ONNX. After you launch an Amazon EC2 instance with Amazon EI attached, to access the accelerator you need an EI-enabled framework such as TensorFlow, PyTorch or Apache MXNet.

EI-enabled frameworks come pre-installed on the AWS Deep Learning AMI, but if you prefer installing them manually, a Python wheel file has also been made available.

Most popular models such as Inception, ResNet, SSD, RCNN and GNMT have been tested to deliver cost-saving benefits when deployed with Amazon EI. If you're deploying a custom model with custom operators, the EI-enabled framework partitions the graph to run unsupported operators on the host CPU and all supported ops on the EI accelerator attached via the network. This makes using EI very simple.
This example was tested on an Amazon EC2 c5.2xlarge with the following AWS Deep Learning AMI: Deep Learning AMI (Ubuntu 18.04) Version 35.0.

You can find the full implementation on this Jupyter Notebook here:

https://github.com/shashankprasanna/ai-accelerators-examples/blob/main/ei-tensorflow-resnet50.ipynb

Amazon EI-enabled TensorFlow offers APIs that let you accelerate your models using EI accelerators and that behave just like the TensorFlow API. As a developer, you have to make minimal code changes.

To load the model, you just have to run the following code:

from ei_for_tf.python.predictor.ei_predictor import EIPredictor

eia_model = EIPredictor(saved_model_dir,
                        accelerator_id=0)

If you have more than one EI accelerator attached to your instance, you can specify which one to use with the accelerator_id argument. Simply replace your TensorFlow model object with eia_model and the rest of your script remains the same, and your model is now accelerated on Amazon EI.
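For instance, if your original script ran predictions with a TensorFlow predictor object, the EI-accelerated version is a drop-in swap. In the sketch below, the input key 'inputs' and the image batch shape are placeholders that depend on your SavedModel's signature, not values from this article.

import numpy as np

# Same call pattern as a regular TensorFlow predictor object; only the
# model object changed to the eia_model created above
img_batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
result = eia_model({'inputs': img_batch})  # runs on the attached EI accelerator
print(result.keys())                       # output tensors by signature name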
The following figure compares CPU-only inference vs. EI-accelerated inference on the same CPU instance. In this example you see over a 6 times speed-up with an EI accelerator.

If there is one thing I want you to take away from this blog post, it is this: deployment needs are unique and there really is no one size fits all. Review your deployment goals, compare them with the discussions in the article, and test out all options. Cloud makes it easy to try before you commit.

Keep these considerations in mind as you choose:

- Model type and programmability (model size, custom operators, supported frameworks)
- Target throughput, latency and cost (to deliver good customer experience at a budget)
- Ease of use of the compiler and runtime toolchain (fast learning curve, doesn't require hardware knowledge)

If programmability is very important and you have low performance targets, then a CPU might just work for you.

If programmability and performance are important, then you can develop custom CUDA kernels for custom ops that are accelerated on GPUs.

If you want the lowest-cost option, and your model is supported on AWS Inferentia, you can save on overall deployment costs.

Ease of use is subjective, but nothing can beat the native framework experience. With a little bit of extra effort, both AWS Neuron SDK for AWS Inferentia and NVIDIA TensorRT for NVIDIA GPUs can deliver higher performance, thereby reducing cost per inference.

Thank you for reading. In this article I was only able to give you a glimpse of the sample code we discussed. If you want to reproduce the results, visit the following GitHub repo:

https://github.com/shashankprasanna/ai-accelerators-examples

If you found this article interesting, please check out my other blog posts on Medium.
FinRL for Quantitative Finance: Tutorial for Portfolio Allocation | by Bruce Yang | Towards Data Science
Note from Towards Data Science's editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author's contribution. You should not rely on an author's works without seeking professional advice. See our Reader Terms for details.

This blog is a tutorial based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.

The Jupyter notebook codes are available on our GitHub and Google Colab.

A more complete application of FinRL for multiple stock trading can be found in our previous blog.

To begin with, I would like to explain the logic of portfolio allocation using Deep Reinforcement Learning.

We use the Dow 30 constituents as an example throughout this article, because those are the most popular stocks.

Let's say that we got a million dollars at the beginning of 2019. We want to invest this $1,000,000 in the stock market, in this case the Dow Jones 30 constituents.

Assume no margin, no short sales, and no treasury bills (we use all the money to trade only these 30 stocks), so that the weight of each individual stock is non-negative and the weights of all the stocks add up to one.

We hire a smart portfolio manager: Mr. Deep Reinforcement Learning. Mr. DRL will give us daily advice that includes the portfolio weights, or the proportions of money to invest in these 30 stocks. So every day we just need to rebalance the portfolio weights of the stocks.

The basic logic is as follows.

Portfolio allocation is different from multiple stock trading because we are essentially rebalancing the weights at each time step, and we have to use all available money.

The traditional and most popular way of doing portfolio allocation is mean-variance or modern portfolio theory (MPT):

However, MPT does not perform well on out-of-sample data. MPT is calculated based only on stock returns; if we want to take other relevant factors into account, for example technical indicators like MACD or RSI, MPT may not be able to combine this information well.

We introduce a DRL library, FinRL, that makes it easy for beginners to expose themselves to quantitative finance. FinRL is a DRL library designed specifically for automated stock trading, with an educational and demonstrative purpose.

This article focuses on one of the use cases in our paper: portfolio allocation. We use one Jupyter notebook to include all the necessary steps:

- Problem Definition
- Load Python Packages
- Download Data
- Preprocess Data
- Build Environment
- Implement DRL Algorithms
- Backtesting Performance

This problem is to design an automated trading solution for portfolio allocation. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.

The components of the reinforcement learning environment are:

- Action: the portfolio weight of each stock is within [0, 1]. We use the softmax function to normalize the actions so that they sum to 1 (see the sketch after this list).
- State: {Covariance Matrix, MACD, RSI, CCI, ADX}; the state space shape is (34, 30), where 34 is the number of rows and 30 is the number of columns.
- Reward function: r(s, a, s′) = p_t, where p_t is the cumulative portfolio value.
- Environment: portfolio allocation for the Dow 30 constituents.
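As a quick illustration of the action normalization, here is a minimal NumPy sketch (not code from the FinRL notebook). The raw action vector is whatever the agent emits; softmax turns it into non-negative weights that sum to one, which is exactly the no-margin, no-short-sale constraint above.

import numpy as np

def softmax(raw_actions):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(raw_actions - np.max(raw_actions))
    return e / e.sum()

raw = np.random.randn(30)    # one raw action per Dow 30 stock
weights = softmax(raw)       # portfolio weights: >= 0 and sum to 1
assert np.isclose(weights.sum(), 1.0) and (weights >= 0).all()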
The covariance matrix is a good feature because portfolio managers use it to quantify the risk (standard deviation) associated with a particular portfolio.

We also assume no transaction cost, because we are trying to make a simple portfolio allocation case as a starting point.

Install the unstable development version of FinRL:

# Install the unstable development version in a Jupyter notebook:
!pip install git+https://github.com/AI4Finance-Foundation/FinRL.git

Import Packages:

FinRL uses a YahooDownloader class to extract data.

class YahooDownloader:
    """Provides methods for retrieving daily stock data from Yahoo Finance API

    Attributes
    ----------
    start_date : str
        start date of the data (modified from config.py)
    end_date : str
        end date of the data (modified from config.py)
    ticker_list : list
        a list of stock tickers (modified from config.py)

    Methods
    -------
    fetch_data()
        Fetches data from yahoo API
    """

Download and save the data in a pandas DataFrame:

FinRL uses a FeatureEngineer class to preprocess data.

class FeatureEngineer:
    """Provides methods for preprocessing the stock price data

    Attributes
    ----------
    df: DataFrame
        data downloaded from Yahoo API
    feature_number : int
        number of features we used
    use_technical_indicator : boolean
        use technical indicator or not
    use_turbulence : boolean
        use turbulence index or not

    Methods
    -------
    preprocess_data()
        main method to do the feature engineering
    """

FinRL uses an EnvSetup class to set up the environment.

class EnvSetup:
    """Provides methods for setting up the trading environment

    Attributes
    ----------
    stock_dim: int
        number of unique stocks
    hmax : int
        maximum number of shares to trade
    initial_amount: int
        start money
    transaction_cost_pct : float
        transaction cost percentage per trade
    reward_scaling: float
        scaling factor for reward, good for training
    tech_indicator_list: list
        a list of technical indicator names (modified from config.py)

    Methods
    -------
    create_env_training()
        create env class for training
    create_env_validation()
        create env class for validation
    create_env_trading()
        create env class for trading
    """

Initialize an environment class:

User-defined Environment: a simulation environment class.

The environment for portfolio allocation:
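FinRL's actual environment implementation is in the notebook linked above; as a rough sketch of its shape, a gym-style portfolio environment looks something like the following. The class and attribute names are illustrative, not FinRL's exact code, and the observation here is simplified to the daily returns rather than the full (34, 30) state. The step logic just mirrors the MDP defined earlier: softmax weights as actions, cumulative portfolio value as reward.

import gym
import numpy as np

class PortfolioEnvSketch(gym.Env):
    """Illustrative skeleton only, not FinRL's environment."""

    def __init__(self, daily_returns, initial_amount=1_000_000):
        # daily_returns: array of shape (num_days, num_stocks)
        self.returns = daily_returns
        self.initial_amount = initial_amount
        n = daily_returns.shape[1]
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n,))
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(n,))

    def reset(self):
        self.day = 0
        self.portfolio_value = self.initial_amount
        return self.returns[self.day]

    def step(self, actions):
        # softmax turns raw actions into non-negative weights summing to 1
        e = np.exp(actions - np.max(actions))
        weights = e / e.sum()
        # grow the portfolio by the weighted daily return, then advance a day
        self.portfolio_value *= 1.0 + float(weights @ self.returns[self.day])
        self.day += 1
        done = self.day >= len(self.returns)
        obs = self.returns[min(self.day, len(self.returns) - 1)]
        return obs, self.portfolio_value, done, {}

# usage: random returns for 5 days x 30 stocks, random agent
env = PortfolioEnvSketch(np.random.randn(5, 30) * 0.01)
obs, done = env.reset(), False
while not done:
    obs, reward, done, _ = env.step(env.action_space.sample())
print('final portfolio value:', reward)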
Trading tasks accompanied by hands-on tutorials with built-in DRL agents are available in a beginner-friendly and reproducible fashion using Jupyter notebook. Customization of trading time steps is feasible. FinRL has good scalability, with a broad range of fine-tuned state-of-the-art DRL algorithms. Adjusting the implementations to the rapid changing stock market is well supported. Typical use cases are selected and used to establish a benchmark for the quantitative finance community. Standard backtesting and evaluation metrics are also provided for easy and effective performance evaluation. I hope you found this article helpful and learned something about using DRL for multiple stock trading! Please report any issues to our Github.
Python Plotly tutorial - GeeksforGeeks
Python Plotly Library is an open-source library that can be used for data visualization and understanding data simply and easily. Plotly supports various types of plots like line charts, scatter plots, histograms, box plots, etc. So you all must be wondering why Plotly over other visualization tools or libraries? Here's the answer:

- Plotly has hover tool capabilities that allow us to detect any outliers or anomalies in a large number of data points.
- It is visually attractive and can be accepted by a wide range of audiences.
- It allows endless customization of our graphs, which makes our plots more meaningful and understandable for others.

This tutorial aims at providing you insight into Plotly with the help of a huge dataset, explaining Plotly from basics to advanced and covering all the popularly used charts.

Table Of Content

- Installation
- Package Structure of Plotly
- Getting Started
- Creating Different Types of Charts
  - Line Chart
  - Bar Chart
  - Histograms
  - Scatter Plot and Bubble charts
  - Pie Charts
  - Box Plots
  - Violin plots
  - Gantt Charts
  - Contour Plots
  - Heatmaps
  - Error Bars
  - 3D Line Plots
  - 3D Scatter Plot Plotly
  - 3D Surface Plots
- Interacting with the Plots
  - Creating Dropdown Menu in Plotly
  - Adding Buttons to the Plot
  - Creating Sliders and Selectors to the Plot
- More Plots using Plotly
- More Topics on Plotly
- Recent Articles on Plotly !!!

Plotly does not come built-in with Python. To install it, type the below command in the terminal.

pip install plotly

This may take some time as it will install the dependencies as well.

There are three main modules in Plotly. They are:

- plotly.plotly
- plotly.graph_objects
- plotly.tools

plotly.plotly acts as the interface between the local machine and Plotly. It contains functions that require a response from Plotly's server.

The plotly.graph_objects module contains the objects (Figure, layout, data, and the definitions of the plots like scatter plot and line chart) that are responsible for creating the plots. The Figure can be represented either as a dict or as an instance of plotly.graph_objects.Figure, and it is serialized as JSON before it gets passed to plotly.js. Consider the below example for better understanding.

Note: the plotly.express module can create the entire Figure at once. It uses graph_objects internally and returns the graph_objects.Figure instance.

Example:

Python3

import plotly.express as px

# Creating the Figure instance
fig = px.line(x=[1, 2, 3], y=[1, 2, 3])

# printing the figure instance
print(fig)

Output:

Figures are represented as trees where the root node has three top-layer attributes (data, layout, and frames), and the named nodes are called 'attributes'. Consider the above example: layout.legend is a nested dictionary, where legend is a key inside the dictionary whose value is also a dictionary.

The plotly.tools module contains various tools in the form of functions that can enhance the Plotly experience.
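Returning to the figure tree described above, nested attributes such as layout.legend can be read and updated directly; with update_layout, underscores in a keyword argument walk down the tree. This small snippet is an added illustration, not part of the original tutorial.

import plotly.express as px

fig = px.line(x=[1, 2, 3], y=[1, 2, 3])

# Read a nested node of the figure tree
print(fig.layout.legend)

# Update a nested attribute; the underscores walk down the tree
# (layout -> legend -> title -> text)
fig.update_layout(legend_title_text="My Legend")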
After learning the installation and basic structure of Plotly, let's create a simple plot using the pre-defined data sets provided by plotly.

Example:

Python3

import plotly.express as px

# Creating the Figure instance
fig = px.line(x=[1, 2, 3], y=[1, 2, 3])

# showing the plot
fig.show()

Output:

In the above example, the plotly.express module is imported, which returns the Figure instance. We have created a simple line chart by passing the x, y coordinates of the points to be plotted.

With plotly we can create more than 40 charts, and every plot can be created using the plotly.express and plotly.graph_objects classes. Let's see some commonly used charts with the help of Plotly.

The line plot is an accessible and versatile chart type in Plotly that can manage a variety of types of data and assemble easy-to-style figures. With px.line, each data position is represented as a vertex (whose location is given by the x and y columns) of a polyline mark in 2D space.

Example:

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# plotting the line chart
fig = px.line(df, x="species", y="petal_width")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the line charts.

- plotly.express.line() function in Python
- Line Chart using Plotly in Python

A bar chart is a pictorial representation of data that presents categorical data with rectangular bars whose heights or lengths are proportional to the values that they represent. In other words, it is a pictorial representation of a dataset. These data sets contain the numerical values of variables that represent the length or height.

Example:

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# plotting the bar chart
fig = px.bar(df, x="sepal_width", y="sepal_length")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the bar chart.

- Bar chart using Plotly in Python
- How to create Stacked bar chart in Python-Plotly?
- How to group Bar Charts in Python-Plotly?

A histogram contains a rectangular area to display statistical information which is proportional to the frequency of a variable and its width in successive numerical intervals. It is a graphical representation that organizes a group of data points into different specified ranges. It has the special feature that it shows no gaps between the bars, and it is similar to a vertical bar graph.

Example:

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# plotting the histogram
fig = px.histogram(df, x="sepal_length", y="petal_width")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the histograms.

- Histogram using Plotly in Python
- Histograms in Plotly using graph_objects class
- How to create a Cumulative Histogram in Plotly?

A scatter plot is a set of dotted points representing individual pieces of data on the horizontal and vertical axes. In a graph in which the values of two variables are plotted along the X-axis and Y-axis, the pattern of the resulting points reveals a correlation between them.

A bubble plot is a scatter plot with bubbles (color-filled circles). Bubbles have various sizes dependent on another variable in the data. It can be created using the scatter() method of plotly.express.
Example 1: Scatter Plot

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# plotting the scatter chart
fig = px.scatter(df, x="species", y="petal_width")

# showing the plot
fig.show()

Output:

Example 2: Bubble Plot

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# plotting the bubble chart
fig = px.scatter(df, x="species", y="petal_width",
                 size="petal_length", color="species")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the scatter plots and bubble plots.

- plotly.express.scatter() function in Python
- Scatter plot in Plotly using graph_objects class
- Scatter plot using Plotly in Python
- Bubble chart using Plotly in Python

A pie chart is a circular statistical graphic which is divided into slices to illustrate numerical proportions. It is a special chart that uses "pie slices", where each sector shows the relative sizes of the data. A circular chart cut in the form of radii into segments describing relative frequencies or magnitudes is also known as a circle graph.

Example:

Python3

import plotly.express as px

# using the tips dataset
df = px.data.tips()

# plotting the pie chart
fig = px.pie(df, values="total_bill", names="day")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the pie charts.

- Pie plot using Plotly in Python

A box plot, also known as a whisker plot, is created to display the summary of a set of data values, with properties like minimum, first quartile, median, third quartile and maximum. In the box plot, a box is created from the first quartile to the third quartile, and a vertical line goes through the box at the median. Here the x-axis denotes the data to be plotted while the y-axis shows the frequency distribution.

Example:

Python3

import plotly.express as px

# using the tips dataset
df = px.data.tips()

# plotting the box chart
fig = px.box(df, x="day", y="total_bill")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about box plots.

- Box Plot using Plotly in Python
- Box plot in Plotly using graph_objects class
- How to create Grouped box plot in Plotly?

A violin plot is a method to visualize the distribution of numerical data of different variables. It is similar to a box plot, but with a rotated density plot on each side giving more information about the density estimate on the y-axis. The density is mirrored and flipped over, and the resulting shape is filled in, creating an image resembling a violin. The advantage of a violin plot is that it can show nuances in the distribution that aren't perceptible in a boxplot. On the other hand, the boxplot more clearly shows the outliers in the data.

Example:

Python3

import plotly.express as px

# using the tips dataset
df = px.data.tips()

# plotting the violin chart
fig = px.violin(df, x="day", y="total_bill")

# showing the plot
fig.show()

Output:

Refer to the below articles to get detailed information about the violin plots.

- Violin Plots using Plotly

A Generalized Activity Normalization Time Table (GANTT) chart is a type of chart in which a series of horizontal lines shows the amount of work done or production completed in a given period of time in relation to the amount planned for those projects.
Example:

Python3

import plotly.figure_factory as ff

# Data to be plotted
df = [dict(Task="A", Start='2020-01-01', Finish='2020-02-02'),
      dict(Task="Job B", Start='2020-03-01', Finish='2020-11-11'),
      dict(Task="Job C", Start='2020-08-06', Finish='2020-09-21')]

# Creating the plot
fig = ff.create_gantt(df)
fig.show()

Output:

Refer to the below articles to get detailed information about the Gantt Charts.

- Gantt Chart in Plotly

Contour plots, also called level plots, are a tool for doing multivariate analysis and visualizing 3-D plots in 2-D space. If we consider X and Y as the variables we want to plot, then the response Z will be plotted as slices on the X-Y plane, due to which contours are sometimes referred to as Z-slices or iso-response.

A contour plot is used in the case where you want to see the changes in some value (Z) as a function of two values (X, Y). Consider the below example.

Example:

Python3

import plotly.graph_objects as go
import numpy as np

# Creating the X, Y values that will
# change the values of Z as a function
feature_x = np.arange(0, 50, 2)
feature_y = np.arange(0, 50, 3)

# Creating 2-D grid of features
[X, Y] = np.meshgrid(feature_x, feature_y)

Z = np.cos(X / 2) + np.sin(Y / 4)

# plotting the figure
fig = go.Figure(data=go.Contour(x=feature_x, y=feature_y, z=Z))

fig.show()

Output:

Refer to the below articles to get detailed information about contour plots.

- Contour Plots using Plotly in Python

A heatmap is defined as a graphical representation of data using colors to visualize the values of a matrix. To represent more common values or higher activities, brighter (basically reddish) colors are used, and to represent less common or lower-activity values, darker colors are preferred. A heatmap is also known by the name of a shading matrix.

Example:

Python3

import plotly.graph_objects as go
import numpy as np

feature_x = np.arange(0, 50, 2)
feature_y = np.arange(0, 50, 3)

# Creating 2-D grid of features
[X, Y] = np.meshgrid(feature_x, feature_y)

Z = np.cos(X / 2) + np.sin(Y / 4)

# plotting the figure
fig = go.Figure(data=go.Heatmap(x=feature_x, y=feature_y, z=Z))

fig.show()

Output:

Refer to the below articles to get detailed information about the heatmaps.

- Create Heatmaps using graph_objects class in Plotly
- Annotated Heatmaps using Plotly in Python

For functions representing 2D data points such as px.scatter, px.line, px.bar, etc., error bars are given as a column name, which is the value of error_x (for the error on the x position) and error_y (for the error on the y position). Error bars are a graphical representation of the variability of data and are used on graphs to indicate the error or uncertainty in a reported measurement.

Example:

Python3

import plotly.express as px

# using the iris dataset
df = px.data.iris()

# Calculating the error field
df["error"] = df["petal_length"] / 100

# plotting the scatter chart
fig = px.scatter(df, x="species", y="petal_width",
                 error_x="error", error_y="error")

# showing the plot
fig.show()

Output:

The 3D line plot handles a variety of types of data and assembles easy-to-style figures. With px.line_3d, each data position is represented as a vertex (whose location is given by the x, y and z columns) of a polyline mark in 3D space.

Example:

Python3

import plotly.express as px

# data to be plotted
df = px.data.tips()

# plotting the figure
fig = px.line_3d(df, x="sex", y="day",
                 z="time", color="sex")

fig.show()

Output:

Refer to the below articles to get detailed information about the 3D line charts.
- plotly.express.line_3d() function in Python
- 3D Line Plots using Plotly in Python

A 3D scatter plot can plot two-dimensional graphics that can be enhanced by mapping up to three additional variables while using the semantics of hue, size, and style parameters. All these parameters control a visual semantic which is used to identify the different subsets. Using redundant semantics can be helpful for making graphics more accessible. It can be created using the scatter_3d function of the plotly.express class.

Example:

Python3

import plotly.express as px

# Data to be plotted
df = px.data.iris()

# Plotting the figure
fig = px.scatter_3d(df, x='sepal_width',
                    y='sepal_length',
                    z='petal_width',
                    color='species')

fig.show()

Output:

Refer to the below articles to get detailed information about the 3D scatter plot.

- 3D scatter plot using Plotly in Python
- 3D Scatter Plot using graph_objects Class in Plotly-Python
- 3D Bubble chart using Plotly in Python

A surface plot is a plot that has three-dimensional data: X, Y, and Z. Rather than showing individual data points, the surface plot shows a functional relationship between the dependent variable Z and the two independent variables X and Y. This plot is used to distinguish between dependent and independent variables.

Example:

Python3

import plotly.graph_objects as go
import numpy as np

# Data to be plotted
x = np.outer(np.linspace(-2, 2, 30), np.ones(30))
y = x.copy().T
z = np.cos(x ** 2 + y ** 2)

# plotting the figure
fig = go.Figure(data=[go.Surface(x=x, y=y, z=z)])

fig.show()

Output:

Plotly provides various tools for interacting with the plots, such as adding dropdowns, buttons, sliders, etc. These can be created using the updatemenus attribute of the plot layout. Let's see how to do all such things in detail.

A drop-down menu is a part of the menu button which is displayed on the screen all the time. Every menu button is associated with a Menu widget that can display the choices for that menu button when it is clicked. In plotly, there are 4 possible methods to modify the charts by using the updatemenus method:

- restyle: modify data or data attributes
- relayout: modify layout attributes
- update: modify data and layout attributes
- animate: start or pause an animation

Example:

Python3

import plotly.graph_objects as px
import numpy as np

# creating random data through randomint
# function of numpy.random
np.random.seed(42)

# Data to be Plotted
random_x = np.random.randint(1, 101, 100)
random_y = np.random.randint(1, 101, 100)

plot = px.Figure(data=[px.Scatter(
    x=random_x,
    y=random_y,
    mode='markers',
)])

# Add dropdown
plot.update_layout(
    updatemenus=[
        dict(
            buttons=list([
                dict(
                    args=["type", "scatter"],
                    label="Scatter Plot",
                    method="restyle"
                ),
                dict(
                    args=["type", "bar"],
                    label="Bar Chart",
                    method="restyle"
                )
            ]),
            direction="down",
        ),
    ]
)

plot.show()

Output:

In the above example we have created two graphs for the same data. These plots are accessible using the dropdown menu.

In plotly, custom buttons are used to quickly perform actions directly from a record. Custom buttons can be added to page layouts in CRM, Marketing, and Custom Apps.
There are also 4 possible methods that can be applied to custom buttons:

- restyle: modify data or data attributes
- relayout: modify layout attributes
- update: modify data and layout attributes
- animate: start or pause an animation

Example:

Python3

import plotly.graph_objects as px
import pandas as pd

# reading the database
data = pd.read_csv("tips.csv")

plot = px.Figure(data=[px.Scatter(
    x=data['day'],
    y=data['tip'],
    mode='markers',
)])

# Add buttons
plot.update_layout(
    updatemenus=[
        dict(
            type="buttons",
            direction="left",
            buttons=list([
                dict(
                    args=["type", "scatter"],
                    label="Scatter Plot",
                    method="restyle"
                ),
                dict(
                    args=["type", "bar"],
                    label="Bar Chart",
                    method="restyle"
                )
            ]),
        ),
    ]
)

plot.show()

Output:

In this example also we are creating two different plots on the same data, and both plots are accessible by the buttons.

In plotly, the range slider is a custom range-type input control. It allows selecting a value or a range of values between specified minimum and maximum bounds. The range selector is a tool for selecting ranges to display within the chart. It provides buttons to select pre-configured ranges in the chart. It also provides input boxes where the minimum and maximum dates can be manually input.

Example:

Python3

import plotly.graph_objects as px
import plotly.express as go

df = go.data.tips()

x = df['total_bill']
y = df['day']

plot = px.Figure(data=[px.Scatter(
    x=x,
    y=y,
    mode='lines',
)])

plot.update_layout(
    xaxis=dict(
        rangeselector=dict(
            buttons=list([
                dict(count=1,
                     step="day",
                     stepmode="backward"),
            ])
        ),
        rangeslider=dict(
            visible=True
        ),
    )
)

plot.show()

Output:

- plotly.express.scatter_geo() function in Python
- plotly.express.scatter_polar() function in Python
- plotly.express.scatter_ternary() function in Python
- plotly.express.line_ternary() function in Python
- Filled area chart using plotly in Python
- How to Create Stacked area plot using Plotly in Python?
- Sunburst Plot using Plotly in Python
- Sunburst Plot using graph_objects class in plotly
- plotly.figure_factory.create_annotated_heatmap() function in Python
- plotly.figure_factory.create_2d_density() function in Python
- Ternary contours Plot using Plotly in Python
- How to make Log Plots in Plotly – Python?
- Polar Charts using Plotly in Python
- Carpet Contour Plot using Plotly in Python
- Ternary Plots in Plotly
- How to create a Ternary Overlay using Plotly?
- Parallel Coordinates Plot using Plotly in Python
- Carpet Plots using Plotly in Python
- 3D Cone Plots using Plotly in Python
- 3D Volume Plots using Plotly in Python
- 3D Streamtube Plots using Plotly in Python
- 3D Mesh Plots using Plotly in Python
- How to create Tables using Plotly in Python?
- plotly.figure_factory.create_dendrogram() function in Python
- Define Node position in Sankey Diagram in plotly
- Sankey Diagram using Plotly in Python
- Quiver Plots using Plotly in Python
- Treemap using Plotly in Python
- Treemap using graph_objects class in plotly
- plotly.figure_factory.create_candlestick() function in Python
- plotly.figure_factory.create_choropleth() function in Python
- plotly.figure_factory.create_bullet() in Python
- Streamline Plots in Plotly using Python
- How to make Wind Rose and Polar Bar Charts in Plotly – Python?
- Title alignment in Plotly
- Change marker border color in Plotly – Python
- Plot Live Graphs using Python Dash and Plotly
- Animated Data Visualization using Plotly Express
- Introduction to Plotly-online using Python
- How to display image using Plotly?
[ { "code": null, "e": 24068, "s": 24040, "text": "\n29 Nov, 2021" }, { "code": null, "e": 24403, "s": 24068, "text": "Python Plotly Library is an open-source library that can be used for data visualization and understanding data simply and easily. Plotly supports various types of plots like line charts, scatter plots, histograms, cox plots, etc. So you all must be wondering why Plotly over other visualization tools or libraries? Here’s the answer –" }, { "code": null, "e": 24522, "s": 24403, "text": "Plotly has hover tool capabilities that allow us to detect any outliers or anomalies in a large number of data points." }, { "code": null, "e": 24599, "s": 24522, "text": "It is visually attractive that can be accepted by a wide range of audiences." }, { "code": null, "e": 24723, "s": 24599, "text": "It allows us for the endless customization of our graphs that makes our plot more meaningful and understandable for others." }, { "code": null, "e": 24907, "s": 24723, "text": "This tutorial aims at providing you the insight about Plotly with the help of the huge dataset explaining the Plotly from basics to advance and covering all the popularly used charts." }, { "code": null, "e": 24925, "s": 24907, "text": "Table Of Content " }, { "code": null, "e": 24938, "s": 24925, "text": "Installation" }, { "code": null, "e": 24966, "s": 24938, "text": "Package Structure of Plotly" }, { "code": null, "e": 24982, "s": 24966, "text": "Getting Started" }, { "code": null, "e": 25202, "s": 24982, "text": "Creating Different Types of Charts Line ChartBar ChartHistogramsScatter Plot and Bubble chartsPie ChartsBox PlotsViolin plotsGantt ChartsContour PlotsHeatmapsError Bars3D Line Plots3D Scatter Plot Plotly3D Surface Plots" }, { "code": null, "e": 25213, "s": 25202, "text": "Line Chart" }, { "code": null, "e": 25223, "s": 25213, "text": "Bar Chart" }, { "code": null, "e": 25234, "s": 25223, "text": "Histograms" }, { "code": null, "e": 25265, "s": 25234, "text": "Scatter Plot and Bubble charts" }, { "code": null, "e": 25276, "s": 25265, "text": "Pie Charts" }, { "code": null, "e": 25286, "s": 25276, "text": "Box Plots" }, { "code": null, "e": 25299, "s": 25286, "text": "Violin plots" }, { "code": null, "e": 25312, "s": 25299, "text": "Gantt Charts" }, { "code": null, "e": 25326, "s": 25312, "text": "Contour Plots" }, { "code": null, "e": 25335, "s": 25326, "text": "Heatmaps" }, { "code": null, "e": 25346, "s": 25335, "text": "Error Bars" }, { "code": null, "e": 25360, "s": 25346, "text": "3D Line Plots" }, { "code": null, "e": 25383, "s": 25360, "text": "3D Scatter Plot Plotly" }, { "code": null, "e": 25400, "s": 25383, "text": "3D Surface Plots" }, { "code": null, "e": 25528, "s": 25400, "text": "Interacting with the Plots Creating Dropdown Menu in PlotlyAdding Buttons to the PlotCreating Sliders and Selectors to the Plot" }, { "code": null, "e": 25561, "s": 25528, "text": "Creating Dropdown Menu in Plotly" }, { "code": null, "e": 25588, "s": 25561, "text": "Adding Buttons to the Plot" }, { "code": null, "e": 25631, "s": 25588, "text": "Creating Sliders and Selectors to the Plot" }, { "code": null, "e": 25655, "s": 25631, "text": "More Plots using Plotly" }, { "code": null, "e": 25677, "s": 25655, "text": "More Topics on Plotly" }, { "code": null, "e": 25708, "s": 25677, "text": "Recent Articles on Plotly !!! " }, { "code": null, "e": 25805, "s": 25708, "text": "Plotly does not come built-in with Python. To install it type the below command in the terminal." 
}, { "code": null, "e": 25824, "s": 25805, "text": "pip install plotly" }, { "code": null, "e": 25893, "s": 25824, "text": "This may take some time as it will install the dependencies as well." }, { "code": null, "e": 25943, "s": 25893, "text": "There are three main modules in Plotly. They are:" }, { "code": null, "e": 25957, "s": 25943, "text": "plotly.plotly" }, { "code": null, "e": 25978, "s": 25957, "text": "plotly.graph.objects" }, { "code": null, "e": 25991, "s": 25978, "text": "plotly.tools" }, { "code": null, "e": 26133, "s": 25991, "text": "plotly.plotly acts as the interface between the local machine and Plotly. It contains functions that require a response from Plotly’s server." }, { "code": null, "e": 26525, "s": 26133, "text": "plotly.graph_objects module contains the objects (Figure, layout, data, and the definition of the plots like scatter plot, line chart) that are responsible for creating the plots. The Figure can be represented either as dict or instances of plotly.graph_objects.Figure and these are serialized as JSON before it gets passed to plotly.js. Consider the below example for better understanding." }, { "code": null, "e": 26675, "s": 26525, "text": "Note: plotly.express module can create the entire Figure at once. It uses the graph_objects internally and returns the graph_objects.Figure instance." }, { "code": null, "e": 26684, "s": 26675, "text": "Example:" }, { "code": null, "e": 26692, "s": 26684, "text": "Python3" }, { "code": "import plotly.express as px # Creating the Figure instancefig = px.line(x=[1,2, 3], y=[1, 2, 3]) # printing the figure instanceprint(fig)", "e": 26831, "s": 26692, "text": null }, { "code": null, "e": 26839, "s": 26831, "text": "Output:" }, { "code": null, "e": 27142, "s": 26839, "text": "Figures are represented as trees where the root node has three top layer attributes – data, layout, and frames and the named nodes called ‘attributes’. Consider the above example, layout.legend is a nested dictionary where the legend is the key inside the dictionary whose value is also a dictionary. " }, { "code": null, "e": 27256, "s": 27142, "text": "plotly.tools module contains various tools in the forms of the functions that can enhance the Plotly experience. " }, { "code": null, "e": 27405, "s": 27256, "text": "After learning the installation and basic structure of the Plotly, let’s create a simple plot using the pre-defined data sets defined by the plotly." }, { "code": null, "e": 27414, "s": 27405, "text": "Example:" }, { "code": null, "e": 27422, "s": 27414, "text": "Python3" }, { "code": "import plotly.express as px # Creating the Figure instancefig = px.line(x=[1, 2, 3], y=[1, 2, 3]) # showing the plotfig.show()", "e": 27550, "s": 27422, "text": null }, { "code": null, "e": 27558, "s": 27550, "text": "Output:" }, { "code": null, "e": 27750, "s": 27558, "text": "In the above example, the plotly.express module is imported which returns the Figure instance. We have created a simple line chart by passing the x, y coordinates of the points to be plotted." }, { "code": null, "e": 27944, "s": 27750, "text": "With plotly we can create more than 40 charts and every plot can be created using the plotly.express and plotly.graph_objects class. Let’s see some commonly used charts with the help of Plotly." }, { "code": null, "e": 28239, "s": 27944, "text": "Line plot in Plotly is much accessible and illustrious annexation to plotly which manage a variety of types of data and assemble easy-to-style statistic. 
With px.line each data position is represented as a vertex (which location is given by the x and y columns) of a polyline mark in 2D space." }, { "code": null, "e": 28248, "s": 28239, "text": "Example:" }, { "code": null, "e": 28256, "s": 28248, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # plotting the line chartfig = px.line(df, x=\"species\", y=\"petal_width\") # showing the plotfig.show()", "e": 28430, "s": 28256, "text": null }, { "code": null, "e": 28438, "s": 28430, "text": "Output:" }, { "code": null, "e": 28517, "s": 28438, "text": "Refer to the below articles to get detailed information about the line charts." }, { "code": null, "e": 28558, "s": 28517, "text": "plotly.express.line() function in Python" }, { "code": null, "e": 28592, "s": 28558, "text": "Line Chart using Plotly in Python" }, { "code": null, "e": 28925, "s": 28592, "text": "A bar chart is a pictorial representation of data that presents categorical data with rectangular bars with heights or lengths proportional to the values that they represent. In other words, it is the pictorial representation of dataset. These data sets contain the numerical values of variables that represent the length or height." }, { "code": null, "e": 28934, "s": 28925, "text": "Example:" }, { "code": null, "e": 28942, "s": 28934, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # plotting the bar chartfig = px.bar(df, x=\"sepal_width\", y=\"sepal_length\") # showing the plotfig.show()", "e": 29119, "s": 28942, "text": null }, { "code": null, "e": 29127, "s": 29119, "text": "Output:" }, { "code": null, "e": 29204, "s": 29127, "text": "Refer to the below articles to get detailed information about the bar chart." }, { "code": null, "e": 29237, "s": 29204, "text": "Bar chart using Plotly in Python" }, { "code": null, "e": 29287, "s": 29237, "text": "How to create Stacked bar chart in Python-Plotly?" }, { "code": null, "e": 29329, "s": 29287, "text": "How to group Bar Charts in Python-Plotly?" }, { "code": null, "e": 29704, "s": 29329, "text": "A histogram contains a rectangular area to display the statistical information which is proportional to the frequency of a variable and its width in successive numerical intervals. A graphical representation that manages a group of data points into different specified ranges. It has a special feature that shows no gaps between the bars and similar to a vertical bar graph." }, { "code": null, "e": 29713, "s": 29704, "text": "Example:" }, { "code": null, "e": 29721, "s": 29713, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # plotting the histogramfig = px.histogram(df, x=\"sepal_length\", y=\"petal_width\") # showing the plotfig.show()", "e": 29904, "s": 29721, "text": null }, { "code": null, "e": 29912, "s": 29904, "text": "Output:" }, { "code": null, "e": 29990, "s": 29912, "text": "Refer to the below articles to get detailed information about the histograms." }, { "code": null, "e": 30023, "s": 29990, "text": "Histogram using Plotly in Python" }, { "code": null, "e": 30070, "s": 30023, "text": "Histograms in Plotly using graph_objects class" }, { "code": null, "e": 30118, "s": 30070, "text": "How to create a Cumulative Histogram in Plotly?" }, { "code": null, "e": 30389, "s": 30118, "text": "A scatter plot is a set of dotted points to represent individual pieces of data in the horizontal and vertical axis. 
The pattern of the resulting points, with the values of two variables plotted along the X-axis and Y-axis, can reveal a correlation between them." }, { "code": null, "e": 30592, "s": 30389, "text": "A bubble plot is a scatter plot with bubbles (color-filled circles). Bubbles have various sizes that depend on another variable in the data. It can be created using the scatter() method of plotly.express." }, { "code": null, "e": 30616, "s": 30592, "text": "Example 1: Scatter Plot" }, { "code": null, "e": 30624, "s": 30616, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # plotting the scatter chartfig = px.scatter(df, x=\"species\", y=\"petal_width\") # showing the plotfig.show()", "e": 30804, "s": 30624, "text": null }, { "code": null, "e": 30812, "s": 30804, "text": "Output:" }, { "code": null, "e": 30835, "s": 30812, "text": "Example 2: Bubble Plot" }, { "code": null, "e": 30843, "s": 30835, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # plotting the bubble chartfig = px.scatter(df, x=\"species\", y=\"petal_width\", size=\"petal_length\", color=\"species\") # showing the plotfig.show()", "e": 31076, "s": 30843, "text": null }, { "code": null, "e": 31084, "s": 31076, "text": "Output:" }, { "code": null, "e": 31182, "s": 31084, "text": "Refer to the below articles to get detailed information about the scatter plots and bubble plots." }, { "code": null, "e": 31226, "s": 31182, "text": "plotly.express.scatter() function in Python" }, { "code": null, "e": 31275, "s": 31226, "text": "Scatter plot in Plotly using graph_objects class" }, { "code": null, "e": 31311, "s": 31275, "text": "Scatter plot using Plotly in Python" }, { "code": null, "e": 31347, "s": 31311, "text": "Bubble chart using Plotly in Python" }, { "code": null, "e": 31691, "s": 31347, "text": "A pie chart is a circular statistical graphic that is divided into slices to illustrate numerical proportions. It uses “pie slices”, where each sector shows the relative size of the data it represents. Because the circle is cut by radii into segments describing relative frequencies or magnitudes, it is also known as a circle graph." }, { "code": null, "e": 31700, "s": 31691, "text": "Example:" }, { "code": null, "e": 31708, "s": 31700, "text": "Python3" }, { "code": "import plotly.express as px # using the tips datasetdf = px.data.tips() # plotting the pie chartfig = px.pie(df, values=\"total_bill\", names=\"day\") # showing the plotfig.show()", "e": 31884, "s": 31708, "text": null }, { "code": null, "e": 31892, "s": 31884, "text": "Output:" }, { "code": null, "e": 31970, "s": 31892, "text": "Refer to the below articles to get detailed information about the pie charts." }, { "code": null, "e": 32002, "s": 31970, "text": "Pie plot using Plotly in Python" }, { "code": null, "e": 32434, "s": 32002, "text": "A Box Plot, also known as a Whisker plot, displays a summary of a set of data values through five properties: minimum, first quartile, median, third quartile, and maximum. A box is drawn from the first quartile to the third quartile, and a vertical line through the box marks the median. Here the x-axis denotes the categories to be plotted while the y-axis shows the distribution of their values."
}, { "code": null, "e": 32443, "s": 32434, "text": "Example:" }, { "code": null, "e": 32451, "s": 32443, "text": "Python3" }, { "code": "import plotly.express as px # using the tips datasetdf = px.data.tips() # plotting the box chartfig = px.box(df, x=\"day\", y=\"total_bill\") # showing the plotfig.show()", "e": 32618, "s": 32451, "text": null }, { "code": null, "e": 32626, "s": 32618, "text": "Output:" }, { "code": null, "e": 32699, "s": 32626, "text": "Refer to the below articles to get detailed information about box plots." }, { "code": null, "e": 32731, "s": 32699, "text": "Box Plot using Plotly in Python" }, { "code": null, "e": 32776, "s": 32731, "text": "Box plot in Plotly using graph_objects class" }, { "code": null, "e": 32818, "s": 32776, "text": "How to create Grouped box plot in Plotly?" }, { "code": null, "e": 33355, "s": 32818, "text": "Violin Plot is a method to visualize the distribution of numerical data of different variables. It is similar to Box Plot but with a rotated plot on each side, giving more information about the density estimate on the y-axis. The density is mirrored and flipped over and the resulting shape is filled in, creating an image resembling a violin. The advantage of a violin plot is that it can show nuances in the distribution that aren’t perceptible in a boxplot. On the other hand, the boxplot more clearly shows the outliers in the data." }, { "code": null, "e": 33364, "s": 33355, "text": "Example:" }, { "code": null, "e": 33372, "s": 33364, "text": "Python3" }, { "code": "import plotly.express as px # using the tips datasetdf = px.data.tips() # plotting the violin chartfig = px.violin(df, x=\"day\", y=\"total_bill\") # showing the plotfig.show()", "e": 33545, "s": 33372, "text": null }, { "code": null, "e": 33553, "s": 33545, "text": "Output:" }, { "code": null, "e": 33633, "s": 33553, "text": "Refer to the below articles to get detailed information about the violin plots." }, { "code": null, "e": 33659, "s": 33633, "text": "Violin Plots using Plotly" }, { "code": null, "e": 33917, "s": 33659, "text": "Generalized Activity Normalization Time Table (GANTT) chart is type of chart in which series of horizontal lines are present that show the amount of work done or production completed in given period of time in relation to amount planned for those projects. " }, { "code": null, "e": 33926, "s": 33917, "text": "Example:" }, { "code": null, "e": 33934, "s": 33926, "text": "Python3" }, { "code": "import plotly.figure_factory as ff # Data to be plotteddf = [dict(Task=\"A\", Start='2020-01-01', Finish='2009-02-02'), dict(Task=\"Job B\", Start='2020-03-01', Finish='2020-11-11'), dict(Task=\"Job C\", Start='2020-08-06', Finish='2020-09-21')] # Creating the plotfig = ff.create_gantt(df)fig.show()", "e": 34235, "s": 33934, "text": null }, { "code": null, "e": 34243, "s": 34235, "text": "Output:" }, { "code": null, "e": 34323, "s": 34243, "text": "Refer to the below articles to get detailed information about the Gantt Charts." }, { "code": null, "e": 34345, "s": 34323, "text": "Gantt Chart in Plotly" }, { "code": null, "e": 34659, "s": 34345, "text": "Contour plots also called level plots are a tool for doing multivariate analysis and visualizing 3-D plots in 2-D space. If we consider X and Y as our variables we want to plot then the response Z will be plotted as slices on the X-Y plane due to which contours are sometimes referred as Z-slices or iso-response." 
}, { "code": null, "e": 34828, "s": 34659, "text": "A contour plots is used in the case where you want to see the changes in some value (Z) as a function with respect to the two values (X, Y). Consider the below example." }, { "code": null, "e": 34837, "s": 34828, "text": "Example:" }, { "code": null, "e": 34845, "s": 34837, "text": "Python3" }, { "code": "import plotly.graph_objects as go # Creating the X, Y value that will# change the values of Z as a functionfeature_x = np.arange(0, 50, 2)feature_y = np.arange(0, 50, 3) # Creating 2-D grid of features[X, Y] = np.meshgrid(feature_x, feature_y) Z = np.cos(X / 2) + np.sin(Y / 4) # plotting the figurefig = go.Figure(data = go.Contour(x = feature_x, y = feature_y, z = Z)) fig.show()", "e": 35231, "s": 34845, "text": null }, { "code": null, "e": 35239, "s": 35231, "text": "Output:" }, { "code": null, "e": 35316, "s": 35239, "text": "Refer to the below articles to get detailed information about contour plots." }, { "code": null, "e": 35353, "s": 35316, "text": "Contour Plots using Plotly in Python" }, { "code": null, "e": 35711, "s": 35353, "text": "Heatmap is defined as a graphical representation of data using colors to visualize the value of the matrix. In this, to represent more common values or higher activities brighter colors basically reddish colors are used and to represent less common or activity values, darker colors are preferred. Heatmap is also defined by the name of the shading matrix. " }, { "code": null, "e": 35721, "s": 35711, "text": "Example: " }, { "code": null, "e": 35729, "s": 35721, "text": "Python3" }, { "code": "import plotly.graph_objects as go feature_x = np.arange(0, 50, 2)feature_y = np.arange(0, 50, 3) # Creating 2-D grid of features[X, Y] = np.meshgrid(feature_x, feature_y) Z = np.cos(X / 2) + np.sin(Y / 4) # plotting the figurefig = go.Figure(data = go.Heatmap(x = feature_x, y = feature_y, z = Z,)) fig.show()", "e": 36044, "s": 35729, "text": null }, { "code": null, "e": 36052, "s": 36044, "text": "Output:" }, { "code": null, "e": 36128, "s": 36052, "text": "Refer to the below articles to get detailed information about the heatmaps." }, { "code": null, "e": 36180, "s": 36128, "text": "Create Heatmaps using graph_objects class in Plotly" }, { "code": null, "e": 36222, "s": 36180, "text": "Annotated Heatmaps using Plotly in Python" }, { "code": null, "e": 36591, "s": 36222, "text": "For functions representing 2D data points such as px.scatter, px.line, px.bar, etc., error bars are given as a column name which is the value of the error_x (for the error on x position) and error_y (for the error on y position). Error bars are the graphical presentation alternation of data and used on graphs to imply the error or uncertainty in a reported capacity." }, { "code": null, "e": 36600, "s": 36591, "text": "Example:" }, { "code": null, "e": 36608, "s": 36600, "text": "Python3" }, { "code": "import plotly.express as px # using the iris datasetdf = px.data.iris() # Calculating the error fielddf[\"error\"] = df[\"petal_length\"]/100 # plotting the scatter chartfig = px.scatter(df, x=\"species\", y=\"petal_width\", error_x=\"error\", error_y=\"error\") # showing the plotfig.show()", "e": 36903, "s": 36608, "text": null }, { "code": null, "e": 36911, "s": 36903, "text": "Output:" }, { "code": null, "e": 37212, "s": 36911, "text": "Line plot in plotly is much accessible and illustrious annexation to plotly which manage a variety of types of data and assemble easy-to-style statistic. 
With px.line_3d, each data point is represented as a vertex (whose location is given by the x, y and z columns) of a polyline mark in 3D space." }, { "code": null, "e": 37221, "s": 37212, "text": "Example:" }, { "code": null, "e": 37229, "s": 37221, "text": "Python3" }, { "code": "import plotly.express as px # data to be plotteddf = px.data.tips() # plotting the figurefig = px.line_3d(df, x=\"sex\", y=\"day\", z=\"time\", color=\"sex\") fig.show()", "e": 37407, "s": 37229, "text": null }, { "code": null, "e": 37415, "s": 37407, "text": "Output:" }, { "code": null, "e": 37497, "s": 37415, "text": "Refer to the below articles to get detailed information about the 3D line charts." }, { "code": null, "e": 37541, "s": 37497, "text": "plotly.express.line_3d() function in Python" }, { "code": null, "e": 37578, "s": 37541, "text": "3D Line Plots using Plotly in Python" }, { "code": null, "e": 37998, "s": 37578, "text": "A 3D Scatter Plot displays data points in three dimensions, and the graphic can be enhanced by mapping up to three additional variables using the semantics of the hue, size, and style parameters. These parameters control the visual semantics that are used to identify the different subsets of the data. Using redundant semantics can be helpful for making graphics more accessible. It can be created using the scatter_3d function of the plotly.express class." }, { "code": null, "e": 38007, "s": 37998, "text": "Example:" }, { "code": null, "e": 38015, "s": 38007, "text": "Python3" }, { "code": "import plotly.express as px # Data to be plotteddf = px.data.iris() # Plotting the figurefig = px.scatter_3d(df, x = 'sepal_width', y = 'sepal_length', z = 'petal_width', color = 'species') fig.show()", "e": 38273, "s": 38015, "text": null }, { "code": null, "e": 38281, "s": 38273, "text": "Output:" }, { "code": null, "e": 38364, "s": 38281, "text": "Refer to the below articles to get detailed information about the 3D scatter plot." }, { "code": null, "e": 38403, "s": 38364, "text": "3D scatter plot using Plotly in Python" }, { "code": null, "e": 38462, "s": 38403, "text": "3D Scatter Plot using graph_objects Class in Plotly-Python" }, { "code": null, "e": 38501, "s": 38462, "text": "3D Bubble chart using Plotly in Python" }, { "code": null, "e": 38824, "s": 38501, "text": "A surface plot is a plot of three-dimensional data with coordinates X, Y, and Z. Rather than showing individual data points, the surface plot shows a functional relationship between the dependent variable Z and the two independent variables X and Y. This plot is used to distinguish between dependent and independent variables." }, { "code": null, "e": 38833, "s": 38824, "text": "Example:" }, { "code": null, "e": 38841, "s": 38833, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport numpy as np # Data to be plottedx = np.outer(np.linspace(-2, 2, 30), np.ones(30))y = x.copy().Tz = np.cos(x ** 2 + y ** 2) # plotting the figurefig = go.Figure(data=[go.Surface(x=x, y=y, z=z)]) fig.show()", "e": 39086, "s": 38841, "text": null }, { "code": null, "e": 39094, "s": 39086, "text": "Output:" }, { "code": null, "e": 39324, "s": 39094, "text": "Plotly provides various tools for interacting with the plots such as adding dropdowns, buttons, sliders, etc. These can be created using the update menu attribute of the plot layout. Let's see how to do all such things in detail." }, { "code": null, "e": 39626, "s": 39324, "text": "A drop-down menu is a part of a menu button that is displayed on the screen at all times. 
Every menu button is associated with a Menu widget that can display the choices for that menu button when it is clicked. In Plotly, there are four possible methods for modifying a chart through the update menu:" }, { "code": null, "e": 39666, "s": 39626, "text": "restyle: modify data or data attributes" }, { "code": null, "e": 39701, "s": 39666, "text": "relayout: modify layout attributes" }, { "code": null, "e": 39743, "s": 39701, "text": "update: modify data and layout attributes" }, { "code": null, "e": 39780, "s": 39743, "text": "animate: start or pause an animation" }, { "code": null, "e": 39789, "s": 39780, "text": "Example:" }, { "code": null, "e": 39797, "s": 39789, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport numpy as np # creating random data through randomint# function of numpy.randomnp.random.seed(42) # Data to be Plottedrandom_x = np.random.randint(1, 101, 100)random_y = np.random.randint(1, 101, 100) plot = go.Figure(data=[go.Scatter( x=random_x, y=random_y, mode='markers',)]) # Add dropdownplot.update_layout( updatemenus=[ dict( buttons=list([ dict( args=[\"type\", \"scatter\"], label=\"Scatter Plot\", method=\"restyle\" ), dict( args=[\"type\", \"bar\"], label=\"Bar Chart\", method=\"restyle\" ) ]), direction=\"down\", ), ]) plot.show()", "e": 40601, "s": 39797, "text": null }, { "code": null, "e": 40609, "s": 40601, "text": "Output:" }, { "code": null, "e": 40788, "s": 40669, "text": "In the above example, we have created two graphs for the same data. These plots are accessible using the dropdown menu." }, { "code": null, "e": 41032, "s": 40788, "text": "In Plotly, custom buttons are used to quickly perform actions directly on the chart. There are also four possible methods that can be applied through custom buttons:" }, { "code": null, "e": 41072, "s": 41032, "text": "restyle: modify data or data attributes" }, { "code": null, "e": 41107, "s": 41072, "text": "relayout: modify layout attributes" }, { "code": null, "e": 41149, "s": 41107, "text": "update: modify data and layout attributes" }, { "code": null, "e": 41186, "s": 41149, "text": "animate: start or pause an animation" }, { "code": null, "e": 41195, "s": 41186, "text": "Example:" }, { "code": null, "e": 41203, "s": 41195, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport pandas as pd # reading the databasedata = pd.read_csv(\"tips.csv\") plot = go.Figure(data=[go.Scatter( x=data['day'], y=data['tip'], mode='markers',)]) # Add buttonsplot.update_layout( updatemenus=[ dict( type=\"buttons\", direction=\"left\", buttons=list([ dict( args=[\"type\", \"scatter\"], label=\"Scatter Plot\", method=\"restyle\" ), dict( args=[\"type\", \"bar\"], label=\"Bar Chart\", method=\"restyle\" ) ]), ), ]) plot.show()", "e": 41906, "s": 41203, "text": null }, { "code": null, "e": 41914, "s": 41906, "text": "Output:" }, { "code": null, "e": 42034, "s": 41914, "text": "In this example too, we create two different plots from the same data, and both plots are accessible via the buttons." }, { "code": null, "e": 42433, "s": 42034, "text": "In Plotly, the range slider is a custom range-type input control. It allows selecting a value or a range of values between a specified minimum and maximum. The range selector is a tool for selecting ranges to display within the chart. 
It provides buttons to select pre-configured ranges in the chart. It also provides input boxes where the minimum and maximum dates can be manually input." }, { "code": null, "e": 42442, "s": 42433, "text": "Example:" }, { "code": null, "e": 42450, "s": 42442, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport plotly.express as px df = px.data.tips() x = df['total_bill']y = df['day'] plot = go.Figure(data=[go.Scatter( x=x, y=y, mode='lines',)]) plot.update_layout( xaxis=dict( rangeselector=dict( buttons=list([ dict(count=1, step=\"day\", stepmode=\"backward\"), ]) ), rangeslider=dict( visible=True ), )) plot.show()", "e": 42944, "s": 42450, "text": null }, { "code": null, "e": 42952, "s": 42944, "text": "Output:" }, { "code": null, "e": 43060, "s": 43012, "text": "plotly.express.scatter_geo() function in Python" }, { "code": null, "e": 43110, "s": 43060, "text": "plotly.express.scatter_polar() function in Python" }, { "code": null, "e": 43162, "s": 43110, "text": "plotly.express.scatter_ternary() function in Python" }, { "code": null, "e": 43211, "s": 43162, "text": "plotly.express.line_ternary() function in Python" }, { "code": null, "e": 43252, "s": 43211, "text": "Filled area chart using plotly in Python" }, { "code": null, "e": 43308, "s": 43252, "text": "How to Create Stacked area plot using Plotly in Python?" }, { "code": null, "e": 43345, "s": 43308, "text": "Sunburst Plot using Plotly in Python" }, { "code": null, "e": 43395, "s": 43345, "text": "Sunburst Plot using graph_objects class in plotly" }, { "code": null, "e": 43463, "s": 43395, "text": "plotly.figure_factory.create_annotated_heatmap() function in Python" }, { "code": null, "e": 43524, "s": 43463, "text": "plotly.figure_factory.create_2d_density() function in Python" }, { "code": null, "e": 43569, "s": 43524, "text": "Ternary contours Plot using Plotly in Python" }, { "code": null, "e": 43611, "s": 43569, "text": "How to make Log Plots in Plotly – Python?" }, { "code": null, "e": 43647, "s": 43611, "text": "Polar Charts using Plotly in Python" }, { "code": null, "e": 43690, "s": 43647, "text": "Carpet Contour Plot using Plotly in Python" }, { "code": null, "e": 43714, "s": 43690, "text": "Ternary Plots in Plotly" }, { "code": null, "e": 43760, "s": 43714, "text": "How to create a Ternary Overlay using Plotly?" }, { "code": null, "e": 43809, "s": 43760, "text": "Parallel Coordinates Plot using Plotly in Python" }, { "code": null, "e": 43845, "s": 43809, "text": "Carpet Plots using Plotly in Python" }, { "code": null, "e": 43882, "s": 43845, "text": "3D Cone Plots using Plotly in Python" }, { "code": null, "e": 43921, "s": 43882, "text": "3D Volume Plots using Plotly in Python" }, { "code": null, "e": 43964, "s": 43921, "text": "3D Streamtube Plots using Plotly in Python" }, { "code": null, "e": 44001, "s": 43964, "text": "3D Mesh Plots using Plotly in Python" }, { "code": null, "e": 44046, "s": 44001, "text": "How to create Tables using Plotly in Python?" 
}, { "code": null, "e": 44107, "s": 44046, "text": "plotly.figure_factory.create_dendrogram() function in Python" }, { "code": null, "e": 44156, "s": 44107, "text": "Define Node position in Sankey Diagram in plotly" }, { "code": null, "e": 44194, "s": 44156, "text": "Sankey Diagram using Plotly in Python" }, { "code": null, "e": 44230, "s": 44194, "text": "Quiver Plots using Plotly in Python" }, { "code": null, "e": 44261, "s": 44230, "text": "Treemap using Plotly in Python" }, { "code": null, "e": 44305, "s": 44261, "text": "Treemap using graph_objects class in plotly" }, { "code": null, "e": 44367, "s": 44305, "text": "plotly.figure_factory.create_candlestick() function in Python" }, { "code": null, "e": 44428, "s": 44367, "text": "plotly.figure_factory.create_choropleth() function in Python" }, { "code": null, "e": 44476, "s": 44428, "text": "plotly.figure_factory.create_bullet() in Python" }, { "code": null, "e": 44516, "s": 44476, "text": "Streamline Plots in Plotly using Python" }, { "code": null, "e": 44579, "s": 44516, "text": "How to make Wind Rose and Polar Bar Charts in Plotly – Python?" }, { "code": null, "e": 44605, "s": 44579, "text": "Title alignment in Plotly" }, { "code": null, "e": 44651, "s": 44605, "text": "Change marker border color in Plotly – Python" }, { "code": null, "e": 44697, "s": 44651, "text": "Plot Live Graphs using Python Dash and Plotly" }, { "code": null, "e": 44746, "s": 44697, "text": "Animated Data Visualization using Plotly Express" }, { "code": null, "e": 44789, "s": 44746, "text": "Introduction to Plotly-online using Python" }, { "code": null, "e": 44824, "s": 44789, "text": "How to display image using Plotly?" }, { "code": null, "e": 44838, "s": 44824, "text": "sumitgumber28" }, { "code": null, "e": 44848, "s": 44838, "text": "nnr223442" }, { "code": null, "e": 44862, "s": 44848, "text": "avtarkumar719" }, { "code": null, "e": 44872, "s": 44862, "text": "kk9826225" }, { "code": null, "e": 44879, "s": 44872, "text": "Python" }, { "code": null, "e": 44977, "s": 44879, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 45005, "s": 44977, "text": "Read JSON file using Python" }, { "code": null, "e": 45055, "s": 45005, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 45077, "s": 45055, "text": "Python map() function" }, { "code": null, "e": 45121, "s": 45077, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 45139, "s": 45121, "text": "Python Dictionary" }, { "code": null, "e": 45162, "s": 45139, "text": "Taking input in Python" }, { "code": null, "e": 45197, "s": 45162, "text": "Read a file line by line in Python" }, { "code": null, "e": 45219, "s": 45197, "text": "Enumerate() in Python" }, { "code": null, "e": 45251, "s": 45219, "text": "How to Install PIP on Windows ?" } ]
Reset keys of array elements using PHP?
To reset keys of array elements using PHP, the code is as follows− <?php $arr = array( "p"=>"150", "q"=>"100", "r"=>"120", "s"=>"110"); var_dump ($arr); $res = array_values($arr); var_dump ($res); ?> This will produce the following output− array(4) { ["p"]=> string(3) "150" ["q"]=> string(3) "100" ["r"]=> string(3) "120" ["s"]=> string(3) "110" } array(4) { [0]=> string(3) "150" [1]=> string(3) "100" [2]=> string(3) "120" [3]=> string(3) "110" } Let us now see another example − <?php $arr = array( 8=>"Ben", 4=>"Kevin", 7=>"Mark", 3=>"Hanks"); var_dump ($arr); $res = array_values($arr); var_dump ($res); ?> This will produce the following output− array(4) { [8]=> string(3) "Ben" [4]=> string(5) "Kevin" [7]=> string(4) "Mark" [3]=> string(5) "Hanks" } array(4) { [0]=> string(3) "Ben" [1]=> string(5) "Kevin" [2]=> string(4) "Mark" [3]=> string(5) "Hanks" }
[ { "code": null, "e": 1129, "s": 1062, "text": "To reset keys of array elements using PHP, the code is as follows−" }, { "code": null, "e": 1140, "s": 1129, "text": " Live Demo" }, { "code": null, "e": 1285, "s": 1140, "text": "<?php\n $arr = array( \"p\"=>\"150\", \"q\"=>\"100\", \"r\"=>\"120\", \"s\"=>\"110\");\n var_dump ($arr);\n $res = array_values($arr);\n var_dump ($res);\n?>" }, { "code": null, "e": 1325, "s": 1285, "text": "This will produce the following output−" }, { "code": null, "e": 1583, "s": 1325, "text": "array(4) {\n [\"p\"]=>\n string(3) \"150\"\n [\"q\"]=>\n string(3) \"100\"\n [\"r\"]=>\n string(3) \"120\"\n [\"s\"]=>\n string(3) \"110\"\n}\narray(4) {\n [0]=>\n string(3) \"150\"\n [1]=>\n string(3) \"100\"\n [2]=>\n string(3) \"120\"\n [3]=>\n string(3) \"110\"\n}" }, { "code": null, "e": 1616, "s": 1583, "text": "Let us now see another example −" }, { "code": null, "e": 1627, "s": 1616, "text": " Live Demo" }, { "code": null, "e": 1769, "s": 1627, "text": "<?php\n $arr = array( 8=>\"Ben\", 4=>\"Kevin\", 7=>\"Mark\", 3=>\"Hanks\");\n var_dump ($arr);\n $res = array_values($arr);\n var_dump ($res);\n?>" }, { "code": null, "e": 1809, "s": 1769, "text": "This will produce the following output−" }, { "code": null, "e": 2070, "s": 1809, "text": "array(4) {\n [8]=>\n string(3) \"Ben\"\n [4]=>\n string(5) \"Kevin\"\n [7]=>\n string(4) \"Mark\"\n [3]=>\n string(5) \"Hanks\"\n}\narray(4) {\n [0]=>\n string(3) \"Ben\"\n [1]=>\n string(5) \"Kevin\"\n [2]=>\n string(4) \"Mark\"\n [3]=>\n string(5) \"Hanks\" \n}" } ]
Objective-C Classes & Objects
The main purpose of the Objective-C programming language is to add object orientation to the C programming language, and classes are the central feature of Objective-C that support object-oriented programming; they are often called user-defined types. A class is used to specify the form of an object, and it combines data representation and methods for manipulating that data into one neat package. The data and methods within a class are called members of the class. The class is defined in two different sections, namely @interface and @implementation. Almost everything is in the form of objects. Objects receive messages and objects are often referred to as receivers. Objects contain instance variables. Objects and instance variables have scope. Classes hide an object's implementation. Properties are used to provide access to class instance variables in other classes. When you define a class, you define a blueprint for a data type. This doesn't actually define any data, but it does define what the class name means, that is, what an object of the class will consist of and what operations can be performed on such an object. A class definition starts with the keyword @interface followed by the interface (class) name and the class body, enclosed by a pair of curly braces. In Objective-C, all classes are derived from the base class called NSObject. It is the superclass of all Objective-C classes. It provides basic methods like memory allocation and initialization. For example, we define the Box data type as follows −
Let us try the following example to make things clear − #import <Foundation/Foundation.h> @interface Box:NSObject { double length; // Length of a box double breadth; // Breadth of a box double height; // Height of a box } @property(nonatomic, readwrite) double height; // Property -(double) volume; @end @implementation Box @synthesize height; -(id)init { self = [super init]; length = 1.0; breadth = 1.0; return self; } -(double) volume { return length*breadth*height; } @end int main() { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; Box *box1 = [[Box alloc]init]; // Create box1 object of type Box Box *box2 = [[Box alloc]init]; // Create box2 object of type Box double volume = 0.0; // Store the volume of a box here // box 1 specification box1.height = 5.0; // box 2 specification box2.height = 10.0; // volume of box 1 volume = [box1 volume]; NSLog(@"Volume of Box1 : %f", volume); // volume of box 2 volume = [box2 volume]; NSLog(@"Volume of Box2 : %f", volume); [pool drain]; return 0; } When the above code is compiled and executed, it produces the following result − 2013-09-22 21:25:33.314 ClassAndObjects[387:303] Volume of Box1 : 5.000000 2013-09-22 21:25:33.316 ClassAndObjects[387:303] Volume of Box2 : 10.000000 Properties are introduced in Objective-C to ensure that the instance variables of a class can be accessed outside the class. Properties begin with @property, which is a keyword. It is followed by access specifiers, which are nonatomic or atomic, readwrite or readonly, and strong, unsafe_unretained or weak. This varies based on the type of the variable: for any pointer type, we can use strong, unsafe_unretained or weak, and similarly for other types we can use readwrite or readonly. This is followed by the datatype of the variable. Finally, we have the property name terminated by a semicolon. We can add a @synthesize statement in the implementation class, but in the latest Xcode the synthesis is taken care of by Xcode and you need not include the synthesize statement. Properties are what make it possible to access the instance variables of a class from outside it. Internally, getter and setter methods are created for the properties. For example, let's assume we have a property @property (nonatomic, readwrite) BOOL isDone. Under the hood, a setter and a getter are created as shown below. -(void)setIsDone:(BOOL)isDone; -(BOOL)isDone;
[ { "code": null, "e": 2805, "s": 2560, "text": "The main purpose of Objective-C programming language is to add object orientation to the C programming language and classes are the central feature of Objective-C that support object-oriented programming and are often called user-defined types." }, { "code": null, "e": 3021, "s": 2805, "text": "A class is used to specify the form of an object and it combines data representation and methods for manipulating that data into one neat package. The data and methods within a class are called members of the class." }, { "code": null, "e": 3107, "s": 3021, "text": "The class is defined in two different sections namely @interface and @implementation." }, { "code": null, "e": 3193, "s": 3107, "text": "The class is defined in two different sections namely @interface and @implementation." }, { "code": null, "e": 3234, "s": 3193, "text": "Almost everything is in form of objects." }, { "code": null, "e": 3275, "s": 3234, "text": "Almost everything is in form of objects." }, { "code": null, "e": 3345, "s": 3275, "text": "Objects receive messages and objects are often referred as receivers." }, { "code": null, "e": 3415, "s": 3345, "text": "Objects receive messages and objects are often referred as receivers." }, { "code": null, "e": 3451, "s": 3415, "text": "Objects contain instance variables." }, { "code": null, "e": 3487, "s": 3451, "text": "Objects contain instance variables." }, { "code": null, "e": 3530, "s": 3487, "text": "Objects and instance variables have scope." }, { "code": null, "e": 3573, "s": 3530, "text": "Objects and instance variables have scope." }, { "code": null, "e": 3614, "s": 3573, "text": "Classes hide an object's implementation." }, { "code": null, "e": 3655, "s": 3614, "text": "Classes hide an object's implementation." }, { "code": null, "e": 3739, "s": 3655, "text": "Properties are used to provide access to class instance variables in other classes." }, { "code": null, "e": 3823, "s": 3739, "text": "Properties are used to provide access to class instance variables in other classes." }, { "code": null, "e": 4082, "s": 3823, "text": "When you define a class, you define a blueprint for a data type. This doesn't actually define any data, but it does define what the class name means, that is, what an object of the class will consist of and what operations can be performed on such an object." }, { "code": null, "e": 4505, "s": 4082, "text": "A class definition starts with the keyword @interface followed by the interface(class) name; and the class body, enclosed by a pair of curly braces. In Objective-C, all classes are derived from the base class called NSObject. It is the superclass of all Objective-C classes. It provides basic methods like memory allocation and initialization. For example, we defined the Box data type using the keyword class as follows −" }, { "code": null, "e": 4704, "s": 4505, "text": "@interface Box:NSObject {\n //Instance variables\n double length; // Length of a box\n double breadth; // Breadth of a box\n}\n@property(nonatomic, readwrite) double height; // Property\n\n@end" }, { "code": null, "e": 4796, "s": 4704, "text": "The instance variables are private and are only accessible inside the class implementation." }, { "code": null, "e": 5059, "s": 4796, "text": "A class provides the blueprints for objects, so basically an object is created from a class. We declare objects of a class with exactly the same sort of declaration that we declare variables of basic types. 
Following statements declare two objects of class Box −" }, { "code": null, "e": 5195, "s": 5059, "text": "Box box1 = [[Box alloc]init]; // Create box1 object of type Box\nBox box2 = [[Box alloc]init]; // Create box2 object of type Box" }, { "code": null, "e": 5271, "s": 5195, "text": "Both of the objects box1 and box2 will have their own copy of data members." }, { "code": null, "e": 5425, "s": 5271, "text": "The properties of objects of a class can be accessed using the direct member access operator (.). Let us try the following example to make things clear −" }, { "code": null, "e": 6508, "s": 5425, "text": "#import <Foundation/Foundation.h>\n\n@interface Box:NSObject {\n double length; // Length of a box\n double breadth; // Breadth of a box\n double height; // Height of a box\n}\n\n@property(nonatomic, readwrite) double height; // Property\n-(double) volume;\n@end\n\n@implementation Box\n\n@synthesize height; \n\n-(id)init {\n self = [super init];\n length = 1.0;\n breadth = 1.0;\n return self;\n}\n\n-(double) volume {\n return length*breadth*height;\n}\n\n@end\n\nint main() {\n NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; \n Box *box1 = [[Box alloc]init]; // Create box1 object of type Box\n Box *box2 = [[Box alloc]init]; // Create box2 object of type Box\n\n double volume = 0.0; // Store the volume of a box here\n \n // box 1 specification\n box1.height = 5.0; \n\n // box 2 specification\n box2.height = 10.0;\n \n // volume of box 1\n volume = [box1 volume];\n NSLog(@\"Volume of Box1 : %f\", volume);\n \n // volume of box 2\n volume = [box2 volume];\n NSLog(@\"Volume of Box2 : %f\", volume);\n \n [pool drain];\n return 0;\n}" }, { "code": null, "e": 6589, "s": 6508, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 6741, "s": 6589, "text": "2013-09-22 21:25:33.314 ClassAndObjects[387:303] Volume of Box1 : 5.000000\n2013-09-22 21:25:33.316 ClassAndObjects[387:303] Volume of Box2 : 10.000000\n" }, { "code": null, "e": 6867, "s": 6741, "text": "Properties are introduced in Objective-C to ensure that the instance variable of the class can be accessed outside the class." }, { "code": null, "e": 6919, "s": 6867, "text": "Properties begin with @property, which is a keyword" }, { "code": null, "e": 6971, "s": 6919, "text": "Properties begin with @property, which is a keyword" }, { "code": null, "e": 7277, "s": 6971, "text": "It is followed with access specifiers, which are nonatomic or atomic, readwrite or readonly and strong, unsafe_unretained or weak. This varies based on the type of the variable. For any pointer type, we can use strong, unsafe_unretained or weak. Similarly for other types we can use readwrite or readonly." }, { "code": null, "e": 7583, "s": 7277, "text": "It is followed with access specifiers, which are nonatomic or atomic, readwrite or readonly and strong, unsafe_unretained or weak. This varies based on the type of the variable. For any pointer type, we can use strong, unsafe_unretained or weak. Similarly for other types we can use readwrite or readonly." }, { "code": null, "e": 7633, "s": 7583, "text": "This is followed by the datatype of the variable." }, { "code": null, "e": 7683, "s": 7633, "text": "This is followed by the datatype of the variable." }, { "code": null, "e": 7745, "s": 7683, "text": "Finally, we have the property name terminated by a semicolon." }, { "code": null, "e": 7807, "s": 7745, "text": "Finally, we have the property name terminated by a semicolon." 
}, { "code": null, "e": 7986, "s": 7807, "text": "We can add synthesize statement in the implementation class. But in the latest XCode, the synthesis part is taken care by the XCode and you need not include synthesize statement." }, { "code": null, "e": 8165, "s": 7986, "text": "We can add synthesize statement in the implementation class. But in the latest XCode, the synthesis part is taken care by the XCode and you need not include synthesize statement." }, { "code": null, "e": 8335, "s": 8165, "text": "It is only possible with the properties we can access the instance variables of the class. Actually, internally getter and setter methods are created for the properties." }, { "code": null, "e": 8496, "s": 8335, "text": "For example, let's assume we have a property @property (nonatomic ,readonly ) BOOL isDone. Under the hood, there are setters and getters created as shown below." }, { "code": null, "e": 8541, "s": 8496, "text": "-(void)setIsDone(BOOL)isDone;\n-(BOOL)isDone;" }, { "code": null, "e": 8574, "s": 8541, "text": "\n 18 Lectures \n 1 hours \n" }, { "code": null, "e": 8591, "s": 8574, "text": " PARTHA MAJUMDAR" }, { "code": null, "e": 8622, "s": 8591, "text": "\n 6 Lectures \n 25 mins\n" }, { "code": null, "e": 8633, "s": 8622, "text": " Ken Burke" }, { "code": null, "e": 8640, "s": 8633, "text": " Print" }, { "code": null, "e": 8651, "s": 8640, "text": " Add Notes" } ]
Sentiment Classification with Logistic Regression — Analyzing Yelp Reviews | by Dehao Zhang | Towards Data Science
You just opened up your own business a couple of weeks ago. You want to gauge how customers feel about your products and services, so you go on social media platforms (Twitter, Facebook, etc.) to see what people have said. Whoa! It seems that there are already thousands of posts about your business. You are excited but you soon realize that it is not feasible to read all of them by yourself. You start to wonder: Is there a way to extract customers' sentiment from textual information? Why is sentiment analysis important? What is logistic regression? Which metric(s) should we use to evaluate models? How can a model understand text input? How do we know which text features are important? Can we further improve our model? What can we do next? Sentiment analysis is a highly effective tool for a business to not only take a look at the overall brand perception, but also evaluate customer attitudes and emotions towards a specific product line or service [1]. This data-driven approach can help the business better understand the customers and detect subtle shifts in their opinions in order to meet changing demand. This post serves as the second part of my exploration on the Yelp dataset. For more information about the dataset as well as some exploratory data analysis on its business data and tips data, please see my post below: towardsdatascience.com This post focuses on the review.json file within the Yelp dataset, which contains reviews written by Yelp users. In addition, each review includes a corresponding "star", or rating that the user gives to the business, which can be used as a proxy for sentiment. The goal is to build a model that can classify the sentiment of the review (positive or negative) given the text data. Furthermore, we are interested in which text features are the most helpful predictors for this classification task. Logistic Regression In general, there are two different types of classification models: generative models (Naive Bayes, Hidden Markov Models, etc.) and discriminative models (Logistic Regression, SVM, etc.). Ultimately, both models try to compute p(class|features), or p(y|x). The key difference is that a generative model tries to model the joint probability distribution p(x,y) first and then compute the conditional probability p(y|x) using Bayes' Theorem, whereas a discriminative one directly models p(y|x). See Andrew Ng's paper here for detailed discussion on the comparison between the two types of models. In this post, I will examine a popular discriminative model — logistic regression. See this post for more details on its mathematical foundations (sigmoid function, cost function, decision boundary, etc.). To see my full Python code, check out my Kaggle kernel or my Github page. Now let's get started! Peek at the Reviews Let's take out 1 million records from the review dataset for our analysis. The 'text' column would be our model inputs. Let's check out a random review with a positive sentiment (rating of 5.0): "I love Deagan's. I do. I really do. The atmosphere is cozy and festive. The shrimp tacos and house fries are my standbys. The fries are sometimes good and sometimes great, and the spicy dipping sauce they come with is to die for. The beer list is amazing and the cocktails are great. 
The prices are mid-level, so it's not a cheap dive you can go to every week, but rather a treat when you do. Try it out. You won't be disappointed!" As we are thinking about feature extraction, a couple of clues here can help us infer that this is positive sentiment, such as 'love', 'cozy', 'die for', 'amazing', and 'won't be disappointed'. Let's see one negative review (rating of 1.0): "If I could give less than one star, that would have been my choice. I rent a home and Per my lease agreement it is MY responsibility to pay their Pool Service company. Within the last year they changed to PoolServ. I have had major issues with new techs every week, never checking PH balances, cleaning the filter, and not showing up at all 2 weeks in the past 2 months. I have had 4 different techs in the past 4 weeks. I have emailed and called them and they never respond back nor even acknowledged my concerns or requests. I cannot change companies but I'm required to still pay for lousy or no service. Attached are a couple pictures of my pool recently due to one tech just didn't put any chlorine in it at all according to the tech who came the following week to attempt to clean it up. Please think twice before working with these people. No one wants to work with a business that doesn't return phone calls or emails." Despite its length, we can still see that clues such as 'less than one star', 'lousy', 'never respond back', and 'no service' are useful predictors. Let's quickly confirm that there is no missing value in the data:
text     0.0
stars    0.0
dtype: float64
Convert Ratings into Positive and Negative Sentiments Let's plot the ratings distribution: We can see that out of the 1 million reviews, almost half of them contain a rating of 5.0. In this task, we want to classify all review texts into one of the two categories: positive sentiment and negative sentiment. Therefore, we will first need to transform the 'stars' value into the two categories. Here, we can treat '4.0' and '5.0' as positive sentiment and '1.0' and '2.0' as negative sentiment. We can treat '3.0' as neutral or even treat each star as its own sentiment category, which would make it a multi-class classification problem. However, for the sake of a simple binary classification, we can take the ones with '3.0' out. We then encode positive sentiment as Class 0 and negative sentiment as Class 1. Since we know that we have more samples in Class 0 than Class 1, our baseline model can be one that simply labels every review as Class 0. Let's check out the baseline accuracy: 0.74 Before we proceed, let's spend some time on the evaluation metric. Evaluation Metric Quoted from Jason Brownlee in one of his posts, "A classifier is only as good as the metric used to evaluate it"[2]. Accuracy might not be the appropriate evaluation metric in every classification problem, especially when the class distribution is imbalanced, and when the business impacts of false positive and false negative are unequal. For example, in a credit card fraud detection problem, a baseline model which predicts every transaction to be non-fraud would have an accuracy of over 99.99%, but it does not mean that it is a proper model since the percentage of false negative would be 100%, and each false negative (fail to detect a fraud transaction) can have a much higher cost to the business and customers compared with a false positive. 
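To make this concrete, here is a minimal, self-contained sketch (it uses synthetic labels that mirror our rough 74/26 class split, not the actual Yelp data):

from sklearn.metrics import accuracy_score, f1_score
import numpy as np

# Synthetic labels mirroring the ~74/26 split (illustration only)
y_true = np.array([0] * 74 + [1] * 26)

# Baseline "model": predict the majority class (Class 0) everywhere
y_baseline = np.zeros_like(y_true)

# 0.74 -- looks respectable
print(accuracy_score(y_true, y_baseline))

# 0.0 -- sklearn warns that precision is ill-defined and returns 0
print(f1_score(y_true, y_baseline))

The majority-class baseline reaches 0.74 accuracy yet scores an F1 of 0 on the minority class, which is exactly why we should not rely on accuracy alone here.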
In this task, here are a few appropriate evaluation metrics: Precision — TP/(TP+FP), meaning the proportion of points that the model classifies as positives that are actually positives. Recall — TP/(TP+FN), meaning the proportion of actual positives that are correctly classified by the model. F1 score — the harmonic mean of precision and recall. Check out this post for more detailed discussion on these metrics. In this task, we would use the F1 score on the test set as the key evaluation metric. Train-Test Split Let's set aside 30% of the data to be the test set, stratified by the class label. train, test = train_test_split(df_reviews, test_size = 0.3, stratify = df_reviews['labels'], random_state = 42) Text Preprocessing In most text mining or NLP related tasks, cleaning text is a crucial step. Let's first remove all non-letter characters and punctuation, and make sure all letters are in lower-case. Later we will also evaluate the effect of removing stopwords and stemming/lemmatization. Vectorization For a model to be able to process text input, we would need to convert the texts into vectors. There are a few different ways to represent these text features and here are the most common ones: 1. Binary, e.g. whether the word "good" is present. 2. Count, e.g. how many times the word "good" appears in this review, similar to the Bag of Words model in Naive Bayes. 3. TF-IDF, which is a weighted importance of each text feature relevant to the document (read more here). Let's try using a binary representation of all unigrams (single words) first.
cv = CountVectorizer(binary=True, analyzer = text_prep, min_df = 10, max_df = 0.95)
cv.fit_transform(train['text'].values)
train_feature_set = cv.transform(train['text'].values)
test_feature_set = cv.transform(test['text'].values)
I used sklearn's CountVectorizer object to extract all the word features, and a word would be excluded if it appears in fewer than 10 reviews or in more than 95% of the reviews. Let's check how many unique words are in our dictionary:
train_feature_set.shape[1]
--------------------------------------------------------------------
40245
There are about 40K unique words. As an example, let's check the index of the word 'tasty':
cv.vocabulary_['tasty']
--------------------------------------------------------------------
35283
Fit an LR Model Now we are ready to fit our first logistic regression model using sklearn:
lr = LogisticRegression(solver = 'liblinear', random_state = 42, max_iter=1000)
lr.fit(train_feature_set, y_train)
y_pred = lr.predict(test_feature_set)
print("Accuracy: ", round(metrics.accuracy_score(y_test, y_pred), 3))
print("F1: ", round(metrics.f1_score(y_test, y_pred), 3))
--------------------------------------------------------------------
Accuracy: 0.955
F1: 0.914
Let's plot a confusion matrix to visualize the prediction results: Visualize Feature Importance One nice thing about logistic regression is that we can find the importance of each feature easily. Let's visualize the top 10 words that are most correlated with a negative sentiment: These words are within our expectation. 
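For reference, here is a minimal sketch of how these importances can be pulled out of the fitted model (it assumes the lr and cv objects from above; on older versions of scikit-learn, cv.get_feature_names() would replace cv.get_feature_names_out()):

import numpy as np

feature_names = np.array(cv.get_feature_names_out())
coefs = lr.coef_[0]

# The largest coefficients push a prediction toward Class 1 (negative sentiment);
# the most negative coefficients push it toward Class 0 (positive sentiment)
top_negative_sentiment_words = feature_names[np.argsort(coefs)[-10:]]
top_positive_sentiment_words = feature_names[np.argsort(coefs)[:10]]

print(top_negative_sentiment_words)
print(top_positive_sentiment_words)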
Let's check all the reviews that contain the word 'poisoning' and see how many of them are in the negative sentiment class: 0.904 Similarly, let's check out the top 10 correlated words for positive sentiment: Note that a large positive coefficient (feature importance) correlates with a high probability of Class 1, while a large negative coefficient (high absolute value) correlates with a high probability of Class 0. After examining the positive reviews, I realized that the word 'docking' was at the top because most reviews that give a star of '4.0' mention that "I am docking a star because...". Improvement Strategies After we establish the first model, let's examine a couple of ideas to see if our model can be further improved. Idea 1: Decrease the probability cutoff threshold To reduce False Negatives, one intuition is to lower the cutoff threshold (default at 0.5). This would increase the recall but also decrease the precision. Therefore, we need to check if this would improve the overall F1 score:
******** For i = 0.3 ******
F1: 0.91
******** For i = 0.4 ******
F1: 0.915
******** For i = 0.45 ******
F1: 0.915
******** For i = 0.5 ******
F1: 0.914
We can see that the F1 score is relatively robust to changes in this threshold. Idea 2: Oversample Class 1 or undersample Class 0 Oversampling the minority class and undersampling the majority class are common ways to deal with imbalanced classification (read more here). However, the F1 score does not improve in this case. Performance for oversampling Class 1:
Accuracy: 0.95
F1: 0.908
Performance for undersampling Class 0:
Accuracy: 0.947
F1: 0.904
Idea 3: Remove stopwords and stemming Removing stopwords and stemming can take out noise and thus reduce the vocabulary size. However, both the accuracy and the F1 score decrease slightly.
Accuracy: 0.949
F1: 0.902
Idea 4: Use TF-IDF instead of binary representation This time, the F1 score slightly increases but not by much.
Accuracy: 0.958
F1: 0.919
Idea 5: Include both unigrams and bigrams as features The motivation can be shown with this example: Take the LR model we first developed and then predict on this review — "I did not enjoy the food or the service":
test_review = cv.transform(["I did not enjoy the food or the service"])
lr.predict_proba(test_review)
--------------------------------------------------------------------
array([[0.50069323, 0.49930677]])
The model considers this review as positive, because the model only takes in unigrams such as 'enjoy', which is correlated with a positive sentiment, without considering 'did not', which negates the meaning of 'enjoy'. After we take both unigrams and bigrams (sequences of two words) into consideration, we first see an increase in vocabulary size: 488683 After fitting this model, we see improvements in both metrics, especially in the F1 score.
Accuracy: 0.969
F1: 0.942
Let's check its prediction on the same sentence again:
test_review = cv.transform(["I did not enjoy the food or the service"])
lr.predict_proba(test_review)
--------------------------------------------------------------------
array([[0.2678198, 0.7321802]])
Now it makes the right prediction with relatively high confidence. We can see the new top 10 features for both positive and negative sentiment again: We start to see bigrams such as 'two stars', 'not worth', 'no thanks', 'not recommended' showing up in the top features. Note that "be disappointed" might seem like a negative sentiment but that is probably because it is part of the "not be disappointed" phrase. 
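For reference, the unigram-plus-bigram features behind Idea 5 can be produced by passing ngram_range=(1, 2) to the vectorizer. A sketch is below; note that it relies on the default tokenizer, because scikit-learn ignores ngram_range when a custom callable is passed as analyzer, so the earlier text_prep cleaning would instead be hooked in through the preprocessor argument:

from sklearn.feature_extraction.text import CountVectorizer

# Binary unigram + bigram features, with the same min_df/max_df cutoffs as before
cv_bigram = CountVectorizer(binary=True, ngram_range=(1, 2), min_df=10, max_df=0.95)
train_feature_set = cv_bigram.fit_transform(train['text'].values)
test_feature_set = cv_bigram.transform(test['text'].values)

# The vocabulary grows to roughly 490K terms
print(len(cv_bigram.vocabulary_))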
Should we include trigrams (sequences of three words) or even higher orders of N-grams? Note that as we include higher and higher orders of N-grams, our feature size grows much larger, which consumes more memory space, and there is also a problem of extreme sparsity (read more here). Here are a few ideas for the next steps: There are a few hyperparameters within the LR model that we can tune, and tuning them can lead to a more optimized model. Also, to avoid over-fitting, try cross-validation to get more accurate metrics for each model (check out GridSearch here). Treat '3.0' as neutral or treat each category as its own class and re-formulate these models. Build an interactive sentiment analyzer which allows user-inputted reviews and gives predictions on their sentiment. Have a built-in functionality of incremental learning where the users can help the model learn when it makes a wrong prediction. Let's recap. We built a sentiment classification model using logistic regression and tried out different strategies to improve upon the simple model. Among those ideas, including bigrams as features showed the most improvement in F1 score. For both the simple model and the improved model, we also analyzed their most important textual features. I hope that you enjoy this post and please share any thoughts that you may have :) DS/ML beginner? Check out my other post on how you can build your first Python classifiers with the classic Iris dataset:
[ { "code": null, "e": 232, "s": 172, "text": "You just opened up your own business a couple of weeks ago." }, { "code": null, "e": 395, "s": 232, "text": "You want to gauge how customers feel about your products and services, so you go on social media platforms (Twitter, Facebook, etc.) to see what people have said." }, { "code": null, "e": 567, "s": 395, "text": "Whoa! It seems that there are already thousands of posts about your business. You are excited but you soon realize that it is not feasible to read all of them by yourself." }, { "code": null, "e": 661, "s": 567, "text": "You start to wonder: Is there a way to extract customers’ sentiment from textual information?" }, { "code": null, "e": 915, "s": 661, "text": "Why is sentiment analysis important?What is logistic regression?Which metric(s) should we use to evaluate models?How can a model understand text input?How do we know which text features are important?Can we further improve our model?What can we do next?" }, { "code": null, "e": 952, "s": 915, "text": "Why is sentiment analysis important?" }, { "code": null, "e": 981, "s": 952, "text": "What is logistic regression?" }, { "code": null, "e": 1031, "s": 981, "text": "Which metric(s) should we use to evaluate models?" }, { "code": null, "e": 1070, "s": 1031, "text": "How can a model understand text input?" }, { "code": null, "e": 1120, "s": 1070, "text": "How do we know which text features are important?" }, { "code": null, "e": 1154, "s": 1120, "text": "Can we further improve our model?" }, { "code": null, "e": 1175, "s": 1154, "text": "What can we do next?" }, { "code": null, "e": 1391, "s": 1175, "text": "Sentiment analysis is a highly effective tool for a business to not only take a look at the overall brand perception, but also evaluate customer attitudes and emotions towards a specific product line or service [1]." }, { "code": null, "e": 1548, "s": 1391, "text": "This data-driven approach can help the business better understand the customers and detect subtle shifts in their opinions in order to meet changing demand." }, { "code": null, "e": 1766, "s": 1548, "text": "This post serves as the second part of my exploration on the Yelp dataset. For more information about the dataset as well as some exploratory data analysis on its business data and tips data, please see my post below:" }, { "code": null, "e": 1789, "s": 1766, "text": "towardsdatascience.com" }, { "code": null, "e": 2286, "s": 1789, "text": "This post focuses on the review.json file within the Yelp dataset, which contains reviews written by Yelp users. In addition, each review includes a corresponding “star”, or rating that the user gives to the business, which can be used as a proxy for sentiment. The goal is to build a model that can classify the sentiment of the review (positive or negative) given the text data. Furthermore, we are interested in which text features are the most helpful predictors for this classification task." }, { "code": null, "e": 2306, "s": 2286, "text": "Logistic Regression" }, { "code": null, "e": 2900, "s": 2306, "text": "In general, there are two different types of classification models: generative models (Naive Bayes, Hidden Markov Models, etc.) and discriminative models (Logistic Regression, SVM, etc.). Ultimately, both models try to compute p(class|features), or p(y|x). 
Let's recap. We built a sentiment classification model using logistic regression and tried out different strategies to improve upon the simple model. Among those ideas, including bigrams as features brought the most improvement in F1 score. For both the simple model and the improved model, we also analyzed their most important textual features.

I hope that you enjoy this post, and please share any thoughts that you may have :)

DS/ML beginner? Check out my other post on how you can build your first Python classifiers with the classic Iris dataset:
How to create a new data frame for the mean of rows of some columns from an R data frame?
Finding row means helps us to identify the average performance of a case when all the variables are of the same nature, and it is also an easy job. But if some of the columns hold a different type of data, then we have to extract the columns for which we want to find the row means. Therefore, we can create a new data frame with the row means of the required columns using the rowMeans function.

Consider the below data frame −

set.seed(88)
Group<-LETTERS[1:10]
x1<-rpois(20,2)
x2<-rpois(20,5)
x3<-rpois(20,10)
df<-data.frame(Group,x1,x2,x3)
df

   Group x1 x2 x3
1      A  2  3 10
2      B  0  6  7
3      C  3  7  9
4      D  2  8  9
5      E  6  8  9
6      F  8  6  4
7      G  0  4  5
8      H  3  7 10
9      I  3  5 11
10     J  5  4 10
11     A  2  3  9
12     B  3  7  8
13     C  2  6  6
14     D  1  4  7
15     E  0  7 12
16     F  1  8  9
17     G  0  5 11
18     H  2  6  9
19     I  3  7  5
20     J  3  9  6

Creating a new data frame with column Group as in the original df and RowMeans for the mean of columns x1, x2, and x3 −

row_means_df<-data.frame(Group=df[,1],RowMeans=rowMeans(df[,-1]))
row_means_df

   Group  RowMeans
1      A  5.000000
2      B  4.333333
3      C  6.333333
4      D  6.333333
5      E  7.666667
6      F  6.000000
7      G  3.000000
8      H  6.666667
9      I  6.333333
10     J  6.333333
11     A  4.666667
12     B  6.000000
13     C  4.666667
14     D  4.000000
15     E  6.333333
16     F  6.000000
17     G  5.333333
18     H  5.666667
19     I  5.000000
20     J  6.000000

Creating a new data frame with column Group as in the original df and RowMeans for the mean of columns x2 and x3, that is, columns 3 and 4 −

row_means_3.4_cols_df<-data.frame(Group=df[,1],RowMeans=rowMeans(df[,-c(1,2)]))
row_means_3.4_cols_df

   Group RowMeans
1      A      6.5
2      B      6.5
3      C      8.0
4      D      8.5
5      E      8.5
6      F      5.0
7      G      4.5
8      H      8.5
9      I      8.0
10     J      7.0
11     A      6.0
12     B      7.5
13     C      6.0
14     D      5.5
15     E      9.5
16     F      8.5
17     G      8.0
18     H      7.5
19     I      6.0
20     J      7.5
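For readers moving between languages, here is a hedged pandas (Python) equivalent of the same operation. It is not part of the original R tutorial, and the sampled values will differ from the R output because the random generators are not the same.

import numpy as np
import pandas as pd

rng = np.random.default_rng(88)  # seed echoes set.seed(88); draws still differ from R's
df = pd.DataFrame({
    'Group': list('ABCDEFGHIJ') * 2,  # like LETTERS[1:10] recycled to 20 rows
    'x1': rng.poisson(2, 20),
    'x2': rng.poisson(5, 20),
    'x3': rng.poisson(10, 20),
})

# Mean over all numeric columns, like rowMeans(df[,-1])
row_means_df = pd.DataFrame({'Group': df['Group'],
                             'RowMeans': df[['x1', 'x2', 'x3']].mean(axis=1)})

# Mean over columns x2 and x3 only, like rowMeans(df[,-c(1,2)])
row_means_cols_3_4 = pd.DataFrame({'Group': df['Group'],
                                   'RowMeans': df[['x2', 'x3']].mean(axis=1)})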
[ { "code": null, "e": 1436, "s": 1062, "text": "Finding row means help us to identity the average performance of a case if all the variables are of same nature and it is also an easy job. But if some of the columns have different type of data then we have to extract columns for which we want to find the row means. Therefore, we can create a new data frame with row means of the required columns using rowMeans function." }, { "code": null, "e": 1447, "s": 1436, "text": " Live Demo" }, { "code": null, "e": 1479, "s": 1447, "text": "Consider the below data frame −" }, { "code": null, "e": 1596, "s": 1479, "text": "set.seed(88)\nGroup<-LETTERS[1:10]\nx1<-rpois(20,2)\nx2<-rpois(20,5)\nx3<-rpois(20,10)\ndf<-data.frame(Group,x1,x2,x3)\ndf" }, { "code": null, "e": 1829, "s": 1596, "text": " Group x1 x2 x3\n1 A 2 3 10\n2 B 0 6 7\n3 C 3 7 9\n4 D 2 8 9\n5 E 6 8 9\n6 F 8 6 4\n7 G 0 4 5\n8 H 3 7 10\n9 I 3 5 11\n10 J 5 4 10\n11 A 2 3 9\n12 B 3 7 8\n13 C 2 6 6\n14 D 1 4 7\n15 E 0 7 12\n16 F 1 8 9\n17 G 0 5 11\n18 H 2 6 9\n19 I 3 7 5\n20 J 3 9 6" }, { "code": null, "e": 1945, "s": 1829, "text": "Creating a new data frame with column Group as in original df and RowMeans for the mean of columns x1, x2, and x3 −" }, { "code": null, "e": 2310, "s": 1945, "text": "row_means_df<-data.frame(Group=df[,1],RowMeans=rowMeans(df[,-1]))\nrow_means_df\nGroup RowMeans\n1 A 5.000000\n2 B 4.333333\n3 C 6.333333\n4 D 6.333333\n5 E 7.666667\n6 F 6.000000\n7 G 3.000000\n8 H 6.666667\n9 I 6.333333\n10 J 6.333333\n11 A 4.666667\n12 B 6.000000\n13 C 4.666667\n14 D 4.000000\n15 E 6.333333\n16 F 6.000000\n17 G 5.333333\n18 H 5.666667\n19 I 5.000000\n20 J 6.000000" }, { "code": null, "e": 2437, "s": 2310, "text": "Creating a new data frame with column Group as in original df and RowMeans for the mean of columns x2 and x3 that is 3 and 4 −" }, { "code": null, "e": 2725, "s": 2437, "text": "row_means_3.4_cols_df<-data.frame(Group=df[,1],RowMeans=rowMeans(df[,-c(1,2)]))\nrow_means_3.4_cols_df\nGroup RowMeans\n1 A 6.5\n2 B 6.5\n3 C 8.0\n4 D 8.5\n5 E 8.5\n6 F 5.0\n7 G 4.5\n8 H 8.5\n9 I 8.0\n10 J 7.0\n11 A 6.0\n12 B 7.5\n13 C 6.0\n14 D 5.5\n15 E 9.5\n16 F 8.5\n17 G 8.0\n18 H 7.5\n19 I 6.0\n20 J 7.5" } ]
Python Template String Formatting Method | by Vinicius Monteiro | Towards Data Science
Template string is another method used to format strings in Python. In comparison with the % operator, .format(), and f-strings, it has an (arguably) simpler syntax and functionality. It's ideal for internationalization (i18n), and there are some nuances that you may find advantageous, especially when working with regular expressions (regex).

Let's first see the syntax of the other three, so we can compare.

>>> name = "Alfredo"
>>> age = 40
>>> "Hello, %s. You are %s." % (name, age)
'Hello, Alfredo. You are 40.'

>>> print('The {} {} {}'.format('car','yellow','fast')) #empty braces
The car yellow fast
>>> print('The {2} {1} {0}'.format('car','yellow','fast')) #index
The fast yellow car
>>> print('The {f} {y} {c}'.format(c='car',y='yellow',f='fast')) #keywords
The fast yellow car

>>> name = "Peter"
>>> print(f'Nice to meet you, {name}')

It uses the Template class from the string module. It has a syntax somewhat similar to .format() when done with keywords, but instead of curly braces to define the placeholder, it utilises a dollar sign ($). ${} is also valid and should be in place when a valid string comes after the placeholder.

See the syntax for various situations below. I'll begin explaining safe_substitute and the use with regex.

Imagine you want to replace something in a string that contains {}, % and $. It can happen when working with regular expressions.

Consider an input field that accepts a company stock symbol plus positive (+) or negative (-) values in percentages and nothing afterwards, where the symbol plus +/- is dynamically replaced. Such as: AAPL: +20% or TSLA: -5% (Perhaps it's a silly use case, I know, but please ignore that; it's just an example.)

Regex: (([symbol: + or -][0-9]{1,4}[%])$)

For the % operator, .format(), and f-strings, it doesn't work: the % operator and .format() raise an error, and f-strings remove {m,n}. See below:

#% operator
>>> print('(([%s][0-9]{1,4}[%])$)' % "AAPL: -")
TypeError: not enough arguments for format string

#.format()
>>> print('(([{symbol_pos_neg}][0-9]{1,4}[%])$)'.format(symbol_pos_neg = 'AAPL: +'))
KeyError: '1,4'

#f-strings
>>> symbol_pos_neg = "AAPL: +"
>>> print(f'(([{symbol_pos_neg}][0-9]{1,4}[%])$)')
(([AAPL: +][0-9](1, 4)[%])$)

Although it can be easily solved by escaping the characters (doubling the curly braces or percentage sign), I find it inconvenient. With Template string, you don't need to do that. {} and % are not used to define placeholders, and by using safe_substitute, you're not forced to replace all $.

See below, the regex is unchanged and $symbol_pos_neg is replaced correctly.

>>> print(Template('(([$symbol_pos_neg][0-9]{1,4}[%])$)').safe_substitute(symbol_pos_neg='AAPL: -'))
(([AAPL: -][0-9]{1,4}[%])$)

>>> from string import Template
>>> Template('$obj is $colour').substitute(obj='Car',colour='red')
'Car is red'

>>> d = dict(obj='Car')
>>> Template('$obj is red').substitute(d)
'Car is red'

If there is an invalid string after the placeholder, only the placeholder is considered. For example, see the '.' (dot) after $obj. It's printed as normal.

>>> from string import Template
>>> Template('$obj. is $colour').substitute(obj='Car',colour='red')
'Car. is red'

If valid characters follow the placeholder, it must be enclosed in curly braces.

>>> from string import Template
>>> Template('${noun}ification').substitute(noun='Ident')
'Identification'

To note, characters such as the underscore (_) are also considered valid, so the same rule to use ${} applies.

To substitute a literal $, double it ($$). The substitution is done as normal, and one extra $ is printed.

>>> Template('$obj. is $$$colour').substitute(obj='Car',colour='red')
'Car. is $red'
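One more nuance worth a quick illustration (this hedged example is mine, not from the original post): substitute raises a KeyError when a placeholder is left without a value, while safe_substitute simply leaves the placeholder untouched.

>>> from string import Template
>>> t = Template('$obj is $colour')
>>> t.safe_substitute(obj='Car')   # missing $colour is left as-is
'Car is $colour'
>>> t.substitute(obj='Car')        # the strict version raises instead
Traceback (most recent call last):
  ...
KeyError: 'colour'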
I'm still a beginner in Python, and comparing string formatting methods taught me a lot. I hope I was clear in providing the details to you. Template string is slower, as this quick benchmark shows:

viniciusmonteiro$ python3 -m timeit -s "x = 'f'; y = 'z'" "f'{x} {y}'"
5000000 loops, best of 5: 82.5 nsec per loop

viniciusmonteiro$ python3 -m timeit -s "from string import Template; x = 'f'; y = 'z'" "Template('$x $y').substitute(x=x, y=y)" # template string
500000 loops, best of 5: 752 nsec per loop

Still, I think it brings some benefit in terms of simplicity and convenience in some cases, although I understand this can be subjective.

[1] string — Common string operations https://docs.python.org/3/library/string.html#template-strings

[2] Python 3's f-Strings: An Improved String Formatting Syntax (Guide) https://realpython.com/python-f-strings/

[3] Performance of different string concatenation methods in Python — why f-strings are awesome https://grski.pl/fstrings-performance.html
[ { "code": null, "e": 509, "s": 172, "text": "Template string is another method used to format strings in Python. In comparison with %operator, .format() and f-strings, it has a (arguably) simpler syntax and functionality. It’s ideal for internationalization (i18n), and there are some nuances that you may find advantageous—especially when working with regular expressions (regex)." }, { "code": null, "e": 575, "s": 509, "text": "Let’s first see the syntax of the other three, so we can compare." }, { "code": null, "e": 679, "s": 575, "text": ">>> name = \"Alfredo\">>> age = 40>>> \"Hello, %s. You are %s.\" % (name, age)'Hello, Alfredo. You are 40.'" }, { "code": null, "e": 945, "s": 679, "text": ">>> print('The {} {} {}'.format('car','yellow','fast')) #empty bracesThe car yellow fast>>> print('The {2} {1} {0}'.format('car','yellow','fast')) #indexThe fast yellow car>>> print('The {f} {y} {c}'.format(c='car',y='yellow',f='fast')) #keywordsThe fast yellow car" }, { "code": null, "e": 1002, "s": 945, "text": ">>> name = \"Peter\">>> print(f'Nice to meet you, {name}')" }, { "code": null, "e": 1292, "s": 1002, "text": "It uses Template class from string module. It has a syntax somewhat similar to .format() when done with keywords, but instead of curly braces to define the placeholder, it utilises a dollar sign ($). ${} is also valid and should be in place when a valid string comes after the placeholder." }, { "code": null, "e": 1393, "s": 1292, "text": "See the syntax for various situations. I’ll begin explaining safe_substitute and the use with regex." }, { "code": null, "e": 1522, "s": 1393, "text": "Imagine you want to replace something in a string that contains {}, % and $. It can happen when working with regular expression." }, { "code": null, "e": 1819, "s": 1522, "text": "Consider an input field that accepts company stock symbol plus positive (+) or negative (-) values in percentages and nothing afterwards. And symbol plus +- is dynamically replaced. Such as: AAPL: +20% or TSLA: -5% (Perhaps it’s a silly use case, I know! but please ignore. It’s just an example)." }, { "code": null, "e": 1861, "s": 1819, "text": "Regex: (([symbol: + or -][0–9]{1,4}[%])$)" }, { "code": null, "e": 1998, "s": 1861, "text": "For % operator, .format() and f-string — it doesn’twork. % operator and .format() raises an error and f-strings removes {m,n}. See below" }, { "code": null, "e": 2333, "s": 1998, "text": "#% operator>>> print('(([%s][0-9]{1,4}[%])$)' % \"AAPL: -\")TypeError: not enough arguments for format string#.format()>>> print('(([{symbol_pos_neg}][0-9]{1,4}[%])$)'.format(symbol_pos_neg = 'AAPL: +'))KeyError: '1,4'#f-strings>>> symbol_pos_neg = \"AAPL: +\">>> print(f'(([{symbol_pos_neg}][0-9]{1,4}[%])$)')(([AAPL: +][0-9](1, 4)[%])$)" }, { "code": null, "e": 2628, "s": 2333, "text": "Although it can be easily solved by escaping the characters (doubling the curly braces or percentage sign), I find it inconvenient. With Template string, you don’t need to do that. {} and % are not used to define placeholders, and by using safe_substitute, you’re not enforced to replace all $." }, { "code": null, "e": 2705, "s": 2628, "text": "See below, the regex is unchanged and $symbol_pos_neg is replaced correctly." 
}, { "code": null, "e": 2833, "s": 2705, "text": ">>> print(Template('(([$symbol_pos_neg][0-9]{1,4}[%])$)').safe_substitute(symbol_pos_neg='AAPL: -'))(([AAPL: -][0-9]{1,4}[%])$)" }, { "code": null, "e": 2943, "s": 2833, "text": ">>> from string import Template>>> Template('$obj is $colour').substitute(obj='Car',colour='red')'Car is red'" }, { "code": null, "e": 3020, "s": 2943, "text": ">>> d = dict(obj='Car')>>> Template('$obj is red').substitute(d)'Car is red'" }, { "code": null, "e": 3172, "s": 3020, "text": "If there is an invalid string after the placeholder, only the placeholder is considered. Example, see the ‘.’ (dot) after $who. It’s printed as normal." }, { "code": null, "e": 3284, "s": 3172, "text": ">>> from string import Template>>> Template('$obj. is $colour').substitute(obj='Car',colour='red')'Car. is red'" }, { "code": null, "e": 3365, "s": 3284, "text": "If valid characters follow the placeholder, it must be enclosed in curly braces." }, { "code": null, "e": 3470, "s": 3365, "text": ">>> from string import Template>>> Template('${noun}ification').substitute(noun='Ident')'Identification'" }, { "code": null, "e": 3576, "s": 3470, "text": "To note, characters such as underscore (_) is also considered valid. So the same rule to use ${} applies." }, { "code": null, "e": 3640, "s": 3576, "text": "The substitution is done as normal, and one extra $ is printed." }, { "code": null, "e": 3724, "s": 3640, "text": ">>> Template(’$obj. is $$$colour’).substitute(obj=’Car’,colour=’red’)'Car. is $red'" }, { "code": null, "e": 3909, "s": 3724, "text": "I’m still a beginner in Python and comparing String formatting methods taught me a lot. I hope I was clear in providing the details to you. Despite Template string being less powerful:" }, { "code": null, "e": 4212, "s": 3909, "text": "viniciusmonteiro$ python3 -m timeit -s \"x = 'f'; y = 'z'\" \"f'{x} {y}'\"5000000 loops, best of 5: 82.5 nsec per loopviniciusmonteiro$ python3 -m timeit -s \"from string import Template; x = 'f'; y = 'z'\" \"Template('$x $y').substitute(x=x, y=y)\" # template string500000 loops, best of 5: 752 nsec per loop" }, { "code": null, "e": 4344, "s": 4212, "text": "I think it brings some benefit in terms of simplicity and convenience in some cases. Although I understand these can be subjective." }, { "code": null, "e": 4445, "s": 4344, "text": "[1] string — Common string operations https://docs.python.org/3/library/string.html#template-strings" }, { "code": null, "e": 4557, "s": 4445, "text": "[2] Python 3’s f-Strings: An Improved String Formatting Syntax (Guide) https://realpython.com/python-f-strings/" } ]
Coding Neural Network — Parameters’ Initialization | by Imad Dabbura | Towards Data Science
Optimization, in Machine Learning/Deep Learning contexts, is the process of changing the model's parameters to improve its performance. In other words, it's the process of finding the best parameters in the predefined hypothesis space to get the best possible performance. There are three kinds of optimization algorithms:

Optimization algorithms that are not iterative and simply solve for one point.

Optimization algorithms that are iterative in nature and converge to an acceptable solution regardless of the parameters' initialization, such as gradient descent applied to logistic regression.

Optimization algorithms that are iterative in nature and applied to a set of problems that have non-convex loss functions, such as neural networks. Therefore, parameters' initialization plays a critical role in speeding up convergence and achieving lower error rates.

In this post, we'll look at three different cases of parameters' initialization and see how this affects the error rate:

Initialize all parameters to zero.

Initialize parameters to random values from a standard normal distribution or uniform distribution and multiply them by a scalar such as 10.

Initialize parameters based on:

Xavier recommendation.

Kaiming He recommendation.

We'll be using functions we wrote in the "Coding Neural Network — Forward Propagation and Backpropagation" post to initialize parameters, compute forward propagation and back-propagation, as well as the cross-entropy cost.

To illustrate the above cases, we'll use the cats vs dogs dataset, which consists of 50 images for cats and 50 images for dogs. Each image is 150 pixels x 150 pixels on the RGB color scale. Therefore, we would have 67,500 features, where each column in the input matrix would be one image, which means our input data would have a 67,500 x 100 dimension.

Let's first load the data and show a sample of two images before we start the helper functions.

We'll now write all the helper functions that will help us initialize parameters based on different methods, as well as the L-layer model that we'll be using to train our neural network.
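The original helper-function code is not reproduced in this excerpt, so here is only a minimal, hedged sketch of what such an initializer could look like for the four cases examined below (zeros, scaled random, He, and Xavier); the function name and details are illustrative assumptions, not the post's exact implementation.

import numpy as np

def initialize_parameters(layers_dims, method="he", seed=1):
    # Returns weight matrices W1..WL and bias vectors b1..bL for the given layer sizes.
    np.random.seed(seed)
    parameters = {}
    L = len(layers_dims) - 1
    for l in range(1, L + 1):
        n_prev, n_curr = layers_dims[l - 1], layers_dims[l]
        if method == "zeros":
            W = np.zeros((n_curr, n_prev))
        elif method == "random":
            W = np.random.randn(n_curr, n_prev) * 10  # deliberately large values
        elif method == "he":
            W = np.random.randn(n_curr, n_prev) * np.sqrt(2 / n_prev)  # var(W) = 2/n^(l-1)
        else:  # "xavier"
            W = np.random.randn(n_curr, n_prev) * np.sqrt(1 / n_prev)  # var(W) = 1/n^(l-1)
        parameters["W" + str(l)] = W
        parameters["b" + str(l)] = np.zeros((n_curr, 1))
    return parameters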
Here, we'll initialize all weight matrices and biases to zeros and see how this would affect the error rate as well as the learning parameters.

# train NN with zeros initialization parameters
layers_dims = [X.shape[0], 5, 5, 1]
parameters = model(X, Y, layers_dims, hidden_layers_activation_fn="tanh", initialization_method="zeros")

accuracy(X, parameters, Y, "tanh")

The cost after 100 iterations is: 0.6931471805599453
The cost after 200 iterations is: 0.6931471805599453
The cost after 300 iterations is: 0.6931471805599453
The cost after 400 iterations is: 0.6931471805599453
The cost after 500 iterations is: 0.6931471805599453
The cost after 600 iterations is: 0.6931471805599453
The cost after 700 iterations is: 0.6931471805599453
The cost after 800 iterations is: 0.6931471805599453
The cost after 900 iterations is: 0.6931471805599453
The cost after 1000 iterations is: 0.6931471805599453

The accuracy rate is: 50.00%.

As the cost curve shows, the neural network didn't learn anything! That is because of the symmetry between all neurons, which leads to all neurons having the same update on every iteration. Therefore, regardless of how many iterations we run the optimization algorithms, all the neurons would still get the same update and no learning would happen. As a result, we must break symmetry when initializing parameters so that the model would start learning on each update of the gradient descent.

There is no big difference if the random values are initialized from a standard normal distribution or uniform distribution, so we'll use the standard normal distribution in our examples. Also, we'll multiply the random values by a big number such as 10 to show that initializing parameters to big values may cause our optimization to have higher error rates (and even diverge in some cases). Let's now train our neural network where all weight matrices have been initialized using the following formula: np.random.randn() * 10

# train NN with random initialization parameters
layers_dims = [X.shape[0], 5, 5, 1]
parameters = model(X, Y, layers_dims, hidden_layers_activation_fn="tanh", initialization_method="random")

accuracy(X, parameters, Y, "tanh")

The cost after 100 iterations is: 1.2413142077549013
The cost after 200 iterations is: 1.1258751902393416
The cost after 300 iterations is: 1.0989052435267657
The cost after 400 iterations is: 1.0840966471282327
The cost after 500 iterations is: 1.0706953292105978
The cost after 600 iterations is: 1.0574847320236294
The cost after 700 iterations is: 1.0443168708889223
The cost after 800 iterations is: 1.031157857251139
The cost after 900 iterations is: 1.0179838815204902
The cost after 1000 iterations is: 1.004767088515343

The accuracy rate is: 55.00%.

Random initialization here is helping, but the loss function still has a high value and may take a long time to converge and achieve a significantly low value.

We'll explore two initialization methods:

Kaiming He method is best applied when the activation function applied on hidden layers is the Rectified Linear Unit (ReLU), so that the weight on each hidden layer would have the following variance: var(W^l) = 2/n^(l-1). We can achieve this by multiplying the random values from the standard normal distribution by sqrt(2/n^(l-1)).

Xavier method is best applied when the activation function applied on hidden layers is the Hyperbolic Tangent, so that the weight on each hidden layer would have the following variance: var(W^l) = 1/n^(l-1). We can achieve this by multiplying the random values from the standard normal distribution by sqrt(1/n^(l-1)).

We'll train the network using both methods and look at the results.

# train NN where all parameters were initialized based on He recommendation
layers_dims = [X.shape[0], 5, 5, 1]
parameters = model(X, Y, layers_dims, hidden_layers_activation_fn="tanh", initialization_method="he")

accuracy(X, parameters, Y, "tanh")

The cost after 100 iterations is: 0.6300611704834093
The cost after 200 iterations is: 0.49092836452522753
The cost after 300 iterations is: 0.46579423512433943
The cost after 400 iterations is: 0.6516254192289226
The cost after 500 iterations is: 0.32487779301799485
The cost after 600 iterations is: 0.4631461605716059
The cost after 700 iterations is: 0.8050310690163623
The cost after 800 iterations is: 0.31739195517372376
The cost after 900 iterations is: 0.3094592175030812
The cost after 1000 iterations is: 0.19934509244449203

The accuracy rate is: 99.00%.
# train NN where all parameters were initialized based on Xavier recommendation
layers_dims = [X.shape[0], 5, 5, 1]
parameters = model(X, Y, layers_dims, hidden_layers_activation_fn="tanh", initialization_method="xavier")

accuracy(X, parameters, Y, "tanh")

The cost after 100 iterations is: 0.6351961521800779
The cost after 200 iterations is: 0.548973489787121
The cost after 300 iterations is: 0.47982386652748565
The cost after 400 iterations is: 0.32811768889968684
The cost after 500 iterations is: 0.2793453045790634
The cost after 600 iterations is: 0.3258507563809604
The cost after 700 iterations is: 0.2873032724176074
The cost after 800 iterations is: 0.0924974839405706
The cost after 900 iterations is: 0.07418011931058155
The cost after 1000 iterations is: 0.06204402572328295

The accuracy rate is: 99.00%.

As shown from applying the four methods, parameters' initial values play a huge role in achieving low cost values as well as converging and achieving lower training error rates. The same would apply to the test error rate if we had test data.

Deep Learning frameworks make it easier to choose between different initialization methods without worrying about implementing them ourselves. Nonetheless, it's important to understand the critical role initial values of the parameters play in the overall performance of the network. Below are some key takeaways:

Well chosen initialization values of parameters lead to:

Speed up convergence of gradient descent.

Increase the likelihood of gradient descent to find lower training and generalization error rates.

Because we're dealing with iterative optimization algorithms with a non-convex loss function, different initializations lead to different results.

Random initialization is used to break symmetry and make sure different hidden units can learn different things.

Don't initialize to values that are too large.

Kaiming He (He) initialization works well for neural networks with the ReLU activation function.

Xavier initialization works well for neural networks with the Hyperbolic Tangent activation function.

The source code that created this post can be found here.

Originally published at imaddabbura.github.io on April 20, 2018.
[ { "code": null, "e": 495, "s": 172, "text": "Optimization, in Machine Learning/Deep Learning contexts, is the process of changing the model’s parameters to improve its performance. In other words, it’s the process of finding the best parameters in the predefined hypothesis space to get the best possible performance. There are three kinds of optimization algorithms:" }, { "code": null, "e": 573, "s": 495, "text": "Optimization algorithm that is not iterative and simply solves for one point." }, { "code": null, "e": 762, "s": 573, "text": "Optimization algorithm that is iterative in nature and converges to acceptable solution regardless of the parameters initialization such as gradient descent applied to logistic regression." }, { "code": null, "e": 1027, "s": 762, "text": "Optimization algorithm that is iterative in nature and applied to a set of problems that have non-convex loss functions such as neural networks. Therefore, parameters’ initialization plays a critical role in speeding up convergence and achieving lower error rates." }, { "code": null, "e": 1148, "s": 1027, "text": "In this post, we’ll look at three different cases of parameters’ initialization and see how this affects the error rate:" }, { "code": null, "e": 1350, "s": 1148, "text": "Initialize all parameters to zero.Initialize parameters to random values from standard normal distribution or uniform distribution and multiply it by a scalar such as 10.Initialize parameters based on:" }, { "code": null, "e": 1385, "s": 1350, "text": "Initialize all parameters to zero." }, { "code": null, "e": 1522, "s": 1385, "text": "Initialize parameters to random values from standard normal distribution or uniform distribution and multiply it by a scalar such as 10." }, { "code": null, "e": 1554, "s": 1522, "text": "Initialize parameters based on:" }, { "code": null, "e": 1577, "s": 1554, "text": "Xavier recommendation." }, { "code": null, "e": 1604, "s": 1577, "text": "Kaiming He recommendation." }, { "code": null, "e": 1822, "s": 1604, "text": "We’ll be using functions we wrote in “Coding Neural Network — Forward Propagation and Backpropagation” post to initialize parameters, compute forward propagation and back-propagation as well as the cross-entropy cost." }, { "code": null, "e": 2167, "s": 1822, "text": "To illustrate the above cases, we’ll use the cats vs dogs dataset which consists of 50 images for cats and 50 images for dogs. Each image is 150 pixels x 150 pixels on RGB color scale. Therefore, we would have 67,500 features where each column in the input matrix would be one image which means our input data would have 67,500 x 100 dimension." }, { "code": null, "e": 2263, "s": 2167, "text": "Let’s first load the data and show a sample of two images before we start the helper functions." }, { "code": null, "e": 2453, "s": 2263, "text": "We’ll write now all the helper functions that will help us initialize parameters based on different methods as well as writing L-layer model that we’ll be using to train our neural network." }, { "code": null, "e": 2597, "s": 2453, "text": "Here, we’ll initialize all weight matrices and biases to zeros and see how this would affect the error rate as well as the learning parameters." 
}, { "code": null, "e": 3369, "s": 2597, "text": "# train NN with zeros initialization parameterslayers_dims = [X.shape[0], 5, 5, 1]parameters = model(X, Y, layers_dims, hidden_layers_activation_fn=\"tanh\", initialization_method=\"zeros\") accuracy(X, parameters, Y,\"tanh\")The cost after 100 iterations is: 0.6931471805599453The cost after 200 iterations is: 0.6931471805599453The cost after 300 iterations is: 0.6931471805599453The cost after 400 iterations is: 0.6931471805599453The cost after 500 iterations is: 0.6931471805599453The cost after 600 iterations is: 0.6931471805599453The cost after 700 iterations is: 0.6931471805599453The cost after 800 iterations is: 0.6931471805599453The cost after 900 iterations is: 0.6931471805599453The cost after 1000 iterations is: 0.6931471805599453 The accuracy rate is: 50.00%." }, { "code": null, "e": 3855, "s": 3369, "text": "As the cost curve shows, the neural network didn’t learn anything! That is because of symmetry between all neurons which leads to all neurons have the same update on every iteration. Therefore, regardless of how many iterations we run the optimization algorithms, all the neurons would still get the same update and no learning would happen. As a result, we must break symmetry when initializing parameters so that the model would start learning on each update of the gradient descent." }, { "code": null, "e": 4377, "s": 3855, "text": "There is no big difference if the random values are initialized from standard normal distribution or uniform distribution so we’ll use standard normal distribution in our examples. Also, we’ll multiply the random values by a big number such as 10 to show that initializing parameters to big values may cause our optimization to have higher error rates (and even diverge in some cases). Let’s now train our neural network where all weight matrices have been intitialized using the following formula: np.random.randn() * 10" }, { "code": null, "e": 5149, "s": 4377, "text": "# train NN with random initialization parameterslayers_dims = [X.shape[0], 5, 5, 1]parameters = model(X, Y, layers_dims, hidden_layers_activation_fn=\"tanh\", initialization_method=\"random\") accuracy(X, parameters, Y,\"tanh\")The cost after 100 iterations is: 1.2413142077549013The cost after 200 iterations is: 1.1258751902393416The cost after 300 iterations is: 1.0989052435267657The cost after 400 iterations is: 1.0840966471282327The cost after 500 iterations is: 1.0706953292105978The cost after 600 iterations is: 1.0574847320236294The cost after 700 iterations is: 1.0443168708889223The cost after 800 iterations is: 1.031157857251139The cost after 900 iterations is: 1.0179838815204902The cost after 1000 iterations is: 1.004767088515343 The accuracy rate is: 55.00%." }, { "code": null, "e": 5304, "s": 5149, "text": "Random initialization here is helping but still the loss function has high value and may take long time to converge and achieve a significantly low value." }, { "code": null, "e": 5346, "s": 5304, "text": "We’ll explore two initialization methods:" }, { "code": null, "e": 5650, "s": 5346, "text": "Kaiming He method is best applied when activation function applied on hidden layers is Rectified Linear Unit (ReLU). so that the weight on each hidden layer would have the following variance: var(W^l )= 2/n^(l-1). 
We can achieve this by multiplying the random values from standard normal distribution by" }, { "code": null, "e": 5939, "s": 5650, "text": "Xavier method is best applied when activation function applied on hidden layers is Hyperbolic Tangent so that the weight on each hidden layer would have the following variance: var(W^l )= 1/n^(l-1). We can achieve this by multiplying the random values from standard normal distribution by" }, { "code": null, "e": 6007, "s": 5939, "text": "We’ll train the network using both methods and look at the results." }, { "code": null, "e": 6808, "s": 6007, "text": "# train NN where all parameters were initialized based on He recommendationlayers_dims = [X.shape[0], 5, 5, 1]parameters = model(X, Y, layers_dims, hidden_layers_activation_fn=\"tanh\", initialization_method=\"he\") accuracy(X, parameters, Y,\"tanh\")The cost after 100 iterations is: 0.6300611704834093The cost after 200 iterations is: 0.49092836452522753The cost after 300 iterations is: 0.46579423512433943The cost after 400 iterations is: 0.6516254192289226The cost after 500 iterations is: 0.32487779301799485The cost after 600 iterations is: 0.4631461605716059The cost after 700 iterations is: 0.8050310690163623The cost after 800 iterations is: 0.31739195517372376The cost after 900 iterations is: 0.3094592175030812The cost after 1000 iterations is: 0.19934509244449203The accuracy rate is: 99.00%." }, { "code": null, "e": 7649, "s": 6808, "text": "# train NN where all parameters were initialized based on Xavier recommendationlayers_dims = [X.shape[0], 5, 5, 1]parameters = model(X, Y, layers_dims, hidden_layers_activation_fn=\"tanh\", initialization_method=\"xavier\") accuracy(X, parameters, Y,\"tanh\")accuracy(X, parameters, Y, \"tanh\")The cost after 100 iterations is: 0.6351961521800779The cost after 200 iterations is: 0.548973489787121The cost after 300 iterations is: 0.47982386652748565The cost after 400 iterations is: 0.32811768889968684The cost after 500 iterations is: 0.2793453045790634The cost after 600 iterations is: 0.3258507563809604The cost after 700 iterations is: 0.2873032724176074The cost after 800 iterations is: 0.0924974839405706The cost after 900 iterations is: 0.07418011931058155The cost after 1000 iterations is: 0.06204402572328295The accuracy rate is: 99.00%." }, { "code": null, "e": 7886, "s": 7649, "text": "As shown from applying the four methods, parameters’ initial values play a huge role in achieving low cost values as well as converging and achieve lower training error rates. The same would apply to test error rate if we had test data." }, { "code": null, "e": 8193, "s": 7886, "text": "Deep Learning frameworks make it easier to choose between different initialization methods without worrying about implementing it ourselves. Nonetheless, it’s important to understand the critical role initial values of the parameters in the overall performance of the network. Below are some key takeaways:" }, { "code": null, "e": 8251, "s": 8193, "text": "Well chosen initialization values of parameters leads to:" }, { "code": null, "e": 8391, "s": 8251, "text": "Speed up convergence of gradient descent.Increase the likelihood of gradient descent to find lower training and generalization error rates." }, { "code": null, "e": 8433, "s": 8391, "text": "Speed up convergence of gradient descent." }, { "code": null, "e": 8532, "s": 8433, "text": "Increase the likelihood of gradient descent to find lower training and generalization error rates." 
}, { "code": null, "e": 8677, "s": 8532, "text": "Because we’re dealing with iterative optimization algorithms with non-convex loss function, different initializations lead to different results." }, { "code": null, "e": 8790, "s": 8677, "text": "Random initialization is used to break symmetry and make sure different hidden units can learn different things." }, { "code": null, "e": 8837, "s": 8790, "text": "Don’t initialize to values that are too large." }, { "code": null, "e": 8930, "s": 8837, "text": "Kaiming He (He) initialization works well for neural networks with ReLU activation function." }, { "code": null, "e": 9028, "s": 8930, "text": "Xavier initialization works well for neural networks with Hyperbolic Tangent activation function." }, { "code": null, "e": 9086, "s": 9028, "text": "The source code that created this post can be found here." } ]
Data Analysis with Python, R, and SQL | by Soner Yıldırım | Towards Data Science
The data science ecosystem consists of numerous software tools and packages that make our lives easier. Some of them are optimized to perform better and more efficiently at certain tasks. However, we have many options for typical data analysis and manipulation tasks.

In this article, we will compare Python, R, and SQL with respect to typical operations in exploratory data analysis. The examples can be considered basic level. The goal of the article is to emphasize the similarities and differences between these tools.

I also wanted to point out how the same operations can be done with a different set of tools. Although there are syntactical differences, the logic behind the operations and the approach for handling a particular task are quite similar.

In the following examples, I will define a task and complete it using the Pandas library (Python), the Data.table library (R), and SQL.

Here is a snapshot of the dataset that will be used in the examples.

Find the average price of items for each store id.

SQL: We select the store id and price columns. The aggregation on the price column is specified while selecting it. We then group the values by the store id column.

mysql> select store_id, avg(price)
    -> from items
    -> group by store_id;
+----------+------------+
| store_id | avg(price) |
+----------+------------+
|        1 |   1.833333 |
|        2 |   3.820000 |
|        3 |   3.650000 |
+----------+------------+

Pandas: We select the columns and apply the group by function. The last step is the aggregate function, which is the mean.

items[['store_id','price']].groupby('store_id').mean()

             price
store_id
1         1.833333
2         3.820000
3         3.650000

Data.table: The syntax is kind of a mixture of Pandas and SQL. We apply the aggregation and specify the grouping column while selecting the columns.

> items[, .(mean(price)), by = .(store_id)]
   store_id       V1
1:        1 1.833333
2:        2 3.820000
3:        3 3.650000

What is the price of the most expensive item in store 3?

It is similar to the previous example with additional filtering. We are only interested in store 3.

SQL: We select the price column and apply the max function. The filtering is done by using the where clause.

mysql> select max(price) from items
    -> where store_id = 3;
+------------+
| max(price) |
+------------+
|       7.50 |
+------------+

Pandas: We first apply the filter and select the column of interest. Then the max function is applied.

items[items.store_id == 3]['price'].max()
7.5

Data.table: The filtering is similar to Pandas, but the aggregation is similar to the SQL syntax.

> items[store_id == 3, max(price)]
[1] 7.5

You may have noticed a small difference in the syntax for data.table. The aggregation function is specified with a dot (.(mean(price))) in the previous example but without a dot in this example (max(price)). Using the notation with a dot returns a table, whereas an array is returned if used without the dot.

List the items and their prices in store 1 and sort them based on the price in descending order.

SQL: In addition to what we have seen up to this point, the order by clause is added at the end to sort the results. It sorts in ascending order by default, so we need to change it using the desc keyword.

mysql> select description, price
    -> from items
    -> where store_id = 1
    -> order by price desc;
+-------------+-------+
| description | price |
+-------------+-------+
| banana      |  3.45 |
| apple       |  2.45 |
| lettuce     |  1.80 |
| cucumber    |  1.20 |
| bread       |  1.15 |
| tomato      |  0.95 |
+-------------+-------+

Pandas: Sorting is done using the sort_values function.
Pandas also sorts in ascending order by default, which can be changed with the ascending parameter.

items[items.store_id == 1][['description','price']]\
.sort_values(by='price', ascending=False)

   description  price
1       banana   3.45
0        apple   2.45
4      lettuce   1.80
11    cucumber   1.20
14       bread   1.15
7       tomato   0.95

Data.table: The sorting operation is done by using the order function as below. We change the default behavior of sorting in ascending order by adding a minus sign.

> items[store_id == 1, .(description, price)][order(-price)]
   description price
1:      banana  3.45
2:       apple  2.45
3:     lettuce  1.80
4:    cucumber  1.20
5:       bread  1.15
6:      tomato  0.95

Show all the rows in which the description of the item contains the word "egg".

SQL: This task includes filtering based on strings. Since we are not making an exact comparison, we will use the like keyword.

mysql> select * from items
    -> where description like '%egg%';
+---------+-------------+-------+----------+
| item_id | description | price | store_id |
+---------+-------------+-------+----------+
|       9 | egg 15      |  4.40 |        3 |
|      11 | egg 30      |  7.50 |        3 |
+---------+-------------+-------+----------+

Pandas: We will use the contains function of the str accessor.

items[items.description.str.contains("egg")]

    item_id description  price  store_id
8         9      egg 15    4.4         3
10       11      egg 30    7.5         3

Data.table: The filtering is quite similar to the SQL syntax. We will use the like keyword as below.

> items[description %like% "egg"]
    V1 item_id description price store_id
1:   8       9      egg 15   4.4        3
2:  10      11      egg 30   7.5        3

Find the number of items sold in each store.

SQL: The count function can be used as below:

mysql> select store_id, count(description) as item_count
    -> from items
    -> group by store_id;
+----------+------------+
| store_id | item_count |
+----------+------------+
|        1 |          6 |
|        2 |          5 |
|        3 |          4 |
+----------+------------+

Pandas: There is a dedicated function for such tasks. The value_counts function returns the number of occurrences for each distinct value.

items.store_id.value_counts()
1    6
2    5
3    4
Name: store_id, dtype: int64

Data.table: We use the .N option for the aggregation, which does the same operation as the count function in SQL.

> items[, .N, by=(store_id)]
   store_id N
1:        1 6
2:        2 5
3:        3 4

We have done some basic data analysis and manipulation operations. There is, of course, much more we can do with these tools. In fact, they provide versatile and powerful functions to complete advanced and complex tasks.

The goal of this article is to show the similarities and differences between these tools. Having a broad selection of tools might be intimidating, but they all are capable of handling most of what you need. After a while, it comes down to a decision based on your taste.

Thank you for reading. Please let me know if you have any feedback.
DAX Filter - ISCROSSFILTERED function
Returns TRUE when columnName or another column in the same or related table is being filtered.

Syntax:
ISCROSSFILTERED (<columnName>)

Parameter:
columnName — The name of a column in a table. It cannot be an expression.

Return Value:
TRUE or FALSE.

Remarks:
A column columnName is said to be cross-filtered when a filter applied to another column in the same table or in a related table affects columnName by filtering it.
A column is said to be filtered directly when the filter or filters apply over the column. You can use the DAX ISFILTERED function to find out if a column is filtered directly.

Example:
= ISCROSSFILTERED (Sales)
Introduction to Matrices in R. Learn how to create matrices and... | by Linda Ngo | Towards Data Science
A matrix is a collection of elements of the same data type (numeric, character, or logical) arranged into a fixed number of rows and columns. A two-dimensional matrix is one that works only with rows and columns.

The matrix() function in R creates a matrix. Consider the following example:

matrix(1:9, byrow=TRUE, nrow = 3)

This constructs a matrix with 3 rows, containing the numbers 1 to 9, filled row-wise.

In the matrix() function:

The first argument is the collection of elements that R will arrange into the rows and columns of the matrix. Here, we used 1:9 (this is the same as c(1,2,3,4,5,6,7,8,9) (see vectors in R)). This is an optional argument and can be filled later. If we leave it blank, the matrix just won’t be filled.

The argument byrow indicates that the matrix is filled row-wise. If we want the matrix to be filled column-wise, we set this argument to FALSE (that is, byrow=FALSE). By default, the matrix is filled by columns, byrow=FALSE.

# Row-wise     # Column-wise
1 2 3          1 4 7
4 5 6          2 5 8
7 8 9          3 6 9

The third argument nrow indicates the desired number of rows. nrow=3 indicates that the matrix should have three rows.

There are also other arguments, such as:

ncol, which indicates the desired number of columns.

Let’s analyze the box office numbers for the Star Wars franchise.

Below are three vectors, each defining the box office numbers of one of the first three Star Wars movies. The first element of each vector indicates the US box office revenue, the second element refers to the non-US box office (Source: Wikipedia).

# Box office Star Wars (in millions)
new_hope <- c(460.998, 314.4)
empire_strikes <- c(290.475, 247.900)
return_jedi <- c(309.306, 165.8)

To construct a matrix from these three vectors, we will first need to combine the three vectors into one.

box_office <- c(new_hope, empire_strikes, return_jedi)

We then use the matrix() function to construct a matrix. The first argument is the vector box_office, which contains all box office figures. Next, we have to specify nrow=3 and byrow=TRUE to construct the matrix with 3 rows filled row-wise (the first column will represent the US revenue, the second non-US revenue).

star_wars_matrix <- matrix(box_office, byrow=TRUE, nrow=3)

It is often helpful to add names to the rows and columns of a matrix to help remember what is stored in it. Not only does it help with reading the data, but also with selecting certain elements from the matrix. We can achieve this by using the functions colnames() and rownames().

rownames(my_matrix) <- row_names_vector
colnames(my_matrix) <- col_names_vector

To name the columns by region and titles, vectors representing these names are needed.

# Vectors region and titles, used for naming
region <- c("US", "non-US")
titles <- c("A New Hope", "The Empire Strikes Back", "Return of the Jedi")

To name the columns of star_wars_matrix with the region vector, colnames() must be used.

# Name the columns with region
colnames(star_wars_matrix) <- region

To name the rows of star_wars_matrix with the titles vector, rownames() must be used.

# Name the rows with titles
rownames(star_wars_matrix) <- titles

Your code should look something like this now:

# Box office Star Wars (in millions)
new_hope <- c(460.998, 314.4)
empire_strikes <- c(290.475, 247.900)
return_jedi <- c(309.306, 165.8)
# Construct matrix
star_wars_matrix <- matrix(c(new_hope, empire_strikes, return_jedi), nrow = 3, byrow = TRUE)
# Vectors region and titles, used for naming
region <- c("US", "non-US")
titles <- c("A New Hope", "The Empire Strikes Back", "Return of the Jedi")
# Name the columns with region
colnames(star_wars_matrix) <- region
# Name the rows with titles
rownames(star_wars_matrix) <- titles
# Print out star_wars_matrix
star_wars_matrix

The dimnames attribute for the matrix can be used to name the rows and columns of the matrix. The dimnames attribute takes a list of length 2 giving the row and column names respectively. That is,

dimnames = list(row_vector, column_vector)

So, during the construction of the matrix, we can directly label it:

# Construct star_wars_matrix
box_office <- c(460.998, 314.4, 290.475, 247.900, 309.306, 165.8)
star_wars_matrix <- matrix(box_office, nrow = 3, byrow = TRUE, dimnames = list(c("A New Hope", "The Empire Strikes Back", "Return of the Jedi"), c("US", "non-US")))

An important statistic for a movie is its worldwide box office figures. To calculate the total box office revenue for the three Star Wars movies, you need to determine the sum of the US revenue and the non-US revenue.

The function rowSums() calculates the totals for each row of a matrix and creates a new vector:

rowSums(my_matrix)

Calculate the worldwide box office figures for the three movies.

# Construct star_wars_matrix
box_office <- c(460.998, 314.4, 290.475, 247.900, 309.306, 165.8)
star_wars_matrix <- matrix(box_office, nrow = 3, byrow = TRUE, dimnames = list(c("A New Hope", "The Empire Strikes Back", "Return of the Jedi"), c("US", "non-US")))
# Calculate worldwide box office figures
worldwide_vector <- rowSums(star_wars_matrix)

You can add a column or multiple columns to a matrix using the cbind() function, which merges matrices and/or vectors together by column. For example:

big_matrix <- cbind(matrix1, matrix2, vector1, ...)

In the previous exercise, you calculated the vector that contained the worldwide box office revenue for each of the three movies. However, this vector is not yet part of star_wars_matrix. Add this vector as a new column to the matrix and assign the result to a new matrix.

# Construct star_wars_matrix
box_office <- c(460.998, 314.4, 290.475, 247.900, 309.306, 165.8)
star_wars_matrix <- matrix(box_office, nrow = 3, byrow = TRUE, dimnames = list(c("A New Hope", "The Empire Strikes Back", "Return of the Jedi"), c("US", "non-US")))
# Calculate worldwide box office figures
worldwide_vector <- rowSums(star_wars_matrix)
# Bind the new variable worldwide_vector as a column to star_wars_matrix
all_wars_matrix <- cbind(star_wars_matrix, worldwide_vector)

To add a column, you can use cbind(). To add a row, you can use rbind(). The rbind() function takes a sequence of vectors or matrices as arguments and combines them by row. For example,

big_matrix <- rbind(matrix1, matrix2, vector1, ...)

Similar to how you created the star_wars_matrix with data on the original trilogy, create a second matrix with similar data for the prequels trilogy. Then use rbind() to combine the two matrices, with data for the original trilogy first, then data for the prequels second.

# Construct star_wars_matrix
box_office <- c(461.0, 314.4, 290.5, 247.9, 309.3, 165.8)
star_wars_matrix <- matrix(box_office, nrow = 3, byrow = TRUE, dimnames = list(c("A New Hope", "The Empire Strikes Back", "Return of the Jedi"), c("US", "non-US")))
# Construct star_wars_matrix2
box_office2 <- c(474.5, 552.5, 310.7, 338.7, 380.3, 468.5)
star_wars_matrix2 <- matrix(box_office2, nrow = 3, byrow = TRUE, dimnames = list(c("The Phantom Menace", "Attack of the Clones", "Revenge of the Sith"), c("US", "non-US")))
# Combine both Star Wars trilogies in one matrix
all_wars_matrix <- rbind(star_wars_matrix, star_wars_matrix2)

Similar to how we used rowSums() to calculate the sum of each row, we can also use colSums() to calculate the sum of each column of the matrix.

colSums(my_matrix)

Using the all_wars_matrix constructed in the previous exercise, calculate the total box office revenue for the US and the non-US region for the entire saga.

# Total revenue for US and non-US
total_revenue_vector <- colSums(all_wars_matrix)

Similar to vectors, square brackets [] can be used to select one or multiple elements from a matrix. Since matrices are two-dimensional, a comma is needed to separate the rows and columns. For example:

my_matrix[1,2] selects the element at the first row and second column (row 1, column 2).

my_matrix[1:3, 2:4] returns a matrix with the data on rows 1 through 3, and columns 2 through 4.

To select all elements of a row, no number is needed after the comma. To select all elements of a column, no number is needed before the comma.

my_matrix[,1] selects all elements of the first column.

my_matrix[1,] selects all elements of the first row.

Calculate the mean of the non-US revenue for all movies (Hint: select the entire second column of all_wars_matrix, and use the mean() function).

# Select the non-US revenue for all movies
non_us_all <- all_wars_matrix[,2]
# Average non-US revenue
mean(non_us_all)

Calculate the mean for the first two movies of the saga.

# Select the non-US revenue for the first two movies
non_us_some <- all_wars_matrix[1:2,2]
# Average non-US revenue for the first two movies
mean(non_us_some)

The standard operators like +, -, /, *, etc. that work with vectors work in an element-wise way on matrices as well.

For example, 2 * my_matrix multiplies each element of my_matrix by 2.

You can also multiply a matrix by another matrix. For example, my_matrix1 * my_matrix2 creates a matrix where each element is the product of the corresponding elements in my_matrix1 and my_matrix2.

Suppose the price of a movie ticket was 5 dollars. Determine how many visitors went to each movie for each geographical area. (Hint: simply dividing box office numbers by the ticket price will give you the number of visitors.)

# Estimate the visitors
visitors <- all_wars_matrix / 5

Suppose ticket prices go up over time. Given a matrix of the ticket prices, determine the estimated number of US and non-US visitors for all the movies.

# Construct ticket_prices_matrix
ticket_prices <- c(5.0, 5.0, 6.0, 6.0, 7.0, 7.0, 4.0, 4.0, 4.5, 4.5, 4.9, 4.9)
ticket_prices_matrix <- matrix(ticket_prices, nrow = 6, byrow = TRUE, dimnames = list(c("A New Hope", "The Empire Strikes Back", "Return of the Jedi", "The Phantom Menace", "Attack of the Clones", "Revenge of the Sith"), c("US", "non-US")))

# Estimated number of visitors
visitors <- all_wars_matrix / ticket_prices_matrix

Calculate the average number of US visitors (Hint: you’ll need the visitors matrix from the previous exercise).

# US visitors (Select the entire first column)
us_visitors <- visitors[,1]
# Average number of US visitors
mean(us_visitors)

All images, unless specified, are owned by the author. The banner image was created using Canva.
Automatically add a mask to a profile photo. A face-api.js tutorial | Towards Data Science
Have you heard? The CDC and the Surgeon General now recommend and encourage everyone to wear a mask when going out in public. Unfortunately, there’s still a lot of stigma associated with wearing a mask. “What if people think I’m sick?” “What if they yell at me to stay home?” These thoughts can prevent one from wearing a mask even though that could help protect oneself and others. To help promote the idea that wearing masks is the smart and right thing to do and make it common enough so that it becomes a new social norm, I decided to make an application that automatically adds a mask to your profile photo. This is how I made it.

I first thought about the core features of the application. This included:

Users upload their own photos or paste a link to photos
Website detects face(s) and their landmarks in the photo
Website overlays a mask over the nose, mouth, and jaw area
Users download the final image.

After some Googling, I landed on two options: 1) build an app in Python using Flask & OpenCV, or 2) use face-api.js and build a stand-alone app in JavaScript. Although I’m more comfortable with Python, I decided to go with face-api.js & JavaScript because it could run on modern browsers without requiring a backend server for heavy lifting. Also, Vincent Mühler already had a great example of face and landmark detection that would easily cover steps 1 and 2.

Next, I did a quick search to see if there were any examples for overlaying a mask on a face and found https://spooky-masks.netlify.app using face-api.js, which I thought would require just a few tweaks to complete step 3.

I took some time to understand how face-api.js works in the faceLandmarkDetection demo. Some basics included loading face-api.js in the head section with <script src=”face-api.js”></script> and finding the sections that load the input image and draw overlays.

<div style="position: relative" class="margin">
  <img id="inputImg" src="" style="max-width: 800px;" />
  <canvas id="overlay" />
</div>

Also, here’s the section where the call to faceapi was made to detect faces in the input image inputImgEl and save the results.

async function updateResults() {
  ...
  const results = await faceapi.detectAllFaces(inputImgEl, options).withFaceLandmarks()
  ...
}

I also identified how the masks were manipulated and adjusted to be placed on top of faces in maskify.js from the Spookymasks tutorial. It used methods from face-api.js to get the coordinates of landmarks. For example, to get the nose you could run const nose = landmarks.getNose(); or to get the jawline use const jawline = landmarks.getJawOutline();. See the original code here.

Using this I was able to quickly whip up a prototype that will detect the face and overlay it with a mask. See the full code here!

Unfortunately, I quickly realized that this automatic overlay of the mask was not going to be perfect. Little rotations and yaws of the face would lead to slightly inaccurate jawlines that would tilt the mask too much one way or another. Thus I decided to add a few controls to shift the mask.

It turns out that manual adjustments were going to be necessary anyway because landmarks may not always detect the entire contours of the chin. So I decided to add buttons that will shift the position of the mask in small increments.

For example, to make a button that moves the mask to the right, I added

<button class=”waves-effect waves-light btn” onclick=”moveright_mask();”>Move Right</button>

defined by the function

function moveright_mask() {
  const myNode = document.getElementById("maskdiv").children;
  for (var i = 0; i < myNode.length; i++) {
    var tableChild = myNode[i];
    // Shift this piece of the mask 2 pixels to the right
    var left = parseFloat(tableChild.style.left) + 2;
    tableChild.style.left = `${left}px`;
  };
}

which would update the mask in the maskdiv.

Try it out for yourself or see a video of it in action:

This was a pretty fun project to work on to distract myself as the pandemic started. As I was building this, I eventually also found https://socialdistancing.works/ which practically does the same thing except that it is purely manual adjustments. Another nice feature of MaskOnMe is that it can also detect multiple faces at once so you can post group photos with everyone with a mask!

The next step for me when I have time is to work on an AR version where one would overlay a mask from a webcam feed!

Thank you for reading and feel free to check out my other posts if you want to read more about face image processing.
Data Science for Startups: Tracking Data | by Ben Weber | Towards Data Science
Part two of my ongoing series about building a data science discipline at a startup. You can find links to all of the posts in the introduction, and a book based on this series on Amazon.

In order to make data-driven decisions at a startup, you need to collect data about how your products are being used. You also need to be able to measure the impact of making changes to your product and the efficacy of running campaigns, such as deploying a custom audience for marketing on Facebook. Again, collecting data is necessary for accomplishing these goals.

Usually data is generated directly by the product. For example, a mobile game can generate data points about launching the game, starting additional sessions, and leveling up. But data can also come from other sources, such as an email vendor that provides response data about which users read and click on links within an email. This post focuses on the first type of data, where tracking events are being generated by the product.

Why record data about product usage?

Track metrics: You may want to record performance metrics for tracking product health or other metrics useful for running the business.
Enable experimentation: To determine if making changes to a product is beneficial, you need to be able to measure results.
Build data products: In order to make something like a recommendation system, you need to know which items users are interacting with.

It’s been said that data is the new oil, and there’s a wide variety of reasons to collect data from products. When I first started in the gaming industry, data tracked from products was referred to as telemetry. Now, data collected from products is frequently called tracking.

This post discusses what type of data to collect about product usage, how to send data to a server for analysis, issues when building a tracking API, and some concerns to consider when tracking user behavior.

One of the first questions to answer when deploying a new product is: What data should we collect about user behavior? The answer is that it depends on your product and intended use cases, but there are some general guidelines about what types of data to collect across most web, mobile, and native applications.

Installs: How big is the user base?
Sessions: How engaged is the user base?
Monetization: How much are users spending?

For these three types of events, the data may actually be generated from three different systems. Installation data might come from a third party, such as Google Play or the App Store, a session start event will be generated from the client application, and spending money in an application, or viewing ads, may be tracked by a different server. As long as you own the service that is generating the data points, you can use the same infrastructure to collect data about different types of events.

Collecting data about how many users launch and log into an application will enable you to answer basic questions about the size of your base, and enable you to track business metrics such as DAU, MAU, ARPDAU, and D-7 retention. However, it doesn’t provide much insight into what users are doing within an application, and it doesn’t provide many data points that are useful for building data products. In order to better understand user engagement, it’s necessary to track data points that are domain or product specific.

For example, you might want to track the following types of events in a multiplayer shooter game for consoles:

GameStarted: tracks when the player starts a single or multiplayer game.
PlayerSpawn: tracks when the player spawns into the game world and tracks the class that the user is playing, such as combat medic.
PlayerDeath: tracks where players are dying and getting stuck and enables calculating metrics such as KDR (kill/death ratio).
RankUp: tracks when the player levels up or unlocks a new rank.

Most of these events translate well to other shooter games and other genres such as action/adventure. For a specific game, such as FIFA, you may want to record game-specific events, such as:

GoalScored: tracks when a point is scored by the player or opponent.
PlayerSubstitution: tracks when a player is substituted.
RedCardReceived: tracks when the player receives a red card.

Like the prior events, many of these game-specific events can actually be generalized to sports games. If you’re a company like EA with a portfolio of different sports titles, it’s useful to track all of these events across all of your sports titles (the red card event can be generalized to a penalty event).

If we’re able to collect these types of events about players, we can start to answer useful questions about the player base, such as:

Are users that receive more red cards more likely to quit?
Do online-focused players play more than single-player focused players?
Do users play the new career mode that was released?

A majority of tracking events are focused on collecting data points about released titles, but it’s also possible to collect data during development. At Microsoft Studios, I worked with the user research team to get tracking in place for playtesting. As a result, we could generate visualizations that were useful for conveying to game teams where players were getting stuck. Incorporating these visualizations into the playtesting results resulted in a much better reception from game teams.
When you first add tracking to a product, you won’t know of every event and attribute that will be useful to record, but you can make a good guess by asking team members what types of questions they intend to ask about user behavior and by implementing events that are able to answer these questions. Even with good tracking data, you won’t be able to answer every question, but if you have good coverage you can start to improve your products.

Some teams write tracking specifications in order to define which tracking events need to be implemented in a product. Other teams don’t have any documentation and simply take a best-guess approach at determining what to record. I highly recommend writing tracking specifications as a best practice. For each event, the spec should identify the conditions for firing an event, the attributes to send, and definitions for any event-specific attributes. For example, a session start event for a web app might have the following form:

Condition: fired when the user first browses to the domain. The event should not be fired when the user clicks on new pages or uses the back button, but should fire if the user browses to a new domain and then back.

Properties: web browser and version, userID, landing page, referring URL, client timestamp

Definitions: referring URL should list the URL of the page that referred the user to this domain, or the application that referred the user to the web page (e.g. Facebook or Twitter).

Tracking specs are a highly useful piece of documentation. Small teams might be able to get away without having an official process for writing tracking specs, but a number of scenarios can make the documentation critical, such as implementing events on a new platform, re-implementing events for a new backend service, or having engineers leave the team. In order for specs to be useful, it’s necessary to answer the following questions:

Who is responsible for writing the spec?
Who is responsible for implementing the spec?
Who is responsible for testing the implementation?

In small organizations, a data scientist might be responsible for all of the aspects of tracking. For a larger organization, it’s common for the owners to be a product manager, engineering team, and testing group. A sketch of what an event matching the spec above might look like on the wire is shown below.
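To make the spec concrete, here is a hedged sketch of the session start event encoded as JSON, the format recommended later in this post. The field names and values are illustrative assumptions drawn from the Properties list above, not part of any standard:

{
  "event": "session_start",
  "browser": "Chrome 64",
  "userID": "a1b2c3",
  "landingPage": "/home",
  "referringURL": "https://www.facebook.com/",
  "clientTimestamp": 1525420800000
}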
Another consideration when setting up tracking for a product is determining whether to send events from a client application or a backend service. For example, a video-streaming web site can send data about which video a user is watching directly from the web browser, or from the backend service that is serving the video. While there are pros and cons to both approaches, I prefer setting up tracking for backend services rather than client applications if possible. Some of the benefits of server-side tracking are:

Trusted Source: You don’t need to expose an endpoint on the web, and you know that events are being generated from your services rather than bots. This helps avoid fraud and DDoS attacks.

Avoid Ad Blocking: If you send data from a client application to an endpoint exposed on the web, some users may block access to the endpoint, which impacts business metrics.

Versioning: You might need to make changes to an event. You can update your web servers as needed, but often cannot require users to update a client application.

Generating tracking from servers rather than client applications helps avoid issues around fraud, security, and versioning. However, there are some drawbacks to server-side tracking:

Testing: You might need to add new events or modify existing tracking events for testing purposes. This is often easier to do by making changes on the client side.

Data availability: Some of the events that you might want to track do not make calls to a web server. For example, a console game might not connect to any web services during a session start, and instead wait until a multiplayer match starts. Also, attributes such as the referring URL may only be available for the client application and not the backend service.

A general guideline is to not trust anything sent by a client application, because often endpoints are not secured and there is no way to verify that the data was generated by your application. But client data is very useful, so it’s best to combine both client and server side tracking and to secure endpoints used for collecting tracking from clients.

The goal of sending data to a server is to make the data available for analysis and data products. There’s a number of different approaches that can be used based on your use case. This section introduces three different ways of sending events to an endpoint on the web and saving the events to local storage. The samples below are not intended to be production code, but instead simple proofs of concept. The next post in this series will cover building a pipeline for processing events. All code for the samples below is available on Github.

Web Call
The easiest way to set up a tracking service is by making web calls with the event data to a web site. This can be implemented with a lightweight PHP script, which is shown in the code block below.

<?php
  $message = $_GET['message'];
  if ($message != '') {
    $dataFile = fopen("telemetry.log", "a");
    fwrite($dataFile, "$message\n");
    fflush($dataFile);
    fclose($dataFile);
  }
?>

This PHP script reads the message parameter from the URL and appends the message to a local file. The script can be invoked by making a web call:

http://.../tracking.php?message=Hello_World

The call can be made from a Java client or server using the following code:

// endpoint
String endPoint = "http://.../tracking.php";

// send the message
String message = "Hello_World_" + System.currentTimeMillis();
URL url = new URL(endPoint + "?message=" + message);
URLConnection con = url.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));

// process the response
while (in.readLine() != null) {}
in.close();

This is one of the easiest ways to start collecting tracking data, but it doesn’t scale and it’s not secure. It’s useful for testing, but should be avoided for anything customer facing. I did use this approach in the past to collect data about players for a Mario level generator experiment.

Web Server
Another approach you can use is setting up a web service to collect tracking events. The code below shows how to use Jetty to set up a lightweight service for collecting data. In order to compile and run the example, you’ll need to include the Jetty dependency in your pom file; a sketch of that dependency follows.
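The original post embedded the full pom file, which did not survive the export. As a hedged placeholder, the single dependency the example needs is jetty-server; the version shown here is illustrative, so check the project’s Github repository for the exact one used:

<!-- Jetty dependency for the embedded web server example -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>9.4.9.v20180320</version>
</dependency>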
The script can be invoked by making a web call:

http://.../tracking.php?message=Hello_World

The call can be made from a Java client or server using the following code:

// endpoint
String endPoint = "http://.../tracking.php";

// send the message
String message = "Hello_World_" + System.currentTimeMillis();
URL url = new URL(endPoint + "?message=" + message);
URLConnection con = url.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));

// process the response
while (in.readLine() != null) {}
in.close();

This is one of the easiest ways to start collecting tracking data, but it doesn’t scale and it’s not secure. It’s useful for testing, but should be avoided for anything customer facing. I did use this approach in the past to collect data about players for a Mario level generator experiment.

Web Server
Another approach you can use is setting up a web service to collect tracking events. The code below shows how to use Jetty to set up a lightweight service for collecting data. In order to compile and run the example, you’ll need to include the following pom file. The first step is to start a web service that will handle tracking requests:

public class TrackingServer extends AbstractHandler {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        server.setHandler(new TrackingServer());
        server.start();
        server.join();
    }

    public void handle(String target, Request baseRequest, HttpServletRequest request,
            HttpServletResponse response) throws IOException, ServletException {
        // Process Request
    }
}

In order to process events, the application reads the message parameter from the web request, appends the message to a local file, and then responds to the web request. The full code for this example is available here.

// append the event data to a local file
String message = baseRequest.getParameter("message");
if (message != null) {
    BufferedWriter writer = new BufferedWriter(new FileWriter("tracking.log", true));
    writer.write(message + "\n");
    writer.close();
}

// service the web request
response.setStatus(HttpServletResponse.SC_OK);
baseRequest.setHandled(true);

In order to call the endpoint with Java, we’ll need to modify the URL:

URL url = new URL("http://localhost:8080/?message=" + message);

This approach can scale a bit more than the PHP approach, but is still insecure and not the best approach for building a production system. My advice for building a production-ready tracking service is to use a stream processing system such as Kafka, Amazon Kinesis, or Google’s PubSub.

Subscription Service
Using messaging services such as PubSub enables systems to collect massive amounts of tracking data and forward the data to a number of different consumers. Some systems such as Kafka require setting up and maintaining servers, while other approaches like PubSub are managed services that are serverless. Managed services are great for startups, because they reduce the amount of DevOps support needed. But the tradeoff is cost, and it’s pricier to use managed services for massive data collection.

The code below shows how to use Java to post a message to a topic on PubSub. The full code listing is available here and the pom file for building the project is available here. In order to run this example, you’ll need to set up a free Google Cloud project and enable PubSub. More details on setting up GCP and PubSub are available in this post.
// set up a publisher
TopicName topicName = TopicName.of("projectID", "raw-events");
Publisher publisher = Publisher.newBuilder(topicName).build();

// schedule a message to be published
String message = "Hello World!";
PubsubMessage pubsubMessage = PubsubMessage.newBuilder()
    .setData(ByteString.copyFromUtf8(message)).build();

// publish the message, and add this class as a callback listener
ApiFuture<String> future = publisher.publish(pubsubMessage);
ApiFutures.addCallback(future, new ApiFutureCallback<String>() {
    public void onFailure(Throwable arg0) {}
    public void onSuccess(String arg0) {}
});
publisher.shutdown();

This code example shows how to send a single message to PubSub for recording a tracking event. For a production system, you’ll want to implement the onFailure method in order to deal with failed deliveries. The code above shows how to send a message with Java, while other languages are supported, including Go, Python, C#, and PHP. It’s also possible to interface with other stream processing systems such as Kafka.

The next code segment shows how to read a message from PubSub and append the message to a local file. The full code listing is available here. In the next post I’ll show how to consume messages using DataFlow.

// set up a message handler
MessageReceiver receiver = new MessageReceiver() {
    public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
        try {
            BufferedWriter writer = new BufferedWriter(new FileWriter("tracking.log", true));
            writer.write(message.getData().toStringUtf8() + "\n");
            writer.close();
            consumer.ack();
        } catch (Exception e) {}
    }
};

// start the listener for 1 minute
SubscriptionName subscriptionName = SubscriptionName.of("your_project_id", "raw-events");
Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
subscriber.startAsync();
Thread.sleep(60000);
subscriber.stopAsync();

We now have a way of getting data from client applications and backend services to a central location for analysis. The last approach shown is a scalable and secure method for collecting tracking data, and it is a managed service, making it a good fit for startups with small data teams.

One of the decisions to make when sending data to an endpoint for collection is how to encode the messages being sent, since all events that are sent from an application to an endpoint need to be serialized. When sending data over the internet, it’s good to avoid language-specific encodings, such as Java serialization, because the application and backend services are likely implemented in different languages. There are also versioning issues that can arise when using a language-specific serialization approach.

Some common ways of encoding tracking events are the JSON format and Google’s protocol buffers. JSON has the benefit of being human readable and supported by a wide variety of languages, while protocol buffers provide better compression and may be better suited for certain data structures. One of the benefits of using these approaches is that a schema does not need to be defined before you can send events, since metadata about the event is included in the message. You can add new attributes as needed, and even change data types, but this may impact downstream event processing.

When getting started with building a data pipeline, I’d recommend using JSON, since it’s human readable and supported by a wide variety of languages.
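As a minimal sketch of this recommendation, assuming the Gson library is on the classpath and using an illustrative event class (the class and field names are not from the original post), a Java application could serialize an event object to JSON before sending it to a tracking endpoint:

import com.google.gson.Gson;

public class SessionEvent {
    // illustrative event attributes; a real event would follow the tracking spec
    String type = "Session";
    double version = 1.0;
    String userID = "12345";
    String platform = "iOS";

    public static void main(String[] args) {
        // serialize the event to a JSON string that can be sent to the endpoint
        String message = new Gson().toJson(new SessionEvent());
        System.out.println(message);
        // prints: {"type":"Session","version":1.0,"userID":"12345","platform":"iOS"}
    }
}

The resulting string can then be passed as the message payload in any of the approaches shown above.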
It’s also good to avoid encodings such as pipe-delimited formats, because you may need to support more complex data structures, such as lists or maps, when you update your tracking events. Here’s an example of what a message might look like:

# JSON
{"Type":"Session","Version":1.0,"UserID":"12345","Platform":"iOS"}

# Pipe delimited
Session|1.0|12345|iOS

What about XML? No!

To build a production system, you’ll need to add a bit more sophistication to your tracking code. A production system should handle the following issues:

Delivery Failures: if a message delivery fails, the system should retry sending the message, and have a backoff mechanism.

Queueing: if the endpoint is not available, such as a phone without a signal, the tracking library should be able to store events for later transmission, such as when wifi is available.

Batching: instead of sending a large number of small requests, it’s often useful to send batches of tracking events.

Prioritization: some messages are more important to track than others, such as preferring monetization events over click events. A tracking library should be able to prioritize more critical events.

It’s also useful to have a process in place for disabling tracking events. I’ve seen data pipelines explode from client applications sending way too much data, and there was no way of disabling the clients from sending the problematic event without turning off all tracking.

Ideally, a production-level system should have some sort of auditing in place, in order to validate that the endpoints are receiving all of the data being sent. One approach is to send data to a different endpoint built on a different infrastructure and tracking library, but that much redundancy is usually overkill. A more lightweight approach is to add a sequential counting attribute to all events, so if a client sends 100 messages, the backend can use this attribute to know how many events the client attempted to send and validate the result.

There are privacy concerns to consider when storing user data. When data is being made available to analytics and data science teams, all personally identifiable information (PII) should be stripped from events, which can include names, addresses, and phone numbers. In some instances, user names, such as a player’s gamertag on Steam, may be considered PII as well. It’s also good to strip IP addresses from any data being collected, to limit privacy concerns. The general recommendation is to collect as much behavioral data as needed to answer questions about product usage, while avoiding the need to collect sensitive information, such as gender and age. If you’re building a product based on sensitive information, you should have strong user access controls in place to limit access to sensitive data.
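As a hedged sketch of this kind of PII scrubbing, where events are represented as key-value maps and the attribute names are hypothetical (a real deployment would maintain the list per product and privacy policy), events could be cleaned before they reach analysts:

import java.util.*;

public class PiiFilter {

    // attribute names treated as PII in this sketch
    private static final Set<String> PII_FIELDS =
        new HashSet<>(Arrays.asList("name", "address", "phone", "ip", "gamertag"));

    // returns a copy of the event with PII attributes removed
    public static Map<String, String> strip(Map<String, String> event) {
        Map<String, String> cleaned = new HashMap<>(event);
        cleaned.keySet().removeAll(PII_FIELDS);
        return cleaned;
    }
}

The original event is left untouched, so systems that are permitted to see the raw data can still access it.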
Policies such as GDPR are setting new regulations for collecting and processing data, and GDPR should be reviewed before shipping a product with tracking.

Tracking data enables teams to answer a variety of questions about product usage, enables teams to track the performance and health of products, and can be used to build data products. This post discussed some of the issues involved in collecting data about user behavior, and provided examples for how to send data from a client application to an endpoint for later analysis. Here are the key takeaways from this post:

Use server-side tracking if possible. It helps avoid a wide variety of issues.

QA/test your tracking events. If you’re sending bad data, you may be drawing incorrect conclusions from your data.

Have a versioning system in place. You’ll need to add new events and modify existing events, and this should be a simple process.

Use JSON for sending events. It’s human readable, extensible, and supported by a wide variety of languages.

Use managed services for collecting data. You won’t need to spin up servers and can collect huge amounts of data.

As you ship more products and scale up your user base, you may need to change to a different data collection platform, but this advice is a good starting point for shipping products with tracking.

The next post will introduce different approaches for building data pipelines.

Ben Weber is a data scientist in the gaming industry with experience at Electronic Arts, Microsoft Studios, Daybreak Games, and Twitch. He also worked as the first data scientist at a FinTech startup.
[ { "code": null, "e": 359, "s": 171, "text": "Part two of my ongoing series about building a data science discipline at a startup. You can find links to all of the posts in the introduction, and a book based on this series on Amazon." }, { "code": null, "e": 727, "s": 359, "text": "In order to make data-driven decisions at a startup, you need to collect data about how your products are being used. You also need to be able to measure the impact of making changes to your product and the efficacy of running campaigns, such as deploying a custom audience for marketing on Facebook. Again, collecting data is necessary for accomplishing these goals." }, { "code": null, "e": 1160, "s": 727, "text": "Usually data is generated directly by the product. For example, a mobile game can generate data points about launching the game, starting additional sessions, and leveling up. But data can also come from other sources, such as an email vendor that provides response data about which users read and click on links within an email. This post focuses on the first type of data, where tracking events are being generated by the product." }, { "code": null, "e": 1197, "s": 1160, "text": "Why record data about product usage?" }, { "code": null, "e": 1589, "s": 1197, "text": "Track metrics: You may want to record performance metrics for tracking product health or other metrics useful for running the business.Enable experimentation: To determine if making changes to a product is beneficial, you need to be able to measure results.Build data products: In order to make something like a recommendation system, you need to know which items users are interacting with." }, { "code": null, "e": 1725, "s": 1589, "text": "Track metrics: You may want to record performance metrics for tracking product health or other metrics useful for running the business." }, { "code": null, "e": 1848, "s": 1725, "text": "Enable experimentation: To determine if making changes to a product is beneficial, you need to be able to measure results." }, { "code": null, "e": 1983, "s": 1848, "text": "Build data products: In order to make something like a recommendation system, you need to know which items users are interacting with." }, { "code": null, "e": 2260, "s": 1983, "text": "It’s been said that data is the new oil, and there’s a wide variety of reasons to collect data from products. When I first started in the gaming industry, data tracked from products was referred to as telemetry. Now, data collected from products is frequently called tracking." }, { "code": null, "e": 2470, "s": 2260, "text": "This posts discusses what type of data to collect about product usage, how to send data to a server for analysis, issues when building a tracking API, and some concerns to consider when tracking user behavior." }, { "code": null, "e": 2540, "s": 2470, "text": "One of the first questions to answer when deploying a new product is:" }, { "code": null, "e": 2589, "s": 2540, "text": "What data should we collect about user behavior?" }, { "code": null, "e": 2783, "s": 2589, "text": "The answer is that it depends on your product and intended use cases, but there are some general guidelines about what types of data to collect across most web, mobile, and native applications." }, { "code": null, "e": 2900, "s": 2783, "text": "Installs: How big is the user base?Sessions: How engaged is the user base?Monetization: How much are users spending?" }, { "code": null, "e": 2936, "s": 2900, "text": "Installs: How big is the user base?" 
}, { "code": null, "e": 2976, "s": 2936, "text": "Sessions: How engaged is the user base?" }, { "code": null, "e": 3019, "s": 2976, "text": "Monetization: How much are users spending?" }, { "code": null, "e": 3517, "s": 3019, "text": "For these three types of events, the data may actually be generated from three different systems. Installation data might come from a third party, such as Google Play or the App Store, a session start event will be generated from the client application, and spending money in an application, or viewing ads, may be tracked by a different server. As long as you own the service that is generating the data points, you can use the same infrastructure to collect data about different types of events." }, { "code": null, "e": 4150, "s": 3517, "text": "Collecting data about how many users launch and log into a application will enable you to answer basic questions about the size of your base, and enable you to track business metrics such as DAU, MAU, ARPDAU, and D-7 retention. However, it doesn’t provide much insight into what users are doing within an application, and it doesn’t provide many data points that are useful for building data products. In order to better understand user engagement, it’s necessary to track data points that are domain or product specific. For example, you might want to track the following types of events in a multiplayer shooter game for consoles:" }, { "code": null, "e": 4542, "s": 4150, "text": "GameStarted: tracks when the player starts a single or multiplayer game.PlayerSpawn: tracks when the player spawns into the game world and tracks the class that the user is playing, such as combat medic.PlayerDeath: tracks where players are dying and getting stuck and enables calculating metrics such as KDR (kill/death ratio).RankUp: tracks when the player levels up or unlocks a new rank." }, { "code": null, "e": 4615, "s": 4542, "text": "GameStarted: tracks when the player starts a single or multiplayer game." }, { "code": null, "e": 4747, "s": 4615, "text": "PlayerSpawn: tracks when the player spawns into the game world and tracks the class that the user is playing, such as combat medic." }, { "code": null, "e": 4873, "s": 4747, "text": "PlayerDeath: tracks where players are dying and getting stuck and enables calculating metrics such as KDR (kill/death ratio)." }, { "code": null, "e": 4937, "s": 4873, "text": "RankUp: tracks when the player levels up or unlocks a new rank." }, { "code": null, "e": 5128, "s": 4937, "text": "Most of these events translate well to other shooter games and other genres such as action/adventure. For a specific game, such as FIFA, you may want to record game specific events, such as:" }, { "code": null, "e": 5313, "s": 5128, "text": "GoalScored: tracks when a point is scored by the player or opponent.PlayerSubstitution: tracks when a player is substituted.RedCardReceived: tracks when the player receives a red card." }, { "code": null, "e": 5382, "s": 5313, "text": "GoalScored: tracks when a point is scored by the player or opponent." }, { "code": null, "e": 5439, "s": 5382, "text": "PlayerSubstitution: tracks when a player is substituted." }, { "code": null, "e": 5500, "s": 5439, "text": "RedCardReceived: tracks when the player receives a red card." }, { "code": null, "e": 5810, "s": 5500, "text": "Like the prior events, many of these game-specific events can actually be generalized to sports games. 
If you’re a company like EA with a portfolio of different sports titles, it’s useful to track all of these events across all of your sports titles (the red card event can be generalized to a penalty event)." }, { "code": null, "e": 5944, "s": 5810, "text": "If we’re able to collect these types of events about players, we can start to answer useful questions about the player base, such as:" }, { "code": null, "e": 6126, "s": 5944, "text": "Are users that receive more red cards more likely to quit?Do online focused players play more than single-player focused players?Do users play the new career mode that was released?" }, { "code": null, "e": 6185, "s": 6126, "text": "Are users that receive more red cards more likely to quit?" }, { "code": null, "e": 6257, "s": 6185, "text": "Do online focused players play more than single-player focused players?" }, { "code": null, "e": 6310, "s": 6257, "text": "Do users play the new career mode that was released?" }, { "code": null, "e": 6803, "s": 6310, "text": "A majority of tracking events are focused on collecting data points about released titles, but it’s also possible to collect data during development. At Microsoft Studios, I worked with the user research team to get tracking in place for playtesting. As a result, we could generate visualizations that were useful for conveying to game teams where players were getting stuck. Incorporating these visualizations into the playtesting results resulted in a much better reception from game teams." }, { "code": null, "e": 7248, "s": 6803, "text": "When you first add tracking to a product, you won’t know of every event and attribute that will be useful to record, but you can make a good guess by asking team members what types of questions they intend to ask about user behavior and by implementing events that are able to answer these questions. Even with good tracking data, you won’t be able to answer every question, but if you have good coverage you can start to improve your products." }, { "code": null, "e": 7783, "s": 7248, "text": "Some teams write tracking specifications to in order to define which tracking events need to be implemented in a product. Other teams don’t have any documentation and simply take a best guess approach at determining what to record. I highly recommend writing tracking specifications as a best practice. For each event, the spec should identify the conditions for firing an event, the attributes to send, and definitions for any event-specific attributes. For example, a session start event for a web app might have the following form:" }, { "code": null, "e": 7999, "s": 7783, "text": "Condition: fired when the user first browses to the domain. The event should not be fired when the user clicks on new pages or uses the back button, but should fire it the user browses to a new domain and then back." }, { "code": null, "e": 8090, "s": 7999, "text": "Properties: web browser and version, userID, landing page, referring URL, client timestamp" }, { "code": null, "e": 8274, "s": 8090, "text": "Definitions: referring URL should list the URL of the page that referred the user to this domain, or the application that referred the user to the web page (e.g. Facebook or Twitter)." }, { "code": null, "e": 8713, "s": 8274, "text": "Tracking specs are a highly useful piece of documentation. 
Small teams might be able to get away without having an official process for writing tracking specs, but a number of scenarios can make the documentation critical, such as implementing events on a new platform, re-implementing events for a new backend service, or having engineers leave the team. In order for specs to be useful, it’s necessary to answer the following questions:" }, { "code": null, "e": 8849, "s": 8713, "text": "Who is responsible for writing the spec?Who is responsible for implementing the spec?Who is responsible for testing the implementation?" }, { "code": null, "e": 8890, "s": 8849, "text": "Who is responsible for writing the spec?" }, { "code": null, "e": 8936, "s": 8890, "text": "Who is responsible for implementing the spec?" }, { "code": null, "e": 8987, "s": 8936, "text": "Who is responsible for testing the implementation?" }, { "code": null, "e": 9201, "s": 8987, "text": "In small organizations, a data scientist might be responsible for all of the aspects of tracking. For a larger organization, it’s common for the owners to be a product manager, engineering team, and testing group." }, { "code": null, "e": 9720, "s": 9201, "text": "Another consideration when setting up tracking for a product is determining whether to send events from a client application or a backend service. For example, a video-streaming web site can send data about which video a user is watching directly from the web browser, or from the backend service that is serving the video. While there are pros and cons to both approaches, I prefer setting up tracking for backend services rather than client applications if possible. Some of the benefits of server-side tracking are:" }, { "code": null, "e": 10242, "s": 9720, "text": "Trusted Source: You don’t need to expose an endpoint on the web, and you know that events are being generated from your services rather than bots. This helps avoid fraud and DDoS attacks.Avoid Ad Blocking: If you send data from a client application to an endpoint exposed on the web, some users may block access to the endpoint, which impacts business metrics.Versioning: You might need to make changes to an event. You can update your web servers as needed, but often cannot require users to update a client application." }, { "code": null, "e": 10430, "s": 10242, "text": "Trusted Source: You don’t need to expose an endpoint on the web, and you know that events are being generated from your services rather than bots. This helps avoid fraud and DDoS attacks." }, { "code": null, "e": 10604, "s": 10430, "text": "Avoid Ad Blocking: If you send data from a client application to an endpoint exposed on the web, some users may block access to the endpoint, which impacts business metrics." }, { "code": null, "e": 10766, "s": 10604, "text": "Versioning: You might need to make changes to an event. You can update your web servers as needed, but often cannot require users to update a client application." }, { "code": null, "e": 10949, "s": 10766, "text": "Generating tracking from servers rather than client applications helps avoid issues around fraud, security, and versioning. However, there are some drawbacks to server-side tracking:" }, { "code": null, "e": 11476, "s": 10949, "text": "Testing: You might need to add new events or modify existing tracking events for testing purposes. This is often easier to do by making changes on the client side.Data availability: Some of the events that you might want to track do not make calls to a web server. 
For example, a console game might not connect to any web services during a session start, and instead want until a multiplayer match starts. Also, attributes such as the referring URL may only be available for the client application and not the backend service." }, { "code": null, "e": 11640, "s": 11476, "text": "Testing: You might need to add new events or modify existing tracking events for testing purposes. This is often easier to do by making changes on the client side." }, { "code": null, "e": 12004, "s": 11640, "text": "Data availability: Some of the events that you might want to track do not make calls to a web server. For example, a console game might not connect to any web services during a session start, and instead want until a multiplayer match starts. Also, attributes such as the referring URL may only be available for the client application and not the backend service." }, { "code": null, "e": 12358, "s": 12004, "text": "A general guideline is to not trust anything sent by a client application, because often endpoints are not secured and there is no way to verify that the data was generated by your application. But client data is very useful, so it’s best to combine both client and server side tracking and to secure endpoints used for collecting tracking from clients." }, { "code": null, "e": 12902, "s": 12358, "text": "The goal of sending data to a server is to make the data available for analysis and data products. There’s a number of different approaches that can be used based on your use case. This section introduces three different ways of sending events to an endpoint on the web and saving the events to local storage. The samples below are not intended to be production code, but instead simple proofs of concept. The next post in this series will cover building a pipeline for processing events. All code for the samples below is available on Github." }, { "code": null, "e": 13108, "s": 12902, "text": "Web CallThe easiest way to set up a tracking service is by making web calls with the event data to a web site. This can be implemented with a lightweight PHP script, which is shown in the code block below." }, { "code": null, "e": 13318, "s": 13108, "text": "<?php $message = $_GET['message']; if ($message != '') { $dataFile = fopen(\"telemetry.log\", \"a\"); fwrite($dataFile, \"$message\\n\"); fflush($dataFile); fclose($dataFile); }?>" }, { "code": null, "e": 13464, "s": 13318, "text": "This php script reads the message parameter from the URL and appends the message to a local file. The script can be invoked by making a web call:" }, { "code": null, "e": 13508, "s": 13464, "text": "http://.../tracking.php?message=Hello_World" }, { "code": null, "e": 13584, "s": 13508, "text": "The call can be made from a Java client or server using the following code:" }, { "code": null, "e": 13979, "s": 13584, "text": "// endpointString endPoint = \"http://.../tracking.php\";// send the messageString message = \"Hello_World_\" + System.currentTimeMillis(); URL url = new URL(endPoint + \"?message=\" + message); URLConnection con = url.openConnection(); BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream())); // process the response while (in.readLine() != null) {} in.close();" }, { "code": null, "e": 14271, "s": 13979, "text": "This is one of the easiest ways to start collecting tracking data, but it doesn’t scale and it’s not secure. It’s useful for testing, but should be avoided for anything customer facing. 
I did use this approach in the past to collect data about players for a Mario level generator experiment." }, { "code": null, "e": 14623, "s": 14271, "text": "Web Server Another approach you can use is setting up a web service to collect tracking events. The code below shows how to use Jetty to set up a lightweight service for collecting data. In order to compile and run the example, you’ll need to include the following pom file. The first step is to start a web service that will handle tracking requests:" }, { "code": null, "e": 15047, "s": 14623, "text": "public class TrackingServer extends AbstractHandler { public static void main(String[] args) throws Exception { Server server = new Server(8080); server.setHandler(new TrackingServer()); server.start(); server.join(); } public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { // Process Request }}" }, { "code": null, "e": 15266, "s": 15047, "text": "In order to process events, the application reads the message parameter from the web request, appends the message to a local file, and then responds to the web request. The full code for this example is available here." }, { "code": null, "e": 15622, "s": 15266, "text": "// append the event data to a local file String message = baseRequest.getParameter(\"message\");if (message != null) { BufferedWriter writer = new BufferedWriter( new FileWriter(\"tracking.log\", true)); writer.write(message + \"\\n\"); writer.close();}// service the web requestresponse.setStatus(HttpServletResponse.SC_OK);baseRequest.setHandled(true);" }, { "code": null, "e": 15693, "s": 15622, "text": "In order to call the endpoint with Java, we’ll need to modify the URL:" }, { "code": null, "e": 15757, "s": 15693, "text": "URL url = new URL(\"http://localhost:8080/?message=\" + message);" }, { "code": null, "e": 16044, "s": 15757, "text": "This approach can scale a bit more than the PHP approach, but is still insecure and not the best approach for building a production system. My advice for building a production ready tracking service is to use a stream processing system such as Kafka, Amazon Kinesis, or Google’s PubSub." }, { "code": null, "e": 16563, "s": 16044, "text": "Subscription ServiceUsing messaging services such as PubSub enables systems to collect massive amounts of tracking data, and forward the data to a number of different consumers. Some systems such as Kafka require setting up and maintaining servers, while other approaches like PubSub are managed services that are serverless. Managed services are great for startups, because they reduce the amount of DevOps support needed. But the tradeoff is cost, and it’s pricer to use managed services for massive data collection." }, { "code": null, "e": 16911, "s": 16563, "text": "The code below shows how to use Java to post a message to a topic on PubSub. The full code listing is available here and the pom file for building the project is available here. In order to run this example, you’ll need to set up a free google cloud project, and enable PubSub. More details on setting up GCP and PubSub are available in this post." 
}, { "code": null, "e": 17530, "s": 16911, "text": "// Set up a publisherTopicName topicName = TopicName.of(\"projectID\", \"raw-events\");Publisher publisher = Publisher.newBuilder(topicName).build();//schedule a message to be publishedString message = \"Hello World!\";PubsubMessage pubsubMessage = PubsubMessage.newBuilder() .setData(ByteString.copyFromUtf8(message)).build();// publish the message, and add this class as a callback listenerApiFuture<String> future = publisher.publish(pubsubMessage);ApiFutures.addCallback(future, new ApiFutureCallback<String>() { public void onFailure(Throwable arg0) {} public void onSuccess(String arg0) {}});publisher.shutdown();" }, { "code": null, "e": 17946, "s": 17530, "text": "This code example shows how to send a single message to PubSub for recording a tracking event. For a production system, you’ll want to implement the onFailure method in order to deal with failed deliveries. The code above shows how to send a message with Java, while other languages are supported including Go, Python, C#, and PHP. It’s also possible to interface with other stream processing systems such as Kafka." }, { "code": null, "e": 18156, "s": 17946, "text": "The next code segment shows how to read a message from PubSub and append the message to a local file. The full code listing is available here. In the next post I’ll show how to consume messages using DataFlow." }, { "code": null, "e": 18836, "s": 18156, "text": "// set up a message handlerMessageReceiver receiver = new MessageReceiver() { public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) { try { BufferedWriter writer = new BufferedWriter(new FileWriter(\"tracking.log\", true)); writer.write(message.getData().toStringUtf8() + \"\\n\"); writer.close(); consumer.ack(); } catch (Exception e) {}}};// start the listener for 1 minuteSubscriptionName subscriptionName = SubscriptionName.of(\"your_project_id\", \"raw-events\");Subscriber subscriber = Subscriber.newBuilder( subscriptionName, receiver).build();subscriber.startAsync();Thread.sleep(60000);subscriber.stopAsync();" }, { "code": null, "e": 19120, "s": 18836, "text": "We now have a way of getting data from client applications and backend services to a central location for analysis. The last approach shown is a scalable and secure method for collecting tracking data, and is a managed service making it a good fit for startups with small data teams." }, { "code": null, "e": 19634, "s": 19120, "text": "One of the decisions to make when sending data to an endpoint for collection is how to encode the messages being sent, since all events that are sent from an application to an endpoint need to be serialized. When sending data over the internet, it’s good to avoid language specific encodings, such as Java serialization, because the application and backend services are likely implemented in different languages. There’s also versioning issues that can arise when using a language-specific serialization approach." }, { "code": null, "e": 20212, "s": 19634, "text": "Some common ways of encoding tracking events are using the JSON format and Google’s protocol buffers. JSON has the benefit of being human readable and supported by a wide variety of languages, while buffers provide better comprension and may better suited for certain data structures. One of the benefits of using these approaches is that a schema does not need to be defined before you can send events, since metadata about the event is included in the message. 
You can add new attributes as needed, and even change data types, but this may impact downstream event processing." }, { "code": null, "e": 20622, "s": 20212, "text": "When getting started with building a data pipeline, I’d recommended using JSON to get started, since it’s human readable and supported by a wide variety of languages. It’s also good to avoid encodings such as pipe-delimited formats, because you many need to support more complex data structures, such as lists or maps, when you update your tracking events. Here’s an example of what a message might look like:" }, { "code": null, "e": 20732, "s": 20622, "text": "# JSON{\"Type\":\"Session\",\"Version\":1.0,\"UserID\":\"12345\",\"Platform\":\"iOS\"}# Pipe delimitedSession|1.0|12345|iOS" }, { "code": null, "e": 20752, "s": 20732, "text": "What about XML? No!" }, { "code": null, "e": 20906, "s": 20752, "text": "To build a production system, you’ll need to add a bit more sophistication to your tracking code. A production system should handle the following issues:" }, { "code": null, "e": 21529, "s": 20906, "text": "Delivery Failures: if a message delivery fails, the system should retry sending the message, and have a backoff mechanism.Queueing: if the endpoint is not available, such as a phone without a signal, the trackling library should be able to store events for later transmission, such as when wifi is available.Batching: instead of sending a large number of small requests, it’s often useful to send batches of tracking events.Prioritization: some messages are more important to track than others, such as preferring monetization events over click events. A tracking library should be able to prioritize more critical events." }, { "code": null, "e": 21652, "s": 21529, "text": "Delivery Failures: if a message delivery fails, the system should retry sending the message, and have a backoff mechanism." }, { "code": null, "e": 21839, "s": 21652, "text": "Queueing: if the endpoint is not available, such as a phone without a signal, the trackling library should be able to store events for later transmission, such as when wifi is available." }, { "code": null, "e": 21956, "s": 21839, "text": "Batching: instead of sending a large number of small requests, it’s often useful to send batches of tracking events." }, { "code": null, "e": 22155, "s": 21956, "text": "Prioritization: some messages are more important to track than others, such as preferring monetization events over click events. A tracking library should be able to prioritize more critical events." }, { "code": null, "e": 22430, "s": 22155, "text": "It’s also useful to have a process in place for disabling tracking events. I’ve seen data pipelines explode from client applications sending way too much data, and there was no way of disabling the clients from sending the problematic event without turning off all tracking." }, { "code": null, "e": 22981, "s": 22430, "text": "Ideally, a production level system should have some sort of auditing in place, in order to validate that the endpoints are receiving all of the data being sent. One approach is to send data to a different endpoint built on a different infrastructure and tracking library, but that much redundancy is usually overkill. A more lightweight approach is to add a sequential counting attribute to all events, so if a client sends 100 messages, the backend can use this attribute to know how many events the client attempted to send and validate the result." 
}, { "code": null, "e": 23940, "s": 22981, "text": "There’s privacy concerns to consider when storing user data. When data is being made available to analytics and data science teams, all personally identifiable information (PII) should be stripped from events, which can include names, addresses, and phone numbers. In some instances, user names, such as a player’s gamertag on Steam, may be considered PII as well. It’s also good to strip IP addresses from any data being collected, to limit privacy concerns. The general recommendation is to collect as much behavioral data as needed to answer questions about product usage, while avoiding the need to collect sensitive information, such as gender and age. If you’re building a product based on sensitive information, you should have strong user access controls in place to limit access to sensitive data. Policies such GDPR are setting new regulations for collecting and processing data, and GDPR should be reviewed before shipping a product with tracking." }, { "code": null, "e": 24363, "s": 23940, "text": "Tracking data enables teams to answer a variety of questions about product usage, enables teams to track the performance and health of products, and can be used to build data products. This post discussed some of the issues involved in collecting data about user behavior, and provided examples for how to send data from a client application to an endpoint for later analysis. Here are the key takeaways to from this post:" }, { "code": null, "e": 24904, "s": 24363, "text": "Use server-side tracking if possible. It helps avoid a wide variety of issues.QA/test your tracking events. If you’re sending bad data, you may be drawing incorrect conclusions from your data.Have a versioning system in place. You’ll need to add new events and modify existing events, and this should be a simple process.Use JSON for sending events. It’s human readable, extensible, and supported by a wide variety of languagesUse managed services for collecting data. You won’t need to spin up servers and can collect huge amounts of data." }, { "code": null, "e": 24983, "s": 24904, "text": "Use server-side tracking if possible. It helps avoid a wide variety of issues." }, { "code": null, "e": 25098, "s": 24983, "text": "QA/test your tracking events. If you’re sending bad data, you may be drawing incorrect conclusions from your data." }, { "code": null, "e": 25228, "s": 25098, "text": "Have a versioning system in place. You’ll need to add new events and modify existing events, and this should be a simple process." }, { "code": null, "e": 25335, "s": 25228, "text": "Use JSON for sending events. It’s human readable, extensible, and supported by a wide variety of languages" }, { "code": null, "e": 25449, "s": 25335, "text": "Use managed services for collecting data. You won’t need to spin up servers and can collect huge amounts of data." }, { "code": null, "e": 25646, "s": 25449, "text": "As you ship more products and scale up your user base, you may need to change to a different data collection platform, but this advice is a good starting point for shipping products with tracking." }, { "code": null, "e": 25725, "s": 25646, "text": "The next post will introduce different approaches for building data pipelines." } ]
How to implement custom JsonAdapter using Gson in Java?
The @JsonAdapter annotation can be used at the field or class level to specify a custom Gson type adapter. The TypeAdapter class can be used to convert Java objects to and from JSON. By default, the Gson library converts application classes to JSON by using built-in type adapters, but we can override this behavior by providing custom type adapters.

@Retention(value=RUNTIME)
@Target(value={TYPE,FIELD})
public @interface JsonAdapter

import java.io.IOException;
import com.google.gson.Gson;
import com.google.gson.TypeAdapter;
import com.google.gson.annotations.JsonAdapter;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonWriter;

public class JsonAdapterTest {
    public static void main(String[] args) {
        Gson gson = new Gson();
        System.out.println(gson.toJson(new Customer()));
    }
}

// Customer class
class Customer {
    @JsonAdapter(CustomJsonAdapter.class)
    Integer customerId = 101;
}

// CustomJsonAdapter class
class CustomJsonAdapter extends TypeAdapter<Integer> {
    @Override
    public Integer read(JsonReader jreader) throws IOException {
        // deserialization is not exercised by this example, so it is left unimplemented
        return null;
    }
    @Override
    public void write(JsonWriter jwriter, Integer customerId) throws IOException {
        // wrap the value in an object with a "customerId" property
        jwriter.beginObject();
        jwriter.name("customerId");
        jwriter.value(String.valueOf(customerId));
        jwriter.endObject();
    }
}

Output:

{"customerId":{"customerId":"101"}}
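The read method in the example above is stubbed out. As a hedged sketch, assuming we want to reverse the wrapper object produced by the write method, a matching implementation could look like this:

@Override
public Integer read(JsonReader jreader) throws IOException {
    // consume the wrapper object written by write() above
    jreader.beginObject();
    jreader.nextName(); // skip the "customerId" property name
    Integer customerId = Integer.valueOf(jreader.nextString());
    jreader.endObject();
    return customerId;
}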
[ { "code": null, "e": 1375, "s": 1062, "text": "The @JsonAdapter annotation can be used at field or class level to specify the Gson. The TypeAdapter class can be used to convert Java objects to and from JSON. By default, Gson library converts application classes to JSON by using built-in type adapters but we can override it by providing custom type adapters." }, { "code": null, "e": 1459, "s": 1375, "text": "@Retention(value=RUNTIME)\n@Target(value={TYPE,FIELD})\npublic @interface JsonAdapter" }, { "code": null, "e": 2382, "s": 1459, "text": "import java.io.IOException;\nimport com.google.gson.Gson;\nimport com.google.gson.TypeAdapter;\nimport com.google.gson.annotations.JsonAdapter;\nimport com.google.gson.stream.JsonReader;\nimport com.google.gson.stream.JsonWriter;\npublic class JsonAdapterTest {\n public static void main(String[] args) {\n Gson gson = new Gson();\n System.out.println(gson.toJson(new Customer()));\n }\n}\n// Customer class\nclass Customer {\n @JsonAdapter(CustomJsonAdapter.class)\n Integer customerId = 101;\n}\n// CustomJsonAdapter class\nclass CustomJsonAdapter extends TypeAdapter<Integer> {\n @Override\n public Integer read(JsonReader jreader) throws IOException {\n return null;\n }\n @Override\n public void write(JsonWriter jwriter, Integer customerId) throws IOException {\n jwriter.beginObject();\n jwriter.name(\"customerId\");\n jwriter.value(String.valueOf(customerId));\n jwriter.endObject();\n }\n}" }, { "code": null, "e": 2418, "s": 2382, "text": "{\"customerId\":{\"customerId\":\"101\"}}" } ]
Python - Eliminate Capital Letter Starting words from String
Sometimes, while working with Python strings, we can have a problem in which we need to remove all the words beginning with capital letters. Words that begin with capital letters are proper nouns, so their occurrence changes the meaning of the sentence and can sometimes be undesired. Let’s discuss certain ways in which this task can be performed.

Input : test_str = ‘GeeksforGeeks is best for Geeks’
Output : ‘ is best for ‘

Input : test_str = ‘GeeksforGeeks Is Best For Geeks’
Output : ”

Method #1 : Using join() + split() + isupper()

The combination of the above functions can provide one of the ways in which this problem can be solved. In this, we split the string into words, keep only the words that do not start with an upper case letter using isupper(), and then perform join() to get the resultant string.

# Python3 code to demonstrate working of
# Eliminate Capital Letter Starting words from String
# Using join() + split() + isupper()

# initializing string
test_str = 'GeeksforGeeks is Best for Geeks'

# printing original string
print("The original string is : " + str(test_str))

# Eliminate Capital Letter Starting words from String
# Using join() + split() + isupper()
temp = test_str.split()
res = " ".join([ele for ele in temp if not ele[0].isupper()])

# printing result
print("The filtered string : " + str(res))

The original string is : GeeksforGeeks is Best for Geeks
The filtered string : is for

Method #2 : Using regex()

Using regex is one of the ways in which this problem can be solved. In this, we substitute away all the words that begin with an upper case letter using an appropriate regex.

# Python3 code to demonstrate working of
# Eliminate Capital Letter Starting words from String
# Using regex()
import re

# initializing string
test_str = 'GeeksforGeeks is Best for Geeks'

# printing original string
print("The original string is : " + str(test_str))

# Eliminate Capital Letter Starting words from String
# Using regex()
res = re.sub(r"\s*[A-Z]\w*\s*", " ", test_str).strip()

# printing result
print("The filtered string : " + str(res))

The original string is : GeeksforGeeks is Best for Geeks
The filtered string : is for
[ { "code": null, "e": 25537, "s": 25509, "text": "\n16 Feb, 2022" }, { "code": null, "e": 25890, "s": 25537, "text": "Sometimes, while working with Python Strings, we can have a problem in which we need to remove all the words beginning with capital letters. Words that begin with capital letters are proper nouns and their occurrence mean different meaning to the sentence and can be sometimes undesired. Let’s discuss certain ways in which this task can be performed. " }, { "code": null, "e": 26033, "s": 25890, "text": "Input : test_str = ‘GeeksforGeeks is best for Geeks’ Output : ‘ is best for ‘Input : test_str = ‘GeeksforGeeks Is Best For Geeks’ Output : ” " }, { "code": null, "e": 26335, "s": 26033, "text": "Method #1 : Using join() + split() + isupper() The combination of the above functions can provide one of the ways in which this problem can be solved. In this, we perform the task of extracting individual strings with an upper case using isupper() and then perform join() to get the resultant result. " }, { "code": null, "e": 26343, "s": 26335, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Eliminate Capital Letter Starting words from String# Using join() + split() + isupper() # initializing stringtest_str = 'GeeksforGeeks is Best for Geeks' # printing original stringprint(\"The original string is : \" + str(test_str)) # Eliminate Capital Letter Starting words from String# Using join() + split() + isupper()temp = test_str.split()res = \" \".join([ele for ele in temp if not ele[0].isupper()]) # printing resultprint(\"The filtered string : \" + str(res))", "e": 26854, "s": 26343, "text": null }, { "code": null, "e": 26940, "s": 26854, "text": "The original string is : GeeksforGeeks is Best for Geeks\nThe filtered string : is for" }, { "code": null, "e": 27120, "s": 26942, "text": " Method #2 : Using regex() Using regex is one of the ways in which this problem can be solved. In this, we extract all the elements that are upper case using appropriate regex. " }, { "code": null, "e": 27128, "s": 27120, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Eliminate Capital Letter Starting words from String# Using regex()import re # initializing stringtest_str = 'GeeksforGeeks is Best for Geeks' # printing original stringprint(\"The original string is : \" + str(test_str)) # Eliminate Capital Letter Starting words from String# Using regex()res = re.sub(r\"\\s*[A-Z]\\w*\\s*\", \" \", test_str).strip() # printing resultprint(\"The filtered string : \" + str(res))", "e": 27576, "s": 27128, "text": null }, { "code": null, "e": 27662, "s": 27576, "text": "The original string is : GeeksforGeeks is Best for Geeks\nThe filtered string : is for" }, { "code": null, "e": 27682, "s": 27664, "text": "reenadevi98412200" }, { "code": null, "e": 27705, "s": 27682, "text": "Python string-programs" }, { "code": null, "e": 27712, "s": 27705, "text": "Python" }, { "code": null, "e": 27728, "s": 27712, "text": "Python Programs" }, { "code": null, "e": 27826, "s": 27728, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27858, "s": 27826, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27900, "s": 27858, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27942, "s": 27900, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 27969, "s": 27942, "text": "Python Classes and Objects" }, { "code": null, "e": 28025, "s": 27969, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28047, "s": 28025, "text": "Defaultdict in Python" }, { "code": null, "e": 28086, "s": 28047, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 28132, "s": 28086, "text": "Python | Split string into list of characters" }, { "code": null, "e": 28170, "s": 28132, "text": "Python | Convert a list to dictionary" } ]
Java Program To Jumble an Array
Given an array of size N, the task is to shuffle its elements into a random permutation. In Java, this can be done by generating a random sequence of the elements present in the array. This algorithm is called the Fisher-Yates Shuffle Algorithm.

The Fisher–Yates shuffle algorithm works in O(n) time complexity, under the assumption that a given function rand() generates a random number in O(1) time. Start from the last element and swap it with a randomly selected element from the whole array. Now consider the array from 0 to n-2 (size reduced by 1), and repeat till we hit the first element.

Example:

Input : arr[] = {1, 2, 3, 4}
Output: arr[] = {3, 2, 4, 1}

Input : arr[] = {5, 2, 3, 4}
Output: arr[] = {2, 4, 3, 5}

Algorithm:

for i from n - 1 downto 1 do
    j = random integer with 0 <= j <= i
    exchange a[j] and a[i]

Below is the implementation of the above approach:

// Program to jumble an array using Java
import java.util.Random;
import java.io.*;

public class GFG {
    public static void shuffleanarray(int[] a) {
        int n = a.length;
        Random random = new Random();
        for (int i = 0; i < n; i++) {
            // pick a random index from the remaining positions
            int change = i + random.nextInt(n - i);

            // swap elements to shuffle
            int holder = a[i];
            a[i] = a[change];
            a[change] = holder;
        }
    }

    public static void main(String[] args) {
        int[] a = new int[] { 0, 1, 2, 3, 4, 5, 6 };
        shuffleanarray(a);
        System.out.print("arr[] = {");
        for (int i : a) {
            System.out.print(i + " ");
        }
        System.out.print("}");
    }
}

arr[] = {4 0 6 1 5 3 2 }

Time Complexity: O(N), where N is the size of the array.
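For reference, the Java standard library provides the same approach through Collections.shuffle, which can be used when the input is held in a List rather than a primitive array:

import java.util.*;

public class ShuffleWithLibrary {
    public static void main(String[] args) {
        // Collections.shuffle performs the same backwards-swapping
        // Fisher-Yates shuffle described above
        List<Integer> a = new ArrayList<>(Arrays.asList(0, 1, 2, 3, 4, 5, 6));
        Collections.shuffle(a);
        System.out.println(a);
    }
}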
[ { "code": null, "e": 25233, "s": 25205, "text": "\n24 Nov, 2020" }, { "code": null, "e": 25515, "s": 25233, "text": "Given an array of size N, and the task is to shuffle the elements of the array or any other permutation. Using Java think of stuffing the array by generating random sequences of the elements present in the array is possible. This Algorithm is called Fisher-Yates Shuffle Algorithm." }, { "code": null, "e": 25857, "s": 25515, "text": "Fisher–Yates shuffle Algorithm Works in O(n) time complexity. The assumption is that a give a function rand() generates a random number in O(1) time. Start from the last element, swap it with a randomly selected element from the whole array. Now consider the array from 0 to n-2 (size reduced by 1), and repeat till we hit the first element." }, { "code": null, "e": 25866, "s": 25857, "text": "Example:" }, { "code": null, "e": 25984, "s": 25866, "text": "Input : arr[] = {1, 2, 3, 4}\nOutput: arr[] = {3, 2, 4, 1}\n\nInput : arr[] = {5, 2, 3, 4}\nOutput: arr[] = {2, 4, 3, 5}\n" }, { "code": null, "e": 25995, "s": 25984, "text": "Algorithm:" }, { "code": null, "e": 26100, "s": 25995, "text": " for i from n - 1 downto 1 do\n j = random integer with 0 <= j <= i\n exchange a[j] and a[i]\n" }, { "code": null, "e": 26151, "s": 26100, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 26156, "s": 26151, "text": "Java" }, { "code": "// Program to jumble an array using Javaimport java.util.Random;import java.io.*; public class GFG { public static void shuffleanarray(int[] a) { int n = a.length; Random random = new Random(); // generating random number from list random.nextInt(); for (int i = 0; i < n; i++) { // using random generated number int change = i + random.nextInt(n - i); // swapping elements to shuffle int holder = a[i]; a[i] = a[change]; a[change] = holder; } } public static void main(String[] args) { int[] a = new int[] { 0, 1, 2, 3, 4, 5, 6 }; shuffleanarray(a); System.out.print(\"arr[] = {\"); for (int i : a) { System.out.print(i + \" \"); } System.out.print(\"}\"); }}", "e": 27024, "s": 26156, "text": null }, { "code": null, "e": 27050, "s": 27024, "text": "arr[] = {4 0 6 1 5 3 2 }\n" }, { "code": null, "e": 27106, "s": 27050, "text": "Time Complexity: O(N), where N is the size of an array." }, { "code": null, "e": 27126, "s": 27106, "text": "Java-Array-Programs" }, { "code": null, "e": 27131, "s": 27126, "text": "Java" }, { "code": null, "e": 27145, "s": 27131, "text": "Java Programs" }, { "code": null, "e": 27150, "s": 27145, "text": "Java" }, { "code": null, "e": 27248, "s": 27150, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27299, "s": 27248, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 27329, "s": 27299, "text": "HashMap in Java with Examples" }, { "code": null, "e": 27344, "s": 27329, "text": "Stream In Java" }, { "code": null, "e": 27363, "s": 27344, "text": "Interfaces in Java" }, { "code": null, "e": 27394, "s": 27363, "text": "How to iterate any Map in Java" }, { "code": null, "e": 27422, "s": 27394, "text": "Initializing a List in Java" }, { "code": null, "e": 27466, "s": 27422, "text": "Convert a String to Character Array in Java" }, { "code": null, "e": 27492, "s": 27466, "text": "Java Programming Examples" }, { "code": null, "e": 27526, "s": 27492, "text": "Convert Double to Integer in Java" } ]
GATE | GATE-CS-2006 | Question 26
Which one of the first order predicate calculus statements given below correctly expresses the following English statement?

Tigers and lions attack if they are hungry or threatened.

(A) A
(B) B
(C) C
(D) D

Answer: (D)

Explanation: The statement "Tigers and lions attack if they are hungry or threatened" means that if an animal is either a tiger or a lion, then if it is hungry or threatened, it will attack. So option (D) is correct.

Don't get confused by "and" between tigers and lions in the statement. This "and" doesn't mean that we will write "tiger(x) ∧ lion(x)", because that would have meant that an animal is both a tiger and a lion, which is not what we want.

Source: www.cse.iitd.ac.in/~mittal/gate/gate_math_2006.html
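The reading described in the explanation can be written out as a first order formula. The rendering below is our reconstruction from that explanation (the option bodies themselves are not reproduced in the text above):

\[
\forall x \, \Big[ \big(\mathrm{tiger}(x) \lor \mathrm{lion}(x)\big) \rightarrow \Big( \big(\mathrm{hungry}(x) \lor \mathrm{threatened}(x)\big) \rightarrow \mathrm{attacks}(x) \Big) \Big]
\]

Note the implication, not a conjunction, between "hungry or threatened" and "attacks": the animals attack if they are hungry or threatened, which says nothing about what they do otherwise.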
Pi(π) in C++ with Examples
In this article, we will discuss some of the mathematical functions that can be used to derive the value of Pi(π) in C++.

Method 1: Using the acos() function

Approach:

- The value of π is calculated using the acos() function, which returns a numeric value in the range [0, π].
- Since acos(0.0) returns the value π/2, the value of π can be obtained as:

double pi = 2 * acos(0.0);

- The value obtained from the above expression is then printed:

printf("%f\n", pi);

Below is the implementation of the above approach:

// C++ program for the above approach
#include "bits/stdc++.h"
using namespace std;

// Function that prints the value of pi
void printValueOfPi()
{
    // Find value of pi using
    // the acos() function
    double pi = 2 * acos(0.0);

    // Print value of pi
    printf("%f\n", pi);
}

// Driver Code
int main()
{
    // Function that prints the value of pi
    printValueOfPi();
    return 0;
}

Output:

3.141593

Method 2: Using the asin() function

Approach:

- The value of π is calculated using the asin() function, which returns a numeric value in the range [-π/2, π/2].
- Since asin(1.0) returns the value π/2, the value of π can be obtained as:

double pi = 2 * asin(1.0);

- The value obtained from the above expression is then printed:

printf("%f\n", pi);

Below is the implementation of the above approach:

// C++ program for the above approach
#include "bits/stdc++.h"
using namespace std;

// Function that prints the value of pi
void printValueOfPi()
{
    // Find value of pi using
    // the asin() function
    double pi = 2 * asin(1.0);

    // Print value of pi
    printf("%f\n", pi);
}

// Driver Code
int main()
{
    // Function that prints the value of pi
    printValueOfPi();
    return 0;
}

Output:

3.141593

Method 3: Using the inbuilt constant defined in the "cmath" library

The value of Pi(π) can be written directly using the constant stored in the cmath library. The name of the constant is M_PI. Below is the program for printing the value of Pi:

// C++ program for the above approach
#include "cmath"
#include "iostream"
using namespace std;

// Function that prints the value of pi
void printValueOfPi()
{
    // Print value of pi
    printf("%f\n", M_PI);
}

// Driver Code
int main()
{
    // Function that prints the value of pi
    printValueOfPi();
    return 0;
}

Output:

3.141593
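The identities behind Methods 1 and 2 are simple consequences of the definitions of the inverse trigonometric functions:

\[
\cos\!\left(\tfrac{\pi}{2}\right) = 0 \;\Rightarrow\; \arccos(0) = \tfrac{\pi}{2} \;\Rightarrow\; \pi = 2\arccos(0),
\qquad
\sin\!\left(\tfrac{\pi}{2}\right) = 1 \;\Rightarrow\; \arcsin(1) = \tfrac{\pi}{2} \;\Rightarrow\; \pi = 2\arcsin(1).
\]

The same idea works with other inverse functions, for example \(\pi = 4\arctan(1)\). Also note that M_PI is a POSIX/compiler extension rather than part of standard C++; on some compilers (for example MSVC) it only becomes available after defining _USE_MATH_DEFINES before including <cmath>.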
Maze With N doors and 1 Key
Given an N * N binary maze where a 0 denotes that the position can be visited and a 1 denotes that the position cannot be visited without a key, the task is to find whether it is possible to visit the bottom-right cell from the top-left cell with only one key along the way. If possible then print "Yes" else print "No".

Example:

Input: maze[][] = {
    {0, 0, 1},
    {1, 0, 1},
    {1, 1, 0}}
Output: Yes

Approach: This problem can be solved using recursion. For every possible move, if the current cell is 0 then, without altering the status of the key, check whether it is the destination, else move forward. If the current cell is 1 then the key must be used; for all further moves the key will be set to false, i.e. it will never be used again on the same path. If any path reaches the destination then print Yes, else print No.

Below is the implementation of the above approach:

C++

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Recursive function to check whether there is
// a path from the top left cell to the
// bottom right cell of the maze
bool findPath(vector<vector<int> > maze, int xpos,
              int ypos, bool key)
{
    // Check whether the current cell is
    // within the maze
    if (xpos < 0 || xpos >= maze.size() || ypos < 0
        || ypos >= maze.size())
        return false;

    // If a key is required to move further
    if (maze[xpos][ypos] == '1') {

        // If the key hasn't been used before
        if (key == true) {

            // If current cell is the destination
            if (xpos == maze.size() - 1
                && ypos == maze.size() - 1)
                return true;

            // Either go down or right
            return findPath(maze, xpos + 1, ypos, false)
                   || findPath(maze, xpos, ypos + 1, false);
        }

        // Key has been used before
        return false;
    }

    // If current cell is the destination
    if (xpos == maze.size() - 1 && ypos == maze.size() - 1)
        return true;

    // Either go down or right
    return findPath(maze, xpos + 1, ypos, key)
           || findPath(maze, xpos, ypos + 1, key);
}

bool mazeProb(vector<vector<int> > maze, int xpos, int ypos)
{
    bool key = true;
    if (findPath(maze, xpos, ypos, key))
        return true;
    return false;
}

// Driver code
int main()
{
    vector<vector<int> > maze = { { '0', '0', '1' },
                                  { '1', '0', '1' },
                                  { '1', '1', '0' } };
    int n = maze.size();

    // If there is a path from the cell (0, 0)
    if (mazeProb(maze, 0, 0))
        cout << "Yes";
    else
        cout << "No";
}

Java

// Java implementation of the approach
import java.util.ArrayList;

class GFG {

    // Recursive function to check whether there
    // is a path from the top left cell to the
    // bottom right cell of the maze
    static boolean findPath(ArrayList<ArrayList<Integer> > maze,
                            int xpos, int ypos, boolean key)
    {
        // Check whether the current cell is
        // within the maze
        if (xpos < 0 || xpos >= maze.size() || ypos < 0
            || ypos >= maze.size())
            return false;

        // If a key is required to move further
        // (the maze stores 1 for such cells)
        if (maze.get(xpos).get(ypos) == 1) {

            // If the key hasn't been used before
            if (key == true) {

                // If current cell is the destination
                if (xpos == maze.size() - 1
                    && ypos == maze.size() - 1)
                    return true;

                // Either go down or right
                return findPath(maze, xpos + 1, ypos, false)
                    || findPath(maze, xpos, ypos + 1, false);
            }

            // Key has been used before
            return false;
        }

        // If current cell is the destination
        if (xpos == maze.size() - 1 && ypos == maze.size() - 1)
            return true;

        // Either go down or right
        return findPath(maze, xpos + 1, ypos, key)
            || findPath(maze, xpos, ypos + 1, key);
    }

    static boolean mazeProb(ArrayList<ArrayList<Integer> > maze,
                            int xpos, int ypos)
    {
        boolean key = true;
        if (findPath(maze, xpos, ypos, key))
            return true;
        return false;
    }

    // Driver code
    public static void main(String[] args)
    {
        int size = 3;
        ArrayList<ArrayList<Integer> > maze
            = new ArrayList<ArrayList<Integer> >(size);
        for (int i = 0; i < size; i++) {
            maze.add(new ArrayList<Integer>());
        }

        // Building the maze
        // { { 0, 0, 1 },
        //   { 1, 0, 1 },
        //   { 1, 1, 0 } }
        maze.get(0).add(0);
        maze.get(0).add(0);
        maze.get(0).add(1);

        maze.get(1).add(1);
        maze.get(1).add(0);
        maze.get(1).add(1);

        maze.get(2).add(1);
        maze.get(2).add(1);
        maze.get(2).add(0);

        // If there is a path from the cell (0, 0)
        if (mazeProb(maze, 0, 0))
            System.out.print("Yes");
        else
            System.out.print("No");
    }
}

Python3

# Python3 implementation of the approach

# Recursive function to check whether there is
# a path from the top left cell to the
# bottom right cell of the maze
def findPath(maze, xpos, ypos, key):

    # Check whether the current cell is
    # within the maze
    if xpos < 0 or xpos >= len(maze) or ypos < 0 \
       or ypos >= len(maze):
        return False

    # If a key is required to move further
    if maze[xpos][ypos] == '1':

        # If the key hasn't been used before
        if key == True:

            # If current cell is the destination
            if xpos == len(maze)-1 and ypos == len(maze)-1:
                return True

            # Either go down or right
            return findPath(maze, xpos + 1, ypos, False) or \
                   findPath(maze, xpos, ypos + 1, False)

        # Key has been used before
        return False

    # If current cell is the destination
    if xpos == len(maze)-1 and ypos == len(maze)-1:
        return True

    # Either go down or right
    return findPath(maze, xpos + 1, ypos, key) or \
           findPath(maze, xpos, ypos + 1, key)

def mazeProb(maze, xpos, ypos):
    key = True
    if findPath(maze, xpos, ypos, key):
        return True
    return False

# Driver code
if __name__ == "__main__":

    maze = [['0', '0', '1'],
            ['1', '0', '1'],
            ['1', '1', '0']]
    n = len(maze)

    # If there is a path from the cell (0, 0)
    if mazeProb(maze, 0, 0):
        print("Yes")
    else:
        print("No")

C#

// C# implementation of the approach
using System;
using System.Collections.Generic;

class GFG {

    // Recursive function to check whether there
    // is a path from the top left cell to the
    // bottom right cell of the maze
    static bool findPath(List<List<int> > maze, int xpos,
                         int ypos, bool key)
    {
        // Check whether the current cell is
        // within the maze
        if (xpos < 0 || xpos >= maze.Count || ypos < 0
            || ypos >= maze.Count)
            return false;

        // If a key is required to move further
        // (the maze stores 1 for such cells)
        if (maze[xpos][ypos] == 1) {

            // If the key hasn't been used before
            if (key == true) {

                // If current cell is the destination
                if (xpos == maze.Count - 1
                    && ypos == maze.Count - 1)
                    return true;

                // Either go down or right
                return findPath(maze, xpos + 1, ypos, false)
                    || findPath(maze, xpos, ypos + 1, false);
            }

            // Key has been used before
            return false;
        }

        // If current cell is the destination
        if (xpos == maze.Count - 1 && ypos == maze.Count - 1)
            return true;

        // Either go down or right
        return findPath(maze, xpos + 1, ypos, key)
            || findPath(maze, xpos, ypos + 1, key);
    }

    static bool mazeProb(List<List<int> > maze, int xpos,
                         int ypos)
    {
        bool key = true;
        if (findPath(maze, xpos, ypos, key))
            return true;
        return false;
    }

    // Driver code
    public static void Main(String[] args)
    {
        int size = 3;
        List<List<int> > maze = new List<List<int> >(size);
        for (int i = 0; i < size; i++) {
            maze.Add(new List<int>());
        }

        // Building the maze
        // { { 0, 0, 1 },
        //   { 1, 0, 1 },
        //   { 1, 1, 0 } }
        maze[0].Add(0);
        maze[0].Add(0);
        maze[0].Add(1);

        maze[1].Add(1);
        maze[1].Add(0);
        maze[1].Add(1);

        maze[2].Add(1);
        maze[2].Add(1);
        maze[2].Add(0);

        // If there is a path from the cell (0, 0)
        if (mazeProb(maze, 0, 0))
            Console.Write("Yes");
        else
            Console.Write("No");
    }
}

Javascript

<script>
// JavaScript implementation of the approach

// Recursive function to check whether there is
// a path from the top left cell to the
// bottom right cell of the maze
function findPath(maze, xpos, ypos, key)
{
    // Check whether the current cell is
    // within the maze
    if (xpos < 0 || xpos >= maze.length || ypos < 0
        || ypos >= maze.length)
        return false;

    // If a key is required to move further
    if (maze[xpos][ypos] == '1') {

        // If the key hasn't been used before
        if (key == true) {

            // If current cell is the destination
            if (xpos == maze.length - 1
                && ypos == maze.length - 1)
                return true;

            // Either go down or right
            return findPath(maze, xpos + 1, ypos, false)
                || findPath(maze, xpos, ypos + 1, false);
        }

        // Key has been used before
        return false;
    }

    // If current cell is the destination
    if (xpos == maze.length - 1 && ypos == maze.length - 1)
        return true;

    // Either go down or right
    return findPath(maze, xpos + 1, ypos, key)
        || findPath(maze, xpos, ypos + 1, key);
}

function mazeProb(maze, xpos, ypos)
{
    let key = true;
    if (findPath(maze, xpos, ypos, key))
        return true;
    return false;
}

// Driver code
let maze = [ [ '0', '0', '1' ],
             [ '1', '0', '1' ],
             [ '1', '1', '0' ] ];
let n = maze.length;

// If there is a path from the cell (0, 0)
if (mazeProb(maze, 0, 0))
    document.write("Yes");
else
    document.write("No");
</script>

Output:

Yes

Time Complexity: O(2^N)

Dynamic Programming can be used to improve the time complexity.

The main idea is that for every cell the answer depends upon its previous row and column. Here maze[1][1] depends on maze[1][0] or maze[0][1] if it is a possible path. Hence, using this approach, we can compute the result of maze[n-1][n-1] from its previous adjacent cells. There are also some edge conditions for the 0th row and 0th column, as these cells depend only on their previous column and row respectively.

Below is the implementation of the above approach.
C++

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

bool mazeProb(vector<vector<int> > maze, int n)
{
    for (int row = 0; row < n; ++row) {
        for (int col = 0; col < n; ++col) {

            if (row == 0 && col == 0)
                // Skip the first cell
                continue;

            if (row == 0) {
                // For the first row the result depends
                // on the previous column
                maze[row][col] = min(
                    2, maze[row][col] + maze[row][col - 1]);
            }
            else if (col == 0) {
                // For the first column the result depends
                // on the previous row
                maze[row][col] = min(
                    2, maze[row][col] + maze[row - 1][col]);
            }
            else {
                // For other cells, the result will be the
                // minimum of the previous row or column cell
                maze[row][col]
                    = min(2, maze[row][col]
                                 + min(maze[row][col - 1],
                                       maze[row - 1][col]));
            }
        }
    }

    // If the last cell's value is 2 then there is
    // no path available
    return maze[n - 1][n - 1] != 2;
}

// Driver code
int main()
{
    vector<vector<int> > maze = { { 0, 0, 1 },
                                  { 1, 0, 1 },
                                  { 1, 1, 0 } };
    int n = maze.size();

    // If there is a path from the cell (0, 0)
    if (mazeProb(maze, 3))
        cout << "Yes";
    else
        cout << "No";
}

Python3

# Python implementation of the approach
def mazeProb(maze, n):
    for row in range(n):
        for col in range(n):

            if (row == 0 and col == 0):
                # Skip the first cell
                continue

            if (row == 0):
                # For the first row the result depends
                # on the previous column
                maze[row][col] = min(
                    2, maze[row][col] + maze[row][col - 1])

            elif (col == 0):
                # For the first column the result depends
                # on the previous row
                maze[row][col] = min(
                    2, maze[row][col] + maze[row - 1][col])

            else:
                # For other cells, the result will be the
                # minimum of the previous row or column cell
                maze[row][col] = min(2, maze[row][col] +
                                     min(maze[row][col - 1],
                                         maze[row - 1][col]))

    # If the last cell's value is 2 then there is
    # no path available
    return maze[n - 1][n - 1] != 2

# Driver code
maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 1, 0]]
n = len(maze)

# If there is a path from the cell (0, 0)
if (mazeProb(maze, 3)):
    print("Yes")
else:
    print("No")

Javascript

<script>
// JavaScript implementation of the approach
function mazeProb(maze, n)
{
    for (let row = 0; row < n; ++row) {
        for (let col = 0; col < n; ++col) {

            if (row == 0 && col == 0)
                // Skip the first cell
                continue;

            if (row == 0) {
                // For the first row the result depends
                // on the previous column
                maze[row][col] = Math.min(
                    2, maze[row][col] + maze[row][col - 1]);
            }
            else if (col == 0) {
                // For the first column the result depends
                // on the previous row
                maze[row][col] = Math.min(
                    2, maze[row][col] + maze[row - 1][col]);
            }
            else {
                // For other cells, the result will be the
                // minimum of the previous row or column cell
                maze[row][col] = Math.min(
                    2, maze[row][col]
                           + Math.min(maze[row][col - 1],
                                      maze[row - 1][col]));
            }
        }
    }

    // If the last cell's value is 2 then there is
    // no path available
    return maze[n - 1][n - 1] != 2;
}

// Driver code
let maze = [ [ 0, 0, 1 ],
             [ 1, 0, 1 ],
             [ 1, 1, 0 ] ];
let n = maze.length;

// If there is a path from the cell (0, 0)
if (mazeProb(maze, 3))
    document.write("Yes");
else
    document.write("No");
</script>

Output:

Yes

Time Complexity: O(N^2)
Space Complexity: O(1)
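For readers following the Java version of the recursive approach above, the same O(N^2) dynamic programming idea carries over directly. The sketch below is our own port (the implementations above give this approach in C++, Python and JavaScript only):

Java

// A sketch of the same DP in Java: each cell accumulates
// the minimum number of keyed cells (1s) needed to reach
// it, saturated at 2, since any count >= 2 already exceeds
// the single available key
public class MazeDP {

    static boolean mazeProb(int[][] maze, int n)
    {
        for (int row = 0; row < n; ++row) {
            for (int col = 0; col < n; ++col) {

                if (row == 0 && col == 0)
                    continue; // skip the first cell

                if (row == 0)
                    maze[row][col] = Math.min(
                        2, maze[row][col] + maze[row][col - 1]);
                else if (col == 0)
                    maze[row][col] = Math.min(
                        2, maze[row][col] + maze[row - 1][col]);
                else
                    maze[row][col] = Math.min(
                        2, maze[row][col]
                               + Math.min(maze[row][col - 1],
                                          maze[row - 1][col]));
            }
        }

        // A final value of 2 means every path needs at
        // least two keys, so no valid path exists
        return maze[n - 1][n - 1] != 2;
    }

    public static void main(String[] args)
    {
        int[][] maze = { { 0, 0, 1 },
                         { 1, 0, 1 },
                         { 1, 1, 0 } };
        System.out.println(mazeProb(maze, 3) ? "Yes" : "No");
    }
}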
Internal static variable vs. External static variable with Examples in C
A static variable may be internal or external depending on the place of declaration. Initialized static variables are stored in the initialized data segment (uninitialized ones in the BSS segment).

Internal Static Variables: Internal static variables are those which are declared inside a function; their scope extends up to the end of that particular function.

Syntax:

main( )
{
    static datatype variable;
    // other statements
}

Example:

// C program to demonstrate
// Internal Static Variables

#include <stdio.h>

int value();

int main()
{
    printf("%d", value());
    return 0;
}

int value()
{
    static int a = 5;
    return a;
}

Output:

5

External Static Variables: External static variables are those which are declared outside a function and are set globally for the entire file/program.

Syntax:

static datatype variable;

main()
{
    statements
}

function1()
{
    statements
}

Example:

// C program to demonstrate
// External Static Variables

#include <stdio.h>

int add(int, int);

static int a = 5;

int main()
{
    int c = 0;
    printf("%d", add(a, c));
}

int add(int c, int b)
{
    b = 5;
    c = a + b;
    return c;
}

Output:

10

Difference between internal static variables and external static variables:

- An internal static variable is declared inside a function and is visible only within that function, but it retains its value between calls.
- An external static variable is declared outside all functions and is visible throughout the file; unlike an ordinary global variable, it has internal linkage, so it cannot be accessed from other source files.
towlower() function in C/C++
The towlower() is a built-in function in C/C++ which converts the given wide character into lowercase. It is defined within the cwctype header file of C++, so it is mandatory to include this header when using the function. It is the wide-character equivalent of the tolower() function.

Syntax:

wint_t towlower( wint_t ch )

Parameter: The function accepts a single mandatory parameter ch which specifies the wide character which we have to convert into lowercase.

Return Value: The function returns the lowercase equivalent of ch if such a value exists, or ch (unchanged) otherwise. The value is returned as a wint_t value that can be implicitly cast to wchar_t.

Below programs illustrate the above function.

Program 1:

// Program to illustrate
// towlower() function
#include <cwchar>
#include <cwctype>
#include <iostream>
using namespace std;

int main()
{
    wchar_t str[] = L"GeeksforGeeks";

    wcout << L"The lowercase version of \"" << str << L"\" is ";
    for (int i = 0; i < wcslen(str); i++)

        // Function to convert the character
        // into the lowercase version, if it exists
        putwchar(towlower(str[i]));

    return 0;
}

Output:

The lowercase version of "GeeksforGeeks" is geeksforgeeks

Program 2:

// Program to illustrate
// towlower() function
#include <cwchar>
#include <cwctype>
#include <iostream>
using namespace std;

int main()
{
    wchar_t str[] = L"hello Ishwar 123!@#";

    wcout << L"The lowercase version of \"" << str << L"\" is ";
    for (int i = 0; i < wcslen(str); i++)

        // Function to convert the character
        // into the lowercase version, if it exists
        putwchar(towlower(str[i]));

    return 0;
}

Output:

The lowercase version of "hello Ishwar 123!@#" is hello ishwar 123!@#
Java Program to Save a String to a File
A demo file named 'gfg.txt' on the desktop is used for reference as the local file path on the machine. Create an empty file before running the programs and give that specific file's path to the program.

Methods:

1. Using writeString() method of Files class
2. Using write() method of Files class
3. Using write() method of FileWriter class
4. Using write() method of BufferedWriter class
5. Using write() method of PrintWriter class

Let us discuss every method individually by implementing each via a clean Java program to get a fair idea of how they work.

Method 1: Using writeString() method of Files class

The writeString() method of the Files class in Java (available since Java 11) is used to write contents to the specified file. The 'java.nio.file.Files' class has a predefined writeString() method which writes all content to a file, using the UTF-8 charset.

Syntax:

Files.writeString(path, string, options)

Parameters:

- Path: File path with data type as Path
- String: The specified string that will be written to the file
- Options: Different options for writing the string into the file, such as appending to the file, overwriting everything in the file with the current string, etc.

Return Value: This method returns the path of the file.

Procedure:

1. Create an instance of the file path.
2. Call the Files.writeString() method with the path instance, the string and the character set.

Example:

Java

// Java Program to Save a String to a File
// Using Files.writeString() method

// Importing required classes
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an instance of the file path
        Path path = Paths.get("C:\\Users\\HP\\Desktop\\gfg.txt");

        // Custom string as an input
        String str = "Geeks for Geeks \nWelcome to computer science portal \nHello Geek";

        // Try block to check for exceptions
        try {

            // Now calling Files.writeString() method
            // with path, content & standard charset
            Files.writeString(path, str,
                              StandardCharsets.UTF_8);
        }

        // Catch block to handle the exception
        catch (IOException ex) {

            // Print message if an invalid
            // directory/local path is passed
            System.out.print("Invalid Path");
        }
    }
}

Output:

Geeks for Geeks 
Welcome to computer science portal 
Hello Geek

Method 2: Using write() method of Files class

The java.nio.file.Files class has a predefined write() method which is used to write a specified text to a file.

Procedure:

1. Create an instance of the file path.
2. Convert the string into a byte array by using the string.getBytes() method.
3. Lastly, call the Files.write() method with the file instance and the byte array.
Method 2: Using write() method of Files class

The java.nio.file.Files class also has a predefined write() method, which writes a byte array to the specified file.

Procedure:

Create an instance of the file path.
Convert the string into a byte array by using the string.getBytes() method.
Lastly, call the Files.write() method with the file path and the byte array.

Example

Java

// Java Program to Save a String to a File
// Using Files.write() method

// Importing required classes
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an instance of the file path
        Path path = Paths.get("C:\\Users\\HP\\Desktop\\gfg.txt");

        // Custom string as an input
        String str = "Geeks for Geeks \nWelcome to computer science portal \nHello Geek!";

        // Converting string to byte array
        // using getBytes() method
        byte[] arr = str.getBytes();

        // Try block to check for exceptions
        try {

            // Now calling Files.write() method using path
            // and byte array
            Files.write(path, arr);
        }

        // Catch block to handle the exceptions
        catch (IOException ex) {

            // Print message as the exception occurs when
            // an invalid directory path is passed
            System.out.print("Invalid Path");
        }
    }
}

Output:

Geeks for Geeks
Welcome to computer science portal
Hello Geek!

Method 3: Using write() method of FileWriter class

The FileWriter class is used to write character data to a file. This is the simplest way of writing data to a file.

Procedure:

Create an instance of the file.
Pass the file instance into a FileWriter.
Now call the write() method on the FileWriter with the string data.
Flush the writer.
Close the writer.

Example

Java

// Java Program to Save a String to a File
// Using FileWriter class

// Importing required classes
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class GFG {

    public static void main(String[] args) throws IOException
    {
        // Creating an instance of the file
        File path = new File("C:\\Users\\HP\\Desktop\\gfg.txt");

        // Passing the file instance into a FileWriter
        FileWriter wr = new FileWriter(path);

        // Calling the write() method with the string
        wr.write("Geeks for Geeks \nWelcome to computer science portal \nHello Geek!!");

        // Flushing the writer
        wr.flush();

        // Closing the writer
        wr.close();
    }
}

Output:

Geeks for Geeks
Welcome to computer science portal
Hello Geek!!
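Note that a plain FileWriter truncates the file each time. A second boolean constructor argument, which is part of the standard FileWriter API, switches it to append mode; a minimal sketch that drops into the Method 3 program above:

// Passing true as the second argument opens the file in append mode
FileWriter wr = new FileWriter(path, true);
wr.write("\nAppended with FileWriter");
wr.flush();
wr.close();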
Method 4: Using write() method of BufferedWriter class

The BufferedWriter class basically provides a buffer around another writer instance. Writers such as PrintWriter and FileWriter can be wrapped into a BufferedWriter. Buffering makes it very efficient for doing multiple write operations on a file and for writing multiple files, and it is more efficient than a bare FileWriter.

Procedure:

Create an instance of the file.
Declare the stream by wrapping a FileWriter in a BufferedWriter.
Call the write() method on the stream with the string data.

Example

Java

// Java Program to Save a String to a File
// Using BufferedWriter class

// Importing required classes
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args) throws IOException
    {
        // Creating an instance of the file
        File path = new File("C:\\Users\\HP\\Desktop\\gfg.txt");

        // Declaring the stream by wrapping a FileWriter
        // inside a BufferedWriter
        BufferedWriter bw = new BufferedWriter(new FileWriter(path));

        // Calling the write() method with the string
        bw.write("Geeks for Geeks \nWelcome to computer science portal \nHello Geek!!!");

        // Flushing the buffered writer
        bw.flush();

        // Closing the writer
        bw.close();
    }
}

Output:

Geeks for Geeks
Welcome to computer science portal
Hello Geek!!!

Method 5: Using write() method of PrintWriter class

The PrintWriter class is an extension of the Writer class. It is used to write string data to a file using the write() method.

Procedure:

Create an instance of the file.
Create a PrintWriter stream and pass the file instance into it.
Call the write() method with the data.
Flush the stream.
Close the stream.

Example

Java

// Java Program to Save a String to a File
// Using PrintWriter class

// Importing required classes
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args) throws FileNotFoundException
    {
        // Creating an instance of the file
        File path = new File("C:\\Users\\HP\\Desktop\\gfg.txt");

        // Declaring the print writer with the path
        PrintWriter pw = new PrintWriter(path);

        // Now calling the write() method with the string
        pw.write("Geeks for Geeks \nWelcome to computer science portal \nHello Geek!!!!");

        // Flushing the print writer
        pw.flush();

        // Lastly closing the print writer
        // using the close() method
        pw.close();
    }
}

Output:

Geeks for Geeks
Welcome to computer science portal
Hello Geek!!!!
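All of the writer-based methods above require an explicit flush() and close(). In modern Java it is more idiomatic to use try-with-resources, which flushes and closes the writer automatically even when an exception is thrown; a minimal sketch using the same path variable as the programs above:

// try-with-resources closes (and flushes) the writer automatically
try (BufferedWriter bw = new BufferedWriter(new FileWriter(path))) {
    bw.write("Geeks for Geeks");
}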
vector::emplace_back in C++ STL
06 Feb, 2020

Vectors are the same as dynamic arrays, with the ability to resize themselves automatically when an element is inserted or deleted; their storage is handled automatically by the container.

The emplace_back() function is used to insert a new element into the vector container; the new element is added to the end of the vector.

Syntax:

vectorname.emplace_back(value)

Parameters:
The element to be inserted into the vector is passed as the parameter.

Result:
The parameter is added to the vector at the end position.

Examples:

Input: myvector{1, 2, 3, 4, 5};
       myvector.emplace_back(6);
Output: myvector = 1, 2, 3, 4, 5, 6

Input: myvector{};
       myvector.emplace_back(4);
Output: myvector = 4

Errors and Exceptions:

It has a strong exception guarantee; therefore, no changes are made if an exception is thrown.
The parameter should be of the same type as that of the container; otherwise, an error is thrown.

Example 1:

// INTEGER VECTOR EXAMPLE
// CPP program to illustrate
// Implementation of emplace_back() function
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> myvector;
    myvector.emplace_back(1);
    myvector.emplace_back(2);
    myvector.emplace_back(3);
    myvector.emplace_back(4);
    myvector.emplace_back(5);
    myvector.emplace_back(6);

    // vector becomes 1, 2, 3, 4, 5, 6

    // printing the vector
    for (auto it = myvector.begin(); it != myvector.end(); ++it)
        cout << ' ' << *it;

    return 0;
}

Output:

1 2 3 4 5 6

Example 2:

// STRING VECTOR EXAMPLE
// CPP program to illustrate
// Implementation of emplace_back() function
#include <iostream>
#include <vector>
#include <string>
using namespace std;

int main()
{
    // vector declaration
    vector<string> myvector;
    myvector.emplace_back("This");
    myvector.emplace_back("is");
    myvector.emplace_back("a");
    myvector.emplace_back("computer science");
    myvector.emplace_back("portal");

    // vector becomes This, is, a computer science, portal

    // printing the vector
    for (auto it = myvector.begin(); it != myvector.end(); ++it)
        cout << ' ' << *it;

    return 0;
}

Output:

This is a computer science portal

Example 3:

// CHARACTER VECTOR EXAMPLE
// CPP program to illustrate
// Implementation of emplace_back() function
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<char> myvector;
    myvector.emplace_back('a');
    myvector.emplace_back('c');
    myvector.emplace_back('x');
    myvector.emplace_back('y');
    myvector.emplace_back('z');

    // vector becomes a, c, x, y, z

    // printing the vector
    for (auto it = myvector.begin(); it != myvector.end(); ++it)
        cout << ' ' << *it;

    return 0;
}

Output:

a c x y z

Time Complexity: O(1) (amortized)
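Since C++17, emplace_back() also returns a reference to the newly constructed element, which saves a separate call to back(). A minimal sketch (compile with -std=c++17 or later):

#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int& ref = v.emplace_back(41); // reference to the element just constructed
    ref += 1;                      // v now holds {42}
    return 0;
}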
Application: Given an empty vector, add integers to it using the emplace_back function and then calculate its size.

Input : 1, 2, 3, 4, 5, 6
Output : 6

Algorithm

Add elements to the vector using the emplace_back function.
Check if the size of the vector is 0; if not, increment a counter variable initialised as 0, and pop the back element.
Repeat this step until the size of the vector becomes 0.
Print the final value of the variable.

// CPP program to illustrate
// Application of emplace_back function
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int count = 0;
    vector<int> myvector;
    myvector.emplace_back(1);
    myvector.emplace_back(2);
    myvector.emplace_back(3);
    myvector.emplace_back(4);
    myvector.emplace_back(5);
    myvector.emplace_back(6);

    while (!myvector.empty()) {
        count++;
        myvector.pop_back();
    }

    cout << count;
    return 0;
}

Output:

6

emplace_back() vs push_back()

push_back() copies a string into a vector. First, a new string object will be implicitly created, initialized with the provided char*. Then push_back will be called, which will move this string into the vector using the move constructor, because the original string is a temporary object. Then the temporary object will be destroyed.
emplace_back() constructs a string in-place, so no temporary string will be created; rather, emplace_back() will be called directly with the char* argument. It will then create a string to be stored in the vector, initialized with this char*. So, in this case, we avoid constructing and destroying an unnecessary temporary string object.

Please see emplace vs insert in C++ STL for details.

// C++ code to demonstrate difference between
// emplace_back and push_back
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // declaring a vector of pairs
    vector<pair<char, int>> vect;

    // using emplace_back() to construct the pair in-place
    vect.emplace_back('a', 24);

    // Below line would not compile
    // vect.push_back('b', 25);

    // using push_back() to insert
    vect.push_back(make_pair('b', 25));

    // printing the vector
    for (int i = 0; i < vect.size(); i++)
        cout << vect[i].first << " " << vect[i].second << endl;

    return 0;
}

Output:

a 24
b 25
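The difference is easiest to see with a user-defined type whose constructors print; a minimal sketch (the Person type here is purely illustrative):

#include <iostream>
#include <string>
#include <utility>
#include <vector>
using namespace std;

struct Person {
    string name;
    Person(const char* n) : name(n) { cout << "construct\n"; }
    Person(const Person& p) : name(p.name) { cout << "copy\n"; }
    Person(Person&& p) : name(move(p.name)) { cout << "move\n"; }
};

int main()
{
    vector<Person> v;
    v.reserve(2); // avoid reallocation noise in the output

    v.push_back(Person("a")); // prints: construct, then move
    v.emplace_back("b");      // prints: construct only

    return 0;
}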
Python | System hardening and compliance reports using Lynis
08 Nov, 2021

Lynis is a battle-tested security tool for systems running Linux, macOS, or Unix-based operating systems. It performs an extensive health scan of your systems to support system hardening and compliance testing. The project is open-source software with the GPL license and has been available since 2007.

Since Lynis is flexible, it is used for several different purposes. Typical use cases for Lynis include:

Security auditing
Compliance testing (e.g. PCI, HIPAA, SOx)
Penetration testing
Vulnerability detection
System hardening

System hardening refers to securing your system against potential threats and vulnerabilities. Lynis can be used to generate a detailed report on the various threats and vulnerabilities in your system; the user or system administrator can then take the necessary actions to secure it. Lynis reports are hard to read and usually contain a lot of information. Therefore, we use Bash and Python scripts to parse the report, extract the relevant information such as warnings and suggestions, and store it in an Excel file as a report.

Prerequisites for Lynis:

Install Lynis on your system by cloning the GitHub repository: https://github.com/CISOfy/lynis
Install the pandas library using the command sudo pip3 install pandas.

Once you have installed Lynis on your system, navigate to the Lynis directory, where you will find a bunch of files along with an executable file called lynis.

Use the bash script (code is given below) to extract the relevant information, such as warnings and suggestions, from the Lynis report. Create a file called run.sh, copy and paste the bash code into that file, and type sudo ./run.sh to run the bash script.
Run the Python script (code is given below) to clean and parse the extracted data and output the relevant information as an Excel file.

Below are the Bash and Python scripts.

Bash Script:

#!/bin/bash

# Script to parse the Lynis report file, extract the
# relevant details, and save them to text files for
# the Python script to process.

echo "running......"
echo ""

sudo ./lynis audit system --quick

# extract warnings
echo "Generating warnings"
echo ""
echo "warnings are: "
echo ""

sudo cat /var/log/lynis-report.dat | grep warning | sed -e "s/warning\[\]\=//g"
sudo cat /var/log/lynis-report.dat | grep warning | sed -e "s/warning\[\]\=//g" | cat > warnings.txt

echo ""
echo "warnings generated"
echo "output file: warnings.txt"

sudo chmod 755 warnings.txt

# extract suggestions
echo "Generating suggestions"
echo ""
echo "suggestions are: "
echo ""

sudo cat /var/log/lynis-report.dat | grep suggestion | sed -e "s/suggestion\[\]\=//g"
sudo cat /var/log/lynis-report.dat | grep suggestion | sed -e "s/suggestion\[\]\=//g" | cat > suggestions.txt

echo ""
echo "suggestions generated"
echo "output file: suggestions.txt"

sudo chmod 755 suggestions.txt

# extract installed packages
echo "Generating packages"
echo ""
echo "packages are: "
echo ""

sudo cat /var/log/lynis-report.dat | grep installed_package | sed -e "s/installed_package\[\]\=//g"
sudo cat /var/log/lynis-report.dat | grep installed_package | sed -e "s/installed_package\[\]\=//g" | cat > packages.txt

echo ""
echo "packages generated"
echo "output file: packages.txt"

sudo chmod 755 packages.txt

# extract available shells
echo "Generating available shells"
echo ""
echo "shells are: "
echo ""

sudo cat /var/log/lynis-report.dat | grep available_shell | sed -e "s/available_shell\[\]\=//g"
sudo cat /var/log/lynis-report.dat | grep available_shell | sed -e "s/available_shell\[\]\=//g" | cat > shells.txt

echo ""
echo "shells generated"
echo "output file: shells.txt"

sudo chmod 755 shells.txt
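For reference, the lines in /var/log/lynis-report.dat that these grep commands match are key=value entries with pipe-separated fields. The exact fields vary between Lynis versions, so the following two lines are only an illustration of the shape, not real output:

warning[]=FIRE-4512|iptables module(s) loaded, but no rules active|-|-|
suggestion[]=SSH-7408|Consider hardening SSH configuration|AllowTcpForwarding (set YES to NO)|-|

The sed expression strips the key prefix, and the Python script below splits each remaining line on '|' and keeps the first two fields (the test ID and the message).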
Python script:

Python3

# importing libraries
import pandas as pd
from pandas import ExcelWriter
import os

# function to get the data.
def get_data():
    warnings = open('warnings.txt', 'r')
    suggestions = open('suggestions.txt', 'r')
    packages = open('packages.txt', 'r')
    shells = open('shells.txt', 'r')

    warn_data = warnings.readlines()
    sugg_data = suggestions.readlines()
    pack_data = packages.read()
    shell_data = shells.readlines()

    return warn_data, sugg_data, pack_data, shell_data

# function to clean the data: split each line on '|' and
# keep only the test ID and the message text.
def clean_data():
    warn, sugg, pack, shell = get_data()

    warn_clean = []
    for line in warn:
        warn_clean.append(line.split('|'))
    for i in range(len(warn_clean)):
        warn_clean[i] = warn_clean[i][:2]

    sugg_clean = []
    for line in sugg:
        sugg_clean.append(line.split('|'))
    for i in range(len(sugg_clean)):
        sugg_clean[i] = sugg_clean[i][:2]

    pack_clean = pack.split('|')
    del pack_clean[0]

    shell_clean = []
    for i in range(len(shell)):
        shell_clean.append(shell[i].rstrip('\n'))

    return warn_clean, sugg_clean, pack_clean, shell_clean

# function to write the cleaned data to Excel files inside
# an 'outputs' directory. The xlsxwriter engine is required
# here because set_column() is part of its worksheet API.
def convert_to_excel():
    warnings, suggestions, packages, shells = clean_data()

    try:
        os.mkdir('outputs')
    except FileExistsError:
        pass
    os.chdir('outputs')

    warn_packages = []
    warn_text = []
    for i in range(len(warnings)):
        warn_packages.append(warnings[i][0])
    for i in range(len(warnings)):
        warn_text.append(warnings[i][1])

    warn = pd.DataFrame()
    warn['Packages'] = warn_packages
    warn['warnings'] = warn_text

    writer = ExcelWriter('warnings.xlsx', engine='xlsxwriter')
    warn.to_excel(writer, 'report1', index=False)

    worksheet = writer.sheets['report1']
    worksheet.set_column('A:A', 15)   # test ID column
    worksheet.set_column('B:B', 45)   # warning text column
    writer.save()

    sugg_packages = []
    sugg_text = []
    for i in range(len(suggestions)):
        sugg_packages.append(suggestions[i][0])
    for i in range(len(suggestions)):
        sugg_text.append(suggestions[i][1])

    sugg = pd.DataFrame()
    sugg['Packages'] = sugg_packages
    sugg['suggestions'] = sugg_text

    writer1 = ExcelWriter('suggestions.xlsx', engine='xlsxwriter')
    sugg.to_excel(writer1, 'report2', index=False)

    worksheet = writer1.sheets['report2']
    worksheet.set_column('A:A', 25)   # test ID column
    worksheet.set_column('B:B', 120)  # suggestion text column
    writer1.save()

    pack_data = pd.DataFrame()
    pack_data['Packages'] = packages

    writer2 = ExcelWriter('packages.xlsx', engine='xlsxwriter')
    pack_data.to_excel(writer2, 'report3', index=False)

    worksheet = writer2.sheets['report3']
    worksheet.set_column('A:A', 75)   # package name column
    writer2.save()

    os.chdir('..')

if __name__ == '__main__':
    warnings, suggestions, packages, shells = clean_data()
    convert_to_excel()
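Note that worksheet.set_column() comes from the XlsxWriter engine's worksheet API, so the script assumes that package is installed alongside pandas:

sudo pip3 install XlsxWriter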
Once you run the above scripts, you will find a folder called outputs in the current directory. Navigate to the outputs folder, where you will find the Excel sheets that contain the warnings, suggestions, and installed packages.
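As an aside, the Bash extraction step can also be done directly in Python, reading the report file once and filtering on the key prefixes; a minimal sketch (assuming the script is run with permission to read /var/log/lynis-report.dat):

def extract(tag, report='/var/log/lynis-report.dat'):
    # Collect '<tag>[]=' lines and strip the key prefix,
    # mirroring the grep/sed pipeline above.
    prefix = tag + '[]='
    with open(report) as f:
        return [line[len(prefix):].rstrip('\n')
                for line in f if line.startswith(prefix)]

warnings = extract('warning')
suggestions = extract('suggestion')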
Apache Solr - Indexing Data
In general, indexing is a systematic arrangement of documents (or other entities). Indexing enables users to locate information in a document.

Indexing collects, parses, and stores documents.
Indexing is done to increase the speed and performance of a search query while finding a required document.

In Apache Solr, we can index (add, delete, modify) various document formats such as XML, CSV, PDF, etc. We can add data to the Solr index in several ways. In this chapter, we are going to discuss indexing using various interfaces −

Using the Solr web interface.
Using any of the client APIs, like Java, Python, etc.
Using the post tool.

Solr has a post command in its bin/ directory. Using this command, you can index various formats of files, such as JSON, XML, and CSV, in Apache Solr.

Browse through the bin directory of Apache Solr and execute the -h option of the post command, as shown in the following code block.

[Hadoop@localhost bin]$ cd $SOLR_HOME
[Hadoop@localhost bin]$ ./post -h

On executing the above command, you will get a list of options of the post command, as shown below.

Usage: post -c <collection> [OPTIONS] <files|directories|urls|-d [".."]>
or post -help

   collection name defaults to DEFAULT_SOLR_COLLECTION if not specified

OPTIONS
=======
Solr options:
   -url <base Solr update URL> (overrides collection, host, and port)
   -host <host> (default: localhost)
   -p or -port <port> (default: 8983)
   -commit yes|no (default: yes)

Web crawl options:
   -recursive <depth> (default: 1)
   -delay <seconds> (default: 10)

Directory crawl options:
   -delay <seconds> (default: 0)

stdin/args options:
   -type <content/type> (default: application/xml)

Other options:
   -filetypes <type>[,<type>,...] (default:
   xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log)
   -params "<key>=<value>[&<key>=<value>...]" (values must be URL-encoded; these pass through to the Solr update request)
   -out yes|no (default: no; yes outputs the Solr response to the console)
   -format solr (sends application/json content as Solr commands to /update instead of /update/json/docs)

Examples:
* JSON file: ./post -c wizbang events.json
* XML files: ./post -c records article*.xml
* CSV file: ./post -c signals LATEST-signals.csv
* Directory of files: ./post -c myfiles ~/Documents
* Web crawl: ./post -c gettingstarted http://lucene.apache.org/solr -recursive 1 -delay 1
* Standard input (stdin): echo '{commit: {}}' | ./post -c my_collection -type application/json -out yes -d
* Data as string: ./post -c signals -type text/csv -out yes -d $'id,value\n1,0.47'

Suppose we have a file named sample.csv with the following content (in the bin directory). This dataset contains personal details such as student id, first name, last name, phone number, and city. Note that the first line of the CSV file must contain the schema (the field names), as shown below.
id, first_name, last_name, phone_no, location
001, Pruthvi, Reddy, 9848022337, Hyderabad
002, kasyap, Sastry, 9848022338, Vishakapatnam
003, Rajesh, Khanna, 9848022339, Delhi
004, Preethi, Agarwal, 9848022330, Pune
005, Trupthi, Mohanty, 9848022336, Bhubaneshwar
006, Archana, Mishra, 9848022335, Chennai

You can index this data under the core named Solr_sample using the post command as follows −

[Hadoop@localhost bin]$ ./post -c Solr_sample sample.csv

On executing the above command, the given document is indexed under the specified core, generating the following output.

/home/Hadoop/java/bin/java -classpath /home/Hadoop/Solr/dist/solr-core-6.2.0.jar
-Dauto=yes -Dc=Solr_sample -Ddata=files org.apache.solr.util.SimplePostTool sample.csv
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/Solr_sample/update...
Entering auto mode. File endings considered are
xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file sample.csv (text/csv) to [base]
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/Solr_sample/update...
Time spent: 0:00:00.228

Visit the homepage of the Solr web UI using the following URL −

http://localhost:8983/

Select the core Solr_sample. By default, the request handler is /select and the query is "*:*". Without making any modifications, click the Execute Query button at the bottom of the page. On executing the query, you can observe the contents of the indexed CSV document in JSON format (the default).

Note − In the same way, you can index other file formats, such as JSON and XML.

You can also index documents using the web interface provided by Solr. Let us see how to index the following JSON document.

[
   {
      "id" : "001",
      "name" : "Ram",
      "age" : 53,
      "Designation" : "Manager",
      "Location" : "Hyderabad"
   },
   {
      "id" : "002",
      "name" : "Robert",
      "age" : 43,
      "Designation" : "SR.Programmer",
      "Location" : "Chennai"
   },
   {
      "id" : "003",
      "name" : "Rahim",
      "age" : 25,
      "Designation" : "JR.Programmer",
      "Location" : "Delhi"
   }
]

Step 1 − Open the Solr web interface using the following URL −

http://localhost:8983/

Step 2 − Select the core Solr_sample. By default, the values of the fields Request-Handler, Commit Within, Overwrite, and Boost are /update, 1000, true, and 1.0, respectively.

Step 3 − Choose the document format you want from JSON, CSV, XML, etc.

Step 4 − Type the document to be indexed in the text area and click the Submit Document button.
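The same JSON documents can also be indexed from the command line, without the web interface, by posting them to the core's update handler with curl. A minimal sketch (assuming the documents above are saved in a file named docs.json; the filename is illustrative):

curl "http://localhost:8983/solr/Solr_sample/update?commit=true" \
   -H "Content-Type: application/json" \
   --data-binary @docs.json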
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AddingDocument {
   public static void main(String args[]) throws Exception {
      //Preparing the Solr client
      String urlString = "http://localhost:8983/solr/my_core";
      SolrClient solr = new HttpSolrClient.Builder(urlString).build();

      //Preparing the Solr document
      SolrInputDocument doc = new SolrInputDocument();

      //Adding fields to the document
      doc.addField("id", "003");
      doc.addField("name", "Rajaman");
      doc.addField("age", "34");
      doc.addField("addr", "vishakapatnam");

      //Adding the document to Solr
      solr.add(doc);

      //Saving the changes
      solr.commit();
      System.out.println("Documents added");
   }
}

Compile and run the above code by executing the following commands in the terminal −

[Hadoop@localhost bin]$ javac AddingDocument.java
[Hadoop@localhost bin]$ java AddingDocument

On executing the above commands, you will get the following output.

Documents added
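To verify the insert from Java as well, you can read the document back with a SolrJ query. The following is a small sketch of my own (not part of the original tutorial); it assumes the same core name my_core, a Solr instance running on the default port, and the solr-solrj library on the classpath. It issues the match-all query *:* and prints the number of documents found.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

public class QueryingDocument {
   public static void main(String args[]) throws Exception {
      //Connecting to the same core used above
      HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/my_core").build();

      //The match-all query, equivalent to the default query in the Admin UI
      SolrQuery query = new SolrQuery("*:*");
      QueryResponse response = solr.query(query);

      //Printing how many documents the core currently holds
      SolrDocumentList docs = response.getResults();
      System.out.println("Found " + docs.getNumFound() + " documents");
      solr.close();
   }
}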
T-SQL - String Functions
MS SQL Server string functions can be applied to string values and return either a string value or numeric data. Following is the list of string functions, with examples.

ASCII − Returns the ASCII code value of the leftmost character of a character expression. The following query will give the ASCII code value of the first character of the given string.

Select ASCII ('word')

CHAR − Returns the character for a given ASCII code (integer). The following query will give the character for a given integer.

Select CHAR(97)

NCHAR − Returns the Unicode character for a given integer. The following query will give the Unicode character for a given integer.

Select NCHAR(300)

CHARINDEX − Returns the starting position of a given search expression within a given string expression. The following query will give the starting position of the character 'G' in the string expression 'KING'.

Select CHARINDEX('G', 'KING')

LEFT − Returns the left part of a given string up to the specified number of characters. The following query will give the string 'WORL', the first 4 characters of the string 'WORLD'.

Select LEFT('WORLD', 4)

RIGHT − Returns the right part of a given string up to the specified number of characters. The following query will give the string 'DIA', the last 3 characters of the string 'INDIA'.

Select RIGHT('INDIA', 3)

SUBSTRING − Returns part of a string based on a start position value and a length value. The following queries will give the strings 'WOR', 'DIA' and 'ING', using (1,3), (3,3) and (2,3) as start and length values on the strings 'WORLD', 'INDIA' and 'KING' respectively.

Select SUBSTRING ('WORLD', 1,3)
Select SUBSTRING ('INDIA', 3,3)
Select SUBSTRING ('KING', 2,3)

LEN − Returns the number of characters in a given string expression. The following query will give 5 for the 'HELLO' string expression.

Select LEN('HELLO')

LOWER − Returns the lowercase form of a given string. The following query will give 'sqlserver' for the 'SQLServer' character data.

Select LOWER('SQLServer')

UPPER − Returns the uppercase form of a given string. The following query will give 'SQLSERVER' for the 'SqlServer' character data.

Select UPPER('SqlServer')

LTRIM − Returns the given string after removing leading blanks. The following query will give 'WORLD' for the ' WORLD' character data.

Select LTRIM(' WORLD')

RTRIM − Returns the given string after removing trailing blanks. The following query will give 'INDIA' for the 'INDIA ' character data.

Select RTRIM('INDIA ')

REPLACE − Returns the given string after replacing all occurrences of a specified character with another specified character. The following query will give the string 'KNDKA' for the string 'INDIA'.

Select REPLACE('INDIA', 'I', 'K')

REPLICATE − Repeats the given string the specified number of times. The following query will give the string 'WORLDWORLD' for the string 'WORLD'.

Select REPLICATE('WORLD', 2)

REVERSE − Returns the reverse of the given string. The following query will give the string 'DLROW' for the string 'WORLD'.

Select REVERSE('WORLD')

SOUNDEX − Returns a four-character (SOUNDEX) code to evaluate the similarity of two given strings. The following query will give 'S530' for both the 'Smith' and 'Smyth' strings.

Select SOUNDEX('Smith'), SOUNDEX('Smyth')

DIFFERENCE − Returns an integer value rating the similarity of the SOUNDEX codes of two given expressions. The following query will give 4 for the 'Smith', 'Smyth' expressions.

Select Difference('Smith','Smyth')

Note − If the output value is 0, it indicates weak or no similarity between the given two expressions.

SPACE − Returns a string consisting of the specified number of spaces. The following query will give 'I LOVE INDIA'.

Select 'I'+space(1)+'LOVE'+space(1)+'INDIA'

STUFF − Returns the given string after replacing, from the starting position for the specified length, with the specified string. The following query will give the string 'AIJKFGH' for the string 'ABCDEFGH', with starting position 2, length 4, and 'IJK' as the specified target string.

Select STUFF('ABCDEFGH', 2,4,'IJK')

STR − Returns character data for the given numeric data. The following query will give 187.37 for the given 187.369, based on the specified length of 6 and 2 decimal places.

Select STR(187.369,6,2)

UNICODE − Returns the integer (Unicode) value for the first character of the given expression. The following query will give 82 for the 'RAMA' expression.

Select UNICODE('RAMA')

QUOTENAME − Returns the given string wrapped in the specified delimiter. The following query will give "RAMA" for the given 'RAMA' string, as we specified the double quote as the delimiter.

Select QUOTENAME('RAMA','"')

PATINDEX − Returns the starting position of the first occurrence of a pattern in the given expression. The following query will give 1 for the pattern 'I%' in 'INDIA'.

Select PATINDEX('I%','INDIA')

FORMAT − Returns the given expression in the specified format. The following query will give 'Monday, November 16, 2015' for the getdate function, as the format 'D' refers to the long date pattern with the weekday name.

SELECT FORMAT ( getdate(), 'D')

CONCAT − Returns a single string after concatenating the given parameter values. The following query will give 'A,B,C' for the given parameters.

Select CONCAT('A',',','B',',','C')
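These functions compose freely. As a small illustration of my own (not part of the original list), the following query capitalizes a lowercase word by combining UPPER, LEFT, LOWER, SUBSTRING, and LEN; it will give 'India'.

Select UPPER(LEFT('india', 1)) + LOWER(SUBSTRING('india', 2, LEN('india') - 1))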
Java Program to create a Calculator
To create a calculator with Java Swing, try the following code −

import java.awt.Color;
import java.awt.Container;
import java.awt.FlowLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JTextField;

public class SwingDemo extends JFrame implements ActionListener {
   JButton one, two, three, four, five, six, seven, eight, nine, num0, add, sub, div, mult, equalto, exit, point, reset;
   JTextField textField;
   String s = "", ope = "";
   int flag = 0;
   double total1;
   double input1, input2;

   //Computes input1 <ope> input2 and shows the result in the text field
   void total(double input1, double input2, String ope) {
      String total;
      if (ope.equalsIgnoreCase("+")) {
         total1 = input1 + input2;
         total = Double.toString(total1);
         textField.setText(total);
      } else if (ope.equalsIgnoreCase("-")) {
         total1 = input1 - input2;
         total = Double.toString(total1);
         textField.setText(total);
      } else if (ope.equalsIgnoreCase("*")) {
         total1 = input1 * input2;
         total = Double.toString(total1);
         textField.setText(total);
      } else if (ope.equalsIgnoreCase("/")) {
         total1 = input1 / input2;
         total = Double.toString(total1);
         textField.setText(total);
      }
      //clearfields();
   }

   public SwingDemo() {
      Container container = getContentPane();
      container.setLayout(new FlowLayout());
      JLabel jl = new JLabel(" My Demo Calculator ");
      textField = new JTextField(15);
      one = new JButton(" 1 ");
      two = new JButton(" 2 ");
      three = new JButton(" 3 ");
      four = new JButton(" 4 ");
      five = new JButton(" 5 ");
      six = new JButton(" 6 ");
      seven = new JButton(" 7 ");
      eight = new JButton(" 8 ");
      nine = new JButton(" 9 ");
      num0 = new JButton(" 0 ");
      add = new JButton(" + ");
      sub = new JButton(" - ");
      div = new JButton(" / ");
      mult = new JButton(" * ");
      equalto = new JButton(" = ");
      exit = new JButton(" Exit ");
      point = new JButton(" . ");
      reset = new JButton("C");
      reset.setBackground(Color.YELLOW);

      //Registering this frame as the listener for every button
      one.addActionListener(this);
      two.addActionListener(this);
      three.addActionListener(this);
      four.addActionListener(this);
      five.addActionListener(this);
      six.addActionListener(this);
      seven.addActionListener(this);
      eight.addActionListener(this);
      nine.addActionListener(this);
      num0.addActionListener(this);
      add.addActionListener(this);
      sub.addActionListener(this);
      mult.addActionListener(this);
      div.addActionListener(this);
      equalto.addActionListener(this);
      exit.addActionListener(this);
      point.addActionListener(this);
      reset.addActionListener(this);

      container.add(jl);
      container.add(textField);
      container.add(one);
      container.add(two);
      container.add(three);
      container.add(add);
      container.add(four);
      container.add(five);
      container.add(six);
      container.add(sub);
      container.add(seven);
      container.add(eight);
      container.add(nine);
      container.add(div);
      container.add(num0);
      container.add(point);
      container.add(mult);
      container.add(equalto);
      container.add(reset);
      container.add(exit);
   }

   public static void main(String arg[]) {
      SwingDemo d = new SwingDemo();
      d.setSize(260, 300);
      d.setVisible(true);
   }

   public void actionPerformed(ActionEvent e) {
      Object o = e.getSource();
      if (o == one) {
         textField.setText(s.concat("1"));
         s = textField.getText();
      } else if (o == two) {
         textField.setText(s.concat("2"));
         s = textField.getText();
      } else if (o == three) {
         textField.setText(s.concat("3"));
         s = textField.getText();
      } else if (o == four) {
         textField.setText(s.concat("4"));
         s = textField.getText();
      } else if (o == five) {
         textField.setText(s.concat("5"));
         s = textField.getText();
      } else if (o == six) {
         textField.setText(s.concat("6"));
         s = textField.getText();
      } else if (o == seven) {
         textField.setText(s.concat("7"));
         s = textField.getText();
      } else if (o == eight) {
         textField.setText(s.concat("8"));
         s = textField.getText();
      } else if (o == nine) {
         textField.setText(s.concat("9"));
         s = textField.getText();
      } else if (o == num0) {
         textField.setText(s.concat("0"));
         s = textField.getText();
      } else if (o == add) {
         textField.setText("");
         input1 = Double.parseDouble(s);
         System.out.println(input1);
         s = "";
         ope = "+";
      } else if (o == sub) {
         textField.setText("");
         input1 = Double.parseDouble(s);
         s = "";
         ope = "-";
      } else if (o == mult) {
         textField.setText("");
         input1 = Double.parseDouble(s);
         s = "";
         ope = "*";
      } else if (o == div) {
         textField.setText("");
         input1 = Double.parseDouble(s);
         s = "";
         ope = "/";
      } else if (o == equalto) {
         if (flag == 0) {
            input2 = Double.parseDouble(s);
            total(input1, input2, ope);
            flag = 1;
         } else if (flag == 1) {
            input2 = Double.parseDouble(s);
            total(input1, input2, ope);
         }
         System.out.println(input1);
      } else if (o == exit) {
         System.exit(0);
      } else if (o == point) {
         textField.setText(s.concat("."));
         s = textField.getText();
      }
      if (o == reset) {
         textField.setText("");
         s = textField.getText();
         total1 = 0;
      }
   }
}

The following is the output displaying the calculator. Let us multiply two numbers −

Enter the 1st number.
Click * to multiply the numbers.
Enter the 2nd number.
Now click = to get the output.
Java Examples - Check equality of two arrays
How to check if two arrays are equal or not?

The following example shows how to use the equals() method of Arrays to check if two arrays are equal or not.

import java.util.Arrays;

public class Main {
   public static void main(String[] args) throws Exception {
      int[] ary = {1,2,3,4,5,6};
      int[] ary1 = {1,2,3,4,5,6};
      int[] ary2 = {1,2,3,4};
      System.out.println("Is array 1 equal to array 2?? " + Arrays.equals(ary, ary1));
      System.out.println("Is array 1 equal to array 3?? " + Arrays.equals(ary, ary2));
   }
}

The above code sample will produce the following result.

Is array 1 equal to array 2?? true
Is array 1 equal to array 3?? false

Another example of array comparison −

import java.util.Arrays;

public class HelloWorld {
   public static void main (String[] args) {
      int arr1[] = {1, 2, 3};
      int arr2[] = {1, 2, 3};

      if (Arrays.equals(arr1, arr2))
         System.out.println("Same");
      else
         System.out.println("Not same");
   }
}

The above code sample will produce the following result.

Same

Another example of array comparison, this time using the == operator −

public class HelloWorld {
   public static void main (String[] args) {
      int arr1[] = {1, 2, 3};
      int arr2[] = {1, 2, 3};

      if (arr1 == arr2)
         System.out.println("Same");
      else
         System.out.println("Not same");
   }
}

The above code sample will produce the following result. It prints "Not same" because == compares array references rather than contents; two distinct array objects are never == even when their elements match.

Not same
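One caveat worth adding (my own note, not part of the original examples): Arrays.equals compares only one level deep. For multi-dimensional arrays, use Arrays.deepEquals, which compares the nested contents.

import java.util.Arrays;

public class DeepCompare {
   public static void main(String[] args) {
      int[][] a = {{1, 2}, {3, 4}};
      int[][] b = {{1, 2}, {3, 4}};

      // equals() compares the inner arrays by reference, so this prints false
      System.out.println(Arrays.equals(a, b));

      // deepEquals() compares the nested contents, so this prints true
      System.out.println(Arrays.deepEquals(a, b));
   }
}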
What is late binding in C#?
In static polymorphism, the response to a function is determined at compile time. In dynamic polymorphism, it is decided at run-time. Dynamic polymorphism is what we call late binding.

Dynamic polymorphism is implemented through abstract classes and virtual functions. The following is an example of dynamic polymorphism −

using System;

namespace PolymorphismApplication {
   class Shape {
      protected int width, height;

      public Shape( int a = 0, int b = 0) {
         width = a;
         height = b;
      }

      public virtual int area() {
         Console.WriteLine("Parent class area :");
         return 0;
      }
   }

   class Rectangle: Shape {
      public Rectangle( int a = 0, int b = 0): base(a, b) {}
      public override int area () {
         Console.WriteLine("Rectangle class area :");
         return (width * height);
      }
   }

   class Triangle: Shape {
      public Triangle(int a = 0, int b = 0): base(a, b) {}
      public override int area() {
         Console.WriteLine("Triangle class area :");
         return (width * height / 2);
      }
   }

   class Caller {
      public void CallArea(Shape sh) {
         int a;
         a = sh.area();
         Console.WriteLine("Area: {0}", a);
      }
   }

   class Tester {
      static void Main(string[] args) {
         Caller c = new Caller();
         Rectangle r = new Rectangle(10, 7);
         Triangle t = new Triangle(10, 5);
         c.CallArea(r);
         c.CallArea(t);
         Console.ReadKey();
      }
   }
}

The above code produces the following output −

Rectangle class area :
Area: 70
Triangle class area :
Area: 25
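For contrast, the following is a small sketch of my own (not part of the original example) showing early (static) binding: when area() is not declared virtual and the derived class hides it with the new keyword, the call is bound at compile time to the declared type Shape, so the base version runs.

using System;

class Shape {
   //No 'virtual' keyword, so calls are bound statically
   public int area() { return 0; }
}

class Rectangle : Shape {
   //'new' hides the base method instead of overriding it
   public new int area() { return 70; }
}

class Tester {
   static void Main() {
      Shape sh = new Rectangle();
      //Bound at compile time to Shape.area(), so this prints 0
      Console.WriteLine(sh.area());
   }
}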
C++ program to reverse array elements (in place)
Suppose we have an array with n different elements. We shall have to reverse the elements present in the array and display them. (Do not print them in reverse order; reverse the elements in place.)

So, if the input is like n = 9, arr = [2,5,6,4,7,8,3,6,4], then the output will be [4,6,3,8,7,4,6,5,2].

To solve this, we will follow these steps −

for initialize i := 0, when i < quotient of n/2, update (increase i by 1), do:
   temp := arr[i]
   arr[i] := arr[n - i - 1]
   arr[n - i - 1] := temp
for initialize i := 0, when i < n, update (increase i by 1), do:
   display arr[i]

Let us see the following implementation to get a better understanding −

#include <iostream>
using namespace std;
int main(){
   int arr[9] = {2,5,6,4,7,8,3,6,4};
   int n = 9;
   int temp;
   for(int i = 0; i < n/2; i++){
      temp = arr[i];
      arr[i] = arr[n-i-1];
      arr[n-i-1] = temp;
   }
   for(int i = 0; i < n; i++){
      cout << arr[i] << " ";
   }
}

Input
9, {2,5,6,4,7,8,3,6,4}

Output
4 6 3 8 7 4 6 5 2
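For reference, the C++ standard library already provides this operation. The following sketch (my own variation, not part of the original solution) uses std::reverse from <algorithm> to perform the same in-place reversal.

#include <algorithm>
#include <iostream>
using namespace std;
int main(){
   int arr[] = {2,5,6,4,7,8,3,6,4};
   int n = 9;
   reverse(arr, arr + n); // swaps elements in place, same as the manual loop
   for(int i = 0; i < n; i++){
      cout << arr[i] << " ";
   }
}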
Implementing Machine Learning algorithm to detect attacks in IoT traffic | by Abhishek Raghuveer | Towards Data Science
Attack and anomaly detection in the Internet of Things (IoT) infrastructure is a rising concern in the domain of IoT. Due to the increased use of IoT infrastructure, attacks on these infrastructures are also growing exponentially. So there is a need for developing a smart and secured IoT environment that can detect its vulnerabilities. Here, a machine learning-based solution is proposed which can detect the type of attack and protect the IoT system.

The entire workflow of the solution is outlined below.

Dataset collection and description: A virtual IoT environment is created using the Distributed Smart Space Orchestration System (DS2OS), which has a set of IoT-based services like a temperature controller, window controller, light controller, etc. The communication between the user and the services is captured and stored in a CSV file format. The dataset contains 357,952 samples and 13 features, with 347,935 normal samples and 10,017 anomalous samples. The eight classes to be classified are Denial of Service (DoS), Data Type Probing, Malicious Control, Malicious Operation, Scan, Spying, Wrong Setup, and Normal. The dataset is free to use and is available on the Kaggle website: https://www.kaggle.com/francoisxa/ds2ostraffictraces. The “mainSimulationAccessTraces.csv” file containing the dataset is read using the pandas library.

import pandas as pd  # pandas library for reading the csv file
import numpy as np   # numpy library for converting data into arrays

Dataset = pd.read_csv('mainSimulationAccessTraces.csv')
x = Dataset.iloc[:, :-2].values
y = Dataset.iloc[:, 12].values

Data preprocessing: The first step in data preprocessing is to handle the missing values in the dataset. In the dataset, we can see that the “Accessed Node Type” column and the “Value” column contain missing data due to anomalies raised during data transfer. Since the “Accessed Node Type” column is of categorical type, I will use a constant value for filling it. The “Value” column is of numerical type, and hence I have used the mean strategy to fill the missing values. The next step is feature selection, which involves removing the timestamp feature, as it doesn’t have any significance for the data. The final step involves converting the nominal categorical data into vectors using label encoding.

from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='constant')
imputer = imputer.fit(x[:, [8]])
x[:, [8]] = imputer.transform(x[:, [8]])

imputer1 = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer1 = imputer1.fit(x[:, [10]])
x[:, [10]] = imputer1.transform(x[:, [10]])

from sklearn.preprocessing import LabelEncoder

labelencoder_X = LabelEncoder()
for i in range(0, 10):
    x[:, i] = labelencoder_X.fit_transform(x[:, i])
x = np.array(x, dtype=float)
y = labelencoder_X.fit_transform(y)

Sampling: This stage involves splitting the dataset into train and test sets. I have assigned 80% of the dataset for training and the remaining 20% for testing.

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

Normalization: The training and testing datasets are normalized using the standard scaler library, which brings all the feature values into a similar range.
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

Building a ML model: The training dataset is passed to a random forest classifier algorithm for training, and the model/predictor is generated. I have used the sklearn library to accomplish this step, with 10 trees in the forest. After training, the test dataset is passed to the predictor/model, which tells whether each sample was under attack or not.

from sklearn.ensemble import RandomForestClassifier

classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)

Model Evaluation: The last step is to determine the accuracy of our model; the confusion matrix and the accuracy score are used to measure the performance.

from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

cm = confusion_matrix(y_test, y_pred)
accuracy = accuracy_score(y_test, y_pred)

Using the random forest algorithm, I got an accuracy of 99.37%.

The random forest algorithm was able to deliver an accuracy of 99.37% on a virtual IoT environment dataset. Cross validation can also be performed on top of this to avoid overfitting the model. To learn more about the IoT environment that is used to generate the data, please refer to the following URL: https://www.researchgate.net/publication/330511957_Machine_Learning-Based_Adaptive_Anomaly_Detection_in_Smart_Spaces
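Since the classes are heavily imbalanced (347,935 normal versus 10,017 anomalous samples), overall accuracy can hide weak performance on the rare attack classes. As a small addition of my own (not part of the original write-up), sklearn's classification_report prints per-class precision, recall, and F1 score, reusing the y_test and y_pred variables from the snippets above.

from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the eight traffic classes
print(classification_report(y_test, y_pred))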
Proper Model Selection through Cross Validation | by Günter Röhrich | Nov, 20 | Towards Data Science
In this article, I will outline the basics of cross validation (CV), how it compares to random sampling, how (and if) ensemble learners need to use CV, and how we should build models when we only have a few data points.

In a previous post, I introduced a sound process to split data in order to reduce the risk of fitting the model to random patterns. To briefly recap these two different types of effects:

Real effects: These effects are the ones we would like to fit our model to. They show a pattern we can hopefully reproduce / estimate through our model.

Random effects: You might have guessed it, they just happen randomly. We can safely assume that every real-world dataset will have those effects; hence, fitting a model to 99% of all data points will lead to the problem that the model performs badly on the unknown 1% portion, the test data.

If you want to catch up on this topic, follow me here:

towardsdatascience.com

So, what is cross validation? Recall my post about model selection, where we saw that it may be necessary to split data into three different portions: one for training, one for validation (to choose among models), and one to eventually measure the true accuracy through the test data portion. This procedure is a more stable (and preferred) way to choose the best among several models. Cross validation is not too different from this idea, but deals with the model training/validation in a smarter and computationally more efficient way:

For CV we use the full dataset; hence we no longer require a split into train/test data portions beforehand, but rather take advantage of processing the full dataset and iteratively using different portions. These sampled sub-portions are commonly referred to as folds. The folds are sampled randomly, and most commonly the dataset is divided into 10 equally sized folds. For every iteration in the CV procedure, we use k-1 folds to train the model and use the remaining fold as a test portion. The 10 folds are not resampled for every iteration, but remain as initially drawn.

The CV algorithm will calculate the CV error, which relates to the testing error of the model, and averages this error over all k runs. This results in an averaged error metric that allows us to infer a more reliable model accuracy.

In contrast to train-test-split procedures, CV provides a more robust and reliable error score, as the model is not trained on only one dataset and tested on another, but rather trained on k different combinations of the data while being independently tested for each of these k cycles. Keep in mind, this is especially useful when there is only a small dataset available.

What you need to remember: when we are creating 5 models, each data block is used k-1 times for training and once for validation.

Remarks on averaging results: As described, if we built 5 models (that is, k equals 5), what accuracy score should be considered? One idea would be to select the best or worst score out of all sub-models, but this wouldn’t provide a reliable metric, being overly optimistic or pessimistic. The simple answer is, we define model accuracy as the average of all k models. It is important to understand that we do NOT average the parameters of the (sub-)models (e.g. coefficients). If the best model is selected under CV, we fit the model to the entire dataset and use this model for further proceedings.
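To make the averaging step concrete, here is a minimal scikit-learn sketch (the synthetic data and the logistic-regression estimator are placeholder assumptions for illustration, not part of the original article):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=0)  # placeholder data
folds = KFold(n_splits=5, shuffle=True, random_state=0)    # k = 5
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=folds)
print(scores)         # one accuracy score per fold
print(scores.mean())  # the reported CV accuracy is the average of all k scores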
In random sampling (when comparing several models), we usually set aside a validation dataset. Of course, this can also be done for CV: for example, we could use a random subset of 90% of the data to build our models using CV and validate the best model (the one with the lowest CV error) through the validation dataset. I wouldn’t be too surprised, however, if the result is similar to the CV error metric.

Leave-one-out CV (LOOCV) is a special case of CV: rather than picking a number of folds (e.g. k=10), we use k=N. This means we build N models (one for every row in our dataset), but always remove the i-th row from the model training and use this row for testing. Keep in mind that this is probably infeasible for very large datasets, and less computationally efficient than k-fold CV in general.

You might be confronted with the situation where there is no pre-built cross validation functionality available for your model algorithm; splines in R, for example, do not offer cross validation functionality. An easy, yet powerful tool to achieve a more realistic error metric is to do random sub-sampling, known as Monte Carlo cross validation (MCCV).

MCCV is computationally expensive, so make sure to apply it only if the dataset is not too large: initialize an empty array that stores the error for every run, then run B loops (e.g. 100) and for each loop calculate the respective error score. After all runs, calculate an average error metric. The pseudo code is as follows:

results = array([ ])
for b in range(100):
   train, test = data.split()  # split data randomly, e.g. 80/20
   error = model(train).get_CV_error()
   results.append(error)
mean(results)  # gets us the average error score of all 100 runs

Hardly any machine learning topic can be discussed without having the bias-variance trade-off in mind. As for CV, keep in mind that larger values of k usually tend to increase the variance of our response, but decrease the bias.

Tree-based ensemble learners use different (random) subsets of the same data to train a model. Given that ensemble learners create a variety of similar models based on different data subsets, there is no need to calculate a test error through separate test data.

As Hastie, Tibshirani, & Friedman point out in their book “The Elements of Statistical Learning” (2009, p. 593), the out-of-bag error is almost identical to the CV error, meaning there is no need to implement another layer of CV for ensemble methods like the random forest.

Train-test splits help to obtain a test error by using different datasets. This approach is obviously better than using only one dataset (which results in entirely overfitted models), but provides less stable results than cross validation.

To the question, do you have to write the CV function yourself? No, usually not; and keep in mind you can always consider applying MCCV. Referring to the R programming language, you will often find functions that sound like the model you are about to fit, but with “cv” somewhere incorporated in the name. If you intend to create a simple linear model (lm), cv.lm does exactly what we were looking at above. Similar functions can be found for Python and Julia. Bottom line here: use a search engine to browse through your options; there are plenty.

Other functions provide the option of CV as a parameter, as in the example below using 10-fold CV.
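The original article illustrated this with a screenshot that is not reproduced here, so the following scikit-learn sketch stands in for it (an illustrative assumption: LassoCV is one such estimator that accepts the number of folds directly as a parameter, and the synthetic regression data is a placeholder):

from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, noise=5.0, random_state=0)  # placeholder data
model = LassoCV(cv=10).fit(X, y)  # 10-fold CV selects the regularisation strength
print(model.alpha_)               # the parameter value chosen by cross validation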
Applying machine learning algorithms and statistical models has become quite straightforward. There are loads of available documentation and tutorials that guide you through the model fitting process. Why this rather “not so exciting” topic is quite important to me is simple: it determines how to choose the right model (among several).

Not considering CV and the data that is used to create a model will likely result in overfitted and less accurate models overall.

{See you next time}
[ { "code": null, "e": 390, "s": 172, "text": "In this article, I will outline the basics of cross validation (CV), how it compares to random sampling, how (and if) ensemble learner need to use CV and how we should build models when we only have a few data points." }, { "code": null, "e": 577, "s": 390, "text": "In a previous post, I introduced a sound process to split data in order to reduce the risk of fitting the model to random patterns. To briefly recap these two different types of effects:" }, { "code": null, "e": 729, "s": 577, "text": "Real effects: These effects are the one we would like to fit our model to. They show a pattern we can hopefully reproduce / estimate through our model." }, { "code": null, "e": 1023, "s": 729, "text": "Random effects: You might have guessed it, they just happen randomly. We can safely assume that every real world dataset will have those effects, hence fitting a model to 99% of all data points, will lead to the problem that the model will perform bad on the unknown 1% portion, the test data." }, { "code": null, "e": 1077, "s": 1023, "text": "I you want to catch up on this topic, follow me here:" }, { "code": null, "e": 1100, "s": 1077, "text": "towardsdatascience.com" }, { "code": null, "e": 1633, "s": 1100, "text": "So, what is cross validation? Recalling my post about model selection, where we saw that it may be necessary to split data into three different portions, one for training, one for validation (to choose among models) and eventually measure the true accuracy through the test data portion. This procedure is a more stable (and preferred) way to choose the best among several models. Cross validation is not too different from this idea, but deals with the model training/validation in a smarter and computationally more efficient way:" }, { "code": null, "e": 2208, "s": 1633, "text": "For CV we use a full dataset, hence we do no longer require a split into train/test data portions beforehand, but rather take advantage of processing the full dataset and iteratively use different portions. These sampled sub-portions are commonly referred to as folds. The folds are sampled randomly and most commonly, the dataset is divided into 10 equal sized folds. For every iteration in the CV procedure, we use k-1 folds to train the model and use the remaining fold as a test portion. The 10 folds are not resampled for every iteration, but remain as initially drawn." }, { "code": null, "e": 2440, "s": 2208, "text": "The CV algorithm will calculate the CV error, which relates to the testing error of the model and averages this error over all k runs. This results in an averaged error metric that allows us to infer a more reliable model accuracy." }, { "code": null, "e": 2811, "s": 2440, "text": "To contrast train-test-split procedures, CV provides a more robust and reliable error score, as the data is not trained on only one dataset and tested through another, but rather trained on k different combinations of data, while being independently tested for each of this k cycles. Keep in mind, this is especially useful, when there is only a small dataset available." }, { "code": null, "e": 2941, "s": 2811, "text": "What you need to remember: When we are creating 5 models, each data block is used k-1 times for training and once for validation:" }, { "code": null, "e": 3540, "s": 2941, "text": "Remarks on averaging results: As described, if we built 5 models, that is k equals 5, what accuracy score should be considered? 
One idea would be to select the best or worst score out of all sub-models, but this wouldn’t provide a reliable metric, being overly optimistic or pessimistic. The simple answer is, we define model accuracy as the average of all k models. It is important to understand that we do NOT average the parameters of the (sub) models (e.g. coefficients). If the best model is selected under CV, we fit the model to the entire dataset and use this model for further proceedings." }, { "code": null, "e": 3939, "s": 3540, "text": "In random sampling (when comparing several models), we usually set aside a validation dataset. Of course, this can also be done for CV: For example, we could use a random subset of 90% of the data to build our models using CV and validate the best model with the lowest CV-error through the validation dataset — I wouldn’t be too surprised however, if the results is similar to the CV error metric." }, { "code": null, "e": 4337, "s": 3939, "text": "Leave-one-out-CV (LOOCV) is a special case of CV, rather than picking a number of folds (e.g. k=10), we use k=N. This means we build N models (one for every row in our dataset), but always remove the i-th row from the model training and use this one for testing. Keep in mind that this is probably infeasible for very large datasets — and less computationally efficient than k-fold CV, in general." }, { "code": null, "e": 4637, "s": 4337, "text": "You might be confronted with the situation where there is no pre-built cross validation functionality available for your model algorithm — e.g. Splines in R do not offer cross validation functionality. An easy, yet powerful tool to achieve a more realistic error metric is to do random sub sampling." }, { "code": null, "e": 4967, "s": 4637, "text": "MCCV is computationally expensive, so make sure to apply it only if the dataset is not too large: Initialize an empty array that stores the CV error for every run, then run B loops (e.g. 100) and for each loop calculate the respective error score. After all runs, calculate an average error metric. The pseudo code is as follows:" }, { "code": null, "e": 5194, "s": 4967, "text": "results = array([ ])for loop in 100: train, test = data.split # split data randomly, e.g. 80/20 error = model(train).get_CV_error() results.append(error)mean(results) # gets us the average error score of all 100 runs" }, { "code": null, "e": 5427, "s": 5194, "text": "Hardly any machine learning topic can be discussed without having the bias-variance-trade-off in mind. As for CV, keep in mind that larger values of k usually tend to increase the variance of our response, however decrease the bias." }, { "code": null, "e": 5697, "s": 5427, "text": "Tree-based ensemble learners use different (random) subsets of the same data to train a model. Given the fact that ensemble learner create a variety of similar models based on different data subsets, there is no need to use calculate a test error through the test data." }, { "code": null, "e": 5971, "s": 5697, "text": "As Hastie, Tibshirani, & Friedman point out in their book “The Elements of Statistical Learning” (2009 p. 593), the out of bag error is almost identical to the CV error, meaning there is no need to implement another layer of CV for ensemble methods, like the random forest." }, { "code": null, "e": 6212, "s": 5971, "text": "Train-test splits help obtaining a test error by using different datasets. 
This approach is obviously better than using only one dataset — which results in entirely overfitted models — but provides less stable results than cross validation." }, { "code": null, "e": 6766, "s": 6212, "text": "To the question, do you have to write the CV function yourself? No, usually not —and keep in mind you can always consider applying MCCV. Referring to R programming language, you will often find functions that sound like the model you are about to fit, but having “cv” somewhere incorporated in its name. If you intended to create a simple linear model (lm), cv.lm is doing exactly what we were looking at above. Similar functions can be found for Python and Julia. Bottom line here, use a search engine to browse through your options — there are plenty." }, { "code": null, "e": 6859, "s": 6766, "text": "Other functions provide the option of CV as a parameter as in this example using 10-fold CV:" }, { "code": null, "e": 7326, "s": 6859, "text": "Applying machine learning algorithms and statistical models has become quite straightforward. There are loads of available documentation and tutorials that guide you through the model fitting process. Why this rather “not so exciting” topic is quite important to me is simple: how to choose the right the model (among several). Not considering the topic of CV and data that is used to create a model will likely result in overfitted and less accurate models overall." } ]
Creating views in MongoDB
To create views in MongoDB, use createView(). Let us create a collection with documents −

> db.demo113.insertOne(
... { _id: 1, StudentId: "101", "Details": { Name: "Chris", Age: 21 }, Subject: "MySQL" }
... );
{ "acknowledged" : true, "insertedId" : 1 }

Display all documents from a collection with the help of find() method −

> db.demo113.find().pretty();

This will produce the following output −

{
   "_id" : 1,
   "StudentId" : "101",
   "Details" : {
      "Name" : "Chris",
      "Age" : 21
   },
   "Subject" : "MySQL"
}

Following is the query to create views in MongoDB −

> db.createView(
... "firstView",
... "demo113",
... [ { $project: { "Name": "$Details.Name", Subject: 1 } } ]
... )
{ "ok" : 1 }

Display fields from a view with the help of find() method −

> db.firstView.find();

This will produce the following output −

{ "_id" : 1, "Subject" : "MySQL", "Name" : "Chris" }
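Since a view can be read just like a normal collection, the usual find() filters also work against it. A small illustrative example (the filter value is simply taken from the document inserted above, and the output shown is what we would expect) −

> db.firstView.find({ "Name": "Chris" });
{ "_id" : 1, "Subject" : "MySQL", "Name" : "Chris" }

Keep in mind that views are read-only; write operations must go to the underlying collection (here, demo113).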
[ { "code": null, "e": 1152, "s": 1062, "text": "To create views in MongoDB, use createView(). Let us create a collection with documents −" }, { "code": null, "e": 1317, "s": 1152, "text": "> db.demo113.insertOne(\n... { _id: 1, StudentId: \"101\", \"Details\": { Name: \"Chris\", Age: 21 }, Subject: \"MySQL\" }\n... );\n{ \"acknowledged\" : true, \"insertedId\" : 1 }" }, { "code": null, "e": 1390, "s": 1317, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 1420, "s": 1390, "text": "> db.demo113.find().pretty();" }, { "code": null, "e": 1461, "s": 1420, "text": "This will produce the following output −" }, { "code": null, "e": 1590, "s": 1461, "text": "{\n \"_id\" : 1,\n \"StudentId\" : \"101\",\n \"Details\" : {\n \"Name\" : \"Chris\",\n \"Age\" : 21\n },\n \"Subject\" : \"MySQL\"\n}" }, { "code": null, "e": 1642, "s": 1590, "text": "Following is the query to create views in MongoDB −" }, { "code": null, "e": 1781, "s": 1642, "text": "> db.createView(\n... \"firstView\",\n... \"demo113\",\n... [ { $project: { \"Name\": \"$Details.Name\", Subject: 1 } } ]\n... )\n{ \"ok\" : 1 }" }, { "code": null, "e": 1841, "s": 1781, "text": "Display fields from a view with the help of find() method −" }, { "code": null, "e": 1864, "s": 1841, "text": "> db.firstView.find();" }, { "code": null, "e": 1905, "s": 1864, "text": "This will produce the following output −" }, { "code": null, "e": 1958, "s": 1905, "text": "{ \"_id\" : 1, \"Subject\" : \"MySQL\", \"Name\" : \"Chris\" }" } ]
Self-Taught Data Scientist: Showcase Yourself with a Personal Website | by Erdem Isbilen | Towards Data Science
Not having a college degree in data science makes it difficult to convince others of your skills. Not having any job experience makes it even more difficult.

I know, this may not seem fair!

As you worked hard to develop all those skills by spending countless hours on your computer and figuring out how those mind-wracking algorithms work, you think employers have to recognise you as a decent data scientist and give you the job opportunity you are striving for.

This will not happen unless you show them all your skills with solid pieces of evidence, differentiating yourself from the others.

Having a personal website provides a platform for you to showcase your skills.

In this post, I will explain how you can create your website easily and almost for free.

In the first part of the post, I will cover what specifically you should present on your website. Then, in the later section, I will show you how you can handle the website development tasks from start to end, including setting up the hosting/domain services and handling the HTML/CSS development.

All source code is provided in my GitHub repository, and you can visit my personal website to see it all in action.

Before starting to develop your website, you should think about how to structure its content. See below for the main sections you should cover on your website:

Your Blog Posts

Your Data Science Portfolio

Your Resume

Your Social Media Profiles

It helps a lot if you start blogging early in your self-taught data scientist journey. Writing articles will not only consolidate what you have learned in your journey, but will also show others how well you communicate your findings and arguments.

Here is a list of data science subjects you can write about:

Data acquisition and preparation

Feature Engineering

Machine Learning

Visualisation

Communicating the findings

You can either develop your own blogging website or you can use free platforms like Medium.

You may take many Coursera or Bootcamp courses, but what differentiates you from the crowd is how you apply your knowledge to real problems. This is what convinces recruiters of your skills.

So having a proper data science portfolio and presenting it on your website makes a huge difference.

But what makes a good data science portfolio?

Independent Side Projects: These projects show others how you use your data science knowledge in real life. It is best not to solve trivial problems using proof-of-concept datasets. Instead, develop your own unique dataset and investigate an interesting problem.

Kaggle Competitions: Competing on Kaggle is a way of showing your skill level compared to the others in the field.

Open Source Projects: Open source projects provide an excellent opportunity for you to develop your experience in data science.

Recruiters skim resumes to identify possible candidates. So, having a well-structured and easy-to-read resume increases your chance of getting hired. The following are the minimum you should include in your resume to properly showcase yourself:

Personal information

Positions

Education

Projects

Competences & Personal Skills

As a data scientist, you need to use social media to some extent. Twitter is the best way of communicating with your peers. It helps to stay up-to-date.

You should use LinkedIn if you would like to be hired, as it is the prime source for recruiters to look for potential candidates.

Once you have completed the difficult part of the process, which is developing and structuring the content, the rest is quite straightforward.
With the free services, libraries and templates available, it has never been so easy to develop a personal webpage.

I will use Bootstrap 4.0, which is an open-source front-end web development library. It supports a responsive, mobile-first webpage development process, which makes sure that our website will work properly and look nice on mobile devices.

Together with Bootstrap 4.0, I will use the free version of the Font Awesome icon library in my project.

To avoid handling all the HTML/CSS development work from scratch, I will be modifying a Bootstrap template which I downloaded from this link.

For hosting my website, I will use Google Firebase, with a custom domain name. I registered my custom domain name on isimtescil.net and it cost $8. You can use any other hosting and domain services as you wish.

Make a folder structure on your local computer to contain all your project files.

$ mkdir ds-personal-website
$ cd ds-personal-website
$ mkdir public

Then download the source code of the freelancer template onto your computer and copy all the files into your ‘public’ project folder. Later on, Firebase will look at the public folder while deploying the files to the hosting server, so it is important to copy all the files into the ‘public’ folder which we have just created.

Modifying the Head Section of HTML

As a first step, what we need to do is modify the HTML elements according to our needs. Let’s open ‘index.html’ in an editor and modify the head section first.

What I have modified is the metadata and the title of the webpage. These tags provide data to the search engines about our page content.

In addition, I have modified the href attributes relative to root so that the CSS & JavaScript source files can be accessed in the hosting environment.

I have marked the modifications in bold below.

Modifications are noted in bold.
./public/Index.html

<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <meta name="description" content="Erdem Isbilen - Automotive & Mechanical Engineer, Machine Learning and Data Science Enthusiasts">
  <meta name="author" content="Erdem Isbilen">
  <title>Erdem ISBILEN - My Personal WebPage</title>
  <!-- Custom fonts for this theme -->
  <link href="../vendor/fontawesome-free/css/all.min.css" rel="stylesheet" type="text/css">
  <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
  <link href="https://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic" rel="stylesheet" type="text/css">
  <!-- Theme CSS -->
  <link href="../css/freelancer.css" rel="stylesheet">
</head>

Modifying the Navigation Section of HTML

I have added my logo to the Navigation section, keeping the rest of the Navigation section mainly the same. Thanks to the Bootstrap library, the navigation bar works perfectly as it is and adjusts itself to different screen sizes by collapsing.

I made some minor changes to the arrangement of the NavBar by adding Portfolio, Blog, Resume and Connect Me sections respectively.

Depending on how you would like to present your content, you can add as many sections as you want.
Modifications are noted in bold.
./public/Index.html

<!-- Navigation -->
<nav class="navbar navbar-expand-lg bg-secondary text-uppercase fixed-top" id="mainNav">
  <div class="container">
    <a class="navbar-brand js-scroll-trigger" href="#page-top">
      <img class="brand-logo" src="/img/eisbilen-logo.svg" alt="">
    </a>
    <button class="navbar-toggler navbar-toggler-right text-uppercase font-weight-bold bg-primary text-white rounded" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
      <i class="fas fa-bars"></i>
    </button>
    <div class="collapse navbar-collapse" id="navbarResponsive">
      <ul class="navbar-nav ml-auto">
        <li class="nav-item mx-0 mx-lg-1">
          <a class="nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger" href="#portfolio">Portfolio</a>
        </li>
        <li class="nav-item mx-0 mx-lg-1">
          <a class="nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger" href="#blog">Blog</a>
        </li>
        <li class="nav-item mx-0 mx-lg-1">
          <a class="nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger" href="#resume">Resume</a>
        </li>
        <li class="nav-item mx-0 mx-lg-1">
          <a class="nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger" href="#connect">Connect Me</a>
        </li>
      </ul>
    </div>
  </div>
</nav>

Modifying the Masthead Section of HTML

In the Masthead section of the HTML, I have added my photo, name, and title.

As this section is above the fold (the portion of your page a visitor can view without scrolling), it is the first content your visitors will see on your website.

To make a good first impression, place your best professional photo of yourself and choose the best words that describe you.

Having a professional photo of yourself will also help others to trust you and your business.

Modifications are noted in bold.
./public/Index.html

<!-- Masthead -->
<header class="masthead bg-primary text-white text-center">
  <div class="container d-flex align-items-center flex-column">
    <!-- Masthead Avatar Image -->
    <img class="masthead-avatar mb-5" src="/img/ei-photo-min.png" alt="">
    <!-- Masthead Heading -->
    <h2 class="masthead-heading text-uppercase mb-0">ERDEM ISBILEN</h2>
    <!-- Icon Divider -->
    <div class="divider-custom divider-light">
      <div class="divider-custom-line"></div>
      <div class="divider-custom-icon">
        <i class="fas fa-star"></i>
      </div>
      <div class="divider-custom-line"></div>
    </div>
    <!-- Masthead Subheading -->
    <h8 class="masthead-subheading font-weight-light mb-0">Automotive & Mechanical Engineer</h8>
    <h8 class="masthead-subheading font-weight-light mb-0">Machine Learning and Data Science Enthusiasts</h8>
  </div>
</header>

Modifying the Rest of the HTML

Modifying the rest of the HTML sections is done in a similar manner, so you can adjust them or include additional content as you wish.

Blog, Resume and Portfolio sections are the minimum that you should have on your webpage to properly convey your data science skills and profession.

Now that all our project files are ready, it is time to set up Firebase for hosting our content with the free ‘Spark Plan’.

Assuming that you already have a ‘Google Account’, log in to Firebase and create a new project.

At this point, we are ready to deploy our files to Firebase from our local computer with the Firebase CLI.

Before deploying our website to Firebase, we should install the Firebase CLI on our local computer.

$ npm install -g firebase-tools

Then, we will log in and initialize Firebase. Go to your project folder’s root directory to log in to Firebase with the terminal command below.
It will direct you to a website where you use your ‘Google Account’ to authorize Firebase.

ds-personal-website$ firebase login

After you have logged in, we can initialize the Firebase project and configure the hosting details.

Select the ‘Hosting’ option and provide the public directory as ‘public’, where all the ready-to-deploy files are stored on your local computer. Do not select overwrite for the ‘index.html’ option, as this would modify your ‘index.html’.

ds-personal-website$ firebase init

You're about to initialize a Firebase project in this directory:
/Users/erdemisbilen/Angular/my-personal-webpage

Before we get started, keep in mind:
* You are initializing in an existing Firebase project directory

? Which Firebase CLI features do you want to set up for this folder? Press Space to select features, then Enter to confirm your choices.
Hosting: Configure and deploy Firebase Hosting sites

=== Project Setup

First, let's associate this project directory with a Firebase project.
You can create multiple project aliases by running firebase use --add, but for now we'll just set up a default project.

i .firebaserc already has a default project, using my-personal-webpage-c7096.

=== Hosting Setup

Your public directory is the folder (relative to your project directory) that will contain Hosting assets to be uploaded with firebase deploy. If you have a build process for your assets, use your build's output directory.

? What do you want to use as your public directory? public
? Configure as a single-page app (rewrite all urls to /index.html)? No
? File public/404.html already exists. Overwrite? Yes
✔ Wrote public/404.html
? File public/index.html already exists. Overwrite? No
i Skipping write of public/index.html
i Writing configuration info to firebase.json...
i Writing project information to .firebaserc...

✔ Firebase initialization complete!

Now that you have logged in and initialized Firebase, you can deploy all of your files with one line of command.

If you managed to complete the setup properly, Firebase will deploy all your files and give you a URL where you can see your website online in your browser.
You will get a TXT file after you provide your custom domain name to Firebase. Using your DNS provider’s dashboard, replace your domain name’s TXT record with the one provided by Firebase. Once this is completed, Firebase will ask you to modify the ‘A’ records. So, modify the A records accordingly. After a couple of hours, you will see that Firebase is connected to your custom domain. Which means that all done, and your website is up and running! In this post, I tried to explain my way of building a personal website specifically for self-taught data scientists. I hope my article helps you to build your website and showcase your data science skills.
[ { "code": null, "e": 330, "s": 172, "text": "Not having a college degree in data science makes it difficult to convince others of your skills. Not having any job experience makes it even more difficult." }, { "code": null, "e": 362, "s": 330, "text": "I know, this may not seem fair!" }, { "code": null, "e": 636, "s": 362, "text": "As you worked hard to develop all those skills by spending countless hours on your computer and figuring out how those mind-wracking algorithms work, you think employers have to recognise you as a decent data scientist and give you the job opportunity you are striving for." }, { "code": null, "e": 773, "s": 636, "text": "This will not happen unless you show them all your skills with the solid pieces of evidence by differentiating yourself from the others." }, { "code": null, "e": 852, "s": 773, "text": "Having a personal website provides a platform for you to showcase your skills." }, { "code": null, "e": 937, "s": 852, "text": "In this post, I will explain how you can create your website easily and almost free." }, { "code": null, "e": 1230, "s": 937, "text": "In the first part of the post, I will cover what specifically you should present in your website. Then in the later section, I will show you how you can handle website development tasks from start to end, including setting up the hosting/domain services and handling the HTML/CSS development." }, { "code": null, "e": 1342, "s": 1230, "text": "All source code is provided in my GitHub repository and you can visit my personal website to see all in action." }, { "code": null, "e": 1518, "s": 1342, "text": "Before starting to develop your website, you should think about how to structure the content of your website. See below for the main sections you should cover in your website;" }, { "code": null, "e": 1534, "s": 1518, "text": "Your Blog Posts" }, { "code": null, "e": 1562, "s": 1534, "text": "Your Data Science Portfolio" }, { "code": null, "e": 1574, "s": 1562, "text": "Your Resume" }, { "code": null, "e": 1601, "s": 1574, "text": "Your Social Media Profiles" }, { "code": null, "e": 1850, "s": 1601, "text": "It helps a lot if you start blogging early in your self-taught data scientist journey. Writing articles will not only concrete what you have learned in your journey, but it will also show others how well you communicate your findings and arguments." }, { "code": null, "e": 1911, "s": 1850, "text": "Here is a list of data science subjects you can write about;" }, { "code": null, "e": 1944, "s": 1911, "text": "Data acquisition and preparation" }, { "code": null, "e": 1964, "s": 1944, "text": "Feature Engineering" }, { "code": null, "e": 1981, "s": 1964, "text": "Machine Learning" }, { "code": null, "e": 1995, "s": 1981, "text": "Visualisation" }, { "code": null, "e": 2022, "s": 1995, "text": "Communicating the findings" }, { "code": null, "e": 2110, "s": 2022, "text": "You can either develop your blogging website or you can use free platforms like Medium." }, { "code": null, "e": 2315, "s": 2110, "text": "You may take many Coursera or Bootcamp courses, but what differentiate yourself from the crowd is how you apply your knowledge into the real problems. This is what convinces the recruiters on your skills." }, { "code": null, "e": 2416, "s": 2315, "text": "So having a proper data science portfolio and presenting it on your website makes a huge difference." }, { "code": null, "e": 2462, "s": 2416, "text": "But what makes a good data science portfolio?" 
}, { "code": null, "e": 2726, "s": 2462, "text": "Independent Side Projects: These projects show others how you use your data science knowledge in real life. It is the best not solving trivial problems using proof-of-concept databases. Instead, develop your unique database and investigate an interesting problem." }, { "code": null, "e": 2841, "s": 2726, "text": "Kaggle Competitions: Competing in Kaggle is a way of showing your skill level compared to the others in the field." }, { "code": null, "e": 2969, "s": 2841, "text": "Open Source Projects: Open source projects provide an excellent opportunity for you to develop your experience in data science." }, { "code": null, "e": 3214, "s": 2969, "text": "Recruiters skim resumes to identify the possible candidates. So, having well-structured and an easy-to-read resume increases your chance of getting hired. Followings are the least you should include in your resume to properly showcase yourself;" }, { "code": null, "e": 3235, "s": 3214, "text": "Personal information" }, { "code": null, "e": 3245, "s": 3235, "text": "Positions" }, { "code": null, "e": 3255, "s": 3245, "text": "Education" }, { "code": null, "e": 3264, "s": 3255, "text": "Projects" }, { "code": null, "e": 3294, "s": 3264, "text": "Competences & Personal Skills" }, { "code": null, "e": 3447, "s": 3294, "text": "As a data scientist, you need to use social media to some extent. Twitter is the best way of communicating with your peers. It helps to stay up-to-date." }, { "code": null, "e": 3577, "s": 3447, "text": "You should use LinkedIn if you would like to be hired, as it is the prime source for recruiters to look for potential candidates." }, { "code": null, "e": 3725, "s": 3577, "text": "Once you have completed the difficult part of the process which is developing and structuring the content, then the rest is quite easy to progress." }, { "code": null, "e": 3840, "s": 3725, "text": "With the free services, libraries and templates provided, it has never been so easy to develop a personal webpage." }, { "code": null, "e": 4080, "s": 3840, "text": "I will use Bootstrap 4.0 which is an open-source front-end web development library. It supports responsive and mobile-first kind of webpage development process so makes sure that our website will work properly and nicely in mobile devices." }, { "code": null, "e": 4181, "s": 4080, "text": "Together with Bootstrap 4.0, I will use the free version of Font Awesome icon library in my project." }, { "code": null, "e": 4315, "s": 4181, "text": "Not to handle all HTML/CSS development work from scratch, I will be modifying a Bootstrap template which I downloaded from this link." }, { "code": null, "e": 4526, "s": 4315, "text": "For hosting my website, I will use Google Firebase, with a custom domain name. I registered my custom domain name on isimtescil.net and it cost 8$. You can use any other hosting and domain services as you wish." }, { "code": null, "e": 4608, "s": 4526, "text": "Make a folder structure on your local computer to contain all your project files." }, { "code": null, "e": 4674, "s": 4608, "text": "$ mkdir ds-personal-website$ cd ds-personal-website$ mkdir public" }, { "code": null, "e": 4996, "s": 4674, "text": "Then download the source code of freelancer template into your computer and copy all the files inside your ‘public’ project folder. Later on, Firebase will look at public folder while deploying the files into the hosting server so it is important to copy all the files into the ‘public’ folder which we have just created." 
}, { "code": null, "e": 5031, "s": 4996, "text": "Modifying the Head Section of HTML" }, { "code": null, "e": 5194, "s": 5031, "text": "As a first step, what we need to do is to modify the HTML elements according to our needs. Let’s open ‘index.html’ in an editor and modify the head section first." }, { "code": null, "e": 5331, "s": 5194, "text": "What I have modified is the metadata and the title of the webpage. These tags provide data to the search engines about our page content." }, { "code": null, "e": 5483, "s": 5331, "text": "In addition, I have modified the href attributes relative to root so that the CSS & JavaScript source files can be accessed in the hosting environment." }, { "code": null, "e": 5530, "s": 5483, "text": "I have marked the modifications in bold below." }, { "code": null, "e": 6364, "s": 5530, "text": "Modifications are noted in bold../public/Index.html<head><meta charset=\"utf-8\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1, shrink-to-fit=no\"><meta name=\"description\" content=\"Erdem Isbilen - Automotive & Mechanical Engineer,Machine Learning and Data Science Enthusiasts\"><meta name=\"author\" content=\"Erdem Isbilen\"><title>Erdem ISBILEN - My Personal WebPage</title><!-- Custom fonts for this theme --><link href=\"../vendor/fontawesome-free/css/all.min.css\" rel=\"stylesheet\" type=\"text/css\"> <link href=\"https://fonts.googleapis.com/css? family=Montserrat:400,700\" rel=\"stylesheet\" type=\"text/css\"><link href=\"https://fonts.googleapis.com/css? family=Lato:400,700,400italic,700italic\" rel=\"stylesheet\" type=\"text/css\"><!-- Theme CSS --> <link href=\"../css/freelancer.css\" rel=\"stylesheet\"></head>" }, { "code": null, "e": 6405, "s": 6364, "text": "Modifying the Navigation Section of HTML" }, { "code": null, "e": 6654, "s": 6405, "text": "I have added my logo into the Navigation section by keeping the rest of the Navigation section mainly the same. Thanks to the Bootstrap library, Navigation bar works perfectly as it is and adjusts itself to the different screen sizes by collapsing." }, { "code": null, "e": 6784, "s": 6654, "text": "I had some minor changes on the arrangement of the NavBar by adding Portfolio, Blog, Resume and Connect Me sections respectively." }, { "code": null, "e": 6883, "s": 6784, "text": "Depending on how you would like to present your content, you can add as many sections as you want." 
}, { "code": null, "e": 8194, "s": 6883, "text": "Modifications are noted in bold../public/Index.html<!-- Navigation --><nav class=\"navbar navbar-expand-lg bg-secondary text-uppercase fixed-top\" id=\"mainNav\"><div class=\"container\"><a class=\"navbar-brand js-scroll-trigger\" href=\"#page-top\"> <img class=\"brand-logo\" src=\"/img/eisbilen-logo.svg\" alt=\"\"></a><button class=\"navbar-toggler navbar-toggler-right text-uppercase font-weight-bold bg-primary text-white rounded\" type=\"button\" data- toggle=\"collapse\" data-target=\"#navbarResponsive\" aria-controls=\"navbarResponsive\" aria-expanded=\"false\" aria-label=\"Toggle navigation\"> <i class=\"fas fa-bars\"></i> </button><div class=\"collapse navbar-collapse\" id=\"navbarResponsive\"><ul class=\"navbar-nav ml-auto\"> <li class=\"nav-item mx-0 mx-lg-1\"> <a class=\"nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger\" href=\"#portfolio\">Portfolio</a> </li><li class=\"nav-item mx-0 mx-lg-1\"> <a class=\"nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger\" href=\"#blog\">Blog</a> </li> <li class=\"nav-item mx-0 mx-lg-1\"> <a class=\"nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger\" href=\"#resume\">Resume</a> </li><li class=\"nav-item mx-0 mx-lg-1\"> <a class=\"nav-link py-3 px-0 px-lg-3 rounded js-scroll-trigger\" href=\"#connect\">Connect Me</a> </li> </ul></div></div></nav>" }, { "code": null, "e": 8233, "s": 8194, "text": "Modifying the Masthead Section of HTML" }, { "code": null, "e": 8310, "s": 8233, "text": "In the Masthead section of the HTML, I have added my photo, name, and title." }, { "code": null, "e": 8472, "s": 8310, "text": "As this section is above the fold, the portion of your page a visitor can view without scrolling, it is the first content your visitors will see on your website." }, { "code": null, "e": 8597, "s": 8472, "text": "To make a good first impression, place your best professional photo of yourself and choose the best words that describe you." }, { "code": null, "e": 8691, "s": 8597, "text": "Having a professional photo of yourself will also help others to trust you and your business." }, { "code": null, "e": 9554, "s": 8691, "text": "Modifications are noted in bold../public/Index.html<!-- Masthead --><header class=\"masthead bg-primary text-white text-center\"> <div class=\"container d-flex align-items-center flex-column\"><!-- Masthead Avatar Image --> <img class=\"masthead-avatar mb-5\" src=\"/img/ei-photo-min.png\" alt=\"\"><!-- Masthead Heading --> <h2 class=\"masthead-heading text-uppercase mb-0\">ERDEM ISBILEN</h2><!-- Icon Divider --> <div class=\"divider-custom divider-light\"> <div class=\"divider-custom-line\"></div> <div class=\"divider-custom-icon\"> <i class=\"fas fa-star\"></i> </div> <div class=\"divider-custom-line\"></div> </div><!-- Masthead Subheading --> <h8 class=\"masthead-subheading font-weight-light mb-0\">Automotive & Mechanical Engineer</h8> <h8 class=\"masthead-subheading font-weight-light mb-0\">Machine Learning and Data Science Enthusiasts</h8></div></header>" }, { "code": null, "e": 9585, "s": 9554, "text": "Modifying the Rest of the HTML" }, { "code": null, "e": 9720, "s": 9585, "text": "Modifying the rest of the HTML sections will be in a similar manner, so you can adjust them or include additional content as you wish." }, { "code": null, "e": 9870, "s": 9720, "text": "Blog, Resume and Portfolio sections are the minimums that you should have on your webpage to properly convey your data science skills and profession." 
}, { "code": null, "e": 10001, "s": 9870, "text": "Now, as we have all our project files are ready, it is time to set up the Firebase for hosting our content with free ‘Spark Plan’." }, { "code": null, "e": 10096, "s": 10001, "text": "Assuming that you already have a ‘Google Account’, login to FireBase and create a new project." }, { "code": null, "e": 10200, "s": 10096, "text": "At this point, we are ready to deploy our files to Firebase using our local computer with Firebase CLI." }, { "code": null, "e": 10300, "s": 10200, "text": "Before deploying our website to the Firebase, we should install Firebase CLI to our local computer." }, { "code": null, "e": 10332, "s": 10300, "text": "$ npm install -g firebase-tools" }, { "code": null, "e": 10567, "s": 10332, "text": "Then, we will log in and initialize the Firebase. Go to your project folder’s root directory to log in to Firebase with below terminal command. It will direct you to a website where you use your ‘Google Account’ to authorize Firebase." }, { "code": null, "e": 10603, "s": 10567, "text": "ds-personal-website$ firebase login" }, { "code": null, "e": 10698, "s": 10603, "text": "After you logged in, now we can initialize the Firebase project and configure hosting details." }, { "code": null, "e": 10920, "s": 10698, "text": "Select ‘Hosting’ option and provide the public directory as ‘public’ where all ready-to-deploy files are stored in your local computer. Do not select overwrite to ‘index.html’ option as this will modify your ‘index.html’." }, { "code": null, "e": 12305, "s": 10920, "text": "ds-personal-website$ firebase initYou're about to initialize a Firebase project in this directory:/Users/erdemisbilen/Angular/my-personal-webpageBefore we get started, keep in mind:* You are initializing in an existing Firebase project directory? Which Firebase CLI features do you want to set up for this folder? Press Space to select features, then Enter to confirm your choices.Hosting: Configure and deploy Firebase Hosting sites=== Project SetupFirst, let's associate this project directory with a Firebase project.You can create multiple project aliases by running firebase use --add, but for now we'll just set up a default project.i .firebaserc already has a default project, using my-personal-webpage-c7096.=== Hosting SetupYour public directory is the folder (relative to your project directory) that will contain Hosting assets to be uploaded with firebase deploy. If you have a build process for your assets, use your build's output directory.? What do you want to use as your public directory? public? Configure as a single-page app(rewrite all urls to /index.html)?No? File public/404.html already exists. Overwrite? Yes✔ Wrote public/404.html? File public/index.html already exists. Overwrite? Noi Skipping write of public/index.htmli Writing configuration info to firebase.json...i Writing project information to .firebaserc...✔ Firebase initialization complete!" }, { "code": null, "e": 12417, "s": 12305, "text": "Now that you logged in and initialized the Firebase, you can deploy all of your files with one line of command." }, { "code": null, "e": 12575, "s": 12417, "text": "If you managed to complete the set up properly, Firebase will deploy all your files and give you a URL where you can see your website online on your browser." 
}, { "code": null, "e": 13269, "s": 12575, "text": "ds-personal-website$ firebase deploy=== Deploying to 'my-personal-webpage-c7096'...i deploying hostingi hosting[my-personal-webpage-c7096]: beginning deploy...i hosting[my-personal-webpage-c7096]: found 91 files in public✔ hosting[my-personal-webpage-c7096]: file upload completei hosting[my-personal-webpage-c7096]: finalizing version...✔ hosting[my-personal-webpage-c7096]: version finalizedi hosting[my-personal-webpage-c7096]: releasing new version...✔ hosting[my-personal-webpage-c7096]: release complete✔ Deploy complete!Project Console: https://console.firebase.google.com/project/my-personal-webpage-c7096/overviewHosting URL: https://my-personal-webpage-c7096.firebaseapp.com" }, { "code": null, "e": 13410, "s": 13269, "text": "You cannot use the URL which is provided by Firebase as it is something for you to experiment with your design and to see it on the browser." }, { "code": null, "e": 13553, "s": 13410, "text": "After you are happy with how your website looks on the browser, there is one final step to make your website unique with a custom domain name." }, { "code": null, "e": 13740, "s": 13553, "text": "There are several providers where you can register a specific domain name. For me, it is www.erdemisbilen.com and I registered this domain name on isimtescil.net and it cost 8$ per year." }, { "code": null, "e": 13820, "s": 13740, "text": "You can register your domain name with any DNS service provider of your choice." }, { "code": null, "e": 13968, "s": 13820, "text": "After you registered your unique domain name, head into the Firebase Dashboard — Hosting section where Firebase asks you to add your custom domain." }, { "code": null, "e": 14157, "s": 13968, "text": "You will get a TXT file after you provide your custom domain name to Firebase. Using your DNS provider’s dashboard, replace your domain name’s TXT record with the one provided by Firebase." }, { "code": null, "e": 14268, "s": 14157, "text": "Once this is completed, Firebase will ask you to modify the ‘A’ records. So, modify the A records accordingly." }, { "code": null, "e": 14356, "s": 14268, "text": "After a couple of hours, you will see that Firebase is connected to your custom domain." }, { "code": null, "e": 14419, "s": 14356, "text": "Which means that all done, and your website is up and running!" }, { "code": null, "e": 14536, "s": 14419, "text": "In this post, I tried to explain my way of building a personal website specifically for self-taught data scientists." } ]
How to add custom ExpectedConditions for Selenium?
We can add custom ExpectedConditions for Selenium webdriver. We require custom ExpectedConditions when the default expected conditions provided by webdriver are not enough to cover some scenarios.

The method until is used, which is a part of the WebDriverWait class. Here, the ExpectedConditions are used to wait for a specific criterion to be satisfied. This method pauses until one of the below incidents happens −

The timeout duration specified has elapsed.

The criterion defined yields neither false nor null.

We can build a custom ExpectedCondition by creating an object of the expected criteria and taking the help of the apply method.

Let us take an example of the below page. Let us click on the Team link.

On clicking on Team, a corresponding paragraph appears on the right. Let us verify that the paragraph has appeared and also verify that the text India appears within that paragraph.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;
public class CustomExpCondition{
   public static void main(String[] args)
   throws InterruptedException{
      System.setProperty("webdriver.chrome.driver",
      "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);
      driver.get("https://www.tutorialspoint.com/about/about_careers.htm");
      // identify element
      WebElement l = driver.findElement(By.linkText("Team"));
      l.click();
      // object of WebDriverWait class with wait time
      WebDriverWait w = new WebDriverWait(driver, 7);
      // custom expected condition with until method
      w.until(new ExpectedCondition <Boolean> (){
         public Boolean apply(WebDriver driver) {
            // identify paragraph
            WebElement e = driver.findElement(By.tagName("p"));
            if (e != null){
               // check if paragraph is displayed and has the text India
               if (e.isDisplayed() && e.getText().contains("India")) {
                  System.out.println("Element found");
                  return true;
               }
               else {
                  System.out.println("Element not found");
                  return false;
               }
            }
            return false;
         }
      });
      driver.close();
   }
}
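As a side note (not part of the original example), ExpectedCondition is a functional interface, so with the Java 8+ Selenium bindings the same custom condition can be written more compactly as a lambda. A minimal sketch, assuming the same driver and w objects as above:

// the same custom condition, expressed as a lambda passed to until()
w.until(d -> {
   WebElement e = d.findElement(By.tagName("p"));
   return e != null && e.isDisplayed() && e.getText().contains("India");
});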
[ { "code": null, "e": 1266, "s": 1062, "text": "We can add custom ExpectedConditions for Selenium webdriver. We require this custom ExpectedConditions when the default expected conditions provided by webdriver are not enough to satisfy some scenarios." }, { "code": null, "e": 1486, "s": 1266, "text": "The method until is used which is a part of the WebDriverWait class. Here, the ExpectedConditions are used to wait for a specific criteria to be satisfied. This method pauses whenever one of the below incidents happen −" }, { "code": null, "e": 1530, "s": 1486, "text": "The timeout duration specified has elapsed." }, { "code": null, "e": 1574, "s": 1530, "text": "The timeout duration specified has elapsed." }, { "code": null, "e": 1626, "s": 1574, "text": "The criteria defined yields neither false nor null." }, { "code": null, "e": 1678, "s": 1626, "text": "The criteria defined yields neither false nor null." }, { "code": null, "e": 1800, "s": 1678, "text": "We can have a custom ExpectedCondition by creating an object of expected criteria and by taking the help of apply method." }, { "code": null, "e": 1873, "s": 1800, "text": "Let us take an example of the below page. Let us click on the Team link." }, { "code": null, "e": 2044, "s": 1873, "text": "On clicking on Team, a corresponding paragraph appears on right. Let us verify if the paragraph has appeared and also verify the text India appears within that paragraph." }, { "code": null, "e": 3684, "s": 2044, "text": "import org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.support.ui.ExpectedConditions;\nimport org.openqa.selenium.support.ui.WebDriverWait;\npublic class CustomExpCondition{\n public static void main(String[] args)\n throws InterruptedException{\n System.setProperty(\"webdriver.chrome.driver\",\n \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n driver.manage().timeouts().implicitlyWait(4, TimeUnit.SECONDS);\n driver.get(\"https://www.tutorialspoint.com/about/about_careers.htm\");\n // identify element\n WebElement l=driver.findElement(By.linkText(\"Team\"));\n l.click();\n //object of WebDriverWait class with wait time\n WebDriverWait w = new WebDriverWait(driver,7);\n //custom expected condition with until method\n w.until(new ExpectedCondition <Boolean> (){\n public Boolean apply(WebDriver driver) {\n //identify paragraph\n WebElement e= driver.findElement(By.tagName(\"p\"));\n if (e!= null){\n //to check if paragraph is displayed and has text India\n if (e.isDisplayed() && e.getText().contains(\"India\")) {\n System.out.println(\"Element found\");\n return true;\n }\n else {\n System.out.println(\"Element not found\");\n return false;\n } \n }\n return false;\n }\n });\n driver.close();\n }\n}" } ]
How to create a long multi-line string in Python?
To create multiline strings, instead of using one pair of single/double quotes, we use three pairs. For example,

multiline_str = """
My
multi-line
string
"""
print(multiline_str)

This will give the output:

My
multi-line
string

Note that the triple-quote notation does not interpolate variables by itself; as with any other string literal, substitution still requires tools such as str.format() or f-strings. This notation is also used to define docstrings in Python.
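Because triple-quoted literals preserve line breaks, they are the conventional way to write docstrings. A minimal sketch (the function name and wording here are illustrative, not part of the original answer):

def greet(name):
    """Return a greeting for the given name.

    This triple-quoted string is the function's docstring and is
    available at runtime via greet.__doc__ or help(greet).
    """
    return "Hello, " + name

print(greet.__doc__)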
R - Switch Statement
A switch statement allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each case.

The basic syntax for creating a switch statement in R is −

switch(expression, case1, case2, case3....)

The following rules apply to a switch statement −

If the value of expression is not a character string, it is coerced to integer.

You can have any number of case statements within a switch. When switching on a character string, each case is written as a name = value pair.

If the value of the integer is between 1 and nargs() − 1 (the maximum number of arguments), then the corresponding case is evaluated and the result returned.

If expression evaluates to a character string, then that string is matched (exactly) against the names of the cases.

If there is more than one match, the first matching element is returned.

No default argument is available. In the case of no match, if there is an unnamed case among the arguments, its value is returned. (If there is more than one such argument, an error is returned.)

x <- switch(
   3,
   "first",
   "second",
   "third",
   "fourth"
)
print(x)

When the above code is executed, it produces the following result −

[1] "third"
Data Structures | Heap | Question 6 - GeeksforGeeks
What is the content of the array after two delete operations on the correct answer to the previous question?
(A) 14,13,12,10,8
(B) 14,12,13,8,10
(C) 14,13,8,12,10
(D) 14,13,12,8,10

Answer: (D)

Explanation: For heap trees, deletion of a node consists of the following two operations.
1) Replace the root with the last element on the last level.
2) Starting from the root, heapify the complete tree from top to bottom.

Let us delete the two nodes one by one:

1) Deletion of 25: Replace 25 with the last element, 12

        12
       /  \
      14   16
     / \   /
    13 10 8

Since the heap property is violated at the root (the child 16 is greater than 12), swap 12 with its larger child 16.

        16
       /  \
      14   12
     / \   /
    13 10 8

2) Deletion of 16: Replace 16 with the last element, 8

        8
       / \
      14  12
     / \
    13 10

Heapify from root to bottom: first swap 8 with its larger child 14,

        14
       /  \
      8    12
     / \
    13 10

then swap 8 with its larger child 13.

        14
       /  \
      13   12
     / \
    8  10

The array is therefore 14, 13, 12, 8, 10.
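The same two-step deletion can be written as code. Below is a hedged Python sketch of extract-max on an array-based max-heap (the function and variable names are illustrative); running it twice on the heap from the question reproduces answer (D):

def extract_max(heap):
    """Remove and return the maximum element of an array-based max-heap."""
    root = heap[0]
    # Step 1: replace the root with the last element on the last level
    heap[0] = heap[-1]
    heap.pop()
    # Step 2: heapify from the root downwards
    i, n = 0, len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest
    return root

heap = [25, 14, 16, 13, 10, 8, 12]  # the heap before the deletions
extract_max(heap)
extract_max(heap)
print(heap)  # [14, 13, 12, 8, 10]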
Beware of the Dummy variable trap in pandas | by Parul Pandey | Towards Data Science
Handling categorical variables forms an essential component of a machine learning pipeline. While machine learning algorithms can naturally handle numerical variables, the same is not true of their categorical counterparts. Although there are algorithms like LightGBM and CatBoost that can inherently handle categorical variables, that is not the case with most other algorithms. These categorical variables have to be converted into numerical quantities before they can be fed into the machine learning algorithms. There are many ways to encode categorical variables, such as one-hot encoding, ordinal encoding, and label encoding, but this article looks at pandas' dummy variable encoding and exposes its potential limitation.

A variable whose value ranges over categories, such as gender, hair color, ethnicity, zip code, or social security number, is called a categorical variable. The sum of two zip codes or social security numbers is not meaningful. Similarly, the average of a list of zip codes doesn't make sense. Categorical variables can be divided into two subcategories based on the kind of elements they group:

Nominal variables are those whose categories do not have a natural order or ranking. For example, we could use 1 for the red color and 2 for blue. But these numbers don't have a mathematical meaning: we can't add them together or take the average. Examples that fit in this category are gender, postal codes, hair color, etc.

Ordinal variables have an inherent order which is somehow significant. An example would be tracking student grades, where Grade 1 > Grade 2 > Grade 3. Another example would be the socio-economic status of people, where "high income" > "low income".

Now that we know what categorical variables are, it becomes clear that we cannot use them directly in machine learning models. They have to be converted into meaningful numerical representations. This process is called encoding. There are a lot of techniques for encoding categorical variables, but we will specifically look at the one provided by the pandas library, called get_dummies() (documented at pandas.pydata.org).

As the name suggests, the pandas.get_dummies() function converts categorical variables into dummy or indicator variables. Let's see it working through an elementary example. We first define a hypothetical dataset consisting of attributes of employees of a company and use it to predict the employees' salaries. Our dataset looks like this:

df

We can see that there are two categorical columns in the above dataset, i.e. Gender and EducationField. Let's encode them into numerical quantities using pandas.get_dummies(), which returns a dummy-encoded dataframe.

pd.get_dummies(df)

The column Gender gets converted into two columns, Gender_Female and Gender_Male, having values of either zero or one. For instance, Gender_Female has a value = 1 at places where the concerned employee is female and value = 0 where not. The same is true for the column Gender_Male.

Similarly, the column EducationField also gets separated into three different columns based on the field of education. Things are pretty much apparent till now. However, the issue begins when we use this encoded dataset to train a model.

Let's say we want to use the given data to build a machine learning model that can predict employees' monthly salaries. This is a classic example of a regression problem where the target variable is MonthlyIncome.
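Since the dataframe itself was rendered as an image in the original post, here is a hedged sketch that builds a comparable toy version of the employee dataset; the column names come from the article, but the individual rows are invented:

import pandas as pd

# Hypothetical reconstruction of the article's employee dataset
df = pd.DataFrame({
    "Gender": ["Female", "Male", "Male", "Female", "Male"],
    "EducationField": ["Life Sciences", "Medical", "Marketing",
                       "Life Sciences", "Medical"],
    "MonthlyIncome": [5993, 5130, 2090, 2909, 3468],
})
print(df)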
If we were to use pandas.get_dummies() to encode the categorical variables, the following issues could arise:

Note: The above diagram explains multicollinearity very intuitively. Thanks to Karen Grace-Martin for explaining the concept in such a lucid manner. Refer to the link below to go to the article.

www.theanalysisfactor.com

One of the assumptions of a regression model is that the observations must be independent of each other. Multicollinearity occurs when independent variables in a regression model are correlated. So why is correlation a problem? To help you understand the concept in detail and avoid re-inventing the wheel, I'll point you to a great piece by Jim Frost, where he explains it very succinctly. The following paragraph is from the same article.

A key goal of regression analysis is to isolate the relationship between each independent variable and the dependent variable. The interpretation of a regression coefficient is that it represents the mean change in the dependent variable for each 1 unit change in an independent variable when you hold all of the other independent variables constant.

If all the variables are correlated, it will become difficult for the model to tell how strongly a particular variable affects the target, since all the variables are related. In such a case, the coefficients of a regression model will not convey the correct information.

Consider the employee example above. Let's isolate the Gender column from the dataset and encode it.

If we look closely, the Gender_Female and Gender_Male columns are multicollinear. This is because a value of 1 in one column automatically implies 0 in the other. This issue is termed the dummy variable trap and can be represented as:

Gender_Female = 1 - Gender_Male
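A quick hedged check on the toy dataframe sketched earlier makes the trap concrete: the two Gender dummy columns are perfectly negatively correlated.

# Encode only the Gender column of the toy dataframe from above
gender_encoded = pd.get_dummies(df["Gender"], prefix="Gender").astype(int)
print(gender_encoded)

# Perfect negative correlation: Gender_Female = 1 - Gender_Male
print(gender_encoded["Gender_Female"].corr(gender_encoded["Gender_Male"]))  # -1.0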
Multicollinearity is undesirable, and every time we encode variables with pandas.get_dummies(), we'll encounter this issue. One way to overcome it is by dropping one of the generated columns. So, we can drop either Gender_Female or Gender_Male without potentially losing any information. Fortunately, pandas.get_dummies() has a parameter called drop_first which, when set to True, does precisely that.

pd.get_dummies(df, drop_first=True)

We've resolved multicollinearity, but another issue lurks when we use dummy encoding, which we will look at in the next section.

To train a model with the given employee data, we'll first split the dataset into train and test sets, keeping the test set aside so that our model never sees it.

from sklearn.model_selection import train_test_split

X = df.drop('MonthlyIncome', axis=1)
y = df['MonthlyIncome']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

The next step would be to encode the categorical variables in the training set and the test set.

Encoding the training set:

pd.get_dummies(X_train)

As expected, both the Gender and the EducationField attributes have been encoded into numerical quantities. Now we'll apply the same process to the test dataset.

Encoding the test set:

pd.get_dummies(X_test)

Wait! There is a column mismatch between the training and test sets. This means the number of columns in the training set is not equal to the number in the test set (a category that never appears in the test split produces no dummy column there), and this will throw an error in the modeling process.

One way of addressing this mismatch in categories would be to save the columns obtained after dummy encoding the training set in a list. Then, encode the test set as usual and use the columns of the encoded training set to align both datasets. Let's understand it through code:

# Dummy encoding the training set
X_train_encoded = pd.get_dummies(X_train)
# Saving the columns in a list
cols = X_train_encoded.columns.tolist()
# Viewing the first three rows of the encoded dataframe
X_train_encoded[:3]

Now, we'll encode the test set, followed by realigning the training and test columns and filling in all missing values with zero.

X_test_encoded = pd.get_dummies(X_test)
X_test_encoded = X_test_encoded.reindex(columns=cols).fillna(0)
X_test_encoded

As you can see, both datasets now have the same number of columns.

Another solution, and a preferable one, would be to use sklearn.preprocessing.OneHotEncoder(). Additionally, one can use handle_unknown="ignore" to solve the potential issues due to rare categories. Note that the encoder is fitted on the training set only and then applied to the test set with transform.

# One-hot encoding the categorical columns in the training set
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder(sparse=False, handle_unknown='ignore')
train_enc = ohe.fit_transform(X_train[['Gender', 'EducationField']])
# Converting back to a dataframe
pd.DataFrame(train_enc, columns=ohe.get_feature_names())[:3]

# Transforming the test set with the encoder fitted on the training data
test_enc = ohe.transform(X_test[['Gender', 'EducationField']])
# Converting back to a dataframe
pd.DataFrame(test_enc, columns=ohe.get_feature_names())

Note, you can also drop one of the categories per feature in OneHotEncoder by setting the parameter drop='if_binary'. Refer to the documentation for more detail.
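As a hedged illustration of that parameter (this follows the article-era scikit-learn API; newer releases rename sparse to sparse_output and get_feature_names to get_feature_names_out):

# drop='if_binary' drops one column only for binary features such as Gender;
# multi-category features such as EducationField keep all of their columns
ohe_binary = OneHotEncoder(sparse=False, drop='if_binary')
train_enc_binary = ohe_binary.fit_transform(X_train[['Gender', 'EducationField']])
print(ohe_binary.get_feature_names())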
This article looked at how pandas can be used to encode categorical variables and the common caveats associated with it. We also looked in detail at the plausible solutions to avoid those pitfalls. I hope this article has given you intuition into what a dummy variable trap is and how it can be avoided. Also, the two articles referenced in this post are a great reference, especially if you want to go deeper into issues related to multicollinearity. I highly recommend them.
C# | Inheritance - GeeksforGeeks
Inheritance is an important pillar of OOP (Object Oriented Programming). It is the mechanism in C# by which one class is allowed to inherit the features (fields and methods) of another class.

Important terminology:

Super Class: The class whose features are inherited is known as a super class (or a base class or a parent class).

Sub Class: The class that inherits the other class is known as a subclass (or a derived class, extended class, or child class). The subclass can add its own fields and methods in addition to the superclass fields and methods.

Reusability: Inheritance supports the concept of "reusability", i.e. when we want to create a new class and there is already a class that includes some of the code that we want, we can derive our new class from the existing class. By doing this, we are reusing the fields and methods of the existing class.

How to use inheritance

The symbol used for inheritance is :. Syntax:

class derived-class : base-class
{
   // methods and fields
   .
   .
}

Example: In the example below, class GFG is a base class, class GeeksforGeeks is a derived class which extends the GFG class, and class Sudo is a driver class to run the program.

// C# program to illustrate the
// concept of inheritance
using System;

namespace ConsoleApplication1 {

// Base class
class GFG {

    // data members
    public string name;
    public string subject;

    // public method of base class
    public void readers(string name, string subject)
    {
        this.name = name;
        this.subject = subject;
        Console.WriteLine("Myself: " + name);
        Console.WriteLine("My Favorite Subject is: " + subject);
    }
}

// inheriting the GFG class using :
class GeeksforGeeks : GFG {

    // constructor of derived class
    public GeeksforGeeks()
    {
        Console.WriteLine("GeeksforGeeks");
    }
}

// Driver class
class Sudo {

    // Main Method
    static void Main(string[] args)
    {
        // creating object of derived class
        GeeksforGeeks g = new GeeksforGeeks();

        // calling the method of base class
        // using the derived class object
        g.readers("Kirti", "C#");
    }
}
}

Output:

GeeksforGeeks
Myself: Kirti
My Favorite Subject is: C#

Types of Inheritance in C#

Below are the different types of inheritance which are supported by C# in different combinations.
Single Inheritance: In single inheritance, subclasses inherit the features of one superclass. In the image below, the class A serves as a base class for the derived class B.

Multilevel Inheritance: In multilevel inheritance, a derived class inherits a base class and also acts as the base class for another class. In the image below, class A serves as a base class for the derived class B, which in turn serves as a base class for the derived class C.

Hierarchical Inheritance: In hierarchical inheritance, one class serves as a superclass (base class) for more than one subclass. In the image below, class A serves as a base class for the derived classes B, C, and D.

Multiple Inheritance (Through Interfaces): In multiple inheritance, one class can have more than one superclass and inherit features from all parent classes. Please note that C# does not support multiple inheritance with classes. In C#, we can achieve multiple inheritance only through interfaces. In the image below, class C is derived from interfaces A and B.

Hybrid Inheritance (Through Interfaces): It is a mix of two or more of the above types of inheritance. Since C# doesn't support multiple inheritance with classes, hybrid inheritance is also not possible with classes. In C#, we can achieve hybrid inheritance only through interfaces.

Important facts about inheritance in C#

Default Superclass: Except the Object class, which has no superclass, every class has one and only one direct superclass (single inheritance). In the absence of any other explicit superclass, every class is implicitly a subclass of the Object class.

Superclass can only be one: A superclass can have any number of subclasses, but a subclass can have only one superclass. This is because C# does not support multiple inheritance with classes. With interfaces, however, multiple inheritance is supported by C#.

Inheriting Constructors: A subclass inherits all the members (fields, methods) from its superclass. Constructors are not members, so they are not inherited by subclasses, but the constructor of the superclass can be invoked from the subclass.

Private member inheritance: A subclass does not inherit the private members of its parent class. However, if the superclass has properties (get and set methods) for accessing its private fields, then a subclass can access those fields through the properties.
Bluetooth - GeeksforGeeks
It is a Wireless Personal Area Network (WPAN) technology and is used for exchanging data over short distances. This technology was invented by Ericsson in 1994. It operates in the unlicensed industrial, scientific and medical (ISM) band from 2.4 GHz to 2.485 GHz. A maximum of 7 devices can be connected at the same time. Bluetooth ranges up to 10 meters. It provides data rates up to 1 Mbps or 3 Mbps depending upon the version. The spreading technique it uses is FHSS (Frequency Hopping Spread Spectrum). A Bluetooth network is called a piconet and a collection of interconnected piconets is called a scatternet.

Bluetooth Architecture:

The architecture of Bluetooth defines two types of networks:

1. Piconet
2. Scatternet

Piconet:

A piconet is a type of Bluetooth network that contains one primary node, called the master node, and up to seven active secondary nodes, called slave nodes. Thus, we can say that there are a total of 8 active nodes, present within a distance of 10 meters. The communication between the primary and a secondary node can be one-to-one or one-to-many. Communication is possible only between the master and a slave; slave-slave communication is not possible. A piconet can also have up to 255 parked nodes; these are secondary nodes that cannot take part in communication unless they are moved to the active state.

Scatternet:

It is formed by combining various piconets. A slave that is present in one piconet can act as master (primary) in another piconet. Such a node can receive a message from the master in one piconet and deliver the message to its slaves in the other piconet where it is acting as master. This type of node is referred to as a bridge node. A station cannot be master in two piconets.

Bluetooth protocol stack:
Radio (RF) layer: It performs modulation/demodulation of the data into RF signals. It defines the physical characteristics of Bluetooth transceivers. It defines two types of physical links: connection-less and connection-oriented.

Baseband Link layer: It performs the connection establishment within a piconet.

Link Manager protocol layer: It performs the management of the already established links. It also includes authentication and encryption processes.

Logical Link Control and Adaptation protocol layer: It is also known as the heart of the Bluetooth protocol stack. It allows the communication between upper and lower layers of the Bluetooth protocol stack. It packages the data packets received from upper layers into the form expected by lower layers. It also performs segmentation and multiplexing.

SDP layer: It is short for Service Discovery Protocol. It allows a device to discover the services available on another Bluetooth-enabled device.

RFCOMM layer: It is short for Radio Frequency Communication. It provides a serial interface used by WAP and OBEX.

OBEX: It is short for Object Exchange. It is a communication protocol to exchange objects between 2 devices.

WAP: It is short for Wireless Application Protocol. It is used for internet access.

TCS: It is short for Telephony Control Specification. It provides telephony service.

Application layer: It enables the user to interact with the application.

Advantages:

Low cost.
Easy to use.
It can also penetrate through walls.
It creates an ad hoc connection immediately without any wires.
It is used for voice and data transfer.

Disadvantages:

It can be hacked and hence is less secure.
It has a slow data transfer rate: 3 Mbps.
It has a small range: 10 meters.
How to Simulate a Pandemic in Python | by Terence Shin | Towards Data Science
What's a better time to simulate the spread of a disease than during a global pandemic? I don't have much more to say, so let's jump right into programming a simple disease simulation.

In real life, there are hundreds of factors that affect how fast a contagion spreads, both from person to person and on a broader population-wide scale. I'm no epidemiologist, but I've done my best to set up a fairly basic simulation that can mimic how a virus infects people and spreads throughout a population.

In my program, I will be using object-oriented programming. With this method, we could theoretically customize individual people and add in more events and factors, such as more complicated social dynamics. Keep in mind that this is an introduction and serves as the most basic model that can be built on top of.

Fundamentally, our program will function around a single concept: any given person who is infected by our simulation's disease has the potential to spread it to whoever they meet. Each person in our "peopleDictionary" will have a set number of friends (Gaussian randomization for realism), and they may meet any one or more of these friends on a day-to-day basis.

For our starting round of simulations, we won't implement face masks or lockdowns; we'll just let the virus spread when people meet their friends and see if we can get that iconic pandemic "curve" which the news always talks about flattening.

So, we'll use a Person() class and add a few characteristics. Firstly, we'll assume that some very tiny percentage of the people simulated will already have immunity to our disease from the get-go, for whatever reason. I'm setting that at 1% (in reality, it'd be far lower, but because our simulation runs so fast, a larger portion like this makes a bit more sense). At the start of the simulation, the user will be prompted to enter this percentage.

Next, we have contagiousness, the all-important factor. When a person is not infected, this remains at 0. It also returns to 0 once a person ceases to be contagious and gains immunity. However, when a person is infected, this contagiousness value is somewhere between 0 and 100%, and it massively changes their chance of infecting a friend.

Before we implement this factor, we need to understand the Gaussian Distribution. This mathematical function allows us to generate more realistic random values between 1 and 100. Rather than the values being spread uniformly across the whole range, most of them cluster around the mean, making for a more realistic output. (Figure: the bell-shaped Gaussian probability curve, with most values near the mean.)

As you can see, this bell-shaped function will be a lot better for our random characteristic variables, because most people will have an average level of contagiousness rather than a purely random percentage. I'll show you how to implement this later.

We then have the variables "mask" and "lockdown", which are both boolean variables. These will be used to add a little bit of variety to our simulation after it is running.

Lastly, we have the "friends" variable for any given person. Just like contagiousness, this is drawn from a Gaussian Distribution that ends up with most people having about 5 friends that they regularly see. In our simulation, everyone lives in a super social society where, on average, a person meets 2 people face to face every day. In real life, this is probably not as realistic, but we're using it because we don't want a super slow simulation. Of course, you can make any modifications to the code that you like.
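To make that concrete, here is a small standalone snippet (not part of the simulation itself) that draws a handful of values the same way the simulation will. Note how the results cluster around the middle instead of spreading evenly across the range:

from scipy.stats import norm

# draw 10 values centred on 0.5 with standard deviation 0.15,
# then scale them to the 0-10 range, exactly as the Person class will
samples = [int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0)) for _ in range(10)]
print(samples)  # e.g. [5, 4, 6, 5, 7, 5, 3, 5, 6, 4], clustered around 5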
There are also a couple of other variables that will be used actively in the simulation, and I'll get to those as we go on!

So let's get coding this simulation! First, there are three imports we have to do:

from scipy.stats import norm
import random
import time

SciPy will allow us to calculate values within the Gaussian Distribution we talked about. The random library will be for any variables we need that should be purely random, and the time library is just for convenience if we want to run the simulation slowly and watch the spread of the disease.

Next, we create our Person() class:

# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        # a small percentage of people start out naturally immune
        if random.randint(0, 100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use Gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0))

    def wearMask(self):
        # wearing a mask halves this person's contagiousness
        self.contagiousness /= 2

Why are we passing the variable startingImmunity to this class exactly? Remember how we could enter what percentage of the population would have natural immunity from day 1? When the user gives this percentage, for every person "spawned" into our simulation we'll use random to find out if they're one of those lucky few to already be immune, in which case the self.immunity boolean is set to True, protecting them from all infection down the line.

The remaining class variables are self-explanatory, except self.friends, which uses the Gaussian Distribution we talked about. It's definitely worth reading the documentation to get a better idea of how this works!

# global list holding every simulated person
peopleDictionary = []

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary) - 1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

After setting up our class, we need a function to initiate the simulation. I'm calling this initiateSim(), and it'll prompt the user for the basic inputs: population, the percentage with natural immunity, the number of contagious people at day 0, and how many days a person will stay contagious for (the lockdown and mask prompts shown here come into play later in the article). This daysContagious variable should really be random, or even better, dependent on any number of personal health conditions, such as a compromised immune system, but let's keep it like this for a basic simulation. I found from testing that it is most interesting to run the simulation with a 4-9 day contagious period.

We spawn the inputted number of people into the simulation. To start the disease, we pick people at random to be our "startingInfecters". As you can see, we're assigning a Gaussian variable to each one for their level of contagiousness! (Any time a person is made contagious in the simulation, we'll repeat this process.)

We return the number of days someone will stay contagious for, as mentioned.
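One optional addition that is not in the original code: if you want two runs to be directly comparable, you can seed the random sources before calling initiateSim(). Both Python's random module and NumPy (which scipy.stats draws from) keep global state, so a minimal sketch looks like this:

import random
import numpy as np

random.seed(42)      # fixes the random.randint() rolls
np.random.seed(42)   # fixes the norm.rvs() draws used for contagiousness and friends

This is especially handy later, when we compare lockdown and mask scenarios against a baseline and want the population itself to be identical between runs.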
Now, this simulation will be done day by day, so let's set up a function:

def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends / 2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary) - 1)]
            if random.randint(0, 100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))

The runDay function takes daysContagious, which the second half of the function will use. In our first for loop, we're using a list comprehension to find the people who are capable of spreading the disease; that is, they are contagious and have friends. We're then calculating the number of people they could meet on that day. The maximum is 50% of their friends, and then we're using a standard random.randint() to generate how many they actually do meet on that day.

Then we use another embedded for loop to randomly select each friend that was met from the peopleDictionary[]. For the friend to have a chance of being infected, they can't be immune to the disease. They also have to have a contagiousness of 0: if they're already infected, the encounter won't influence them. We then use the infecter's contagiousness percentage in a random function to find out if the friendInQuestion will be infected. Finally, if they do get infected, we go ahead and assign them a Gaussian Distribution variable for their contagiousness!

I added in a simple print statement as a marker, which will allow us to follow the simulation in the console as it is running. At the end of our program, we'll add functionality to save the results to a text file anyway, but it's cool to see little tags that tell you who is infecting who.

Next part of our runDay() function:

for person in [person for person in peopleDictionary if person.contagiousness > 0]:
    person.contagiousDays += 1
    if person.contagiousDays > daysContagious:
        person.immunity = True
        person.contagiousness = 0
        print("|||", peopleDictionary.index(person), " |||")

Basically, all we're doing here is finding all the people who are contagious and incrementing their contagiousDays variable by 1. If they've been contagious for more days than the daysContagious time the user selected, they will become immune and hence their contagiousness drops to 0. (Again, another print marker to show that the given person has gained immunity.)

I know I could have put this in the previous for loop, but so as not to make my programming too dense, I separated it. Sue me.
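If the infection roll above feels abstract, this tiny standalone check (separate from the simulation) shows what a contagiousness of, say, 50 means in practice: roughly half of all encounters with a susceptible friend end in an infection.

import random

contagiousness = 50
trials = 10000
# repeat the same roll that runDay() makes for each encounter
infections = sum(1 for _ in range(trials) if random.randint(0, 100) < contagiousness)
print(infections / trials)  # approximately 0.5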
Finally, to tie it all together, we need to do a bit of admin (this is the final version of the management code; the lockdown and mask pieces shown here are explained in the next sections):

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    # record how many people are contagious at the end of the day
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()

This is pretty self-explanatory. We get the daysContagious value by initiating the simulation, we open our save file, then cycle through the days up to day 100. Each day we use a list comprehension to get the number of people contagious and write it to our save file. I also added one final print statement so we can track the disease's progression in the console.

And that's it! I only explained the basics of the code, but let's talk about the extra variables that you may have noticed...

Adding a lockdown variable is quite simple. First, add this in before the section where we cycle through each of the friends a person meets (see code above):

if lockdown == True:
    peopleMetToday = 0
for x in range(0, peopleMetToday):

Now, you want to select when the lockdown is enforced? No problem. Add a user prompt right inside your initiateSim() function, and return the new value alongside daysContagious:

lockdownDay = int(input("Day for lockdown to be enforced: "))
return daysContagious, lockdownDay

Return it, and update the function call. Then, we need to define our lockdown boolean, and set it to true when we reach the correct date:

lockdown = False
daysContagious, lockdownDay = initiateSim()
saveFile = open("pandemicsave2.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    print("DAY ", x)

You can see that I just added 3 more lines into where we manage the simulation. Simple and easy. Then you will want to pass the lockdown boolean to your runDay() function and make sure the runDay() function can accept it:

runDay(daysContagious, lockdown)

And:

def runDay(daysContagious, lockdown):

That's the lockdown added. See the results section to find out how the implementation of a lockdown affected the spread of the disease!

Finally, we want to add face masks. I could add all sorts of ways that this changes how a disease spreads, but for us, we'll just use it to decrease each person's contagiousness. All we have to do is give the Person() class a function that tells them to wear a face mask:

def wearMask(self):
    self.contagiousness /= 2

Yep, just halving their contagiousness if they wear a mask. Update initiateSim() so we can ask the user for what date the masks should come into use:

maskDay = int(input("Day for masks to be used: "))
return daysContagious, lockdownDay, maskDay

And update our call:

daysContagious, lockdownDay, maskDay = initiateSim()

Finally, we'll edit the section where we cycle through the days so that if the day reaches maskDay, then we tell every person to run their wearMask() function:

if x == maskDay:
    for person in peopleDictionary:
        person.wearMask()

If only it was this easy in real life, right?

Well what do you know, we've created a simple pandemic simulation with the ability to simulate each individual person, change attributes of the virus, enforce lockdowns, and make people wear face masks. Let's look at our results:

I'm putting all the data gathered from my text save files into Excel.
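If you would rather stay in Python, here is a minimal matplotlib sketch (my addition, not part of the original workflow) that plots the curve straight from the save file written above:

import matplotlib.pyplot as plt

# one integer per line: the number of contagious people on each day;
# note the simulation appends to this file, so delete it between runs
with open("pandemicsave3.txt") as saveFile:
    cases = [int(line) for line in saveFile if line.strip()]

plt.plot(cases)
plt.xlabel("Day")
plt.ylabel("Contagious people")
plt.title("Simulated spread of the disease")
plt.show()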
5000 people, 1 starting infecter, 1% starting immunity, 7 days contagious, no lockdown or masks: (Figure: the infection curve for this run.)

As expected, a nice smooth curve, almost mathematically perfect. By the end of the simulation, everyone has gained immunity and the cases drop to 0, which continues until all the days have completed.

Now let's see what happens to the previous result when you implement some countermeasures: (Figure: four infection curves compared; blue with no countermeasures, orange with lockdown on day 15, gray with lockdown on day 20, yellow with masks from day 15.)

Now what we have here is really interesting. Take the blue line. This is the simulation without any countermeasures, just like our previous result. However, when we implement a lockdown on day 15, it has a huge effect on the orange line; the spread of the disease is curbed before it can really take off, and look at that gradual curve back down again: that's where there are no new cases and people are gradually becoming immune!

We can then compare that to the gray line, where we implement lockdown just 5 days later than orange. It has a drastically lower effect, because that five-day delay really made a difference to the number of cases.

Finally, take a look at the yellow line. This is where we implement face masks, and it's probably the most interesting simulation of all. You can see at day 15, there is a sudden change in the gradient of the line, which affects how fast the disease spreads. It probably would have increased much more rapidly without the face masks! Around day 21, there is a peak, and thanks to the masks, it is substantially less than the blue line, where there were no countermeasures! There is also a tiny secondary peak, and the overall summit of the curve lasts longer than in any other simulation. Can you figure out why?

Just to clarify, this was supposed to be a simple simulation. It is, of course, very basic, with very limited parameters and functionality. However, it is incredible to see how much we can learn from a simulation that takes up barely a hundred lines of code. It really puts into perspective the impact lockdowns and face masks had.

I encourage anyone reading this with a programming mindset to go out and improve my code. I'd recommend the following features:

- Face masks randomly (Gaussian?) affect contagiousness
- Not everyone obeys lockdown, and even for those who do, there is a chance of an infection happening, say, during a grocery shopping trip
- A certain percentage of people wear face masks, and this varies on a day-to-day basis
- More social dynamics, or parameters in general
- The idea of communities

If anyone does take on the challenge of upgrading this code (see the sketch below for one way to start on the lockdown idea), I'd love to see what results you get from playing around with the factors. Thanks for reading!
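As a starting point for the lockdown suggestion above, here is one hypothetical tweak to runDay(): instead of cutting every contagious person's meetings to zero, assume only about 90% of people obey the lockdown on any given day (the 90 is an arbitrary assumption of mine, not something from the results above):

# inside runDay(), replacing the original lockdown check
if lockdown == True and random.randint(0, 100) < 90:
    # this person obeys the lockdown and meets nobody today
    peopleMetToday = 0

With this change, a lockdown flattens the curve without fully stopping transmission, which tends to look closer to the real-world data.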
Full code:

from scipy.stats import norm
import random
import time

peopleDictionary = []

# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0, 100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use Gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0))

    def wearMask(self):
        self.contagiousness /= 2

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary) - 1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends / 2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary) - 1)]
            if random.randint(0, 100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0) * 10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
    # age every infection by one day and grant immunity once it has run its course
    for person in [person for person in peopleDictionary if person.contagiousness > 0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()

I hope you found this entertaining and possibly inspiring! There are so many ways that you can improve this model, so I encourage you to see what you can build and see if you can simulate real life even more closely.

As always, I wish you the best in your endeavors!
[ { "code": null, "e": 355, "s": 172, "text": "What’s a better time to simulate the spread of a disease than during a global pandemic? I don’t have much more to say — let’s jump right into programming a simple disease simulation." }, { "code": null, "e": 669, "s": 355, "text": "In real life, there are hundreds of factors that affect how fast a contagion spreads, both from person to person and on a broader population-wide scale. I’m no epidemiologist but I’ve done my best to set up a fairly basic simulation that can mimic how a virus can infect people and spread throughout a population." }, { "code": null, "e": 874, "s": 669, "text": "In my program, I will be using object-based programming. With this method, we could theoretically customize individual people and add in more events and factors — such as more complicated social dynamics." }, { "code": null, "e": 980, "s": 874, "text": "Keep in mind that this is an introduction and serves as the most basic model that can be built on top of." }, { "code": null, "e": 1344, "s": 980, "text": "Fundamentally, our program will function around a single concept: any given person who is infected by our simulation’s disease has the potential to spread it to whoever they meet. Each person in our “peopleDictionary” will have a set number of friends (gaussian randomization for accuracy) and they may meet any one or more of these friends on a day to day basis." }, { "code": null, "e": 1588, "s": 1344, "text": "For our starting round of simulations, we won’t implement face masks or lockdowns — we’ll just let the virus spread when people meet their friends and see if we can get that iconic pandemic “curve” which the news always talks about flattening." }, { "code": null, "e": 2037, "s": 1588, "text": "So, we’ll use a Person() class and add a few characteristics. Firstly, we’ll assume that some very tiny percentage of characters simulated will already have immunity to our disease from the get-go, for whatever reason. I’m setting that at 1% (in reality, it’d be far lower but because our simulation runs so fast, a large portion like this makes a bit more sense). At the start of the simulation, the user will be prompted to enter this percentage." }, { "code": null, "e": 2374, "s": 2037, "text": "Next, we have contagiousness, the all-important factor. When a person is not infected, this remains at 0. It also returns to 0 once a person ceases to be contagious and gains immunity. However, when a person is infected, this contagious value is somewhere between 0 and 100%, and it massively changes their chance of infecting a friend." }, { "code": null, "e": 2719, "s": 2374, "text": "Before we implement this factor, we need to understand Gaussian Distribution. This mathematical function allows us to more accurately calculate random values between 1 and 100. Rather than the values being distributed purely randomly across the spectrum, most of them cluster around the median average point, making for a more realistic output:" }, { "code": null, "e": 2971, "s": 2719, "text": "As you can see, this bell-shaped function will be a lot better for our random characteristic variables because most people will have an average level of contagiousness, rather than a purely random percentage. I’ll show you how to implement this later." }, { "code": null, "e": 3143, "s": 2971, "text": "We then have the variables “mask” and “lockdown” which are both boolean variables. These will be used to add a little bit of variety to our simulation after it is running." 
}, { "code": null, "e": 3653, "s": 3143, "text": "Lastly, we have the “friends” variable for any given person. Just like contagiousness, this is a Gaussian Distribution that ends up with most people having about 5 friends that they regularly see. In our simulation, everyone lives in a super social society where on average a person meets with 2 people face to face every day. In real life, this is probably not as realistic but we’re using it because we don’t want a super slow simulation. Of course, you can make any modifications to the code that you like." }, { "code": null, "e": 3776, "s": 3653, "text": "There are also a couple of other variables that will be used actively in the simulation and I’ll get to those as we go on!" }, { "code": null, "e": 3859, "s": 3776, "text": "So let’s get coding this simulation! First, there are three imports we have to do:" }, { "code": null, "e": 3912, "s": 3859, "text": "from scipy.stats import normimport randomimport time" }, { "code": null, "e": 4207, "s": 3912, "text": "SciPy will allow us to calculate values within the Gaussian Distribution we talked about. The random library will be for any variables we need that should be purely random, and the time library is just for convenience if we want to run the simulation slowly and watch the spread of the disease." }, { "code": null, "e": 4243, "s": 4207, "text": "Next, we create our Person() class:" }, { "code": null, "e": 4759, "s": 4243, "text": "# simulation of a single personclass Person(): def __init__(self, startingImmunity): if random.randint(0,100)<startingImmunity: self.immunity = True else: self.immunity = False self.contagiousness = 0 self.mask = False self.contagiousDays = 0 #use gaussian distribution for number of friends; average is 5 friends self.friends = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)) def wearMask(self): self.contagiousness /= 2" }, { "code": null, "e": 5209, "s": 4759, "text": "Why are we passing the variable startingImmunity to this class exactly? Remember how we could enter what percentage of the population would have natural immunity from day 1? When the user gives this percentage, for every person “spawned” into our simulation we’ll use random to find out if they’re one of those lucky few to already be immune — in which case the self.immunity boolean is set to True, protecting them from all infection down the line." }, { "code": null, "e": 5422, "s": 5209, "text": "The remaining class variables are self-explanatory, except self.friends, which is the Gaussian Distribution we talked about. It’s definitely worth reading the documentation to get a better idea of how this works!" }, { "code": null, "e": 6146, "s": 5422, "text": "def initiateSim(): numPeople = int(input(\"Population: \")) startingImmunity = int(input(\"Percentage of people with natural immunity: \")) startingInfecters = int(input(\"How many people will be infectious at t=0: \")) for x in range(0,numPeople): peopleDictionary.append(Person(startingImmunity)) for x in range(0,startingInfecters): peopleDictionary[random.randint(0,len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10) daysContagious = int(input(\"How many days contagious: \")) lockdownDay = int(input(\"Day for lockdown to be enforced: \")) maskDay = int(input(\"Day for masks to be used: \")) return daysContagious, lockdownDay, maskDay" }, { "code": null, "e": 6735, "s": 6146, "text": "After setting up our class, we need a function to initiate the simulation. 
I’m calling this initiateSim() and it’ll prompt the user for four inputs — population, natural immunity population, contagious people at day 0, and how many days a person will stay contagious for. This daysContagious variable should actually be random — or even better, dependent on any number of personal health conditions, such as immune compromisation — but let’s keep it like this for a basic simulation. I found from testing that it is most interesting to run the simulation with a 4–9 day contagious period." }, { "code": null, "e": 7056, "s": 6735, "text": "We spawn the inputted number of people into the simulation. To start the disease, we pick people at random to be our “startingInfecters”. As you can see, we’re assigning a Gaussian variable to each one for their level of contagiousness! (Any time a person is made contagious in the simulation we’ll repeat this process.)" }, { "code": null, "e": 7135, "s": 7056, "text": "We return the number of days someone will stay contagious for, like mentioned." }, { "code": null, "e": 7209, "s": 7135, "text": "Now, this simulation will be done day by day, so let’s set up a function:" }, { "code": null, "e": 8201, "s": 7209, "text": "def runDay(daysContagious, lockdown): #this section simulates the spread, so it only operates on contagious people, thus: for person in [person for person in peopleDictionary if person.contagiousness>0 and person.friends>0]: peopleCouldMeetToday = int(person.friends/2) if peopleCouldMeetToday > 0: peopleMetToday = random.randint(0,peopleCouldMeetToday) else: peopleMetToday = 0 if lockdown == True: peopleMetToday= 0 for x in range(0,peopleMetToday): friendInQuestion = peopleDictionary[random.randint(0,len(peopleDictionary)-1)] if random.randint(0,100)<person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity==False: friendInQuestion.contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10) print(peopleDictionary.index(person), \" >>> \", peopleDictionary.index(friendInQuestion))" }, { "code": null, "e": 8651, "s": 8201, "text": "The runDay function takes daysContagious for reasons explained later. In our first for loop, we’re using a list comprehension to find the people who are capable of spreading the disease — that is, they are contagious and have friends. We’re then calculating the number of people they could meet on that day. The maximum is 50% of their friends, and then we’re using a standard random.randint() to generate how many they actually do meet on that day." }, { "code": null, "e": 9211, "s": 8651, "text": "Then we use another embedded for loop to randomly select each friend that was met from the peopleDictionary[]. For the friend to have a chance of being infected, they can’t be immune to the disease. They also have to have a contagiousness of 0 — if they’re already infected, the encounter won’t influence them. We then use the infecter’s contagiousness percentage in a random function to find out if the friendInQuestion will be infected. Finally, if they do get infected, we go ahead and assign them a Gaussian Distribution variable for their contagiousness!" }, { "code": null, "e": 9500, "s": 9211, "text": "I added in a simple print statement as a marker which will allow us to follow the simulation in the console as it is running. At the end of our program, we’ll add functionality to save the results to a text file anyway, but it’s cool to see little tags that tell you who is infecting who." 
}, { "code": null, "e": 9536, "s": 9500, "text": "Next part of our runDay() function:" }, { "code": null, "e": 9837, "s": 9536, "text": "for person in [person for person in peopleDictionary if person.contagiousness>0]: person.contagiousDays += 1 if person.contagiousDays > daysContagious: person.immunity = True person.contagiousness = 0 print(\"|||\", peopleDictionary.index(person), \" |||\")" }, { "code": null, "e": 10204, "s": 9837, "text": "Basically, all we’re doing here is finding all the people who are contagious and incrementing their contagiousDays variable by 1. If they’ve been contagious for more days than the daysContagious time the user selected, they will become immune and hence their contagiousness drops to 0. (Again, another print marker to show that the given person has gained immunity.)" }, { "code": null, "e": 10324, "s": 10204, "text": "I know I could have put this in the previous for loop but not to make my programming too dense, I separated it. Sue me." }, { "code": null, "e": 10387, "s": 10324, "text": "Finally, to tie it all together, we need to do a bit of admin:" }, { "code": null, "e": 10984, "s": 10387, "text": "lockdown = FalsedaysContagious, lockdownDay, maskDay = initiateSim()saveFile = open(\"pandemicsave3.txt\", \"a\")for x in range(0,100): if x==lockdownDay: lockdown = True if x == maskDay: for person in peopleDictionary: person.wearMask() print(\"DAY \", x) runDay(daysContagious,lockdown) write = str(len([person for person in peopleDictionary if person.contagiousness>0])) + \"\\n\" saveFile.write(write) print(len([person for person in peopleDictionary if person.contagiousness>0]), \" people are contagious on this day.\")saveFile.close()" }, { "code": null, "e": 11349, "s": 10984, "text": "This is pretty self-explanatory. We get the daysContagious value by initiating the simulation, we open our save file, then cycle through the days up to day 100. Each day we use a list comprehension to get the number of people contagious and write it to our save file. I also added one final print statement so we can track the disease’s progression in the console." }, { "code": null, "e": 11475, "s": 11349, "text": "And that’s it! I only explained the basics of the code, but let’s talk about the extra variables that you may have noticed..." }, { "code": null, "e": 11633, "s": 11475, "text": "Adding a lockdown variable is quite simple. First, add this in before the section where we cycle through each of the friends a person meets (see code above):" }, { "code": null, "e": 11709, "s": 11633, "text": "if lockdown == True: peopleMetToday = 0for x in range(0, peopleMetToday):" }, { "code": null, "e": 11836, "s": 11709, "text": "Now, you want to select when the lockdown is enforced? No problem. Add a user prompt tight inside your initiateSim() function." }, { "code": null, "e": 11937, "s": 11836, "text": "lockdownDay = int(input(\"Day for lockdown to be enforced: \"))return daysContagiousreturn lockdownDay" }, { "code": null, "e": 12075, "s": 11937, "text": "Return it, and update the function call. Then, we need to define our lockdown boolean, and set it to true when we reach the correct date:" }, { "code": null, "e": 12261, "s": 12075, "text": "lockdown = FalsedaysContagious, lockdownDay = initiateSim()saveFile = open(\"pandemicsave2.txt\", \"a\")for x in range(0,100): if x == lockdownDay: lockdown = True print(\"DAY \", x)" }, { "code": null, "e": 12483, "s": 12261, "text": "You can see that I just added 3 more lines into where we manage the simulation. 
Simple and easy, then you will want to pass the lockdown boolean to your runDay() function and make sure the runDay() function can accept it:" }, { "code": null, "e": 12516, "s": 12483, "text": "runDay(daysContagious, lockdown)" }, { "code": null, "e": 12521, "s": 12516, "text": "And:" }, { "code": null, "e": 12559, "s": 12521, "text": "def runDay(daysContagious, lockdown):" }, { "code": null, "e": 12695, "s": 12559, "text": "That’s the lockdown added. See the results section to find out how the implementation of a lockdown affected the spread of the disease!" }, { "code": null, "e": 12966, "s": 12695, "text": "Finally, we want to add facemasks. I could add all sorts of ways that this changes how a disease spreads, but for us, we’ll just use it to decrease each person’s contagiousness. All we have to do is give the Person() class a function that tells them to wear a face mask:" }, { "code": null, "e": 13013, "s": 12966, "text": "def wearMask(self): self.contagiousness /= 2" }, { "code": null, "e": 13163, "s": 13013, "text": "Yep, just halving their contagiousness is they wear a mask. Update initiateSim() so we can ask the user for what date the masks should come into use:" }, { "code": null, "e": 13257, "s": 13163, "text": "maskDay = int(input(\"Day for masks to be used: \"))return daysContagious, lockdownDay, maskDay" }, { "code": null, "e": 13278, "s": 13257, "text": "And update our call:" }, { "code": null, "e": 13331, "s": 13278, "text": "daysContagious, lockdownDay, maskDay = initiateSim()" }, { "code": null, "e": 13491, "s": 13331, "text": "Finally, we’ll edit the section where we cycle through the days so that if the day reaches maskDay, then we tell every person to run their wearMask() function:" }, { "code": null, "e": 13565, "s": 13491, "text": "if x == maskDay: for person in peopleDictionary: person.wearMask()" }, { "code": null, "e": 13611, "s": 13565, "text": "If only it was this easy in real life, right?" }, { "code": null, "e": 13841, "s": 13611, "text": "Well what do you know, we’ve created a simple pandemic simulation with the ability to simulate each individual person, change attributes of the virus, enforce lockdowns, and make people wear face masks. Let’s look at our results:" }, { "code": null, "e": 13911, "s": 13841, "text": "I’m putting all the data gathered from my text save files into Excel." }, { "code": null, "e": 14008, "s": 13911, "text": "5000 people, 1 starting infecter, 1% starting immunity, 7 days contagious, no lockdown or masks:" }, { "code": null, "e": 14208, "s": 14008, "text": "As expected, a nice smooth curve — almost mathematically perfect. By the end of the simulation, every has gained immunity and the cases drop to a 0, which continues until all the days have completed." }, { "code": null, "e": 14299, "s": 14208, "text": "Now let’s see what happens to the previous result when you implement some countermeasures:" }, { "code": null, "e": 14731, "s": 14299, "text": "Now what we have here is really interesting. Take the blue line. This is the simulation without any countermeasures, just like our previous result. However, when we implement a lockdown on day 15, it has a huge effect on the orange line; the spread of the disease is curbed before it can really take off, and look at that gradual curve back down again — that’s where there are no new cases and people are gradually becoming immune!" }, { "code": null, "e": 14944, "s": 14731, "text": "We can then compare that to the gray line, where we implement lockdown just 5 days later than orange. 
It has a drastically lower effect because that five-day delay really made a difference to the number of cases." }, { "code": null, "e": 15552, "s": 14944, "text": "Finally, take a look at the yellow line. This is where we implement face masks, and it’s probably the most interesting simulation of all. You can see at day 15, there is a sudden change in the gradient of the line which affects how fast the disease spreads. It probably would have increased much more rapidly without the face masks! About day 21, there is a peak, and thanks to the masks, it is substantially less than the blue line, where there were no countermeasures! There is also a tiny secondary peak, and the overall summit of the curve lasts longer than any other simulation. Can you figure out why?" }, { "code": null, "e": 15883, "s": 15552, "text": "Just to clarify, this was supposed to be a simple simulation. It is, of course, very basic with very limited parameters and functionality. However, it is incredible to see how much we can learn from a simulation that takes up barely a hundred lines of code. It really puts into perspective the impact lockdowns and face masks had." }, { "code": null, "e": 16011, "s": 15883, "text": "I encourage anyone reading this with a programming mindset to go out and improve my code. I’d recommend the following features:" }, { "code": null, "e": 16065, "s": 16011, "text": "Face masks randomly (Gaussian?) affect contagiousness" }, { "code": null, "e": 16202, "s": 16065, "text": "Not everyone obeys lockdown, and even for those who do, there is a chance of an infection happening, say, during a grocery shopping trip" }, { "code": null, "e": 16288, "s": 16202, "text": "A certain percentage of people wear face masks, and this varies on a day to day basis" }, { "code": null, "e": 16336, "s": 16288, "text": "More social dynamics, or parameters in general." }, { "code": null, "e": 16361, "s": 16336, "text": "The idea of communities." }, { "code": null, "e": 16517, "s": 16361, "text": "If anyone does take on the challenge of upgrading this code, I’d love to see what results you get from playing around with the factors. Thanks for reading!" 
}, { "code": null, "e": 16528, "s": 16517, "text": "Full code:" }, { "code": null, "e": 19762, "s": 16528, "text": "from scipy.stats import normimport randomimport timepeopleDictionary = []#simulation of a single personclass Person(): def __init__(self, startingImmunity): if random.randint(0,100)<startingImmunity: self.immunity = True else: self.immunity = False self.contagiousness = 0 self.mask = False self.contagiousDays = 0 #use gaussian distribution for number of friends; average is 5 friends self.friends = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)) def wearMask(self): self.contagiousness /= 2 def initiateSim(): numPeople = int(input(\"Population: \")) startingImmunity = int(input(\"Percentage of people with natural immunity: \")) startingInfecters = int(input(\"How many people will be infectious at t=0: \")) for x in range(0,numPeople): peopleDictionary.append(Person(startingImmunity)) for x in range(0,startingInfecters): peopleDictionary[random.randint(0,len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10) daysContagious = int(input(\"How many days contagious: \")) lockdownDay = int(input(\"Day for lockdown to be enforced: \")) maskDay = int(input(\"Day for masks to be used: \")) return daysContagious, lockdownDay, maskDaydef runDay(daysContagious, lockdown): #this section simulates the spread, so it only operates on contagious people, thus: for person in [person for person in peopleDictionary if person.contagiousness>0 and person.friends>0]: peopleCouldMeetToday = int(person.friends/2) if peopleCouldMeetToday > 0: peopleMetToday = random.randint(0,peopleCouldMeetToday) else: peopleMetToday = 0 if lockdown == True: peopleMetToday= 0 for x in range(0,peopleMetToday): friendInQuestion = peopleDictionary[random.randint(0,len(peopleDictionary)-1)] if random.randint(0,100)<person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity==False: friendInQuestion.contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10) print(peopleDictionary.index(person), \" >>> \", peopleDictionary.index(friendInQuestion)) for person in [person for person in peopleDictionary if person.contagiousness>0]: person.contagiousDays += 1 if person.contagiousDays > daysContagious: person.immunity = True person.contagiousness = 0 print(\"|||\", peopleDictionary.index(person), \" |||\") lockdown = FalsedaysContagious, lockdownDay, maskDay = initiateSim()saveFile = open(\"pandemicsave3.txt\", \"a\")for x in range(0,100): if x==lockdownDay: lockdown = True if x == maskDay: for person in peopleDictionary: person.wearMask() print(\"DAY \", x) runDay(daysContagious,lockdown) write = str(len([person for person in peopleDictionary if person.contagiousness>0])) + \"\\n\" saveFile.write(write) print(len([person for person in peopleDictionary if person.contagiousness>0]), \" people are contagious on this day.\")saveFile.close()" }, { "code": null, "e": 19973, "s": 19762, "text": "I hope you found this entertaining and possibly inspiring! There are so many ways that you can improve this model, so I encourage you to see what you can build and see if you can simulate real-life even closer." }, { "code": null, "e": 20023, "s": 19973, "text": "As always, I wish you the best in your endeavors!" }, { "code": null, "e": 20088, "s": 20023, "text": "Not sure what to read next? 
I’ve picked another article for you:" }, { "code": null, "e": 20111, "s": 20088, "text": "towardsdatascience.com" }, { "code": null, "e": 20161, "s": 20111, "text": "If you enjoyed this, follow me on Medium for more" }, { "code": null, "e": 20193, "s": 20161, "text": "Sign up for my email list here!" }, { "code": null, "e": 20219, "s": 20193, "text": "Let’s connect on LinkedIn" }, { "code": null, "e": 20270, "s": 20219, "text": "Interested in collaborating? Check out my website." } ]
SQL | ALTER (ADD, DROP, MODIFY) - GeeksforGeeks
21 Mar, 2018

ALTER TABLE is used to add, delete/drop, or modify columns in an existing table. It is also used to add and drop various constraints on an existing table.

ALTER TABLE – ADD

ADD is used to add new columns to an existing table. Sometimes we need to store additional information; in that case, we do not need to recreate the whole table, because ADD comes to our rescue.

Syntax:

ALTER TABLE table_name
ADD (Columnname_1 datatype,
     Columnname_2 datatype,
     ...
     Columnname_n datatype);

ALTER TABLE – DROP

DROP COLUMN is used to delete a column from a table, removing unwanted columns.

Syntax:

ALTER TABLE table_name
DROP COLUMN column_name;

ALTER TABLE – MODIFY

It is used to modify existing columns in a table. Multiple columns can also be modified at once. (*Syntax may vary slightly in different databases.)

Syntax (Oracle, MySQL, MariaDB):

ALTER TABLE table_name
MODIFY column_name column_type;

Syntax (SQL Server):

ALTER TABLE table_name
ALTER COLUMN column_name column_type;

Queries

Sample Table: Student

QUERY: To ADD 2 columns, AGE and COURSE, to the table Student:

ALTER TABLE Student ADD (AGE number(3), COURSE varchar(40));

OUTPUT: (the Student table now contains the new AGE and COURSE columns)

To MODIFY column COURSE in the table Student:

ALTER TABLE Student MODIFY COURSE varchar(20);

After running the above query, the maximum size of the COURSE column is reduced from 40 to 20.

To DROP column COURSE in the table Student:

ALTER TABLE Student DROP COLUMN COURSE;

OUTPUT: (the COURSE column is removed from the Student table)
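ALTER TABLE can also add and drop constraints, as mentioned at the start, although no example is shown above. As a sketch, the statements below add and then remove a hypothetical CHECK constraint named chk_age on the same Student table; note that constraint support and syntax vary by database (MySQL, for example, also accepts DROP CHECK, and versions before 8.0.16 parse but ignore CHECK):

ALTER TABLE Student ADD CONSTRAINT chk_age CHECK (AGE < 150);

ALTER TABLE Student DROP CONSTRAINT chk_age;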
[ { "code": null, "e": 25209, "s": 25181, "text": "\n21 Mar, 2018" }, { "code": null, "e": 25365, "s": 25209, "text": "ALTER TABLE is used to add, delete/drop or modify columns in the existing table. It is also used to add and drop various constraints on the existing table." }, { "code": null, "e": 25383, "s": 25365, "text": "ALTER TABLE – ADD" }, { "code": null, "e": 25583, "s": 25383, "text": "ADD is used to add columns into the existing table. Sometimes we may require to add additional information, in that case we do not require to create the whole database again, ADD comes to our rescue." }, { "code": null, "e": 25591, "s": 25583, "text": "Syntax:" }, { "code": null, "e": 25754, "s": 25591, "text": " ALTER TABLE table_name\n ADD (Columnname_1 datatype,\n Columnname_2 datatype,\n ...\n Columnname_n datatype);\n" }, { "code": null, "e": 25773, "s": 25754, "text": "ALTER TABLE – DROP" }, { "code": null, "e": 25866, "s": 25773, "text": "DROP COLUMN is used to drop column in a table. Deleting the unwanted columns from the table." }, { "code": null, "e": 25874, "s": 25866, "text": "Syntax:" }, { "code": null, "e": 25923, "s": 25874, "text": "ALTER TABLE table_name\nDROP COLUMN column_name;\n" }, { "code": null, "e": 25942, "s": 25923, "text": "ALTER TABLE-MODIFY" }, { "code": null, "e": 26092, "s": 25942, "text": "It is used to modify the existing columns in a table. Multiple columns can also be modified at once.*Syntax may vary slightly in different databases." }, { "code": null, "e": 26122, "s": 26092, "text": "Syntax(Oracle,MySQL,MariaDB):" }, { "code": null, "e": 26179, "s": 26122, "text": " ALTER TABLE table_name\nMODIFY column_name column_type;\n" }, { "code": null, "e": 26199, "s": 26179, "text": "Syntax(SQL Server):" }, { "code": null, "e": 26263, "s": 26199, "text": " ALTER TABLE table_name\nALTER COLUMN column_name column_type;\n\n" }, { "code": null, "e": 26271, "s": 26263, "text": "Queries" }, { "code": null, "e": 26285, "s": 26271, "text": "Sample Table:" }, { "code": null, "e": 26293, "s": 26285, "text": "Student" }, { "code": null, "e": 26300, "s": 26293, "text": "QUERY:" }, { "code": null, "e": 26350, "s": 26300, "text": "To ADD 2 columns AGE and COURSE to table Student." }, { "code": null, "e": 26411, "s": 26350, "text": " ALTER TABLE Student ADD (AGE number(3),COURSE varchar(40));" }, { "code": null, "e": 26419, "s": 26411, "text": "OUTPUT:" }, { "code": null, "e": 26457, "s": 26419, "text": "MODIFY column COURSE in table Student" }, { "code": null, "e": 26506, "s": 26457, "text": " ALTER TABLE Student MODIFY COURSE varchar(20);\n" }, { "code": null, "e": 26592, "s": 26506, "text": "After running the above query maximum size of Course Column is reduced to 20 from 40." }, { "code": null, "e": 26629, "s": 26592, "text": "DROP column COURSE in table Student." }, { "code": null, "e": 26670, "s": 26629, "text": " ALTER TABLE Student DROP COLUMN COURSE;" }, { "code": null, "e": 26678, "s": 26670, "text": "OUTPUT:" }, { "code": null, "e": 26983, "s": 26678, "text": "This article is contributed by Shubham Chaudhary. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 27108, "s": 26983, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." 
}, { "code": null, "e": 27125, "s": 27108, "text": "khushboogoyal499" }, { "code": null, "e": 27147, "s": 27125, "text": "SQL-Clauses-Operators" }, { "code": null, "e": 27151, "s": 27147, "text": "SQL" }, { "code": null, "e": 27155, "s": 27151, "text": "SQL" }, { "code": null, "e": 27253, "s": 27155, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27277, "s": 27253, "text": "SQL Interview Questions" }, { "code": null, "e": 27288, "s": 27277, "text": "CTE in SQL" }, { "code": null, "e": 27354, "s": 27288, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 27399, "s": 27354, "text": "Difference between DELETE, DROP and TRUNCATE" }, { "code": null, "e": 27431, "s": 27399, "text": "MySQL | Group_CONCAT() Function" }, { "code": null, "e": 27488, "s": 27431, "text": "How to Create a Table With Multiple Foreign Keys in SQL?" }, { "code": null, "e": 27527, "s": 27488, "text": "Difference between DELETE and TRUNCATE" }, { "code": null, "e": 27542, "s": 27527, "text": "SQL - ORDER BY" }, { "code": null, "e": 27574, "s": 27542, "text": "What is Temporary Table in SQL?" } ]
Check if two enums are equal or not in C# - GeeksforGeeks
28 May, 2019

The Enum.Equals(Object) method is used to check whether the current instance is equal to a specified object. This method overrides ValueType.Equals(Object) to define how enumeration members are evaluated for equality.

Syntax:

public override bool Equals (object obj);

Here, obj is an object to compare with the current instance, or null.

Returns: This method returns true if obj is an enumeration value of the same type and with the same underlying value as the current instance; otherwise, false.

Example:

// C# program to illustrate the
// Enum.Equals(Object) method
using System;

class GFG {

    // taking two enums
    enum Clothes { Jeans, Shirt };
    enum Colors { Blue, Black };

    // Main Method
    public static void Main()
    {
        Clothes cl1 = Clothes.Jeans;
        Clothes cl2 = Clothes.Shirt;
        Colors c1 = Colors.Blue;
        Colors c2 = Colors.Black;
        Colors c3 = Colors.Blue;

        // using the method
        Console.WriteLine(c1.Equals(c3));
        Console.WriteLine(c1.Equals(c2));
        Console.WriteLine(cl1.Equals(cl2));
    }
}

Output:

True
False
False

Reference: https://docs.microsoft.com/en-us/dotnet/api/system.enum.equals?view=netframework-4.8
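A closely related point, beyond the scope of the snippet above: the == operator also compares enums, but it is checked at compile time, whereas Equals accepts any object and simply returns false for a value of a different enum type. A small sketch:

using System;

class GFG2 {

    enum Clothes { Jeans, Shirt };
    enum Colors { Blue, Black };

    public static void Main()
    {
        Colors c1 = Colors.Blue;
        Clothes cl1 = Clothes.Jeans;

        // Equals compares the type first, so values of two different enum
        // types are never equal, even though both underlying values are 0
        Console.WriteLine(c1.Equals(cl1));    // False

        // == is type-safe: mixing enum types does not compile, so this
        // class of mistake is caught before the program ever runs
        // Console.WriteLine(c1 == cl1);      // compile-time error CS0019
        Console.WriteLine(c1 == Colors.Blue); // True
    }
}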
[ { "code": null, "e": 24476, "s": 24448, "text": "\n28 May, 2019" }, { "code": null, "e": 24697, "s": 24476, "text": "Enum.Equals(Object) Method is used to check whether the current instance is equal to a specified object or not. This method overrides ValueType.Equals(Object) to define how enumeration members are evaluated for equality." }, { "code": null, "e": 24705, "s": 24697, "text": "Syntax:" }, { "code": null, "e": 24747, "s": 24705, "text": "public override bool Equals (object obj);" }, { "code": null, "e": 24817, "s": 24747, "text": "Here, obj is an object to compare with the current instance, or null." }, { "code": null, "e": 24972, "s": 24817, "text": "Returns: This method returns true if obj is an enumeration value of the same type and with the same underlying value as current instance otherwise, false." }, { "code": null, "e": 24981, "s": 24972, "text": "Example:" }, { "code": "// C# program to illustrate the// Enum.Equals(Object) Methodusing System; class GFG { // taking two enums enum Clothes { Jeans, Shirt } ; enum Colors { Blue, Black } ; // Main Method public static void Main() { Clothes cl1 = Clothes.Jeans; Clothes cl2 = Clothes.Shirt; Colors c1 = Colors.Blue; Colors c2 = Colors.Black; Colors c3 = Colors.Blue; // using the method Console.WriteLine(c1.Equals(c3)); Console.WriteLine(c1.Equals(c2)); Console.WriteLine(cl1.Equals(cl2)); }}", "e": 25589, "s": 24981, "text": null }, { "code": null, "e": 25607, "s": 25589, "text": "True\nFalse\nFalse\n" }, { "code": null, "e": 25618, "s": 25607, "text": "Reference:" }, { "code": null, "e": 25703, "s": 25618, "text": "https://docs.microsoft.com/en-us/dotnet/api/system.enum.equals?view=netframework-4.8" }, { "code": null, "e": 25717, "s": 25703, "text": "CSharp-method" }, { "code": null, "e": 25720, "s": 25717, "text": "C#" }, { "code": null, "e": 25818, "s": 25720, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25836, "s": 25818, "text": "Destructors in C#" }, { "code": null, "e": 25882, "s": 25836, "text": "Difference between Ref and Out keywords in C#" }, { "code": null, "e": 25897, "s": 25882, "text": "C# | Delegates" }, { "code": null, "e": 25915, "s": 25897, "text": "C# | Constructors" }, { "code": null, "e": 25937, "s": 25915, "text": "C# | Class and Object" }, { "code": null, "e": 25960, "s": 25937, "text": "Extension Method in C#" }, { "code": null, "e": 25991, "s": 25960, "text": "Introduction to .NET Framework" }, { "code": null, "e": 26013, "s": 25991, "text": "C# | Abstract Classes" }, { "code": null, "e": 26029, "s": 26013, "text": "C# | Data Types" } ]
Ionic - Checkbox
Ionic checkbox is almost the same as toggle. These two are styled differently but are used for the same purposes.

When creating a checkbox form, you need to add the checkbox class name to both the label and the input elements. The following example shows two simple checkboxes, one checked and the other not.

<label class="checkbox">
   <input type="checkbox" checked>
</label>

<label class="checkbox">
   <input type="checkbox">
</label>

The above code will produce the following screen −

As we already showed, a list will be used for multiple elements. Now we will use the item-checkbox class for each list item.

<ul class="list">
   <li class="item item-checkbox">
      Checkbox 1
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox">
      Checkbox 2
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox">
      Checkbox 3
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox">
      Checkbox 4
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>
</ul>

The above code will produce the following screen −

When you want to style a checkbox, you need to apply any Ionic color class with the checkbox prefix. Check the following example to see what it looks like. We will use the list of checkboxes for this example.

<ul class="list">
   <li class="item item-checkbox checkbox-light">
      Checkbox 1
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-stable">
      Checkbox 2
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-positive">
      Checkbox 3
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-calm">
      Checkbox 4
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-balanced">
      Checkbox 5
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-energized">
      Checkbox 6
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-assertive">
      Checkbox 7
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-royal">
      Checkbox 8
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>

   <li class="item item-checkbox checkbox-dark">
      Checkbox 9
      <label class="checkbox">
         <input type="checkbox" />
      </label>
   </li>
</ul>

The above code will produce the following screen −
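The classes above only control how a checkbox looks; reading its state works exactly as in plain HTML. As a small sketch (the id newsletter is just an example name, not something Ionic requires), you could log the state of one of these inputs whenever it changes:

<label class="checkbox">
   <input type="checkbox" id="newsletter">
</label>

<script>
   // standard DOM API: fires every time the box is ticked or unticked
   document.getElementById("newsletter").addEventListener("change", function () {
      console.log("Checked: " + this.checked);
   });
</script>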
[ { "code": null, "e": 2577, "s": 2463, "text": "Ionic checkbox is almost the same as toggle. These two are styled differently but are used for the same purposes." }, { "code": null, "e": 2774, "s": 2577, "text": "When creating a checkbox form, you need to add the checkbox class name to both label and the input elements. The following example shows two simple checkboxes, one is checked and the other is not." }, { "code": null, "e": 2905, "s": 2774, "text": "<label class = \"checkbox\">\n <input type = \"checkbox\">\n</label>\n\n<label class = \"checkbox\">\n <input type = \"checkbox\">\n</label>" }, { "code": null, "e": 2956, "s": 2905, "text": "The above code will produce the following screen −" }, { "code": null, "e": 3083, "s": 2956, "text": "As we already showed, the list will be used for multiple elements. Now we will use the item-checkbox class for each list item." }, { "code": null, "e": 3704, "s": 3083, "text": "<ul class = \"list\">\n <li class = \"item item-checkbox\">\n Checkbox 1\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox\">\n Checkbox 2\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox\">\n Checkbox e\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox\">\n Checkbox 4\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n</ul>" }, { "code": null, "e": 3755, "s": 3704, "text": "The above code will produce the following screen −" }, { "code": null, "e": 3963, "s": 3755, "text": "When you want to style a checkbox, you need to apply any Ionic color class with the checkbox prefix. Check the following example to see how it looks like. We will use the list of checkboxes for this example." 
}, { "code": null, "e": 5484, "s": 3963, "text": "<ul class = \"list\">\n <li class = \"item item-checkbox checkbox-light\">\n Checkbox 1\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-stable\">\n Checkbox 2\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-positive\">\n Checkbox 3\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-calm\">\n Checkbox 4\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-balanced\">\n Checkbox 5\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-energized\">\n Checkbox 6\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-assertive\">\n Checkbox 7\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-royal\">\n Checkbox 8\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n\n <li class = \"item item-checkbox checkbox-dark\">\n Checkbox 9\n <label class = \"checkbox\">\n <input type = \"checkbox\" />\n </label>\n </li>\n</ul>" }, { "code": null, "e": 5535, "s": 5484, "text": "The above code will produce the following screen −" }, { "code": null, "e": 5570, "s": 5535, "text": "\n 16 Lectures \n 2.5 hours \n" }, { "code": null, "e": 5587, "s": 5570, "text": " Frahaan Hussain" }, { "code": null, "e": 5624, "s": 5587, "text": "\n 185 Lectures \n 46.5 hours \n" }, { "code": null, "e": 5640, "s": 5624, "text": " Nikhil Agarwal" }, { "code": null, "e": 5647, "s": 5640, "text": " Print" }, { "code": null, "e": 5658, "s": 5647, "text": " Add Notes" } ]
JMS - API in Java EE Applications
The JMS API can be used to create, send, receive, and read messages in applications and has become an integral part of the Java EE platform. Java EE applications use the JMS API within EJB (Enterprise Java Beans) and web containers, which apply the Java EE platform specification to Java EE components.

EJB is an essential part of the J2EE (Java 2 Enterprise Edition) platform and is used to develop and deploy enterprise applications with robustness, high scalability, and high performance in mind. Web containers are used for the execution of web pages and run on web servers like Jetty, Tomcat, etc.

The following points describe the use of the JMS API in a J2EE application −

Administered objects are preconfigured objects in the J2EE application, generated by an administrator for use with JMS clients. There are two types of administered objects, namely destinations and connection factories. A destination is an object at which JMS clients target their messages and from which they receive messages. A connection factory is an object which establishes the connection between a JMS client and the service provider. For more information, refer to the Programming Model chapter.

The @Resource annotation defines the name of the injected bean in Java EE applications, and you can specify the JMS resource as static in an application client component. It can be specified as shown below −

@Resource(lookup = "jms/ConnectionFactory")
private static ConnectionFactory connectionFactory;

@Resource(lookup = "jms/Queue")
private static Queue queue;

In a J2EE application, the JMS API resources comprise a JMS API connection and session. If you are using the JMS API in an enterprise bean instance, create the resource in a @PostConstruct callback method and close the resource in a @PreDestroy callback method.

The JMS API allows developers to create enterprise applications easily and provides synchronous and asynchronous, reliable communication between J2EE components and other applications. Enterprise applications can be developed with new message-driven beans for defining business events, along with existing business events.

A bean method can send and receive messages by using container-managed transactions instead of local transactions, leaving transaction demarcation to the EJB container. JMS operations and database access can occur in a single transaction by sending and receiving the messages within Java Transaction API (JTA) transactions. You don't need to use an annotation to specify container-managed transactions, because they are the default transactions in Java EE applications.

The message-driven bean is a special type of enterprise bean supported by a J2EE application, which processes JMS messages asynchronously, whereas a session bean sends and receives JMS messages synchronously. The messages may be sent by a client application, an enterprise bean, a web component, or by an application that does not use Java EE technology.

The message-driven bean class has the following features −

It uses the javax.jms.MessageListener interface to receive asynchronously delivered messages, and its onMessage method moves each incoming message to the listener.

It creates a connection by using a @PostConstruct callback method and closes the connection by using a @PreDestroy callback method. Generally, the class uses these methods when it also produces messages or receives messages from another destination.

A minimal sketch of such a bean is shown below.
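To make these features concrete, here is a minimal, hypothetical sketch of a message-driven bean. It is an illustration rather than code from this tutorial: the class name, the destination lookup name jms/Queue, and the use of the Java EE 7 activation config property destinationLookup are assumptions made for this example.

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical message-driven bean: the container delivers each message
// from the configured destination to onMessage() asynchronously.
@MessageDriven(activationConfig = {
   @ActivationConfigProperty(propertyName = "destinationLookup",
                             propertyValue = "jms/Queue"),
   @ActivationConfigProperty(propertyName = "destinationType",
                             propertyValue = "javax.jms.Queue")
})
public class SimpleMessageBean implements MessageListener {

   @PostConstruct
   public void init() {
      // Acquire additional resources here, e.g. a connection used to
      // produce messages to another destination.
   }

   @Override
   public void onMessage(Message message) {
      try {
         if (message instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) message).getText());
         }
      } catch (JMSException e) {
         e.printStackTrace();
      }
   }

   @PreDestroy
   public void cleanup() {
      // Close any resources opened in the @PostConstruct callback.
   }
}

The container manages the bean's life cycle and, by default, its transactions, which matches the container-managed transaction behavior described above.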
[ { "code": null, "e": 2121, "s": 1827, "text": "The JMS API can be used to create, send, receive and read message in the applications and has become integral part of Java EE platform. The Java EE applications use JMS API within EJB (Enterprise Java Beans) and web containers, which apply Java EE platform specification to Java EE components." }, { "code": null, "e": 2422, "s": 2121, "text": "EJB is an essential part of a J2EE (Java 2 Enterprise Edition) platform, which develops and deploy the enterprise applications, by considering robustness, high scalability, and high performance. Web containers are used for the execution of web pages, which run on web servers like Jetty, Tomcat etc." }, { "code": null, "e": 2495, "s": 2422, "text": "The following ways describe the use of JMS API in the J2EE application −" }, { "code": null, "e": 2984, "s": 2495, "text": "These are preconfigured objects in the J2EE application, generated by an administrator to use with JMS clients. There are two types of administered objects; namely destination and connection factory. A destination is an object, that target its messages by JMS clients and receives the messages from the destination. The connection factory is an object, which establishes the connection between JMS client and Service provider. For more information, refer to the Programming Model chapter." }, { "code": null, "e": 3169, "s": 2984, "text": "It defines the name of the injected bean in Java EE applications and you can specify the JMS resource as static in an application client component. It can be specified as shown below −" }, { "code": null, "e": 3326, "s": 3169, "text": "@Resource(lookup = \"jms/ConnectionFactory\")\nprivate static ConnectionFactory connectionFactory;\n\n@Resource(lookup = \"jms/Queue\")\nprivate static Queue queue;" }, { "code": null, "e": 3597, "s": 3326, "text": "In the J2EE application, the JMS API resources contain JMS API connection and session. If you are using JMS API for an enterprise bean instance, then create the resource by using @PostConstruct callback method and close the resource by using @PreDestroy callback method." }, { "code": null, "e": 3927, "s": 3597, "text": "The JMS API allows developers to create enterprise applications easily and defines the synchronous and asynchronous, reliable communications between J2EE components and other applications. The enterprise applications can be developed with new message-driven beans for defining business events along with existing business events." }, { "code": null, "e": 4443, "s": 3927, "text": "The bean method can send and receive the message by using the container-managed transactions, instead of using local transactions and manipulates the transaction separation with help of EJB container. Keep the occurrence of JMS operations and database access in a single transaction, by sending and receiving the messages in Java Transaction API (JTA) transactions. You don't need to use an annotation to specify the container-managed transactions, because they are default transactions in the Java EE applications." }, { "code": null, "e": 4792, "s": 4443, "text": "The message-driven bean is special type of enterprise bean supported by J2EE application, which processes the JMS messages asynchronously in the Java EE applications. The session bean sends and receives the JMS messages synchronously. The messages sent from client's application, enterprise bean, or a web component does not use Java EE technology." 
}, { "code": null, "e": 4848, "s": 4792, "text": "The message-driven bean class contains below features −" }, { "code": null, "e": 5006, "s": 4848, "text": "This class uses the javax.jms.MessageListener interface to receive asynchronously delivered messages and onMessage method for moving the message to listener." }, { "code": null, "e": 5164, "s": 5006, "text": "This class uses the javax.jms.MessageListener interface to receive asynchronously delivered messages and onMessage method for moving the message to listener." }, { "code": null, "e": 5408, "s": 5164, "text": "It creates a connection by using @PostConstruct callback method and closes the connection by using @PreDestroy callback method. Generally, this class uses these methods to produce the messages and receive the messages from another destination." }, { "code": null, "e": 5652, "s": 5408, "text": "It creates a connection by using @PostConstruct callback method and closes the connection by using @PreDestroy callback method. Generally, this class uses these methods to produce the messages and receive the messages from another destination." }, { "code": null, "e": 5659, "s": 5652, "text": " Print" }, { "code": null, "e": 5670, "s": 5659, "text": " Add Notes" } ]
Animations of Logistic Regression with Python | by Tobias Roeschl | Towards Data Science
This article is about creating animated plots of simple and multiple logistic regression with batch gradient descent in Python. In the end, I will also present a visual explanation of why the cross-entropy cost function is the method of choice to quantify costs in logistic regression. In terms of structure and content, this article relates to, and partially builds on, previous articles I wrote about creating animations of batch gradient descent with the example of simple linear and multiple linear regression. The general idea is to set up a logistic regression model and train the model on some arbitrary training data while storing parameter values and costs for each epoch. After confirming our results through sklearn's built-in logistic regression model, we will use the stored parameter values to generate animated plots with Python's celluloid module. The animations we use to visualize logistic regression will be similar to the ones we created for previous articles on linear regression.

Logistic regression is a classification algorithm that predicts probabilities of particular outcomes given one or more independent variables. The independent variables can be continuous or categorical. The outcome can be interpreted as membership in one of a discrete set of classes. In this article, we confine the number of classes to two, although it is theoretically possible to generalize logistic regression to multiclass problems with more than two possible outcomes. Probabilities can be calculated with the sigmoid function, which is a special case of the logistic function. For a given single measurement X and n independent predictor variables, the probability of the response variable Y equaling a case ("1") rather than a non-case ("0") can be expressed as:

P(Y = 1 | X) = 1 / (1 + e^(-(w·X + b)))

with our predictor variable X and our weights w given as:

X = (x1, x2, ..., xn) and w = (w1, w2, ..., wn)

and a scalar b representing the bias term (y-intercept). As with linear regression, we try to find the optimal model parameters θ, consisting of our weight(s) w and our bias b, to minimize the costs J of our model. This can be achieved by various optimization algorithms. One of these algorithms is batch gradient descent, where we adjust parameter values proportionally to the negative gradient of our cost function until we reach model convergence. Mathematically, this can be expressed with the following formula:

θ(e+1) = θ(e) - α · ∇J(θ(e))

with ∇J(θ) representing the gradient of our cost function and α representing the learning rate. In this equation, e represents the respective epoch. So far, so good. Logistic regression, however, differs decisively in how we define our cost function. While we technically could apply the mean squared error (MSE) method, like we did with linear regression, this would result in a non-convex cost function in the case of logistic regression. Non-convex cost functions may have multiple local minima. Accordingly, gradient descent is not guaranteed to converge to the global minimum with non-convex cost functions [1]. In order to overcome this issue, the concept of cross-entropy was introduced to quantify costs in logistic regression. For a dichotomous outcome variable y, the costs J can be calculated as follows:

J(θ) = -(1/N) · Σi [ yi · log(pi) + (1 - yi) · log(1 - pi) ]

with the sum running over all samples i = 1, ..., N, and with pi ∈ [0; 1] representing the model's prediction for each of our N samples, on which the model is trained.
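Before deriving the gradients, it may help to see the sigmoid and the cross-entropy cost in code. The following numpy sketch is my own illustration and not code from the article; the function names sigmoid and cross_entropy, as well as the sample numbers, are assumptions made for this example.

import numpy as np

def sigmoid(z):
    # Logistic (sigmoid) function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y, p):
    # Binary cross-entropy cost, averaged over all N training samples;
    # y holds the true labels (0 or 1), p the predicted probabilities
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Tiny usage example with made-up numbers
y = np.array([0, 1, 1])
p = sigmoid(np.array([-2.0, 0.5, 3.0]))  # p = sigmoid(w*x + b) for each sample
print(cross_entropy(y, p))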
Since we aim to compute the gradient of our cost function, we need to determine the partial derivatives of J with respect to a particular weight wj and to b, and we get the following:

∂J/∂wj = (1/N) · Σi (pi - yi) · xij

∂J/∂b = (1/N) · Σi (pi - yi)

In the accompanying PDF, I gave a detailed explanation of how to derive the partial derivatives of the binary cross-entropy loss function with respect to its parameters.

In Python, we can import some libraries and define our model. In order to set up a logistic regression model that is flexible with respect to the number of independent variables, we introduce a weight matrix w with one weight wj for every input variable. I decided to arbitrarily set the initial parameter values to 0 for the weights and to 0.5 for the bias (the full class definition is omitted here; the complete notebook is linked at the end of this article).

As with simple linear regression, we have one predictor variable in simple logistic regression. In Python, we introduce our training data and fit our model to the data:

import numpy as np
from scipy.special import expit  # sigmoid function, used further below

# Introduce training data:
x_train = np.array([
    [-80], [-70], [-50], [-39], [-27], [-15], [-9], [12],
    [25], [36], [52], [65], [78], [90], [99], [110]
])

y_train = np.array([
    [0], [0], [0], [0], [0], [0], [1], [0],
    [1], [0], [1], [0], [1], [1], [1], [1]
])

xs = np.array([np.linspace(-150, 200)])  # x-values later used for the regression curve plot

# Fit model to training data
# (LogisticRegression here is the custom class described above;
# its definition can be found in the linked notebook):
model = LogisticRegression(x_train, y_train, lr=0.0001)  # set up model and define learning rate
model.fit(x_train, y_train, numberOfEpochs=700000)       # set number of epochs

# Store parameter values in new variables:
w = model.AllWeights.T
b = model.AllBiases
c = model.AllCosts
cl = model.All_cl

# Print results:
print("Final weight: " + str(float(model.w)))
print("Final bias: " + str(model.b))
print("Final costs: " + str(model.cost(x_train, y_train)))

Final weight: 0.0358378946387266
Final bias: -1.116903840009302
Final costs: 0.40058199922669563

The learning rate is intentionally set to a particularly small value of α = 0.0001 in order to avoid large steps, especially at the beginning of our animations. In order to compare the results of our simple logistic regression model to those we get with sklearn's model, we display both models' results after their respective fitting processes.

# cross-check results with sklearn's inbuilt logistic regression model
# (note: this import shadows the custom LogisticRegression class defined earlier):
from sklearn.linear_model import LogisticRegression

# - set C (= inverse of regularization strength) to a very high number
# - use np.ravel() to prevent a DataConversionWarning
clf = LogisticRegression(solver="lbfgs", random_state=0, C=1e20).fit(x_train, y_train.ravel())
print(clf.coef_, clf.intercept_)

# calculate the respective costs for sklearn's fitted model parameters
pred = expit(x_train @ clf.coef_.T + clf.intercept_)
print(-np.mean(y_train * np.log(pred) + (1 - y_train) * np.log(1 - pred)))

[[0.03586354]] [-1.11926705]
0.40058175764105697

Since the results of both models are consistent with each other, we can begin to create our first animation: In the upper half of the animation, we can observe how the logistic regression curve is fitted to the training data.
By defining which epochs are being used for the animations, we can smoothen the temporal sequence of the fitting process, which results in more appealing animations. I thought it was useful to draw dashed connection lines between actual data points and those predicted by the model. It is worth mentioning that in logistic regression our goal is not to minimize the (squared) distances represented by these connection lines, since we are using a completely different cost function. I would like to come back to this in more detail later on. In the lower half of the animation, we can see how the costs drop simultaneously after each epoch and finally end up in the global minimum of the surface plot. The surface plot portrays the costs for a given range of respective parameter values, given our training data. In the literature, these surface plots are also referred to as loss landscapes. In Python, we can create loss landscapes by calculating the costs for particular combinations of two model parameters via meshgrids. In the case of simple logistic regression, the model parameters are the weight and the bias term.

Multiple logistic regression analysis applies when there is more than one predictor variable. In the following example, we will fit our model to a training dataset with two independent variables. Since we can only portray costs for two parameters at once in our three-dimensional animations, we have to keep one parameter fixed. Therefore, we define yet another model, this time with a fixed intercept, and also train this new model on the new training data. In the new model, the part of the code where b is updated is removed. The bias term is set to the bias the former multiple logistic regression model converged to. Theoretically, however, we could use any other value for the fixed bias term. The parameter values we obtain during the fitting process are once again stored in arrays.

Like we did before, we return the final model parameters and costs and compare them to the results we get with sklearn's model. In multiple logistic regression, we intend to fit a 3D curve to our training data. For this reason, we need to calculate y-values for yet another meshgrid, this time spanned by x0- and x1-values. Lastly, we can print out the final parameter values and costs portrayed in the animations to ensure that we approximately visualized model convergence, despite substantially restricting the number of epochs used to create the animations (see commented-out code!). Additionally, we can also portray the path of gradient descent via a contour plot.

Finally, let me come back to why we need the cross-entropy (CE) loss function in logistic regression. First and foremost, it can be shown mathematically that the CE cost function is convex with exactly one minimum, which is the global minimum. In contrast, applying the MSE cost function to logistic regression results in a non-convex cost function [2]. In the following, I will use a graphical approach to compare both methods on the example of our training data.

In Python, we introduce a new cost function MSE_cost( ) to quantify costs by the use of MSE. Like we did before with the cross-entropy cost function, we can then create loss landscapes with respect to our training data (x_train2, y_train2) and our weights (w0, w1); a minimal sketch of this computation follows below. I intentionally increased the range of possible values for our weights for the MSE method to w0, w1 ∈ [-5, 5], since this will help to illustrate the differences between both loss landscapes.
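As a concrete illustration of the meshgrid idea described above, here is a minimal sketch of how such a loss landscape could be computed. This is my own sketch, not the article's code; the body of MSE_cost (in particular the fixed bias of 0), the toy data, and the grid resolution are assumptions made for this example.

import numpy as np
from scipy.special import expit  # sigmoid

def MSE_cost(w0, w1, X, y):
    # Mean squared error of a two-weight logistic model; the bias is
    # fixed to 0 here purely for illustration
    p = expit(X @ np.array([w0, w1]))
    return np.mean((p - y) ** 2)

# Hypothetical training data with two features
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, 0.5]])
y = np.array([1, 1, 0, 0])

# Evaluate the cost for every combination of weights on a grid
w0_vals = np.linspace(-5, 5, 100)
w1_vals = np.linspace(-5, 5, 100)
W0, W1 = np.meshgrid(w0_vals, w1_vals)
Z = np.array([[MSE_cost(w0, w1, X, y) for w0 in w0_vals] for w1 in w1_vals])

# W0, W1 and Z can now be passed to matplotlib's plot_surface or contour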
Furthermore, numerical instability is not as much of a concern with MSE as it is with CE [3]. Obviously, the loss landscape on the right now looks "bumpier" compared to the smooth and convex loss landscape of the cross-entropy cost function. For a more detailed view, we can try to visualize the MSE loss landscape with a contour plot. First, we can recognize that there is a minimum ("x"), located at roughly the same point where the CE loss landscape had its global minimum (0.089, 0.19). I will refer to this point, marked by "x", as the 'global' minimum of the MSE loss landscape. I put 'global' in quotation marks because we lack the mathematical proof that this point actually is the global minimum of the MSE loss landscape. Without further investigation, we could also assume that there is a local minimum in close proximity to the asterisk. With starting values for our weights within this area, gradient descent might get stuck and not converge to the 'global' minimum ("x") in the middle of the contour plot. The following animation once more illustrates the full extent of non-convexity of the MSE cost function above.

I will address gradient descent with the example of non-convex cost functions in more detail in my next article about neural networks. I hope you found this article helpful. Should any questions arise or if you noticed any mistakes, feel free to leave a comment. The complete notebook can be found on my GitHub. Thank you for reading!
[ { "code": null, "e": 1176, "s": 172, "text": "This article is about creating animated plots of simple and multiple logistic regression with batch gradient descent in Python. In the end, I will also present a visual explanation of why the cross-entropy cost function is the method of choice to quantify costs in logistic regression. In terms of structure and content, this article relates to — and partially builds on — previous articles I wrote about creating animations of batch gradient descent with the example of simple linear and multiple linear regression. The general idea is to set up a logistic regression model and train the model on some arbitrary training data while storing parameter values and costs for each epoch. After confirming our results through sklearn’s built-in logistic regression model, we will use the stored parameter values to generate animated plots with Python’s celluloid module. The animations we use to visualize logistic regression will be similar to the ones we created for previous articles on linear regression." }, { "code": null, "e": 1961, "s": 1176, "text": "Logistic regression is a classification algorithm that predicts probabilities of particular outcomes given one or more independent variables. The independent variable can be continuous or categorical. The outcome can be interpreted as taking membership in one of a discrete set of classes. In this article, we confine the number of classes to two classes although it is theoretically possible to generalize logistic regression to multiclass problems with more than two possible outcomes. Probabilities can be calculated with the sigmoid function, which is a special case of the logistic function. For a given, single measurement X and n independent predictor variables, the probability of the response variable Y equaling a case (“1”) rather than a non-case (“0”) can be expressed as:" }, { "code": null, "e": 2019, "s": 1961, "text": "with our predictor variable X and our weights w given as:" }, { "code": null, "e": 2534, "s": 2019, "text": "and a scalar b representing the bias term (y-intercept). As with linear regression, we try to find the optimal model parameters θ, consisting of our weight(s) w and our bias b, to minimize the costs J of our model. This can be achieved by various optimization algorithms. One of these algorithms is batch gradient descent, where we adjust parameter values proportional to the negative gradient of our cost function until we reach model convergence. Mathematically, this can be expressed with the following formula:" }, { "code": null, "e": 3349, "s": 2534, "text": "with ∇J(θ) representing the gradient of our cost function and α representing the learning rate. In this equation, e is representing the respective epoch. So far, so good. Logistic regression, however, differs decisively how we define our cost function. While we technically could apply the mean squared error (MSE)-method, like we did with linear regression, this would result in a non-convex cost function in the case of logistic regression. Non-convex cost functions may have multiple local minima. Accordingly, gradient descent is not guaranteed to converge to the global minimum with non-convex cost functions.1 In order to overcome this issue, the concept of cross-entropy was introduced to quantify costs in logistic regression. 
For a dichotomous outcome variable y, the costs J can be calculated as follows:" }, { "code": null, "e": 3635, "s": 3349, "text": "with pi∈[ 0;1] representing the model’s prediction for each of our N samples, on which the model is trained. Since we aim to compute the gradient of our cost function, we need to determine the partial derivatives of J with respect to a particular weight wj and b and get the following:" }, { "code": null, "e": 3803, "s": 3635, "text": "In the following PDF, I gave a detailed explanation of how to derive the partial derivatives of the binary cross-entropy loss functions with respect to its parameters." }, { "code": null, "e": 3856, "s": 3803, "text": "Activate your 30 day free trial to continue reading." }, { "code": null, "e": 3869, "s": 3856, "text": "\n\nFacebook\n\n" }, { "code": null, "e": 3881, "s": 3869, "text": "\n\nTwitter\n\n" }, { "code": null, "e": 3894, "s": 3881, "text": "\n\nLinkedIn\n\n" }, { "code": null, "e": 3900, "s": 3894, "text": "Share" }, { "code": null, "e": 3906, "s": 3900, "text": "Email" }, { "code": null, "e": 3981, "s": 3916, "text": "\n\n\n\nWhat to Upload to SlideShare\nby SlideShare\n13906958 views\n\n\n" }, { "code": null, "e": 4054, "s": 3981, "text": "\n\n\n\nBe A Great Product Leader (Amplify,...\nby Adam Nash\n2029108 views\n\n\n" }, { "code": null, "e": 4130, "s": 4054, "text": "\n\n\n\nTrillion Dollar Coach Book (Bill Ca...\nby Eric Schmidt\n1901745 views\n\n\n" }, { "code": null, "e": 4201, "s": 4130, "text": "\n\n\n\nAPIdays Paris 2019 - Innovation @ s...\nby apidays\n2587128 views\n\n\n" }, { "code": null, "e": 4280, "s": 4201, "text": "\n\n\n\nA few thoughts on work life-balance\nby Wim Vanderbauwhede\n1792952 views\n\n\n" }, { "code": null, "e": 4344, "s": 4280, "text": "\n\n\n\nIs vc still a thing final\nby Mark Suster\n1606573 views\n\n\n" }, { "code": null, "e": 4699, "s": 4354, "text": "In Python, we can import some libraries and define our model. In order to set up a logistic regression model, which is flexible to the number of independent variables, we introduce a weight matrix w with one weight wj for every input variable. I decided to arbitrarily set the initial parameter values for the weights to 0 and 0.5 for the bias:" }, { "code": null, "e": 4876, "s": 4699, "text": "Equivalently to simple linear regression, we have one predictor variable in simple logistic regression. 
In Python, we introduce our training data and fit our model to the data:" }, { "code": null, "e": 5831, "s": 4876, "text": "# Introduce training data: \nx_train = np.array([\n [-80],\n [-70],\n [-50],\n [-39],\n [-27],\n [-15],\n [-9],\n [12],\n [25],\n [36],\n [52],\n [65],\n [78],\n [90],\n [99],\n [110]\n])\n\ny_train = np.array([\n [0],\n [0],\n [0],\n [0],\n [0],\n [0],\n [1],\n [0],\n [1],\n [0],\n [1],\n [0],\n [1],\n [1],\n [1],\n [1]\n])\n\nxs=np.array([np.linspace(-150,200)]) # x-values later used for regression curve plot\n\n# Fit model to training data: \nmodel=LogisticRegression(x_train,y_train, lr=0.0001) # set up model and define learning rate\nmodel.fit(x_train,y_train, numberOfEpochs=700000) # set number of epochs\n\n# Store parameter values in new variables: \nw=model.AllWeights.T\nb= model.AllBiases\nc=model.AllCosts\ncl=model.All_cl\n\n# Print results: \nprint(\"Final weight: \"+ str(np.float(model.w))) \nprint(\"Final bias: \"+ str(model.b))\nprint(\"Final costs: \" + str(model.cost(x_train,y_train)))\n" }, { "code": null, "e": 5929, "s": 5831, "text": "Final weight: 0.0358378946387266\nFinal bias: -1.116903840009302\nFinal costs: 0.40058199922669563\n" }, { "code": null, "e": 6269, "s": 5929, "text": "The learning rate is intentionally set to a particularly small value of α=0.0001 in order to avoid large steps especially at the beginning of our animations. In order to compare the results of our simple logistic regression model to those we get with sklearn’s model, we display both models’ results after their respective fitting process." }, { "code": null, "e": 6844, "s": 6269, "text": "# cross-check results with sklearn's inbuilt logistic regression model: \nfrom sklearn.linear_model import LogisticRegression\n# - set C (= Inverse of regularization strength) to a very high number\n# - use np.ravel() to prevent DataConversionWarning\nclf = LogisticRegression(solver=\"lbfgs\", random_state=0, C = 1e20).fit(x_train, y_train.ravel())\nprint(clf.coef_, clf.intercept_)\n\npred=expit(x_train @ clf.coef_.T + clf.intercept_) # calculate respective costs ...\n#... for sklearn's fitted model parameters\nprint(- np.mean(y_train*np.log(pred) + (1-y_train)*np.log(1-pred)))\n" }, { "code": null, "e": 6894, "s": 6844, "text": "[[0.03586354]] [-1.11926705]\n0.40058175764105697\n" }, { "code": null, "e": 7003, "s": 6894, "text": "Since the results of both models are consistent with each other, we can begin to create our first animation:" }, { "code": null, "e": 8242, "s": 7003, "text": "In the upper half of the animation, we can observe how the logistic regression curve is fitted to the training data. By defining which epochs are being used for the animations, we can smoothen the temporal sequence of the fitting process which results in more appealing animations. I thought it was useful to draw dashed connection lines between actual data points and those predicted by the model. It is worth mentioning that in logistic regression our goal is not to minimize the (squared) distances represented by these connection lines since we are using a completely different cost function. I would like to come back to this in more detail later on. In the lower half of the animation, we can see how costs drop simultaneously after each epoch and finally end up in the global minimum of the surface plot. The surface plot portrays the costs for a given range of respective parameter values given our training data. In the literature, these surface plots are also being referred to as loss landscapes. 
In Python, we can create loss landscapes by calculating the costs for particular combinations of two model parameters via meshgrids. In the case of simple logistic regression, the model parameters are the weight and the bias term." }, { "code": null, "e": 9035, "s": 8242, "text": "Multiple logistic regression analysis applies when there is more than one predictor variable. In the following example, we will fit our model to a training dataset with two independent variables. Since we can only portray costs for two parameters at once in our three-dimensional animations, we have to keep one parameter fixed. Therefore, we define yet another model — this time with a fixed intercept— and also train this new model on the new training data. In the new model, the part of code where b is being updated is removed. The bias term is set to the bias the former multiple logistic regression model converged to. Theoretically however, we could use any other value for the fixed bias term. The parameter values we obtain during the fitting process are once again stored in arrays." }, { "code": null, "e": 9161, "s": 9035, "text": "Like we did before, we return the final model parameters and costs and compare it to the results we get with sklearn’s model:" }, { "code": null, "e": 9619, "s": 9161, "text": "In multiple logistic regression, we intend to fit a 3D curve to our training data. For this reason, we need to calculate y-values for yet another meshgrid, this time spanned by x0- and x1-values. Lastly, we can give out the final parameter values and costs portrayed in the animations to ensure that we approximately visualized model convergence despite substantially restricting the number of epochs used to create the animations (see commented-out code!)." }, { "code": null, "e": 9702, "s": 9619, "text": "Additionally, we can also portray the path of gradient descent via a contour plot." }, { "code": null, "e": 10164, "s": 9702, "text": "Finally, let me come back to why we need the cross-entropy (CE) loss function in logistic regression. First and foremost, it can be shown mathematically that the CE-cost function is convex with exactly one minimum, which is the global minimum. In contrast, applying the MSE-cost function on logistic regression results in a non-convex cost function.2 In the following, I will use a graphical approach to compare both methods on the example of our training data." }, { "code": null, "e": 10708, "s": 10164, "text": "In Python, we introduce a new cost-function “MSE_cost( )” to quantify costs by the use of MSE. Like we did before with the cross entropy-cost function we can then create loss landscapes with respect to our training data (x_train2, y_train2) and our weights (w0,w1). I intentionally increased the range of possible values for our weights regarding the MSE-method to w0,w1 ∈ [-5,5] since this will help to illustrate the differences of both loss landscapes. Furthermore, numerical instability is not much of a concern with MSE as it is with CE.3" }, { "code": null, "e": 10955, "s": 10708, "text": "Obviously, the loss landscape on the right is looking “bumpier” now compared to the smooth and convex loss landscape of the cross-entropy cost function. For a more detailed view, we can try to visualize the MSE-loss landscape with a contour plot:" }, { "code": null, "e": 11744, "s": 10955, "text": "First, we can recognize that there is a minimum (“x”), located at roughly the same point where the CE-loss landscape had its global minimum (0.089,0.19). 
I will refer to this point, marked by “x” as the ‘global’ minimum of the MSE-loss landscape. I put ‘global’ in parenthesis because we lack the mathematical proof that this point actually is the global minimum of the MSE-loss landscape. Without further investigation, we could also assume that there is a local minimum in close proximity to the asterisk. With starting values for our weights within this area, gradient descent might get stuck and not converge to the ‘global’ minimum (“x”) in the middle of the contour plot. The following animation once more illustrates the full extent of non-convexity of the MSE-cost function above:" }, { "code": null, "e": 11879, "s": 11744, "text": "I will address gradient descent with the example of non-convex cost functions in more detail in my next article about neural networks." } ]
Watir - Working with Browsers
By default, Watir will open the Chrome browser in case the browser name is not specified. The required browser drivers are installed along with the Watir installation. In case you face any issues working with browsers, install the driver as shown in the Browsers Drivers chapter and update the location in the PATH variable.

In this chapter, we will understand how to open the browser using Watir.

Open the IDE RubyMine and create a new file: test1.rb

Select OK and choose the file pattern as Ruby, as shown below −

Click on OK to create the file.

Now we will write a simple code that will open the browser, as shown below −

require 'watir'
Watir::Browser.new

Click on the Run button that is highlighted in the IDE as shown above. On clicking Run, it will open the browser as shown below −

The browser will open and close automatically. Let us now add some more code to test1.rb.

We can specify the name of the browser as shown below −

require 'watir'
Watir::Browser.new :chrome

Now let us open a page URL in our test case.

require 'watir'
browser = Watir::Browser.new
browser.goto("https://www.google.com")

Click on Run to see the output as shown below −

Similarly, you can open the Firefox, Safari, and Internet Explorer browsers.

require 'watir'
Watir::Browser.new :firefox

Watir code for Internet Explorer −

require 'watir'
browser = Watir::Browser.new :ie
browser.goto("https://www.google.com")

When we run the code, the following error is displayed −

Unable to find IEDriverServer. Please download the server from
(Selenium::WebDriver::Error::WebDriverError)
http://selenium-release.storage.googleapis.com/index.html and place it
somewhere on your PATH.
More info at
https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver.

This means that the Watir package does not ship with the Internet Explorer driver. We have downloaded it from https://docs.seleniumhq.org/download/ and updated its location in the PATH variable. Now run it again to see the Internet Explorer browser opening as shown below −

To open the Safari browser −

require 'watir'
browser = Watir::Browser.new :safari
browser.goto("https://www.google.com")

To open the Microsoft Edge browser −

require 'watir'
browser = Watir::Browser.new :edge
browser.goto("https://www.google.com")
[ { "code": null, "e": 2333, "s": 2020, "text": "By default, Watir will open chrome browser in-case the browser name is not specified. The required browser drivers are installed along with Watir installation. In case you face any issues working with browsers, install the driver as shown in the Browsers drivers chapter and update the location in PATH variable." }, { "code": null, "e": 2406, "s": 2333, "text": "In this chapter, we will understand how to open the browser using Watir." }, { "code": null, "e": 2460, "s": 2406, "text": "Open the IDE RubyMine and create a new file: test1.rb" }, { "code": null, "e": 2522, "s": 2460, "text": "Select OK and click the file pattern as ruby as shown below −" }, { "code": null, "e": 2554, "s": 2522, "text": "Click on OK to create the file." }, { "code": null, "e": 2630, "s": 2554, "text": "Now we will write a simple code that will open the browser as shown below −" }, { "code": null, "e": 2665, "s": 2630, "text": "require 'watir'\nWatir::Browser.new" }, { "code": null, "e": 2791, "s": 2665, "text": "Click on the Run button that is highlighted in the IDE as shown above. On-click of Run, it will open browser as shown below −" }, { "code": null, "e": 2885, "s": 2791, "text": "The browser will open and close automatically. Let us now add some more code to the test1.rb." }, { "code": null, "e": 2941, "s": 2885, "text": "We can specify the name of the browser as shown below −" }, { "code": null, "e": 2984, "s": 2941, "text": "require 'watir'\nWatir::Browser.new :chrome" }, { "code": null, "e": 3029, "s": 2984, "text": "Now let us open a page-url in our test case." }, { "code": null, "e": 3113, "s": 3029, "text": "require 'watir'\nbrowser = Watir::Browser.new\nbrowser.goto(\"https://www.google.com\")" }, { "code": null, "e": 3161, "s": 3113, "text": "Click on Run to see the output as shown below −" }, { "code": null, "e": 3229, "s": 3161, "text": "Similarly, you can open firefox, safari, Internet explorer browser." }, { "code": null, "e": 3273, "s": 3229, "text": "require 'watir'\nWatir::Browser.new :firefox" }, { "code": null, "e": 3284, "s": 3273, "text": "Watir Code" }, { "code": null, "e": 3372, "s": 3284, "text": "require 'watir'\nbrowser = Watir::Browser.new :ie\nbrowser.goto(\"https://www.google.com\")" }, { "code": null, "e": 3424, "s": 3372, "text": "When we run the code following error is displayed −" }, { "code": null, "e": 3711, "s": 3424, "text": "Unable to find IEDriverServer. Please download the server from\n(Selenium::WebDriver::Error::WebDriverError)\n\nhttp://selenium-release.storage.googleapis.com/index.html and place it\nsomewhere on your PATH.\n\nMore info at\nhttps://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver.\n" }, { "code": null, "e": 3888, "s": 3711, "text": "This means that watir package does not have InternetExplorer Driver. We have downloaded the same from here − https://docs.seleniumhq.org/download/ and updated in PATH variable." }, { "code": null, "e": 3967, "s": 3888, "text": "Now run it again to see the Internet Explorer browser opening as shown below −" }, { "code": null, "e": 4059, "s": 3967, "text": "require 'watir'\nbrowser = Watir::Browser.new :safari\nbrowser.goto(\"https://www.google.com\")" }, { "code": null, "e": 4149, "s": 4059, "text": "require 'watir'\nbrowser = Watir::Browser.new :edge\nbrowser.goto(\"https://www.google.com\")" }, { "code": null, "e": 4156, "s": 4149, "text": " Print" }, { "code": null, "e": 4167, "s": 4156, "text": " Add Notes" } ]
Minimum Enclosing Circle | Set 1 - GeeksforGeeks
17 Sep, 2021

Prerequisites: Equation of circle when three points on the circle are given, Convex Hull

Given an array arr[][] containing N points in a 2-D plane with integer coordinates, the task is to find the centre and the radius of the minimum enclosing circle (MEC). A minimum enclosing circle is a circle in which all the points lie either inside the circle or on its boundaries.

Examples:

Input: arr[][] = {{0, 0}, {0, 1}, {1, 0}}
Output: Center = {0.5, 0.5}, Radius = 0.7071
Explanation: On plotting the above circle with radius 0.707 and center (0.5, 0.5), it can be observed clearly that all the mentioned points lie either inside or on the circle.

Input: arr[][] = {{5, -2}, {-3, -2}, {-2, 5}, {1, 6}, {0, 2}}
Output: Center = {1.0, 1.0}, Radius = 5.000

Naive Approach: This problem can be solved by making a few observations.

The first observation is that the MEC intersects at least one point. That's because if the MEC did not intersect any point, the circle could be shrunk further until it intersects one of the points.

The second observation is that, given a circle that encloses all the points and intersects a single point, the circle can be shrunk further by moving the centre towards that point, while keeping the point on the circle boundary, until the circle intersects one or more additional points.

If the circle intersects two points (A and B) and the distance AB is equal to the circle diameter, then the circle cannot be shrunk anymore. Else, the centre of the circle can be moved towards the midpoint of AB until the circle intersects a third point (at which point the circle cannot be shrunk anymore).

From the above observations, it can be concluded that the MEC either:

Intersects 2 points A and B, where AB = circle diameter. In this case, the circle's centre would be the midpoint of A and B and the radius would be half of the distance AB.

Intersects 3 or more points. The approach to find the center and radius has been discussed in this article.

Thus, the solution to this problem is trivial for N <= 3. For other cases, a simple idea can be formed to solve this problem. The idea is to use all pairs and triples of points to obtain the circles defined by those points. After obtaining each circle, test whether the other points are enclosed by that circle, and return the smallest valid circle found.

Below is the implementation of the above approach:

C++

// C++ program to find the minimum enclosing
// circle for N integer points in a 2-D plane
#include <iostream>
#include <math.h>
#include <vector>
using namespace std;

// Defining infinity
const double INF = 1e18;

// Structure to represent a 2D point
struct Point {
    double X, Y;
};

// Structure to represent a 2D circle
struct Circle {
    Point C;
    double R;
};

// Function to return the euclidean distance
// between two points
double dist(const Point& a, const Point& b)
{
    return sqrt(pow(a.X - b.X, 2) + pow(a.Y - b.Y, 2));
}

// Function to check whether a point lies inside
// or on the boundaries of the circle
bool is_inside(const Circle& c, const Point& p)
{
    return dist(c.C, p) <= c.R;
}

// The following two functions are used
// to find the equation of the circle when
// three points are given.

// Helper method to get a circle defined by 3 points
Point get_circle_center(double bx, double by, double cx, double cy)
{
    double B = bx * bx + by * by;
    double C = cx * cx + cy * cy;
    double D = bx * cy - by * cx;
    return { (cy * B - by * C) / (2 * D),
             (bx * C - cx * B) / (2 * D) };
}

// Function to return a unique circle that intersects
// three points
Circle circle_from(const Point& A, const Point& B, const Point& C)
{
    Point I = get_circle_center(B.X - A.X, B.Y - A.Y,
                                C.X - A.X, C.Y - A.Y);
    I.X += A.X;
    I.Y += A.Y;
    return { I, dist(I, A) };
}

// Function to return the smallest circle
// that intersects 2 points
Circle circle_from(const Point& A, const Point& B)
{
    // Set the center to be the midpoint of A and B
    Point C = { (A.X + B.X) / 2.0, (A.Y + B.Y) / 2.0 };

    // Set the radius to be half the distance AB
    return { C, dist(A, B) / 2.0 };
}

// Function to check whether a circle encloses the given points
bool is_valid_circle(const Circle& c, const vector<Point>& P)
{
    // Iterating through all the points to check
    // whether the points lie inside the circle or not
    for (const Point& p : P)
        if (!is_inside(c, p))
            return false;
    return true;
}

// Function to find the minimum enclosing
// circle from the given set of points
Circle minimum_enclosing_circle(const vector<Point>& P)
{
    // To find the number of points
    int n = (int)P.size();

    if (n == 0)
        return { { 0, 0 }, 0 };
    if (n == 1)
        return { P[0], 0 };

    // Set initial MEC to have infinity radius
    Circle mec = { { 0, 0 }, INF };

    // Go over all pairs of points
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {

            // Get the smallest circle that
            // intersects P[i] and P[j]
            Circle tmp = circle_from(P[i], P[j]);

            // Update MEC if tmp encloses all points
            // and has a smaller radius
            if (tmp.R < mec.R && is_valid_circle(tmp, P))
                mec = tmp;
        }
    }

    // Go over all triples of points
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            for (int k = j + 1; k < n; k++) {

                // Get the circle that intersects P[i], P[j], P[k]
                Circle tmp = circle_from(P[i], P[j], P[k]);

                // Update MEC if tmp encloses all points
                // and has a smaller radius
                if (tmp.R < mec.R && is_valid_circle(tmp, P))
                    mec = tmp;
            }
        }
    }

    return mec;
}

// Driver code
int main()
{
    Circle mec = minimum_enclosing_circle({ { 0, 0 },
                                            { 0, 1 },
                                            { 1, 0 } });

    cout << "Center = { " << mec.C.X << ", " << mec.C.Y
         << " } Radius = " << mec.R << endl;

    Circle mec2 = minimum_enclosing_circle({ { 5, -2 },
                                             { -3, -2 },
                                             { -2, 5 },
                                             { 1, 6 },
                                             { 0, 2 } });

    cout << "Center = { " << mec2.C.X << ", " << mec2.C.Y
         << " } Radius = " << mec2.R << endl;

    return 0;
}

Python3

# Python3 program to find the minimum enclosing
# circle for N integer points in a 2-D plane
from math import sqrt

# Defining infinity
INF = 10**18

# Function to return the euclidean distance
# between two points
def dist(a, b):
    return sqrt(pow(a[0] - b[0], 2) + pow(a[1] - b[1], 2))

# Function to check whether a point lies inside
# or on the boundaries of the circle
def is_inside(c, p):
    return dist(c[0], p) <= c[1]

# The following two functions are used
# to find the equation of the circle when
# three points are given.

# Helper method to get a circle defined by 3 points
# (uses true division to match the C++ version)
def get_circle_center(bx, by, cx, cy):
    B = bx * bx + by * by
    C = cx * cx + cy * cy
    D = bx * cy - by * cx
    return [(cy * B - by * C) / (2 * D),
            (bx * C - cx * B) / (2 * D)]

# Function to return a unique circle that intersects
# three points (named differently from circle_from
# below to avoid redefining it)
def circle_from_three(A, B, C):
    I = get_circle_center(B[0] - A[0], B[1] - A[1],
                          C[0] - A[0], C[1] - A[1])
    I[0] += A[0]
    I[1] += A[1]
    return [I, dist(I, A)]

# Function to return the smallest circle
# that intersects 2 points
def circle_from(A, B):
    # Set the center to be the midpoint of A and B
    C = [(A[0] + B[0]) / 2.0, (A[1] + B[1]) / 2.0]

    # Set the radius to be half the distance AB
    return [C, dist(A, B) / 2.0]

# Function to check whether a circle encloses the given points
def is_valid_circle(c, P):
    # Iterating through all the points to check
    # whether the points lie inside the circle or not
    for p in P:
        if not is_inside(c, p):
            return False
    return True

# Function to find the minimum enclosing
# circle from the given set of points
def minimum_enclosing_circle(P):
    # To find the number of points
    n = len(P)

    if n == 0:
        return [[0, 0], 0]
    if n == 1:
        return [P[0], 0]

    # Set initial MEC to have infinity radius
    mec = [[0, 0], INF]

    # Go over all pairs of points
    for i in range(n):
        for j in range(i + 1, n):

            # Get the smallest circle that
            # intersects P[i] and P[j]
            tmp = circle_from(P[i], P[j])

            # Update MEC if tmp encloses all points
            # and has a smaller radius
            if tmp[1] < mec[1] and is_valid_circle(tmp, P):
                mec = tmp

    # Go over all triples of points
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):

                # Get the circle that intersects P[i], P[j], P[k]
                tmp = circle_from_three(P[i], P[j], P[k])

                # Update MEC if tmp encloses all points
                # and has a smaller radius
                if tmp[1] < mec[1] and is_valid_circle(tmp, P):
                    mec = tmp

    return mec

# Driver code
mec = minimum_enclosing_circle([[0, 0], [0, 1], [1, 0]])
print("Center = {", mec[0][0], ",", mec[0][1],
      "} Radius =", round(mec[1], 6))

mec2 = minimum_enclosing_circle([[5, -2], [-3, -2], [-2, 5],
                                 [1, 6], [0, 2]])
print("Center = {", mec2[0][0], ",", mec2[0][1],
      "} Radius =", mec2[1])

# This code is contributed by mohit kumar 29

Output:

Center = { 0.5, 0.5 } Radius = 0.707107
Center = { 1, 1 } Radius = 5

Time Complexity: The time complexity for this solution would be O(N^4), because there are O(N^3) triples of points and, for each triple, we check whether all the points are enclosed by the circle.

Approach 2: A solution applying the convex hull concept can also be used for this problem. The idea is to first form a convex hull on the given set of points. Once the convex hull is computed and the new set of points is returned, the above-mentioned solution can be used on the new set of points to find the MEC. The code for this approach would be the same as above, except that we would also need to get the convex hull first. Please refer to this article for an efficient algorithm to get the convex hull.

Time Complexity: One observation that needs to be made is that if the input already represents some vertices of a convex polygon, then this solution would have the same time complexity as the above naive approach. Therefore, the worst-case complexity of this approach is still O(N^4). However, if the number of vertices of the convex hull is considerably smaller than N, then the complexity would be O(H^4 + N log N), where H represents the number of vertices of the convex hull and the N log N factor is for finding the convex hull, assuming the Graham Scan algorithm is used. Finally, if the number of vertices H of the convex hull is very small, then it can be considered a constant factor, and thus the time complexity would be O(N log N).
[ { "code": null, "e": 24698, "s": 24670, "text": "\n17 Sep, 2021" }, { "code": null, "e": 25079, "s": 24698, "text": "Prerequisites: Equation of circle when three points on the circle are given, Convex HullGiven an array arr[][] containing N points in a 2-D plane with integer coordinates. The task is to find the centre and the radius of the minimum enclosing circle(MEC). A minimum enclosing circle is a circle in which all the points lie either inside the circle or on its boundaries.Examples: " }, { "code": null, "e": 25344, "s": 25079, "text": "Input: arr[][] = {{0, 0}, {0, 1}, {1, 0}} Output: Center = {0.5, 0.5}, Radius = 0.7071 Explanation: On plotting the above circle with radius 0.707 and center (0.5, 0.5), it can be observed clearly that all the mentioned points lie either inside or on the circle. " }, { "code": null, "e": 25452, "s": 25344, "text": "Input: arr[][] = {{5, -2}, {-3, -2}, {-2, 5}, {1, 6}, {0, 2}} Output: Center = {1.0, 1.0}, Radius = 5.000 " }, { "code": null, "e": 25529, "s": 25454, "text": "Naive Approach: This problem can be solved by making a few observations. " }, { "code": null, "e": 25757, "s": 25529, "text": "The first observation which can be made is that the MEC intersects at least one point. That’s because if the MEC does not intersect at any point, then the circle could be further shrunk until it intersects at one of the points." }, { "code": null, "e": 26063, "s": 25757, "text": "The second observation which can be made is that given a circle that encloses all the points and intersects at a single point, the circle can further be shrunk by moving the centre towards that point while keeping the point on the circle boundary until the circle intersects one or more additional points." }, { "code": null, "e": 26366, "s": 26063, "text": "If the circle intersects at two points(A and B) and the distance AB is equal to the circle diameter, then the circle cannot be shrunk anymore. Else, the centre of the circle can be moved towards the midpoint of AB until the circle intersects a third point(at which the circle cannot be shrunk anymore)." }, { "code": null, "e": 26438, "s": 26366, "text": "From the above observations, it can be concluded that the MEC either: " }, { "code": null, "e": 26719, "s": 26438, "text": "Intersects 2 points A and B, where AB = circle diameter. For this case, the circle’s centre would be the midpoint of A and B and the radius would be half of the distance AB.Intersects 3 or more points. The approach to find the center and radius has been discussed in this article." }, { "code": null, "e": 26893, "s": 26719, "text": "Intersects 2 points A and B, where AB = circle diameter. For this case, the circle’s centre would be the midpoint of A and B and the radius would be half of the distance AB." }, { "code": null, "e": 27001, "s": 26893, "text": "Intersects 3 or more points. The approach to find the center and radius has been discussed in this article." }, { "code": null, "e": 27404, "s": 27001, "text": "Thus, the solution to this problem is trivial for N <= 3. For other cases, a simple idea can be formed to solve this problem. The idea is to use all pairs and triples of points to obtain the circle defined those points. 
After obtaining the circle, test to see if the other points are enclosed by that circle and return the smallest valid circle found.Below is the implementation of the above approach: " }, { "code": null, "e": 27408, "s": 27404, "text": "CPP" }, { "code": null, "e": 27416, "s": 27408, "text": "Python3" }, { "code": "// C++ program to find the minimum enclosing// circle for N integer points in a 2-D plane#include <iostream>#include <math.h>#include <vector>using namespace std; // Defining infinityconst double INF = 1e18; // Structure to represent a 2D pointstruct Point { double X, Y;}; // Structure to represent a 2D circlestruct Circle { Point C; double R;}; // Function to return the euclidean distance// between two pointsdouble dist(const Point& a, const Point& b){ return sqrt(pow(a.X - b.X, 2) + pow(a.Y - b.Y, 2));} // Function to check whether a point lies inside// or on the boundaries of the circlebool is_inside(const Circle& c, const Point& p){ return dist(c.C, p) <= c.R;} // The following two functions are the functions used// To find the equation of the circle when three// points are given. // Helper method to get a circle defined by 3 pointsPoint get_circle_center(double bx, double by, double cx, double cy){ double B = bx * bx + by * by; double C = cx * cx + cy * cy; double D = bx * cy - by * cx; return { (cy * B - by * C) / (2 * D), (bx * C - cx * B) / (2 * D) };} // Function to return a unique circle that intersects// three pointsCircle circle_from(const Point& A, const Point& B, const Point& C){ Point I = get_circle_center(B.X - A.X, B.Y - A.Y, C.X - A.X, C.Y - A.Y); I.X += A.X; I.Y += A.Y; return { I, dist(I, A) };} // Function to return the smallest circle// that intersects 2 pointsCircle circle_from(const Point& A, const Point& B){ // Set the center to be the midpoint of A and B Point C = { (A.X + B.X) / 2.0, (A.Y + B.Y) / 2.0 }; // Set the radius to be half the distance AB return { C, dist(A, B) / 2.0 };} // Function to check whether a circle encloses the given pointsbool is_valid_circle(const Circle& c, const vector<Point>& P){ // Iterating through all the points to check // whether the points lie inside the circle or not for (const Point& p : P) if (!is_inside(c, p)) return false; return true;} // Function to return find the minimum enclosing// circle from the given set of pointsCircle minimum_enclosing_circle(const vector<Point>& P){ // To find the number of points int n = (int)P.size(); if (n == 0) return { { 0, 0 }, 0 }; if (n == 1) return { P[0], 0 }; // Set initial MEC to have infinity radius Circle mec = { { 0, 0 }, INF }; // Go over all pair of points for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { // Get the smallest circle that // intersects P[i] and P[j] Circle tmp = circle_from(P[i], P[j]); // Update MEC if tmp encloses all points // and has a smaller radius if (tmp.R < mec.R && is_valid_circle(tmp, P)) mec = tmp; } } // Go over all triples of points for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { for (int k = j + 1; k < n; k++) { // Get the circle that intersects P[i], P[j], P[k] Circle tmp = circle_from(P[i], P[j], P[k]); // Update MEC if tmp encloses all points // and has smaller radius if (tmp.R < mec.R && is_valid_circle(tmp, P)) mec = tmp; } } } return mec;} // Driver codeint main(){ Circle mec = minimum_enclosing_circle({ { 0, 0 }, { 0, 1 }, { 1, 0 } }); cout << \"Center = { \" << mec.C.X << \", \" << mec.C.Y << \" } Radius = \" << mec.R << endl; Circle mec2 = minimum_enclosing_circle({ { 5, -2 }, { -3, -2 }, { -2, 5 }, { 1, 6 
}, { 0, 2 } }); cout << \"Center = { \" << mec2.C.X << \", \" << mec2.C.Y << \" } Radius = \" << mec2.R << endl; return 0;}", "e": 31520, "s": 27416, "text": null }, { "code": "# Python3 program to find the minimum enclosing# circle for N integer points in a 2-D planefrom math import sqrt # Defining infinityINF = 10**18 # Function to return the euclidean distance# between two pointsdef dist(a, b): return sqrt(pow(a[0] - b[0], 2) + pow(a[1] - b[1], 2)) # Function to check whether a point lies inside# or on the boundaries of the circledef is_inside(c, p): return dist(c[0], p) <= c[1] # The following two functions are the functions used# To find the equation of the circle when three# points are given. # Helper method to get a circle defined by 3 pointsdef get_circle_center(bx, by, cx, cy): B = bx * bx + by * by C = cx * cx + cy * cy D = bx * cy - by * cx return [(cy * B - by * C) // (2 * D), (bx * C - cx * B) // (2 * D) ] # Function to return a unique circle that intersects# three pointsdef circle_frOm(A, B,C): I = get_circle_center(B[0] - A[0], B[1] - A[1], C[0] - A[0], C[1] - A[1]) I[0] += A[0] I[1] += A[1] return [I, dist(I, A)] # Function to return the smallest circle# that intersects 2 pointsdef circle_from(A, B): # Set the center to be the midpoint of A and B C = [ (A[0] + B[0]) / 2.0, (A[1] + B[1]) / 2.0] # Set the radius to be half the distance AB return [C, dist(A, B) / 2.0] # Function to check whether a circle encloses the given pointsdef is_valid_circle(c, P): # Iterating through all the points to check # whether the points lie inside the circle or not for p in P: if (is_inside(c, p) == False): return False return True # Function to return find the minimum enclosing# circle from the given set of pointsdef minimum_enclosing_circle(P): # To find the number of points n = len(P) if (n == 0): return [[0, 0], 0] if (n == 1): return [P[0], 0] # Set initial MEC to have infinity radius mec = [[0, 0], INF] # Go over all pair of points for i in range(n): for j in range(i + 1, n): # Get the smallest circle that # intersects P[i] and P[j] tmp = circle_from(P[i], P[j]) # Update MEC if tmp encloses all points # and has a smaller radius if (tmp[1] < mec[1] and is_valid_circle(tmp, P)): mec = tmp # Go over all triples of points for i in range(n): for j in range(i + 1, n): for k in range(j + 1, n): # Get the circle that intersects P[i], P[j], P[k] tmp = circle_frOm(P[i], P[j], P[k]) # Update MEC if tmp encloses all points # and has smaller radius if (tmp[1] < mec[1] and is_valid_circle(tmp, P)): mec = tmp return mec # Driver code mec = minimum_enclosing_circle([ [ 0, 0 ], [ 0, 1 ], [ 1, 0 ] ]) print(\"Center = { \",mec[0][1],\",\",mec[0][1], \"} Radius = \",round(mec[1],6)) mec2 = minimum_enclosing_circle([ [ 5, -2 ], [ -3, -2 ], [ -2, 5 ], [ 1, 6 ], [ 0, 2 ] ]) print(\"Center = {\",mec2[0][0],\",\",mec2[0][1], \"} Radius = \",mec2[1]) # This code is contributed by mohit kumar 29", "e": 34848, "s": 31520, "text": null }, { "code": null, "e": 34917, "s": 34848, "text": "Center = { 0.5, 0.5 } Radius = 0.707107\nCenter = { 1, 1 } Radius = 5" }, { "code": null, "e": 36377, "s": 34919, "text": "Time Complexity: The time complexity for this solution would be of O(N4). That’s because there are N3 triples of points. And for each triple, we check if all the points are enclosed by the circle.Approach 2: A solution with the application of convex hull concept can also be used for this problem. The idea is to first form a convex hull on the given set of points. 
Once the convex hull is performed and the new set of points is returned, then the above-mentioned solution can be used on the new set of points to find the MEC.The code for this approach would be the same as above except that we would also need to get the convex hull first. Please refer to this article for an efficient algorithm to get the convex hull.Time Complexity: One observation that needs to be made that if the input already represents some vertices of a convex polygon, then this solution would have the same time complexity of the above naive approach. Therefore, the worst-case complexity of this approach is still O(N4).However, if the number of vertices of the convex hull is considerably smaller than N, then the complexity would be O(H4 + NLog(N)) where H represents the number of vertices of the convex hull, and the NLog(N) factor is for finding the convex hull assuming Graham Scan algorithm is used.Finally, if the number of vertices, H, of the convex hull, is very small, then it can be considered as a constant factor and thus the time complexity would be O(NLog(N)). " }, { "code": null, "e": 36392, "s": 36377, "text": "mohit kumar 29" }, { "code": null, "e": 36411, "s": 36392, "text": "surindertarika1234" }, { "code": null, "e": 36418, "s": 36411, "text": "circle" }, { "code": null, "e": 36429, "s": 36418, "text": "Algorithms" }, { "code": null, "e": 36453, "s": 36429, "text": "Competitive Programming" }, { "code": null, "e": 36463, "s": 36453, "text": "Geometric" }, { "code": null, "e": 36476, "s": 36463, "text": "Mathematical" }, { "code": null, "e": 36489, "s": 36476, "text": "Mathematical" }, { "code": null, "e": 36499, "s": 36489, "text": "Geometric" }, { "code": null, "e": 36510, "s": 36499, "text": "Algorithms" }, { "code": null, "e": 36608, "s": 36510, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 36617, "s": 36608, "text": "Comments" }, { "code": null, "e": 36630, "s": 36617, "text": "Old Comments" }, { "code": null, "e": 36679, "s": 36630, "text": "SDE SHEET - A Complete Guide for SDE Preparation" }, { "code": null, "e": 36704, "s": 36679, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 36731, "s": 36704, "text": "Introduction to Algorithms" }, { "code": null, "e": 36761, "s": 36731, "text": "Playfair Cipher with Examples" }, { "code": null, "e": 36804, "s": 36761, "text": "Recursive Practice Problems with Solutions" }, { "code": null, "e": 36847, "s": 36804, "text": "Practice for cracking any coding interview" }, { "code": null, "e": 36888, "s": 36847, "text": "Arrow operator -> in C/C++ with Examples" }, { "code": null, "e": 36931, "s": 36888, "text": "Competitive Programming - A Complete Guide" }, { "code": null, "e": 36958, "s": 36931, "text": "Modulo 10^9+7 (1000000007)" } ]
Comparing SH, BASH, KSH, and ZSH Speed | by Shinichi Okada | Towards Data Science
I often hear and read that Bash is slow. So I spent one weekend checking how slow it is. I used two methods. The first one is ShellBench: the ShellSpec project created this benchmark utility for POSIX shell comparison. The second one is the sh-benchmark script created by @satoh_fumiyasu.

Read on to find out what I discovered.

I use a MacBook Pro, and these are my sh, bash, ksh, and zsh versions:

❯ sh --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
Copyright (C) 2007 Free Software Foundation, Inc.

❯ bash --version
GNU bash, version 5.1.4(1)-release (x86_64-apple-darwin20.2.0)
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

❯ ksh --version
  version         sh (AT&T Research) 93u+ 2012-08-01

❯ zsh --version
zsh 5.8 (x86_64-apple-darwin20.0)

You can run a test with any shell (sh, bash, ksh, mksh, posh, zsh, etc.) with ShellBench. It returns the number of executions per second, and it provides eight sample benchmark tests.

ShellBench counts the number of times a sample code snippet is executed. The "assign" sample test tries assigning a variable in different ways, the "count" sample test tries different ways of counting, and so on. I used all the sample tests for sh, bash, ksh, and zsh.

Here is a table from one of my results.

The tests return the number of executions per second, which means taller bars are better results in the following graphs.

(Charts omitted; panels: assign and cmp, count and eval, func and null, output and subshell. Legend: sh: blue, bash: yellow, ksh: green, zsh: red.)

The ksh and zsh seem about seven times faster than bash. The ksh excelled in 17 tests and the zsh in six tests.

The sh-benchmark script tests parameter expansions, array parameter expansions, arithmetic evaluations, tests, and parameter iteration.

The script returns results relative to the bash results. All numbers are percentages, so smaller numbers mean faster.

(Chart omitted; legend: bash: blue, ksh: yellow, zsh: green.)

The zsh was the fastest in six tests and the ksh was the fastest in seven tests.

Interestingly, zsh was very slow for the Fork test.

If you want to test it yourself, you can follow these steps.

ShellBench

You can clone ShellBench, make the shellbench file executable, and run the script:

$ ./shellbench -s sh,bash,ksh,zsh sample/count.sh sample/output.sh

sh-benchmark

Copy sh-benchmark.zsh and sh-benchmark-scripts into the same directory. Make sh-benchmark.zsh executable and run it in your terminal:

$ sh-benchmark.zsh

According to my tests, ksh is the winner and zsh is the runner-up. Both shells are 2–30 times faster than bash, depending on the test.

If you use bash for scripts shorter than 100 lines, as the Google Shell Style Guide suggests, then I don't think you will notice the difference, although it will of course depend on the task.
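If you just want a rough sanity check before installing either tool, a tiny harness like the one below can help. This is a hypothetical sketch, not part of ShellBench or sh-benchmark, and it assumes sh, bash, ksh, and zsh are all on your PATH; it times the same POSIX arithmetic loop in each shell.

import subprocess
import time

# The same POSIX-compatible loop is handed to each shell via -c
SNIPPET = 'i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done'

for shell in ("sh", "bash", "ksh", "zsh"):
    start = time.perf_counter()
    subprocess.run([shell, "-c", SNIPPET], check=True)
    print(f"{shell:>4}: {time.perf_counter() - start:.3f}s")

Wall-clock timing like this includes process startup cost, so it is only a rough indicator compared with the per-second execution counts that ShellBench reports.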
[ { "code": null, "e": 450, "s": 172, "text": "I often hear and read that Bash is slow. So I spent one weekend checking how slow it is. I used two methods. The first one is ShellBench. The ShellSpec created a benchmark utility for POSIX shell comparison. The second one is the sh-benchmark script created by @satoh_fumiyasu." }, { "code": null, "e": 489, "s": 450, "text": "Read on to find out what I discovered." }, { "code": null, "e": 510, "s": 489, "text": "I use a MacBook Pro." }, { "code": null, "e": 560, "s": 510, "text": "And these are my sh, bash, ksh, and zsh versions:" }, { "code": null, "e": 1121, "s": 560, "text": "❯ sh --versionGNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)Copyright (C) 2007 Free Software Foundation, Inc.❯ bash --versionGNU bash, version 5.1.4(1)-release (x86_64-apple-darwin20.2.0)Copyright (C) 2020 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.❯ ksh --version version sh (AT&T Research) 93u+ 2012-08-01❯ zsh --versionzsh 5.8 (x86_64-apple-darwin20.0)" }, { "code": null, "e": 1300, "s": 1121, "text": "You can run a test with any shells, sh, bash, ksh, mksh, posh, zsh, etc with ShellBench. It returns the number of executions per second. It provides eight sample benchmark tests." }, { "code": null, "e": 1503, "s": 1300, "text": "The ShellBench counts the number of times a sample code is executed. The “assign sample test” tests assigning a variable in different ways, the “count sample test” tests different ways of counting, etc." }, { "code": null, "e": 1555, "s": 1503, "text": "I used all sample tests for sh, bash, ksh, and zsh." }, { "code": null, "e": 1595, "s": 1555, "text": "Here is a table from one of my results." }, { "code": null, "e": 1721, "s": 1595, "text": "The tests return the number of executions per second and this means taller graphs are better results in the following graphs." }, { "code": null, "e": 1736, "s": 1721, "text": "assign and cmp" }, { "code": null, "e": 1782, "s": 1736, "text": "sh: Blue, bash: Yellow, ksh: Green, zsh: Red." }, { "code": null, "e": 1797, "s": 1782, "text": "count and eval" }, { "code": null, "e": 1811, "s": 1797, "text": "func and null" }, { "code": null, "e": 1831, "s": 1811, "text": "output and subshell" }, { "code": null, "e": 1944, "s": 1831, "text": "The ksh and zsh seems about seven times faster than bash. The ksh excelled in 17 tests and the zsh in six tests." }, { "code": null, "e": 2075, "s": 1944, "text": "The sh-benchmark tests on parameter expansions, array parameter expansions, arithmetic evaluations, tests, and iterate parameters." }, { "code": null, "e": 2196, "s": 2075, "text": "The script returns results relative to the bash results. All numbers are in percentage. So smaller numbers imply faster." }, { "code": null, "e": 2232, "s": 2196, "text": "bash: Blue, ksh: Yellow, zsh: Green" }, { "code": null, "e": 2313, "s": 2232, "text": "The zsh was the fastest in six tests and the ksh was the fastest in seven tests." }, { "code": null, "e": 2364, "s": 2313, "text": "Interestingly zsh was very slow for the Fork test." }, { "code": null, "e": 2417, "s": 2364, "text": "If you want to test it yourself you can follow this." 
}, { "code": null, "e": 2428, "s": 2417, "text": "Shellbench" }, { "code": null, "e": 2511, "s": 2428, "text": "You can clone Shellbench, make the shellbench file executable, and run the script:" }, { "code": null, "e": 2578, "s": 2511, "text": "$ ./shellbench -s sh,bash,ksh,zsh sample/count.sh sample/output.sh" }, { "code": null, "e": 2591, "s": 2578, "text": "sh-benchmark" }, { "code": null, "e": 2729, "s": 2591, "text": "Copy sh-benchmark.zsh and sh-benchmark-scripts into the same directory. Make the sh-benchmark.zsh executable and run it on your terminal:" }, { "code": null, "e": 2748, "s": 2729, "text": "$ sh-benchmark.zsh" }, { "code": null, "e": 2882, "s": 2748, "text": "According to my tests, ksh is the winner and zsh is the runner-up. Both shells are 2–30 times faster than bash depending on the test." }, { "code": null, "e": 3058, "s": 2882, "text": "If you use bash for less than 100 lines as Google Shell Style Guide suggests, then I don’t think you will notice the difference. Although it will of course depend on the task." } ]
Android - Studio
You will be delighted to know that you can start your Android application development on any of the following operating systems −

Microsoft® Windows® 10/8/7/Vista/2003 (32 or 64-bit)

Mac® OS X® 10.8.5 or higher, up to 10.9 (Mavericks)

GNOME or KDE desktop

The second point is that all the tools required to develop Android applications are open source and can be downloaded from the Web. Following is the list of software you will need before you start your Android application programming.

Java JDK5 or later version

Java Runtime Environment (JRE) 6

Android Studio

Android Studio is the official IDE for Android application development. It is based on IntelliJ IDEA. You can download the latest version of Android Studio from Android Studio 2.2 Download. If you are installing Android Studio on Windows for the first time, you will find a file named android-studio-bundle-143.3101438-windows.exe. Just download and run it on your Windows machine, following the Android Studio wizard guidelines.

If you are installing Android Studio on Mac or Linux, you can download the latest version from Android Studio Mac Download or Android Studio Linux Download, and check the instructions provided along with the downloaded file for Mac OS and Linux. This tutorial assumes that you are going to set up your environment on a Windows machine running the Windows 8.1 operating system.

So let's launch Android Studio.exe. Before launching Android Studio, make sure your machine has the Java JDK installed. To install the Java JDK, take a reference of the Android environment setup.

Once you have launched Android Studio, it is time to specify the JDK7 (or later) path in the Android Studio installer. The image below shows the JDK being set for the Android SDK.

Check the components that are required to create applications; the image below shows Android Studio, the Android SDK, the Android Virtual Machine, and performance (Intel chip) selected.

Specify the local machine path for Android Studio and the Android SDK; the image below uses the default location for the Windows 8.1 x64-bit architecture.

Specify the RAM space for the Android emulator; by default, it takes 512MB of the local machine's RAM.

At the final stage, the installer extracts the SDK packages onto your local machine. It takes a while to finish the task and uses 2626MB of hard disk space.

After you have completed all the above steps, a Finish button appears, and Android Studio opens with the "Welcome to Android Studio" message as shown below.

You can start your application development by selecting Start a new Android Studio project. A new installation frame asks for the application name, package information, and location of the project.

After entering the application name, you are asked to select the form factors your application runs on. Here you need to specify the Minimum SDK; in this tutorial, it is declared as API 23: Android 6.0 (Marshmallow).

The next level of installation involves selecting an activity for the mobile app, which specifies the default layout for the application.

At the final stage, the development tool opens so that you can write the application code.

To test your Android applications, you will need a virtual Android device. So before we start writing our code, let us create an Android virtual device.
Launch the Android AVD Manager by clicking the AVD Manager icon as shown below.

After clicking the virtual device icon, the virtual devices already present in your SDK are shown by default; otherwise, create a virtual device by clicking the Create new Virtual Device button.

If your AVD is created successfully, it means your environment is ready for Android application development. If you like, you can close this window using the top-right cross button. Better still, restart your machine. Once you are done with this last step, you are ready to proceed to your first Android example, but before that we will cover a few more important concepts related to Android application development.

Before writing Hello World code, you must know about XML tags. To write the Hello World code, navigate to app > res > layout > activity_main.xml.

To show Hello World, we need to call a TextView within a layout (for details about TextView and layouts, you should take references at Relative Layout and Text View).

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:paddingLeft="@dimen/activity_horizontal_margin"
   android:paddingRight="@dimen/activity_horizontal_margin"
   android:paddingTop="@dimen/activity_vertical_margin"
   android:paddingBottom="@dimen/activity_vertical_margin"
   tools:context=".MainActivity">

   <TextView android:text="@string/hello_world"
      android:layout_width="550dp"
      android:layout_height="wrap_content" />
</RelativeLayout>

Run the program by clicking Run > Run App or by pressing Shift+F10. Finally, the result appears on the virtual device as shown below.
[ { "code": null, "e": 3741, "s": 3607, "text": "You will be delighted, to know that you can start your Android application development on either of the following operating systems −" }, { "code": null, "e": 3794, "s": 3741, "text": "Microsoft® Windows® 10/8/7/Vista/2003 (32 or 64-bit)" }, { "code": null, "e": 3846, "s": 3794, "text": "Mac® OS X® 10.8.5 or higher, up to 10.9 (Mavericks)" }, { "code": null, "e": 3867, "s": 3846, "text": "GNOME or KDE desktop" }, { "code": null, "e": 4100, "s": 3867, "text": "Second point is that all the required tools to develop Android applications are open source and can be downloaded from the Web. Following is the list of software's you will need before you start your Android application programming." }, { "code": null, "e": 4127, "s": 4100, "text": "Java JDK5 or later version" }, { "code": null, "e": 4160, "s": 4127, "text": "Java Runtime Environment (JRE) 6" }, { "code": null, "e": 4175, "s": 4160, "text": "Android Studio" }, { "code": null, "e": 4598, "s": 4175, "text": "Android Studio is the official IDE for android application development.It works based on IntelliJ IDEA, You can download the latest version of android studio from Android Studio 2.2 Download, If you are new to installing Android Studio on windows,you will find a file, which is named as android-studio-bundle-143.3101438-windows.exe.So just download and run on windows machine according to android studio wizard guideline." }, { "code": null, "e": 4969, "s": 4598, "text": "If you are installing Android Studio on Mac or Linux, You can download the latest version from Android Studio Mac Download,or Android Studio Linux Download, check the instructions provided along with the downloaded file for Mac OS and Linux. This tutorial will consider that you are going to setup your environment on Windows machine having Windows 8.1 operating system." }, { "code": null, "e": 5160, "s": 4969, "text": "So let's launch Android Studio.exe,Make sure before launch Android Studio, Our Machine should required installed Java JDK. To install Java JDK,take a references of Android environment setup" }, { "code": null, "e": 5270, "s": 5160, "text": "Once you launched Android Studio, its time to mention JDK7 path or later version in android studio installer." }, { "code": null, "e": 5316, "s": 5270, "text": "Below the image initiating JDK to android SDK" }, { "code": null, "e": 5500, "s": 5316, "text": "Need to check the components, which are required to create applications, below the image has selected Android Studio, Android SDK, Android Virtual Machine and performance(Intel chip)." }, { "code": null, "e": 5667, "s": 5500, "text": "Need to specify the location of local machine path for Android studio and Android SDK, below the image has taken default location of windows 8.1 x64 bit architecture." }, { "code": null, "e": 5771, "s": 5667, "text": "Need to specify the ram space for Android emulator by default it would take 512MB of local machine RAM." }, { "code": null, "e": 5929, "s": 5771, "text": "At final stage, it would extract SDK packages into our local machine, it would take a while time to finish the task and would take 2626MB of Hard disk space." }, { "code": null, "e": 6092, "s": 5929, "text": "After done all above steps perfectly, you must get finish button and it gonna be open android studio project with Welcome to android studio message as shown below" }, { "code": null, "e": 6288, "s": 6092, "text": "You can start your application development by calling start a new android studio project. 
in a new installation frame should ask Application name, package information and location of the project." }, { "code": null, "e": 6493, "s": 6288, "text": "After entered application name, it going to be called select the form factors your application runs on, here need to specify Minimum SDK, in our tutorial, I have declared as API23: Android 6.0(Mashmallow)" }, { "code": null, "e": 6622, "s": 6493, "text": "The next level of installation should contain selecting the activity to mobile, it specifies the default layout for Applications" }, { "code": null, "e": 6709, "s": 6622, "text": "At the final stage it going to be open development tool to write the application code." }, { "code": null, "e": 6930, "s": 6709, "text": "To test your Android applications, you will need a virtual Android device. So before we start writing our code, let us create an Android virtual device. Launch Android AVD Manager Clicking AVD_Manager icon as shown below" }, { "code": null, "e": 7132, "s": 6930, "text": "After Click on a virtual device icon, it going to be shown by default virtual devices which are present on your SDK, or else need to create a virtual device by clicking Create new Virtual device button" }, { "code": null, "e": 7541, "s": 7132, "text": "If your AVD is created successfully it means your environment is ready for Android application development. If you like, you can close this window using top-right cross button. Better you re-start your machine and once you are done with this last step, you are ready to proceed for your first Android example but before that we will see few more important concepts related to Android Application Development." }, { "code": null, "e": 7686, "s": 7541, "text": "Before Writing a Hello word code, you must know about XML tags.To write hello word code, you should redirect to App>res>layout>Activity_main.xml" }, { "code": null, "e": 7835, "s": 7686, "text": "To show hello word, we need to call text view with layout ( about text view and layout, you must take references at Relative Layout and Text View )." }, { "code": null, "e": 8449, "s": 7835, "text": "<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\" android:paddingLeft=\"@dimen/activity_horizontal_margin\"\n android:paddingRight=\"@dimen/activity_horizontal_margin\"\n android:paddingTop=\"@dimen/activity_vertical_margin\"\n android:paddingBottom=\"@dimen/activity_vertical_margin\" tools:context=\".MainActivity\">\n \n <TextView android:text=\"@string/hello_world\"\n android:layout_width=\"550dp\"\n android:layout_height=\"wrap_content\" />\n</RelativeLayout>" }, { "code": null, "e": 8599, "s": 8449, "text": "Need to run the program by clicking Run>Run App or else need to call shift+f10key. 
Finally, result should be placed at Virtual devices as shown below" }, { "code": null, "e": 8634, "s": 8599, "text": "\n 46 Lectures \n 7.5 hours \n" }, { "code": null, "e": 8646, "s": 8634, "text": " Aditya Dua" }, { "code": null, "e": 8681, "s": 8646, "text": "\n 32 Lectures \n 3.5 hours \n" }, { "code": null, "e": 8695, "s": 8681, "text": " Sharad Kumar" }, { "code": null, "e": 8727, "s": 8695, "text": "\n 9 Lectures \n 1 hours \n" }, { "code": null, "e": 8744, "s": 8727, "text": " Abhilash Nelson" }, { "code": null, "e": 8779, "s": 8744, "text": "\n 14 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8796, "s": 8779, "text": " Abhilash Nelson" }, { "code": null, "e": 8831, "s": 8796, "text": "\n 15 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8848, "s": 8831, "text": " Abhilash Nelson" }, { "code": null, "e": 8881, "s": 8848, "text": "\n 10 Lectures \n 1 hours \n" }, { "code": null, "e": 8898, "s": 8881, "text": " Abhilash Nelson" }, { "code": null, "e": 8905, "s": 8898, "text": " Print" }, { "code": null, "e": 8916, "s": 8905, "text": " Add Notes" } ]
Create Two Columns with Two Nested Columns in Bootstrap
To create two columns, where the first column contains two nested columns, you can try running the following code −

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <h2>Nested Columns</h2>
      <div class = "row">
         <div class = "col-sm-9" style = "background-color:orange;">
            Column1
            <div class = "row">
               <div class = "col-sm-6" style = "background-color:orange; color:white;">nested column</div>
               <div class = "col-sm-6" style = "background-color:orange; color:white;">nested column</div>
            </div>
         </div>
         <div class = "col-sm-3" style = "background-color:red;">Column2</div>
      </div>
   </body>
</html>
[ { "code": null, "e": 1147, "s": 1062, "text": "To create two columns in two nested columns, you can try to run the following code −" }, { "code": null, "e": 1157, "s": 1147, "text": "Live Demo" }, { "code": null, "e": 1971, "s": 1157, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <h2>Nested Columns</h2>\n <div class = \"row\">\n <div class = \"col-sm-9\" style = \"background-color:orange;\">\n Column1\n <div class=\"row\">\n <div class = \"col-sm-6\" style=\"background-color:orange; color:white;\">nested column</div>\n <div class = \"col-sm-6\" style=\"background-color:orange; color:white;\">nested column</div>\n </div>\n </div>\n <div class = \"col-sm-3\" style=\"background-color:red;\">Column2</div>\n </div>\n </body>\n</html>" } ]
C++ Program To Delete Nodes Which Have A Greater Value On Right Side - GeeksforGeeks
30 Mar, 2022

Given a singly linked list, remove all the nodes which have a greater value on the right side.

Examples:

Input: 12->15->10->11->5->6->2->3->NULL
Output: 15->11->6->3->NULL
Explanation: 12, 10, 5 and 2 have been deleted because there is a greater value on the right side. When we examine 12, we see that after 12 there is one node with a value greater than 12 (i.e. 15), so we delete 12. When we examine 15, we find no node after 15 that has a value greater than 15, so we keep this node. When we go on like this, we get 15->11->6->3.

Input: 10->20->30->40->50->60->NULL
Output: 60->NULL
Explanation: 10, 20, 30, 40, and 50 have been deleted because they all have a greater value on the right side.

Input: 60->50->40->30->20->10->NULL
Output: No Change.

Method 1 (Simple): Use two loops. In the outer loop, pick nodes of the linked list one by one. In the inner loop, check if there exists a node whose value is greater than the picked node. If such a node exists, delete the picked node. Time Complexity: O(n^2)

Method 2 (Use Reverse): Thanks to Paras for providing the below algorithm.
1. Reverse the list.
2. Traverse the reversed list, keeping track of the max so far. If the next node is less than max, delete the next node; otherwise set max = next node.
3. Reverse the list again to retain the original order.

Time Complexity: O(n)

Thanks to R.Srinivasan for providing the code below.

C++

// C++ program to delete nodes which
// have a greater value on right side
#include <bits/stdc++.h>
using namespace std;

// Structure of a linked list node
struct Node {
    int data;
    struct Node* next;
};

// Prototypes for utility functions
void reverseList(struct Node** headref);
void _delLesserNodes(struct Node* head);

/* Deletes nodes which have a node with a
   greater value on the right side */
void delLesserNodes(struct Node** head_ref)
{
    // 1. Reverse the linked list
    reverseList(head_ref);

    /* 2. In the reversed list, delete nodes which
       have a node with a greater value on the left
       side. Note that the head node is never deleted
       because it is the leftmost node. */
    _delLesserNodes(*head_ref);

    /* 3. Reverse the linked list again to
       retain the original order */
    reverseList(head_ref);
}

/* Deletes nodes which have greater value node(s)
   on the left side */
void _delLesserNodes(struct Node* head)
{
    struct Node* current = head;

    // Initialize max
    struct Node* maxnode = head;
    struct Node* temp;

    while (current != NULL && current->next != NULL) {
        /* If the next node is smaller than max,
           then delete the next node */
        if (current->next->data < maxnode->data) {
            temp = current->next;
            current->next = temp->next;
            free(temp);
        }

        /* Otherwise, move current and update max */
        else {
            current = current->next;
            maxnode = current;
        }
    }
}

/* Utility function to insert a node at the beginning */
void push(struct Node** head_ref, int new_data)
{
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    new_node->data = new_data;
    new_node->next = *head_ref;
    *head_ref = new_node;
}

/* Utility function to reverse a linked list */
void reverseList(struct Node** headref)
{
    struct Node* current = *headref;
    struct Node* prev = NULL;
    struct Node* next;

    while (current != NULL) {
        next = current->next;
        current->next = prev;
        prev = current;
        current = next;
    }
    *headref = prev;
}

/* Utility function to print a linked list */
void printList(struct Node* head)
{
    while (head != NULL) {
        cout << " " << head->data;
        head = head->next;
    }
    cout << "\n";
}

// Driver code
int main()
{
    struct Node* head = NULL;

    /* Create following linked list
       12->15->10->11->5->6->2->3 */
    push(&head, 3);
    push(&head, 2);
    push(&head, 6);
    push(&head, 5);
    push(&head, 11);
    push(&head, 10);
    push(&head, 15);
    push(&head, 12);

    cout << "Given Linked List";
    printList(head);

    delLesserNodes(&head);

    cout << "Modified Linked List";
    printList(head);

    return 0;
}
// This code is contributed by shivanisinghss2110

Output:

Given Linked List 12 15 10 11 5 6 2 3
Modified Linked List 15 11 6 3

Time Complexity: O(n)
Auxiliary Space: O(1)

Source: https://www.geeksforgeeks.org/forum/topic/amazon-interview-question-for-software-engineerdeveloper-about-linked-lists-6

Please refer to the complete article on Delete nodes which have a greater value on right side for more details!
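To make the reverse-filter-reverse idea easy to experiment with, here is a hypothetical Python sketch (an illustration, not from the article) that applies the same logic to a plain list of node values instead of pointer manipulation:

def del_lesser_nodes(values):
    result = []
    max_so_far = float("-inf")
    for v in reversed(values):          # step 1: walk from the right
        if v >= max_so_far:             # step 2: keep only running maxima
            result.append(v)
            max_so_far = v
    return result[::-1]                 # step 3: restore original order

print(del_lesser_nodes([12, 15, 10, 11, 5, 6, 2, 3]))   # [15, 11, 6, 3]

The >= comparison preserves a node whose value equals the maximum on its right, since such a node has no strictly greater value after it.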
[ { "code": null, "e": 23785, "s": 23757, "text": "\n30 Mar, 2022" }, { "code": null, "e": 23881, "s": 23785, "text": "Given a singly linked list, remove all the nodes which have a greater value on the right side. " }, { "code": null, "e": 23892, "s": 23881, "text": "Examples: " }, { "code": null, "e": 24632, "s": 23892, "text": "Input: 12->15->10->11->5->6->2->3->NULL\nOutput: 15->11->6->3->NULL\nExplanation: 12, 10, 5 and 2 have been deleted because there is a \n greater value on the right side. When we examine 12, \n we see that after 12 there is one node with a value \n greater than 12 (i.e. 15), so we delete 12. When we \n examine 15, we find no node after 15 that has a value \n greater than 15, so we keep this node. When we go like \n this, we get 15->6->3\n\nInput: 10->20->30->40->50->60->NULL\nOutput: 60->NULL\nExplanation: 10, 20, 30, 40, and 50 have been deleted because \n they all have a greater value on the right side.\n\nInput: 60->50->40->30->20->10->NULL\nOutput: No Change." }, { "code": null, "e": 24920, "s": 24632, "text": "Method 1 (Simple): Use two loops. In the outer loop, pick nodes of the linked list one by one. In the inner loop, check if there exists a node whose value is greater than the picked node. If there exists a node whose value is greater, then delete the picked node. Time Complexity: O(n^2)" }, { "code": null, "e": 25286, "s": 24920, "text": "Method 2 (Use Reverse): Thanks to Paras for providing the below algorithm. 1. Reverse the list. 2. Traverse the reversed list. Keep max till now. If the next node is less than max, then delete the next node, otherwise max = next node. 3. Reverse the list again to retain the original order. Time Complexity: O(n)Thanks to R.Srinivasan for providing the code below. " }, { "code": null, "e": 25290, "s": 25286, "text": "C++" }, { "code": "// C++ program to delete nodes which// have a greater value on right side#include <bits/stdc++.h>using namespace std; // Structure of a linked list nodestruct Node{ int data; struct Node* next;}; // Prototype for utility functionsvoid reverseList(struct Node** headref);void _delLesserNodes(struct Node* head); /* Deletes nodes which have a node with greater value node on left side */void delLesserNodes(struct Node** head_ref){ // 1. Reverse the linked list reverseList(head_ref); /* 2. In the reversed list, delete nodes which have a node with greater value node on left side. Note that head node is never deleted because it is the leftmost node.*/ _delLesserNodes(*head_ref); /* 3. 
Reverse the linked list again to retain the original order */ reverseList(head_ref);} /* Deletes nodes which have greater value node(s) on left side */void _delLesserNodes(struct Node* head){ struct Node* current = head; // Initialize max struct Node* maxnode = head; struct Node* temp; while (current != NULL && current->next != NULL) { /* If current is smaller than max, then delete current */ if (current->next->data < maxnode->data) { temp = current->next; current->next = temp->next; free(temp); } /* If current is greater than max, then update max and move current */ else { current = current->next; maxnode = current; } }} /* Utility function to insert a node at the beginning */void push(struct Node** head_ref, int new_data){ struct Node* new_node = (struct Node*)malloc(sizeof(struct Node)); new_node->data = new_data; new_node->next = *head_ref; *head_ref = new_node;} /* Utility function to reverse a linked list */void reverseList(struct Node** headref){ struct Node* current = *headref; struct Node* prev = NULL; struct Node* next; while (current != NULL) { next = current->next; current->next = prev; prev = current; current = next; } *headref = prev;} /* Utility function to print a linked list */void printList(struct Node* head){ while (head != NULL) { cout << \" \" << head->data ; head = head->next; } cout << \"\" ;} // Driver codeint main(){ struct Node* head = NULL; /* Create following linked list 12->15->10->11->5->6->2->3 */ push(&head, 3); push(&head, 2); push(&head, 6); push(&head, 5); push(&head, 11); push(&head, 10); push(&head, 15); push(&head, 12); cout << \"Given Linked List \" ; printList(head); delLesserNodes(&head); cout << \"Modified Linked List \" ; printList(head); return 0;}// This code is contributed by shivanisinghss2110", "e": 28160, "s": 25290, "text": null }, { "code": null, "e": 28168, "s": 28160, "text": "Output:" }, { "code": null, "e": 28239, "s": 28168, "text": "Given Linked List \n12 15 10 11 5 6 2 3\nModified Linked List \n15 11 6 3" }, { "code": null, "e": 28261, "s": 28239, "text": "Time Complexity: O(n)" }, { "code": null, "e": 28283, "s": 28261, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 28411, "s": 28283, "text": "Source: https://www.geeksforgeeks.org/forum/topic/amazon-interview-question-for-software-engineerdeveloper-about-linked-lists-6" }, { "code": null, "e": 28516, "s": 28411, "text": "Please refer complete article on Delete nodes which have a greater value on right side for more details!" }, { "code": null, "e": 28524, "s": 28516, "text": "rohan07" }, { "code": null, "e": 28537, "s": 28524, "text": "Linked Lists" }, { "code": null, "e": 28541, "s": 28537, "text": "C++" }, { "code": null, "e": 28554, "s": 28541, "text": "C++ Programs" }, { "code": null, "e": 28566, "s": 28554, "text": "Linked List" }, { "code": null, "e": 28578, "s": 28566, "text": "Linked List" }, { "code": null, "e": 28582, "s": 28578, "text": "CPP" }, { "code": null, "e": 28680, "s": 28582, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 28689, "s": 28680, "text": "Comments" }, { "code": null, "e": 28702, "s": 28689, "text": "Old Comments" }, { "code": null, "e": 28730, "s": 28702, "text": "Operator Overloading in C++" }, { "code": null, "e": 28754, "s": 28730, "text": "Sorting a vector in C++" }, { "code": null, "e": 28774, "s": 28754, "text": "Polymorphism in C++" }, { "code": null, "e": 28807, "s": 28774, "text": "Friend class and function in C++" }, { "code": null, "e": 28851, "s": 28807, "text": "List in C++ Standard Template Library (STL)" }, { "code": null, "e": 28886, "s": 28851, "text": "Header files in C/C++ and its uses" }, { "code": null, "e": 28912, "s": 28886, "text": "C++ Program for QuickSort" }, { "code": null, "e": 28971, "s": 28912, "text": "How to return multiple values from a function in C or C++?" }, { "code": null, "e": 29015, "s": 28971, "text": "Program to print ASCII Value of a character" } ]
Org.Json - JSONException Handling
Utility classes of org.json throw a JSONException in case of invalid JSON. The following example shows how to handle a JSONException.

import org.json.JSONException;
import org.json.XML;

public class JSONDemo {
   public static void main(String[] args) {
      try {
         //XML tag name should not have a space.
         String xmlText = "<Other Details>null</Other Details>";
         System.out.println(xmlText);

         //Convert an XML to JSONObject
         System.out.println(XML.toJSONObject(xmlText));
      }
      catch(JSONException e) {
         System.out.println(e.getMessage());
      }
   }
}

The output is as follows −

<Other Details>null</Other Details>
Misshaped close tag at 34 [character 35 line 1]
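As a side note (a hypothetical Python analog added here for comparison, not part of the org.json tutorial), the same defensive pattern applies in other ecosystems: catch the parser's exception type when feeding it malformed markup. For example, with Python's standard library XML parser:

import xml.etree.ElementTree as ET

# The space in the tag name makes this XML invalid
xmlText = "<Other Details>null</Other Details>"

try:
    ET.fromstring(xmlText)
except ET.ParseError as e:
    # Prints a parser diagnostic, e.g. "not well-formed (invalid token): ..."
    print(e)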
[ { "code": null, "e": 2102, "s": 1975, "text": "Utility classes of org.json throws JSONException in case of invalid JSON. Following example shows how to handle JSONException." }, { "code": null, "e": 2584, "s": 2102, "text": "import org.json.JSONException;\nimport org.json.XML;\n\npublic class JSONDemo {\n public static void main(String[] args) {\n try {\n //XML tag name should not have space.\n String xmlText = \"<Other Details>null</Other Details>\";\n System.out.println(xmlText);\n\n //Convert an XML to JSONObject\n System.out.println(XML.toJSONObject(xmlText));\n } \n catch(JSONException e){ \n System.out.println(e.getMessage());\n }\n }\n}" }, { "code": null, "e": 2669, "s": 2584, "text": "<Other Details>null</Other Details>\nMisshaped close tag at 34 [character 35 line 1]\n" }, { "code": null, "e": 2704, "s": 2669, "text": "\n 18 Lectures \n 1.5 hours \n" }, { "code": null, "e": 2723, "s": 2704, "text": " Dr. Saatya Prasad" }, { "code": null, "e": 2760, "s": 2723, "text": "\n 107 Lectures \n 13.5 hours \n" }, { "code": null, "e": 2779, "s": 2760, "text": " Arnab Chakraborty" }, { "code": null, "e": 2812, "s": 2779, "text": "\n 75 Lectures \n 5 hours \n" }, { "code": null, "e": 2834, "s": 2812, "text": " Revathi Ramachandran" }, { "code": null, "e": 2866, "s": 2834, "text": "\n 14 Lectures \n 44 mins\n" }, { "code": null, "e": 2879, "s": 2866, "text": " Zach Miller" }, { "code": null, "e": 2911, "s": 2879, "text": "\n 12 Lectures \n 54 mins\n" }, { "code": null, "e": 2935, "s": 2911, "text": " Prof. Paul Cline, Ed.D" }, { "code": null, "e": 2968, "s": 2935, "text": "\n 54 Lectures \n 4 hours \n" }, { "code": null, "e": 2986, "s": 2968, "text": " Gilad James, PhD" }, { "code": null, "e": 2993, "s": 2986, "text": " Print" }, { "code": null, "e": 3004, "s": 2993, "text": " Add Notes" } ]
Python - cmp() Method
cmp() compares two values: the result of the comparison is -1 if the first value is smaller than the second, and 1 if the first value is greater than the second. If both are equal, the result of cmp() is zero. It was a built-in in Python 2 and was removed from Python 3, so the example below defines an equivalent cmp() by hand.

The example below illustrates different scenarios showing the use of the cmp() method.

def cmp(x, y):
    return (x > y) - (x < y)

# x > y
x = 5
y = 3
print("The cmp value for x>y is : ", cmp(x, y), "\n")

# x < y
x = 7
y = 9
print("The cmp value for x<y is : ", cmp(x, y), "\n")

# x = y
x = 13
y = 13
print("The cmp value for x=y is : ", cmp(x, y))

# odd and even
k = 16
if cmp(0, k % 2):
    print("\n", "The given number", k, "is odd number")
else:
    print("\n", "The given number", k, "is even number")

k = 31
if cmp(0, k % 2):
    print("\n", "The given number", k, "is odd number")
else:
    print("\n", "The given number", k, "is even number")

Running the above code gives us the following result −

The cmp value for x>y is :  1

The cmp value for x<y is :  -1

The cmp value for x=y is :  0

The given number 16 is even number

The given number 31 is odd number
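One practical aside (an addition to the original example): since Python 3 removed the built-in cmp(), a handwritten cmp() like the one above is typically paired with functools.cmp_to_key when a comparison function is needed for sorting.

from functools import cmp_to_key

def cmp(x, y):
    return (x > y) - (x < y)

# Sort using the comparison function instead of a key function
print(sorted([5, 3, 13, 7, 9], key=cmp_to_key(cmp)))   # [3, 5, 7, 9, 13]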
[ { "code": null, "e": 1317, "s": 1062, "text": "The cmp() is part of the python standard library which compares two integers. The result of comparison is -1 if the first integer is smaller than second and 1 if the first integer is greater than the second. If both are equal the result of cmp() is zero." }, { "code": null, "e": 1395, "s": 1317, "text": "Below example illustrates different scenario showing the use of cmp() method." }, { "code": null, "e": 1406, "s": 1395, "text": " Live Demo" }, { "code": null, "e": 1939, "s": 1406, "text": "def cmp(x, y):\n return (x > y) - (x < y)\n#x>y\nx = 5\ny = 3\nprint(\"The cmp value for x>y is : \",cmp(x, y),\"\\n\")\n#x<y\nx = 7\ny = 9\nprint(\"The cmp value for x<y is : \",cmp(x, y),\"\\n\")\n#x=y\nx = 13\ny = 13\nprint(\"The cmp value for x=y is : \",cmp(x, y))\n#odd and even\nk = 16\nif cmp(0, k % 2):\n print(\"\\n\",\"The given number\",k,\"is odd number \")\nelse:\n print(\"\\n\",\"The given number\",k,\"is even number\")\nk= 31\nif cmp(0, k % 2):\n print(\"\\n\",\"The given number\",k,\"is odd number\")\nelse:\n print(\"\\n\",\"The given number\",k,\"is even number\")" }, { "code": null, "e": 1994, "s": 1939, "text": "Running the above code gives us the following result −" }, { "code": null, "e": 2155, "s": 1994, "text": "The cmp value for x>y is : 1\n\nThe cmp value for x<y is : -1\n\nThe cmp value for x=y is : 0\n\nThe given number 16 is even number\n\nThe given number 31 is odd number" } ]
Swing Animation Effect with CSS
The swing animation effect moves an element back and forth, or from side to side, as if it were suspended or rotating on an axis.

<html>
   <head>
      <style>
         .animated {
            background-image: url(/css/images/logo.png);
            background-repeat: no-repeat;
            background-position: left top;
            padding-top:95px;
            margin-bottom:60px;
            -webkit-animation-duration: 10s;
            animation-duration: 10s;
            -webkit-animation-fill-mode: both;
            animation-fill-mode: both;
         }

         @-webkit-keyframes swing {
            20%, 40%, 60%, 80%, 100% { -webkit-transform-origin: top center; }
            20% { -webkit-transform: rotate(15deg); }
            40% { -webkit-transform: rotate(-10deg); }
            60% { -webkit-transform: rotate(5deg); }
            80% { -webkit-transform: rotate(-5deg); }
            100% { -webkit-transform: rotate(0deg); }
         }

         @keyframes swing {
            20% { transform: rotate(15deg); }
            40% { transform: rotate(-10deg); }
            60% { transform: rotate(5deg); }
            80% { transform: rotate(-5deg); }
            100% { transform: rotate(0deg); }
         }

         .swing {
            -webkit-transform-origin: top center;
            transform-origin: top center;
            -webkit-animation-name: swing;
            animation-name: swing;
         }
      </style>
   </head>
   <body>

      <div id = "animated-example" class = "animated swing"></div>
      <button onclick = "myFunction()">Reload page</button>

      <script>
         function myFunction() {
            location.reload();
         }
      </script>
   </body>
</html>
[ { "code": null, "e": 1192, "s": 1062, "text": "The swing animation effect move or cause to move back and forth or from side to side while suspended or on an axis to an element." }, { "code": null, "e": 1202, "s": 1192, "text": "Live Demo" }, { "code": null, "e": 2795, "s": 1202, "text": "<html>\n <head>\n <style>\n .animated {\n background-image: url(/css/images/logo.png);\n background-repeat: no-repeat;\n background-position: left top;\n padding-top:95px;\n margin-bottom:60px;\n -webkit-animation-duration: 10s;\n animation-duration: 10s;\n -webkit-animation-fill-mode: both;\n animation-fill-mode: both;\n }\n\n @-webkit-keyframes swing {\n 20%, 40%, 60%, 80%, 100% { -webkit-transform-origin: top center; }\n 20% { -webkit-transform: rotate(15deg); }\n 40% { -webkit-transform: rotate(-10deg); }\n 60% { -webkit-transform: rotate(5deg); }\n 80% { -webkit-transform: rotate(-5deg); }\n 100% { -webkit-transform: rotate(0deg); }\n }\n\n @keyframes swing {\n 20% { transform: rotate(15deg); }\n 40% { transform: rotate(-10deg); }\n 60% { transform: rotate(5deg); }\n 80% { transform: rotate(-5deg); }\n 100% { transform: rotate(0deg); }\n }\n\n .swing {\n -webkit-transform-origin: top center;\n transform-origin: top center;\n -webkit-animation-name: swing;\n animation-name: swing;\n }\n </style>\n\n </head>\n <body>\n\n <div id = \"animated-example\" class = \"animated swing\"></div>\n <button onclick = \"myFunction()\">Reload page</button>\n\n <script>\n function myFunction() {\n location.reload();\n }\n </script>\n </body>\n</html>" } ]
Matplotlib.axes.Axes.set_xticks() in Python
19 Apr, 2020

Matplotlib is a library in Python, and it is a numerical-mathematical extension for the NumPy library. The Axes class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. Instances of Axes support callbacks through a callbacks attribute.

The Axes.set_xticks() function in the axes module of the matplotlib library is used to set the x ticks with a list of ticks.

Syntax: Axes.set_xticks(self, ticks, minor=False)

Parameters: This method accepts the following parameters.

ticks : This parameter is the list of x-axis tick locations.

minor : This parameter specifies whether to set major ticks (False, the default) or minor ticks (True).

Return value: This method does not return any value.

The examples below illustrate the matplotlib.axes.Axes.set_xticks() function in matplotlib.axes:

Example 1:

# Implementation of matplotlib function
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def func(x):
    return (x - 4) * (x - 6) * (x - 5) + 100

a, b = 2, 9  # integral limits
x = np.linspace(0, 10)
y = func(x)

fig, ax = plt.subplots()
ax.plot(x, y, "k", linewidth = 2)
ax.set_ylim(bottom = 0)

# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0), *zip(ix, iy), (b, 0)]
poly = Polygon(verts, facecolor ='green',
               edgecolor ='0.5', alpha = 0.4)
ax.add_patch(poly)

ax.text(0.5 * (a + b), 30,
        r"$\int_a ^ b f(x)\mathrm{d}x$",
        horizontalalignment ='center',
        fontsize = 20)

fig.text(0.9, 0.05, '$x$')
fig.text(0.1, 0.9, '$y$')

ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)

ax.set_xticks((a, b))

fig.suptitle('matplotlib.axes.Axes.set_xticks() function Example\n\n',
             fontweight ="bold")
fig.canvas.draw()
plt.show()

Output:

Example 2:

# Implementation of matplotlib function
import numpy as np
import matplotlib.pyplot as plt

# Fixing random state for reproducibility
np.random.seed(19680801)

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
y2 = y + 0.2 * np.random.normal(size = x.shape)

fig, ax = plt.subplots()
ax.plot(x, y)
ax.plot(x, y2)

ax.set_xticks([0, np.pi, 2 * np.pi])

ax.spines['left'].set_bounds(-1, 1)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)

fig.suptitle('matplotlib.axes.Axes.set_xticks() function Example\n\n',
             fontweight ="bold")
fig.canvas.draw()
plt.show()

Output:
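As a quick reference, here is a stripped-down sketch (an illustrative addition, assuming only that matplotlib is installed) showing both the major-tick and the minor-tick forms of the call:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3, 4], [0, 1, 4, 9, 16])

ax.set_xticks([0, 2, 4])             # major tick locations
ax.set_xticks([1, 3], minor = True)  # minor tick locations

plt.show()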
[ { "code": null, "e": 28, "s": 0, "text": "\n19 Apr, 2020" }, { "code": null, "e": 328, "s": 28, "text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute." }, { "code": null, "e": 443, "s": 328, "text": "The Axes.set_xticks() function in axes module of matplotlib library is used to Set the x ticks with list of ticks." }, { "code": null, "e": 493, "s": 443, "text": "Syntax: Axes.set_xticks(self, ticks, minor=False)" }, { "code": null, "e": 551, "s": 493, "text": "Parameters: This method accepts the following parameters." }, { "code": null, "e": 612, "s": 551, "text": "ticks : This parameter is the list of x-axis tick locations." }, { "code": null, "e": 689, "s": 612, "text": "minor : This parameter is used whether set major ticks or to set minor ticks" }, { "code": null, "e": 743, "s": 689, "text": "Return value: This method does not returns any value." }, { "code": null, "e": 836, "s": 743, "text": "Below examples illustrate the matplotlib.axes.Axes.set_xticks() function in matplotlib.axes:" }, { "code": null, "e": 847, "s": 836, "text": "Example 1:" }, { "code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as pltfrom matplotlib.patches import Polygon def func(x): return (x - 4) * (x - 6) * (x - 5) + 100 a, b = 2, 9 # integral limitsx = np.linspace(0, 10)y = func(x) fig, ax = plt.subplots()ax.plot(x, y, \"k\", linewidth = 2)ax.set_ylim(bottom = 0) # Make the shaded regionix = np.linspace(a, b)iy = func(ix)verts = [(a, 0), *zip(ix, iy), (b, 0)]poly = Polygon(verts, facecolor ='green', edgecolor ='0.5', alpha = 0.4)ax.add_patch(poly) ax.text(0.5 * (a + b), 30, r\"$\\int_a ^ b f(x)\\mathrm{d}x$\", horizontalalignment ='center', fontsize = 20) fig.text(0.9, 0.05, '$x$')fig.text(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) ax.set_xticks((a, b)) fig.suptitle('matplotlib.axes.Axes.set_xticks()\\ function Example\\n\\n', fontweight =\"bold\")fig.canvas.draw()plt.show()", "e": 1793, "s": 847, "text": null }, { "code": null, "e": 1801, "s": 1793, "text": "Output:" }, { "code": null, "e": 1812, "s": 1801, "text": "Example 2:" }, { "code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt # Fixing random state for reproducibilitynp.random.seed(19680801) x = np.linspace(0, 2 * np.pi, 100)y = np.sin(x)y2 = y + 0.2 * np.random.normal(size = x.shape) fig, ax = plt.subplots()ax.plot(x, y)ax.plot(x, y2) ax.set_xticks([0, np.pi, 2 * np.pi]) ax.spines['left'].set_bounds(-1, 1)ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) fig.suptitle('matplotlib.axes.Axes.set_xticks() \\function Example\\n\\n', fontweight =\"bold\")fig.canvas.draw()plt.show()", "e": 2389, "s": 1812, "text": null }, { "code": null, "e": 2397, "s": 2389, "text": "Output:" }, { "code": null, "e": 2415, "s": 2397, "text": "Python-matplotlib" }, { "code": null, "e": 2422, "s": 2415, "text": "Python" } ]
Different Ways to Use Font Awesome Icons in Android
07 Mar, 2021

Icons are symbols that make the user interface easy to understand for a naive user. Many icons are available in Google's Material UI, but some icons are still missing from the material icons library. FontAwesome is an amazing platform that provides useful icons used in many web and mobile apps. Officially there is no FontAwesome library available for Android, but there is a really good community whose contributions help developers. So in this article, we are going to discuss two different approaches.

Approach 1: Using the FontAwesome library

Step 1: Create a New Project

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio.

Step 2: Add dependencies to the build.gradle(Module:app) file

Add the following dependency to the build.gradle(Module:app) file.

dependencies {
    // font awesome library
    implementation 'info.androidhive:fontawesome:0.0.5'
}

Step 3: Working with the activity_main.xml file

Then go to the XML file where you want to put the font awesome icon. Navigate to app > res > layout > activity_main.xml and add the code below to that file.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_marginTop="5dp"
    android:background="#fff"
    android:orientation="vertical">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_margin="10dp"
        android:orientation="horizontal">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginLeft="5dp"
            android:text="ICON 1"
            android:textSize="12sp"
            android:textStyle="bold" />

        <info.androidhive.fontawesome.FontTextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/fa_user_alt_slash_solid"
            android:textColor="#f88"
            android:textSize="25sp"
            app:solid_icon="true" />
    </LinearLayout>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_margin="10dp"
        android:orientation="horizontal">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginLeft="5dp"
            android:text="ICON 2"
            android:textSize="12sp"
            android:textStyle="bold" />

        <info.androidhive.fontawesome.FontTextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:padding="10dp"
            android:text="@string/fa_lock_solid"
            android:textColor="#ff8"
            android:textSize="25sp"
            app:solid_icon="true" />
    </LinearLayout>

</LinearLayout>

In the text property, we just put the string value of the icon; that value can be found on the font awesome site. Here is the link; just follow that link and you will get the value.

Output UI: Just run your app. Voilà, now you can see the icons on your screen.

Approach 2: Using a font resource directory

This approach builds everything from scratch; no third-party plugin is required.

Step 1: Download the font awesome otf file by following this link. After downloading, unzip it, go to the otf folder and choose any file. Rename the file using all lowercase letters.

Step 2: Create a resource directory. In your project folder, right-click the main folder and select Android Resource Directory; under resource values, select font. After that, you can see the new font folder. Now just copy-paste the fontawesome.otf file into that directory.
Note: We have renamed the fontawesome file to font_awesome.otf (do not use capital letters while renaming the file).

Step 3: To use an icon, use the string value that represents it in font-awesome. You can get these values from this link. Use the following code in your desired XML file:

<TextView
    android:layout_width="200dp"
    android:layout_height="200dp"
    android:fontFamily="@font/font_awesome"
    android:text="\uf004"
    android:textSize="150sp" />

Just look at the text property and the font-family property in <TextView>. The text value is the string value of the icon, and the font family is our fontawesome file in the font resource folder. That's it; you can now see icons on your screen.

This approach may use some extra memory, so your app size may increase a little; if you are concerned about app size, do not use it and use the first approach instead. If you get stuck at any point, feel free to check the GitHub account.
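As a follow-up, the same font can also be applied from code. This sketch is not from the original article; the view id heart_icon is a hypothetical example, and it assumes the font_awesome.otf file from Step 2 plus the AndroidX core library:

// Hedged sketch: applying the FontAwesome typeface programmatically
// inside an Activity. R.id.heart_icon is a hypothetical view id.
import android.graphics.Typeface;
import android.os.Bundle;
import android.widget.TextView;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.content.res.ResourcesCompat;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        TextView icon = findViewById(R.id.heart_icon);
        // Load the font placed in res/font in Step 2
        Typeface fa = ResourcesCompat.getFont(this, R.font.font_awesome);
        icon.setTypeface(fa);
        icon.setText("\uf004"); // same heart code point as in the XML example
    }
}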
[ { "code": null, "e": 28, "s": 0, "text": "\n07 Mar, 2021" }, { "code": null, "e": 581, "s": 28, "text": "Icons are symbols that make it easy to understand the User interface for a naive user. There are many icons are available in Material UI by Google UI. But still few icons are not available in the material icons library. FontAwesome is an amazing platform, which provides useful icons which are used in many web and mobile app. Officially there is no FontAwesome library available for android, But hopefully, there is a really good community whose contribution helps the developers. So in this article, we are going to discuss two different approaches. " }, { "code": null, "e": 610, "s": 581, "text": "Step 1: Create a New Project" }, { "code": null, "e": 722, "s": 610, "text": "To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. " }, { "code": null, "e": 784, "s": 722, "text": "Step 2: Add dependencies to the build.gradle(Module:app) file" }, { "code": null, "e": 851, "s": 784, "text": "Add the following dependency to the build.gradle(Module:app) file." }, { "code": null, "e": 866, "s": 851, "text": "dependencies {" }, { "code": null, "e": 893, "s": 866, "text": " // font awesome library" }, { "code": null, "e": 948, "s": 893, "text": " implementation ‘info.androidhive:fontawesome:0.0.5’" }, { "code": null, "e": 950, "s": 948, "text": "}" }, { "code": null, "e": 998, "s": 950, "text": "Step 3: Working with the activity_main.xml file" }, { "code": null, "e": 1211, "s": 998, "text": "Then go to your XML file where you want to put the font awesome icon. Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file. " }, { "code": null, "e": 1215, "s": 1211, "text": "XML" }, { "code": "<?xml version=\"1.0\" encoding=\"utf-8\"?><LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:app=\"http://schemas.android.com/apk/res-auto\" android:layout_width=\"match_parent\" android:layout_height=\"wrap_content\" android:layout_marginTop=\"5dp\" android:background=\"#fff\" android:orientation=\"vertical\"> <LinearLayout android:layout_width=\"match_parent\" android:layout_height=\"wrap_content\" android:layout_margin=\"10dp\" android:orientation=\"horizontal\"> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:layout_marginLeft=\"5dp\" android:text=\"ICON 1\" android:textSize=\"12sp\" android:textStyle=\"bold\" /> <info.androidhive.fontawesome.FontTextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:text=\"@string/fa_user_alt_slash_solid\" android:textColor=\"#f88\" android:textSize=\"25sp\" app:solid_icon=\"true\" /> </LinearLayout> <LinearLayout android:layout_width=\"match_parent\" android:layout_height=\"wrap_content\" android:layout_margin=\"10dp\" android:orientation=\"horizontal\"> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:layout_marginLeft=\"5dp\" android:text=\"ICON 2\" android:textSize=\"12sp\" android:textStyle=\"bold\" /> <info.androidhive.fontawesome.FontTextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:padding=\"10dp\" android:text=\"@string/fa_lock_solid\" android:textColor=\"#ff8\" android:textSize=\"25sp\" app:solid_icon=\"true\" /> </LinearLayout> </LinearLayout>", "e": 3184, "s": 1215, "text": null }, { "code": null, "e": 3354, "s": 3184, "text": 
"In the text property, we just put the string value of the icon. that value you can get from font awesome. Here is the link, just follow that link you will get the value." }, { "code": null, "e": 3365, "s": 3354, "text": "Output UI:" }, { "code": null, "e": 3433, "s": 3365, "text": "Just run your app. Voilà now you can see your icon on your screen." }, { "code": null, "e": 3547, "s": 3433, "text": "This approach is basically what we are doing from scratch here no need for any third-party plugin implementation." }, { "code": null, "e": 3555, "s": 3547, "text": "Step 1:" }, { "code": null, "e": 3614, "s": 3555, "text": "Download the font awesome otf file by following this link." }, { "code": null, "e": 3677, "s": 3614, "text": "After downloaded unzip it > go to otf folder > choose any file" }, { "code": null, "e": 3720, "s": 3677, "text": "Rename the file with all lowercase letters" }, { "code": null, "e": 3728, "s": 3720, "text": "Step 2:" }, { "code": null, "e": 3757, "s": 3728, "text": "Create a resource directory," }, { "code": null, "e": 3847, "s": 3757, "text": "In your project folder > the main folder> right-click > select Android Resource Directory" }, { "code": null, "e": 3883, "s": 3847, "text": "from resource values > select font " }, { "code": null, "e": 3926, "s": 3883, "text": "after that, you can see a folder like this" }, { "code": null, "e": 3989, "s": 3926, "text": "Now just copy-paste the fontawsome.otf file in that directory." }, { "code": null, "e": 4105, "s": 3989, "text": "Note: We have renamed the fontawesome file as font_awesome.otf (do not use capital letters while renaming the file)" }, { "code": null, "e": 4113, "s": 4105, "text": "Step 3:" }, { "code": null, "e": 4210, "s": 4113, "text": "To use the icon use the string value represent in font-awesome. You can get them from this link." }, { "code": null, "e": 4257, "s": 4210, "text": "use the following code in your desire XML file" }, { "code": null, "e": 4261, "s": 4257, "text": "XML" }, { "code": "<TextView android:layout_width=\"200dp\" android:layout_height=\"200dp\" android:fontFamily=\"@font/font_awesome\" android:text=\"\\uf004\" android:textSize=\"150sp\" />", "e": 4425, "s": 4261, "text": null }, { "code": null, "e": 4630, "s": 4425, "text": "Just look at the text property and the font-family property in <TextView>. The text value is the string value of the icon and the font family is our fontswesome file which is in the font resource folder. " }, { "code": null, "e": 4920, "s": 4630, "text": "Now that’s it, you can see icons on your screen. This approach may take some amount of memory, so maybe your app size will be a little bit increase if you are concern about app size then do not use it use the first approach. If you stuck at any point feel free to check the github account." }, { "code": null, "e": 4927, "s": 4920, "text": "Picked" }, { "code": null, "e": 4935, "s": 4927, "text": "Android" }, { "code": null, "e": 4943, "s": 4935, "text": "Android" }, { "code": null, "e": 5041, "s": 4943, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 5110, "s": 5041, "text": "How to Add Views Dynamically and Store Data in Arraylist in Android?" }, { "code": null, "e": 5142, "s": 5110, "text": "Android SDK and it's Components" }, { "code": null, "e": 5181, "s": 5142, "text": "Flutter - Custom Bottom Navigation Bar" }, { "code": null, "e": 5230, "s": 5181, "text": "How to Communicate Between Fragments in Android?" 
}, { "code": null, "e": 5272, "s": 5230, "text": "Retrofit with Kotlin Coroutine in Android" }, { "code": null, "e": 5323, "s": 5272, "text": "How to Post Data to API using Retrofit in Android?" }, { "code": null, "e": 5346, "s": 5323, "text": "Flutter - Stack Widget" }, { "code": null, "e": 5382, "s": 5346, "text": "Introduction to Android Development" }, { "code": null, "e": 5426, "s": 5382, "text": "Activity Lifecycle in Android with Demo App" } ]
Python Sets
20 Jun, 2022

In Python, a Set is an unordered collection of data types that is iterable, mutable and has no duplicate elements. The order of elements in a set is undefined, though it may consist of various elements. The major advantage of using a set, as opposed to a list, is that it has a highly optimized method for checking whether a specific element is contained in the set.

Creating a Set

Sets can be created by using the built-in set() function with an iterable object, or by placing a sequence inside curly braces, separated by commas.

Note: A set cannot have mutable elements like a list or dictionary, as the set itself is mutable.

# Python program to demonstrate
# Creation of Set in Python

# Creating a Set
set1 = set()
print("Initial blank Set: ")
print(set1)

# Creating a Set with
# the use of a String
set1 = set("GeeksForGeeks")
print("\nSet with the use of String: ")
print(set1)

# Creating a Set with
# the use of Constructor
# (Using object to Store String)
String = 'GeeksForGeeks'
set1 = set(String)
print("\nSet with the use of an Object: ")
print(set1)

# Creating a Set with
# the use of a List
set1 = set(["Geeks", "For", "Geeks"])
print("\nSet with the use of List: ")
print(set1)

Output:

Initial blank Set: 
set()

Set with the use of String: 
{'e', 'r', 'G', 's', 'F', 'k', 'o'}

Set with the use of an Object: 
{'e', 'r', 'G', 's', 'F', 'k', 'o'}

Set with the use of List: 
{'Geeks', 'For'}

A set contains only unique elements, but at the time of set creation multiple duplicate values can also be passed. The order of elements in a set is undefined and unchangeable. The type of elements in a set need not be the same; mixed data type values can also be passed to the set.

# Creating a Set with
# a List of Numbers
# (Having duplicate values)
set1 = set([1, 2, 4, 4, 3, 3, 3, 6, 5])
print("\nSet with the use of Numbers: ")
print(set1)

# Creating a Set with
# a mixed type of values
# (Having numbers and strings)
set1 = set([1, 2, 'Geeks', 4, 'For', 6, 'Geeks'])
print("\nSet with the use of Mixed Values")
print(set1)

Output:

Set with the use of Numbers: 
{1, 2, 3, 4, 5, 6}

Set with the use of Mixed Values
{1, 2, 'For', 4, 6, 'Geeks'}

Sets can also be created directly with curly braces:

# Another Method to create sets in Python3

# Set containing numbers
my_set = {1, 2, 3}

print(my_set)

# This code is contributed by sarajadhav12052009

Output:

{1, 2, 3}

Adding Elements

Elements can be added to the set by using the built-in add() function. Only one element at a time can be added with add(); loops are used to add multiple elements at a time with the add() method.

Note: Lists cannot be added to a set as elements because lists are not hashable, whereas tuples can be added because tuples are immutable and hence hashable.

# Python program to demonstrate
# Addition of elements in a Set

# Creating a Set
set1 = set()
print("Initial blank Set: ")
print(set1)

# Adding element and tuple to the Set
set1.add(8)
set1.add(9)
set1.add((6, 7))
print("\nSet after Addition of Three elements: ")
print(set1)

# Adding elements to the Set
# using Iterator
for i in range(1, 6):
    set1.add(i)
print("\nSet after Addition of elements from 1-5: ")
print(set1)

Output:

Initial blank Set: 
set()

Set after Addition of Three elements: 
{8, 9, (6, 7)}

Set after Addition of elements from 1-5: 
{1, 2, 3, (6, 7), 4, 5, 8, 9}

For the addition of two or more elements, the update() method is used. The update() method accepts lists, strings, tuples as well as other sets as its arguments. In all of these cases, duplicate elements are avoided.
# Python program to demonstrate
# Addition of elements in a Set

# Addition of elements to the Set
# using Update function
set1 = set([4, 5, (6, 7)])
set1.update([10, 11])
print("\nSet after Addition of elements using Update: ")
print(set1)

Output:

Set after Addition of elements using Update: 
{4, 5, (6, 7), 10, 11}

Accessing Elements

Set items cannot be accessed by referring to an index; since sets are unordered, the items have no index. But you can loop through the set items using a for loop, or ask whether a specified value is present in a set by using the in keyword.

# Python program to demonstrate
# Accessing of elements in a set

# Creating a set
set1 = set(["Geeks", "For", "Geeks"])
print("\nInitial set")
print(set1)

# Accessing element using
# for loop
print("\nElements of set: ")
for i in set1:
    print(i, end=" ")

# Checking the element
# using in keyword
print("Geeks" in set1)

Output:

Initial set
{'For', 'Geeks'}

Elements of set: 
For Geeks True

Removing Elements

Elements can be removed from the set by using the built-in remove() function, but a KeyError arises if the element doesn't exist in the set. To remove elements from a set without a KeyError, use discard(); if the element doesn't exist in the set, the set remains unchanged.

# Python program to demonstrate
# Deletion of elements in a Set

# Creating a Set
set1 = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
print("Initial Set: ")
print(set1)

# Removing elements from Set
# using Remove() method
set1.remove(5)
set1.remove(6)
print("\nSet after Removal of two elements: ")
print(set1)

# Removing elements from Set
# using Discard() method
set1.discard(8)
set1.discard(9)
print("\nSet after Discarding two elements: ")
print(set1)

# Removing elements from Set
# using iterator method
for i in range(1, 5):
    set1.remove(i)
print("\nSet after Removing a range of elements: ")
print(set1)

Output:

Initial Set: 
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

Set after Removal of two elements: 
{1, 2, 3, 4, 7, 8, 9, 10, 11, 12}

Set after Discarding two elements: 
{1, 2, 3, 4, 7, 10, 11, 12}

Set after Removing a range of elements: 
{7, 10, 11, 12}

The pop() function can also be used to remove and return an element from the set, but it removes only the last element of the set.

Note: If the set is unordered, there is no way to determine which element is popped by using the pop() function.

# Python program to demonstrate
# Deletion of elements in a Set

# Creating a Set
set1 = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
print("Initial Set: ")
print(set1)

# Removing element from the
# Set using the pop() method
set1.pop()
print("\nSet after popping an element: ")
print(set1)

Output:

Initial Set: 
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

Set after popping an element: 
{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

To remove all the elements from the set, the clear() function is used.

# Creating a set
set1 = set([1, 2, 3, 4, 5])
print("\n Initial set: ")
print(set1)

# Removing all the elements from
# Set using clear() method
set1.clear()
print("\nSet after clearing all the elements: ")
print(set1)

Output:

 Initial set: 
{1, 2, 3, 4, 5}

Set after clearing all the elements: 
set()

Frozen Sets

Frozen sets in Python are immutable objects that only support methods and operators that produce a result without affecting the frozen set or sets to which they are applied. While elements of a set can be modified at any time, elements of a frozen set remain the same after creation.

If no parameters are passed, frozenset() returns an empty frozenset.
# Python program to demonstrate
# working of a FrozenSet

# Creating a Set
String = ('G', 'e', 'e', 'k', 's', 'F', 'o', 'r')

Fset1 = frozenset(String)
print("The FrozenSet is: ")
print(Fset1)

# To print Empty Frozen Set
# No parameter is passed
print("\nEmpty FrozenSet: ")
print(frozenset())

Output:

The FrozenSet is: 
frozenset({'o', 'G', 'e', 's', 'r', 'F', 'k'})

Empty FrozenSet: 
frozenset()

Typecasting Objects into Sets

# Typecasting Objects in Python3 into sets

# Typecasting list into set
my_list = [1, 2, 3, 3, 4, 5, 5, 6, 2]
my_set = set(my_list)
print("my_list as a set: ", my_set)

# Typecasting string into set
my_str = "GeeksforGeeks"
my_set1 = set(my_str)
print("my_str as a set: ", my_set1)

# Typecasting dictionary into set
my_dict = {1: "One", 2: "Two", 3: "Three"}
my_set2 = set(my_dict)
print("my_dict as a set: ", my_set2)

# This code is contributed by sarajadhav12052009

Output:

my_list as a set:  {1, 2, 3, 4, 5, 6}
my_str as a set:  {'f', 'G', 'r', 'o', 's', 'k', 'e'}
my_dict as a set:  {1, 2, 3}
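One thing the examples above do not show is set algebra. As a brief sketch, not part of the original examples, Python sets also support union, intersection, difference and symmetric difference through operators:

# Minimal sketch of standard set operations
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

print(a | b)   # union -> {1, 2, 3, 4, 5, 6}
print(a & b)   # intersection -> {3, 4}
print(a - b)   # difference -> {1, 2}
print(a ^ b)   # symmetric difference -> {1, 2, 5, 6}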
[ { "code": null, "e": 52, "s": 24, "text": "\n20 Jun, 2022" }, { "code": null, "e": 418, "s": 52, "text": "In Python, a Set is an unordered collection of data types that is iterable, mutable and has no duplicate elements. The order of elements in a set is undefined though it may consist of various elements. The major advantage of using a set, as opposed to a list, is that it has a highly optimized method for checking whether a specific element is contained in the set." }, { "code": null, "e": 582, "s": 418, "text": "Sets can be created by using the built-in set() function with an iterable object or a sequence by placing the sequence inside curly braces, separated by a ‘comma’." }, { "code": null, "e": 670, "s": 582, "text": "Note: A set cannot have mutable elements like a list or dictionary, as it is mutable. " }, { "code": null, "e": 678, "s": 670, "text": "Python3" }, { "code": "# Python program to demonstrate# Creation of Set in Python # Creating a Setset1 = set()print(\"Initial blank Set: \")print(set1) # Creating a Set with# the use of a Stringset1 = set(\"GeeksForGeeks\")print(\"\\nSet with the use of String: \")print(set1) # Creating a Set with# the use of Constructor# (Using object to Store String)String = 'GeeksForGeeks'set1 = set(String)print(\"\\nSet with the use of an Object: \" )print(set1) # Creating a Set with# the use of a Listset1 = set([\"Geeks\", \"For\", \"Geeks\"])print(\"\\nSet with the use of List: \")print(set1)", "e": 1225, "s": 678, "text": null }, { "code": null, "e": 1431, "s": 1225, "text": "Initial blank Set: \nset()\n\nSet with the use of String: \n{'e', 'r', 'G', 's', 'F', 'k', 'o'}\n\nSet with the use of an Object: \n{'e', 'r', 'G', 's', 'F', 'k', 'o'}\n\nSet with the use of List: \n{'Geeks', 'For'}" }, { "code": null, "e": 1721, "s": 1431, "text": "A set contains only unique elements but at the time of set creation, multiple duplicate values can also be passed. Order of elements in a set is undefined and is unchangeable. Type of elements in a set need not be the same, various mixed-up data type values can also be passed to the set. " }, { "code": null, "e": 1729, "s": 1721, "text": "Python3" }, { "code": "# Creating a Set with# a List of Numbers# (Having duplicate values)set1 = set([1, 2, 4, 4, 3, 3, 3, 6, 5])print(\"\\nSet with the use of Numbers: \")print(set1) # Creating a Set with# a mixed type of values# (Having numbers and strings)set1 = set([1, 2, 'Geeks', 4, 'For', 6, 'Geeks'])print(\"\\nSet with the use of Mixed Values\")print(set1)", "e": 2066, "s": 1729, "text": null }, { "code": null, "e": 2178, "s": 2066, "text": "Set with the use of Numbers: \n{1, 2, 3, 4, 5, 6}\n\nSet with the use of Mixed Values\n{1, 2, 'For', 4, 6, 'Geeks'}" }, { "code": null, "e": 2186, "s": 2178, "text": "Python3" }, { "code": "# Another Method to create sets in Python3 # Set containing numbersmy_set = {1, 2, 3} print(my_set) # This code is contributed by sarajadhav12052009", "e": 2335, "s": 2186, "text": null }, { "code": null, "e": 2345, "s": 2335, "text": "{1, 2, 3}" }, { "code": null, "e": 2570, "s": 2345, "text": "Elements can be added to the Set by using the built-in add() function. Only one element at a time can be added to the set by using add() method, loops are used to add multiple elements at a time with the use of add() method." }, { "code": null, "e": 2728, "s": 2570, "text": "Note: Lists cannot be added to a set as elements because Lists are not hashable whereas Tuples can be added because tuples are immutable and hence Hashable. 
" }, { "code": null, "e": 2736, "s": 2728, "text": "Python3" }, { "code": "# Python program to demonstrate# Addition of elements in a Set # Creating a Setset1 = set()print(\"Initial blank Set: \")print(set1) # Adding element and tuple to the Setset1.add(8)set1.add(9)set1.add((6, 7))print(\"\\nSet after Addition of Three elements: \")print(set1) # Adding elements to the Set# using Iteratorfor i in range(1, 6): set1.add(i)print(\"\\nSet after Addition of elements from 1-5: \")print(set1)", "e": 3147, "s": 2736, "text": null }, { "code": null, "e": 3301, "s": 3147, "text": "Initial blank Set: \nset()\n\nSet after Addition of Three elements: \n{8, 9, (6, 7)}\n\nSet after Addition of elements from 1-5: \n{1, 2, 3, (6, 7), 4, 5, 8, 9}" }, { "code": null, "e": 3513, "s": 3301, "text": "For the addition of two or more elements Update() method is used. The update() method accepts lists, strings, tuples as well as other sets as its arguments. In all of these cases, duplicate elements are avoided." }, { "code": null, "e": 3521, "s": 3513, "text": "Python3" }, { "code": "# Python program to demonstrate# Addition of elements in a Set # Addition of elements to the Set# using Update functionset1 = set([4, 5, (6, 7)])set1.update([10, 11])print(\"\\nSet after Addition of elements using Update: \")print(set1)", "e": 3755, "s": 3521, "text": null }, { "code": null, "e": 3824, "s": 3755, "text": "Set after Addition of elements using Update: \n{4, 5, (6, 7), 10, 11}" }, { "code": null, "e": 4059, "s": 3824, "text": "Set items cannot be accessed by referring to an index, since sets are unordered the items has no index. But you can loop through the set items using a for loop, or ask if a specified value is present in a set, by using the in keyword." }, { "code": null, "e": 4067, "s": 4059, "text": "Python3" }, { "code": "# Python program to demonstrate# Accessing of elements in a set # Creating a setset1 = set([\"Geeks\", \"For\", \"Geeks\"])print(\"\\nInitial set\")print(set1) # Accessing element using# for loopprint(\"\\nElements of set: \")for i in set1: print(i, end=\" \") # Checking the element# using in keywordprint(\"Geeks\" in set1)", "e": 4380, "s": 4067, "text": null }, { "code": null, "e": 4443, "s": 4380, "text": "Initial set\n{'For', 'Geeks'}\n\nElements of set: \nFor Geeks True" }, { "code": null, "e": 4709, "s": 4443, "text": "Elements can be removed from the Set by using the built-in remove() function but a KeyError arises if the element doesn’t exist in the set. To remove elements from a set without KeyError, use discard(), if the element doesn’t exist in the set, it remains unchanged." 
}, { "code": null, "e": 4717, "s": 4709, "text": "Python3" }, { "code": "# Python program to demonstrate# Deletion of elements in a Set # Creating a Setset1 = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])print(\"Initial Set: \")print(set1) # Removing elements from Set# using Remove() methodset1.remove(5)set1.remove(6)print(\"\\nSet after Removal of two elements: \")print(set1) # Removing elements from Set# using Discard() methodset1.discard(8)set1.discard(9)print(\"\\nSet after Discarding two elements: \")print(set1) # Removing elements from Set# using iterator methodfor i in range(1, 5): set1.remove(i)print(\"\\nSet after Removing a range of elements: \")print(set1)", "e": 5322, "s": 4717, "text": null }, { "code": null, "e": 5570, "s": 5322, "text": "Initial Set: \n{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}\n\nSet after Removal of two elements: \n{1, 2, 3, 4, 7, 8, 9, 10, 11, 12}\n\nSet after Discarding two elements: \n{1, 2, 3, 4, 7, 10, 11, 12}\n\nSet after Removing a range of elements: \n{7, 10, 11, 12}" }, { "code": null, "e": 5698, "s": 5570, "text": "Pop() function can also be used to remove and return an element from the set, but it removes only the last element of the set. " }, { "code": null, "e": 5820, "s": 5698, "text": "Note: If the set is unordered then there’s no such way to determine which element is popped by using the pop() function. " }, { "code": null, "e": 5828, "s": 5820, "text": "Python3" }, { "code": "# Python program to demonstrate# Deletion of elements in a Set # Creating a Setset1 = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])print(\"Initial Set: \")print(set1) # Removing element from the# Set using the pop() methodset1.pop()print(\"\\nSet after popping an element: \")print(set1)", "e": 6121, "s": 5828, "text": null }, { "code": null, "e": 6244, "s": 6121, "text": "Initial Set: \n{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}\n\nSet after popping an element: \n{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}" }, { "code": null, "e": 6312, "s": 6244, "text": "To remove all the elements from the set, clear() function is used. " }, { "code": null, "e": 6320, "s": 6312, "text": "Python3" }, { "code": "#Creating a setset1 = set([1,2,3,4,5])print(\"\\n Initial set: \")print(set1) # Removing all the elements from# Set using clear() methodset1.clear()print(\"\\nSet after clearing all the elements: \")print(set1)", "e": 6526, "s": 6320, "text": null }, { "code": null, "e": 6602, "s": 6526, "text": " Initial set: \n{1, 2, 3, 4, 5}\n\nSet after clearing all the elements: \nset()" }, { "code": null, "e": 6889, "s": 6602, "text": "Frozen sets in Python are immutable objects that only support methods and operators that produce a result without affecting the frozen set or sets to which they are applied. While elements of a set can be modified at any time, elements of the frozen set remain the same after creation. " }, { "code": null, "e": 6951, "s": 6889, "text": "If no parameters are passed, it returns an empty frozenset. 
" }, { "code": null, "e": 6959, "s": 6951, "text": "Python3" }, { "code": "# Python program to demonstrate# working of a FrozenSet # Creating a SetString = ('G', 'e', 'e', 'k', 's', 'F', 'o', 'r') Fset1 = frozenset(String)print(\"The FrozenSet is: \")print(Fset1) # To print Empty Frozen Set# No parameter is passedprint(\"\\nEmpty FrozenSet: \")print(frozenset())", "e": 7244, "s": 6959, "text": null }, { "code": null, "e": 7341, "s": 7244, "text": "The FrozenSet is: \nfrozenset({'o', 'G', 'e', 's', 'r', 'F', 'k'})\n\nEmpty FrozenSet: \nfrozenset()" }, { "code": null, "e": 7349, "s": 7341, "text": "Python3" }, { "code": "# Typecasting Objects in Python3 into sets # Typecasting list into setmy_list = [1, 2, 3, 3, 4, 5, 5, 6, 2]my_set = set(my_list)print(\"my_list as a set: \", my_set) # Typecasting string into setmy_str = \"GeeksforGeeks\"my_set1 = set(my_str)print(\"my_str as a set: \", my_set1) # Typecasting dictionary into setmy_dict = {1: \"One\", 2: \"Two\", 3: \"Three\"}my_set2 = set(my_dict)print(\"my_dict as a set: \", my_set2) # This code is contributed by sarajadhav12052009", "e": 7806, "s": 7349, "text": null }, { "code": null, "e": 7927, "s": 7806, "text": "my_list as a set: {1, 2, 3, 4, 5, 6}\nmy_str as a set: {'f', 'G', 'r', 'o', 's', 'k', 'e'}\nmy_dict as a set: {1, 2, 3}" }, { "code": null, "e": 7983, "s": 7927, "text": "Program to accept the strings which contains all vowels" }, { "code": null, "e": 8048, "s": 7983, "text": "Python program to find common elements in three lists using sets" }, { "code": null, "e": 8096, "s": 8048, "text": "Find missing and additional values in two lists" }, { "code": null, "e": 8134, "s": 8096, "text": "Pairs of complete strings in two sets" }, { "code": null, "e": 8184, "s": 8134, "text": "Check whether a given string is Heterogram or not" }, { "code": null, "e": 8213, "s": 8184, "text": "Maximum and Minimum in a Set" }, { "code": null, "e": 8235, "s": 8213, "text": "Remove items from Set" }, { "code": null, "e": 8302, "s": 8235, "text": "Python Set difference to find lost element from a duplicated array" }, { "code": null, "e": 8365, "s": 8302, "text": "Minimum number of subsets with distinct elements using Counter" }, { "code": null, "e": 8417, "s": 8365, "text": "Check if two lists have at-least one element common" }, { "code": null, "e": 8478, "s": 8417, "text": "Program to count number of vowels using sets in given string" }, { "code": null, "e": 8507, "s": 8478, "text": "Difference between two lists" }, { "code": null, "e": 8549, "s": 8507, "text": "Python set to check if string is panagram" }, { "code": null, "e": 8630, "s": 8549, "text": "Python set operations (union, intersection, difference and symmetric difference)" }, { "code": null, "e": 8685, "s": 8630, "text": "Concatenated string with uncommon characters in Python" }, { "code": null, "e": 8760, "s": 8685, "text": "Python dictionary, set and counter to check if frequencies can become same" }, { "code": null, "e": 8799, "s": 8760, "text": "Using Set() in Python Pangram Checking" }, { "code": null, "e": 8846, "s": 8799, "text": "Set update() in Python to do union of n arrays" }, { "code": null, "e": 8879, "s": 8846, "text": "Output of Python programs – Sets" }, { "code": null, "e": 8910, "s": 8879, "text": "Recent Articles on Python Sets" }, { "code": null, "e": 8945, "s": 8910, "text": "Multiple Choice Questions – Python" }, { "code": null, "e": 8977, "s": 8945, "text": "All articles in Python Category" }, { "code": null, "e": 8993, "s": 8977, "text": "nikhilaggarwal3" }, { 
"code": null, "e": 9006, "s": 8993, "text": "Akanksha_Rai" }, { "code": null, "e": 9022, "s": 9006, "text": "simranarora5sos" }, { "code": null, "e": 9034, "s": 9022, "text": "anikakapoor" }, { "code": null, "e": 9053, "s": 9034, "text": "aravindprasadr2021" }, { "code": null, "e": 9068, "s": 9053, "text": "kalicharan2779" }, { "code": null, "e": 9087, "s": 9068, "text": "sarajadhav12052009" }, { "code": null, "e": 9113, "s": 9087, "text": "Python-Built-in-functions" }, { "code": null, "e": 9124, "s": 9113, "text": "python-set" }, { "code": null, "e": 9131, "s": 9124, "text": "Python" }, { "code": null, "e": 9142, "s": 9131, "text": "python-set" }, { "code": null, "e": 9240, "s": 9142, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 9268, "s": 9240, "text": "Read JSON file using Python" }, { "code": null, "e": 9318, "s": 9268, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 9340, "s": 9318, "text": "Python map() function" }, { "code": null, "e": 9384, "s": 9340, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 9426, "s": 9384, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 9448, "s": 9426, "text": "Enumerate() in Python" }, { "code": null, "e": 9483, "s": 9448, "text": "Read a file line by line in Python" }, { "code": null, "e": 9509, "s": 9483, "text": "Python String | replace()" }, { "code": null, "e": 9541, "s": 9509, "text": "How to Install PIP on Windows ?" } ]
JavaScript | Symbol.for() function
29 Oct, 2021

Symbol.for() is an inbuilt function in JavaScript which searches for the given symbol in a runtime-wide symbol registry. If the symbol is found, it returns that same symbol; otherwise it creates a new symbol with the given key in the global symbol registry and returns it.

Syntax:

Symbol.for(key);

Here, key is the key of the symbol to be searched for in the runtime-wide symbol registry.

Parameters: This function accepts a parameter "key" which is the key of the symbol and is also used as the description of the symbol.

Return value: This function returns the existing symbol if one with the given key is found in the runtime-wide symbol registry; otherwise a new symbol is created with that key and returned.

JavaScript code to show the working of this function:

Example 1:

<script>
    // Some symbols are created
    const symbol1 = Symbol.for('Geeks');
    const symbol2 = Symbol.for(123);
    const symbol3 = Symbol.for("gfg");
    const symbol4 = Symbol.for('789');

    // Getting the same symbols if found
    // in the global symbol registry,
    // otherwise a new one is created and returned
    console.log(symbol1);
    console.log(symbol2);
    console.log(symbol3);
    console.log(symbol4);
</script>

Output:

> Symbol(Geeks)
> Symbol(123)
> Symbol(gfg)
> Symbol(789)

Example 2:

<script>
    // Some symbols are created
    const symbol1 = Symbol.for('a', 'b', 'c');
    const symbol2 = Symbol.for(1, 2, 3);
    const symbol3 = Symbol.for(1 + 2);
    const symbol4 = Symbol.for("Geeks" + "for" + "Geeks");

    // Getting the same symbols if found
    // in the global symbol registry,
    // otherwise a new one is created and returned
    console.log(symbol1);
    console.log(symbol2);
    console.log(symbol3);
    console.log(symbol4);
</script>

Output:

> Symbol(a)
> Symbol(1)
> Symbol(3)
> Symbol(GeeksforGeeks)

In the code above, only a single key is expected: if multiple arguments are passed, the function accepts the first argument as the key and discards the rest, and if an arithmetic expression is used in place of the key, the function uses the result of that expression as the key.

Supported Browsers:

Google Chrome 40 and above
Edge 12 and above
Firefox 36 and above
Opera 27 and above
Safari 9 and above

Reference: https://devdocs.io/javascript/global_objects/symbol/for
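To make the registry behaviour concrete, here is a short sketch, not from the original examples, contrasting Symbol() with Symbol.for(), plus Symbol.keyFor() to read a registered symbol's key back:

<script>
    // Symbol() creates a fresh symbol on every call,
    // even for the same description
    const local1 = Symbol('id');
    const local2 = Symbol('id');
    console.log(local1 === local2);      // false

    // Symbol.for() reuses the registry entry for the same key
    const shared1 = Symbol.for('id');
    const shared2 = Symbol.for('id');
    console.log(shared1 === shared2);    // true

    // Symbol.keyFor() returns the registry key of a shared symbol
    console.log(Symbol.keyFor(shared1)); // "id"
    console.log(Symbol.keyFor(local1));  // undefined, not in the registry
</script>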
[ { "code": null, "e": 28, "s": 0, "text": "\n29 Oct, 2021" }, { "code": null, "e": 331, "s": 28, "text": "The Symbol.for() is an inbuilt function in JavaScript which is used to search for the given symbol into a runtime-wide symbol registry and if found then it returns the same symbol otherwise it creates a new symbol with the same name of the given symbol into the global symbol registry and returns them." }, { "code": null, "e": 339, "s": 331, "text": "Syntax:" }, { "code": null, "e": 357, "s": 339, "text": "Symbol.for(key);\n" }, { "code": null, "e": 448, "s": 357, "text": "Here “Symbol” is the symbol which is to be searched into the runtime-wide symbol registry." }, { "code": null, "e": 575, "s": 448, "text": "Parameters: This function accepts a parameter “key” which is the key to the symbol and used for the description of the symbol." }, { "code": null, "e": 760, "s": 575, "text": "Return value: This function returns the given symbol is found in the runtime-wide symbol registry otherwise a new symbol is created with the same name as the given symbol and returned." }, { "code": null, "e": 824, "s": 760, "text": "JavaScript code to show the working of this function:Example-1:" }, { "code": "<script > // Some symbols are created const symbol1 = Symbol.for('Geeks'); const symbol2 = Symbol.for(123); const symbol3 = Symbol.for(\"gfg\"); const symbol4 = Symbol.for('789'); // Getting the same symbols if found // in the global symbol registry // otherwise a new created and returned console.log(symbol1); console.log(symbol2); console.log(symbol3); console.log(symbol4);</script>", "e": 1248, "s": 824, "text": null }, { "code": null, "e": 1256, "s": 1248, "text": "Output:" }, { "code": null, "e": 1315, "s": 1256, "text": "> Symbol(Geeks)\n> Symbol(123)\n> Symbol(gfg)\n> Symbol(789)\n" }, { "code": null, "e": 1326, "s": 1315, "text": "Example-2:" }, { "code": "<script> // Some symbols are created const symbol1 = Symbol.for('a', 'b', 'c'); const symbol2 = Symbol.for(1, 2, 3); const symbol3 = Symbol.for(1 + 2); const symbol4 = Symbol.for(\"Geeks\" + \"for\" + \"Geeks\"); // Getting the same symbols if found // in the global symbol registry // otherwise a new created and returned console.log(symbol1); console.log(symbol2); console.log(symbol3); console.log(symbol4); </script>", "e": 1780, "s": 1326, "text": null }, { "code": null, "e": 1788, "s": 1780, "text": "Output:" }, { "code": null, "e": 1849, "s": 1788, "text": "> Symbol(a)\n> Symbol(1)\n> Symbol(3)\n> Symbol(GeeksforGeeks)\n" }, { "code": null, "e": 2114, "s": 1849, "text": "In the above code, the key should not be multiple otherwise it accepts the first element as the key and discard the remaining elements and if some arithmetic operator is used in place of the key then this function considers that key as the result of the operation." 
}, { "code": null, "e": 2134, "s": 2114, "text": "Supported Browsers:" }, { "code": null, "e": 2161, "s": 2134, "text": "Google Chrome 40 and above" }, { "code": null, "e": 2179, "s": 2161, "text": "Edge 12 and above" }, { "code": null, "e": 2200, "s": 2179, "text": "Firefox 36 and above" }, { "code": null, "e": 2219, "s": 2200, "text": "Opera 27 and above" }, { "code": null, "e": 2238, "s": 2219, "text": "Safari 9 and above" }, { "code": null, "e": 2305, "s": 2238, "text": "Reference: https://devdocs.io/javascript/global_objects/symbol/for" }, { "code": null, "e": 2317, "s": 2305, "text": "ysachin2314" }, { "code": null, "e": 2335, "s": 2317, "text": "JavaScript-Symbol" }, { "code": null, "e": 2346, "s": 2335, "text": "JavaScript" }, { "code": null, "e": 2363, "s": 2346, "text": "Web Technologies" }, { "code": null, "e": 2461, "s": 2363, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2522, "s": 2461, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2594, "s": 2522, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 2634, "s": 2594, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 2675, "s": 2634, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 2727, "s": 2675, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 2760, "s": 2727, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 2822, "s": 2760, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 2883, "s": 2822, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2933, "s": 2883, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
Merry Christmas (Program for Christmas Tree in C)
25 Dec, 2017

Since Christmas is right at the door, it's time to celebrate it in the programmer's way. Let's build a decorative Christmas tree in C.

To print a Christmas tree, we print pyramids of various sizes, one beneath the other. For the decoration, a random character is printed at each position. Height and randomness can be adjusted. This is repeated frame after frame to give the illusion of a true event.

Example: (the animated output is not reproduced in this copy)

Let's see the code.

// C program to print a Christmas tree
// It is recommended to try it with a desktop
// compiler like CodeBlocks.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define RefRate 40000
#define randomness 5 // high means less random

// Clear the shell
void clrscr()
{
    system("@cls||clear");
}

// Print a random character giving preference
// to *
void printRandLeaf()
{
    char leaftypes[5] = { '.', '*', '+', 'o', 'O' };
    int temp = rand() % randomness;

    // Giving preference to *
    if (temp == 1)
        printf("%c ", leaftypes[rand() % 5]);
    else
        printf("%c ", leaftypes[1]);
}

void triangle(int f, int n, int toth)
{
    int i, j, k = 2 * toth - 2;
    for (i = 0; i < f - 1; i++)
        k--;

    // number of rows
    for (i = f - 1; i < n; i++) {

        // space handler
        for (j = 0; j < k; j++)
            printf(" ");

        // decrementing k after each loop
        k = k - 1;

        // number of columns, printing stars
        for (j = 0; j <= i; j++)
            printRandLeaf();

        printf("\n");
    }
}

// Prints multiple triangles
void printTree(int h)
{
    int start = 1, stop = 0, diff = 3;
    while (stop < h + 1) {
        stop = start + diff;
        triangle(start, stop, h);
        diff++;
        start = stop - 2;
    }
}

// Prints bottom part.
void printLog(int n)
{
    int i, j, k = 2 * n - 4;
    for (i = 1; i <= 6; i++) {

        // space handler
        for (j = 0; j < k; j++)
            printf(" ");

        for (j = 1; j <= 6; j++)
            printf("#");

        printf("\n");
    }
}

// Driver code
int main()
{
    srand(time(NULL));
    int ht = 6;

    printf("\n*********MERRY CHRISTMAS*********\n\n");

    // refresh loop
    while (1) {
        clrscr();
        printTree(ht);
        printLog(ht);
        usleep(RefRate);
    }
    return 0;
}
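A possible way to build and run the program on Linux (the file name tree.c is just an assumption; usleep() requires a POSIX system):

gcc tree.c -o tree
./tree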
[ { "code": null, "e": 54, "s": 26, "text": "\n25 Dec, 2017" }, { "code": null, "e": 187, "s": 54, "text": "Since Christmas is right at the door, its time to celebrate it in the programmer’s way. Lets build a decorative Christmas tree in C." }, { "code": null, "e": 351, "s": 187, "text": "To print a Christmas tree, we are printing pyramids of various sizes just one beneath the other.For the decoration, a random character is printed at each position." }, { "code": null, "e": 468, "s": 351, "text": "Height and randomness can be adjusted. This is been repeated frame after frame to give the illusion of a true event." }, { "code": null, "e": 477, "s": 468, "text": "Example:" }, { "code": null, "e": 496, "s": 477, "text": "Lets see the code." }, { "code": "// C program to print a Christmas tree// It is recommended is try it with a desktop // compiler like CodeBlocks.#include <stdio.h>#include <stdlib.h>#include <time.h>#include <unistd.h> #define RefRate 40000#define randomness 5 // high means less random // Clear the shellvoid clrscr(){ system(\"@cls||clear\");} // Print a random character giving preference // to *void printRandLeaf(){ char leaftypes[5] = { '.', '*', '+', 'o', 'O' }; int temp = rand() % randomness; // Giving preference to * if (temp == 1) printf(\"%c \", leaftypes[rand() % 5]); else printf(\"%c \", leaftypes[1]);} void triangle(int f, int n, int toth){ int i, j, k = 2 * toth - 2; for (i = 0; i < f - 1; i++) k--; // number of rows for (i = f - 1; i < n; i++) { // space handler for (j = 0; j < k; j++) printf(\" \"); // decrementing k after each loop k = k - 1; // number of columns, printing stars for (j = 0; j <= i; j++) printRandLeaf(); printf(\"\\n\"); }} // Prints multiple trianglesvoid printTree(int h){ int start = 1, stop = 0, diff = 3; while (stop < h + 1) { stop = start + diff; triangle(start, stop, h); diff++; start = stop - 2; }} // Prints bottom part.void printLog(int n){ int i, j, k = 2 * n - 4; for (i = 1; i <= 6; i++) { // space handler for (j = 0; j < k; j++) printf(\" \"); for (j = 1; j <= 6; j++) printf(\"#\"); printf(\"\\n\"); }} // Driver codeint main(){ srand(time(NULL)); int ht = 6; printf(\"\\n*********MERRY CHRISTMAS*********\\n\\n\"); // refresh loop while (1) { clrscr(); printTree(ht); printLog(ht); usleep(RefRate); } return 0;}", "e": 2323, "s": 496, "text": null }, { "code": null, "e": 2340, "s": 2323, "text": "pattern-printing" }, { "code": null, "e": 2351, "s": 2340, "text": "C Language" }, { "code": null, "e": 2370, "s": 2351, "text": "School Programming" }, { "code": null, "e": 2387, "s": 2370, "text": "pattern-printing" } ]
ISRO | ISRO CS 2018 | Question 64
20 Nov, 2018

Given √(224)r = (13)r, the value of the radix r is
(A) 10
(B) 8
(C) 6
(D) 5

Answer: (D)

Explanation:

√(224)r = (13)r
Squaring both sides: (224)r = ((13)r)^2
2r^2 + 2r + 4 = r^2 + 6r + 9
r^2 - 4r - 5 = 0
(r - 5)(r + 1) = 0
r = 5   // since a radix can't be -1

So, option (D) is correct.
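As a quick check, not part of the original solution, substituting r = 5 back in:

(224) in base 5 = 2*25 + 2*5 + 4 = 64
(13) in base 5  = 1*5 + 3 = 8
√64 = 8, so the equality holds for r = 5.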
[ { "code": null, "e": 28, "s": 0, "text": "\n20 Nov, 2018" }, { "code": null, "e": 114, "s": 28, "text": "Given √224r = 13r the value of radix r is(A) 10(B) 8(C) 6(D) 5Answer: (D)Explanation:" }, { "code": null, "e": 248, "s": 114, "text": "√224r = 13r\ntaking square both sides\n\n2r2 + 2r + 4 = r2 + 6r + 9\n= r2 - 4r - 5 = 0\n= (r - 5)(r + 1) = 0\nr = 5 // since it can't be -1" }, { "code": null, "e": 296, "s": 248, "text": "So, option (D) is correct.Quiz of this Question" }, { "code": null, "e": 301, "s": 296, "text": "ISRO" }, { "code": null, "e": 399, "s": 301, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 433, "s": 399, "text": "ISRO | ISRO CS 2011 | Question 56" }, { "code": null, "e": 467, "s": 433, "text": "ISRO | ISRO CS 2018 | Question 44" }, { "code": null, "e": 501, "s": 467, "text": "ISRO | ISRO CS 2013 | Question 21" }, { "code": null, "e": 535, "s": 501, "text": "ISRO | ISRO CS 2018 | Question 74" }, { "code": null, "e": 569, "s": 535, "text": "ISRO | ISRO CS 2007 | Question 16" }, { "code": null, "e": 603, "s": 569, "text": "ISRO | ISRO CS 2009 | Question 30" }, { "code": null, "e": 643, "s": 603, "text": "ISRO | ISRO CS 2017 - May | Question 14" }, { "code": null, "e": 677, "s": 643, "text": "ISRO | ISRO CS 2014 | Question 64" }, { "code": null, "e": 711, "s": 677, "text": "ISRO | ISRO CS 2008 | Question 68" } ]
AutoField – Django Models
12 Feb, 2020

According to the documentation, an AutoField is an IntegerField that automatically increments according to available IDs. One usually won't need to use this directly because a primary key field will automatically be added to your model if you don't specify otherwise.

By default, Django gives each model the following field:

id = models.AutoField(primary_key=True, **options)

This is an auto-incrementing primary key. Even if the model doesn't have any field, a default field named id will be created.

Illustration of AutoField using an example. Consider a project named geeksforgeeks having an app named geeks.

Refer to the following articles to check how to create a project and an app in Django:

How to Create a Basic Project using MVT in Django?
How to Create an App in Django?

Enter the following code into the models.py file of the geeks app.

from django.db import models
from django.db.models import Model

# Create your models here.
class GeeksModel(Model):
    pass

Add the geeks app to INSTALLED_APPS:

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'geeks',
]

Now run the makemigrations command from the terminal:

python manage.py makemigrations

A new folder named migrations will be created in the geeks directory with a file named 0001_initial.py:

# Generated by Django 2.2.5 on 2019-09-25 06:00

from django.db import migrations, models

class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name ='GeeksModel',
            fields =[
                ('id',
                 models.AutoField(auto_created = True,
                                  primary_key = True,
                                  serialize = False,
                                  verbose_name ='ID'
                                  )),
            ],
        ),
    ]

Thus, an id AutoField that auto-increments on every instance of the model is created by default when you run makemigrations on the project. It is the primary key of the table created for the model named GeeksModel. If we create objects of this empty model from the admin server, we can see the id field auto-incrementing on every instance created.

Field options are the arguments given to each field for applying some constraint or imparting a particular characteristic to a particular field. For example, adding the argument primary_key=True to an AutoField will make it the primary key for that table in the relational database.
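As a small sketch, not from the original article, the implicit id field can also be declared explicitly; the field name geek_id is a hypothetical example:

from django.db import models

class GeeksModel(models.Model):
    # Declaring the auto-incrementing primary key by hand;
    # equivalent to the id field Django adds automatically.
    geek_id = models.AutoField(primary_key=True)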
[ { "code": null, "e": 28, "s": 0, "text": "\n12 Feb, 2020" }, { "code": null, "e": 292, "s": 28, "text": "According to documentation, An AutoField is an IntegerField that automatically increments according to available IDs. One usually won’t need to use this directly because a primary key field will automatically be added to your model if you don’t specify otherwise." }, { "code": null, "e": 349, "s": 292, "text": "By default, Django gives each model the following field:" }, { "code": null, "e": 400, "s": 349, "text": "id = models.AutoField(primary_key=True, **options)" }, { "code": null, "e": 529, "s": 400, "text": "This is an auto-incrementing primary key. Even if the model doesn’t have any field, a default field will be created named as id." }, { "code": null, "e": 639, "s": 529, "text": "Illustration of AutoField using an Example. Consider a project named geeksforgeeks having an app named geeks." }, { "code": null, "e": 726, "s": 639, "text": "Refer to the following articles to check how to create a project and an app in Django." }, { "code": null, "e": 777, "s": 726, "text": "How to Create a Basic Project using MVT in Django?" }, { "code": null, "e": 810, "s": 777, "text": "How to Create an App in Django ?" }, { "code": null, "e": 869, "s": 810, "text": "Enter the following code into models.py file of geeks app." }, { "code": "from django.db import modelsfrom django.db.models import Model# Create your models here. class GeeksModel(Model): pass", "e": 992, "s": 869, "text": null }, { "code": null, "e": 1028, "s": 992, "text": "Add the geeks app to INSTALLED_APPS" }, { "code": "# Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'geeks',]", "e": 1266, "s": 1028, "text": null }, { "code": null, "e": 1324, "s": 1266, "text": "Now when we run makemigrations command from the terminal," }, { "code": null, "e": 1356, "s": 1324, "text": "Python manage.py makemigrations" }, { "code": null, "e": 1456, "s": 1356, "text": "A new folder named migrations would be created in geeks directory with a file named 0001_initial.py" }, { "code": "# Generated by Django 2.2.5 on 2019-09-25 06:00 from django.db import migrations, models class Migration(migrations.Migration): initial = True dependencies = [ ] operations = [ migrations.CreateModel( name ='GeeksModel', fields =[ ('id', models.AutoField(auto_created = True, primary_key = True, serialize = False, verbose_name ='ID' )), ], ), ]", "e": 1972, "s": 1456, "text": null }, { "code": null, "e": 2186, "s": 1972, "text": "Thus, an id AutoField that auto increments on every instance of that model is created by default when you run makemigrations on the project. It is a primary key to the table created for the model named GeeksModel." }, { "code": null, "e": 2314, "s": 2186, "text": "If we create objects of this empty model from the admin server. we can see id field autoincrementing on every instance created." }, { "code": null, "e": 2646, "s": 2314, "text": "Field Options are the arguments given to each field for applying some constraint or imparting a particular characteristic to a particular Field. For example, adding an argument primary_key=True to AutoField will make it primary key for that table in relational database.Here are the option and attributes that an Autofield can use." 
}, { "code": null, "e": 2658, "s": 2646, "text": "NaveenArora" }, { "code": null, "e": 2672, "s": 2658, "text": "Django-models" }, { "code": null, "e": 2686, "s": 2672, "text": "Python Django" }, { "code": null, "e": 2693, "s": 2686, "text": "Python" }, { "code": null, "e": 2791, "s": 2693, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2809, "s": 2791, "text": "Python Dictionary" }, { "code": null, "e": 2851, "s": 2809, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 2873, "s": 2851, "text": "Enumerate() in Python" }, { "code": null, "e": 2908, "s": 2873, "text": "Read a file line by line in Python" }, { "code": null, "e": 2934, "s": 2908, "text": "Python String | replace()" }, { "code": null, "e": 2966, "s": 2934, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 2995, "s": 2966, "text": "*args and **kwargs in Python" }, { "code": null, "e": 3022, "s": 2995, "text": "Python Classes and Objects" }, { "code": null, "e": 3052, "s": 3022, "text": "Iterate over a list in Python" } ]
Implementation of Blockchain in Java
11 May, 2022

Blockchain is the backbone technology of the digital cryptocurrency Bitcoin. A blockchain is a list of records, called blocks, that are linked together like a linked list and secured using cryptographic techniques. Each block contains its own digital fingerprint called a hash, the hash of the previous block, a timestamp and the data of the transaction made, making it more secure against any kind of data breach.

Therefore, if the data of one block is changed, its hash will also change. The changed hash will no longer match the copy stored in the next block, invalidating all the hashes of the blocks after it. Recomputing the hashes and comparing them with the stored values therefore allows us to verify the blockchain.

Implementation of the Blockchain: The following are the classes and functions used in the implementation.

Creating Blocks: To create a block, a Block class is implemented. In the class Block:

hash will contain the hash of the block,
previousHash will contain the hash of the previous block,
String data is used to store the data of the block,
long timeStamp is used to store the timestamp of the block (the long data type is used to store the number of milliseconds), and
calculateHash() is used to generate the hash.

Below is the implementation of the Block class:

// Java implementation for creating
// a block in a Blockchain

import java.util.Date;

public class Block {

    // Every block contains
    // a hash, previous hash and
    // data of the transaction made
    public String hash;
    public String previousHash;
    private String data;
    private long timeStamp;

    // Constructor for the block
    public Block(String data, String previousHash)
    {
        this.data = data;
        this.previousHash = previousHash;
        this.timeStamp = new Date().getTime();
        this.hash = calculateHash();
    }

    // Function to calculate the hash
    public String calculateHash()
    {
        // Calling the "crypt" class
        // to calculate the hash
        // by using the previous hash,
        // timestamp and the data
        String calculatedhash
            = crypt.sha256(
                previousHash
                + Long.toString(timeStamp)
                + data);

        return calculatedhash;
    }
}

Generating Hashes: To generate a hash, the SHA-256 algorithm is used. Below is the implementation:

// Java program for Generating Hashes

import java.security.MessageDigest;

public class crypt {

    // Function that takes the string input
    // and returns the hashed string.
    public static String sha256(String input)
    {
        try {
            MessageDigest sha
                = MessageDigest.getInstance("SHA-256");
            int i = 0;

            byte[] hash
                = sha.digest(input.getBytes("UTF-8"));

            // hexHash will contain
            // the Hexadecimal hash
            StringBuffer hexHash = new StringBuffer();

            while (i < hash.length) {
                String hex
                    = Integer.toHexString(0xff & hash[i]);
                if (hex.length() == 1)
                    hexHash.append('0');
                hexHash.append(hex);
                i++;
            }

            return hexHash.toString();
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Storing the blocks: Now, let us store the blocks in an ArrayList of Block type, along with their hash values, by calling the constructor of the Block class.
// Java implementation to store
// blocks in an ArrayList

import java.util.ArrayList;

public class GFG {

    // ArrayList to store the blocks
    public static ArrayList<Block> blockchain
        = new ArrayList<Block>();

    // Driver code
    public static void main(String[] args)
    {
        // Adding the data to the ArrayList
        blockchain.add(new Block("First block", "0"));
        blockchain.add(new Block(
            "Second block",
            blockchain.get(blockchain.size() - 1).hash));
        blockchain.add(new Block(
            "Third block",
            blockchain.get(blockchain.size() - 1).hash));
        blockchain.add(new Block(
            "Fourth block",
            blockchain.get(blockchain.size() - 1).hash));
        blockchain.add(new Block(
            "Fifth block",
            blockchain.get(blockchain.size() - 1).hash));
    }
}

Blockchain Validity: Finally, we need to check the validity of the blockchain by creating a boolean method. This method is implemented in the main class and checks whether each stored hash is equal to the recalculated hash. If all the stored hashes match the calculated hashes, the chain is valid. Below is the implementation:

// Java implementation to check
// validity of the blockchain

// Function to check
// validity of the blockchain
public static Boolean isChainValid()
{
    Block currentBlock;
    Block previousBlock;

    // Iterating through
    // all the blocks
    for (int i = 1; i < blockchain.size(); i++) {

        // Storing the current block
        // and the previous block
        currentBlock = blockchain.get(i);
        previousBlock = blockchain.get(i - 1);

        // Checking if the current hash
        // is equal to the
        // calculated hash or not
        if (!currentBlock.hash.equals(
                currentBlock.calculateHash())) {
            System.out.println("Hashes are not equal");
            return false;
        }

        // Checking if the previous hash
        // is equal to the stored
        // previous hash or not
        if (!previousBlock.hash.equals(
                currentBlock.previousHash)) {
            System.out.println(
                "Previous Hashes are not equal");
            return false;
        }
    }

    // If all the hashes are equal
    // to the calculated hashes,
    // then the blockchain is valid
    return true;
}

Advantages of the Blockchain:

Blockchain is a distributed network of systems, so data breaches are very difficult to carry out.
Since the blockchain stores the hash of each block, it is very difficult to carry out malicious attacks.
Tampering with the data changes the hash of a block, which makes the blockchain invalid.
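To tie the pieces above together, here is a short driver that builds a chain and runs the validity check. It is an illustrative variant of the main method shown earlier, not part of the original article, and it assumes isChainValid() has been added as a static method of the same GFG class that holds the blockchain list:

// Hypothetical driver: builds a small chain and verifies it.
public static void main(String[] args)
{
    blockchain.add(new Block("First block", "0"));
    blockchain.add(new Block(
        "Second block",
        blockchain.get(blockchain.size() - 1).hash));

    // Prints true for an untampered chain
    System.out.println(
        "Blockchain is valid: " + isChainValid());
}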
[ { "code": null, "e": 53, "s": 25, "text": "\n11 May, 2022" }, { "code": null, "e": 127, "s": 53, "text": "Blockchain is the backbone Technology of Digital CryptoCurrency BitCoin. " }, { "code": null, "e": 256, "s": 127, "text": "A Blockchain is a list of records called blocks that are linked together using linked lists and use the cryptographic technique." }, { "code": null, "e": 454, "s": 256, "text": "Each block contains its own digital fingerprint called Hash, the hash of the previous block, a timestamp and the data of the transaction made, making it more secure towards any kind of data breach." }, { "code": null, "e": 801, "s": 454, "text": "Therefore, if the data of one block is changed then its hash will also change. If the hash is changed, then its hash will be different from the next block that contains the hash of the previous block affecting all the hashes of the blocks after it. Changing of the hashes and then comparing it with other blocks allows us to check the blockchain." }, { "code": null, "e": 914, "s": 801, "text": "Implementation of the Blockchain: The following are the functions used in the implementation of the blockchain. " }, { "code": null, "e": 1317, "s": 914, "text": "Creating Blocks: To create a block, a Block class is implemented. In the class Block: hash will contain the hash of the block andpreviousHash will contain the hash of the previous block.String data is used to store the data of the block and“long timeStamp” is used to store the timestamp of the block. Here long data type is used to store the number of milliseconds.calculateHash() to generate the hash" }, { "code": null, "e": 1361, "s": 1317, "text": "hash will contain the hash of the block and" }, { "code": null, "e": 1419, "s": 1361, "text": "previousHash will contain the hash of the previous block." }, { "code": null, "e": 1474, "s": 1419, "text": "String data is used to store the data of the block and" }, { "code": null, "e": 1601, "s": 1474, "text": "“long timeStamp” is used to store the timestamp of the block. Here long data type is used to store the number of milliseconds." }, { "code": null, "e": 1638, "s": 1601, "text": "calculateHash() to generate the hash" }, { "code": null, "e": 1686, "s": 1638, "text": "Below is the implementation of the class block:" }, { "code": null, "e": 1691, "s": 1686, "text": "Java" }, { "code": "// Java implementation for creating// a block in a Blockchain import java.util.Date; public class Block { // Every block contains // a hash, previous hash and // data of the transaction made public String hash; public String previousHash; private String data; private long timeStamp; // Constructor for the block public Block(String data, String previousHash) { this.data = data; this.previousHash = previousHash; this.timeStamp = new Date().getTime(); this.hash = calculateHash(); } // Function to calculate the hash public String calculateHash() { // Calling the \"crypt\" class // to calculate the hash // by using the previous hash, // timestamp and the data String calculatedhash = crypt.sha256( previousHash + Long.toString(timeStamp) + data); return calculatedhash; }}", "e": 2701, "s": 1691, "text": null }, { "code": null, "e": 2810, "s": 2701, "text": "Generating Hashes: To generate hash, SHA256 algorithm is used. Below is the implementation of the algorithm." 
}, { "code": null, "e": 2815, "s": 2810, "text": "Java" }, { "code": "// Java program for Generating Hashes import java.security.MessageDigest; public class crypt { // Function that takes the string input // and returns the hashed string. public static String sha256(String input) { try { MessageDigest sha = MessageDigest .getInstance( \"SHA-256\"); int i = 0; byte[] hash = sha.digest( input.getBytes(\"UTF-8\")); // hexHash will contain // the Hexadecimal hash StringBuffer hexHash = new StringBuffer(); while (i < hash.length) { String hex = Integer.toHexString( 0xff & hash[i]); if (hex.length() == 1) hexHash.append('0'); hexHash.append(hex); i++; } return hexHash.toString(); } catch (Exception e) { throw new RuntimeException(e); } }}", "e": 3873, "s": 2815, "text": null }, { "code": null, "e": 4030, "s": 3873, "text": "Storing the blocks: Now, let us store the blocks in the ArrayList of Block type, along with their hash values by calling the constructor of the Block Class." }, { "code": null, "e": 4035, "s": 4030, "text": "Java" }, { "code": "// Java implementation to store// blocks in an ArrayList import java.util.ArrayList; public class GFG { // ArrayList to store the blocks public static ArrayList<Block> blockchain = new ArrayList<Block>(); // Driver code public static void main(String[] args) { // Adding the data to the ArrayList blockchain.add(new Block( \"First block\", \"0\")); blockchain.add(new Block( \"Second block\", blockchain .get(blockchain.size() - 1) .hash)); blockchain.add(new Block( \"Third block\", blockchain .get(blockchain.size() - 1) .hash)); blockchain.add(new Block( \"Fourth block\", blockchain .get(blockchain.size() - 1) .hash)); blockchain.add(new Block( \"Fifth block\", blockchain .get(blockchain.size() - 1) .hash)); }}", "e": 5032, "s": 4035, "text": null }, { "code": null, "e": 5406, "s": 5032, "text": "Blockchain Validity: Finally, we need to check the validity of the BlockChain by creating a boolean method to check the validity. This method will be implemented in the “Main” class and checks whether the hash is equal to the calculated hash or not. If all the hashes are equal to the calculated hashes, then the block is valid. Below is the implementation of the validity:" }, { "code": null, "e": 5411, "s": 5406, "text": "Java" }, { "code": "// Java implementation to check// validity of the blockchain // Function to check// validity of the blockchainpublic static Boolean isChainValid(){ Block currentBlock; Block previousBlock; // Iterating through // all the blocks for (int i = 1; i < blockchain.size(); i++) { // Storing the current block // and the previous block currentBlock = blockchain.get(i); previousBlock = blockchain.get(i - 1); // Checking if the current hash // is equal to the // calculated hash or not if (!currentBlock.hash .equals( currentBlock .calculateHash())) { System.out.println( \"Hashes are not equal\"); return false; } // Checking of the previous hash // is equal to the calculated // previous hash or not if (!previousBlock .hash .equals( currentBlock .previousHash)) { System.out.println( \"Previous Hashes are not equal\"); return false; } } // If all the hashes are equal // to the calculated hashes, // then the blockchain is valid return true;}", "e": 6706, "s": 5411, "text": null }, { "code": null, "e": 6737, "s": 6706, "text": "Advantages of the Blockchain: " }, { "code": null, "e": 7049, "s": 6737, "text": "Blockchain is a distributed network of systems. 
Therefore, data breaches are very difficult to be carried out.Since, Blockchain generated hashes of each block, therefore, it is very difficult to carry out malicious attacks.Data Tampering will change the hash of each block which will make the blockchain invalid" }, { "code": null, "e": 7160, "s": 7049, "text": "Blockchain is a distributed network of systems. Therefore, data breaches are very difficult to be carried out." }, { "code": null, "e": 7274, "s": 7160, "text": "Since, Blockchain generated hashes of each block, therefore, it is very difficult to carry out malicious attacks." }, { "code": null, "e": 7363, "s": 7274, "text": "Data Tampering will change the hash of each block which will make the blockchain invalid" }, { "code": null, "e": 7376, "s": 7363, "text": "simmytarika5" }, { "code": null, "e": 7393, "s": 7376, "text": "surinderdawra388" }, { "code": null, "e": 7404, "s": 7393, "text": "Algorithms" }, { "code": null, "e": 7415, "s": 7404, "text": "Blockchain" }, { "code": null, "e": 7429, "s": 7415, "text": "Java Programs" }, { "code": null, "e": 7440, "s": 7429, "text": "Algorithms" }, { "code": null, "e": 7538, "s": 7440, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 7563, "s": 7538, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 7612, "s": 7563, "text": "SDE SHEET - A Complete Guide for SDE Preparation" }, { "code": null, "e": 7650, "s": 7612, "text": "What is Hashing | A Complete Tutorial" }, { "code": null, "e": 7686, "s": 7650, "text": "CPU Scheduling in Operating Systems" }, { "code": null, "e": 7737, "s": 7686, "text": "Understanding Time Complexity with Simple Examples" }, { "code": null, "e": 7755, "s": 7737, "text": "Solidity - Arrays" }, { "code": null, "e": 7793, "s": 7755, "text": "How to Become a Blockchain Developer?" }, { "code": null, "e": 7813, "s": 7793, "text": "Solidity - Mappings" }, { "code": null, "e": 7848, "s": 7813, "text": "Consensus Algorithms in Blockchain" } ]
DateTime.ToLongDateString() Method in C#
11 Feb, 2019

This method is used to convert the value of the current DateTime object to its equivalent long date string representation.

Syntax: public string ToLongDateString ();

Return Value: This method returns a string that contains the long date string representation of the current DateTime object.

The programs below illustrate the use of the DateTime.ToLongDateString() method.

Example 1:

// C# program to demonstrate the
// DateTime.ToLongDateString()
// Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // creating object of DateTime
        DateTime date = new DateTime(2011, 1, 1, 4, 0, 15);

        // Converting the value of the
        // current DateTime object to
        // its equivalent long date
        // string representation
        // using ToLongDateString() method
        string value = date.ToLongDateString();

        // Display the date
        Console.WriteLine("String representation"
                          + " of date is {0}", value);
    }
}

Output:

String representation of date is Saturday, 01 January 2011

Example 2:

// C# program to demonstrate the
// DateTime.ToLongDateString()
// Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // creating object of DateTime
        DateTime date = DateTime.Now;

        // Converting the value of the
        // current DateTime object to
        // its equivalent long date
        // string representation
        // using ToLongDateString() method
        string value = date.ToLongDateString();

        // Display the date
        Console.WriteLine("String representation "
                          + "of date is {0}", value);
    }
}

Output:

String representation of date is Monday, 11 February 2019

Reference: https://docs.microsoft.com/en-us/dotnet/api/system.datetime.tolongdatestring?view=netframework-4.7.2
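As a side note beyond the original examples: ToLongDateString() formats using the current culture, and it is equivalent to calling ToString with the "D" standard format specifier, which also accepts an explicit culture. A minimal sketch:

// Equivalent to ToLongDateString(), but with the culture chosen
// explicitly instead of the thread's current culture.
using System;
using System.Globalization;

class GFG {

    public static void Main()
    {
        DateTime date = new DateTime(2011, 1, 1);

        // "D" is the standard long date format specifier
        Console.WriteLine(date.ToString("D",
                          new CultureInfo("fr-FR")));
        // Output: samedi 1 janvier 2011
    }
}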
[ { "code": null, "e": 28, "s": 0, "text": "\n11 Feb, 2019" }, { "code": null, "e": 151, "s": 28, "text": "This method is used to convert the value of the current DateTime object to its equivalent long date string representation." }, { "code": null, "e": 194, "s": 151, "text": "Syntax: public string ToLongDateString ();" }, { "code": null, "e": 319, "s": 194, "text": "Return Value: This method returns a string that contains the long date string representation of the current DateTime object." }, { "code": null, "e": 392, "s": 319, "text": "Below programs illustrate the use of DateTime.ToLongDateString() Method:" }, { "code": null, "e": 403, "s": 392, "text": "Example 1:" }, { "code": "// C# program to demonstrate the// DateTime.ToLongDateString()// Methodusing System;using System.Globalization; class GFG { // Main Method public static void Main() { // creating object of DateTime DateTime date = new DateTime(2011, 1, 1, 4, 0, 15); // Converting the value of the // current DateTime object to // its equivalent long date // string representation. // using ToLongDateString() method; string value = date.ToLongDateString(); // Display the date Console.WriteLine(\"String representation\"+ \" of date is {0}\", value); }}", "e": 1085, "s": 403, "text": null }, { "code": null, "e": 1145, "s": 1085, "text": "String representation of date is Saturday, 01 January 2011\n" }, { "code": null, "e": 1156, "s": 1145, "text": "Example 2:" }, { "code": "// C# program to demonstrate the// DateTime.ToLongDateString()// Methodusing System;using System.Globalization; class GFG { // Main Method public static void Main() { // creating object of DateTime DateTime date = DateTime.Now; // Converting the value of the // current DateTime object to // its equivalent long date // string representation. // using ToLongDateString() method; string value = date.ToLongDateString(); // Display the date Console.WriteLine(\"String representation \"+ \"of date is {0}\", value); }}", "e": 1786, "s": 1156, "text": null }, { "code": null, "e": 1845, "s": 1786, "text": "String representation of date is Monday, 11 February 2019\n" }, { "code": null, "e": 1856, "s": 1845, "text": "Reference:" }, { "code": null, "e": 1957, "s": 1856, "text": "https://docs.microsoft.com/en-us/dotnet/api/system.datetime.tolongdatestring?view=netframework-4.7.2" }, { "code": null, "e": 1980, "s": 1957, "text": "CSharp DateTime Struct" }, { "code": null, "e": 1994, "s": 1980, "text": "CSharp-method" }, { "code": null, "e": 1997, "s": 1994, "text": "C#" }, { "code": null, "e": 2095, "s": 1997, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2138, "s": 2095, "text": "C# | Multiple inheritance using interfaces" }, { "code": null, "e": 2187, "s": 2138, "text": "Differences Between .NET Core and .NET Framework" }, { "code": null, "e": 2210, "s": 2187, "text": "Extension Method in C#" }, { "code": null, "e": 2226, "s": 2210, "text": "C# | List Class" }, { "code": null, "e": 2254, "s": 2226, "text": "HashSet in C# with Examples" }, { "code": null, "e": 2315, "s": 2254, "text": "C# | .NET Framework (Basic Architecture and Component Stack)" }, { "code": null, "e": 2338, "s": 2315, "text": "Switch Statement in C#" }, { "code": null, "e": 2363, "s": 2338, "text": "Lambda Expressions in C#" }, { "code": null, "e": 2385, "s": 2363, "text": "Partial Classes in C#" } ]
Merge Pandas DataFrame with a common column
To merge two Pandas DataFrames on a common column, use the merge() function and set the on parameter to the column name.

At first, let us import the pandas library with an alias −

import pandas as pd

Let us create the 1st DataFrame −

dataFrame1 = pd.DataFrame(
   {
      "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],
      "Units": [100, 150, 110, 80, 110, 90]
   }
)

Next, create the 2nd DataFrame −

dataFrame2 = pd.DataFrame(
   {
      "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Mercedes', 'Jaguar'],
      "Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000]
   }
)

Now, merge the two DataFrames on the common column "Car" −

mergedRes = pd.merge(dataFrame1, dataFrame2, on='Car')

Following is the complete code −

import pandas as pd

# Create DataFrame1
dataFrame1 = pd.DataFrame(
   {
      "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],
      "Units": [100, 150, 110, 80, 110, 90]
   }
)

print("DataFrame1 ...\n", dataFrame1)

# Create DataFrame2
dataFrame2 = pd.DataFrame(
   {
      "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Mercedes', 'Jaguar'],
      "Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000]
   }
)

print("\nDataFrame2 ...\n", dataFrame2)

# merge DataFrames on the common column Car
mergedRes = pd.merge(dataFrame1, dataFrame2, on='Car')
print("\nMerged data frame with common column...\n", mergedRes)

This will produce the following output −

DataFrame1 ...
       Car  Units
0      BMW    100
1    Lexus    150
2     Audi    110
3  Mustang     80
4  Bentley    110
5   Jaguar     90

DataFrame2 ...
        Car  Reg_Price
0       BMW       7000
1     Lexus       1500
2      Audi       5000
3   Mustang       8000
4  Mercedes       9000
5    Jaguar       6000

Merged data frame with common column...
       Car  Units  Reg_Price
0      BMW    100       7000
1    Lexus    150       1500
2     Audi    110       5000
3  Mustang     80       8000
4   Jaguar     90       6000
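Note that merge() performs an inner join by default, which is why Bentley and Mercedes, each present in only one DataFrame, are dropped from the result. As an illustrative extension beyond the original example, passing how='outer' keeps the non-matching rows and fills the missing column with NaN:

# Outer join keeps rows present in only one DataFrame;
# missing values are filled with NaN.
outerRes = pd.merge(dataFrame1, dataFrame2, on='Car', how='outer')
print(outerRes)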
[ { "code": null, "e": 1307, "s": 1187, "text": "To merge two Pandas DataFrame with common column, use the merge() function and set the ON parameter as the column name." }, { "code": null, "e": 1366, "s": 1307, "text": "At first, let us import the pandas library with an alias −" }, { "code": null, "e": 1386, "s": 1366, "text": "import pandas as pd" }, { "code": null, "e": 1420, "s": 1386, "text": "Let us create the 1st DataFrame −" }, { "code": null, "e": 1567, "s": 1420, "text": "dataFrame1 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],\"Units\": [100, 150, 110, 80, 110, 90]\n }\n)" }, { "code": null, "e": 1600, "s": 1567, "text": "Next, create the 2nd DataFrame −" }, { "code": null, "e": 1761, "s": 1600, "text": "dataFrame2 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Mercedes', 'Jaguar'],\"Reg_Price\": [7000, 1500, 5000, 8000, 9000, 6000]\n\n }\n)" }, { "code": null, "e": 1820, "s": 1761, "text": "Now, merge the two DataFrames with a column column “Car” −" }, { "code": null, "e": 1877, "s": 1820, "text": "mergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car')\n" }, { "code": null, "e": 1910, "s": 1877, "text": "Following is the complete code −" }, { "code": null, "e": 2515, "s": 1910, "text": "import pandas as pd\n\n# Create DataFrame1\ndataFrame1 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],\"Units\": [100, 150, 110, 80, 110, 90]\n }\n)\n\nprint\"DataFrame1 ...\\n\",dataFrame1\n\n# Create DataFrame2\ndataFrame2 = pd.DataFrame(\n {\n \"Car\": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Mercedes', 'Jaguar'],\"Reg_Price\": [7000, 1500, 5000, 8000, 9000, 6000]\n\n }\n)\n\nprint\"\\nDataFrame2 ...\\n\",dataFrame2\n\n# merge DataFrames with common column Car\nmergedRes = pd.merge(dataFrame1, dataFrame2, on ='Car')\nprint\"\\nMerged data frame with common column...\\n\", mergedRes" }, { "code": null, "e": 2556, "s": 2515, "text": "This will produce the following output −" }, { "code": null, "e": 3115, "s": 2556, "text": "DataFrame1 ...\n Car Units\n0 BMW 100\n1 Lexus 150\n2 Audi 110\n3 Mustang 80\n4 Bentley 110\n5 Jaguar 90\n\nDataFrame2 ...\n Car Reg_Price\n0 BMW 7000\n1 Lexus 1500\n2 Audi 5000\n3 Mustang 8000\n4 Mercedes 9000\n5 Jaguar 6000\n\nMerged data frame with common column...\n Car Units Reg_Price\n0 BMW 100 7000\n1 Lexus 150 1500\n2 Audi 110 5000\n3 Mustang 80 8000\n4 Jaguar 90 6000" } ]
Spring Boot JPA - Custom methods
We've checked the methods available by default in a Repository in the JPA Methods chapter. Now let's add custom methods and test them. Add methods to find an employee by name and by age:

package com.tutorialspoint.repository;

import java.util.List;

import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import com.tutorialspoint.entity.Employee;

@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Integer> {
   public List<Employee> findByName(String name);
   public List<Employee> findByAge(int age);
}

Spring Data JPA will create the implementations of the above methods automatically, because they follow the property-based naming convention (findBy followed by an entity property name). Let's test the added methods by adding test cases to the test file; the last two methods of the file below test the custom methods.

Following is the complete code of EmployeeRepositoryTest:

package com.tutorialspoint.repository;

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import javax.transaction.Transactional;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.tutorialspoint.entity.Employee;
import com.tutorialspoint.sprintbooth2.SprintBootH2Application;

@ExtendWith(SpringExtension.class)
@Transactional
@SpringBootTest(classes = SprintBootH2Application.class)
public class EmployeeRepositoryTest {
   @Autowired
   private EmployeeRepository employeeRepository;

   @Test
   public void testFindById() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      Employee result = employeeRepository.findById(employee.getId()).get();
      assertEquals(employee.getId(), result.getId());
   }

   @Test
   public void testFindAll() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      List<Employee> result = new ArrayList<>();
      employeeRepository.findAll().forEach(e -> result.add(e));
      assertEquals(result.size(), 1);
   }

   @Test
   public void testSave() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      Employee found = employeeRepository.findById(employee.getId()).get();
      assertEquals(employee.getId(), found.getId());
   }

   @Test
   public void testDeleteById() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      employeeRepository.deleteById(employee.getId());
      List<Employee> result = new ArrayList<>();
      employeeRepository.findAll().forEach(e -> result.add(e));
      assertEquals(result.size(), 0);
   }

   private Employee getEmployee() {
      Employee employee = new Employee();
      employee.setId(1);
      employee.setName("Mahesh");
      employee.setAge(30);
      employee.setEmail("[email protected]");
      return employee;
   }

   @Test
   public void testFindByName() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      List<Employee> result = new ArrayList<>();
      employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));
      assertEquals(result.size(), 1);
   }

   @Test
   public void testFindByAge() {
      Employee employee = getEmployee();
      employeeRepository.save(employee);
      List<Employee> result = new ArrayList<>();
      employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));
      assertEquals(result.size(), 1);
   }
}

Right-click the file in Eclipse, select Run As > JUnit Test, and verify the result.
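Beyond derived query methods, Spring Data JPA also supports hand-written queries through the @Query annotation. The snippet below is an illustrative sketch, not part of the original tutorial; the method name and JPQL string are hypothetical additions to the same EmployeeRepository interface:

import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Hypothetical addition: a JPQL query that returns
// employees older than the given age.
@Query("SELECT e FROM Employee e WHERE e.age > :age")
public List<Employee> findOlderThan(@Param("age") int age);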
[ { "code": null, "e": 2233, "s": 2112, "text": "We've checked the methods available by default in Repository in JPA Methods chapter. Now let's add a method and test it." }, { "code": null, "e": 2279, "s": 2233, "text": "Add a method to find an employee by its name." }, { "code": null, "e": 2663, "s": 2279, "text": "package com.tutorialspoint.repository;\n\nimport org.springframework.data.repository.CrudRepository;\nimport org.springframework.stereotype.Repository;\nimport com.tutorialspoint.entity.Employee;\n\n@Repository\npublic interface EmployeeRepository extends CrudRepository<Employee, Integer> {\n public List<Employee> findByName(String name);\t\n public List<Employee> findByAge(int age);\n}" }, { "code": null, "e": 2925, "s": 2663, "text": "Now Spring JPA will create the implementation of above methods automatically as we've following the property based nomenclature. Let's test the methods added by adding their test cases in test file. Last two methods of below file tests the custom methods added." }, { "code": null, "e": 2983, "s": 2925, "text": "Following is the complete code of EmployeeRepositoryTest." }, { "code": null, "e": 5774, "s": 2983, "text": "package com.tutorialspoint.repository;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.transaction.Transactional;\nimport org.junit.jupiter.api.Test;\nimport org.junit.jupiter.api.extension.ExtendWith;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.test.context.SpringBootTest;\nimport org.springframework.test.context.junit.jupiter.SpringExtension;\nimport com.tutorialspoint.entity.Employee;\nimport com.tutorialspoint.sprintbooth2.SprintBootH2Application;\n\n@ExtendWith(SpringExtension.class)\n@Transactional\n@SpringBootTest(classes = SprintBootH2Application.class)\npublic class EmployeeRepositoryTest {\n @Autowired\n private EmployeeRepository employeeRepository;\n @Test\n public void testFindById() {\n Employee employee = getEmployee();\t \n employeeRepository.save(employee);\n Employee result = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), result.getId());\t \n }\n @Test\n public void testFindAll() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public void testSave() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n Employee found = employeeRepository.findById(employee.getId()).get();\n assertEquals(employee.getId(), found.getId());\t \n }\n @Test\n public void testDeleteById() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n employeeRepository.deleteById(employee.getId());\n List<Employee> result = new ArrayList<>();\n employeeRepository.findAll().forEach(e -> result.add(e));\n assertEquals(result.size(), 0);\n }\n private Employee getEmployee() {\n Employee employee = new Employee();\n employee.setId(1);\n employee.setName(\"Mahesh\");\n employee.setAge(30);\n employee.setEmail(\"[email protected]\");\n return employee;\n }\n @Test\n public void testFindByName() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByName(employee.getName()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n @Test\n public 
void testFindByAge() {\n Employee employee = getEmployee();\n employeeRepository.save(employee);\n List<Employee> result = new ArrayList<>();\n employeeRepository.findByAge(employee.getAge()).forEach(e -> result.add(e));\n assertEquals(result.size(), 1);\t \n }\n}" } ]
Python PIL | Image.resize() method
17 Jun, 2021

PIL is the Python Imaging Library which provides the Python interpreter with image editing capabilities. The Image module provides a class with the same name which is used to represent a PIL image. The module also provides a number of factory functions, including functions to load images from files and to create new images. Image.resize() returns a resized copy of the image.

Syntax: Image.resize(size, resample=0)

Parameters:
size – The requested size in pixels, as a 2-tuple: (width, height).
resample – An optional resampling filter. This can be one of PIL.Image.NEAREST (use nearest neighbour), PIL.Image.BILINEAR (linear interpolation), PIL.Image.BICUBIC (cubic spline interpolation), or PIL.Image.LANCZOS (a high-quality downsampling filter). If omitted, or if the image has mode "1" or "P", it is set to PIL.Image.NEAREST.

Return type: An Image object.

Image used: ybear.jpg (sample image, not reproduced here).

Example 1:

# Importing Image class from PIL module
from PIL import Image

# Opens an image in RGB mode
im = Image.open(r"C:\Users\System-Pc\Desktop\ybear.jpg")

# Size of the image in pixels (size of original image)
# (This is not mandatory)
width, height = im.size

# Setting the points for the cropped image
left = 4
top = height / 5
right = 154
bottom = 3 * height / 5

# Cropped image of the above dimensions
# (It will not change the original image)
im1 = im.crop((left, top, right, bottom))
newsize = (300, 300)
im1 = im1.resize(newsize)

# Shows the image in the image viewer
im1.show()

Output: (the cropped, resized image opens in the image viewer)

Another example, using a different newsize value:

# Importing Image class from PIL module
from PIL import Image

# Opens an image in RGB mode
im = Image.open(r"C:\Users\System-Pc\Desktop\ybear.jpg")

# Size of the image in pixels (size of original image)
# (This is not mandatory)
width, height = im.size

# Setting the points for the cropped image
left = 6
top = height / 4
right = 174
bottom = 3 * height / 4

# Cropped image of the above dimensions
# (It will not change the original image)
im1 = im.crop((left, top, right, bottom))
newsize = (200, 200)
im1 = im1.resize(newsize)

# Shows the image in the image viewer
im1.show()

Output: (the cropped, resized image opens in the image viewer)
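One caveat worth adding beyond the original examples: resize() does not preserve the aspect ratio; it stretches the image to exactly the requested size. To downscale while keeping the aspect ratio, Image.thumbnail() modifies the image in place so that it fits within the given bounds. A minimal sketch using the same sample file path:

from PIL import Image

im = Image.open(r"C:\Users\System-Pc\Desktop\ybear.jpg")

# thumbnail() shrinks the image in place so that neither dimension
# exceeds the bound, preserving the aspect ratio.
im.thumbnail((300, 300))
im.show()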
[ { "code": null, "e": 54, "s": 26, "text": "\n17 Jun, 2021" }, { "code": null, "e": 434, "s": 54, "text": "PIL is the Python Imaging Library which provides the python interpreter with image editing capabilities. The Image module provides a class with the same name which is used to represent a PIL image. The module also provides a number of factory functions, including functions to load images from files, and to create new images.Image.resize() Returns a resized copy of this image. " }, { "code": null, "e": 916, "s": 434, "text": "Syntax: Image.resize(size, resample=0) Parameters: size – The requested size in pixels, as a 2-tuple: (width, height). resample – An optional resampling filter. This can be one of PIL.Image.NEAREST (use nearest neighbour), PIL.Image.BILINEAR (linear interpolation), PIL.Image.BICUBIC (cubic spline interpolation), or PIL.Image.LANCZOS (a high-quality downsampling filter). If omitted, or if the image has mode “1” or “P”, it is set PIL.Image.NEAREST.Returns type: An Image object. " }, { "code": null, "e": 930, "s": 916, "text": "Image Used: " }, { "code": null, "e": 940, "s": 932, "text": "Python3" }, { "code": "# Importing Image class from PIL modulefrom PIL import Image # Opens a image in RGB modeim = Image.open(r\"C:\\Users\\System-Pc\\Desktop\\ybear.jpg\") # Size of the image in pixels (size of original image)# (This is not mandatory)width, height = im.size # Setting the points for cropped imageleft = 4top = height / 5right = 154bottom = 3 * height / 5 # Cropped image of above dimension# (It will not change original image)im1 = im.crop((left, top, right, bottom))newsize = (300, 300)im1 = im1.resize(newsize)# Shows the image in image viewerim1.show()", "e": 1486, "s": 940, "text": null }, { "code": null, "e": 1496, "s": 1486, "text": "Output: " }, { "code": null, "e": 1554, "s": 1496, "text": "Another example:Here we use the different newsize value. " }, { "code": null, "e": 1562, "s": 1554, "text": "Python3" }, { "code": "# Importing Image class from PIL modulefrom PIL import Image # Opens a image in RGB modeim = Image.open(r\"C:\\Users\\System-Pc\\Desktop\\ybear.jpg\") # Size of the image in pixels (size of original image)# (This is not mandatory)width, height = im.size # Setting the points for cropped imageleft = 6top = height / 4right = 174bottom = 3 * height / 4 # Cropped image of above dimension# (It will not change original image)im1 = im.crop((left, top, right, bottom))newsize = (200, 200)im1 = im1.resize(newsize)# Shows the image in image viewerim1.show()", "e": 2108, "s": 1562, "text": null }, { "code": null, "e": 2118, "s": 2108, "text": "Output: " }, { "code": null, "e": 2136, "s": 2120, "text": "simranarora5sos" }, { "code": null, "e": 2147, "s": 2136, "text": "Python-pil" }, { "code": null, "e": 2154, "s": 2147, "text": "Python" }, { "code": null, "e": 2252, "s": 2154, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2270, "s": 2252, "text": "Python Dictionary" }, { "code": null, "e": 2312, "s": 2270, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 2334, "s": 2312, "text": "Enumerate() in Python" }, { "code": null, "e": 2369, "s": 2334, "text": "Read a file line by line in Python" }, { "code": null, "e": 2395, "s": 2369, "text": "Python String | replace()" }, { "code": null, "e": 2427, "s": 2395, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 2456, "s": 2427, "text": "*args and **kwargs in Python" }, { "code": null, "e": 2483, "s": 2456, "text": "Python Classes and Objects" }, { "code": null, "e": 2513, "s": 2483, "text": "Iterate over a list in Python" } ]
Image Geometric Transformation In Numpy and OpenCV | by Daryl Tan | Towards Data Science
Geometric transformation is an essential image processing technique that has wide applications. For example, a simple use case in computer graphics is to rescale the graphics content when displaying it on a desktop vs a mobile device.

It can also be applied to projectively warp an image to another image plane. For instance, instead of looking at a scene straight ahead, we may wish to look at it from another viewpoint; perspective transformation is applied in this scenario to achieve that.

One other exciting application is in training deep neural networks. Training a deep model requires a vast amount of data, and in almost all cases models benefit from higher generalisation performance as training data increases. One way to artificially generate more data is to randomly apply an affine transformation to the input data, a technique also known as augmentation.

In this article, I would like to walk you through some of these transformations and how we can perform them in Numpy first, to understand the concept from first principles, and then show how the same can be easily achieved using OpenCV. If you are like me and like to understand concepts from basic theories, this post should be of interest to you!

In particular, I will focus on the 2D affine transformation. What you need is some basic knowledge of linear algebra, and you should be able to follow. The accompanying code can be found here if you prefer to tinker with it yourself!

Without going too much into the mathematical details, the behaviour of the transform is controlled by the parameters in the affine matrix A:

x' = Ax

where

A = [[a_11, a_12, a_13],
     [a_21, a_22, a_23],
     [   0,    0,    1]]

is a 2x3 matrix (or 3x3 in homogeneous coordinates) and x is a vector of the form [x, y] (or [x, y, 1] in homogeneous coordinates). The formula above says that A takes any vector x and maps it to another vector x'.

Generally, an affine transformation has 6 degrees of freedom, warping any image to another location after matrix multiplication pixel by pixel. The transformed image preserves both parallel and straight lines from the original image (think of shearing). Any matrix A that satisfies these 2 conditions is considered an affine transformation matrix.

To narrow our discussion, there are some specialized forms of A that we are interested in: the rotation, translation and scaling matrices shown in the figure below.

One very useful property of the above affine transformations is that they are linear functions. They preserve the operations of multiplication and addition and obey the superposition principle. In other words, we can composite 2 or more transformations: vector addition to represent translation and matrix multiplication to represent the linear mapping, as long as we represent them in homogeneous coordinates. For example, we could represent a rotation followed by a translation as:

A = array([[cos(angle), -sin(angle), tx],
           [sin(angle),  cos(angle), ty],
           [         0,           0,  1]])

In Python and OpenCV, the origin of a 2D matrix is located at the top-left corner, starting at x, y = (0, 0). The coordinate system is left-handed: the x-axis points positive to the right and the y-axis points positive downwards. But most transformation matrices you find in textbooks and literature, including the 3 matrices shown above, follow the right-hand coordinate system, so some minor adjustments must be made to align the axis directions.

Before we experiment with the transformations on images, let's look at how we could do it on point coordinates.
Because they are essentially the same, with images being an array of 2D coordinates in a grid, the following code can be used to transform the points [0, 0], [0, 1], [1, 0], [1, 1], the blue dots in figure 3. Python provides a useful shorthand operator, @, to represent matrix multiplication.

# Points generator
def get_grid(x, y, homogenous=False):
    coords = np.indices((x, y)).reshape(2, -1)
    return np.vstack((coords, np.ones(coords.shape[1]))) if homogenous else coords

# Define Transformations
def get_rotation(angle):
    angle = np.radians(angle)
    return np.array([
        [np.cos(angle), -np.sin(angle), 0],
        [np.sin(angle),  np.cos(angle), 0],
        [0, 0, 1]
    ])

def get_translation(tx, ty):
    return np.array([
        [1, 0, tx],
        [0, 1, ty],
        [0, 0, 1]
    ])

def get_scale(s):
    return np.array([
        [s, 0, 0],
        [0, s, 0],
        [0, 0, 1]
    ])

R1 = get_rotation(135)
T1 = get_translation(-2, 2)
S1 = get_scale(2)

# Apply transformation x' = Ax
coords_rot = R1 @ coords
coords_trans = T1 @ coords
coords_scale = S1 @ coords
coords_composite1 = R1 @ T1 @ coords
coords_composite2 = T1 @ R1 @ coords

It is important to note that, with a few exceptions, matrices generally do not commute, i.e.

A1 @ A2 != A2 @ A1

Therefore, for the transformations

# Translation and then rotation
coords_composite1 = R1 @ T1 @ coords

# Rotation and then translation
coords_composite2 = T1 @ R1 @ coords

you will observe in figure 3 that they do not result in the same mapping: order matters. How the composite is applied can be understood from right to left.

Now for images, there are several things to take note of. Firstly, as mentioned before, we must realign the vertical axis. Secondly, the transformed points must be projected onto an image plane. In essence, the steps that need to be taken are:

1. Create a new image I'(x, y) to output the transformed points
2. Apply the transformation A
3. Project the points onto a new image plane, only considering those that lie within the image boundary

Let's look at a transformation where we wish to zoom in by 2x and rotate an image by 45 degrees about its centre position. This could be done by applying the following composite matrix:

height, width = image.shape[:2]
tx, ty = np.array((width // 2, height // 2))
angle = np.radians(45)
scale = 2.0

R = np.array([
    [np.cos(angle), np.sin(angle), 0],
    [-np.sin(angle), np.cos(angle), 0],
    [0, 0, 1]
])

T = np.array([
    [1, 0, tx],
    [0, 1, ty],
    [0, 0, 1]
])

S = np.array([
    [scale, 0, 0],
    [0, scale, 0],
    [0, 0, 1]
])

A = T @ R @ S @ np.linalg.inv(T)

Applying it to the image:

# Grid to represent image coordinates
coords = get_grid(width, height, True)
x_ori, y_ori = coords[0].astype(int), coords[1].astype(int)

# Apply transformation
warp_coords = np.round(A @ coords).astype(int)
xcoord2, ycoord2 = warp_coords[0, :], warp_coords[1, :]

# Get pixels within image boundary
indices = np.where((xcoord2 >= 0) & (xcoord2 < width) &
                   (ycoord2 >= 0) & (ycoord2 < height))

xpix2, ypix2 = xcoord2[indices], ycoord2[indices]
xpix, ypix = x_ori[indices], y_ori[indices]

# Map the source pixel RGB data to its new location in another array
canvas = np.zeros_like(image)
canvas[ypix2, xpix2] = image[ypix, xpix]

A few points to take note of in the 2 code snippets above:
1. The left-handed coordinate system rotation is accounted for by swapping the sign.
2. Since points are rotated about the origin, we first translate the centre to the origin before doing rotation and scaling.
3. Points are then translated back to the image plane.
4. The transformed points are rounded to integers to represent discrete pixel values.
5. Next, we only consider the pixels that lie within the image boundary.
6. Map the correspondence between I(x, y) and I'(x, y).

As you can see, due to step 4, the resulting image (figure 4) will have several aliasing artefacts and holes. To eliminate these, open-source libraries use interpolation techniques to fill the gaps after transformation.

Inverse Warping

Another approach to prevent aliasing is to formulate the warping as resampling from the source image I(x, y) given the warped points X'. This can be done by multiplying X' by the inverse of A. As a caveat, the transformation has to be invertible.

1. Apply the inverse of the transformation to X':

X = np.linalg.inv(A) @ X'

Note: for images, the inverse warping of X' is simply reprojecting I'(x, y) onto I(x, y). So we simply apply the inverse transformation to the I'(x, y) pixel coordinates, as you will see below.

2. Determine where it lands in the original image plane.

3. Resample the RGB pixels from I(x, y) and map them back to I'(x, y).

Code:

# Set up pixel coordinates I'(x, y)
coords = get_grid(width, height, True)
x2, y2 = coords[0].astype(int), coords[1].astype(int)

# Apply inverse transform and round it (nearest neighbour interpolation)
warp_coords = (Ainv @ coords).astype(int)
x1, y1 = warp_coords[0, :], warp_coords[1, :]

# Get pixels within image boundaries
indices = np.where((x1 >= 0) & (x1 < width) &
                   (y1 >= 0) & (y1 < height))

xpix1, ypix1 = x2[indices], y2[indices]
xpix2, ypix2 = x1[indices], y1[indices]

# Map correspondence
canvas = np.zeros_like(image)
canvas[ypix1, xpix1] = image[ypix2, xpix2]

Running the above code should give you a dense, hole-free image :) Feel free to download the code and play around with the parameters to apply other transformations.

Now that you have a better understanding of geometric transformations, note that most developers and researchers save themselves the hassle of writing all those transformations and simply rely on optimised libraries to perform the task. Doing an affine transformation in OpenCV is very simple. There are a few ways to do it.

1. Write the affine transformation yourself and call cv2.warpAffine(image, A, output_shape).

The code below shows the overall affine matrix that would give the same results as above. A good exercise would be to derive the formulation yourself!
def get_affine_cv(t, r, s):
    sin_theta = np.sin(r)
    cos_theta = np.cos(r)

    a_11 = s * cos_theta
    a_21 = -s * sin_theta

    a_12 = s * sin_theta
    a_22 = s * cos_theta

    a_13 = t[0] * (1 - s * cos_theta) - s * sin_theta * t[1]
    a_23 = t[1] * (1 - s * cos_theta) + s * sin_theta * t[0]

    return np.array([[a_11, a_12, a_13],
                     [a_21, a_22, a_23]])

A2 = get_affine_cv((tx, ty), angle, scale)
warped = cv2.warpAffine(image, A2, (width, height))

2. Rely on OpenCV to return the affine transformation matrix using cv2.getRotationMatrix2D(center, angle, scale). This function rotates the image about the point center by the given angle and scales it by scale:

A3 = cv2.getRotationMatrix2D((tx, ty), np.rad2deg(angle), scale)
warped = cv2.warpAffine(image, A3, (width, height),
                        flags=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT,
                        borderValue=0)

In this article, I have covered the basic concepts of geometric transformation and how you can apply it to images. Many advanced computer vision techniques, such as SLAM using visual odometry and multi-view synthesis, rely on first understanding transformations. And I believe it is most certainly beneficial, as a computer vision practitioner, to understand how these transformations work under the hood when we use powerful libraries such as imgaug and albumentations.

Thanks for reading! I hope you have gained a better understanding of how these formulas are written and used in libraries. Follow to see more posts on computer vision and machine learning :) Do sound out in the comments if you spot any mistake or anything is unclear!
[ { "code": null, "e": 413, "s": 172, "text": "Geometric transformation is an essential image processing techniques that have wide applications. For example, a simple use case would be in computer graphics to simply rescale the graphics content when displaying it on a desktop vs mobile." }, { "code": null, "e": 670, "s": 413, "text": "It could also be applied to projectively warp an image to another image plane. For instance, instead of looking at a scene straight ahead, we wish to look at it from another viewpoint, perspective transformation is applied in this scenario to achieve that." }, { "code": null, "e": 1043, "s": 670, "text": "One other exciting application is in training deep neural networks. Training deep model requires vast amount of data. And in almost all cases, models benefit from higher generalisation performance as training data increases. One way to artificially generate more data is to randomly apply an affine transformation to the input data. A technique also known as augmentation." }, { "code": null, "e": 1375, "s": 1043, "text": "In this article, I would like to walk you through some of the transformation and how we can perform them in Numpy first to understand the concept from first principles. Then how it could be easily achieved using OpenCV. if you are like me who likes to understand concepts from basic theories, this post would be of interest to you!" }, { "code": null, "e": 1604, "s": 1375, "text": "In particular, I will focus on 2D affine transformation. What you need is some basic knowledge of Linear Algebra and you should be able to follow. The accompanying code can be found here if you prefer to tingle with it yourself!" }, { "code": null, "e": 1739, "s": 1604, "text": "Without going too much into the mathematical details, the behaviour of the transform is controlled by some parameters in the affine A." }, { "code": null, "e": 1747, "s": 1739, "text": "x’ = Ax" }, { "code": null, "e": 1838, "s": 1747, "text": "where A = [[a_11, a_12, a_13], [a_21, a_22, a_23], [ 0 , 0 , 1 ]]" }, { "code": null, "e": 2047, "s": 1838, "text": "is a 2x3 matrix or 3x3 in homogenous coordinate, and x is a vector of the form [x, y] or [x, y, 1] in homogeneous coordinate. The formula above says that A takes any vector x and maps it to another vector x’." }, { "code": null, "e": 2392, "s": 2047, "text": "Generally, an affine transformation has 6 degrees of freedom, warping any image to another location after matrix multiplication pixel by pixel. The transformed image preserved both parallel and straight line in the original image (think of shearing). Any matrix A that satisfies these 2 conditions is considered an affine transformation matrix." }, { "code": null, "e": 2586, "s": 2392, "text": "To narrow our discussion, there are some specialized forms of A and this is what we are interested in. This includes the Rotation, Translation and Scaling matrices as shown in the figure below." }, { "code": null, "e": 2774, "s": 2586, "text": "One very useful property of the above affine transformations is they are linear functions. They preserve the operation of multiplication and addition and obey the superposition principle." }, { "code": null, "e": 3060, "s": 2774, "text": "In other words, we can composite 2 or more transformations: vector addition to represent translation and matrix multiplication to represent the linear mapping as long as we represent them in homogenous coordinate. 
For example, we could represent a rotation followed by a translation as" }, { "code": null, "e": 3187, "s": 3060, "text": "A = array([[cos(angle), -sin(angle), tx], [sin(angle), cos(angle), ty], [0, 0, 1]])" }, { "code": null, "e": 3412, "s": 3187, "text": "In Python and OpenCV, the origin of a 2D matrix is located at the top left corner starting at x, y= (0, 0). The coordinate system is left-handed where x-axis points positive to the right and y-axis points positive downwards." }, { "code": null, "e": 3627, "s": 3412, "text": "But most transformation matrix you find in textbooks and literature including the 3 matrices shown above follows the right-hand coordinate system. So some minor adjustments must be made to align the axis direction." }, { "code": null, "e": 3833, "s": 3627, "text": "Before we experiment with the transformations on images, let’s look at how we could do it on point coordinates. Because they are essentially the same with images being an array of 2D coordinates in a grid." }, { "code": null, "e": 3995, "s": 3833, "text": "Utilising from what we have learnt above, the following code below can be used to transform the points [0, 0], [0, 1], [1, 0], [1,1] . The blue dots in figure 3." }, { "code": null, "e": 4078, "s": 3995, "text": "Python provides a useful shorthand operator, @ to represent matrix multiplication." }, { "code": null, "e": 4920, "s": 4078, "text": "# Points generatordef get_grid(x, y, homogenous=False): coords = np.indices((x, y)).reshape(2, -1) return np.vstack((coords, np.ones(coords.shape[1]))) if homogenous else coords# Define Transformationsdef get_rotation(angle): angle = np.radians(angle) return np.array([ [np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1] ])def get_translation(tx, ty): return np.array([ [1, 0, tx], [0, 1, ty], [0, 0, 1] ])def get_scale(s): return np.array([ [s, 0, 0], [0, s, 0], [0, 0, 1] ])R1 = get_rotation(135)T1 = get_translation(-2, 2)S1 = get_scale(2)# Apply transformation x' = Axcoords_rot = R1 @ coordscoords_trans = T1 @ coordscoords_scale = S1 @ coordscoords_composite1 = R1 @ T1 @ coordscoords_composite2 = T1 @ R1 @ coords" }, { "code": null, "e": 5018, "s": 4920, "text": "It is important to take note that, with a few exceptions, matrices generally do not commute. i.e." }, { "code": null, "e": 5037, "s": 5018, "text": "A1 @ A2 != A2 @ A1" }, { "code": null, "e": 5072, "s": 5037, "text": "Therefore, for the transformations" }, { "code": null, "e": 5207, "s": 5072, "text": "# Translation and then rotationcoords_composite1 = R1 @ T1 @ coords# Rotation and then translationcoords_composite2 = T1 @ R1 @ coords" }, { "code": null, "e": 5370, "s": 5207, "text": "You will observe in figure 3 that they do not result in the same mapping and that order matters. How the function is applied can be understood from right to left." }, { "code": null, "e": 5560, "s": 5370, "text": "Now for images, there are several things to take note. Firstly, as mention before, we must realign the vertical axis. Secondly, the transformed points must be projected onto an image plane." }, { "code": null, "e": 5609, "s": 5560, "text": "In essence, the steps that need to be taken are:" }, { "code": null, "e": 5791, "s": 5609, "text": "Create a new image I’(x, y) to output the transform pointsApply the transformation AProject the points onto a new image plane, only considering those that lie within image boundary." 
}, { "code": null, "e": 5850, "s": 5791, "text": "Create a new image I’(x, y) to output the transform points" }, { "code": null, "e": 5877, "s": 5850, "text": "Apply the transformation A" }, { "code": null, "e": 5975, "s": 5877, "text": "Project the points onto a new image plane, only considering those that lie within image boundary." }, { "code": null, "e": 6098, "s": 5975, "text": "Let's look at a transformation where we wish to zoom in by 2x and rotate an image by 45 degrees about its centre position." }, { "code": null, "e": 6161, "s": 6098, "text": "This could be done by applying the following composite matrix." }, { "code": null, "e": 6532, "s": 6161, "text": "height, width = image.shape[:2]tx, ty = np.array((width // 2, height // 2))angle = np.radians(45)scale = 2.0R = np.array([ [np.cos(angle), np.sin(angle), 0], [-np.sin(angle), np.cos(angle), 0], [0, 0, 1]])T = np.array([ [1, 0, tx], [0, 1, ty], [0, 0, 1]])S = np.array([ [scale, 0, 0], [0, scale, 0], [0, 0, 1]])A = T @ R @ S @ np.linalg.inv(T)" }, { "code": null, "e": 6550, "s": 6532, "text": "Applying to image" }, { "code": null, "e": 7137, "s": 6550, "text": "# Grid to represent image coordinatecoords = get_grid(width, height, True)x_ori, y_ori = coords[0], coords[1] # Apply transformationwarp_coords = np.round(A@coords).astype(np.int)xcoord2, ycoord2 = warp_coords[0, :], warp_coords[1, :]# Get pixels within image boundaryindices = np.where((xcoord >= 0) & (xcoord < width) & (ycoord >= 0) & (ycoord < height))xpix2, ypix2 = xcoord2[indices], ycoord2[indices]xpix, ypix = x_ori[indices], y_ori[indices]# Map the pixel RGB data to new location in another arraycanvas = np.zeros_like(image)canvas[ypix, xpix] = image[yy, xx]" }, { "code": null, "e": 7191, "s": 7137, "text": "Few points to take note in the 2 code snippets above." }, { "code": null, "e": 7626, "s": 7191, "text": "Left-handed coordinate system rotation is accounted for by swapping the sign.Since points are rotated about the origin, we first translate the centre to the origin before doing rotation and scaling.Points are then translated back to the image planeThe transforms points are rounded to integers to represent discrete pixel value.Next, we only consider the pixels that lie within the image boundaryMap correspondence I(x, y)and I’(x, y)" }, { "code": null, "e": 7704, "s": 7626, "text": "Left-handed coordinate system rotation is accounted for by swapping the sign." }, { "code": null, "e": 7826, "s": 7704, "text": "Since points are rotated about the origin, we first translate the centre to the origin before doing rotation and scaling." }, { "code": null, "e": 7877, "s": 7826, "text": "Points are then translated back to the image plane" }, { "code": null, "e": 7958, "s": 7877, "text": "The transforms points are rounded to integers to represent discrete pixel value." }, { "code": null, "e": 8027, "s": 7958, "text": "Next, we only consider the pixels that lie within the image boundary" }, { "code": null, "e": 8066, "s": 8027, "text": "Map correspondence I(x, y)and I’(x, y)" }, { "code": null, "e": 8275, "s": 8066, "text": "As you can see, due to step 4, the resulting image (figure 4) will have several aliasing and holes. To eliminate this, open-source libraries use interpolation techniques to fill the gaps after transformation." 
}, { "code": null, "e": 8291, "s": 8275, "text": "Inverse Warping" }, { "code": null, "e": 8546, "s": 8291, "text": "Another approach to prevent aliasing is to formulate the warping as that of resampling from the source image I(x, y) given the warped points X’. This can be done by multiplying X’ by the inverse of A. As a caveat, the transformation has to be invertible." }, { "code": null, "e": 8593, "s": 8546, "text": "Apply the inverse of the transformation to X’." }, { "code": null, "e": 8640, "s": 8593, "text": "Apply the inverse of the transformation to X’." }, { "code": null, "e": 8666, "s": 8640, "text": "X = np.linalg.inv(A) @ X'" }, { "code": null, "e": 8855, "s": 8666, "text": "Note: for images, the inverse warping of X’ is simply reprojecting I’(x, y) onto I(x, y). So we simply apply the inverse transformation to I’(x, y) pixel coordinates as you will see below." }, { "code": null, "e": 8911, "s": 8855, "text": "2. Determine where it lands in the original image plane" }, { "code": null, "e": 8979, "s": 8911, "text": "3. Resample from I(x, y) the RGB pixels and map it back to I’(x, y)" }, { "code": null, "e": 8984, "s": 8979, "text": "code" }, { "code": null, "e": 9539, "s": 8984, "text": "# set up pixel coordinate I'(x, y)coords = get_grid(width, height, True)x2, y2 = coords[0], coords[1]# Apply inverse transform and round it (nearest neighbour interpolation)warp_coords = (Ainv@coords).astype(np.int)x1, y1 = warp_coords[0, :], warp_coords[1, :]# Get pixels within image boundariesindices = np.where((x1 >= 0) & (x1 < width) & (y1 >= 0) & (y1 < height))xpix1, ypix1 = x2[indices], y2[indices]xpix2, ypix2 = x1[indices], y1[indices]# Map Correspondencecanvas = np.zeros_like(image)canvas[ypix1, xpix1] = image[ypix2,xpix2]" }, { "code": null, "e": 9705, "s": 9539, "text": "Running the above code should give you a dense, hole-free image :) Feel free to download the code and play around with the parameters to apply other transformations." }, { "code": null, "e": 9993, "s": 9705, "text": "Now that you have a better understanding of geometric transformation, most developers and researchers usually save themselves the hassle of writing all those transformations and simply rely on optimised libraries to perform the task. Doing affine transformation in OpenCV is very simple." }, { "code": null, "e": 10024, "s": 9993, "text": "There are a few ways to do it." }, { "code": null, "e": 10113, "s": 10024, "text": "Write the affine transformation yourself and call cv2.warpAffine(image, A, output_shape)" }, { "code": null, "e": 10202, "s": 10113, "text": "Write the affine transformation yourself and call cv2.warpAffine(image, A, output_shape)" }, { "code": null, "e": 10353, "s": 10202, "text": "The code below shows the overall affine matrix that would give the same results as above. A good exercise would be to derive the formulation yourself!" }, { "code": null, "e": 10830, "s": 10353, "text": "def get_affine_cv(t, r, s): sin_theta = np.sin(r) cos_theta = np.cos(r) a_11 = s * cos_theta a_21 = -s * sin_theta a_12 = s * sin_theta a_22 = s * cos_theta a_13 = t[0] * (1 - s * cos_theta) - s * sin_theta * t[1] a_23 = t[1] * (1 - s * cos_theta) + s * sin_theta * t[0]return np.array([[a_11, a_12, a_13], [a_21, a_22, a_23]])A2 = get_affine_cv((tx, ty), angle, scale)warped = cv2.warpAffine(image, A2, (width, height))" }, { "code": null, "e": 10944, "s": 10830, "text": "2. Rely on OpenCV to return the affine transformation matric using cv2.getRotationMatrix2D(center, angle, scale)." 
}, { "code": null, "e": 11034, "s": 10944, "text": "This function rotates the image about the point center with angle and scale it with scale" }, { "code": null, "e": 11221, "s": 11034, "text": "A3 = cv2.getRotationMatrix2D((tx, ty), np.rad2deg(angle), scale)warped = cv2.warpAffine(image, b3, (width, height), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=0)" }, { "code": null, "e": 11477, "s": 11221, "text": "In this article, I have covered the basic concepts of geometric transformation and how you can apply it on images. Many advanced computer vision such as slam using visual odometry and multiview view synthesis relies on first understanding transformations." }, { "code": null, "e": 11678, "s": 11477, "text": "I believe it is most certainly beneficial as a computer vision practitioner to understand how these transformations work under the hood when we use powerful libraries such as imgaug and albumentation." } ]
HTML | DOM attributes Property - GeeksforGeeks
25 Jul, 2019

The attributes property in the HTML DOM returns an element's attributes as a NamedNodeMap object. The NamedNodeMap represents the collection of attribute nodes and can be accessed by index number; the index starts at 0.

Syntax:

node.attributes

Return Value: It returns the NamedNodeMap object, which is the collection of the element's attribute nodes.

Note: In Internet Explorer 8 and earlier versions, the attributes property returns a collection of all possible attributes for an element, which can result in a higher count than expected.

Example 1:

<!DOCTYPE html>
<html>
   <head>
      <title>
         HTML DOM attributes Property
      </title>
   </head>

   <body>
      <!-- Setting up an image -->
      <img id = "GFG" src =
         "https://media.geeksforgeeks.org/wp-content/uploads/geeksforgeeks-logo.png">
      <br>

      <button onclick = "myGeeks()">
         DOM attributes property
      </button>

      <p id = "demo"></p>

      <script>
         function myGeeks() {

            // It returns the number of attribute nodes
            var x = document.getElementById("GFG").attributes.length;

            // Display the number of attribute nodes
            document.getElementById("demo").innerHTML = x;
         }
      </script>
   </body>
</html>

Output: Clicking the button displays 2, since the img element carries two attributes (id and src).

Example 2:

<!DOCTYPE html>
<html>
   <head>
      <title>
         HTML DOM attributes Property
      </title>
   </head>

   <body>
      <h2>
         HTML DOM attributes Property
      </h2>

      <button id = "GFG" onclick = "myGeeks()">
         Click Here!
      </button>
      <br>
      <br>

      <span>
         Button element attributes:
      </span>
      <span id = "sudo"></span>

      <script>
         function myGeeks() {

            // It returns the number of attribute nodes
            var gfg = document.getElementById("GFG").attributes.length;

            // Display the number of attribute nodes
            document.getElementById("sudo").innerHTML = gfg;
         }
      </script>
   </body>
</html>

Output: Clicking the button displays 2, since the button element carries two attributes (id and onclick).

Supported browsers: The browsers supported by the DOM attributes property are listed below:

Google Chrome
Internet Explorer
Firefox
Opera
Apple Safari
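The NamedNodeMap is useful for more than counting: each entry is an Attr node that exposes name and value properties and can be reached by index. The snippet below is a minimal sketch, not part of the original examples, that reuses the img element with id "GFG" from Example 1 and logs every attribute it carries:

<script>
   // Walk the NamedNodeMap of the #GFG image.
   // Place this script after the img element so it exists when the script runs.
   var attrs = document.getElementById("GFG").attributes;

   for (var i = 0; i < attrs.length; i++) {

      // attrs[i] is an Attr node with name and value properties
      console.log(attrs[i].name + " = " + attrs[i].value);
   }
</script>

For the image in Example 1, this logs the id and src attributes along with their current values.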
[ { "code": null, "e": 23663, "s": 23635, "text": "\n25 Jul, 2019" }, { "code": null, "e": 23909, "s": 23663, "text": "The attributes property in HTML DOM returns the group of node attributes specified by NamedNodeMap objects. The NamedNodeMap object represents the collection of attribute objects and can be accessed by index number. The index number starts at 0." }, { "code": null, "e": 23917, "s": 23909, "text": "Syntax:" }, { "code": null, "e": 23933, "s": 23917, "text": "node.attributes" }, { "code": null, "e": 24016, "s": 23933, "text": "Return Value: It returns the NamedNodeMap object which is the collection of nodes." }, { "code": null, "e": 24205, "s": 24016, "text": "Note: In Internet Explorer 8 and earlier versions, the attributes property will return a collection of all possible attributes for an element that can result in higher value than expected." }, { "code": null, "e": 24216, "s": 24205, "text": "Example 1:" }, { "code": "<!DOCTYPE html><html> <head> <title> HTML DOM attributes Property </title> </head> <body> <!-- Setting up an image --> <img id = \"GFG\" src = \"https://media.geeksforgeeks.org/wp-content/uploads/geeksforgeeks-logo.png\" > <br> <button onclick = \"myGeeks()\"> DOM attributes property </button> <p id = \"demo\"></p> <script> function myGeeks() { // It returns the number of nodes var x = document.getElementById(\"GFG\").attributes.length; // Display the number of nodes document.getElementById(\"demo\").innerHTML = x; } </script> </body></html> ", "e": 25012, "s": 24216, "text": null }, { "code": null, "e": 25020, "s": 25012, "text": "Output:" }, { "code": null, "e": 25031, "s": 25020, "text": "Example 2:" }, { "code": "<!DOCTYPE html><html> <head> <title> HTML DOM attributes Property </title></head> <body> <h2> HTML DOM attributes Property </h2> <button id=\"GFG\" onclick=\"myGeeks()\"> Click Here! </button> <br> <br> <span> Button element attributes: </span> <span id=\"sudo\"></span> <script> function myGeeks() { // It returns the number of nodes var gfg = document.getElementById(\"GFG\").attributes.length; // Display the number of nodes document.getElementById(\"sudo\").innerHTML = gfg; } </script></body> </html>", "e": 25706, "s": 25031, "text": null }, { "code": null, "e": 25714, "s": 25706, "text": "Output:" }, { "code": null, "e": 25800, "s": 25714, "text": "Supported browsers The browser supported by DOM attributes property are listed below:" }, { "code": null, "e": 25814, "s": 25800, "text": "Google Chrome" }, { "code": null, "e": 25832, "s": 25814, "text": "Internet Explorer" }, { "code": null, "e": 25840, "s": 25832, "text": "Firefox" }, { "code": null, "e": 25846, "s": 25840, "text": "Opera" }, { "code": null, "e": 25859, "s": 25846, "text": "Apple Safari" }, { "code": null, "e": 25996, "s": 25859, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 26005, "s": 25996, "text": "HTML-DOM" }, { "code": null, "e": 26012, "s": 26005, "text": "Picked" }, { "code": null, "e": 26017, "s": 26012, "text": "HTML" }, { "code": null, "e": 26034, "s": 26017, "text": "Web Technologies" }, { "code": null, "e": 26039, "s": 26034, "text": "HTML" }, { "code": null, "e": 26137, "s": 26039, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 26146, "s": 26137, "text": "Comments" }, { "code": null, "e": 26159, "s": 26146, "text": "Old Comments" }, { "code": null, "e": 26221, "s": 26159, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 26271, "s": 26221, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 26331, "s": 26271, "text": "How to set the default value for an HTML <select> element ?" }, { "code": null, "e": 26379, "s": 26331, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 26440, "s": 26379, "text": "How to set input type date in dd-mm-yyyy format using HTML ?" }, { "code": null, "e": 26496, "s": 26440, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 26529, "s": 26496, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 26591, "s": 26529, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 26634, "s": 26591, "text": "How to fetch data from an API in ReactJS ?" } ]
Bootstrap - Carousel Plugin
The Bootstrap carousel is a flexible, responsive way to add a slider to your site. In addition to being responsive, the content is flexible enough to allow images, iframes, videos, or just about any other type of content that you might want.

The simple slideshow below shows a generic component for cycling through elements like a carousel, using the Bootstrap carousel plugin. To implement the carousel, you just need to add the markup below. No data attributes are required − just simple class-based development.

<div id = "myCarousel" class = "carousel slide">

   <!-- Carousel indicators -->
   <ol class = "carousel-indicators">
      <li data-target = "#myCarousel" data-slide-to = "0" class = "active"></li>
      <li data-target = "#myCarousel" data-slide-to = "1"></li>
      <li data-target = "#myCarousel" data-slide-to = "2"></li>
   </ol>

   <!-- Carousel items -->
   <div class = "carousel-inner">
      <div class = "item active">
         <img src = "/bootstrap/images/slide1.png" alt = "First slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide2.png" alt = "Second slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide3.png" alt = "Third slide">
      </div>
   </div>

   <!-- Carousel nav -->
   <a class = "carousel-control left" href = "#myCarousel" data-slide = "prev">&lsaquo;</a>
   <a class = "carousel-control right" href = "#myCarousel" data-slide = "next">&rsaquo;</a>

</div>

You can add captions to your slides easily with the .carousel-caption element within any .item. Place just about any optional HTML within it and it will be automatically aligned and formatted. The following example demonstrates this −

<div id = "myCarousel" class = "carousel slide">

   <!-- Carousel indicators -->
   <ol class = "carousel-indicators">
      <li data-target = "#myCarousel" data-slide-to = "0" class = "active"></li>
      <li data-target = "#myCarousel" data-slide-to = "1"></li>
      <li data-target = "#myCarousel" data-slide-to = "2"></li>
   </ol>

   <!-- Carousel items -->
   <div class = "carousel-inner">
      <div class = "item active">
         <img src = "/bootstrap/images/slide1.png" alt = "First slide">
         <div class = "carousel-caption">This Caption 1</div>
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide2.png" alt = "Second slide">
         <div class = "carousel-caption">This Caption 2</div>
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide3.png" alt = "Third slide">
         <div class = "carousel-caption">This Caption 3</div>
      </div>
   </div>

   <!-- Carousel nav -->
   <a class = "carousel-control left" href = "#myCarousel" data-slide = "prev">&lsaquo;</a>
   <a class = "carousel-control right" href = "#myCarousel" data-slide = "next">&rsaquo;</a>

</div>

Via data attributes − Use data attributes to easily control the position of the carousel, as shown in the sketch after this list.

Attribute data-slide accepts the keywords prev or next, which alters the slide position relative to its current position.

Use data-slide-to to pass a raw slide index to the carousel (data-slide-to = "2"), which shifts the slide position to a particular index beginning with 0.

The data-ride = "carousel" attribute is used to mark a carousel as an animation starting at page load.
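As a quick illustration of the data-attribute approach, the minimal sketch below (the id and image paths are hypothetical placeholders) starts cycling automatically at page load with no JavaScript call. Here, data-ride = "carousel" triggers the animation on load and data-interval = "3000" sets a 3-second delay between slides; the interval option, along with the other options you can pass this way, is covered next.

<div id = "autoCarousel" class = "carousel slide" data-ride = "carousel" data-interval = "3000">
   <div class = "carousel-inner">
      <div class = "item active">
         <img src = "slide1.png" alt = "First slide">
      </div>

      <div class = "item">
         <img src = "slide2.png" alt = "Second slide">
      </div>
   </div>
</div>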
Via JavaScript − The carousel can be manually called with JavaScript as below −

$('.carousel').carousel()

There are certain options that can be passed via data attributes or JavaScript. For data attributes, append the option name to data-, as in data-interval = "3000". The main options, with their Bootstrap 3 defaults, are:

interval − number, default 5000. The amount of time to delay between automatically cycling an item. If false, the carousel will not automatically cycle.

pause − string, default "hover". Pauses the cycling of the carousel on mouseenter and resumes it on mouseleave.

wrap − boolean, default true. Whether the carousel should cycle continuously or have hard stops.

Here is a list of useful methods that can be used with carousel code.

.carousel(options) − Initializes the carousel with an optional options object and starts cycling through items.

$('#identifier').carousel({
   interval: 2000
})

.carousel('cycle') − Cycles through the carousel items from left to right.

$('#identifier').carousel('cycle')

.carousel('pause') − Stops the carousel from cycling through items.

$('#identifier').carousel('pause')

.carousel(number) − Cycles the carousel to a particular frame (zero-based, like an array).

$('#identifier').carousel(number)

.carousel('prev') − Cycles to the previous item.

$('#identifier').carousel('prev')

.carousel('next') − Cycles to the next item.

$('#identifier').carousel('next')

The following example demonstrates the usage of methods −

<div id = "myCarousel" class = "carousel slide">

   <!-- Carousel indicators -->
   <ol class = "carousel-indicators">
      <li data-target = "#myCarousel" data-slide-to = "0" class = "active"></li>
      <li data-target = "#myCarousel" data-slide-to = "1"></li>
      <li data-target = "#myCarousel" data-slide-to = "2"></li>
   </ol>

   <!-- Carousel items -->
   <div class = "carousel-inner">
      <div class = "item active">
         <img src = "/bootstrap/images/slide1.png" alt = "First slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide2.png" alt = "Second slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide3.png" alt = "Third slide">
      </div>
   </div>

   <!-- Carousel nav -->
   <a class = "carousel-control left" href = "#myCarousel" data-slide = "prev">&lsaquo;</a>
   <a class = "carousel-control right" href = "#myCarousel" data-slide = "next">&rsaquo;</a>

   <!-- Controls buttons -->
   <div style = "text-align:center;">
      <input type = "button" class = "btn prev-slide" value = "Previous Slide">
      <input type = "button" class = "btn next-slide" value = "Next Slide">
      <input type = "button" class = "btn slide-one" value = "Slide 1">
      <input type = "button" class = "btn slide-two" value = "Slide 2">
      <input type = "button" class = "btn slide-three" value = "Slide 3">
   </div>

</div>

<script>
   $(function() {

      // Cycles to the previous item
      $(".prev-slide").click(function() {
         $("#myCarousel").carousel('prev');
      });

      // Cycles to the next item
      $(".next-slide").click(function() {
         $("#myCarousel").carousel('next');
      });

      // Cycles the carousel to a particular frame
      $(".slide-one").click(function() {
         $("#myCarousel").carousel(0);
      });

      $(".slide-two").click(function() {
         $("#myCarousel").carousel(1);
      });

      $(".slide-three").click(function() {
         $("#myCarousel").carousel(2);
      });
   });
</script>
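Building on these methods, the minimal sketch below (assuming the #myCarousel markup from the examples above) composes them: it initializes the carousel with a custom interval, pauses cycling whenever the browser window loses focus, and resumes cycling when focus returns.

<script>
   $(function() {

      // Initialize with a 3-second interval
      $("#myCarousel").carousel({ interval: 3000 });

      // Pause cycling while the window is not focused
      $(window).on("blur", function() {
         $("#myCarousel").carousel('pause');
      });

      // Resume cycling when focus returns
      $(window).on("focus", function() {
         $("#myCarousel").carousel('cycle');
      });
   });
</script>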
Bootstrap's carousel class exposes two events for hooking into carousel functionality, listed below.

slide.bs.carousel − Fires immediately when the slide instance method is invoked.

$('#identifier').on('slide.bs.carousel', function () {
   // do something
})

slid.bs.carousel − Fires when the carousel has completed its slide transition.

$('#identifier').on('slid.bs.carousel', function () {
   // do something
})

The following example demonstrates the usage of events −

<div id = "myCarousel" class = "carousel slide">

   <!-- Carousel indicators -->
   <ol class = "carousel-indicators">
      <li data-target = "#myCarousel" data-slide-to = "0" class = "active"></li>
      <li data-target = "#myCarousel" data-slide-to = "1"></li>
      <li data-target = "#myCarousel" data-slide-to = "2"></li>
   </ol>

   <!-- Carousel items -->
   <div class = "carousel-inner">
      <div class = "item active">
         <img src = "/bootstrap/images/slide1.png" alt = "First slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide2.png" alt = "Second slide">
      </div>

      <div class = "item">
         <img src = "/bootstrap/images/slide3.png" alt = "Third slide">
      </div>
   </div>

   <!-- Carousel nav -->
   <a class = "carousel-control left" href = "#myCarousel" data-slide = "prev">&lsaquo;</a>
   <a class = "carousel-control right" href = "#myCarousel" data-slide = "next">&rsaquo;</a>

</div>

<script>
   $(function() {
      $('#myCarousel').on('slide.bs.carousel', function () {
         alert("This event fires immediately when the slide instance method is invoked.");
      });
   });
</script>
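A common use of the slid.bs.carousel event is keeping other page elements in sync with the active slide. The minimal sketch below (assuming the #myCarousel markup from the examples above) reads the zero-based index of the newly active .item each time a transition finishes.

<script>
   $(function() {
      $('#myCarousel').on('slid.bs.carousel', function () {

         // .index() gives the zero-based position of the active slide
         // among its sibling .item elements
         var index = $('#myCarousel .item.active').index();
         console.log('Now showing slide ' + (index + 1));
      });
   });
</script>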
[ { "code": null, "e": 3567, "s": 3331, "text": "The Bootstrap carousel is a flexible, responsive way to add a slider to your site. In addition to being responsive, the content is flexible enough to allow images, iframes, videos, or just about any type of content that you might want." }, { "code": null, "e": 3854, "s": 3567, "text": "A simple slideshow below shows a generic component for cycling through the elements like a carousel, using the Bootstrap carousel plugin. To implement the carousel, you just need to add the code with the markup. There is no need for data attributes, just simple class-based development." }, { "code": null, "e": 4857, "s": 3854, "text": "<div id = \"myCarousel\" class = \"carousel slide\">\n \n <!-- Carousel indicators -->\n <ol class = \"carousel-indicators\">\n <li data-target = \"#myCarousel\" data-slide-to = \"0\" class = \"active\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"1\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"2\"></li>\n </ol> \n \n <!-- Carousel items -->\n <div class = \"carousel-inner\">\n <div class = \"item active\">\n <img src = \"/bootstrap/images/slide1.png\" alt = \"First slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide2.png\" alt = \"Second slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide3.png\" alt = \"Third slide\">\n </div>\n </div>\n \n <!-- Carousel nav -->\n <a class = \"carousel-control left\" href = \"#myCarousel\" data-slide = \"prev\">&lsaquo;</a>\n <a class = \"carousel-control right\" href = \"#myCarousel\" data-slide = \"next\">&rsaquo;</a>\n \n</div> " }, { "code": null, "e": 5100, "s": 4862, "text": "You can add captions to your slides easily with the .carousel-caption element within any .item. Place just about any optional HTML within there and it will be automatically aligned and formatted. 
The following example demonstrates this −" }, { "code": null, "e": 6287, "s": 5100, "text": "<div id = \"myCarousel\" class = \"carousel slide\">\n \n <!-- Carousel indicators -->\n <ol class = \"carousel-indicators\">\n <li data-target = \"#myCarousel\" data-slide-to = \"0\" class = \"active\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"1\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"2\"></li>\n </ol> \n \n <!-- Carousel items -->\n <div class = \"carousel-inner\">\n <div class = \"item active\">\n <img src = \"/bootstrap/images/slide1.png\" alt = \"First slide\">\n <div class = \"carousel-caption\">This Caption 1</div>\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide2.png\" alt = \"Second slide\">\n <div class = \"carousel-caption\">This Caption 2</div>\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide3.png\" alt = \"Third slide\">\n <div class = \"carousel-caption\">This Caption 3</div>\n </div>\n </div>\n \n <!-- Carousel nav --> \n <a class = \"carousel-control left\" href = \"#myCarousel\" data-slide = \"prev\">&lsaquo;</a>\n <a class = \"carousel-control right\" href = \"#myCarousel\" data-slide = \"next\">&rsaquo;</a>+\n</div> " }, { "code": null, "e": 6763, "s": 6292, "text": "Via data attributes − Use data attributes to easily control the position of the carousel.\n\nAttribute data-slide accepts the keywords prev or next, which alters the slide position relative to its current position.\nUse data-slide-to to pass a raw slide index to the carousel data-slide-to = \"2\", which shifts the slide position to a particular index beginning with 0.\nThe data-ride = \"carousel\" attribute is used to mark a carousel as an animation starting at page load.\n\n" }, { "code": null, "e": 6853, "s": 6763, "text": "Via data attributes − Use data attributes to easily control the position of the carousel." }, { "code": null, "e": 6975, "s": 6853, "text": "Attribute data-slide accepts the keywords prev or next, which alters the slide position relative to its current position." }, { "code": null, "e": 7097, "s": 6975, "text": "Attribute data-slide accepts the keywords prev or next, which alters the slide position relative to its current position." }, { "code": null, "e": 7250, "s": 7097, "text": "Use data-slide-to to pass a raw slide index to the carousel data-slide-to = \"2\", which shifts the slide position to a particular index beginning with 0." }, { "code": null, "e": 7403, "s": 7250, "text": "Use data-slide-to to pass a raw slide index to the carousel data-slide-to = \"2\", which shifts the slide position to a particular index beginning with 0." }, { "code": null, "e": 7506, "s": 7403, "text": "The data-ride = \"carousel\" attribute is used to mark a carousel as an animation starting at page load." }, { "code": null, "e": 7609, "s": 7506, "text": "The data-ride = \"carousel\" attribute is used to mark a carousel as an animation starting at page load." 
}, { "code": null, "e": 7690, "s": 7609, "text": "Via JavaScript − The carousel can be manually called with JavaScript as below −\n" }, { "code": null, "e": 7770, "s": 7690, "text": "Via JavaScript − The carousel can be manually called with JavaScript as below −" }, { "code": null, "e": 7797, "s": 7770, "text": "$('.carousel').carousel()\n" }, { "code": null, "e": 7914, "s": 7797, "text": "There are certain, options which can be passed via data attributes or JavaScript are listed in the following table −" }, { "code": null, "e": 7984, "s": 7914, "text": "Here is a list of useful methods that can be used with carousel code." }, { "code": null, "e": 8033, "s": 7984, "text": "$('#identifier').carousel({\n interval: 2000\n})" }, { "code": null, "e": 8069, "s": 8033, "text": "$('#identifier').carousel('cycle')\n" }, { "code": null, "e": 8106, "s": 8069, "text": "$('#identifier')..carousel('pause')\n" }, { "code": null, "e": 8141, "s": 8106, "text": "$('#identifier').carousel(number)\n" }, { "code": null, "e": 8176, "s": 8141, "text": "$('#identifier').carousel('prev')\n" }, { "code": null, "e": 8211, "s": 8176, "text": "$('#identifier').carousel('next')\n" }, { "code": null, "e": 8269, "s": 8211, "text": "The following example demonstrates the usage of methods −" }, { "code": null, "e": 10398, "s": 8269, "text": "<div id = \"myCarousel\" class = \"carousel slide\">\n \n <!-- Carousel indicators -->\n <ol class = \"carousel-indicators\">\n <li data-target = \"#myCarousel\" data-slide-to = \"0\" class = \"active\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"1\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"2\"></li>\n </ol> \n \n <!-- Carousel items -->\n <div class = \"carousel-inner\">\n <div class = \"item active\">\n <img src = \"/bootstrap/images/slide1.png\" alt = \"First slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide2.png\" alt = \"Second slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide3.png\" alt = \"Third slide\">\n </div>\n </div>\n \n <!-- Carousel nav -->\n <a class = \"carousel-control left\" href = \"#myCarousel\" data-slide = \"prev\">&lsaquo;</a>\n <a class = \"carousel-control right\" href = \"#myCarousel\" data-slide = \"next\">&rsaquo;</a>\n \n <!-- Controls buttons -->\n <div style = \"text-align:center;\">\n <input type = \"button\" class = \"btn prev-slide\" value = \"Previous Slide\">\n <input type = \"button\" class = \"btn next-slide\" value = \"Next Slide\">\n <input type = \"button\" class = \"btn slide-one\" value = \"Slide 1\">\n <input type = \"button\" class = \"btn slide-two\" value = \"Slide 2\"> \n <input type = \"button\" class = \"btn slide-three\" value = \"Slide 3\">\n </div>\n\t\n</div> \n\n<script>\n $(function() {\n\t\n // Cycles to the previous item\n $(\".prev-slide\").click(function() {\n $(\"#myCarousel\").carousel('prev');\n });\n \n // Cycles to the next item\n $(\".next-slide\").click(function() {\n $(\"#myCarousel\").carousel('next');\n });\n \n // Cycles the carousel to a particular frame \n $(\".slide-one\").click(function() {\n $(\"#myCarousel\").carousel(0);\n });\n \n $(\".slide-two\").click(function() {\n $(\"#myCarousel\").carousel(1);\n });\n \n $(\".slide-three\").click(function() {\n $(\"#myCarousel\").carousel(2);\n });\n });\n</script>" }, { "code": null, "e": 10530, "s": 10403, "text": "Bootstrap's carousel class exposes two events for hooking into carousel functionality which are listed in the following table." 
}, { "code": null, "e": 10607, "s": 10530, "text": "$('#identifier').on('slide.bs.carousel', function () {\n // do something\n})" }, { "code": null, "e": 10683, "s": 10607, "text": "$('#identifier').on('slid.bs.carousel', function () {\n // do something\n})" }, { "code": null, "e": 10740, "s": 10683, "text": "The following example demonstrates the usage of events −" }, { "code": null, "e": 11951, "s": 10740, "text": "<div id = \"myCarousel\" class = \"carousel slide\">\n \n <!-- Carousel indicators -->\n <ol class = \"carousel-indicators\">\n <li data-target = \"#myCarousel\" data-slide-to = \"0\" class = \"active\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"1\"></li>\n <li data-target = \"#myCarousel\" data-slide-to = \"2\"></li>\n </ol> \n \n <!-- Carousel items -->\n <div class = \"carousel-inner\">\n <div class = \"item active\">\n <img src = \"/bootstrap/images/slide1.png\" alt = \"First slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide2.png\" alt = \"Second slide\">\n </div>\n \n <div class = \"item\">\n <img src = \"/bootstrap/images/slide3.png\" alt = \"Third slide\">\n </div>\n </div>\n \n <!-- Carousel nav -->\n <a class = \"carousel-control left\" href = \"#myCarousel\" data-slide = \"prev\">&lsaquo;</a>\n <a class = \"carousel-control right\" href = \"#myCarousel\" data-slide = \"next\">&rsaquo;</a>\n\t\n</div> \n\n<script>\n $(function() {\n $('#myCarousel').on('slide.bs.carousel', function () {\n alert(\"This event fires immediately when the slide instance method\" +\"is invoked.\");\n });\n });\n</script>" }, { "code": null, "e": 11989, "s": 11956, "text": "\n 26 Lectures \n 2 hours \n" }, { "code": null, "e": 12003, "s": 11989, "text": " Anadi Sharma" }, { "code": null, "e": 12038, "s": 12003, "text": "\n 54 Lectures \n 4.5 hours \n" }, { "code": null, "e": 12055, "s": 12038, "text": " Frahaan Hussain" }, { "code": null, "e": 12092, "s": 12055, "text": "\n 161 Lectures \n 14.5 hours \n" }, { "code": null, "e": 12120, "s": 12092, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 12153, "s": 12120, "text": "\n 20 Lectures \n 4 hours \n" }, { "code": null, "e": 12165, "s": 12153, "text": " Azaz Patel" }, { "code": null, "e": 12200, "s": 12165, "text": "\n 15 Lectures \n 1.5 hours \n" }, { "code": null, "e": 12217, "s": 12200, "text": " Muhammad Ismail" }, { "code": null, "e": 12250, "s": 12217, "text": "\n 62 Lectures \n 8 hours \n" }, { "code": null, "e": 12270, "s": 12250, "text": " Yossef Ayman Zedan" }, { "code": null, "e": 12277, "s": 12270, "text": " Print" }, { "code": null, "e": 12288, "s": 12277, "text": " Add Notes" } ]